id | url | title | text
---|---|---|---
7501807
|
https://en.wikipedia.org/wiki/R%C3%A9seau%20ACTION%20TI
|
Réseau ACTION TI
|
The Réseau ACTION TI is a non-profit information technology professional association in Quebec, Canada. Founded in 1977, the group was previously called the Fédération informatique du Québec and changed its name in autumn 2008.
As of 2019, ACTION TI has around 1,870 members divided into
six sections: Estrie, Mauricie, Montréal, Québec, Saguenay-Lac-Saint-Jean, and Laval-Laurentides-Lanaudière.
Mission
ACTION TI seeks to connect people in the information technology (IT) sectors
of Québec, organizing events and helping promote excellence and improve knowledge and skills.
Accomplishments
ACTION TI holds two annual conferences, the JIQ conference on trends in business and information technology, and the Datavore conference on data analysis and visualization.
The group awards the Prix Méritic, given to significant figures in the IT industry who can be viewed as role models. Two prizes are given, one for upper management in IT and an individual career award. In 2017, a third prize was added for the entrepreneur of a small/medium-sized business located in Quebec City.
The competition has been held each year since 1987, aiming to recognise excellence in information technology in Québec by rewarding individuals, businesses, or organizations for their contributions to the industry. One winner from among the categories is also chosen to receive a special Excellence award.
Trophies are awarded at ACTION TI's annual gala; winners are chosen by jury.
References
External links
Réseau Action TI web site
Professional associations based in Quebec
Information technology organizations based in Canada
|
34315325
|
https://en.wikipedia.org/wiki/Euro%20Truck%20Simulator%202
|
Euro Truck Simulator 2
|
Euro Truck Simulator 2 is an open world
truck simulator game developed and published by SCS Software for Microsoft Windows, Linux, and macOS. It was initially released as open development on 19 October 2012. The game is a direct sequel to the 2008 game Euro Truck Simulator and is the second video game in the Truck Simulator series. The basic premise of the game is that the player can drive one of a choice of articulated trucks across a condensed depiction of Europe, picking up cargo from various locations and delivering it. As the game progresses, it is possible for the player to buy more vehicles and depots, as well as hire other drivers to work for them. The game is open-ended, with no fixed end point.
The game has sold over 9 million units as of March 2021, according to a press release by Renault Trucks.
Gameplay
Gameplay follows an open world style, with players essentially free to travel anywhere in their lorries, provided they can keep up with fuel, maintenance and toll costs, alongside any fines they incur. When starting out, players choose the location of their headquarters in any of the game map's cities. At first, the player can only take what are known as quick jobs—deliveries made as a hired driver for a delivery company, with a provided truck and all expenses (fuel, road tolls, ferry crossings) covered. As the player earns money or takes bank loans, they can eventually afford to buy their own truck, acquire a home garage, and start accepting better-paying jobs using their own truck rather than driving for hire with supplied equipment. Money earned in the game can be spent on upgrading or purchasing new trucks, hiring NPC drivers to take on deliveries, buying more garages and expanding the home garage to accommodate more trucks and drivers. The skills of the drivers hired by the player also grow with experience, allowing the player to build a large fleet of trucks and drivers and expand the business across Europe.
As players progress and earn money, they are also able to purchase their own trailers, which can be fully customized in the same way as trucks. Player-owned trailers can be used to receive and deliver cargo at existing storage locations. Like trucks, trailers can also be allocated to company employees, resulting in higher income.
The player gains experience points after each delivery. A skill point is awarded after each level-up. Skill points can be used to unlock deliveries that require different ADR classes, longer-distance deliveries, special cargo loads, fragile cargo loads, urgent deliveries, and eco-driving. This progression allows the player to take on better-paying jobs.
The base game features 71 cities in twelve countries, over twenty types of cargo and over fifteen fictional European companies. There are seven map DLCs that expand the game to more countries and locations, and multiple other truck and trailer DLCs.
The game also features a "Radio" feature, which allows players to play imported MP3 and OGG files. It also allows the player to listen to Internet radio.
Trucks
The game includes two new truck companies, Scania and Renault, with MAN returning from the original game. Initially, DAF, Iveco, Mercedes-Benz and Volvo trucks were not officially licensed and had their names changed to DAV, Ivedo, Majestic and Valiant respectively. Later updates added the official branding for the DAF XF, Volvo FH, Iveco Stralis, Scania Streamline and Mercedes-Benz Actros. In 2017, the new Scania S and R were introduced. The MAN TGX Euro 6 was added as a playable vehicle on 8 February 2019, followed by the Renault T on 26 September 2019. On 6 April 2021, Renault Trucks unveiled their new T-High range of trucks through a joint venture with SCS Software, marking the first time a new truck had been announced through a video game. About three months later, on 10 June 2021, SCS Software announced the release of the new DAF XG and XG+ models, a day after their official reveal. Then, on 6 December of that year, the company announced the release of the EfficientLine 3 model of the MAN TGX, followed by the new model of the DAF XF a few days later on 14 December.
Development
Initially released for Windows and available for purchase and download via SCS's own website, the game was added to Steam in January 2013. Expanding beyond the Windows version, SCS announced in March 2013 that they were developing a Mac version of the game. One month later, they released a Linux beta version of the game to the public through Steam. On 27 February, they stated "the Mac OS port of ETS 2 is taking longer than anybody would like, but trust us, we are still working hard on it." Finally, on 19 December 2014, they announced on their blog that the OS X version of the game was ready for a public beta on Steam. On 21 January 2015, a 64-bit version of Euro Truck Simulator 2 was released, which allows more memory to be used by the game.
Downloadable content
Map expansion packs
Euro Truck Simulator 2 offers several DLC base map expansion packs. Below is a list of major expansion packs provided by the developers.
Reception
The game was generally well received by critics, holding a score of 79/100 on Metacritic, indicating "generally favorable reviews".
In a review for Destructoid, Jim Sterling praised the game's accessibility, noting how easy the GPS and map features were to use, as well as the option to stream European internet radio, and the multitude of control options available. They also praised the graphics, stating that "[f]rom the shape of the traffic lights to the atmosphere of the backdrops, there's a sense of individuality to each new territory you uncover, and the trucks themselves are lovingly recreated with an intricate level of detail", although they did criticise the AI of the other vehicles on the road.
In a similarly favourable review, Tim Stone of PC Gamer called it "unexpectedly engrossing", praising the size of the map and the variation of the roads and scenery available. He did however have reservations about the accuracy of the surroundings, commenting "no one seems to have told SCS’s countryside crafters that rural Britain features long green things called hedges. Cities are often depicted with the shortest of visual shorthand – a few warehouses, the odd landmark if you’re lucky."
Rock, Paper, Shotgun listed Euro Truck Simulator 2 ninth on their list of "The 25 Best Simulation Games Ever Made".
Notes
References
External links
2012 video games
Video game sequels
Windows games
Linux games
MacOS games
Oculus Rift games
SCS Software games
Steam Greenlight games
Video games developed in the Czech Republic
Video games featuring protagonists of selectable gender
Video games set in Austria
Video games set in Belgium
Video games set in Bulgaria
Video games set in Denmark
Video games set in Estonia
Video games set in France
Video games set in Finland
Video games set in Germany
Video games set in Hungary
Video games set in Italy
Video games set in Latvia
Video games set in Lithuania
Video games set in Luxembourg
Video games set in Norway
Video games set in Poland
Video games set in Portugal
Video games set in Romania
Video games set in Russia
Video games set in Slovakia
Video games set in Spain
Video games set in Sweden
Video games set in Switzerland
Video games set in the Czech Republic
Video games set in the Netherlands
Video games set in the United Kingdom
Video games set in Turkey
Truck simulation video games
Video games with Steam Workshop support
Multiplayer and single-player video games
|
8110163
|
https://en.wikipedia.org/wiki/Edmark
|
Edmark
|
Edmark Corporation (or simply Edmark) was a publisher of educational print materials and an educational software developer in Redmond, Washington. They developed software for Microsoft Windows and Mac OS in several languages and sold it in over a dozen countries.
History
Edmark was founded in 1970 by Gordon B. Bleil by combining the assets of Educational Aids and Services Co., a small supplier of educational materials and programs, and L-Tec Systems Inc., which had developed programs from its research. The Child Development and Mental Retardation Center of the University of Washington, under the direction of Dr. Sidney Bijou, had conducted research into the operant conditioning and reinforcement theories of B. F. Skinner as applicable to human learning. From this research they developed academic programs which for the first time proved the viability of teaching reading to people with severe mental limitations. Bleil adapted this research into The Edmark Reading Program, which for the next decade was the principal product of the company.
Bleil left the company to return to banking in 1980 and retained no interest in the company.
Edmark began developing software in 1992 and was listed on NASDAQ. Its audience was children between the ages of 2 and 16, and the company won more than 65 industry design awards.
In 1989, their children, Richard, Lucy, Heather and Chris became directors. Richard became the chairman, Heather became the CCO, Chris became the president and Lucy became the CEO in October 1989. Edmark hired former teacher Donna Stanger as vice-president of product development in October 1991.
In 1992, Edmark released Millie's Math House and KidDesk. Sally Narodick resigned as CEO in September, citing stress, and Donna Stanger became the CEO.
Edmark was acquired by IBM on November 13, 1996, for $102.3 million ($15.50 per share for two-thirds of Edmark's shares) as IBM sought to expand its presence in home software.
In September 2000, it was sold to Riverdeep Interactive Learning for about $85 million.
As of 2017, Houghton Mifflin Harcourt is offering the Edmark, Edmark House Series, Mighty Math, and Thinkin' Things brands as licensing opportunities on its website.
Software
KidDesk (1992)
Strategy Challenges Collection 1 (formerly Strategy Games of the World) (November 1995)
Strategy Challenges Collection 2: In the Wild (1997)
ThemeWeavers: Animals
ThemeWeavers: Nature
Travel the World with Timmy! Deluxe
Let’s Go Read! 1: An Island Adventure - ages 4–6
Let’s Go Read! 2: An Ocean Adventure - ages 7–12
Stories & More: Animal Friends
Stories & More: Time and Place
MindTwister Math
Space Academy GX-1
Virtual Labs: Light
Virtual Labs: Electricity
Talking Walls - runner-up for the Macworld 16th Annual Editors' Choice Award for Education
Talking Walls: The Stories Continue
Early Learning House
Millie's Math House (October 1992) - ages 2–6
Bailey's Book House (June 1993) - ages 2–6
Sammy's Science House (June 1994) - ages 3–7
Trudy's Time & Place House (September 1995) - ages 3–7
Stanley’s Sticker Stories (June 1996)
Thinkin' Things
Thinkin' Things Collection 1 (formerly Thinkin' Things) (September 1993) - ages 4–8
Thinkin' Things Collection 2 (October 1994) - ages 6–12
Thinkin' Things Collection 3 (October 1995) - ages 7–13
Thinkin' Things: Toony the Loon’s Lagoon (remastered version of Thinkin' Things Collection 1)
Thinkin' Things: All Around FrippleTown - ages 4–8, won the 1999 Macworld Editors' Choice Award for Education
Thinkin' Things Sky Island Mysteries - ages 8–12
Thinkin' Science
Thinkin' Science Series: ZAP! (1998)
Thinkin' Space
Imagination Express
Imagination Express: Neighborhood (October 1994)
Imagination Express: Castle (November 1994)
Imagination Express: Rain Forest (May 1995)
Imagination Express: Ocean (October 1995)
Imagination Express: Pyramids
Imagination Express: Time Trip, USA
Mighty Math
Mighty Math Carnival Countdown (July 1996) - ages 4–8
Mighty Math Number Heroes (July 1996) - ages 7–12
Mighty Math Zoo Zillions
Mighty Math Calculating Crew
Mighty Math Astro Algebra
Mighty Math Cosmic Geometry
Reception
Computer Gaming World in 1993 stated that "Bailey's Book House combines the best of educational theory with a loving attention to detail and an engaging presentation ... a real winner".
References
External links
Educated Ideas
COMPANY NEWS;EDMARK STOCK IS HURT BY OUTLOOK ON EARNINGS
Edmark Corp. reports earnings for Qtr to Dec 31
Behind the Fading of a Onetime Software Star
I.B.M. In $110 Million Deal For Ailing Software Seller
LIBRARY/THINKING SKILLS; Thinkin' Things Collection 3
SOFTWARE; Home Lessons For Children Ready to Read
Houghton Mifflin Harcourt
Software companies established in 1970
Riverdeep subsidiaries
Companies based in Redmond, Washington
Educational software companies
Publishing companies established in 1970
Mattel
1970 establishments in Washington (state)
2000 mergers and acquisitions
|
23158064
|
https://en.wikipedia.org/wiki/D3%20Security%20Management%20Systems
|
D3 Security Management Systems
|
D3 Security Management Systems, Inc., a privately held company headquartered in Vancouver, British Columbia, is a developer of software for security, governance, risk management, and compliance functions of organizations. The company operates as an enterprise on-site and software as a service provider with products designed for investigative case management, incident management, computer-assisted dispatch, security guard tour and security post orders. Through its partnerships, D3 Security Management Systems receives affiliate revenue from other cloud computing partners and value-added resellers for Windows Mobile devices.
History
D3 Security Management Systems was formed in 2002 by Canadian entrepreneur Gordon Benoit. The original incident management software focused on delivering a configurable platform for automating the business processes of the physical security function.
In 2004, D3 Security Management Systems, leveraging a common thin client architecture and back-end framework, developed modules for computer-assisted dispatch and security guard tour. Following their release and implementation in early 2005, D3 Security Management Systems added a security post orders module. Beginning in 2006, D3 Security Management Systems implemented enhancements for more robust role-based access controls and transactional sharing of database records. The former permitted compliance with need-to-know regulations for privacy, and the latter enabled collaboration between functional units. As a result, D3 Security Management Systems released its Case Collaboration Platform (CCP), a configurable platform catering to the governance, risk management, and compliance business functions, expanding product scope from corporate security and physical security incident management to ethics and compliance, human resources and IT security case management.
Cloud Computing Security
As a software as a service provider, D3 Security Management Systems was an early adopter of solutions designed to secure the cloud. Using its published application programming interface for interaction with technology designed to extend single sign-on to the cloud, D3 Security Management Systems enabled single sign-on without inserting a third-party security product into its proprietary platform.
See also
Enterprise social software
Security management
References
External links
Official site
Companies based in Vancouver
Software companies of Canada
2002 establishments in British Columbia
Software companies established in 2002
Companies established in 2002
|
1088971
|
https://en.wikipedia.org/wiki/Stop-and-wait%20ARQ
|
Stop-and-wait ARQ
|
Stop-and-wait ARQ, also referred to as the alternating bit protocol, is a method in telecommunications to send information between two connected devices. It ensures that information is not lost due to dropped packets and that packets are received in the correct order. It is the simplest automatic repeat-request (ARQ) mechanism. A stop-and-wait ARQ sender sends one frame at a time; it is a special case of the general sliding window protocol with both transmit and receive window sizes equal to one. After sending each frame, the sender does not send any further frames until it receives an acknowledgement (ACK) signal. After receiving a valid frame, the receiver sends an ACK. If the ACK does not reach the sender before a certain time, known as the timeout, the sender sends the same frame again. The timeout countdown is reset after each frame transmission. The above behavior is a basic example of stop-and-wait; real-life implementations vary to address certain design issues.
Typically the transmitter adds a redundancy check number to the end of each frame. The receiver uses the redundancy check number to check for possible damage. If the receiver sees that the frame is good, it sends an ACK. If the receiver sees that the frame is damaged, the receiver discards it and does not send an ACK—pretending that the frame was completely lost, not merely damaged.
One problem is when the ACK sent by the receiver is damaged or lost. In this case, the sender does not receive the ACK, times out, and sends the frame again. Now the receiver has two copies of the same frame, and does not know whether the second one is a duplicate or the next frame of the sequence carrying identical data.
Another problem is when the transmission medium has such a long latency that the sender's timeout runs out before the frame reaches the receiver. In this case the sender resends the same packet. Eventually the receiver gets two copies of the same frame, and sends an ACK for each one. The sender, waiting for a single ACK, receives two ACKs, which may cause problems if it assumes that the second ACK is for the next frame in the sequence.
To avoid these problems, the most common solution is to define a 1-bit sequence number in the header of the frame. This sequence number alternates (between 0 and 1) in subsequent frames. When the receiver sends an ACK, it includes the sequence number of the next packet it expects. This way, the receiver can detect duplicated frames by checking whether the frame sequence numbers alternate. If two subsequent frames have the same sequence number, they are duplicates, and the second frame is discarded. Similarly, if two subsequent ACKs reference the same sequence number, they are acknowledging the same frame.
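As an illustration, the alternating-bit logic above can be sketched in a few lines of Python. This is a hypothetical simulation, not a real network stack: the `transmit` function stands in for a lossy channel, and a sender timeout is collapsed into "no ACK arrived this round".

```python
import random

def transmit(frame, loss_prob=0.3):
    """Simulate a lossy channel: returns None when the frame is dropped."""
    return None if random.random() < loss_prob else frame

def stop_and_wait_send(payloads):
    """Deliver payloads in order using a 1-bit alternating sequence number.

    The receiver ACKs with the sequence number of the NEXT frame it
    expects; a retransmitted duplicate is discarded but still ACKed.
    """
    seq = 0          # sender's current sequence bit
    expected = 0     # receiver's expected sequence bit
    delivered = []   # frames the receiver has accepted, in order
    i = 0
    while i < len(payloads):
        # Sender transmits (seq, payload); the frame may be lost.
        frame = transmit((seq, payloads[i]))
        ack = None
        if frame is not None:
            r_seq, data = frame
            if r_seq == expected:       # new frame: accept it
                delivered.append(data)
                expected ^= 1
            # A duplicate (r_seq != expected) is dropped, but the
            # receiver still ACKs the sequence number it expects next.
            ack = transmit(expected)    # the ACK itself may be lost
        # Sender side: no ACK means a timeout, so the same frame is
        # retransmitted; an ACK naming the other bit means it arrived.
        if ack is not None and ack != seq:
            seq ^= 1
            i += 1
    return delivered
```

Even with a 30% loss rate the transfer completes; each loss simply costs one extra round trip for the retransmission.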
Stop-and-wait ARQ is inefficient compared to other ARQs, because the time between packets, if the ACK and the data are received successfully, is twice the transit time (assuming the turnaround time can be zero). The throughput on the channel is a fraction of what it could be. To solve this problem, one can send more than one packet at a time with a larger sequence number and use one ACK for a set. This is what is done in Go-Back-N ARQ and Selective Repeat ARQ.
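The efficiency penalty can be made concrete with a back-of-the-envelope calculation. The figures below (a 1000-byte frame, a 1 Mbit/s link, 20 ms one-way propagation delay) are illustrative numbers, not taken from the text above:

```python
def stop_and_wait_utilization(frame_bits, link_bps, prop_delay_s):
    """Fraction of time the link carries useful data: U = Tf / (Tf + 2*Tp).

    Assumes negligible ACK transmission and processing time, so exactly
    one frame is sent per round trip.
    """
    t_f = frame_bits / link_bps        # time to clock the frame onto the link
    return t_f / (t_f + 2 * prop_delay_s)

# 1000-byte frame, 1 Mbit/s link, 20 ms one-way delay:
u = stop_and_wait_utilization(8 * 1000, 1_000_000, 0.020)
print(f"{u:.1%}")   # 16.7% — the link sits idle over 80% of the time
```

Go-Back-N and Selective Repeat raise this fraction by keeping several frames in flight during the same round trip.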
See also
Alternating bit protocol
Data link layer
Error detection and correction
References
Tanenbaum, Andrew S., Computer Networks, 4th ed.
Logical link control
Error detection and correction
|
31427769
|
https://en.wikipedia.org/wiki/Kiwix
|
Kiwix
|
Kiwix is a free and open-source offline web browser created by Emmanuel Engelhart and Renaud Gaudin in 2007. It was first launched to allow offline access to Wikipedia, but has since expanded to include other projects from the Wikimedia Foundation, as well as public domain texts from Project Gutenberg. Available in more than 100 languages, Kiwix has been included in several high-profile projects, from smuggling operations in North Korea and encyclopedic access in Cuba to Google Impact Challenge's recipient Bibliothèques Sans Frontières.
History
Founder Emmanuel Engelhart sees Wikipedia as a common good, saying "The contents of Wikipedia should be available for everyone! Even without Internet access. This is why I have launched the Kiwix project."
After becoming a Wikipedia editor in 2004, Engelhart became interested in developing offline versions of Wikipedia. A project to make a Wikipedia CD, initiated in 2003, was a trigger for the project.
In 2012, Kiwix won a grant from Wikimedia France to build kiwix-plug, which was deployed to universities in eleven countries as part of the Afripedia Project. In February 2013, Kiwix won SourceForge's Project of the Month award, and in 2015 it received an Open Source Award.
Description
The software is designed as an offline reader for web content. It can be used on computers without an internet connection, computers with a slow or expensive connection, or to avoid censorship. It can also be used while travelling (e.g. on a plane or train).
Users first download Kiwix, then download content for offline viewing with Kiwix. Compression saves disk space and bandwidth. All of English-language Wikipedia, with pictures, fits on a large USB stick or external media (82 GB as of March 2021, or 43 GB with no pictures).
All content files are compressed in ZIM format, which makes them smaller, but leaves them easy to index, search, and selectively decompress.
The ZIM files are then opened with Kiwix, which looks and behaves like a web browser. Kiwix offers full text search, tabbed navigation, and the option to export articles to PDF and HTML.
There is an HTTP server version called kiwix-serve; this allows a computer to host Kiwix content, and make it available to other computers on a network. The other computers see an ordinary website. Kiwix-hotspot is an HTTP server version for plug computers, which is often used to provide a Wi-Fi server.
Available content
A list of content available on Kiwix, including language-specific sublists, can be downloaded, and content can be loaded through Kiwix itself.
Since 2014, most Wikipedia versions have been available for download in many languages. For English Wikipedia, a full version containing pictures, as well as an alternative text-only version, can be downloaded from the archive. The servers are updated every two to ten months, depending on the size of the file. For English Wikipedia, the update frequency is thus substantially lower than that of the bzip2 database downloads by the Wikimedia Foundation, which are updated twice a month.
Besides Wikipedia, content from the Wikimedia Foundation such as Wikisource, Wikiquote, Wikivoyage, Wikibooks, and Wikiversity is also available for offline viewing in many languages.
In November 2014, a ZIM version of all open texts forming part of Project Gutenberg was made available.
Besides public domain content, works licensed under a Creative Commons license are available for download as well. For example, offline versions of the Ubuntu wiki containing user documentation for the Ubuntu operating system, ZIM editions of TED conference talks and videos from Crash Course are available in the Kiwix archive as ZIM files.
Deployments
Kiwix can be installed on a desktop computer as a stand-alone program, installed on a tablet or smartphone, or run from a Raspberry Pi to create its own WLAN environment.
As a software development project, Kiwix itself is not directly involved in deployment projects. However, third party organizations do use the software as a component of their own projects. Examples include:
Universities and libraries that can't afford broadband Internet access.
The Afripedia Project set up Kiwix servers in French-speaking universities, some of them with no Internet access, in 11 African countries.
Schools in developing countries, where access to the internet is difficult or too expensive.
Installed on computers used for the One Laptop per Child project.
Installed on Raspberry Pis for use in schools with no easy access to electricity in Tanzania by the Tanzania Development Trust.
Installed on tablets in schools in Mali as part of the MALebooks project.
Used by school teachers and university professors, as well as students, in Senegal.
Deployed in Benin during teacher training seminars run by Zedaga, a Swiss NGO specialized in education.
The Fondation Orange has used kiwix-serve in its own French language technological knowledge product they have deployed in Africa.
A special version for the organization SOS Children's Villages was developed, initially for developing countries, but it is also used in the developed world.
At sea and in other remote areas:
Aboard ships in Antarctic waters.
By the Senegalese Navy in their patrol ships.
Included in Navigatrix, a Linux distribution for people on boats.
On a train or plane.
In European and US prison education programs.
Package managers and app stores
Kiwix was formerly available in the native package managers of some Linux distributions. However, Kiwix is currently not available in most package databases because XULRunner, a program on which Kiwix depended, was deprecated by Mozilla and removed from the package databases. Kiwix is available in the Sugar and Arch Linux distributions. It is also available on Android.
Kiwix is available in the Microsoft Store, on Google Play, and Apple's iOS App Store. It is also available as an installable HTML5 app (Kiwix JS) in the form of browser extensions for Firefox and Chromium (Chrome, Edge) and as a Progressive Web Application (PWA), all of which work offline. Since 2015, a series of "customized apps" have also been released, of which Medical Wikipedia and PhET simulations are the two largest.
See also
GoldenDict has supported the ZIM file format since 2013, including offline use (except on Android) and the ability to create full-text indices.
XOWA
Internet-in-a-Box
Wikipedia:Database download
References
External links
Explanation of Kiwix from Wikimania 2013
Kiwix stories on the Wikimedia Blog
Wikimedia offline projects
2010 software
Articles containing video clips
Book censorship
Cross-platform software
E-book suppliers
Free and open-source Android software
Free educational software
Free mobile software
Free software
Free web server software
Internet censorship
Internet privacy software
Publishing
Software development
Software that uses XUL
Software using the GPL license
Travel gear
Wikipedia
|
11173391
|
https://en.wikipedia.org/wiki/Sunflow
|
Sunflow
|
Sunflow is an open-source global illumination rendering system written in Java. The project is currently inactive; the last announcement on the program's official page was made in 2007.
References
External links
Sunflow Rendering System website
Sunflow Forum
Sunflow documentation wiki
3D rendering software for Linux
Computer-aided design software for Linux
Cross-platform software
Free 3D graphics software
Free computer-aided design software
Free software programmed in Java (programming language)
Global illumination software
|
1410175
|
https://en.wikipedia.org/wiki/Microarchitecture
|
Microarchitecture
|
In computer engineering, microarchitecture, also called computer organization and sometimes abbreviated as µarch or uarch, is the way a given instruction set architecture (ISA) is implemented in a particular processor. A given ISA may be implemented with different microarchitectures; implementations may vary due to different goals of a given design or due to shifts in technology.
Computer architecture is the combination of microarchitecture and instruction set architecture.
Relation to instruction set architecture
The ISA is roughly the same as the programming model of a processor as seen by an assembly language programmer or compiler writer. The ISA includes the instructions, execution model, processor registers, address and data formats among other things. The microarchitecture includes the constituent parts of the processor and how these interconnect and interoperate to implement the ISA.
The microarchitecture of a machine is usually represented as (more or less detailed) diagrams that describe the interconnections of the various microarchitectural elements of the machine, which may be anything from single gates and registers, to complete arithmetic logic units (ALUs) and even larger elements. These diagrams generally separate the datapath (where data is placed) and the control path (which can be said to steer the data).
The person designing a system usually draws the specific microarchitecture as a kind of data flow diagram. Like a block diagram, the microarchitecture diagram shows microarchitectural elements such as the arithmetic and logic unit and the register file as a single schematic symbol. Typically, the diagram connects those elements with arrows, thick lines and thin lines to distinguish between three-state buses (which require a three-state buffer for each device that drives the bus), unidirectional buses (always driven by a single source, such as the way the address bus on simpler computers is always driven by the memory address register), and individual control lines. Very simple computers have a single data bus organization: they have a single three-state bus. The diagram of more complex computers usually shows multiple three-state buses, which help the machine do more operations simultaneously.
Each microarchitectural element is in turn represented by a schematic describing the interconnections of logic gates used to implement it. Each logic gate is in turn represented by a circuit diagram describing the connections of the transistors used to implement it in some particular logic family. Machines with different microarchitectures may have the same instruction set architecture, and thus be capable of executing the same programs. New microarchitectures and/or circuitry solutions, along with advances in semiconductor manufacturing, are what allows newer generations of processors to achieve higher performance while using the same ISA.
In principle, a single microarchitecture could execute several different ISAs with only minor changes to the microcode.
Aspects
The pipelined datapath is the most commonly used datapath design in microarchitecture today. This technique is used in most modern microprocessors, microcontrollers, and DSPs. The pipelined architecture allows multiple instructions to overlap in execution, much like an assembly line. The pipeline includes several different stages which are fundamental in microarchitecture designs. Some of these stages include instruction fetch, instruction decode, execute, and write back. Some architectures include other stages such as memory access. The design of pipelines is one of the central microarchitectural tasks.
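The benefit of overlapping instructions can be shown with a simple cycle count. The sketch below assumes an idealized five-stage pipeline with no stalls or hazards — a deliberate simplification of real designs:

```python
def sequential_cycles(n_instructions, n_stages=5):
    # Without pipelining, each instruction occupies the whole datapath
    # for all of its stages before the next instruction can start.
    return n_instructions * n_stages

def pipelined_cycles(n_instructions, n_stages=5):
    # With pipelining, the first instruction fills the pipeline, after
    # which (ideally) one instruction completes every cycle.
    return n_stages + (n_instructions - 1)

print(sequential_cycles(100))   # 500 cycles
print(pipelined_cycles(100))    # 104 cycles: nearly a 5x speedup
```

Real pipelines fall short of this ideal because branch mispredictions, cache misses, and data hazards force stalls.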
Execution units are also essential to microarchitecture. Execution units include arithmetic logic units (ALU), floating point units (FPU), load/store units, branch prediction, and SIMD. These units perform the operations or calculations of the processor. The choice of the number of execution units, their latency and throughput is a central microarchitectural design task. The size, latency, throughput and connectivity of memories within the system are also microarchitectural decisions.
System-level design decisions such as whether or not to include peripherals, such as memory controllers, can be considered part of the microarchitectural design process. This includes decisions on the performance-level and connectivity of these peripherals.
Unlike architectural design, where achieving a specific performance level is the main goal, microarchitectural design pays closer attention to other constraints. Since microarchitecture design decisions directly affect what goes into a system, attention must be paid to issues such as chip area/cost, power consumption, logic complexity, ease of connectivity, manufacturability, ease of debugging, and testability.
Microarchitectural concepts
Instruction cycles
To run programs, all single- or multi-chip CPUs:
Read an instruction and decode it
Find any associated data that is needed to process the instruction
Process the instruction
Write the results out
The instruction cycle is repeated continuously until the power is turned off.
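As a hedged illustration, the four steps above can be sketched as a fetch-decode-execute loop for a toy accumulator machine; the instruction encoding, opcode names and tiny program below are all invented for the example and do not describe any real CPU:

```python
# Minimal sketch of the four-step instruction cycle for a toy accumulator
# machine. Instructions are (opcode, operand) pairs; the program is invented.
memory = [("LOAD", 5), ("ADD", 7), ("STORE", 0), ("HALT", None)]
data = {0: 0}          # data memory, addressed by integer
acc = 0                # accumulator register
pc = 0                 # program counter

while True:
    opcode, operand = memory[pc]      # 1. read the instruction and decode it
    pc += 1
    if opcode == "HALT":
        break
    if opcode == "LOAD":              # 2./3. find the operand, process it
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "STORE":           # 4. write the results out
        data[operand] = acc

print(acc)        # 12
print(data[0])    # 12
```

In hardware the loop body is not sequential software, of course; the sketch only mirrors the ordering of the four steps.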
Multicycle microarchitecture
Historically, the earliest computers were multicycle designs. The smallest, least-expensive computers often still use this technique. Multicycle architectures often use the least total number of logic elements and reasonable amounts of power. They can be designed to have deterministic timing and high reliability. In particular, they have no pipeline to stall when taking conditional branches or interrupts. However, other microarchitectures often perform more instructions per unit time, using the same logic family. When discussing "improved performance," an improvement is often relative to a multicycle design.
In a multicycle computer, the computer does the four steps in sequence, over several cycles of the clock. Some designs can perform the sequence in two clock cycles by completing successive stages on alternate clock edges, possibly with longer operations occurring outside the main cycle. For example, stage one on the rising edge of the first cycle, stage two on the falling edge of the first cycle, etc.
In the control logic, the combination of cycle counter, cycle state (high or low) and the bits of the instruction decode register determine exactly what each part of the computer should be doing. To design the control logic, one can create a table of bits describing the control signals to each part of the computer in each cycle of each instruction. Then, this logic table can be tested in a software simulation running test code. If the logic table is placed in a memory and used to actually run a real computer, it is called a microprogram. In some computer designs, the logic table is optimized into the form of combinational logic made from logic gates, usually using a computer program that optimizes logic. Early computers used ad-hoc logic design for control until Maurice Wilkes invented this tabular approach and called it microprogramming.
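The tabular control approach can be sketched as a lookup table mapping (instruction, cycle) to the set of asserted control lines; the opcodes, cycle numbers and signal names below are invented for illustration and do not correspond to any real machine:

```python
# Hedged sketch of a microprogram: a table of control signals per instruction
# per cycle, as described above. All names are invented.
MICROCODE = {
    ("LOAD", 0): {"mem_read", "ir_write"},        # fetch the instruction
    ("LOAD", 1): {"mem_read", "acc_write"},       # read operand into accumulator
    ("ADD",  0): {"mem_read", "ir_write"},
    ("ADD",  1): {"mem_read", "alu_add", "acc_write"},
}

def control_signals(opcode, cycle):
    """Look up which control lines to assert, as the sequencer would."""
    return MICROCODE.get((opcode, cycle), set())

print(sorted(control_signals("ADD", 1)))   # ['acc_write', 'alu_add', 'mem_read']
```

Placing such a table in a memory yields a microprogram; optimizing it into gates yields hardwired combinational control.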
Increasing execution speed
Complicating this simple-looking series of steps is the fact that the memory hierarchy, which includes caching, main memory and non-volatile storage like hard disks (where the program instructions and data reside), has always been slower than the processor itself. Step (2) often introduces a lengthy (in CPU terms) delay while the data arrives over the computer bus. A considerable amount of research has been put into designs that avoid these delays as much as possible. Over the years, a central goal was to execute more instructions in parallel, thus increasing the effective execution speed of a program. These efforts introduced complicated logic and circuit structures. Initially, these techniques could only be implemented on expensive mainframes or supercomputers due to the amount of circuitry needed for these techniques. As semiconductor manufacturing progressed, more and more of these techniques could be implemented on a single semiconductor chip. See Moore's law.
Instruction set choice
Instruction sets have shifted over the years, from originally very simple to sometimes very complex (in various respects). In recent years, load–store architectures, VLIW and EPIC types have been in fashion. Architectures dealing with data parallelism include SIMD and vector designs. Some labels used to denote classes of CPU architectures are not particularly descriptive, especially so the CISC label; many early designs retroactively denoted "CISC" are in fact significantly simpler than modern RISC processors (in several respects).
However, the choice of instruction set architecture may greatly affect the complexity of implementing high-performance devices. The prominent strategy used to develop the first RISC processors was to simplify instructions to a minimum of individual semantic complexity combined with high encoding regularity and simplicity. Such uniform instructions were easily fetched, decoded and executed in a pipelined fashion, and this simple strategy reduced the number of logic levels in order to reach high operating frequencies; instruction cache memories compensated for the higher operating frequency and inherently low code density, while large register sets were used to factor out as many of the (slow) memory accesses as possible.
Instruction pipelining
One of the first, and most powerful, techniques to improve performance is the use of instruction pipelining. Early processor designs would carry out all of the steps above for one instruction before moving on to the next. Large portions of the circuitry were left idle at any one step; for instance, the instruction decoding circuitry would be idle during execution and so on.
Pipelining improves performance by allowing a number of instructions to work their way through the processor at the same time. In the same basic example, the processor would start to decode (step 1) a new instruction while the last one was waiting for results. This would allow up to four instructions to be "in flight" at one time, making the processor look four times as fast. Although any one instruction takes just as long to complete (there are still four steps) the CPU as a whole "retires" instructions much faster.
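The overlap can be sketched with a toy timing model of an ideal four-stage pipeline (no stalls or hazards); the stage names and instruction labels are invented:

```python
# Sketch of ideal 4-stage pipeline timing. Each row shows which stage each
# instruction occupies in a given clock cycle.
STAGES = ["IF", "ID", "EX", "WB"]
instructions = ["i0", "i1", "i2", "i3", "i4"]

def pipeline_timing(n_instr, n_stages):
    total_cycles = n_instr + n_stages - 1   # fill + drain, not n_instr * n_stages
    timing = []
    for cycle in range(total_cycles):
        row = {}
        for i in range(n_instr):
            stage = cycle - i               # instruction i enters at cycle i
            if 0 <= stage < n_stages:
                row[instructions[i]] = STAGES[stage]
        timing.append(row)
    return timing

t = pipeline_timing(5, 4)
print(len(t))   # 8 cycles instead of 5 * 4 = 20 unpipelined
print(t[3])     # {'i0': 'WB', 'i1': 'EX', 'i2': 'ID', 'i3': 'IF'}: 4 in flight
```

Cycle 3 shows the "four instructions in flight" situation described above: each stage is busy with a different instruction.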
RISC makes pipelines smaller and much easier to construct by cleanly separating each stage of the instruction process and making them take the same amount of time—one cycle. The processor as a whole operates in an assembly line fashion, with instructions coming in one side and results out the other. Due to the reduced complexity of the classic RISC pipeline, the pipelined core and an instruction cache could be placed on the same size die that would otherwise fit the core alone on a CISC design. This was the real reason that RISC was faster. Early designs like the SPARC and MIPS often ran over 10 times as fast as Intel and Motorola CISC solutions at the same clock speed and price.
Pipelines are by no means limited to RISC designs. By 1986 the top-of-the-line VAX implementation (VAX 8800) was a heavily pipelined design, slightly predating the first commercial MIPS and SPARC designs. Most modern CPUs (even embedded CPUs) are now pipelined, and microcoded CPUs with no pipelining are seen only in the most area-constrained embedded processors. Large CISC machines, from the VAX 8800 to the modern Pentium 4 and Athlon, are implemented with both microcode and pipelines. Improvements in pipelining and caching are the two major microarchitectural advances that have enabled processor performance to keep pace with the circuit technology on which they are based.
Cache
It was not long before improvements in chip manufacturing allowed for even more circuitry to be placed on the die, and designers started looking for ways to use it. One of the most common was to add an ever-increasing amount of cache memory on-die. Cache is very fast and expensive memory. It can be accessed in a few cycles as opposed to the many needed to "talk" to main memory. The CPU includes a cache controller which automates reading and writing from the cache. If the data is already in the cache it is accessed from there, at a considerable time saving; if it is not, the processor is "stalled" while the cache controller reads it in.
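A minimal sketch of this hit/miss behaviour, using an invented direct-mapped cache with four lines and a dictionary standing in for slow main memory:

```python
# Toy direct-mapped cache to illustrate hit/miss handling; the line count and
# the backing "main memory" contents are invented for the example.
N_LINES = 4

class DirectMappedCache:
    def __init__(self, memory):
        self.memory = memory
        self.lines = [None] * N_LINES      # each line holds (tag, value)
        self.hits = self.misses = 0

    def read(self, addr):
        index, tag = addr % N_LINES, addr // N_LINES
        line = self.lines[index]
        if line is not None and line[0] == tag:
            self.hits += 1                 # fast path: data already cached
            return line[1]
        self.misses += 1                   # "stall": fetch from slow memory
        value = self.memory[addr]
        self.lines[index] = (tag, value)   # fill the line for next time
        return value

mem = {a: a * 10 for a in range(16)}
cache = DirectMappedCache(mem)
for a in [0, 1, 0, 1, 4, 0]:               # address 4 evicts address 0's line
    cache.read(a)
print(cache.hits, cache.misses)             # 2 4
```

Real caches add multi-word lines, associativity and write policies, but the stall-on-miss principle is the same.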
RISC designs started adding cache in the mid-to-late 1980s, often only 4 KB in total. This number grew over time, and typical CPUs now have at least 512 KB, while more powerful CPUs come with 1 or 2 or even 4, 6, 8 or 12 MB, organized in multiple levels of a memory hierarchy. Generally speaking, more cache means more performance, due to reduced stalling.
Caches and pipelines were a perfect match for each other. Previously, it didn't make much sense to build a pipeline that could run faster than the access latency of off-chip memory. Using on-chip cache memory instead meant that a pipeline could run at the speed of the cache access latency, a much smaller length of time. This allowed the operating frequencies of processors to increase at a much faster rate than that of off-chip memory.
Branch prediction
One barrier to achieving higher performance through instruction-level parallelism stems from pipeline stalls and flushes due to branches. Normally, whether a conditional branch will be taken isn't known until late in the pipeline, as conditional branches depend on results coming from a register. From the time that the processor's instruction decoder has figured out that it has encountered a conditional branch instruction to the time that the deciding register value can be read out, the pipeline either needs to be stalled for several cycles or, if it is not and the branch is taken, the pipeline needs to be flushed. As clock speeds increase the depth of the pipeline increases with it, and some modern processors may have 20 stages or more. On average, every fifth instruction executed is a branch, so without any intervention the pipeline spends a substantial fraction of its time stalled or flushed.
Techniques such as branch prediction and speculative execution are used to lessen these branch penalties. Branch prediction is where the hardware makes educated guesses on whether a particular branch will be taken. In reality one side of the branch will be taken much more often than the other. Modern designs have rather complex statistical prediction systems, which watch the results of past branches to predict the future with greater accuracy. The guess allows the hardware to prefetch instructions without waiting for the register read. Speculative execution is a further enhancement in which the code along the predicted path is not just prefetched but also executed before it is known whether the branch should be taken or not. This can yield better performance when the guess is good, with the risk of a huge penalty when the guess is bad because instructions need to be undone.
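One classic (though much simplified) prediction scheme is the two-bit saturating counter; the table size, branch address and outcome trace below are invented for illustration:

```python
# Sketch of a 2-bit saturating-counter branch predictor. Counter states 0-1
# predict "not taken", states 2-3 predict "taken"; one wrong outcome does not
# immediately flip a strongly-biased prediction.
class TwoBitPredictor:
    def __init__(self, entries=16):
        self.table = [2] * entries         # start in "weakly taken"

    def predict(self, pc):
        return self.table[pc % len(self.table)] >= 2

    def update(self, pc, taken):
        i = pc % len(self.table)
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)

p = TwoBitPredictor()
outcomes = [True, True, True, False, True, True]   # a loop branch at pc 0x40
correct = 0
for taken in outcomes:
    if p.predict(0x40) == taken:
        correct += 1
    p.update(0x40, taken)
print(correct, len(outcomes))   # 5 6
```

The single not-taken loop exit costs one misprediction, but the counter's hysteresis keeps the following iterations predicted correctly.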
Superscalar
Even with all of the added complexity and gates needed to support the concepts outlined above, improvements in semiconductor manufacturing soon allowed even more logic gates to be used.
In the outline above the processor processes parts of a single instruction at a time. Computer programs could be executed faster if multiple instructions were processed simultaneously. This is what superscalar processors achieve, by replicating functional units such as ALUs. The replication of functional units was only made possible when the die area of a single-issue processor no longer stretched the limits of what could be reliably manufactured. By the late 1980s, superscalar designs started to enter the marketplace.
In modern designs it is common to find two load units, one store (many instructions have no results to store), two or more integer math units, two or more floating point units, and often a SIMD unit of some sort. The instruction issue logic grows in complexity by reading in a huge list of instructions from memory and handing them off to the different execution units that are idle at that point. The results are then collected and re-ordered at the end.
Out-of-order execution
The addition of caches reduces the frequency or duration of stalls due to waiting for data to be fetched from the memory hierarchy, but does not get rid of these stalls entirely. In early designs a cache miss would force the cache controller to stall the processor and wait. Of course there may be some other instruction in the program whose data is available in the cache at that point. Out-of-order execution allows that ready instruction to be processed while an older instruction waits on the cache, then re-orders the results to make it appear that everything happened in the programmed order. This technique is also used to avoid other operand dependency stalls, such as an instruction awaiting a result from a long latency floating-point operation or other multi-cycle operations.
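A highly simplified sketch of the effect (a dataflow schedule rather than real issue hardware): an independent instruction can finish under a long-latency load, even though results still appear in program order; the instruction names and latencies are invented:

```python
# Simplified sketch of out-of-order execution's benefit: an instruction may
# begin as soon as its source values are ready, not when its predecessors
# finish. Tuples are (name, sources, latency in cycles); all invented.
program = [
    ("load_a", [], 3),         # long cache miss
    ("add_b", ["load_a"], 1),  # depends on the load, must wait
    ("mul_c", [], 1),          # independent: can run "under" the miss
]

done_at = {}
for name, srcs, latency in program:
    start = max([done_at[s] for s in srcs], default=0)  # wait only for sources
    done_at[name] = start + latency

print(done_at["mul_c"] < done_at["load_a"])  # True: finished under the miss
```

An in-order machine would have stalled `mul_c` behind the miss; the reorder logic then makes the completed results visible in the original program order.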
Register renaming
Register renaming refers to a technique used to avoid unnecessary serialized execution of program instructions caused by the reuse of the same registers by those instructions. Suppose we have two groups of instructions that will use the same register. One set of instructions is executed first to leave the register to the other set, but if the other set is assigned to a different, equivalent register, both sets of instructions can be executed in parallel rather than in series.
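The idea can be sketched as a mapping from architectural to physical registers; the three-operand instruction format and register names below are invented:

```python
# Sketch of register renaming: each architectural register that is written
# gets a fresh physical register, removing false write-after-write and
# write-after-read dependencies. Instructions are (dest, src1, src2).
def rename(instructions):
    mapping = {}                      # architectural -> current physical reg
    free = iter(f"p{i}" for i in range(64))
    renamed = []
    for dest, src1, src2 in instructions:
        s1 = mapping.get(src1, src1)  # sources read the current mapping
        s2 = mapping.get(src2, src2)
        mapping[dest] = next(free)    # destination gets a new physical reg
        renamed.append((mapping[dest], s1, s2))
    return renamed

# Both instructions reuse r1 as a destination; after renaming they write
# different physical registers and can execute in parallel.
prog = [("r1", "r2", "r3"), ("r1", "r5", "r6")]
print(rename(prog))   # [('p0', 'r2', 'r3'), ('p1', 'r5', 'r6')]
```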
Multiprocessing and multithreading
Computer architects have become stymied by the growing mismatch in CPU operating frequencies and DRAM access times. None of the techniques that exploited instruction-level parallelism (ILP) within one program could make up for the long stalls that occurred when data had to be fetched from main memory. Additionally, the large transistor counts and high operating frequencies needed for the more advanced ILP techniques required power dissipation levels that could no longer be cheaply cooled. For these reasons, newer generations of computers have started to exploit higher levels of parallelism that exist outside of a single program or program thread.
This trend is sometimes known as throughput computing. This idea originated in the mainframe market where online transaction processing emphasized not just the execution speed of one transaction, but the capacity to deal with massive numbers of transactions. With transaction-based applications such as network routing and web-site serving greatly increasing in the last decade, the computer industry has re-emphasized capacity and throughput issues.
One technique of achieving this parallelism is through multiprocessing systems, computer systems with multiple CPUs. Once reserved for high-end mainframes and supercomputers, small-scale (2–8) multiprocessor servers have become commonplace for the small business market. For large corporations, large-scale (16–256) multiprocessors are common. Even personal computers with multiple CPUs have appeared since the 1990s.
With further transistor size reductions made available by semiconductor technology advances, multi-core CPUs have appeared where multiple CPUs are implemented on the same silicon chip. Initially, such chips targeted embedded markets, where simpler and smaller CPUs would allow multiple instantiations to fit on one piece of silicon. By 2005, semiconductor technology allowed CMP chips with two high-end desktop CPUs to be manufactured in volume. Some designs, such as Sun Microsystems' UltraSPARC T1, have reverted to simpler (scalar, in-order) designs in order to fit more processors on one piece of silicon.
Another technique that has become more popular recently is multithreading. In multithreading, when the processor has to fetch data from slow system memory, instead of stalling for the data to arrive, the processor switches to another program or program thread which is ready to execute. Though this does not speed up a particular program/thread, it increases the overall system throughput by reducing the time the CPU is idle.
Conceptually, multithreading is equivalent to a context switch at the operating system level. The difference is that a multithreaded CPU can do a thread switch in one CPU cycle instead of the hundreds or thousands of CPU cycles a context switch normally requires. This is achieved by replicating the state hardware (such as the register file and program counter) for each active thread.
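A sketch of the replicated-state idea, assuming an invented two-thread core: per-thread program counters and register files are duplicated, so a thread switch is just a change of index rather than an OS-level context switch:

```python
# Sketch of hardware multithreading: one set of execution resources, but
# replicated per-thread state (program counter + register file). The core
# layout and register count are invented for illustration.
class MultithreadedCore:
    def __init__(self, n_threads):
        self.pc = [0] * n_threads                        # replicated PCs
        self.regs = [[0] * 8 for _ in range(n_threads)]  # replicated reg files
        self.active = 0

    def switch(self, thread):
        self.active = thread          # "one cycle": just re-index the state

    def step(self):
        self.pc[self.active] += 1     # shared datapath advances active thread
        return self.pc[self.active]

core = MultithreadedCore(2)
core.step(); core.step()      # thread 0 runs two instructions
core.switch(1)                # e.g. thread 0 stalls on a memory access
core.step()
print(core.pc)                # [2, 1]
```

Nothing is saved or restored on the switch; both threads' state lives in hardware simultaneously, which is exactly what makes the switch cheap.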
A further enhancement is simultaneous multithreading. This technique allows superscalar CPUs to execute instructions from different programs/threads simultaneously in the same cycle.
See also
Control unit
Hardware architecture
Hardware description language (HDL)
Instruction-level parallelism (ILP)
List of AMD CPU microarchitectures
List of Intel CPU microarchitectures
Processor design
Stream processing
VHDL
Very large-scale integration (VLSI)
Verilog
References
Further reading
Central processing unit
Instruction processing
Microprocessors
|
19418556
|
https://en.wikipedia.org/wiki/Trojan%20Box%20Set%20series
|
Trojan Box Set series
|
The Trojan Box Set series is a range of various artist triple CD box sets, periodically released by the British reggae record label Trojan Records since 1998. The series covers a wide variety of reggae subgenres, styles and themes.
Each set has standard formatting. The title is Trojan Box Set (except RAS and Creole Reggae). The artwork is usually a colourful, distinctive variation on a generic design, with the Trojan Records logo as the central motif (except Upsetter, RAS, Creole Reggae and Reggae For Kids). There are usually 50 songs in each set (except 12" with 35). The packaging is a clamshell cardboard box with an individual hard paper slipcase for each disc, plus basic track information and/or liner notes. Despite being billed as "limited edition", many sets have remained widely available. In addition to the 69 box sets released by the label, three additional box sets were released in limited Japan-only markets in 2005–2007 and are not widely available in other markets: Trojan Katsuo Box (2005), Trojan HMV Box Set (2006), and Trojan Tower Box Set (2007). The Japan releases include more songs per set than the other 69 releases: Trojan Katsuo Box features 70 songs, Trojan HMV Box Set features 51 songs, and Trojan Tower Box Set features 54 songs. Like the other 69 releases, the Japanese box sets include three slipcases with tracklists, but rather than essays or liner notes, include original drawings and artwork. The cover art for these box sets also differs from the others in format and style.
Titles in the Trojan Box Set series
1998
Trojan Ska Box Set (TRBCD001)
Trojan Dub Box Set (TRBCD002)
Trojan Rocksteady Box Set (TRBCD003)
Trojan D.J. Box Set (TRBCD004)
1999
Trojan Lovers Box Set (TRBCD005)
Trojan Tribute To Bob Marley Box Set (TRBCD006)
Trojan Instrumentals Box Set (TRBCD007)
Trojan Roots Box Set (TRBCD008)
Trojan Jamaican Superstars Box Set (TRBCD009)
Trojan Producer Series Box Set (TRBCD010)
Trojan Rare Groove Box Set (TRBCD011)
Trojan Singles Box Set (TRBCD012)
2000
Trojan Ska Volume 2 Box Set (TRBCD014)
Trojan Dub Volume 2 Box Set (TRBCD015)
Trojan Jamaican Hits Box Set (TRBCD017)
Trojan 'Tighten Up' Box Set (TRBCD018)
Trojan Club Reggae Box Set (TRBCD020)
Trojan Soulful Reggae Box Set (TRBCD021)
Trojan Rastafari Box Set (TRBCD022)
Trojan Dancehall Box Set (TRBCD023)
2002
Trojan Skinhead Reggae Box Set (TJETD003)
Trojan UK Hits Box Set (TJETD010)
Trojan Mod Reggae Box Set (TJETD020)
Trojan Upsetter Box Set (TJETD021)
Trojan Bob Marley & Friends Box Set (TJETD028)
Trojan Revive Box Set (TJETD029)
Trojan Calypso Box Set (TJETD033)
Trojan Jamaican R&B Box Set (TJETD042)
Trojan X-Rated Box Set (TJETD048)
Trojan Rude Boy Box Set (TJETD055)
Trojan British Reggae Box Set (TJETD070)
2003
Trojan Reggae Brothers Box Set (TJETD072)
Trojan Reggae Sisters Box Set (TJETD073)
Trojan 12" Box Set (TJETD084)
Trojan Reggae Duets Box Set (TJETD093)
Trojan Nyahbinghi Box Set (TJETD094)
Trojan Originals Box Set (TJETD098)
Trojan Ganja Reggae Box Set (TJETD102)
Trojan Reggae Chill-out Box Set (TJETD115)
Trojan 35th Anniversary Box Set (TJETD130)
Trojan Carnival Box Set (TJETD132)
Trojan Christmas Box Set (TJETD142)
2004
Trojan Ska Revival Box Set (TJETD146)
RAS Reggae Box Set (TJETD152)
The Creole Reggae Box Set (TJETD161)
Trojan Suedehead Box Set (TJETD169)
Trojan Sixties Box Set (TJETD174)
Trojan Sunshine Reggae Box Set (TJETD185)
Trojan Seventies Box Set (TJETD192)
Trojan Jamaica Box Set (TJETD199)
Trojan Ragga Box Set (TJETD207)
Trojan Reggae Rarities Box Set (TJETD215)
2005
Trojan Beatles Tribute Box Set (TJETD220)
Trojan Roots & Culture Box Set (TJETD226)
Trojan Ska Rarities Box Set (TJETD238)
Trojan Dancehall Roots Box Set (TJETD243)
Trojan Eighties Box Set (TJETD253)
Trojan Reggae For Kids Box Set (TJETD262)
Trojan Legends Box Set (TJETD271)
Trojan Dub Rarities Box Set (TJETD278)
Trojan Mod Reggae Vol.2 Box Set (TJETD287)
Trojan Lovers Rock Box Set (TJETD292)
Trojan Roots Reggae Box Set (TJETD296)
Trojan Rocksteady Rarities Box Set (TJETD299)
Trojan Katsuo Box (TJETD9000)
2006
Trojan Motor City Reggae Box Set (TJETD309)
Trojan Bob Marley Covers Box Set (TJETD323)
Trojan Rockers Box Set (TJETD338)
Trojan HMV Box Set (TJETD9001)
2007
Trojan Slack Reggae Box Set (TJETD348)
Trojan Country Reggae Box Set (TJETD365)
Trojan Tower Box Set (TJETD9002)
External links
Trojan Box Set website
Trojan Records website
Compilation album series
1990s compilation albums
2000s compilation albums
Trojan Records albums
|
2476874
|
https://en.wikipedia.org/wiki/AveDesk
|
AveDesk
|
AveDesk is a freeware widget engine for Windows XP that runs small, self-contained widgets called "desklets", as well as ObjectDock "docklets" (small plugins intended for use by ObjectDock and other similar programs). It was created by Andreas Verhoeven, a freelance software programmer, and although free, it is touted as "donationware", meaning the software is financed solely by donations.
Unlike most other software programs of its kind, AveDesk is heavily community driven. A dedicated section of the forums on Aqua-Soft, an online community of skinning enthusiasts dedicated to emulating the look and feel of Mac OS X Leopard, is used by users of the software to report bugs or request new software features directly to the programmer, cutting any red tape in the way. New features are also discussed directly among the users of the software and the programmer himself.
The "Ave" in "AveDesk" is a shortened version of the author's name, Andreas Verhoeven.
Features
AveDesk desklets are skinnable plugins developed in Visual C++ that can display themselves as widgets, rather than just simply script files. One advantage is that a desklet can have its entire appearance changed more easily to suit the tastes of its users, rather than an entirely new desklet having to be created, as in most other widget engines. However, for the same reason, users cannot create custom-made desklets for AveDesk as easily as for other similar programs (such as Konfabulator and DesktopX). To work around this, AveDesk users usually use a plugin called SysStats, which allows users to easily create and run desklets for AveDesk using scripts (such as JavaScript and VBScript), coupled with specially structured INI files and computer image files that make up the look and functionality of the widget.
With the release of version 1.3 of AveDesk, a new scripting engine, called AveScripter, was developed to take advantage of the updated internal architecture of desklets. The engine is more closely integrated with AveDesk, and is able to take advantage of the internal features that come with the new version, such as the visual effects included with AveDesk and a special library of graphical user interface controls intended for AveDesk desklets, called AveControls.
Desklets
AveDesk is mainly used by Windows users emulating the look and feel of Mac OS X. This can be seen in the default set of desklets included in the program. Some of the more commonly used ones include:
A "PidlShortcut" desklet (the most popular among the default set of desklets in AveDesk), a skinnable shortcut desklet that can point to a computer file or folder, but with customisable looks and functions and the ability to use a high-resolution PNG image as an icon for the shortcut (instead of the usual low-resolution Windows icon), as well as to provide additional information, such as the number of files in the folder or the size of the disk drive,
A skinnable "iTunesDesklet" desklet (also known as "AveTunes"), which is an iTunes remote control, similar in functionality to its Mac OS X Dashboard counterpart, but can have its appearance changed through skins, and
A "StickyNotes" desklet, which can hold simple notes and is very similar to the Stickies widget in Mac OS X's Dashboard. This desklet is an updated replacement for the "Notes" desklet found in earlier versions of AveDesk.
A "Translator" desklet, which uses an online language translator to translate text from one language to another, and is similar to its Mac OS X Dashboard counterpart, but can have its appearance changed through skins. Among the languages supported are the more commonly spoken languages in Europe (such as English, French and German), as well as Chinese, Korean and Japanese.
Version 1.3 of AveDesk adds several new internal features as mentioned earlier, and a few new desklets were made to take advantage of them. In addition to the "Translator" and "StickyNotes" desklets (which were added in version 1.3) described above, two other desklets worth mentioning are:
A "ChalkBoard" desklet. Essentially a simple electronic drawing pad, a user can use a mouse to write or draw on the pad. The user can choose between five drawing colours and three pen sizes.
A "WordSearcher" desklet, which allows a user to search either an online dictionary or thesaurus. The desklet's colour changes to green if an entry is found, and red if it is unable to find the entry.
Note that this is not an exhaustive list of all the desklets included with AveDesk.
Features
Among the features of AveDesk not usually found in other widget engines are:
An installer feature. New users are easily confused with the correct installation of new desklets. To work around this, desklet creators can create specially pre-packaged desklets (which are actually ZIP files with the correct directory structure and instructions for the software program). Users need only to open the package with AveDesk, which will then automatically and correctly install the desklets for the user.
Modules, which extend the functionality of the software program itself. These modules act as plugins to the AveDesk program itself, and are not desklets or widgets. Modules provide additional functionality such as the ability to show or hide "PidlShortcut" desklets that point to specific disk drives as they are mounted or dismounted, or to automatically hide all normal desktop icons when the program is started.
A themes feature. This feature allows a user of AveDesk to save the configurations and positions of AveDesk desklets he or she has running on the desktop, so if the user wishes to go back to that configuration in the future, he or she only needs to load that theme into AveDesk, saving the hassle of rearranging and reconfiguring the desklets. This feature came about after some users of earlier versions voiced the need to use multiple configurations of AveDesk desklets.
Showcase. Similar to Konfabulator's Konsposé, and patterned after Mac OS X's Dashboard and Exposé, this feature quickly brings AveDesk desklets to the foreground, with the background dimmed. The user can set the hotkeys used to activate ShowCase, as well as setting the "dimness" of the background.
The ability to add custom visual effects to desklets, and to create new ones using specially crafted scripts. Unlike other widget engines, where visual effects are limited to the scope of the widgets, users can add their own visual effects onto AveDesk desklets, separate from the desklet. Many of the effects mimic the animations present on Mac OS X, but a few effects were unique to AveDesk, which some cross-platform users using the software application feel are better than their counterparts on Mac OS X.
A control panel. Instead of a context menu listing all open widgets (as in most other widget engines), AveDesk uses a control panel for centralised desklet management. The control panel lists all currently open desklets (along with a "live" thumbnail image of each desklet), as well as a menu bar. From there, a user can open and close desklets, configure AveDesk's options, set the defaults for new AveDesk desklets, switch or create new AveDesk themes, or install and manage modules. Like other widget engines, however, AveDesk does place an icon in the taskbar, but it does not contain a list of running desklets. Its main purpose is to activate the control panel, to switch themes, and to shut down the program itself.
ObjectDock docklet support. Highly unusual for widget engines, AveDesk not only supports its own plugin format, but also plugin formats of other applications that use plugins (in this case, ObjectDock). One could run an ObjectDock docklet inside AveDesk as if it were an AveDesk desklet.
Labels and sublabels. Each AveDesk desklet can have its own label, which the user can change. Some desklets make heavy use of this feature, such as the "PidlShortcut" desklet; the labels provide the name of the file or folder it is pointing to, and the sublabels can provide additional information about the file or folder. Sublabels were added in version 1.1 to allow such shortcut desklets to more closely mimic their Mac OS X counterparts, which have this feature for all icons. This feature exists in the first place because ObjectDock docklets (which AveDesk can support, as described above) usually emanate information through labels, as well as through the icon representing the docklet itself.
History
AveDesk was actually a spinoff of another different software project. Originally, Andreas Verhoeven (who was then a creator of ObjectDock docklets) was developing a program that could run ObjectDock docklets in Y'z Dock (a now-defunct program that was similar to ObjectDock) and vice versa, in order to resolve incompatibilities between the two programs. Instead, he managed to have ObjectDock docklets running on the desktop, independent of ObjectDock (hence the support for ObjectDock docklets). From there, he began further developing the idea that eventually led to AveDesk. During its development, he coined the term "desklets" to describe the widgets of AveDesk (as desklets are to computer desktops as docklets are to dock programs such as ObjectDock).
In late 2003, Andreas released AveDesk 1.0 on the Aqua-Soft online community. Soon after, a number of desklets intended just for AveDesk appeared. Some of the more notable desklets that helped propel AveDesk's popularity were a simple but skinnable shortcut desklet (the predecessor to the "PidlShortcut" desklet) that could easily be customised to take on the look and feel of Mac OS X desktop shortcuts, complete with functioning Windows context menus; a disk drive desklet (similar to the shortcut desklet) that could be set to appear only when that particular disk drive had been mounted; and a Trash can desklet with the additional function of ejecting a CD or DVD disk drive when its icon was dragged onto it. These desklets helped to increase the appeal of AveDesk to skinning enthusiasts who wanted to emulate the look and feel of Mac OS X.
In late 2004, Andreas released version 1.1 of AveDesk to an eager audience of skinning enthusiasts, who had previously been teased with screenshots of the new version of AveDesk while it was in development. Users who installed the new version were greeted with improved desklets and new, custom visual effects (as described above). These visual effects further increased the appeal of AveDesk, and version 1.1 was embraced heartily by skinning enthusiasts, making it one of the more popular widget engines for Windows, along with already popular widget engines at that time (such as DesktopX).
In early 2005, version 1.2 of AveDesk was released. Version 1.2 was one of the biggest (if not the biggest) updates to AveDesk. Many new features were added to this version, some of them requested by users of earlier versions; the themes and installer features, as well as Showcase, were among the features new to the release. The various shortcut desklets were also replaced by a new, updated "PidlShortcut" desklet, bringing together and improving on the functions and quality of the shortcut desklets it displaced. Additional visual effects, such as improved shadows for the text on each desklet's label and fade-in/fade-out effects, were also added.
On October 26, 2005, AveDesk version 1.3 was released. This version was originally intended to be a minor update to iron out some quirks in previous versions of AveDesk, but it eventually became one of the biggest updates to be rolled out. Several new features were incorporated into the release, such as additional hardware-accelerated visual effects when closing or configuring each desklet, along with improved desklets, some of which were described above.
The program is currently at version 1.4, which added support for Windows Vista.
External links
Homepage
Home of the SysStats widgets
Home of AveScripter & other popular widgets
Windows-only software
Widget engines
History of software
Software is a set of programmed instructions stored in the memory of stored-program digital computers for execution by the processor. Software is a recent development in human history, and it is fundamental to the Information Age.
Ada Lovelace's programs for Charles Babbage's Analytical Engine in the 19th century are often considered the first computer programs, though her efforts remained theoretical only, as the technology of Lovelace and Babbage's day proved insufficient to build the machine. Alan Turing is credited with being the first person to come up with a theory for software in 1935, which led to the two academic fields of computer science and software engineering.
The first generation of software for early stored-program digital computers in the late 1940s had its instructions written directly in binary code, generally written for mainframe computers. Later, the development of modern programming languages alongside the advancement of the home computer would greatly widen the scope and breadth of available software, beginning with assembly language, and continuing on through functional programming and object-oriented programming paradigms.
Before stored-program digital computers
Origins of computer science
Computing as a concept goes back to ancient times, with devices such as the abacus, the Antikythera mechanism, and Al-Jazari's programmable castle clock. However, these devices were pure hardware and had no software - their computing powers were directly tied to their specific form and engineering.
Software requires the concept of a general-purpose processor (what is now described as a Turing machine) as well as computer memory in which reusable sets of routines and mathematical functions comprising programs can be stored, started, and stopped individually; it therefore appears only recently in human history.
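The general-purpose processor idea can be illustrated with a minimal Turing machine simulator. This sketch is purely illustrative; the transition table and function names are invented for the example, and real Turing machine formalizations differ in detail.

```python
# A minimal Turing machine simulator, illustrating the idea of a
# general-purpose processor: behaviour is defined entirely by a
# stored transition table (the "software"), not by the machine itself.

def run_turing_machine(table, tape, state="start", blank="_", max_steps=1000):
    """Run a transition table over a tape; return the final tape as a string.

    table maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left), +1 (right), or 0; the state 'halt' stops execution.
    """
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = table[(state, symbol)]
        tape[head] = write
        head += move
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# A table that inverts every bit, then halts at the first blank cell.
invert = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine(invert, "1011"))  # -> 0100
```

Changing only the table changes what the machine computes, which is the essence of the stored-program idea.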
The first known computer algorithm was written by Ada Lovelace in the 19th century for the Analytical Engine: a method for computing Bernoulli numbers, published in her notes to a translation of Luigi Menabrea's paper on the engine. However, this remained theoretical only; the state of engineering in the lifetimes of these two mathematicians proved insufficient to construct the Analytical Engine.
The first modern theory of software was proposed by Alan Turing in his 1935 essay On Computable Numbers, with an Application to the Entscheidungsproblem (decision problem).
This eventually led to the creation of the twin academic fields of computer science and software engineering, which both study software and its creation. Computer science is more theoretical (Turing's essay is an example of computer science), whereas software engineering is focused on more practical concerns.
However, prior to 1946, software as we now understand it (programs stored in the memory of stored-program digital computers) did not yet exist. The very first electronic computing devices were instead rewired in order to "reprogram" them. The ENIAC, one of the first electronic computers, was programmed largely by women who had previously been working as human computers. Engineers would give the programmers blueprints of the ENIAC wiring and expected them to figure out how to program the machine. The women who worked as programmers prepped the ENIAC for its first public reveal, wiring the patch panels together for the demonstrations. Kathleen Booth developed an assembly language in 1950 to make it easier to program the computers she worked on at Birkbeck College.
Grace Hopper worked as one of the first programmers of the Harvard Mark I. She later created a 500-page manual for the computer. Hopper is often falsely credited with coining the terms "bug" and "debugging," when she found a moth in the Mark II, causing a malfunction; however, the term was in fact already in use when she found the moth. Hopper developed the first compiler and brought her idea from working on the Mark computers to working on UNIVAC in the 1950s. Hopper also developed the programming language FLOW-MATIC to program the UNIVAC. Frances E. Holberton, also working at UNIVAC, developed a code, C-10, which let programmers use keyboard inputs and created the Sort-Merge Generator in 1951. Adele Mildred Koss and Hopper also created the precursor to a report generator.
Early days of computer software (1948–1979)
In his manuscript "A Mathematical Theory of Communication", Claude Shannon (1916–2001) provided an outline for how binary logic could be implemented to program a computer. Subsequently, the first computer programmers used binary code to instruct computers to perform various tasks. Nevertheless, the process was very arduous. Computer programmers had to provide long strings of binary code to tell the computer what data to store. Code and data had to be loaded onto computers using various tedious mechanisms, including flicking switches or punching holes at predefined positions in cards and loading these punched cards into a computer. With such methods, if a mistake was made, the whole program might have to be loaded again from the beginning.
The very first time a stored-program computer held a piece of software in electronic memory and executed it successfully was at 11 a.m. on 21 June 1948, at the University of Manchester, on the Manchester Baby computer. The program was written by Tom Kilburn, and calculated the highest factor of the integer 2^18 = 262,144. Starting with a large trial divisor, it performed division of 262,144 by repeated subtraction and then checked whether the remainder was zero. If not, it decremented the trial divisor by one and repeated the process. Google released a tribute to the Manchester Baby, celebrating it as the "birth of software".
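Kilburn's procedure (division by repeated subtraction with a decrementing trial divisor) can be sketched in modern terms. The original was, of course, Manchester Baby machine code; this Python version is only an illustration of the same algorithm.

```python
def highest_factor(n, start_divisor=None):
    """Find the highest proper factor of n, in the spirit of the 1948
    Manchester Baby program: try divisors from a large value downward,
    performing each division by repeated subtraction."""
    divisor = start_divisor if start_divisor is not None else n - 1
    while divisor > 1:
        remainder = n
        while remainder >= divisor:   # division by repeated subtraction
            remainder -= divisor
        if remainder == 0:            # divides exactly: factor found
            return divisor
        divisor -= 1                  # otherwise decrement and retry
    return 1

print(highest_factor(262144))  # 2**18; highest proper factor is 131072
```

On the Baby this brute-force search reportedly ran for some 52 minutes; in Python on modern hardware it finishes almost instantly, a measure of how far hardware has come.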
FORTRAN was developed by a team led by John Backus at IBM in the 1950s. The first compiler was released in 1957. The language proved so popular for scientific and technical computing that by 1963 all major manufacturers had implemented or announced FORTRAN for their computers.
COBOL was first conceived of when Mary K. Hawes convened a meeting (which included Grace Hopper) in 1959 to discuss how to create a computer language to be shared between businesses. Hopper's innovation with COBOL was developing a new symbolic way to write programming. Her programming was self-documenting. Betty Holberton helped edit the language which was submitted to the Government Printing Office in 1960. FORMAC was developed by Jean E. Sammet in the 1960s. Her book, Programming Languages: History and Fundamentals (1969), became an influential text.
Apollo Mission
The Apollo missions to the Moon depended on software to program the computers in the landing modules. The computers were programmed with a language called "Basic" (no relation to the BASIC programming language developed at Dartmouth at about the same time). The software also had an interpreter, made up of a series of routines, and an executive (like a modern-day operating system), which specified which programs to run and when. Both were designed by Hal Laning. Margaret Hamilton, who had previously been involved with software reliability issues when working on the US SAGE air defense system, was also part of the Apollo software team. Hamilton was in charge of the onboard flight software for the Apollo computers. She felt that software operations were not just part of the machine, but also intricately involved with the people who operated the software. Hamilton also coined the term "software engineering" while she was working at NASA.
The actual "software" for the computers in the Apollo missions was made up of wires that were threaded through magnetic cores. Where the wire went through a magnetic core, that represented a "1" and where the wire went around the core, that represented a "0." Each core stored 64 bits of information. Hamilton and others would create the software by punching holes in punch cards, which were then later processed on a Honeywell mainframe where the software could be simulated. When the code was "solid," then it was sent to be woven into the magnetic cores at Raytheon, where women known as "Little Old Ladies" worked on the wires. The program itself was "indestructible" and could even withstand lightning strikes, which happened to Apollo 12. Wiring the computers took several weeks to do, freezing software development during that time.
While using the simulators to test the programming, Hamilton discovered ways that code could produce dangerous errors when human mistakes were made while using it. NASA believed that the astronauts would not make mistakes, due to their training. Hamilton was not allowed to program code to prevent errors that would lead to a system crash, so she annotated the code in the program documentation. Her ideas for adding error-checking code were rejected as "excessive." However, exactly what Hamilton predicted would happen occurred on the Apollo 8 flight, when human error caused the computer to wipe out all of the navigational data.
Bundling of software with hardware and its legal issues
Later, software was sold to multiple customers by being bundled with the hardware by original equipment manufacturers (OEMs) such as Data General, Digital Equipment and IBM. When a customer bought a minicomputer, at that time the smallest computer on the market, the computer did not come with pre-installed software; it had to be installed by engineers employed by the OEM.
This bundling attracted the attention of US antitrust regulators, who sued IBM for improper "tying" in 1969, alleging that it was an antitrust violation that customers who wanted to obtain its software had to also buy or lease its hardware in order to do so. However, the case was dropped by the US Justice Department, after many years of attrition, as it concluded it was "without merit".
Data General also encountered legal problems related to bundling although in this case, it was due to a civil suit from a would-be competitor. When Data General introduced the Data General Nova, a company called Digidyne wanted to use its RDOS operating system on its own hardware clone. Data General refused to license their software and claimed their "bundling rights". The US Supreme Court set a precedent called Digidyne v. Data General in 1985 by letting a 9th circuit appeal court decision on the case stand, and Data General was eventually forced into licensing the operating system because it was ruled that restricting the license to only DG hardware was an illegal tying arrangement. Even though the District Court noted that "no reasonable juror could find that within this large and dynamic market with much larger competitors", Data General "had the market power to restrain trade through an illegal tie-in arrangement", the tying of the operating system to the hardware was ruled as per se illegal on appeal.
In 2008, Psystar Corporation was sued by Apple Inc. for distributing unauthorized Macintosh clones with OS X preinstalled, and countersued. One of the arguments in the countersuit - citing the Data General case - was that Apple dominates the market for OS X compatible computers by illegally tying the operating system to Apple computers. District Court Judge William Alsup rejected this argument, saying, as the District Court had ruled in the Data General case over 20 years prior, that the relevant market was not simply one operating system (Mac OS) but all PC operating systems, including Mac OS, and noting that Mac OS did not enjoy a dominant position in that broader market. Alsup's judgement also noted that the surprising Data General precedent that tying of copyrighted products was always illegal had since been "implicitly overruled" by the verdict in the Illinois Tool Works Inc. v. Independent Ink, Inc. case.
Packaged software (Late 1960s-present)
An industry producing independently packaged software - software that was neither produced as a "one-off" for an individual customer, nor "bundled" with computer hardware - started to develop in the late 1960s.
Unix (1970s–present)
Unix was an early operating system which became popular and very influential, and still exists today. The most popular variant of Unix today is macOS (previously called OS X and Mac OS X), while Linux is closely related to Unix.
The rise of microcomputers
In January 1975, Micro Instrumentation and Telemetry Systems began selling its Altair 8800 microcomputer kit by mail order. Microsoft released its first product Altair BASIC later that year, and hobbyists began developing programs to run on these kits. Tiny BASIC was published as a type-in program in Dr. Dobb's Journal, and developed collaboratively.
In 1976, for instance, Peter R. Jennings created his Microchess program for MOS Technology's KIM-1 kit; since the kit did not come with a tape drive, he would send the source code in a little booklet to his mail-order customers, who had to type the whole program in by hand. In 1978, Kathe and Dan Spracklen released the source of their Sargon (chess) program in a computer magazine. Jennings later switched to selling paper tape, and eventually compact cassettes with the program on them.
It was an inconvenient and slow process to type in source code from a computer magazine, and a single mistyped (or worse, misprinted) character could render the program inoperable, yet people still did so. (Optical character recognition technology, which could theoretically have been used to scan in the listings rather than transcribe them by hand, was not yet in wide use.)
Even with the spread of cartridges and cassette tapes in the 1980s for distribution of commercial software, free programs (such as simple educational programs for the purpose of teaching programming techniques) were still often printed, because it was cheaper than making and attaching cassette tapes to magazines.
However, eventually a combination of four factors brought this practice of printing complete source code listings of entire programs in computer magazines to an end:
programs started to become very large
floppy discs started to be used for distributing software, and then came down in price
regular people started to use computers and wanted a simple way to run a program
computer magazines started to include cassette tapes or floppy discs with free or trial versions of software on them
Very quickly, commercial software started to be pirated, and commercial software producers were very unhappy about this. Bill Gates, cofounder of Microsoft, was an early moraliser against software piracy with his famous Open Letter to Hobbyists in 1976.
1980s–present
Before the microcomputer, a successful software program typically sold up to 1,000 units at $50,000–60,000 each. By the mid-1980s, personal computer software sold thousands of copies for $50–700 each. Companies like Microsoft, MicroPro, and Lotus Development had tens of millions of dollars in annual sales. They similarly dominated the European market with localized versions of already successful products.
A pivotal moment in computing history was the publication in the 1980s of the specifications for the IBM Personal Computer, by a team under IBM employee Philip Don Estridge, which quickly led to the dominance of the PC in the worldwide desktop and later laptop markets, a dominance which continues to this day. Microsoft, by successfully negotiating with IBM to develop the first operating system for the PC (MS-DOS), profited enormously from the PC's success over the following decades, via the success of MS-DOS and its add-on-cum-successor, Microsoft Windows. Winning that negotiation proved decisive for Microsoft's fortunes.
Free and open source software
Recent developments
App stores
Applications for mobile devices (cellphones and tablets) have been termed "apps" in recent years. Apple chose to funnel iPhone and iPad app sales through their App Store, and thus both vets apps and takes a cut of every paid app sold. Apple does not allow apps which could be used to circumvent its app store (e.g. the Java or Flash virtual machines).
The Android platform, by contrast, has multiple app stores available for it, and users can generally select which to use (although Google Play requires a compatible or rooted device).
This move was replicated for desktop operating systems with GNOME Software (for Linux), the Mac App Store (for macOS), and the Windows Store (for Windows). All of these platforms remain, as they have always been, non-exclusive: they allow applications to be installed from outside the app store, and indeed from other app stores.
The explosive rise in popularity of apps, for the iPhone in particular but also for Android, led to a kind of "gold rush", with some hopeful programmers dedicating a significant amount of time to creating apps in the hope of striking it rich. As in real gold rushes, not all of these hopeful entrepreneurs were successful.
Formalization of software development
The development of curricula in computer science has resulted in improvements in software development. Components of these curricula include:
Structured and Object Oriented programming
Data structures
Analysis of Algorithms
Formal languages and compiler construction
Computer Graphics Algorithms
Sorting and Searching
Numerical Methods, Optimization and Statistics
Artificial Intelligence and Machine Learning
How software has affected hardware
As more and more programs enter the realm of firmware, and the hardware itself becomes smaller, cheaper and faster as predicted by Moore's law, an increasing number of computing functions first carried out in software have joined the ranks of hardware, as for example with graphics processing units. (However, the change has sometimes gone the other way for cost or other reasons, as for example with softmodems and microcode.)
Most hardware companies today have more software programmers on the payroll than hardware designers, since software tools have automated many tasks of printed circuit board (PCB) engineers.
Computer software and programming language timeline
The following tables include year by year development of many different aspects of computer software including:
High level languages
Operating systems
Networking software and applications
Computer graphics hardware, algorithms and applications
Spreadsheets
Word processing
Computer aided design
1971–1974
1975–1978
1979–1982
1983–1986
1987–1990
1991–1994
1995–1998
1999–2002
2003–2006
2007–2010
2011–2014
See also
Forensic software engineering
History of computing hardware
History of operating systems
History of software engineering
List of failed and overbudget custom software projects
Women in computing
Timeline of women in computing
References
Sources
External links
History of computer science
History of computing
Production truck
A television production truck or OB van is a small mobile production control room to allow filming of events and video production at locations outside a regular television studio. They are used for remote broadcasts, outside broadcasting (OB), and electronic field production (EFP). Some require a crew of as many as 30 people, with additional trucks for additional equipment as well as a satellite truck, which transmits video back to the studio by sending it up through a communications satellite using a satellite dish, which then transmits it back down to the studio. Alternatively, some production trucks include a satellite transmitter and satellite dish for this purpose in a single truck body to save space, time and cost.
Other television production trucks are smaller in size and generally require only two or three people in the field to manage; for instance, those used by broadcast journalism news reporters providing live television coverage of local news in the field via electronic news gathering (ENG) outside a formal television studio. In some cases, the vehicle can be a station wagon, people carrier or even a motorbike (especially in cities with congested streets, or where a rapid response is needed and a motorbike is more manoeuvrable).
History
One of the BBC's early Outside Broadcast vehicles, MCR 1 (short for Mobile Control Room), was built by the joint Marconi-EMI company and delivered to the BBC just in time to televise the Coronation of George VI and Elizabeth in May 1937. MCR 2 was identical to MCR 1 and was delivered in the summer of 1938. The MCRs could handle three cameras. Initially, they were standard Emitrons, but were later supplemented by Super Emitrons, which performed much better than the standard ones in low light. The MCRs were built on the chassis of an AEC Regal single-decker bus.
After the Second World War, the Marconi-EMI joint company ended. The BBC ordered two 3-camera MCRs from EMI. The cameras were equipped with CPS tubes, had electronic viewfinders and a three-lens turret. MCR 4 was delivered in time to be used at the 1948 Olympics.
After developing colour television in the mid 1960s, the BBC began to develop a fleet of colour OB units, known as CMCRs. Type 2 scanners, introduced in 1969, first came equipped with Pye PC80 cameras but these were soon superseded by EMI 2001 colour cameras. Nine Type 2 vehicles were built by Pye, with these trucks remaining in service through the 1970s and into the mid 1980s. Throughout this time, they would see use on some of the BBC's most prestigious programmes, including Royal Events, Doctor Who, Wimbledon Tennis, and Question Time. Type 2 units went on to be replaced by Type 5 units.
Although made from converted HGVs, these trucks were incredibly cramped inside as a result of housing all of the components of a television gallery. The vehicles were normally made up of three sections:
A section to house the camera control units, or CCUs, and camera monitoring equipment. Being so large and complex, these cameras required a team of skilled engineers to keep them functioning. During a production, the camera operator would control the pan and the focus but it was the engineer who controlled the exposure and the colour balance.
A section for the production crew, led by the director, who would orchestrate the overall production.
A section for the sound crew which housed their mixing desk and other sound equipment. From here the sound crew controlled not only the sound of the programme but all the production communications, which allowed the whole crew to communicate with one another; without these, the production would undoubtedly grind to a halt.
Interior
A typical modern OB vehicle is usually divided into five parts, but many vehicles are customised to specific roles.
Production control
This is the production hub of the vehicle, and is where the majority of the production crew sit in front of a wall of video monitors. The video monitors show all the video feeds from various sources, including computer graphics, professional video cameras, video servers and slow-motion replay machines. The wall of monitors contains a preview monitor showing what could be the next source on air and a program monitor that shows the feed currently going to air or being recorded. The keyed dirty feed (with digital on-screen graphic) is what is actually transmitted back to the central studio that is controlling the outside broadcast. A clean feed (without the graphics) could be sent to other vehicles for use in their production. Behind the directors there is usually a desk with monitors where the assistant producers can work. It is essential that the directors and assistant producers are in communication with each other during events, so that replays and slow-motion shots can be selected and aired.
A character generator, such as those made by well-known manufacturer Chyron, "keys" graphics over a source the technical director chooses; it is generally used for images and lower-third messages, as well as occasionally smaller videos. The Bug Box character generator works the same way but is used only for sporting events; its operator is in charge of ensuring that the time, score, and statistics are displayed on the broadcast as appropriate.
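Keying is, at its core, per-pixel compositing of a graphic over program video under the control of a key (alpha) signal. The following is a minimal illustrative sketch of that blend, not any particular character generator's implementation; the function name and value ranges are chosen for the example.

```python
def key_over(background, graphic, alpha):
    """Composite a graphic over background video, pixel by pixel,
    the way a character generator 'keys' graphics over a source:
    out = alpha * graphic + (1 - alpha) * background.
    Inputs are flat lists of pixel luminance values in 0..255;
    alpha is the key signal in 0.0..1.0 (1.0 = fully opaque graphic)."""
    return [
        round(a * g + (1 - a) * b)
        for b, g, a in zip(background, graphic, alpha)
    ]

# A fully keyed pixel shows the graphic; a zero-key pixel passes video.
video   = [100, 100, 100, 100]
graphic = [255, 255, 255, 255]
key     = [0.0, 0.5, 1.0, 0.0]
print(key_over(video, graphic, key))  # [100, 178, 255, 100]
```

Real keyers do this in hardware at full frame rate, per colour channel, but the arithmetic is the same.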
Crew
Television director – responsible for directing the overall production, including cameras, replays and inserts
Television producers – responsible for the overall running of the production, liaising with talent and choosing when to take commercial breaks
Technical director (also known as a vision mixer) – operates the vision mixer / video switcher, switching the video sources, including graphics, to air as directed
Production assistant (also known as a script supervisor) – responsible for communicating with the broadcast channel about timings, counting in and out of breaks, and giving timings on replays and packages
Assistant producers – often there will be an assistant producer who will be the communication link between the director and the VTR crew, providing information on which channel has the best replay of a certain moment for example
Graphics Operator and Graphics Coordinator – There are a wide range of digital on-screen graphic elements used in television production.
Equipment
Vision mixer – switch between multiple video feeds to produce an easy to watch television experience.
Video monitor – monitor different routable sources on multiple monitors to help select which feed is the best at any given time.
Character generator – used to generate a variety of graphics which can be keyed over a video source.
Sound
This is where the audio engineer (sound supervisor in the UK) uses a mixing console (being fed with all the various audio feeds: reporters, commentary, on-field microphones, etc.) to control which channels are added to the output and follows instructions from the director. They ensure that the audio is within pre-set limits, typically with the help of peak programme meters and loudness monitors. They relay the information from producers and directors to their A2's (audio assistants) who typically set up the audio cables and equipment throughout the arenas and the booth where the commentators sit. The audio engineer normally also has a dirty feed monitor to help with the synchronization of sound and video. Intercom is also generally the responsibility of the sound department.
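The peak check a programme meter performs can be sketched as follows. This is a hedged illustration of computing peak level in dBFS from floating-point samples; the function names and the example limit of -9 dBFS are invented for the sketch (real pre-set limits vary by broadcaster and standard).

```python
import math

def peak_dbfs(samples):
    """Peak level of a block of audio samples, in dB relative to
    full scale (dBFS). Samples are floats in -1.0..1.0; 0 dBFS is
    the loudest representable level, so results are <= 0."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")   # digital silence
    return 20 * math.log10(peak)

def over_limit(samples, limit_dbfs=-9.0):
    """True if the block exceeds the pre-set limit (an arbitrary
    -9 dBFS here, purely for illustration)."""
    return peak_dbfs(samples) > limit_dbfs

print(round(peak_dbfs([0.5, -0.25, 0.1]), 2))  # -6.02 dBFS
print(over_limit([0.5, -0.25, 0.1]))           # True (above -9 dBFS)
```

Broadcast practice increasingly supplements peak metering with integrated loudness measurement, but the per-sample peak above is the simplest form of the check.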
Crew
Audio mix engineer (A1) (also known as audio mixer, audio director or sound supervisor) – The A1 mixes the sounds that the audience will listen to. They will mix the assorted sounds such as crowd noise, effect sounds, announcers, etc. They route the different sources of sounds from microphones, cameras, discs, video tapes, telephones, EVS, or outside audio sources, into the audio mixing board for control. They are also in charge of ensuring the audio is successfully being transmitted. They also ensure the intercom is working for every station in the production, as well as dial up coordination with a network director.
Audio assistant (A2) – The A2s work under the direction of the A1 as they set up all the audio equipment around the venue for various sounds. They also set up the intercom system between the production truck and stage or announcer booths. They are also in charge of placing microphones on the talent as they enter and exit.
Equipment
Audio Mixing console – combine any source of audio and change the level and dynamics of the audio, digital or analog audio sources.
Audio router – used to ensure that all sources of audio appear in the right place on the audio mixing console or in other parts of the production truck
Multitrack recording devices – recording individual tracks of the incoming sources allowing for a dub to be done at a later time
Intercom – two-wire or four-wire intercom allows everyone on the production to communicate quickly and effectively.
VTR
The VTR area has a collection of machines including video servers and may also house additional power supplies or computer equipment. The "tape room" has VTR operators who monitor one or more cameras that go into machines and can be played back for replays when an exciting or important play occurs during the game. These operators can play back in slow motion or pause to show a key part of the action. VTR operators also play replay rollouts that lead into commercial breaks, run title sequences and introductory clips, or show the highlights of the event at the end of play.
This area is often called "EVS", after prominent supplier EVS Broadcast Equipment, who make replay machines and associated software.
Crew
Video Tape Operator (also known as LSM Operator or EVS Operator) – The Tape Operators control the recording equipment, nowadays video servers, that receive the video from the various cameras. They coordinate with the Director on playing back pre-recorded video, and other replays of action they recorded.
Equipment
Video server - used to record, store and play back video clips (and sometimes visual effects) used during the broadcast
Video tape recorder – previously used to record, store and play back video
Racks / engineering
In this area, the professional video cameras are controlled using camera control units (CCU) by multiple vision engineers, to make sure that the iris is at the correct level and that all cameras look the same. These operators shade, balance, and focus the cameras from this position inside the vehicle. This area is controlled by an operator called a V1 (vision supervisor in the UK) and depending on the size of the show may have multiple V2s. This area is also where the majority of the racked technical equipment is stored, including the video router and converters.
Crew
Engineer In Charge (EIC) – a broadcast engineer who has more knowledge about the truck than anyone else on the production. They are involved in installing all required equipment, and have the skills needed to fix and maintain it. EICs usually stay on one truck for years, learning all the intricacies of each machine and how to fix it in difficult situations.
Vision engineer (also known as a video technician or camera shader) – The vision engineers are in charge of all the cameras' iris and overall look of the camera's video. The vision engineers also troubleshoot issues that may arise with the cameras and cable length.
Equipment
Broadcast reference monitor – used to monitor the output of cameras and the transmission for confidence checking
Video router – send video and audio to any destination from any source.
Frame synchronizer – puts asynchronous or "wild" video sources into synchronization with other video signals.
Test card Signal generator – used for checking signal paths and troubleshooting.
Transmission
Some production trucks contain an integrated transmission area, where the outgoing feeds are monitored by the vehicle's engineers to ensure the audience have a good picture and a high-quality signal output. It is then transmitted directly from the truck if it has satellite or fibre uplink facilities, or is sent to other vehicles (typically a dedicated satellite truck) who handle this directly.
Support vehicles
Most larger production trucks will travel with a tender vehicle, which will contain additional equipment which cannot be stored in the production truck itself. This equipment includes:
Camera equipment – Professional video cameras, lenses, tripods, camera pedestals, etc.
Electrical cables – triaxial cable, coaxial cable, audio multicore cable, XLR audio cables, optical fiber, power cable, etc.
Sound equipment – a variety of microphones, talkback receivers, etc.
These vehicles will often contain workbenches for basic maintenance and repairs.
Transmission of video
The transmission of the raw video feed from the remote location to the studio or master control is called backhaul. There are several ways of transmitting the backhaul:
Direct microwave link
The earliest method, used before satellites, is to beam the video directly back to the studio using a microwave dish, where another dish receives the signal. Microwave transmission requires an unobstructed line-of-sight path from the transmitting to the receiving antenna, which can be difficult to achieve in urban locations. Some production trucks have a small microwave dish mounted on a telescoping mast, that can be raised 30 to 40 feet to "see" over buildings and other obstructions. It is still used for short ranges.
Communication satellites
One of the most common techniques is to use a satellite dish to transmit the video feed on a microwave uplink signal to a communication satellite orbiting the Earth, which then retransmits it back to a dish at the studio. Satellite feeds allow televising live events virtually anywhere on Earth. The satellite is in a geostationary orbit about the Earth and so appears at a stationary position in the sky; the dish merely has to be pointed at the satellite when the truck reaches its remote location and does not have to turn to "track" the satellite. Satellite feeds became common in the 1970s, when there were enough satellites in orbit that a consumer market for satellite use started in television. This open market for satellite space spawned a flurry of mobile satellite uplink trucks for hire, making possible the television viewing of live events all over the world. The first satellite trucks were allocated frequencies in the C band (5.700–6.500 GHz), which required large 2-meter dishes. In the 1980s, frequencies in the Ku band (12–18 GHz) were authorized, which required only small dishes less than a meter in diameter, but these are not usable in rainy weather because of rain fade.
Today, the satellite dish and microwave transmitter may be on a satellite truck (uplink truck) separate from the production truck, but some production trucks (called "hybrids") also incorporate the satellite dish and transmitter.
Fiber optic lines
Where available, production trucks can use existing high capacity fiber optic cable to send video directly via the Internet to broadcasting companies for distribution. These accept an asynchronous serial interface (ASI) digital stream from the video encoder. This is a very high quality, low loss way of sending video quickly and securely around the world.
Cellular networks
Portable, battery powered, IP video encoders are sometimes used over cellular modems that leverage a technology called bonded cellular, to transmit video signals from a camera back to a control room or content delivery network. There have been recent tests using 5G for backhaul, with fibre optic as backup.
References
External links
LifeAtSky on YouTube – "Step into our UHD OB Truck", a tour of NEP UK's Aurora production truck
Broadcast engineering
Film and video technology
Television terminology
|
8795217
|
https://en.wikipedia.org/wiki/Psychological%20subversion
|
Psychological subversion
|
Psychological subversion (PsychSub) is the name given by Susan Headley to a method of verbally manipulating people for information. It is similar in practice to so-called social engineering and pretexting, but has a more military focus to it. It was developed by Headley as an extension of knowledge she gained during hacking sessions with notorious early computer network hackers like Kevin Mitnick and Lewis de Payne.
Usage example
Headley often gave the following example of the use of psychological subversion: Suppose the hacker needed access to a certain classified military computer called, say, IBAS. He would obtain the name of the base commander or other high-ranking official, gain access to the DSN (the separate military telephone network) and dial up the computer center he needed to reach, which was often in a secured facility. The person who answered the phone would usually be a low-ranking enlisted person, and the hacker would say something like, "This is Lieutenant Johanson, and General Robertson cannot access his IBAS account, and he'd like to know WHY?" This is all said in a very threatening tone of voice, clearly implying that if the general can't get into his account right away, there will be severe negative repercussions, most likely targeting the hapless person who answered the phone.
The hacker has the subject off guard and very defensive, wanting nothing more than to appease the irritated general as quickly as possible. The hacker then goes silent, giving the victim ample time to stammer into the phone and build up his fear level, while listening for clues from the victim as to how best to proceed. Eventually, the hacker suggests that the tech create a temporary account for the general, or change the general's password to that of the hacker's choice.
The hacker would then have gained access to a classified military computer. This technique would no longer work today, in no small part thanks to Headley's teaching of military agencies about such methods during the 1980s.
Scientific methodology
While pretexting methods and so-called social engineering are based on on-the-fly adaptations during a phone call made to the victim with very little pre-planning or forethought, the practice of PsychSub is based on the principles of NLP and practical psychology. The goal of the hacker or attacker who is using PsychSub is generally more complex and involves preparation, analysis of the situation, and careful thought about what exact words to use and the tone of voice in which to use them.
Classified thesis
Headley's thesis entitled "The Psychological Subversion of Trusted Systems" was classified by the DOD in 1984 and so far has not seen the light of day. As a result, further information about PsychSub is generally unavailable outside of Headley's own seminars on the subject during the 1980s at CIA technology and spycraft-type seminars such as Surveillance Expo.
References
(1) Headley's talk at a hacker convention in Las Vegas
Deception
Psychological abuse
Social engineering (computer security)
|
52569384
|
https://en.wikipedia.org/wiki/Fyuse
|
Fyuse
|
Fyuse is a spatial photography app which lets users capture and share interactive 3D images. By tilting or swiping one's smartphone, one can view such "fyuses" from various angles — as if one were walking around an object or subject.
The app blends photography and video to create an interactive medium and was first published for iOS in April 2014. The Android version was released at the end of 2014.
The app
Fyuse lets users capture panoramas, selfies, and full 360° views of objects and allows one to view captured moments from different angles. It has its own personal gallery, social network and standalone web integration.
With the app, Fyusion also created a social networking platform similar to Instagram. Fyuses can be shared, commented on, liked and re-shared to one's followers (called Echoes). One can build a network of followers and, with engagement tracking, see how many times an image has been interacted with. The images can also be saved for private, offline viewing, shared to other social networks like Facebook or Twitter, or embedded on a website, where desktop users can interact with them by dragging the mouse. Furthermore, in the compass tab other fyuses can be discovered using the app's system of tags and categories. One's feed is prepopulated with top users, and one can follow people to see when they post a new fyuse. The app will also find one's friends if one signs up with Facebook or connects it with one's Twitter account.
To create a fyuse, one moves around a person or object with one's phone's camera in one direction, or moves and tilts one's phone around while holding one's finger on the screen.
By combining photography and video, the app allows one to capture moments that one may not otherwise have been able to capture, by recording not one moment in time but many small moments stitched together. According to Fyusion CEO Radu Rusu, a photo freezes a moment in time, while a video captures moments in a linear timeline — both still flat when viewed. A fyuse captures a moment in space, where one can not only see one side of something, but also around it.
When it is done rendering, a fyuse can also be edited – one can trim it for length and edit the brightness, contrast, exposure, saturation and sharpness. One can also add a vignette and apply filters, with options to adjust their intensity. After editing, one can write a description, add hashtags, and tag parts of the fyuse before (voluntarily) publishing and sharing it.
Version 1.0 has been described as "alpha prototype" and version 2.0 was released on 17 December 2014.
Version 3.0 introduced 3D tagging, by which users can layer 3D graphics that animate with each interaction to add context to the content.
Version 4.0 was released on December 21, 2016 for iOS.
Since January 2016 (v3.2) the app allows the export of fyuses as Live Photos.
The app has also been described as a more sophisticated version of 3D stickers and flip images.
Applications
The app has many applications for e-commerce such as for fashion designers who want to showcase a garment from every angle, or real estate listings and Airbnb-type sites that want to make their rental properties seem as enticing as possible.
The app can also be used for interactive art, 360° panoramas and selfies.
History
San Francisco-based Fyusion Inc.'s three founders — CEO Radu B. Rusu, CTO Stefan Holzer, and VP of Engineering Stephen Miller — worked together at Willow Garage, the "personal robotics" research lab started by early Google employee Scott Hassan. When Hassan decided to turn the lab into more of an incubator, he suggested that its members spin off their technologies into consumer-facing enterprises. Rusu first set out with an open-source 3D perception software startup called Open Perception. Fyusion was officially founded in 2013, and soon after Rusu and his cofounders patented the technology for spatial photography. The company closed a seed funding round at the end of May, raising $3.35 million from investors, including an angel investment from Sun Microsystems cofounder Andreas Bechtolsheim. In 2014 the Fyuse team consisted of 13 employees, mostly engineers and designers, recruited from around the globe. In March 2015 the team displayed their app at Katy Perry's premiere for the movie "Prismatic World Tour on Epix", where Perry also took Fyuse for a test run.
Augmented reality
In September 2016 Fyusion unveiled its platform for creating augmented reality content using one's smartphone. It takes the images from one's smartphone and converts them into 3D holographic images, which one can then view on an AR headset. According to Rusu, "by making it easy for people to capture their surroundings on any mobile device, [Fyusion is] revolutionizing the way that people view the world around them"; he also states that for "AR to be successful, anyone should be able to create content for it", as opposed to the current "small number of content creators and an even smaller number of hardware players". According to him, "the applications of [Fyusion's] technology for consumers and businesses are incredibly limitless". The platform uses the company's patented 3D spatio-temporal technology, which combines advanced sensor fusion, machine learning and computer vision algorithms; part of the platform is built into the Fyuse app. Before committing to releasing a separate consumer product, the company intends to wait until the HoloLens device becomes available to the public. Until then, any representation created using Fyuse is AR-ready and will be able to be shown in HoloLens in the future.
Fyuse - Point of No Return
Fyuse - Point of No Return is a science fiction short advert for Fyuse 3.0 in which Fyuse's digital medium is extrapolated into the future. In the film a woman uses a mini scanning-drone to 3D scan a tree with Fyuse and later recreate it as an augmented reality object at another place.
References
External links
Applications of computer vision
IOS software
Android (operating system) software
2014 software
Photo software
Image sharing websites
Video software
American social networking websites
Internet properties established in 2014
Augmented reality applications
American photography websites
|
43048075
|
https://en.wikipedia.org/wiki/Joan%20Greenbaum
|
Joan Greenbaum
|
Joan Greenbaum (born October 7, 1942) is an American political economist, labor activist, and Professor Emerita at the CUNY Graduate Center (Environmental Psychology) and LaGuardia Community College (Computer Systems Information). She also taught and conducted research at Aarhus University (Computer Science) (1986–88; 1991–92; 2007), and the University of Oslo (Informatics) (1995–96). Her numerous books and articles focus on participatory design of technology information systems, technology and workplace organization, and gender and technology.
Personal life and education
Born to Harriet and Nathan Greenbaum in Bronx, NY, Greenbaum attended public schools in White Plains, NY. She earned a B.A. in economics at Penn State (1963) and a Ph.D. in Political Economy from Union Graduate School (1977), with coursework at the New School for Social Research and a scholarship from the Institute for Policy Studies. Greenbaum has four sons and several grandchildren.
Technology
As an undergraduate student, Greenbaum programmed one of the first computers, the IBM 650, a vacuum tube computer system, in binary code. Following college, she worked as a computer programmer at IBM at a time when few women worked in computer systems. Greenbaum was the first woman faculty member in the Computer Systems Information Department (then called Data Processing) at LaGuardia Community College shortly after it was founded as a cooperative educational institution, making higher education accessible to local factory workers and other laborers (1973–2007). More recently, she was a fellow at the research institute AI Now (2019–2020).
Labor activism
Greenbaum is a longtime labor activist championing workers' rights and raising awareness of social justice issues. In the late 1960s and early 1970s, she was an activist with Computer People for Peace, an organization of workers in the computer field who were against the War in Vietnam. She was active in early efforts to unionize computer workers, which she discussed in a 2019 interview with Logic magazine, and said:
"I believe everything starts with a single issue. You start with a single issue, and my issue was working conditions."
She is a member of the executive board of the Professional Staff Congress (PSC), the union that represents more than 30,000 faculty and staff at the City University of New York, and co-founded the Environmental Health and Safety Watchdogs. She was given the Unsung Hero Award at the New York State United Teachers (NYSUT) Health and Safety Conference in 2013.
Participatory Design
Greenbaum's academic work has been most influential among scholars in the technology and design fields, specifically those working on participatory design of computer systems, which involves the active involvement of all stakeholders (e.g. employees, partners, customers, citizens, end users) in the design process to help ensure the result meets their needs and is usable. She collaborated with scholars in Scandinavia, where the concept of cooperative design took root, who developed strategies and techniques for workers to influence the design and use of computer applications at the workplace. Also in the area of participatory design, Greenbaum's work has been applied to studies of museums and cultural heritage institutions. Dagny Stuedahl, a professor in media design who has written about participatory design methods in museums in Norway, has been influenced by Greenbaum's focus on the organizational context for the participation and involvement in processes that is central to innovation in heritage institutions.
Books
From her work with participatory design, Greenbaum wrote three books: In the Name of Efficiency (Temple University Press, 1979); Design at Work: Cooperative Design of Computer Systems (Erlbaum Press, 1991), which she co-authored with Morten Kyng; and Windows on the Workplace: Computers, Jobs, and the Organization of Office Work in the Late Twentieth Century (Monthly Review Press, 1995). In the Name of Efficiency is considered a core text in the field of labor studies, while Design at Work, her most cited work, remains one of the central publications in the field of information systems design and organizational change. Windows on the Workplace captures stories of organizations and the people who work for them, focusing on the history of office technology in the 50 years prior to publication. The 2004 second edition was updated to include the use of the internet in offices. Of her work, John Bellamy Foster wrote, "Joan Greenbaum, who has conducted extensive research into high technology and the division of labor in office work...argues that "deskilling," though an important and fundamental strategy," often only lays "the groundwork for other devices in management's bag of tricks".
Selected peer reviewed articles and book chapters
“The Head and the Heart: Using Gender Analysis to Study Social Construction of Computer Systems”, Computers and Society, ACM SIGCAS, July 1990.
"Return to the Garden of Eden? Learning, Working, and Living", with Fischer, Gerhard & Frieder Nake, The Journal of the Learning Sciences, 9(4), Fall 2000, pp. 505–513.
“Got Air”, with David Kotelchuck, Working USA, Fall 2003, Vol. 7, No. 2. reprinted in “Got Air: Indoor air quality in US offices”, with David Kotelchuck, in Vernon Mogensen, ed., Worker Safety Under Siege: Labor, Capital and the Politics of Workplace Safety (M.E. Sharpe, 2005).
“Appropriating digital environments: (Re) constructing the physical through the digital”, in ReSearching a Digital Bauhaus, T. Binder, L. Malbom (eds). Springer Verlag, 2008.
"Participation, the camel and the elephant of design", with Daria Loi, CoDesign, International Journal of Cocreation in Design and the Arts. Vol. 8, No 2-3 Sept 2012, Taylor & Francis.
"Heritage: Having a Say", Chapter 2 with Finn Kensing, in The Handbook of Participatory Design, Keld Bodker, Jesper Simonsen, Toni Robertson, eds., Routledge 2012.
Selected keynote addresses
“The Design Challenge-Creating a Mosaic out of Chaos”, ACM Computer Human Interaction Conference (CHI), Denver, May 1995.
"Creating a Sense of Place", Information Technology, Transnational Democracy & Gender, Lulea, Sweden, 2003.
“Social Inclusion & Exclusion in IT design”, International Federation for Information Processing (IFIP), Limerick, Ireland, July 2006.
“Political Economy of Mobile Technology”, 30th International Labour Process Conference, Stockholm, Sweden, March 2012
References
Citations
Sources
Living people
1942 births
Pennsylvania State University alumni
People from the Bronx
Graduate Center, CUNY faculty
Trade unionists from New York (state)
20th-century American non-fiction writers
American women non-fiction writers
American computer programmers
20th-century American women writers
21st-century American women
|
25428827
|
https://en.wikipedia.org/wiki/2007%20Stanford%20Cardinal%20football%20team
|
2007 Stanford Cardinal football team
|
The 2007 Stanford Cardinal football team represented Stanford University in the 2007 NCAA Division I FBS football season. In Jim Harbaugh's inaugural season at Stanford, the 41-point underdog Cardinal pulled off the second greatest point-spread upset in college football history by defeating the #1 USC Trojans in a mid-season game (USC had been ranked No. 1 in all national pre-season polls, picked unanimously to win the Pac-10 Conference, and expected to contend for a national championship – until the Stanford upset). To cap off Harbaugh's first season, the Cardinal defeated archrival Cal in Stanford's final game of the season to win the Stanford Axe for the first time in six years (marking the only game in a series of eight stretching between 2002 and 2009 that was won by Stanford).
The team played their home games at Stanford Stadium in Stanford, California and competed in the Pacific-10 Conference. The Cardinal improved on their 1–11 record from the 2006 season by going 4–8 in the 2007 season.
Schedule
Coaches
Game summaries
UCLA
In Jim Harbaugh's debut game as Stanford's new head coach, UCLA's offense amassed 600 yards and overwhelmed the Cardinal defense in the second half, as UCLA won handily. UCLA's Ben Olson threw 5 touchdown passes and finished 16–29 for 286 yards while fellow Bruin Kahlil Bell led the running game by gaining 195 yards on 19 carries. This individual performance was the 18th best single game rushing performance in Bruin football history, placing Bell right after Freeman McNeil, who had 197 yards against Stanford in 1979, and right before Gaston Green, who had 194 yards against Tennessee in 1985.
San Jose State
Oregon
Arizona State
USC
The struggling Stanford Cardinal continued Pac-10 play by playing the USC Trojans in the Los Angeles Memorial Coliseum, where the Trojans had not lost in six seasons. In a major upset, USC stumbled at home to the 41-point underdog Cardinal, losing 24–23.
Harbaugh made headlines prior to the season by claiming 2007 would be USC Coach Pete Carroll's last year with the Trojans before departing to the NFL, drawing a terse rebuke from Carroll; Harbaugh later called the 2007 Trojans one of the best teams in the history of college football at Pac-10 Media Day, reiterating the position in the week before their game. However, there were no hard feelings between the coaches. The two kept in cordial phone contact and Carroll made light of Harbaugh's comments several times during the season.
Stanford's starting quarterback, redshirt senior T. C. Ostrander, suffered a seizure on the afternoon of September 30, one day after the game against Arizona State; he was released from Stanford Hospital after a few hours, but as a precautionary measure he was held out of the game against USC. The starting quarterback position fell to Tavita Pritchard, a redshirt sophomore with three passes in his college career. Stanford was also without two other key starters: defensive lineman Ekom Udofia (ankle) and offensive lineman Allen Smith (knee). On October 3, it was announced that USC running back C. J. Gable, who was averaging a team-best 11 yards a carry, would undergo season-ending abdominal surgery to correct a nagging sports hernia that had limited his ability since the previous season; because he had only played in the first three games, he would seek a medical redshirt season. Gable's fellow running back, Stafon Johnson, was also held out of the game due to a foot bruise suffered the previous week.
Stanford was the last team to beat USC at the Coliseum, doing so on September 29, 2001 under Tyrone Willingham (who had since become the coach of Washington) against then-first-year coach Carroll. By game week, the line for the game favored the Trojans by 39.5 points, and reached 41 points by gametime. The loss ended multiple USC streaks, including a five-game win streak against Stanford and a 35-game home winning streak. For sportsbooks, the loss to a 41-point underdog marked the biggest upset in their history.
There were a few positive efforts for the Trojans: Tight end Fred Davis caught five passes for a career-best 152 yards, including a 63-yard touchdown; and nose tackle Sedrick Ellis had three sacks. However, there were many more errors and substandard performances: quarterback John David Booty, who broke a bone in the middle finger of his throwing hand in the first half, had four passes intercepted in the second half. The offensive line had been suffering since losing two starters in one play during the previous week's game at Washington, but the effect was severe against Stanford; the offensive line gave up four sacks, one more than the Trojans had surrendered all season, and USC gained only 95 yards rushing. Key receiver Patrick Turner dropped several passes, the defense gave up 17 points in the fourth quarter and USC had an extra-point attempt blocked, a point which became a crucial difference. Like their previous game against Washington, USC out-gained Stanford by 224 yards (459 to 235) but made many crucial turnovers and penalties. In the press conference following the game, Carroll summarized his concerns: "It's real clear that we have fallen out of line with our philosophy that has guided this program for years; we're turning the ball over too much."
Opinions in the sports press ranged from proclaiming the end of USC's era of dominance in college football to calling the loss a major, but not fatal, setback to any hopes for a Trojans run at the national championship. The Trojans fell to No. 10 in the AP Poll; however, USC fell only to No. 7 in both the Coaches Poll and Harris Poll, the two human components in determining whom the BCS chooses for the National Championship Game. As a result, USC remained in outside title contention, with upcoming games against consensus-No. 2 California and top-10 Oregon. The upset landed the Trojans in ESPN.com's Bottom 10.
In an interview the following month, Carroll assessed the mistakes that led to the loss as his own:
At the end of the regular season, Sports Illustrated chose Stanford's upset of USC as the second "Biggest Upset of 2007" after Division I FCS Appalachian State's upset of No. 5 Michigan.
TCU
A week after defeating top-ranked USC, Stanford welcomed TCU to Stanford Stadium for homecoming. It was also the first meeting between the two schools. The Cardinal found themselves with a double-digit lead late in the second half of this game, as they led the Horned Frogs 31–17 with 3:54 remaining in the 3rd quarter. TCU's Andy Dalton then hit Jimmy Young for a 70-yard touchdown and Aaron Brown for a 2-yard touchdown pass on fourth down to tie the game at 31. Stanford kicked a field goal with 7:22 remaining to re-take the lead, 34–31. Brown gave TCU its first lead of the game with a 2-yard touchdown run with 4:13 left. An intentional safety by TCU in the final seconds made the final score 38–36. Dalton ended the game with a career-high 344 passing yards.
Arizona
Oregon State
Washington
Washington State
Notre Dame
The Fighting Irish concluded their season on a high note, winning its second straight game and its second win on the road. Notre Dame's Robert Hughes ran for 136 yards and the go-ahead 6-yard touchdown with 6:06 remaining in the 4th quarter to help the Irish beat the Cardinal 21–14. The Irish's Jimmy Clausen went 19–32 for 196 yards and one touchdown. The Cardinal missed 4 field goals and turned the ball over twice. Notre Dame, meanwhile, committed 4 turnovers, including 3 fumbles and an interception.
Notre Dame almost added another score on what would have been a spectacular finish to the half. Notre Dame's David Bruton intercepted Stanford quarterback Tavita Pritchard's last-play heave at the 3-yard line and began a three-lateral return to the end zone that was called back on a personal foul on Notre Dame defensive lineman Trevor Laws. Irish safety Tom Zbikowski ran the final 30 yards after a lateral from Darrin Walls, and the only thing missing was the band on the field, as it was 25 years earlier when California shocked Stanford with The Play.
California
Stanford led Cal for the entirety of the 110th Big Game, winning 20–13 and regaining The Axe after Cal had held onto it for five straight years; it was Cal coach Jeff Tedford's first loss to the Cardinal, something Harbaugh's two predecessors had failed to achieve. Stanford confused Cal on defense by alternating quarterbacks T. C. Ostrander and Tavita Pritchard in offensive series. The Golden Bears's Nate Longshore was 22/47 with 252 yards, 1 touchdown, and two interceptions, throwing one at the 7-yard line with 2:10 remaining. Cal's Justin Forsett ran for 96 yards on 19 carries. The Golden Bears's Robert Jordan caught 4 receptions for 99 yards, including a 46-yard touchdown reception. Despite injuries that had depleted the Cardinal's backfield to the point where one player was converted to a running back, Stanford rushed for 120 yards. California's offense was limited to one touchdown and a field goal, Cal's worst offensive performance of the season. Longshore continued to struggle in the second half, leading the offense to only one field goal after halftime. Cal committed 10 penalties for 118 yards.
References
Stanford
Stanford Cardinal football seasons
Stanford Cardinal football
|
4194138
|
https://en.wikipedia.org/wiki/Parliamentary%20Information%20Technology%20Committee
|
Parliamentary Information Technology Committee
|
The Parliamentary Information Technology Committee (PITCOM) was a United Kingdom Parliament Associate Parliamentary Group set up "to address the public policy issues generated by IT and its application across the UK economy, public and private".
It was formed in January 1981 by a merger of the All-Party IT Committee and the Parliamentary Computer Forum. Its constitution and terms of reference changed over time to fit the changing rules for registered All-Party Groups (e.g. that all officers be members of the House of Commons or the House of Lords, that groups be reconstituted after a General Election, etc.). On 18 July 2011 it merged with the All-Party Group on the Digital Economy to form "The Parliamentary Internet and ICT Forum" (PICTFOR).
Objectives
The objectives stated in the first edition of the PITCOM Journal (published 1982–1999) were:
1) To promote among Members of Parliament and their advisers an informed awareness of the potential and the limitations of the microelectronics, computing, communications and information handling technologies; their industrial, economic and social impact; and the actions necessary to maximise the industrial, economic and social advantages which these technologies make possible.
2) To analyse in consultation with industry, current and future problems in the field of information and computer technologies and to consult with suppliers, users and responsible organisations concerned and to arrange meetings, presentations, seminars and visits so as to promote continuity of analysis and policy in this field.
3) To provide a meeting place for informal, off-the-record exchanges of information, ideas and opinions on subjects of mutual concern between Members of Parliament, their advisers and members of the microelectronics, computing, communications and information handling industries.
History
The founding chairman was Ian Lloyd MP, the vice-chairmen were Gwilym Roberts MP, Michael Marshall MP and Philip Virgo. The Treasurers were Gary Waller MP and David Mathieson. The Secretaries were Lord Lloyd of Kilgerran and Brian Murphy. The Membership Secretary was Richard Marriott.
Ian (later Sir Ian) Lloyd MP chaired PITCOM until 1987.
In that period PITCOM established a pattern of organising half a dozen evening meetings a year on current political topics, plus an annual high-profile exhibition or event. The first of the latter, in 1981, was a week-long exhibition on computer-based aids for the disabled, opened by Sir George Young MP, then Parliamentary Under-Secretary of State at the Department of Health and Social Security. The following year PITCOM organised a more ambitious event, on Computers in Schools. Relays of children from over 30 schools manned 26 systems in the Upper Waiting Room of the House of Commons, visited by over 120 MPs. Also in 1982 PITCOM organised a major seminar on "The Freedom of Broadcasting". This was addressed by, among others, the Home Secretary (Rt Hon W Whitelaw MP) and Mrs Mary Whitehouse. It also included the first public political discussions in the UK on the new cable TV technologies that were expected to transform the world of broadcasting.
In 1984, over dinner after a PITCOM meeting on the effects of piracy on the nascent computer games industry, it was agreed that "something must be done" but that PITCOM should not compromise its neutral position by taking a lead. Those round the table decided to support the formation of a British Computer Society copyright committee to look at the issues. That committee met once and delegated a sub-group to report back on what should be done. The sub-group never reported back; instead the participants formed the Federation Against Software Theft as a company limited by guarantee and organised the campaign that led to the Copyright (Computer Software) Amendment Act 1985. This is believed to have been the shortest time from the start of a campaign to legislation on the statute book since the 1930s. Many PITCOM members helped expedite the process.
In 1986 IBM loaned the main auditorium of its South Bank Centre to PITCOM for a day-long seminar on IT Skills Shortages, organised with the assistance of the National Computing Centre and the IT Skills Agency.
In 1987 Michael (later Sir Michael) Marshall MP took over as chairman. He continued with the same formula of meetings and events for several years but also added overseas study tours, the first of which was to Texas in 1987. In 1989 PITCOM hosted the launch of the Women into IT Campaign. This had seedcorn funding from the DTI proportionate to the funding from industry. Over the period 1989–1994 the campaign team used £500,000 from the DTI to leverage over £2 million from industry to organise careers events and advice and a kite-marking service for returner programmes. During that period the proportion of girls applying for IT-related degree courses rose from barely 10% to over 25%, and the number employed in IT also rose significantly. Many PITCOM members were involved in the campaign and contributed to its success.
In 1994 PITCOM organised fringe meetings at the main party conferences and also organised the re-launch of EURIM (the European Informatics Group, now retitled "The Digital Policy Alliance") to organise policy studies and secure action where it found consensus. It was agreed that EURIM should be politically, financially and organisationally independent from PITCOM, but companies would not be allowed to join EURIM unless they were also members of PITCOM. EURIM would also report quarterly to the PITCOM Council for a discussion of priorities and co-operation. The reasons for the separation were partly to do with the rules for all-party groups (e.g. the personal liability of officers, who had to be MPs or Peers) and partly because EURIM was expected to work to secure action on its recommendations while PITCOM was expected to be strictly neutral, even where it found consensus. The requirement for EURIM members to join PITCOM was quickly dropped because of complaints by companies with no London-based staff. The reporting requirement was not dropped until 2005.
In 1995 PITCOM had a very successful study tour of the United States (New Jersey, New York and Washington) to look at "The Politics of Multi-Media" during the run-up to the "reform" of the Federal Communications Commission to handle converged technologies and the digital age. In 1996 PITCOM visited Canada for the first time.
In 1997 John McWilliam MP (a Deputy Speaker) took over as chairman. He was also a Director of EURIM and decided which activities should be routed through EURIM and which through PITCOM. Thus the meetings to help organise the scrutiny of the legislation that created Ofcom were run through EURIM rather than PITCOM. During John's period of office PITCOM organised a study tour of Sweden, Finland and Germany, a second tour of Canada, a visit to Paris and tours of Japan and California. These were invaluable in helping put IT initiatives and arguments into international context.
In 2004 Christine Stewart Munro took over as Secretary of PITCOM from Frank Richardson who had been involved in the creation of PITCOM and been secretary since 1984.
In 2005 Andrew Miller MP became chairman, and in 2006 PITCOM celebrated its 25th anniversary. He instituted an annual competition for schools, organised by e-Skills, intended to interest not only the children but also their constituency MPs. In this period PITCOM opened up relations with the Internet Governance Forum, sponsoring MPs to attend its meetings and helping organise UK inputs and reports back.
The Rt Hon Alun Michael MP became chairman in 2011 by which time there was common agreement on the need to rationalise the growing number of all-party groups addressing IT related issues. Discussions were opened with other relevant groups and with EURIM (to formalise the de facto division of labour). The merger to form PICTFOR was the first tangible stage in that process.
External links
PITCOM
PICTFOR, the Parliamentary Internet, Communications and Technology Forum
1981 establishments in the United Kingdom
Information Technology Committee
Information technology organisations based in the United Kingdom
Fourteenth Air Force
The Fourteenth Air Force (14 AF; Air Forces Strategic) was a numbered air force of the United States Air Force Space Command (AFSPC). It was headquartered at Vandenberg Air Force Base, California.
The command was responsible for the organization, training, equipping, command and control, and employment of Air Force space forces to support operational plans and missions for U.S. combatant commanders and their subordinate components and was the Air Force Component to U.S. Strategic Command for space operations.
Established on 5 March 1943 at Kunming, China, 14 AF was a United States Army Air Forces combat air force activated in the Asiatic-Pacific Theater of World War II. It primarily fought in China. After World War II Fourteenth Air Force subsequently served Air Defense Command, Continental Air Command, and the Air Force Reserve (AFR).
14 AF was commanded by Major General Stephen N. Whiting. Its Command Chief Master Sergeant was Chief Master Sergeant Patrick F. McMahon.
On 20 December 2019, the USAF's Fourteenth Air Force was redesignated as the United States Space Force's Space Operations Command (SPOC). On 21 October 2020, Headquarters Space Operations Command was redesignated back to Fourteenth Air Force and inactivated.
History
World War II
1st American Volunteer Group
With the United States entry into World War II against the Empire of Japan in December 1941, Claire Chennault, the commander of the American Volunteer Group (AVG) (known as the Flying Tigers) of the Chinese Air Force was called to Chungking, China, on 29 March 1942, for a conference to decide the fate of the AVG. Present at the conference were Chiang Kai-shek; his wife, Madame Chiang Kai-shek; Lt. Gen. Joseph W. Stilwell, commander of all U.S. forces in the China Burma India Theater; and Colonel Clayton L. Bissell, who had arrived in early March. Bissell was General Henry H. 'Hap' Arnold's choice to command the USAAF's proposed combat organization in China.
As early as 30 December 1941, the U.S. War Department in Washington, D.C., had authorized the induction of the Flying Tigers into the U.S. Army Air Forces (USAAF). Chennault was opposed to inducting the Flying Tigers into the Army. Stilwell and Bissell made it clear to both Chennault and Chiang that unless the AVG became part of the U.S. Army Air Force, its supplies would be cut off. Chennault agreed to return to active duty but he made it clear to Stilwell that his men would have to speak for themselves.
Chiang Kai-shek finally agreed to induction of the AVG into the USAAF, after Stilwell promised that the fighter group absorbing the induction would remain in China with Chennault in command. With the situation in Burma rapidly deteriorating, Stilwell and Bissell wanted the AVG dissolved by 30 April 1942. Chennault, wanting to keep the Flying Tigers going as long as possible, proposed the group disband on 4 July, when the AVG's contracts with the Nationalist Chinese government expired. Stilwell and Bissell accepted.
China Air Task Force
Chennault was recalled to active duty in the USAAF on 15 April 1942 in the grade of Major General. He was told that he would have to be satisfied with command of a China Air Task Force of fighters and bombers as part of the Tenth Air Force. Its mission was to defend the aerial supply operation over the Himalayan mountains between India and China – nicknamed the Hump – and to provide air support for Chinese ground forces. Bissell had been promoted to brigadier general with one day's seniority over Chennault in order to command all American air units in China as Stilwell's air commander (in August 1942 he became commanding general of the Tenth Air Force). Friction developed when Chennault and the Chinese government were disturbed by the possibility that Chennault would no longer control combat operations in China. However, when Tenth Air Force commanding general Lewis Brereton was transferred to Egypt on 26 June, Stilwell used the occasion to announce that Chennault would continue to command all air operations in China.
The CATF had 51 fighters in July 1942: 31 Curtiss 81A-1 (export Tomahawks) and P-40B Tomahawks, and 20 P-40E Warhawks. Only 29 were flyable. The 81A-1s and P-40Bs were from the original 100 fighters China had purchased for use by the Flying Tigers; the P-40E Warhawks had been flown from India to China in May 1942 as part of the 23rd Fighter Group, attached to the AVG to gain experience and provide continuity to the takeover of operations of the AVG. Both fighters were good medium-altitude day fighters, with their best performance between 15,000 and 18,000 feet, and they were excellent ground-strafing aircraft.
The 11th Bombardment Squadron (Medium), consisting of the seven B-25s flown in from India, made up the bomber section of Chennault's command. These seven B-25C Mitchells were the remnants of an original 12 sent from India. Four were lost on a bombing mission en route and a fifth developed mechanical problems such that it was grounded and used for spare parts.
The AVG was disbanded on 4 July 1942, simultaneous with the activation of the 23rd FG. Its personnel were offered USAAF commissions, but only five of the AVG pilots accepted them. The remainder, many disgruntled with Bissell, became civilian transport pilots in China, went back to America to other jobs, or joined or rejoined the other military services and fought elsewhere in the war. One example was Fritz Wolf, who returned to the Navy with the rank of lieutenant (senior grade) and was assigned as a fighter pilot instructor at the Jacksonville Naval Air Station in Florida.
The 23rd Fighter Group was formed with the 74th, 75th and 76th Fighter Squadrons, its table of organization rounded out by the transfer of men and P-40s from two squadrons of the 51st Fighter Group in India.
A fourth fighter squadron for the 23rd Group was obtained by subterfuge. In June and July 1942, Chennault got the Tenth Air Force to relocate the 51st FG's 16th Fighter Squadron, commanded by Major John Alison, to his main base in Kunming, China, to gain combat experience. Chennault took them into the CATF – and never returned them.
On 19 March 1943, the CATF was disbanded and its units made part of the newly activated Fourteenth Air Force, with Chennault, now a major general, still in command. In the nine months of its existence, the China Air Task Force shot down 149 Japanese planes, plus 85 probables, with a loss of only 16 P-40s. It had flown 65 bombing missions against Japanese targets in China, Burma and Indochina, dropping 311 tons of bombs and losing only one B-25 bomber.
The members of Fourteenth Air Force and the US press adopted the name Flying Tigers after the AVG's dissolution; the 23d Fighter Group in particular was often known by the same nickname.
Fourteenth Air Force
The Fourteenth Air Force official web site says:
After the China Air Task Force was discontinued, the Fourteenth Air Force (14 AF) was established by the special order of President Roosevelt on 10 March 1943. Chennault was appointed the commander and promoted to Major General. The "Flying Tigers" of 14 AF (who adopted the "Flying Tigers" designation from the AVG) conducted highly effective fighter and bomber operations along a wide front that stretched from the bend of the Yellow River and Tsinan in the north to Indochina in the south, from Chengtu and the Salween River in the west to both East and South China Seas and the island of Formosa in the east. They were also instrumental in supplying Chinese forces through the airlift of cargo across "The Hump" in the China-Burma-India theater. By the end of World War II, 14 AF had achieved air superiority over the skies of China and established a ratio of 7.7 enemy planes destroyed for every American plane lost in combat. Overall, military officials estimated that over 4,000 Japanese planes were destroyed or damaged in the China-Burma-India theater during World War II. In addition, they estimated that air units in China destroyed 1,100,000 tons of shipping, 1,079 locomotives, 4,836 trucks and 580 bridges. The United States Army Air Forces credits 14 AF with the destruction of 2,315 Japanese aircraft, 356 bridges, 1,225 locomotives and 712 railroad cars.
Chinese-American Composite Wing
In addition to the core Fourteenth Air Force (14 AF) structure, a second organization, the Chinese-American Composite Wing (CACW), existed as a combined 1st Bomber, 3rd Fighter, and 5th Fighter Group with pilots from both the United States and the Republic of China. U.S. service personnel destined for the CACW entered the China theater in mid-July 1943. Aircraft assigned to the CACW included later series P-40 Warhawks (with the Nationalist Chinese Air Force blue sky and 12-pointed white sun national insignia, rudder markings, and squadron/aircraft numbering) and B-25 Mitchell medium bombers. In late 1944, USAAF-marked P-51 Mustangs began to be assigned to CACW pilots – first the P-51B and C series, followed in early 1945 by the D and K series. The latter was a reduced-weight version sharing many of the external characteristics of the D series, including the bubble canopy. All U.S. pilots assigned to the CACW were listed as rated pilots in the Chinese Air Force and were authorized to wear the pilot's wings of both nations. One of the known Chinese pilots was Captain Ho Weng Toh of Singapore, the last known surviving Flying Tigers member in Asia; Captain Ho flew the B-25 bomber as part of the 1st Bomber Group.
Members of the 3rd FG were honored with a Distinguished Unit Citation (now Presidential Unit Citation) for a sustained campaign: Mission "A" in the late summer of 1944. Mission "A" halted a major Japanese ground offensive and resulted in the award of individual decorations for several of the group's pilots for the planning and execution of the mission.
Most CACW bases existed near the boundary of Japanese-Occupied China and one "Valley Field" existed in an area within Japanese-held territory. Specific field locations included Hanchung, Ankang, Hsian, Laohokow, Enshih, Liangshan, Peishyi, Chihkiang, Hengyang, Kweilin, Liuchow, Chanyi, Suichwan, and Lingling. Today, the 1st, 3rd and 5th Groups of CACW are still operating in Taiwan, reorganized as 443rd, 427th and 401st Tactical Fighter Wings of the Republic of China Air Force.
World War II Units
68th Composite Wing
Constituted as 68th Fighter Wing, 9 August 1943. Redesignated 68th Composite Wing, December 1943. Inactivated 10 October 1945.
23d Fighter Group (Flying Tigers) (P-40, P-51), July 1942 – December 1945
308th Bombardment Group (B-24), March 1943 – February 1945
69th Composite Wing
Constituted as 69th Bombardment Wing, 9 August 1943. Redesignated 69th Composite Wing, December 1943. Reassigned to Tenth Air Force, August 1945.
51st Fighter Group (P-40, P-38, P-51), October 1943 – August 1945
341st Bombardment Group (Medium) (B-25), January 1944 – August 1945
312th Fighter Wing
Constituted as 312th Fighter Wing, 7 March 1944. Reassigned to the United States, December 1945.
81st Fighter Group (P-40, P-47), May 1944 – December 1945
33d Fighter Group (transferred from Tenth Air Force) (P-38, P-47), April 1944 – September 1944
311th Fighter Group (transferred from Tenth Air Force) (A-36, P-51), August 1944 – December 1945
Chinese-American Composite Wing (Provisional) (1943–1945)
3d Fighter Group (P-40, P-51)
7th Fighter Squadron
8th Fighter Squadron
28th Fighter Squadron
32d Fighter Squadron
5th Fighter Group (P-40, P-51)
17th Fighter Squadron
26th Fighter Squadron
27th Fighter Squadron
29th Fighter Squadron
1st Bombardment Group (Medium) (B-25)
1st Bombardment Squadron
2d Bombardment Squadron
3d Bombardment Squadron
Other assigned units:
402d Fighter Group: May–July 1943. Assigned but never equipped.
476th Fighter Group: May–July 1943. Assigned but never equipped.
341st Bombardment Group (transferred from Tenth Air Force) (B-25): January 1944 – November 1945
443d Troop Carrier Group (transferred from Tenth Air Force) (C-47/C-54): August–November 1945
426th Night Fighter Squadron (transferred from Tenth Air Force) (P-61)
427th Night Fighter Squadron (transferred from Tenth Air Force) (P-61)
John Birch
American missionary John Birch was recommended to Chennault for intelligence work by Jimmy Doolittle, whom he had assisted when Doolittle's crew landed in China after the raid on Tokyo. Inducted into the Fourteenth on its formation, and later seconded to the OSS, he built a formidable network of Chinese informants to provide the Flying Tigers with intelligence on Japanese land and sea military positions and the disposition of shipping and railways. Ten days after the war ended, he was killed by Chinese communists while attempting to inspect a downed plane they had been assigned to guard, which led to his being chosen as the namesake of the John Birch Society. The incident is recounted in China: The Remembered Life, the memoir of Paul Frillmann, who had started the war as chaplain for the Flying Tigers.
Air Defense Command
In March 1946, USAAF Chief General Carl Spaatz undertook a major re-organization of the postwar USAAF that included the establishment of Major Commands (MAJCOMs), which would report directly to HQ United States Army Air Forces. Continental Air Forces was inactivated, and Tenth Air Force was assigned to the postwar Air Defense Command in March 1946 and subsequently to Continental Air Command (ConAC) in December 1948, being primarily concerned with air defense.
The command was re-activated on 24 May 1946 at Orlando Army Air Base (later, AFB), Florida. It was originally assigned to provide air defense over a wide region of the Southeast United States along the border of North Carolina, Tennessee, Arkansas and Oklahoma, including Texas south to the Rio Grande. In addition to the command and control of the active Air Force interceptor and radar units in its region, it also became the command organization for the Air Force Reserve and state Air National Guard units.
By 1949, with the establishment of the Western Air Defense Force (WADF) and Eastern Air Defense Force (EADF), the air defense mission of the command was transferred primarily to the EADF, leaving Fourteenth AF free to focus on its reserve training tasks. It was then reassigned to Continental Air Command and moved to Robins AFB, Georgia, in October 1949.
During the Korean War, 14 AF participated in the mobilization of Air National Guard and Air Force Reserve units and individuals from its headquarters at Robins Air Force Base (AFB), Georgia. After the Korean War, the reserve wings of 14 AF participated in various airlift operations, such as Operation SIXTEEN TONS, Operation SWIFT LIFT and Operation READY SWAP. 14 AF was inactivated on 1 September 1960.
Fourteenth Air Force was activated on 20 January 1966 at Gunter AFB, Alabama, as part of Air Defense Command upon the inactivation of that command's Air Defense Sectors. Its area of responsibility was essentially the same as its 1948 region, shifted slightly west to include New Mexico; eastern North Carolina and South Carolina came under the newly activated First Air Force.
On 16 January 1968 Air Defense Command was redesignated Aerospace Defense Command (ADCOM) as part of a restructuring of USAF air defense forces. The command was redesignated Fourteenth Aerospace Force on 1 July 1968 and moved to Ent AFB, Colorado, absorbing the resources of the 9th Aerospace Defense Division. As part of ADCOM's new emphasis on defense against intercontinental ballistic missiles (ICBMs) and submarine-launched ballistic missiles (SLBMs), the mission of the 14th Aerospace Force was to detect foreign missile launches, track missiles and satellites in space, launch space vehicles, maintain a database of all man-made objects in space, and perform anti-satellite actions. Its former region of the southeast was reassigned to the 31st and 32d Air Divisions.
As the 14th Aerospace Force, the command supervised the Ballistic Missile Early Warning System (BMEWS) network of Radars along the Arctic Circle. Additional radars came under the command's control for the sole purpose of detecting, identifying, tracking and sending back to NORAD data on any SLBM. All man-made objects became numbers in the USAF SPACETRACK network operated by the 14th Aerospace Force.
Air Force Reserve
Budget reductions and reorganizations within ADCOM brought many changes and reductions in aerospace resources, along with almost continual turmoil in the command structure of 14 AF during the 1970s. In 1976 the headquarters of the 14th Aerospace Force was inactivated; the organization moved to Dobbins AFB, Georgia, where it was activated as Fourteenth Air Force (Reserve).
The mission of the command at Dobbins changed to the supervision, management and support of Air Force Reserve airlift forces for Military Airlift Command, and it participated in such missions as Operation Just Cause. It was redesignated Fourteenth Air Force on 1 December 1985 and inactivated on 1 July 1993.
Air Force Space Command
On 1 July 1993, 14 AF returned to its former space role and became a Numbered Air Force for Air Force Space Command, responsible for performing space operations. In 1997, 14 AF established the Space Operations Center at Vandenberg AFB in California for the 24-hour command and control of all space operations resources. In 2002, 14 AF became the Air Force space operational component of United States Strategic Command.
As the Air Force's sole Numbered Air Force for space and its concurrent United States Strategic Command mission of Joint Space Operations, the operational mission of 14 AF included space launch from the east and west coasts, satellite command and control, missile warning, space surveillance and command and control of assigned and attached joint space forces. The overall mission was to control and exploit space for global and theater operations, thereby ensuring U.S. warfighters were supported by the best space capabilities available.
In 2005, 14 AF officially opened up its newly renovated operations center. The new command and control capabilities of the Joint Space Operations Center ensured unity of effort for all space capabilities supporting joint military operations around the globe.
On 20 December 2019, Air Force Secretary Barbara Barrett redesignated 14 AF as Space Operations Command (SPOC), part of the newly established U.S. Space Force. On 21 October 2020, Headquarters Space Operations Command was redesignated back to Fourteenth Air Force and inactivated.
14th Air Force's component wings and groups in 2019 were:
614th Air and Space Operations Center, operates Joint Space Operations Center
21st Space Wing, Peterson Air Force Base, Colorado
30th Space Wing, Vandenberg Air Force Base, California
45th Space Wing, Patrick Air Force Base, Florida
50th Space Wing, Schriever Air Force Base, Colorado
460th Space Wing, Buckley Air Force Base, Colorado
Lineage
Established as China Air Task Force (CATF) **, 14 July 1942
Activated on 14 July 1942 absorbing equipment and personnel of 1st AVG
Inactivated on 19 March 1943
Established as Fourteenth Air Force on 5 March 1943
Activated on 19 March 1943 absorbing equipment and personnel of CATF
Inactivated on 6 January 1946
Activated on 24 May 1946
Inactivated on 1 September 1960.
Activated on 20 January 1966
Redesignated Fourteenth Aerospace Force on 1 July 1968.
Inactivated on 1 October 1976.
Redesignated Fourteenth Air Force (Reserve), and activated on 8 October 1976
Redesignated Fourteenth Air Force on 1 December 1985.
Inactivated on 1 July 1993.
Activated 1 July 1993
* Authorized as a "Special Air Unit" by President Roosevelt in 1941 and equipped with United States equipment, but not officially affiliated with the United States military. The 1st American Volunteer Group was formally disbanded on 4 July 1942. Each member was offered a commission in the United States Army Air Forces. Some accepted the offer, once again put on their American uniforms, and remained in China. Others later returned to the ranks of the Army, Navy, or Marine Corps but fought in other areas of the world. Eighteen accepted offers to fly for the China National Aviation Corporation. The equipment and those members of the 1st AVG choosing to join the USAAF were absorbed into the United States Army Air Forces China Air Task Force on 14 July 1942 as the 23d Fighter Group.
** Assigned to Tenth Air Force.
Assignments
Assigned to U.S. Army Forces, China-Burma-India Theater, 10 March 1943
Assigned to U.S. Forces, China Theater, about 24 October 1944
Air Defense Command, 20 January 1946
Continental Air Command, 1 December 1948
Air (later, Aerospace) Defense Command, 1 July 1968
Absorbed Resources of 9th Aerospace Defense Division
Air Force Reserve, 8 October 1976
Air Force Space Command, 1 July 1993 – 20 December 2019
Components
Air Divisions
8th Air Division, 1 May 1949 – 1 August 1950
9th Air Division, 1 May 1949 – 1 August 1950
31st Air Division, 1 April 1966 – 1 July 1968
32d Air Division, 1 April 1966 – 1 July 1968
Wings
71st Missile Warning Wing
Reassigned from 9th Aerospace Defense Division, 1 July 1968
Stationed at Ent AFB, Colorado
Moved to McGuire AFB, New Jersey, 21 July 1969
Inactivated, 30 April 1971
73d Aerospace Surveillance Wing
Reassigned from 9th Aerospace Defense Division, 1 July 1968
Stationed at Ent AFB, Colorado
Moved to Tyndall AFB, Florida and inactivated 30 April 1971
4756th Air Defense Wing (Training)
Reassigned from 73d Air Division, 1 April 1966
Stationed at Tyndall AFB, Florida
Discontinued, 1 January 1968
4780th Air Defense Wing (Training)
Reassigned from 73d Air Division, 1 April 1966
Stationed at Perrin AFB, Texas
Reassigned to Tenth Air Force, 1 July 1968
Groups
10th Aerospace Defense Group
Reassigned from 9th Aerospace Defense Division, 1 July 1968
Stationed at Vandenberg AFB, California
Inactivated on 1 November 1979
12th Missile Warning Group
Reassigned from 71st Missile Warning Wing, 30 April 1971
Stationed at Thule Air Base, Greenland
Reassigned to 21st Air Division, 1 October 1976
Squadrons
4751st Air Defense Squadron (Missile)
Reassigned from 4756th Air Defense Wing, 15 June 1966
Stationed at Tyndall AFB, Florida
Reassigned to Air Defense Weapons Center (ADC), 1 January 1968
14th Missile Warning Squadron
Activated at Laredo AFB, Texas, 8 July 1972
Moved to MacDill AFB, Florida, 30 June 1975
Reassigned to ADCOM, 1 October 1976
16th Surveillance Squadron
Reassigned from 73d Aerospace Surveillance Wing, 30 April 1971
Stationed at Shemya AFS, Alaska
Reassigned Alaskan ADCOM Region, 1 October 1976
18th Surveillance Squadron
Reassigned from 73d Aerospace Surveillance Wing, 30 April 1971
Stationed at Edwards AFB, California
Inactivated 1 October 1975
19th Surveillance Squadron
Reassigned from 73d Aerospace Surveillance Wing, 30 April 1971
Stationed at Diyarbakir, Turkey
Reassigned to 21st Air Division, 1 October 1976
20th Surveillance Squadron
Reassigned from 73d Aerospace Surveillance Wing, 30 April 1971
Stationed at Eglin AFB, Florida
Inactivated 1 October 1975
Reassigned to 20th Air Division, 1 October 1976
17th Radar Squadron
Activated 1 September 1972 at Ko Kha ASN, Thailand
Inactivated 31 May 1976
Stations
Kunming, China, 10 March 1943
Peishiyi, China, 7 August – 15 December 1945
Fort Lawton, Washington, 5–6 January 1946
Orlando AB, Florida, 24 May 1946
Robins AFB, Georgia, October 1949
Gunter AFB, Alabama, 1 April 1966
Colorado Springs, Colorado, 1 July 1968
Dobbins AFB (later, ARB), Georgia, 8 October 1976
Vandenberg AFB, California, 1 July 1993
List of commanders
See also
Combined Force Space Component Command
South-East Asian Theatre of World War II
Burma Campaign
Operation Ichi-Go
RAF Third Tactical Air Force
Notes
References
Sources
Cornett, Lloyd H. and Johnson, Mildred W. A Handbook of Aerospace Defense Organization 1946 – 1980, Office of History, Aerospace Defense Center, Peterson Air Force Base, Colorado
Maurer, Maurer (1983). Air Force Combat Units of World War II. Maxwell AFB, Alabama: Office of Air Force History.
Ravenstein, Charles A. (1984). Air Force Combat Wings: Lineage and Honors Histories 1947–1977. Washington, DC: Office of Air Force History.
Rust, Kenn C. and Stephen Muth. Fourteenth Air Force Story...in World War II. Temple City, California: Historical Aviation Album, 1977.
Author unknown. This is the Fourteenth Air Force. Mitchel Air Force Base, New York: Office of Information Services, Continental Air Command, 1957.
Author unknown. A Short History of the 14th Air Force Flying Tigers, 1943–1959. Robins Air Force Base, Georgia: Headquarters Fourteenth Air Force (CONAC), 1959.
External links
Fourteenth Air Force Factsheet
Fourteenth Air Force official website
Annals of the Flying Tigers
Fourteenth Air Force in China 1943–1945
14th Air Force
Night Fighter by J R Smith – a first-hand account of a P-61 radar observer in World War II China
Military units and formations established in 1943
1943 establishments in China
Military units and formations in California
Military units and formations disestablished in 2019
Biswanath Mukherjee
Biswanath Mukherjee is an Indian-American Distinguished Professor of Computer Science at the University of California, Davis, and a Fellow of the IEEE.
Early life
Mukherjee obtained his bachelor's degree in technology with honors from the Indian Institute of Technology Kharagpur in 1980 and received his Ph.D. from the University of Washington in 1987. That same year he joined the Department of Computer Science at the University of California, Davis, where he became a Professor in 1995 and a Distinguished Professor in 2011. From 1997 to 2000 he served as chair of the Computer Science Department. During 1995–2000 he held the Child Family Professorship at UC Davis. To date, he has supervised 77 PhDs to completion and currently mentors six advisees, mainly PhD students.
Career
He has served on numerous conference committees, including as General Co-Chair of the IEEE/OSA Optical Fiber Communications (OFC) Conference in 2011, Technical Program Co-Chair of OFC 2009, and Technical Program Chair of the IEEE INFOCOM '96 conference. He is Editor of Springer's Optical Networks book series and has served on eight journal editorial boards, most notably IEEE/ACM Transactions on Networking and IEEE Network. In addition, he has guest-edited special issues of Proceedings of the IEEE, IEEE/OSA Journal of Lightwave Technology, IEEE Journal on Selected Areas in Communications, and IEEE Communications. He was the first elected chairman of the IEEE Communications Society's Optical Networking Technical Committee. He was a founding member of the Board of Directors (2002–2007) of IPLocks, Inc., a Silicon Valley startup acquired by Fortinet, Inc., and a founding member of the Board of Directors (2015–2018) of Optella, Inc., an optical components startup acquired by Cosemi, Inc. To date, he has served on the technical advisory boards of over a dozen startup companies, including Teknovus (acquired by Broadcom), Intelligent Fiber Optic Systems, and LookAhead Decisions. He is Founder and President of Ennetix, Inc., a startup incubated at UC Davis that develops AI-powered, cloud-based, application-centric network performance analytics and management software for improved user experience.
Major Contributions
First proposal/prototype for a network intrusion detection system (1990). L. Todd Heberlein, B. Mukherjee, et al., "A Network Security Monitor (NSM)," Proc., 1990 IEEE Symposium on Security and Privacy, pp. 296–304, Oakland, CA, May 1990.
First proposal/prototype for a filtering firewall (1990); today an $8B/yr industry. C. Kwok and B. Mukherjee, "Cut-through bridging for CSMA/CD Local Area Networks," IEEE Transactions on Communications, vol. 38, pp. 938–942, July 1990.
Many contributions and books on Optical backbone network design, IP over optical, virtualization, etc. (1991 onwards).
First proposal/prototype for dynamic bandwidth allocation in Ethernet optical access networks (2002); over 100 million units deployed worldwide. G. Kramer, B. Mukherjee, and G. Pesavento, "IPACT: A dynamic protocol for an Ethernet PON," IEEE Communications Magazine, vol. 40, no. 2, pp. 74–80, Feb. 2002.
First proposal to integrate optical and wireless networks (2007). Suman Sarkar, Sudhir Dixit, and Biswanath Mukherjee, "Hybrid Wireless-Optical Broadband Access Network (WOBAN): A Review of Relevant Challenges," IEEE/OSA Journal of Lightwave Technology, vol. 25, no. 11, Nov. 2007.
Keynote/Plenary Talks
(Since 2010)
May 13, 2019: "Rising Power of the Network User," 23rd Conference On Optical Network Design And Modelling (ONDM 2019), Athens, Greece.
December 19, 2018: "Rising Power of the Network User," 12th IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS), Indore, India.
July 19, 2018: "Rising Power of the Network User," DOE Mini-Symposium on Data over Distance: Convergence of Networking, Storage, Transport, and Software, Hanover, MD, USA.
June 20, 2018: "Rising Power of the Network User," European Conference on Networks and Communications (EuCNC) 2018, Ljubljana, Slovenia.
January 5, 2017: "Cloud Computing and Virtualization," Indian Science Congress, Tirupati, India.
January 5, 2017: "5G and IoT," Indian Science Congress, Tirupati, India.
September 24, 2016: "Disaster Resilience of Telecom Infrastructure," Chinacom, Chongqing, China.
September 13, 2016: "Disaster Resilience of Telecom Infrastructure," Reliable Networks Design and Modeling (RNDM) Conference, Halmstad, Sweden.
September 12, 2016: "Network Resilience for Massive Failures and Attacks," COST/RECODIS, Halmstad, Sweden.
March 4, 2016: "Network Adaptability to Combat Disaster Disruptions and Cascading Failures," National Communications Conference (NCC), IIT Gauhati, India.
February 29, 2016: Guest of Honor Lecture, Research Day, SRM University, Chennai, India.
March 25, 2015: "Network Adaptability from Disaster Disruption and Cascading Failures," Design of Reliable Computer Networks (DRCN) Conference, Kansas City, USA.
October 28, 2013: "Software-Defined Optical Networks (SDONs)," ETRI Annual Conference, Jeju, Korea.
April 18, 2013: "Panorama of Optical Network Survivability," Optical Network Design and Modeling (ONDM) Conference, Brest, France.
September 3, 2012: "Emerging Technology/Research Trends in Next Generation Optical Networks," Chief Guest and Inauguration Address, Faculty Development Program, SRM University, Chennai, India.
July 4, 2011: "Network Convergence in the Future Internet", at 12th International Conference on High Performance Switching and Routing (HPSR), Cartagena, Spain.
June 27, 2011: "Some “Opaque” Problems in Transparent Optical Networks", at International Conference on Transparent Optical Networks (ICTON), Stockholm, Sweden.
May 31, 2011: Brazilian Symposium on Networks and Distributed System (SBRC), Campo Grande, Brazil.
November 9, 2010: Guest Professorship Lecture: "Network Convergence in the Future Internet", at Beijing University of Posts and Telecommunications (BUPT), China.
July 12, 2010: "Convergence in the Optical Internet (COIN)", at 9th Convergence in the Optical Internet (COIN), Jeju, Korea.
Awards
May 2018: Co-winner, Charles Kao Award (named after Nobel Laureate and Fiber Optic Pioneer Charles Kao) for the Best Paper in IEEE Journal on Optical Communications and Networks (JOCN) for the paper: D. Chitimalla, K. Kondepu, L. Valcarenghi, M. Tornatore, and B. Mukherjee, “5G Fronthaul–Latency and Jitter Studies of CPRI Over Ethernet,” IEEE Journal of Optical Communications & Networking, vol. 9, no. 2, pp. 172–182, February 2017.
Dec. 2017: Co-winner, IEEE Communications Society's Transmission, Access, and Optical Systems (TAOS) Best Paper Award for IEEE Globecom 2017 Optical Networks and Systems (ONS) Symposium, for the paper: Yu Wu, Massimo Tornatore, Yongli Zhao, and Biswanath Mukherjee, “TDM EPON Fronthaul Upstream Capacity Improvement Via Classification and Sifting,” Proc., IEEE Globecom 2017, Singapore, Dec. 2017.
May 2016: Winner, UC Davis International Community Building Award.
December 2015: Winner of the IEEE Communications Society's inaugural (2015) Outstanding Technical Achievement Award "for pioneering work on shaping the optical networking area".
2009: Winner, Outstanding Senior Faculty Award, College of Engineering, UC Davis.
November 2007: Mukherjee, his PhD student Marwan Batayneh, and research collaborators Dr. Andreas Kirstädter, Dr. Dominic A. Schupke, and Dr. Marco Hoffman of Nokia Siemens Networks, Germany, won the IEEE Globecom 2007 Optical Networking Symposium Best Paper Award for the paper "Lightpath-Level Protection versus Connection-Level Protection for Carrier-Grade Ethernet in a Mixed-Line-Rate Telecom Network".
December 2006: B. Mukherjee elevated to IEEE Fellow.
February 2006: B. Mukherjee named to Child Family Professorship at UC Davis.
2004: Winner, Distinguished Graduate Mentoring Award, University of California, Davis.
2004: Supervisor, UC Davis College of Engineering Zuhair Munir Best Doctoral Dissertation Award, for Dr. Keyao Zhu's PhD dissertation: Design and Analysis of Traffic-Groomable Optical WDM Networks.
2000: Supervisor, UC Davis College of Engineering Best Doctoral Dissertation Award, for Dr. Laxman Sahasrabuddhe's PhD dissertation: Multicasting and Fault Tolerance in WDM Optical Networks.
1994: Co-winner, Paper Award, 17th National Computer Security Conference, for the paper "Testing Intrusion Detection Systems: Design Methodologies and Results from an Early Prototype."
1992: His papers "WDM-Based Local Lightwave Networks – Part I: Single-Hop Networks; Part II: Multihop Networks," in IEEE Network were nominated for IEEE and IEEE Communications Society Prize Paper Awards.
1991: Co-winner, Best Paper Award, 14th National Computer Security Conference, for the paper "DIDS (Distributed Intrusion Detection System) – Motivation, Architecture, and an Early Prototype."
1986–87: General Electric Foundation Fellowship, University of Washington.
1984–85: GTE Teaching Fellowship, University of Washington.
Works
Journal Papers and Conference Proceedings
For a complete list of works visit Google Scholar Profile.
Books Published
Biswanath Mukherjee, Optical WDM Networks, Springer, 2006.
Biswanath Mukherjee, Optical Communication Networks, New York, NY: McGraw-Hill, July 1997.
Canhui (Sam) Ou and Biswanath Mukherjee, Survivable Optical WDM Networks, Springer, 2005.
Keyao Zhu, Hongyue Zhu, and Biswanath Mukherjee, Traffic Grooming in Optical WDM Mesh Networks, Springer, 2005
Technical Reports (Selected)
B. Mukherjee, "The p(i)-persistent protocol and voice-data integration for unidirectional broadcast bus networks," Ph.D. dissertation, Department of Electrical Engineering, University of Washington, Seattle, WA, May 1987. (Also, Technical Report No. 241, Department of Electrical Engineering, University of Washington, Seattle, WA, May 1987.)
J. Cai and B. Mukherjee, "Design, development, and measured performance of a 3C505-based CSMA/CD bridge," Technical Report No. CSE-90-8, Division of Computer Science, University of California, Davis, CA, March 1990.
G. V. Dias, K. N. Levitt, and B. Mukherjee, "Modeling Attacks on Computer Systems: Evaluating Vulnerabilities and Forming a Basis for Attack Detection," Technical Report No. CSE-90-41, Division of Computer Science, University of California, Davis, CA, July 1990.
A. S. Acampora, L. G. Kazovsky, V. W. S. Chan, K.-W. Cheung, E. Desurvire, L. F. Eastman, J. Escobar, A. Ganz, M. Gerla, P. A. Humblet, M. N. Islam, S. Kang, K. Liu, B. Mukherjee, P. R. Prucnal, R. Ramaswami, I. Rubin, A. A. M. Saleh, J. Sauer, and C. M. Verber, "Proposal for a New National Science Foundation Program on Optical Networks," NSF Technical Report, March 1993.
B. Mukherjee, F. Neri, et al., "Report of US/EU Workshop on Key Issues and Grand Challenges in Optical Networking," Brussels, Belgium, June 2006.
References
External links
Living people
20th-century births
Indian computer scientists
Fellow Members of the IEEE
University of California, Davis faculty
IIT Kharagpur alumni
Year of birth missing (living people)
|
38218992
|
https://en.wikipedia.org/wiki/European%20Cybercrime%20Centre
|
European Cybercrime Centre
|
The European Cybercrime Centre (EC3 or EC³) is the body of the European Police Office (Europol) of the European Union (EU), headquartered in The Hague, that coordinates cross-border law enforcement activities against computer crime and acts as a centre of technical expertise on the matter.
History
When officially launched on 11 January 2013, the European Cybercrime Centre was not expected to be fully operational until 2015.
It began with a staff of 30, with plans to expand to 40 by the end of 2013.
It began operations with a budget of about 3.6 million euros.
Organisational structure and key personnel
The head of EC3 reports directly to the head of Europol.
The first person to head the department was the former head of Danish domestic intelligence, Troels Oerting, who left Europol in January 2015 to become Barclays' Chief Intelligence Security Officer.
Responsibilities and cooperation with other bodies
EC3 was tasked with assisting member states in their efforts to dismantle and disrupt cybercrime networks and developing tools and providing training.
EC3 works with the European Union Intelligence and Situation Centre (INTCEN), the United Nations Office on Drugs and Crime (UNODC), the World Customs Organization (WCO), the European Border and Coast Guard Agency (EBCG, also known as Frontex), and the European Anti-Fraud Office (OLAF).
Press releases in 2015 also revealed that EC3 works with American security services, such as the Federal Bureau of Investigation (FBI).
There is some overlap with the responsibilities of the European Union Agency for Network and Information Security (ENISA).
At a press conference on 10 February 2014, asked about massive identity theft uncovered by the German Federal Office for Information Security, the then head of the EC3, Troels Oerting, said that his unit was not responsible for combatting "politically motivated hacking and/or espionage against EU institutions".
Activities
In February 2014, Troels Oerting reported successes that the unit had had in 2013. These included catching internet extortioners, with 13 arrests. They had also been involved in fighting malware attacks on banks using botnets and – in cooperation with Microsoft and experts from the German Federal Criminal Police Office – taking down the ZeroAccess botnet.
In 2014, details were revealed of Operation Onymous, which took down a number of Darknet sites, including Pandora, Cloud 9, Hydra, Blue Sky, Topix, Flugsvamp, Cannabis Road, Black Market and Silk Road 2.0.
In 2015, American media reported on a coordinated FBI operation with the assistance of EC3 to take down Dark0de, the largest English-language communication and trading platform for cybercriminals.
Participating states
As well as the EU member states, there is cooperation with a number of other states, including Australia, Canada, North Macedonia, Norway, Switzerland, Monaco, Bosnia and Herzegovina, Colombia, Moldova, Russia, Turkey, the Republic of Serbia, Montenegro, Ukraine and the United States.
See also
European Network and Information Security Agency
National Cyber Security Centre (disambiguation)
References
External links
2013 establishments in the Netherlands
Computer security organizations
European Union agencies' subsidiary organisations
Europol
Government agencies established in 2013
Information technology organizations based in Europe
International law enforcement agencies
Cybercrime Centre
Organisations based in The Hague
Cybercrime Centre
|
27814726
|
https://en.wikipedia.org/wiki/Climate%20and%20Forecast%20Metadata%20Conventions
|
Climate and Forecast Metadata Conventions
|
The Climate and Forecast (CF) metadata conventions are conventions for the description of Earth sciences data, intended to promote the processing and sharing of data files. The metadata defined by the CF conventions are generally included in the same file as the data, thus making the file "self-describing". The conventions provide a definitive description of what the data values found in each netCDF variable represent, and of the spatial and temporal properties of the data, including information about grids, such as grid cell bounds and cell averaging methods. This enables users of files from different sources to decide which variables are comparable, and is a basis for building software applications with powerful data extraction, grid remapping, data analysis, and data visualization capabilities.
History and Evolution
The CF conventions were introduced in 2003, after several years of development by a collaboration that included staff from U.S. and European climate and weather laboratories. The conventions contained generalizations and extensions to the earlier Cooperative Ocean/Atmosphere Research Data Service (COARDS) conventions and the Gregory/Drach/Tett (GDT) conventions. As the scope of the CF conventions grew along with its user base, the CF community adopted an open governance model. In December 2008 the trio of standards, netCDF+CF+OPeNDAP, was adopted by IOOS as a recommended standard (number 08-012) for the representation and transport of gridded data. The CF conventions are being considered by the NASA Standards Process Group (SPG) and others as more broadly applicable standards.
Applications and User Base
The CF conventions have been adopted by a wide variety of national and international programs and activities in the Earth sciences. For example, they were required for the climate model output data collected for Coupled model intercomparison projects, which are widely used for the Intergovernmental Panel on Climate Change assessment reports.
They are promoted as an important element of scientific community coordination by the World Climate Research Programme. They are also used as a technical foundation for a number of software packages and data systems, including the Climate Model Output Rewriter (CMOR), which is post processing software for climate model data, and the Earth System Grid, which distributes climate and other data. The CF conventions have also been used to describe the physical fields transferred between individual Earth system model software components, such as atmosphere and ocean components, as the model runs.
Supported Data Types
CF is intended for use with state estimation and forecasting data, in the atmosphere, ocean, and other physical domains. It was designed primarily to address gridded data types such as numerical weather prediction model outputs and climatology data in which data binning is used to impose a regular structure. However, the CF conventions are also applicable to many classes of observational data and have been adopted by a number of groups for such applications.
Supported Data Formats
CF originated as a standard for data written in netCDF, but its structure is general and it has been adapted for use with other data formats. For example, using the CF conventions with Hierarchical Data Format data has been explored.
Design Principles
Several principles guide the development of CF conventions:
Data should be self-describing, without external tables needed for interpretation.
Conventions should be developed only as needed, rather than anticipating possible needs.
Conventions should not be onerous to use for either data-writers or data-readers.
Metadata should be readable by humans as well as interpretable by programs.
Redundancy should be avoided to prevent inconsistencies when writing data.
Specific CF metadata descriptors use values of attributes to represent:
Data provenance: title, institution, contact, source (e.g. model), history (audit trail of operations), references, comment
Description of associated activity: project, experiment
Description of data: units, standard_name, long_name, auxiliary_variables, missing_value, valid_range, flag_values, flag_meanings
Description of coordinates: coordinates, bounds, grid_mapping (with formula_terms); time specified with reference_time ("time since T0") and calendar attributes.
Meaning of grid cells: cell_methods, cell_measures, and climatological statistics.
A central element of the CF Conventions is the CF Standard Name Table. The CF Standard Name Table uniquely associates a standard name with each geophysical parameter in a data set, where each name provides a precise description of physical quantities being represented. Note that this is the string value of the standard_name attribute, not the name of the parameter. The CF standard name table identifies over 1,000 physical quantities, each with a precise description and associated canonical units. Guidelines for construction of CF standard names are documented on the conventions web site.
As an example of the information provided by CF standard names, the entry for sea-level atmospheric pressure includes:
standard name: air_pressure_at_sea_level
description: sea_level means mean sea level, which is close to the geoid in sea areas. Air pressure at sea level is the quantity often abbreviated as MSLP or PMSL.
canonical units: Pa
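As an illustration of how such entries can be used, the sketch below checks a variable's attribute set against a tiny in-memory stand-in for the standard name table. The helper check_cf_variable and the table layout are hypothetical, and real CF validators accept any units convertible to the canonical units (e.g. via UDUNITS) rather than requiring an exact string match.

```python
# Hypothetical, minimal stand-in for part of the CF standard name table.
NAME_TABLE = {
    "air_pressure_at_sea_level": {"canonical_units": "Pa"},
}

def check_cf_variable(attrs, name_table=NAME_TABLE):
    """Illustrative CF sanity check: the standard_name must appear in
    the table, and the units must equal the canonical units (a real
    checker would test unit convertibility instead of equality)."""
    std = attrs.get("standard_name")
    entry = name_table.get(std)
    if entry is None:
        return False
    return attrs.get("units") == entry["canonical_units"]

# A variable's attributes as the CF conventions would record them:
mslp_attrs = {
    "standard_name": "air_pressure_at_sea_level",
    "long_name": "Mean sea level pressure",
    "units": "Pa",
}
```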
Software
NetCDF-Java Library parses CF Conventions and creates Coordinate System objects from them
OriginPro version 2021b supports netCDF CF Convention. Averaging can be performed during import to allow handling of large datasets in a GUI software.
References
External links
CF Metadata Home Page
The CF Metadata Convention (BADC page)
NASA Standards Process Group
Standard for the CF Metadata Conventions (Marine Metadata Interoperability Project page)
Ocean Data Standards on Metadata
Metadata
Earth sciences metadata conventions
Meteorological data and networks
Science software
|
4378023
|
https://en.wikipedia.org/wiki/Event%20loop
|
Event loop
|
In computer science, the event loop is a programming construct or design pattern that waits for and dispatches events or messages in a program. The event loop works by making a request to some internal or external "event provider" (that generally blocks the request until an event has arrived), then calls the relevant event handler ("dispatches the event"). The event loop is also sometimes referred to as the message dispatcher, message loop, message pump, or run loop.
The event-loop may be used in conjunction with a reactor, if the event provider follows the file interface, which can be selected or 'polled' (the Unix system call, not actual polling). The event loop almost always operates asynchronously with the message originator.
When the event loop forms the central control flow construct of a program, as it often does, it may be termed the main loop or main event loop. This title is appropriate, because such an event loop is at the highest level of control within the program.
Message passing
Message pumps are said to 'pump' messages from the program's message queue (assigned and usually owned by the underlying operating system) into the program for processing. In the strictest sense, an event loop is one of the methods for implementing inter-process communication. In fact, message processing exists in many systems, including a kernel-level component of the Mach operating system. The event loop is a specific implementation technique of systems that use message passing.
Alternative designs
This approach is in contrast to a number of other alternatives:
Traditionally, a program simply ran once, then terminated. This type of program was very common in the early days of computing, and lacked any form of user interactivity. This is still used frequently, particularly in the form of command-line-driven programs. Any parameters are set up in advance and passed in one go when the program starts.
Menu-driven designs. These still may feature a main loop, but are not usually thought of as event driven in the usual sense. Instead, the user is presented with an ever-narrowing set of options until the task they wish to carry out is the only option available. Limited interactivity through the menus is available.
Usage
Due to the predominance of graphical user interfaces, most modern applications feature a main loop. The get_next_message() routine is typically provided by the operating system, and blocks until a message is available. Thus, the loop is only entered when there is something to process.
function main
    initialize()
    while message != quit
        message := get_next_message()
        process_message(message)
    end while
end function
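The pseudocode above can be made runnable in Python using a blocking queue as the message provider; run_event_loop and the "quit" sentinel are illustrative names, not any particular toolkit's API.

```python
import queue

def run_event_loop(messages):
    """Pull messages from a blocking queue and dispatch each one
    until the 'quit' sentinel arrives, mirroring the pseudocode."""
    q = queue.Queue()
    for m in messages:
        q.put(m)
    dispatched = []
    while True:
        message = q.get()           # blocks until a message is available
        if message == "quit":
            break
        dispatched.append(message)  # stand-in for process_message(message)
    return dispatched
```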
File interface
Under Unix, the "everything is a file" paradigm naturally leads to a file-based event loop. Reading from and writing to files, inter-process communication, network communication, and device control are all achieved using file I/O, with the target identified by a file descriptor. The select and poll system calls allow a set of file descriptors to be monitored for a change of state, e.g. when data becomes available to be read.
For example, consider a program that reads from a continuously updated file and displays its contents in the X Window System, which communicates with clients over a socket (either Unix domain or Berkeley):
import select

def main():
    file_fd = open("logfile.log")
    x_fd = open_display()
    construct_interface()
    while True:
        # wait until either the file or the X connection is readable
        rlist, _, _ = select.select([file_fd, x_fd], [], [])
        if file_fd in rlist:
            data = file_fd.read()
            append_to_display(data)
            send_repaint_message()
        if x_fd in rlist:
            process_x_messages()
Handling signals
Asynchronous events (signals) are among the few things in Unix that do not conform to the file interface. Signals are received in signal handlers, small, limited pieces of code that run while the rest of the task is suspended; if a signal is received and handled while the task is blocking in select(), select will return early with EINTR; if a signal is received while the task is CPU-bound, the task will be suspended between instructions until the signal handler returns.
Thus an obvious way to handle signals is for signal handlers to set a global flag and have the event loop check for the flag immediately before and after the select() call; if it is set, handle the signal in the same manner as with events on file descriptors. Unfortunately, this gives rise to a race condition: if a signal arrives immediately between checking the flag and calling select(), it will not be handled until select() returns for some other reason (for example, being interrupted by a frustrated user).
The solution arrived at by POSIX is the pselect() call, which is similar to select() but takes an additional sigmask parameter, which describes a signal mask. This allows an application to mask signals in the main task, then remove the mask for the duration of the select() call such that signal handlers are only called while the application is I/O bound. However, implementations of pselect() have not always been reliable; versions of Linux prior to 2.6.16 do not have a pselect() system call, forcing glibc to emulate it via a method prone to the very same race condition pselect() is intended to avoid.
An alternative, more portable solution, is to convert asynchronous events to file-based events using the self-pipe trick, where "a signal handler writes a byte to a pipe whose other end is monitored by select() in the main program". In Linux kernel version 2.6.22, a new system call signalfd() was added, which allows receiving signals via a special file descriptor.
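The self-pipe trick described above can be sketched as follows. The name wait_for_event is illustrative, and a production loop would register the pipe's read end alongside its other file descriptors rather than waiting on it alone.

```python
import os
import select
import signal

# Self-pipe trick: turn asynchronous signals into ordinary
# file-descriptor events that select() can wait on.
rfd, wfd = os.pipe()
os.set_blocking(wfd, False)    # a signal handler must never block

def _handler(signum, frame):
    os.write(wfd, b"\x00")     # wake up the event loop

signal.signal(signal.SIGUSR1, _handler)

def wait_for_event(timeout):
    """Block until the pipe becomes readable (a signal arrived)
    or the timeout expires."""
    rlist, _, _ = select.select([rfd], [], [], timeout)
    if rfd in rlist:
        os.read(rfd, 4096)     # drain all pending wakeup bytes
        return "signal"
    return "timeout"
```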
Implementations
Windows applications
On the Microsoft Windows operating system, a process that interacts with the user must accept and react to incoming messages, which is almost inevitably done by a message loop in that process. In Windows, a message is equated to an event created and imposed upon the operating system. An event can be user interaction, network traffic, system processing, timer activity, inter-process communication, among others. For non-interactive, I/O only events, Windows has I/O completion ports. I/O completion port loops run separately from the Message loop, and do not interact with the Message loop out of the box.
The "heart" of most Win32 applications is the WinMain() function, which calls GetMessage() in a loop. GetMessage() blocks until a message, or "event", is received (with function PeekMessage() as a non-blocking alternative). After some optional processing, it will call DispatchMessage(), which dispatches the message to the relevant handler, also known as WindowProc. Normally, messages that have no special WindowProc() are dispatched to DefWindowProc, the default one. DispatchMessage() calls the WindowProc of the HWND handle of the message (registered with the RegisterClass() function).
Message ordering
More recent versions of Microsoft Windows guarantee to the programmer that messages will be delivered to an application's message loop in the order that they were perceived by the system and its peripherals. This guarantee is essential when considering the design consequences of multithreaded applications.
However, some messages have different rules, such as messages that are always received last, or messages with a different documented priority.
X Window System
Xlib event loop
X applications using Xlib directly are built around the XNextEvent family of functions; XNextEvent blocks until an event appears on the event queue, whereupon the application processes it appropriately. The Xlib event loop only handles window system events; applications that need to be able to wait on other files and devices could construct their own event loop from primitives such as ConnectionNumber, but in practice tend to use multithreading.
Very few programs use Xlib directly. In the more common case, GUI toolkits based on Xlib usually support adding events. For example, toolkits based on Xt Intrinsics have XtAppAddInput() and XtAppAddTimeout().
Note that it is not safe to call Xlib functions from a signal handler, because the X application may have been interrupted in an arbitrary state, e.g. within XNextEvent. Solutions exist for X11R5, X11R6, and Xt.
GLib event loop
The GLib event loop was originally created for use in GTK but is now used in non-GUI applications as well, such as D-Bus. The resource polled is the collection of file descriptors the application is interested in; the polling block will be interrupted if a signal arrives or a timeout expires (e.g. if the application has specified a timeout or idle task). While GLib has built-in support for file descriptor and child termination events, it is possible to add an event source for any event that can be handled in a prepare-check-dispatch model.
Application libraries that are built on the GLib event loop include GStreamer and the asynchronous I/O methods of GnomeVFS, but GTK remains the most visible client library. Events from the windowing system (in X, read off the X socket) are translated by GDK into GTK events and emitted as GLib signals on the application's widget objects.
macOS Core Foundation run loops
Exactly one CFRunLoop is allowed per thread, and arbitrarily many sources and observers can be attached. Sources then communicate with observers through the run loop, with it organising queueing and dispatch of messages.
The CFRunLoop is abstracted in Cocoa as an NSRunLoop, which allows any message (equivalent to a function call in non-reflective runtimes) to be queued for dispatch to any object.
See also
Asynchronous I/O
Event-driven programming
Inter-process communication
Message passing
The game loop in Game programming
References
External links
Meandering Through the Maze of MFC Message and Command Routing
Using Messages and Message Queues (MSDN)
Using Window Procedures (MSDN)
WindowProc (MSDN)
Control flow
Events (computing)
|
220672
|
https://en.wikipedia.org/wiki/Comm
|
Comm
|
The comm command in the Unix family of computer operating systems is a utility that is used to compare two files for common and distinct lines. comm is specified in the POSIX standard. It has been widely available on Unix-like operating systems since the mid to late 1980s.
History
Written by Lee E. McMahon, comm first appeared in Version 4 Unix.
The version of comm bundled in GNU coreutils was written by Richard Stallman and David MacKenzie.
Usage
comm reads two files as input, regarded as lines of text, and outputs one file containing three columns. The first two columns contain lines unique to the first and second file, respectively. The last column contains lines common to both. This functionality is similar to that of diff.
Columns are typically distinguished with the tab character. If the input files contain lines beginning with the separator character, the output columns can become ambiguous.
For efficiency, standard implementations of comm expect both input files to be sequenced in the same line collation order, sorted lexically. The sort (Unix) command can be used for this purpose.
The algorithm makes use of the collating sequence of the current locale. If the lines in the files are not both collated in accordance with the current locale, the result is undefined.
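The merge over two sorted inputs can be sketched as follows; this is an illustrative Python model of the algorithm, not the C implementation. Because the merge is a single pass, each input line is examined once, which is why comm requires pre-sorted input.

```python
def comm(lines1, lines2):
    """Single-pass three-column merge of two sorted line sequences,
    modelling comm's output: (only in file 1, only in file 2, in both)."""
    only1, only2, both = [], [], []
    i = j = 0
    while i < len(lines1) and j < len(lines2):
        if lines1[i] == lines2[j]:
            both.append(lines1[i]); i += 1; j += 1
        elif lines1[i] < lines2[j]:
            only1.append(lines1[i]); i += 1
        else:
            only2.append(lines2[j]); j += 1
    only1.extend(lines1[i:])   # leftovers occur in exactly one file
    only2.extend(lines2[j:])
    return only1, only2, both
```

Applied to the foo/bar example in the section below, this yields eggplant in column one, the second banana and zucchini in column two, and apple and banana in column three.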
Return code
Unlike diff, the return code from comm has no logical significance concerning the relationship of the two files. A return code of 0 indicates success; a return code >0 indicates an error occurred during processing.
Example
$ cat foo
apple
banana
eggplant
$ cat bar
apple
banana
banana
zucchini
$ comm foo bar
		apple
		banana
	banana
eggplant
	zucchini
This shows that both files have one banana, but only bar has a second banana.
In more detail, each output line's column is indicated by its number of leading tab characters: none for column one (unique to the first file), one for column two (unique to the second file), and two for column three (common to both).
Comparison to diff
In general terms, diff is a more powerful utility than comm. The simpler comm is best suited for use in scripts.
The primary distinction between comm and diff is that comm discards information about the order of the lines prior to sorting.
A minor difference between comm and diff is that comm will not try to indicate that a line has "changed" between the two files; lines are either shown in the "from file #1", "from file #2", or "in both" columns. This can be useful if one wishes two lines to be considered different even if they only have subtle differences.
Other options
comm has command-line options to suppress any of the three columns. This is useful for scripting.
There is also an option to read one file (but not both) from standard input.
Limits
Up to a full line must be buffered from each input file during line comparison, before the next output line is written.
Some implementations read lines with the function getline, which does not impose any line length limits if system memory suffices.
Other implementations read lines with the function fgets. This function requires a fixed buffer. For these implementations, the buffer is often sized according to the POSIX macro LINE_MAX.
See also
Comparison of file comparison tools
List of Unix commands
cmp (Unix) – character oriented file comparison
cut (Unix) – splitting column-oriented files
References
External links
Free file comparison tools
Comm
Unix SUS2008 utilities
Plan 9 commands
Inferno (operating system) commands
|
279585
|
https://en.wikipedia.org/wiki/Uptime
|
Uptime
|
Uptime is a measure of system reliability, expressed as the percentage of time a machine, typically a computer, has been working and available. Uptime is the opposite of downtime.
It is often used as a measure of computer operating system reliability or stability, in that this time represents the time a computer can be left unattended without crashing, or needing to be rebooted for administrative or maintenance purposes.
Conversely, long uptime may indicate negligence, because some critical updates can require reboots on some platforms.
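When uptime is expressed as a percentage, it translates directly into the amount of downtime permitted over a period. A minimal sketch of the arithmetic (the 99.999% "five nines" figure is an illustrative service level, not taken from this article):

```shell
# Minutes of downtime per year permitted by a given availability percentage.
# A year is taken as 365.25 days = 525960 minutes.
availability=99.999   # illustrative "five nines" target
awk -v a="$availability" 'BEGIN {
    minutes_per_year = 365.25 * 24 * 60
    printf "%.2f\n", minutes_per_year * (1 - a / 100)
}'
# prints 5.26  (about five minutes of downtime per year)
```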
Records
In 2005, Novell reported a server with a 6-year uptime. Although that might sound unusual, such uptimes are actually common when servers are maintained in an industrial context and host critical applications such as banking systems.
Netcraft maintains the uptime records for many thousands of web hosting computers.
A server running Novell NetWare has been reported to have been shut down after 16 years of uptime due to a failing hard disk.
Determining system uptime
Microsoft Windows
Windows Task Manager
Some versions of Microsoft Windows include an uptime field in Windows Task Manager, under the "Performance" tab. The format is D:HH:MM:SS (days, hours, minutes, seconds).
systeminfo
The output of the systeminfo command includes a "System Up Time" or "System Boot Time" field.
C:\>systeminfo | findstr "Time:"
System Up Time: 0 Days, 8 Hours, 7 Minutes, 19 Seconds
The exact text and format are dependent on the language and locale. The time given by systeminfo is not reliable. It does not take into account time spent in sleep or hibernation. Thus, the boot time will drift forward every time the computer sleeps or hibernates.
NET command
The NET command with its STATISTICS sub-command provides the date and time the computer started, for both the NET STATISTICS WORKSTATION and NET STATISTICS SERVER variants. The command NET STATS SRV is shorthand for NET STATISTICS SERVER. The exact text and date format is dependent on the configured language and locale.
C:\>NET STATISTICS WORKSTATION | findstr "since"
Statistics since 8/31/2009 8:52:29 PM
Windows Management Instrumentation (WMI)
Uptime can be determined via Windows Management Instrumentation (WMI), by querying the LastBootUpTime property of the Win32_OperatingSystem class. At the command prompt, this can be done using the wmic command:
C:\>wmic os get lastbootuptime
LastBootUpTime
20110508161751.822066+060
The timestamp uses the format yyyymmddhhmmss.nnn, so in the above example, the computer last booted up on 8 May 2011 at 16:17:51.822. The text "LastBootUpTime" and the timestamp format do not vary with language or locale. WMI can also be queried using a variety of application programming interfaces, including VBScript or PowerShell.
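Because the text "LastBootUpTime" and the timestamp layout are fixed, a script can slice the value positionally. A minimal sketch using the sample value above (POSIX cut; the character positions follow from the stated yyyymmddhhmmss layout):

```shell
# Sample LastBootUpTime value from the query above.
ts='20110508161751.822066+060'

# Slice the fixed-position fields of yyyymmddhhmmss.
year=$(printf '%s' "$ts" | cut -c1-4)     # 2011
month=$(printf '%s' "$ts" | cut -c5-6)    # 05
day=$(printf '%s' "$ts" | cut -c7-8)      # 08
hh=$(printf '%s' "$ts" | cut -c9-10)      # 16
mi=$(printf '%s' "$ts" | cut -c11-12)     # 17
ss=$(printf '%s' "$ts" | cut -c13-14)     # 51

echo "$year-$month-$day $hh:$mi:$ss"
# prints 2011-05-08 16:17:51
```

This matches the boot time stated in the text (8 May 2011 at 16:17:51), and, unlike parsing systeminfo output, does not depend on language or locale.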
Uptime.exe
Microsoft formerly provided a downloadable utility called Uptime.exe, which reports elapsed time in days, hours, minutes, and seconds.
C:\>Uptime
SYSTEMNAME has been up for: 2 day(s), 4 hour(s), 24 minute(s), 47 second(s)
The time given by Uptime.exe is not reliable. It does not take into account time spent in sleep or hibernation. Thus, the boot time will drift forward every time the computer sleeps or hibernates.
FreeDOS
The uptime command is also available for FreeDOS. The FreeDOS version was developed by M. Aitchison.
Linux
Using uptime
Users of Linux systems can use the BSD uptime utility, which also displays the system load averages for the past 1, 5 and 15 minute intervals:
$ uptime
18:17:07 up 68 days, 3:57, 6 users, load average: 0.16, 0.07, 0.06
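The human-oriented output varies slightly between systems, but the load averages always follow the "load average:" marker, so a script can extract them. A minimal sketch working on the sample line above (the exact field layout before the marker is not assumed):

```shell
# Sample uptime output line from above.
line='18:17:07 up 68 days, 3:57, 6 users, load average: 0.16, 0.07, 0.06'

# Keep everything after the marker and drop the commas.
loads=$(printf '%s\n' "$line" | sed 's/.*load average: //; s/,//g')
echo "$loads"
# prints 0.16 0.07 0.06

# The first figure is the 1-minute average.
one_min=${loads%% *}
echo "$one_min"
# prints 0.16
```

For machine consumption on Linux, reading /proc/loadavg directly is usually simpler than parsing uptime output.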
Using /proc/uptime
Shows how long the system has been on since it was last restarted:
$ cat /proc/uptime
350735.47 234388.90
The first number is the total number of seconds the system has been up. The second number is how much of that time the machine has spent idle, in seconds. On multi-core systems (and some Linux versions) the second number is the sum of the idle time accumulated by each CPU.
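Since the first field is plain seconds, converting it to the days/hours/minutes form shown by uptime is simple integer arithmetic. A minimal sketch using the sample reading above (the fractional part is dropped):

```shell
# First field of the sample /proc/uptime reading shown above,
# with the fractional seconds dropped.
secs=350735

days=$(( secs / 86400 ))            # 86400 seconds per day
hours=$(( (secs % 86400) / 3600 ))
mins=$(( (secs % 3600) / 60 ))

printf 'up %d days, %d:%02d\n' "$days" "$hours" "$mins"
# prints up 4 days, 1:25
```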
BSD
Using uptime
BSD-based operating systems such as FreeBSD and Mac OS X, as well as SVR4 systems, have the uptime command:
$ uptime
3:01AM up 69 days, 7:53, 0 users, load averages: 0.08, 0.07, 0.05
Using sysctl
There is also a method of using sysctl to call the system's last boot time:
$ sysctl kern.boottime
kern.boottime: { sec = 1271934886, usec = 667779 } Thu Apr 22 12:14:46 2010
OpenVMS
On OpenVMS systems, the show system command can be used at the DCL command prompt to obtain the system uptime. The first line of the resulting display includes the system's uptime, displayed as days followed by hours:minutes:seconds. In the following example, the command qualifier /noprocess suppresses the display of per-process detail lines of information.
$ show system/noprocess
OpenVMS V7.3-2 on node JACK 29-JAN-2008 16:32:04.67 Uptime 894 22:28:52
The command output above shows that node JACK, on 29 January 2008 at 16:32:04.67, had an uptime of 894 days, 22 hours, 28 minutes and 52 seconds.
See also
Availability
List of Unix commands
Maintenance window
System profiler
Transmission Control Protocol#TCP timestamps – can allow remote estimation of uptime
Website monitoring
Who (Unix) – can display the time the system was booted
References
Real-time computing
Unix user management and support-related utilities
Fault-tolerant computer systems
Windows administration
|
6154171
|
https://en.wikipedia.org/wiki/George%20Varghese
|
George Varghese
|
George Varghese (born 1960) is a Principal Researcher at Microsoft Research. Before joining MSR's lab in Silicon Valley in 2013, he was a Professor of Computer Science at the University of California San Diego, where he led the Internet Algorithms Lab and also worked with the Center for Network Systems and the Center for Internet Epidemiology. He is the author of the textbook Network Algorithmics published by Morgan Kaufmann in 2004.
Education
Varghese received his B.Tech in electrical engineering from IIT Bombay in 1981, his M.S. in computer studies from NCSU in 1983, and his Ph.D. in computer science from MIT in 1993, where his advisor was Nancy Lynch. He has been a Fellow of the ACM since 2002.
Research
Transparent bridge architecture
Before his Ph.D., George spent several years as part of the network architecture and advanced development group at Digital Equipment Corporation, where he wrote the first specification for the transparent bridge architecture (based on the inventions of Mark Kempf and Radia Perlman). After several iterations and other authors, this became the IEEE 802 bridge specification, a widely implemented standard that is the basis of the billion-dollar transparent bridging industry. He was also part of the DEC team that invented the Gigaswitch and the Giganet (a precursor to Gigabit Ethernet).
Network algorithmics
Varghese is best known for helping define network algorithmics, a field of study which resolves networking bottlenecks using interdisciplinary techniques that include changes to hardware and operating systems as well as efficient algorithms.
Among his contributions to network algorithmics are Deficit Round Robin (co-invented with M. Shreedhar), a scheduling algorithm that is widely used in routers, and timing wheels (with Tony Lauck), an algorithm for fast timers that is used as the basis of fast timers in Linux and FreeBSD.
IP lookup and packet classification
Varghese has also worked extensively on fast IP lookup and packet classification. His work with G. Chandranmenon on Threaded indexes predates the work done at Cisco Systems and Juniper Networks on tag switching. His work on multibit tries (with V. Srinivasan) has been used by a number of companies including Microsoft. His work on scalable IP packet lookup (with Waldvogel and Turner) for longer addresses such as IPv6 is being considered for use by Linux.
George also worked with Eatherton and Dittia on the Tree bitmap IP lookup algorithm that is used in Cisco's CRS-1 router, which many believe to be the fastest router in the world. Tree bitmap and hypercuts (with Sumeet Singh and Florin Baboescu) appear to be among the best algorithms (excluding CAMs) for IP lookup and packet classification today.
Self stabilization
George is also known for his contributions to the theoretical field of self-stabilization (a form of fault-tolerance), where he has helped (with various colleagues) pioneer several general techniques such as local checking, local correction, and counter flushing.
NetSift
Varghese co-founded NetSift Inc. (with Sumeet Singh) in 2004, serving as president and CTO. NetSift helped pioneer the notion of automated signature extraction for security and helped to introduce the use of streaming algorithms for network measurement and security at speeds greater than 10 Gbit/s. His work with Cristian Estan on multistage filters has been widely used in industry. NetSift was acquired in June 2005 by Cisco Systems as part of the Modular Switching Group.
Awards and honors
Elected as a member into the National Academy of Engineering, 2017
2014 Koji Kobayashi Award for Computers and Communications for "Contributions to the field of network algorithmics and its applications to high-speed packet networks"
ACM Fellow, 2002
Best Teacher Award in Computer Science, UCSD, 2001, voted by graduating undergraduate students
Best Tutorial Award, SIGMETRICS 98.
Big Fish, Mentor of the Year Award, Association for Graduate Engineering Students (AGES), Washington University, 1997.
ONR Young Investigator Award 1996 (34 awarded out of 416 applications across the sciences, among 2 computer scientists chosen in 1996)
Best Student Paper, PODC 96, for a paper jointly written with student Mahesh Jayaram.
Joint winner of the Sproull Prize for best MIT Thesis in Computer Science (1993) and nominated by MIT for ACM Thesis Prize.
DEC Graduate Education Program (GEEP) Scholar, 1989–1991.
Selected publications
Sumeet Singh, Cristian Estan, George Varghese, and Stefan Savage, Automated Worm Fingerprinting, Proceedings of the 6th ACM/USENIX Symposium on Operating Systems Design and Implementation (OSDI). This paper was the basis of NetSift (see above).
Cristian Estan, David Moore, and George Varghese, Building a Better NetFlow, Proceedings of the ACM SIGCOMM Conference, Portland, OR, September 2004
Fan Chung Graham, Ron Graham, and George Varghese, Parallelism versus Memory Allocation in Pipelined Router Forwarding Engines
Proceedings of SPAA 2004 (invited and accepted to Theory of Computer Science journal as best of SPAA), Barcelona, Spain, March 2004
W. Eatherton, Z. Dittia, and George Varghese, Tree bitmap: Hardware Software IP Lookups with Incremental Updates (no prior conference paper, IP lookup algorithm used in Cisco's most recent CRS-1 router) ACM Computer Communications Review, volume 34, April 2004
George Varghese, Summary of Ph.D. Thesis on Self-stabilization
References
External links
George Varghese home page at Microsoft Research
George Varghese old home page at UCSD
List of online papers of George Varghese
Internet Algorithms Lab
Center for Network Systems
Center for Internet Epidemiology
ACM Fellows listing for Varghese
Timing wheels
Fast timers in Linux
FreeBSD
Threaded indexes
Multibit tries
Scalable IP packet lookup
Tree BitMap IP lookup algorithm
Hypercuts
Cisco Systems acquires NetSift
Transparent bridging
1960 births
Indian emigrants to the United States
American computer scientists
American technology writers
Researchers in distributed computing
Fellows of the Association for Computing Machinery
Members of the United States National Academy of Engineering
MIT School of Engineering alumni
North Carolina State University alumni
University of California, San Diego faculty
American businesspeople of Indian descent
IIT Bombay alumni
Living people
Microsoft employees
American male writers of Indian descent
American male non-fiction writers
|
32494
|
https://en.wikipedia.org/wiki/Vi
|
Vi
|
vi (pronounced as distinct letters) is a screen-oriented text editor originally created for the Unix operating system. The portable subset of the behavior of vi and programs based on it, and the ex editor language supported within these programs, is described by (and thus standardized by) the Single Unix Specification and POSIX.
The original code for vi was written by Bill Joy in 1976, as the visual mode for a line editor called ex that Joy had written with Chuck Haley. Bill Joy's ex 1.1 was released as part of the first Berkeley Software Distribution (BSD) Unix release in March 1978. It was not until version 2.0 of ex, released as part of Second BSD in May 1979 that the editor was installed under the name "vi" (which took users straight into ex's visual mode), and the name by which it is known today. Some current implementations of vi can trace their source code ancestry to Bill Joy; others are completely new, largely compatible reimplementations.
The name "vi" is derived from the shortest unambiguous abbreviation for the ex command visual, which switches the ex line editor to its full-screen mode. The name is pronounced (the English letters v and i).
In addition to various non-free software variants of vi distributed with proprietary implementations of Unix, vi was open-sourced with OpenSolaris, and several free and open source software vi clones exist. A 2009 survey of Linux Journal readers found that vi was the most widely used text editor among respondents, beating gedit, the second most widely used editor, by nearly a factor of two (36% to 19%).
History
Creation
vi was derived from a sequence of UNIX command line editors, starting with ed, which was a line editor designed to work well on teleprinters, rather than display terminals. Within AT&T Corporation, where ed originated, people seemed to be happy with an editor as basic and unfriendly as ed. George Coulouris recalls:
[...] for many years, they had no suitable terminals. They carried on with TTYs and other printing terminals for a long time, and when they did buy screens for everyone, they got Tektronix 4014s. These were large storage tube displays. You can't run a screen editor on a storage-tube display as the picture can't be updated. Thus it had to fall to someone else to pioneer screen editing for Unix, and that was us initially, and we continued to do so for many years.
Coulouris considered the cryptic commands of ed to be only suitable for "immortals", and thus in February 1976, he enhanced ed (using Ken Thompson's ed source as a starting point) to make em (the "editor for mortals") while acting as a lecturer at Queen Mary College. The em editor was designed for display terminals and was a single-line-at-a-time visual editor. It was one of the first programs on Unix to make heavy use of "raw terminal input mode", in which the running program, rather than the terminal device driver, handled all keystrokes. When Coulouris visited UC Berkeley in the summer of 1976, he brought a DECtape containing em, and showed the editor to various people. Some people considered this new kind of editor to be a potential resource hog, but others, including Bill Joy, were impressed.
Inspired by em, and by their own tweaks to ed, Bill Joy and Chuck Haley, both graduate students at UC Berkeley, took code from em to make en, and then "extended" en to create ex version 0.1. After Haley's departure, Bruce Englar encouraged Joy to redesign the editor, which he did from June through October 1977, adding a full-screen visual mode to ex, which came to be vi.
vi and ex share their code; vi is the ex binary launched with the capability to render the text being edited onto a computer terminal: it is ex's visual mode. The name vi comes from the abbreviated ex command (vi) to enter the visual mode from within it. The long-form command to do the same was visual, and the name vi is explained as a contraction of visual in later literature. vi is also the shell command to launch ex/vi in the visual mode directly, from within a shell.
According to Joy, many of the ideas in this visual mode were taken from Bravo, the bimodal text editor developed at Xerox PARC for the Alto. In an interview about vi's origins, Joy said:
A lot of the ideas for the screen editing mode were stolen from a Bravo manual I surreptitiously looked at and copied. Dot is really the double-escape from Bravo, the redo command. Most of the stuff was stolen. There were some things stolen from ed—we got a manual page for the Toronto version of ed, which I think Rob Pike had something to do with. We took some of the regular expression extensions out of that.
Joy used a Lear Siegler ADM-3A terminal. On this terminal, the Escape key was at the location now occupied by the Tab key on the widely used IBM PC keyboard (on the left side of the alphabetic part of the keyboard, one row above the middle row). This made it a convenient choice for switching vi modes. Also, the keys h, j, k, and l served double duty as cursor movement keys and were inscribed with arrows, which is why vi uses them in that way. The ADM-3A had no other cursor keys. Joy explained that the terse, single character commands and the ability to type ahead of the display were a result of the slow 300 baud modem he used when developing the software and that he wanted to be productive when the screen was painting slower than he could think.
Distribution
Joy was responsible for creating the first BSD Unix release in March, 1978, and included ex 1.1 (dated 1 February 1978) in the distribution, thereby exposing his editor to an audience beyond UC Berkeley. From that release of BSD Unix onwards, the only editors that came with the Unix system were ed and ex. In a 1984 interview, Joy attributed much of the success of vi to the fact that it was bundled for free, whereas other editors, such as Emacs, could cost hundreds of dollars.
Eventually it was observed that most ex users were spending all their time in visual mode, and thus in ex 2.0 (released as part of Second Berkeley Software Distribution in May, 1979), Joy created vi as a hard link to ex, such that when invoked as vi, ex would automatically start up in its visual mode. Thus, vi is not the evolution of ex, vi is ex.
Joy described ex 2.0 (vi) as a very large program, barely able to fit in the memory of a PDP-11/70, thus although vi may be regarded as a small, lightweight, program today, it was not seen that way early in its history. By version 3.1, shipped with 3BSD in December 1979, the full version of vi was no longer able to fit in the memory of a PDP-11; the editor would be also too big to run on PC/IX for the IBM PC in 1984.
Joy continued to be lead developer for vi until version 2.7 in June 1979, and made occasional contributions to vi's development until at least version 3.5 in August 1980. In discussing the origins of vi and why he discontinued development, Joy said:
I wish we hadn't used all the keys on the keyboard. I think one of the interesting things is that vi is really a mode-based editor. I think as mode-based editors go, it's pretty good. One of the good things about EMACS, though, is its programmability and the modelessness. Those are two ideas which never occurred to me. I also wasn't very good at optimizing code when I wrote vi. I think the redisplay module of the editor is almost intractable. It does a really good job for what it does, but when you're writing programs as you're learning... That's why I stopped working on it.
What actually happened was that I was in the process of adding multiwindows to vi when we installed our VAX, which would have been in December of '78. We didn't have any backups and the tape drive broke. I continued to work even without being able to do backups. And then the source code got scrunched and I didn't have a complete listing. I had almost rewritten all of the display code for windows, and that was when I gave up. After that, I went back to the previous version and just documented the code, finished the manual and closed it off. If that scrunch had not happened, vi would have multiple windows, and I might have put in some programmability—but I don't know.
The fundamental problem with vi is that it doesn't have a mouse and therefore you've got all these commands. In some sense, it's backwards from the kind of thing you'd get from a mouse-oriented thing. I think multiple levels of undo would be wonderful, too. But fundamentally, vi is still ed inside. You can't really fool it.
It's like one of those piñatas: things that have candy inside but layer after layer of papier-mâché on top. It doesn't really have a unified concept. I think if I were going to go back, I wouldn't go back, but start over again.
In 1979, Mary Ann Horton took on responsibility for vi. Horton added support for arrow and function keys, macros, and improved performance by replacing termcap with terminfo.
Ports and clones
Up to version 3.7 of vi, created in October 1981, UC Berkeley was the development home for vi, but with Bill Joy's departure in early 1982 to join Sun Microsystems, and AT&T's UNIX System V (January 1983) adopting vi, changes to the vi codebase happened more slowly and in a more dispersed and mutually incompatible way. At UC Berkeley, changes were made but the version number was never updated beyond 3.7. Commercial Unix vendors, such as Sun, HP, DEC, and IBM each received copies of the vi source, and their operating systems, Solaris, HP-UX, Tru64 UNIX, and AIX, today continue to maintain versions of vi directly descended from the 3.7 release, but with added features, such as adjustable key mappings, encryption, and wide character support.
While commercial vendors could work with Bill Joy's codebase (and continue to use it today), many people could not. Because Joy had begun with Ken Thompson's ed editor, ex and vi were derivative works and could not be distributed except to people who had an AT&T source license. People looking for a free Unix-style editor would have to look elsewhere. By 1985, a version of Emacs (MicroEMACS) was available for a variety of platforms, but it was not until June 1987 that STEVIE (ST Editor for VI Enthusiasts), a limited vi clone, appeared. In early January 1990, Steve Kirkendall posted a new clone of vi, Elvis, to the Usenet newsgroup comp.os.minix, aiming for a more complete and more faithful clone of vi than STEVIE. It quickly attracted considerable interest in a number of enthusiast communities. Andrew Tanenbaum quickly asked the community to decide on one of these two editors to be the vi clone in Minix; Elvis was chosen, and remains the vi clone for Minix today.
In 1989 Lynne Jolitz and William Jolitz began porting BSD Unix to run on 386 class processors, but to create a free distribution they needed to avoid any AT&T-contaminated code, including Joy's vi. To fill the void left by removing vi, their 1992 386BSD distribution adopted Elvis as its vi replacement. 386BSD's descendants, FreeBSD and NetBSD, followed suit. But at UC Berkeley, Keith Bostic wanted a "bug for bug compatible" replacement for Joy's vi for BSD 4.4 Lite. Using Kirkendall's Elvis (version 1.8) as a starting point, Bostic created nvi, releasing it in the northern spring of 1994. When FreeBSD and NetBSD resynchronized the 4.4-Lite2 codebase, they too switched over to Bostic's nvi, which they continue to use today.
Despite the existence of vi clones with enhanced featuresets, sometime before June 2000, Gunnar Ritter ported Joy's vi codebase (taken from 2.11BSD, February 1992) to modern Unix-based operating systems, such as Linux and FreeBSD. Initially, his work was technically illegal to distribute without an AT&T source license, but, in January 2002, those licensing rules were relaxed, allowing legal distribution as an open-source project. Ritter continued to make small enhancements to the vi codebase similar to those done by commercial Unix vendors still using Joy's codebase, including changes required by the POSIX.2 standard for vi. His work is available as Traditional Vi, and runs today on a variety of systems.
But although Joy's vi was now once again available for BSD Unix, it arrived after the various BSD flavors had committed themselves to nvi, which provides a number of enhancements over traditional vi, and drops some of its legacy features (such as open mode for editing one line at a time). It is in some sense, a strange inversion that BSD Unix, where Joy's vi codebase began, no longer uses it, and the AT&T-derived Unixes, which in the early days lacked Joy's editor, are the ones that now use and maintain modified versions of his code.
Impact
Over the years since its creation, vi became the de facto standard Unix editor and a hacker favorite outside of MIT until the rise of Emacs after about 1984. The Single UNIX Specification specifies vi, so every conforming system must have it.
vi is still widely used by users of the Unix family of operating systems. About half the respondents in a 1991 USENET poll preferred vi. In 1999, Tim O'Reilly, founder of the eponymous computer book publishing company, stated that his company sold more copies of its vi book than its emacs book.
Interface
vi is a modal editor: it operates in either insert mode (where typed text becomes part of the document) or command mode (where keystrokes are interpreted as commands that control the edit session). For example, typing i while in command mode switches the editor to insert mode, but typing i again at this point places an "i" character in the document. From insert mode, pressing the Escape key switches the editor back to command mode. A perceived advantage of vi's separation of text entry and command modes is that both text editing and command operations can be performed without requiring the removal of the user's hands from the home row. As non-modal editors usually have to reserve all keys with letters and symbols for the printing of characters, any special commands for actions other than adding text to the buffer must be assigned to keys that do not produce characters, such as function keys, or combinations of modifier keys with regular keys. vi has the property that most ordinary keys are connected to some kind of command for positioning, altering text, searching and so forth, either singly or in key combinations. Many commands can be touch typed without the use of modifier keys. Other types of editors generally require the user to move their hands from the home row when touch typing:
To use a mouse to select text, commands, or menu items in a GUI editor.
To reach the arrow keys or editing functions (Home / End or function keys).
To invoke commands using modifier keys in conjunction with the standard typewriter keys.
For instance, in vi, replacing a word is cw followed by the replacement text and then Escape, which is a combination of two independent commands (change and word-motion) together with a transition into and out of insert mode. Text between the cursor position and the end of the word is overwritten by the replacement text. The operation can be repeated at some other location by typing ., the effect being that the word starting at that location will be replaced with the same replacement text.
A human–computer interaction textbook notes on its first page that "One of the classic UI foibles—told and re-told by HCI educators around the world—is the vi editor's lack of feedback when switching between modes. Many a user made the mistake of providing input while in command mode or entering a command while in input mode."
Contemporary derivatives and clones
Vim "Vi IMproved" has many additional features compared to vi, including (scriptable) syntax highlighting, mouse support, graphical versions, visual mode, many new editing commands and a large amount of extension in the area of ex commands. Vim is included with almost every Linux distribution (and is also shipped with every copy of Apple macOS). Vim also has a vi compatibility mode, in which Vim is more compatible with vi than it would be otherwise, although some vi features, such as open mode, are missing in Vim, even in compatibility mode. This mode is controlled by the :set compatible option. It is automatically turned on by Vim when it is started in a situation that looks as if the software might be expected to be vi compatible. Vim features that do not conflict with vi compatibility are always available, regardless of the setting. Vim was derived from a port of STEVIE to the Amiga.
Elvis is a free vi clone for Unix and other operating systems written by Steve Kirkendall. Elvis introduced a number of features now present in other vi clones, including allowing the cursor keys to work in input mode. It was the first to provide color syntax highlighting (and to generalize syntax highlighting to multiple filetypes). Elvis 1.x was used as the starting point for nvi, but Elvis 2.0 added numerous features, including multiple buffers, windows, display modes, and file access schemes. Elvis is the standard version of vi shipped on Slackware Linux, Kate OS and MINIX. The most recent version of Elvis is 2.2, released in October 2003.
nvi is an implementation of the ex/vi text editor originally distributed as part of the final official Berkeley Software Distribution (4.4 BSD-Lite). This is the version of vi that is shipped with all BSD-based open source distributions. It adds command history and editing, filename completions, multiple edit buffers, and multi-windowing (including multiple windows on the same edit buffer). Beyond 1.79, from October, 1996, which is the recommended stable version, there have been "development releases" of nvi, the most recent of which is 1.81.6, from November, 2007.
vile was initially derived from an early version of Microemacs in an attempt to bring the Emacs multi-window/multi-buffer editing paradigm to vi users, and was first published on Usenet's alt.sources in 1991. It provides infinite undo, UTF-8 compatibility, multi-window/multi-buffer operation, a macro expansion language, syntax highlighting, file read and write hooks, and more.
BusyBox, a set of standard Linux utilities in a single executable, includes a tiny vi clone.
Neovim, a refactor of Vim, which it strives to supersede.
See also
List of text editors
Comparison of text editors
visudo
List of Unix commands
References
Further reading
External links
The original Vi version, adapted to more modern standards
An Introduction to Display Editing with Vi, by Mark Horton and Bill Joy
vi lovers home page
Explanation of modal editing with vi – "Why, oh WHY, do those #?@! nutheads use vi?"
The original source code of ex (aka vi) versions 1.1, 2.2, 3.2, 3.6 and 3.7 ported to current UNIX
Computer-related introductions in 1976
Free text editors
Software using the BSD license
Unix SUS2008 utilities
Unix text editors
Console applications
|
20351675
|
https://en.wikipedia.org/wiki/MDynaMix
|
MDynaMix
|
Molecular Dynamics of Mixtures (MDynaMix) is a computer software package for general purpose molecular dynamics to simulate mixtures of molecules, interacting by AMBER- and CHARMM-like force fields in periodic boundary conditions.
Algorithms are included for NVE, NVT, NPT, anisotropic NPT ensembles, and Ewald summation to treat electrostatic interactions.
The code was written in a mix of Fortran 77 and 90 (with Message Passing Interface (MPI) for parallel execution). The package runs on Unix and Unix-like (Linux) workstations, clusters of workstations, and on Windows in sequential mode.
MDynaMix is developed at the Division of Physical Chemistry, Department of Materials and Environmental Chemistry, Stockholm University, Sweden. It is released as open-source software under a GNU General Public License (GPL).
Programs
md is the main MDynaMix block
makemol is a utility which provides help to create files describing molecular structure and the force field
tranal is a suite of utilities to analyze trajectories
mdee is a version of the program which implements expanded ensemble method to compute free energy and chemical potential (is not parallelized)
mge provides a graphical user interface to construct molecular models and monitor dynamics process
Field of application
Thermodynamic properties of liquids
Nucleic acid - ions interaction
Modeling of lipid bilayers
Polyelectrolytes
Ionic liquids
X-ray spectra of liquid water
Force Field development
See also
References
External links
Ascalaph, graphical shell for MDynaMix (GNU GPL)
Molecular dynamics software
Free science software
Free software programmed in C++
Free software programmed in Fortran
|
29474849
|
https://en.wikipedia.org/wiki/Bredolab%20botnet
|
Bredolab botnet
|
The Bredolab botnet, also known by its alias Oficla, was a Russian botnet mostly involved in viral e-mail spam. Before the botnet was eventually dismantled in November 2010 through the seizure of its command and control servers, it was estimated to consist of millions of zombie computers.
Operations
Though the earliest reports of the Bredolab botnet date from May 2009 (when the first malware samples of the Bredolab trojan horse were found), the botnet itself did not rise to prominence until August 2009, when there was a major surge in its size. Bredolab's main form of propagation was malicious e-mails with malware attachments that would infect a computer when opened, effectively turning the computer into another zombie controlled by the botnet. At its peak, the botnet was capable of sending 3.6 billion infected e-mails every day. The other main form of propagation was drive-by downloads, a method that exploits security vulnerabilities in software to download malware without the user being aware of it.
The botnet's main income came from leasing parts of it to third parties, who could use the infected systems for their own purposes; security researchers estimate that the owner of the botnet made up to $139,000 a month from botnet-related activities. Because of this rental business strategy, the payload of Bredolab was very diverse, ranging from scareware to malware and e-mail spam.
Dismantling and aftermath
On 25 October 2010, a team of Dutch law enforcement agents seized control of 143 Bredolab servers in a LeaseWeb datacenter, comprising three command-and-control servers, one database server, and several management servers, effectively removing the botnet herder's ability to control the botnet centrally. In an attempt to regain control, the botnet herder used 220,000 computers still under his control to launch a DDoS attack on LeaseWeb servers, though these attempts were ultimately in vain. After taking control of the botnet, the law enforcement team used the botnet itself to send a message to the owners of infected computers, stating that their computer was part of the botnet.
Subsequently, Armenian law enforcement officers arrested an Armenian citizen, Georgy Avanesov, as the suspected mastermind behind the botnet. The suspect denied any involvement in the botnet. He was sentenced to four years in prison in May 2012.
While the seizure of the command and control servers severely disrupted the botnet's ability to operate, the botnet itself is still partially intact, with command and control servers persisting in Russia and Kazakhstan. Security firm FireEye believes that a secondary group of botnet herders has taken over the remaining part of the botnet for their own purposes, possibly a previous client who reverse engineered parts of the original botnet creator's code. Even so, the group noted that the botnet's size and capacity has been severely reduced by the law enforcement intervention.
References
Web security exploits
Multi-agent systems
Distributed computing projects
Spamming
Botnets
|
44562167
|
https://en.wikipedia.org/wiki/AVFoundation
|
AVFoundation
|
AVFoundation is a multimedia framework with APIs in Objective-C and Swift that provides high-level services for working with time-based audiovisual media on Apple's Darwin-based operating systems: iOS, macOS, tvOS, and watchOS. It was first introduced in iOS 4 and saw significant changes in iOS 5 and iOS 6. Since Mac OS X Lion, it has been the default media framework for the macOS platform.
AVKit
As a component of AVFoundation, AVKit is an API that comes with OS X Mavericks 10.9+ and can be used with Xcode 5.0+ for developing media player software for Mac.
The AVKit software framework replaces QTKit, which was deprecated in OS X Mavericks and discontinued with the release of macOS Catalina.
See also
QuickTime
Media Foundation
References
External links
Moving to AV Kit and AV Foundation presentation (video and slides) from WWDC 2013 at Apple Developer
Moving to AV Kit and AV Foundation slides at Huihoo Foundation Documents
Technical Note TN2300: Transitioning QTKit Code to AV Foundation at Apple Developer Archive
Apple Inc. software
Software frameworks
MacOS APIs
Apple Inc. articles needing an infobox
|
1495750
|
https://en.wikipedia.org/wiki/Kanwal%20Rekhi
|
Kanwal Rekhi
|
Kanwal Singh Rekhi (born August 29, 1945) (Punjabi : ਕੰਵਲ ਰੇਖੀ) is an Indian-American businessperson. He is credited as the first Indian-American founder and CEO to take a venture-backed company public on the NASDAQ.
Career
Entrepreneur & Industry Executive
Rekhi worked as an engineer, systems analyst, and manager before venturing into entrepreneurship. At the age of 36, he moved to San Jose, California and in 1982 co-founded Excelan, a manufacturer of smart Ethernet cards, and was named president and CEO in 1985. The company went public on the NASDAQ in 1987 and merged with Novell in 1989. Kanwal remained at Novell as an executive vice-president and the chief technology officer of the company, later joining the board of directors. He retired from Novell in 1995. After leaving Novell, Kanwal also served as the CEO of CyberMedia from January 1998 until its merger with Network Associates (now McAfee) in September 1998.
Venture Investor
In 1994, Kanwal became a full-time angel investor, investing in more than 50 startups; he led the initial financing of, and served on the board of directors for, 23 of those companies. His venture financings have resulted in 21 exits to date, including 6 IPOs. Also active in Indian public policy related to venture capital, Kanwal advised Indian government policymakers on reforming venture regulations. This encouraged fund formation in India, and Kanwal was the founding limited partner behind Infinity Capital-India, a successful early-stage Indian venture fund.
In 2007, Kanwal co-founded Inventus Capital Partners, and currently serves as managing director. During his time at Inventus, Kanwal has invested in companies including GENWI, Salorix, Poshmark, and Sierra Atlantic (acquired by Hitachi Consulting).
India Entrepreneurial Leader
In 1995, Kanwal Rekhi co-founded TiE, The Indus Entrepreneurs, a nonprofit support network that provides advice, contacts, and funding to Indian Americans hoping to start businesses. Beyond his work with TiE, he also advised the Prime Minister of India and his government in laying the foundation for the country's information technology expansion. Kanwal has served as a trustee on the global board of TiE. He is also a former chairman of the Centre for Civil Society.
Philanthropy
Kanwal has contributed to raising the profile of educational institutions in India and the US. He contributed $5 million to Michigan Tech in 2000. To help set up a new school of information technology, he donated $3 million to IIT Bombay, which named the school after him as the Kanwal Rekhi School of Information Technology (KReSIT); the school was merged with the department of computer science and engineering in 2006.
Awards and Accolades
In 2010, Kanwal was conferred the Haridas and Bina Chaudhuri Award for Distinguished Service by the California Institute of Integral Studies. He was also named Entrepreneur of the Year in 1987 by Arthur Young/Venture magazine.
References
External links
Kanwal Rekhi School of Information Technology, IIT-B
Indian emigrants to the United States
American communications businesspeople
American businesspeople of Indian descent
Living people
American people of Punjabi descent
1945 births
Indian venture capitalists
Novell people
|
58735232
|
https://en.wikipedia.org/wiki/Patterson%20Hume
|
Patterson Hume
|
James Nairn Patterson "Pat" Hume (17 March 1923 – 9 May 2013) was a Canadian professor and science educator who has been called "Canada's pioneer of computer programming". He was a Professor of Physics and of Computer Science at the University of Toronto, and he served as the second Master of Massey College from 1981 to 1988.
Life and career
Hume received a B.A. in Mathematics and Physics in 1945, an M.A. in Physics in 1946, and a PhD in Physics (theoretical atomic spectroscopy) in 1949, all from the University of Toronto. From 1946 to 1949 he taught mathematics to returning soldiers at the University of Toronto campus in Ajax, Ontario.
He was an instructor in Physics at Rutgers University in New Jersey from 1949 to 1950 before rejoining the University of Toronto as an Assistant Professor of Physics.
In 1953, Hume and Beatrice Worsley began development of Transcode, a new computer language for the Ferranti Mark 1 machine known as FERUT.
In collaboration with his colleague Donald Ivey, he helped to steer the teaching of physics in a new direction through the use of educational television programs and movies. Starting in 1958, Hume and Ivey prepared and presented over one hundred television programs for the Canadian Broadcasting Corporation on various physics topics. Short films for the PSSC, such as Frames of Reference, and the CBC TV show The Nature of Things used humour and creative camerawork to make physics accessible to a wider range of students. In 1958, with Calvin Gotlieb, he published High-speed Data Processing, the first book on using computers in business, which was "recognized by The Oxford English Dictionary in twelve computer-related entries: block, character, datum, generator, housekeeping, in-line, interpreter, keyboard, logical, loop, matrix and simulate".
In 1964, with Calvin Gotlieb and Thomas Hull, he founded the Computer Science department at the University of Toronto.
With Ric Holt, he co-authored many computer programming textbooks, for SP/k, Fortran, Pascal, Turing and Java.
Hume was the second Master of Massey College, Toronto, having been a Senior Fellow since 1973.
Upon his retirement, he was appointed Professor Emeritus in 1988.
In 2002, he was inducted into the Canadian Information Productivity Awards (CIPA) Hall of Fame. In 2006 he was awarded an Honorary D.Sc. from Queen's University School of Computing.
He was an active member of The Arts and Letters Club of Toronto and for many years collaborated with Jack Yokom to produce the Annual Spring Review.
He died on 9 May 2013.
In 2014, Hume was given a Lifetime Achievement Award from the Canadian Association of Computer Science, in part for "the world's first long-distance use of a computer".
For the education work he carried out with Ivey, an asteroid (number 22415) was named HumeIvey in their honour.
Sources
On Beyond Darwin, Chapter 1
In Memoriam: University of Toronto Magazine
In Memoriam: Department of Computer Science
References
External links
CBC TV programs with Donald G. Ivey
Download or watch online: Frames of Reference (1960)
On Beyond Darwin by Patterson Hume
Honorary Doctorate at Queen's University
James Nairn Patterson Hume archival papers held at the University of Toronto Archives and Records Management Services
1923 births
2013 deaths
Canadian computer scientists
Massey College, Toronto
People from Brooklyn
University of Toronto alumni
University of Toronto faculty
|
40636816
|
https://en.wikipedia.org/wiki/Carbon%20nanotube%20computer
|
Carbon nanotube computer
|
A carbon nanotube computer is a computer built entirely from transistors based on carbon nanotubes (CNTs). Researchers from Stanford University reported that they had successfully built such a computer in a research paper published on 25 September 2013 in the journal Nature. They named their first carbon nanotube computer Cedric. It has a one-bit processor containing just 178 transistors.
In 2019, a team at the Massachusetts Institute of Technology created a 16-bit processor called RV16X-NANO. With 14,000 transistors (compared to only hundreds in the first CNT computer made in 2013), it is the largest computer chip yet made from carbon nanotubes. It was able to execute a "Hello, World!" program with the message "Hello, world! I am RV16XNano, made from CNTs". It is based on the RISC-V instruction set and runs standard 32-bit instructions on 16-bit data and addresses.
What Cedric can do
The only operation that Cedric can carry out is SUBNEG (subtract and branch if negative), which is very simple but, when repeated, yields a complete Turing machine. Using SUBNEG, Cedric can count, order numbers, and choose between two values.
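The idea behind a one-instruction machine can be illustrated with a tiny interpreter. The sketch below is a SUBLEQ-style variant in Python (branch on a result less than or equal to zero); the memory layout, branching rule, and halting convention are illustrative only, not Cedric's actual instruction encoding:

```python
def subneg_run(mem):
    """One-instruction machine: each instruction is a triple (a, b, c).
    Execute mem[b] -= mem[a]; if the result is <= 0, jump to address c,
    otherwise fall through to the next triple. A jump to a negative
    address halts the machine."""
    pc = 0
    while 0 <= pc <= len(mem) - 3:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Add mem[A] into mem[B] using three instructions and a zeroed scratch
# cell Z: Z -= A; B -= Z (i.e. B += A); Z -= Z (clear Z and halt).
A, B, Z = 9, 10, 11
prog = [A, Z, 3,    # Z -= mem[A]
        Z, B, 6,    # B -= Z, which means B += mem[A]
        Z, Z, -1,   # clear Z; result 0 <= 0, so jump to -1 and halt
        7, 5, 0]    # data cells: mem[A] = 7, mem[B] = 5, scratch Z = 0
subneg_run(prog)    # afterwards mem[B] == 12
```

Addition, comparison, and looping are all built from this single subtract-and-branch primitive, which is what makes such a machine Turing-complete despite having only one operation.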
Current miniaturization and theoretical limits
Cedric has a miniaturization level of 8,000 nanometers, while the theoretical limit is 24–32 nanometers.
Practical problems of realization
The biggest problem in building a carbon nanotube computer is the alignment of the carbon nanotubes. As Shulaker has explained, even 98% alignment accuracy is not acceptable, because out of one billion transistors this means 20,000,000 (20 million) non-functioning transistors, which effectively block the computer from functioning. This issue was solved with a system that uses electricity to vaporize the misaligned nanotubes. In addition, an algorithm makes a working computer possible even if not all the nanotubes are aligned. This approach is important because it is industrially usable, making it possible to mass-produce carbon nanotube computers. Cedric has 0.5% non-aligned nanotubes, but thanks to this algorithm it works anyway.
Benefits
Transistor-based digital circuits fabricated from carbon nanotubes can potentially outperform silicon by more than an order of magnitude.
Probable date of marketing
It is realistic that carbon nanotube computers will be commercialized between 2023 and 2025.
References
Carbon nanotubes
Stanford University
Science and technology in the San Francisco Bay Area
2013 in computing
|
6165386
|
https://en.wikipedia.org/wiki/List%20of%20Rutgers%20University%20people
|
List of Rutgers University people
|
This is an enumeration of notable people affiliated with Rutgers University, including graduates of the undergraduate and graduate and professional programs at all three campuses, former students who did not graduate or receive their degree, presidents of the university, current and former professors, as well as members of the board of trustees and board of governors, and coaches affiliated with the university's athletic program. Also included are characters in works of fiction (books, films, television shows, et cetera) who have been mentioned or were depicted as having an affiliation with Rutgers, either as a student, alumnus, or member of the faculty.
Some noted alumni and faculty may also be listed in the main Rutgers University article or in some of the affiliated articles. Individuals are sorted by category and alphabetized within each category. The default campus for listings is the New Brunswick campus, the system's largest, with Camden and Newark campus affiliations noted in parentheses.
Presidents of Rutgers University
Since 1785, twenty men have served as the institution's president, beginning with Jacob Rutsen Hardenbergh (1735–1790), a Dutch Reformed clergyman who was responsible for establishing the college. Before 1930, most of the university's presidents (eight of the twelve) were clergymen affiliated with Christian denominations in the Reformed tradition (either Dutch Reformed, Presbyterian, or German Reformed). Presidents Hasbrouck (1840–1850), Frelinghuysen (1850–1862), Gates (1882–1890), and Scott (1891–1906) were all laymen. Two presidents were alumni of Rutgers College: William H. S. Demarest (Class of 1883) and Philip Milledoler Brett (Class of 1892). The current president is Jonathan Holloway (born 1967). Holloway, a U.S. historian, is the first person of color to lead Rutgers University.
The president serves in an ex officio capacity as a presiding officer within the university's 59-member Board of Trustees and its eleven-member Board of Governors, and is appointed by these boards to oversee day-to-day operations of the university across its three campuses. He is charged with implementing "board policies with the help and advice of senior administrators and other members of the university community." The president is responsible only to those two governing boards—there is no oversight by state officials. Frequently, the president also occupies a professorship in his academic discipline and engages in instructing students.
Nobel laureates
Milton Friedman, 1912–2006, A.B. 1932, economist, public intellectual, winner of the Nobel Prize in Economics (1976)
Toni Morrison (honorary doctorate), taught at Rutgers, novelist (Beloved, Song of Solomon), Nobel Prize in Literature (1993), Pulitzer Prize for Fiction (1988)
Heinrich Rohrer, 1961–1963, physicist, winner of the Nobel Prize in Physics (1986)
Selman Waksman, 1918–1958, professor of microbiology; discovered 22 antibiotics (including streptomycin); winner of the Nobel Prize in Physiology or Medicine (1952)
Notable trustees and benefactors
Andrew Kirkpatrick (1756–1831), lawyer, Chief Justice of New Jersey Supreme Court, trustee 1782–1809
Littleton Kirkpatrick (1797–1859), attorney and politician, trustee 1841–1859
Henry Rutgers (1745-1830), military officer and philanthropist after whom Rutgers is named
Notable alumni
Architecture
Louis Ayres, Medievalist architect best known for designing the United States Memorial Chapel at the Meuse-Argonne American Cemetery and Memorial and the Herbert C. Hoover U.S. Department of Commerce Building
Frank Townsend Lent
Arts and entertainment
Art
Brad Ascalon, Class of 1999, industrial designer
Alice Aycock, Class of 1968, sculptor
Marc Ecko, fashion designer
Lore Kadden Lindenfeld, textile designer
Kojiro Matsukata, art collector whose collection helped form the National Museum of Western Art in Tokyo
George Segal, GSNB 1963, sculptor
Entertainment
Livingston Allen, hip hop YouTuber better known as DJ Akademiks
Roger Bart, actor (Desperate Housewives, The Producers; Tony Award for You're a Good Man, Charlie Brown)
Mario Batali, Class of 1982, chef, restaurateur, television host (Molto Mario, Iron Chef America)
Bill Bellamy, Class of 1989, comedian, actor
Avery Brooks, Class of 1973, actor, educator
John Carpenter, Class of 1990, first-ever champion of Who Wants to Be a Millionaire television quiz show
Asia Carrera (born Jessica Steinhauser), Class of 1995 (did not graduate), porn star; majored in Business and Japanese
Kevin Chamberlin, actor (Tony Award nominations for Dirty Blonde and Seussical)
Larry Charles, film director (Borat and Bruno)
Jim Coane, Class of 1970, Emmy award-winning television executive producer, writer and director (Dragon Tales)
Jessica Darrow, Class of 2017, actress and singer, voice of Luisa Madrigal in Disney's Encanto
Kristin Davis, Class of 1987, actress (Sex and the City)
Mike Colter, actor (Netflix's Luke Cage)
Tim DeKay, Class of 1990 (Mason Gross School of the Arts), actor (White Collar)
John DiMaggio, voice actor (Bender on Futurama and Jake the Dog on Adventure Time), voicework in anime (Princess Mononoke, Vampire Hunter D: Bloodlust)
Katie Dippold, television and film writer (Parks and Recreation, The Heat)
Wheeler Winston Dixon, filmmaker, critic, author
Keir Dullea, actor (2001: A Space Odyssey)
Simon Feil, Class of 2000, actor (Julie & Julia, House of Cards)
Jon Finkel, Class of 2003, professional Magic: The Gathering player; inducted into the MTG Hall of Fame
Calista Flockhart, Class of 1988, actress (The Birdcage, Ally McBeal), Emmy winner, spouse of Harrison Ford
Brandon Flynn, actor (13 Reasons Why)
Marlene Forte, (attended) actress, sister of HSN host Lesley Machado
Gwendolyn Audrey Foster, filmmaker, critic, author
Midori Francis, actress (Dash & Lily)
James Gandolfini, Class of 1983, actor (The Sopranos), Emmy winner, voice actor (Where the Wild Things Are)
Chris Gethard, comedian, actor
Judy Gold, B.A. 1984, comedian, actress
Dan Green, voice actor (Yu-Gi-Oh!)
Charles Hallahan, Class of 1969 (Camden), actor (The Thing, Hunter)
Robert Harper, Class of 1974, actor (Once Upon A Time In America, Frank's Place, Creepshow, Commander in Chief...)
Bakhtiyaar Irani, Class of 1999, Indian television actor, participant in the Indian version of Big Brother, Bigg Boss
Bill Jemas, Class of 1980, writer, creative director, publisher for Marvel Comics Group
Ed Kalegi, national talk radio host and personality The Weekend with Ed Kalegi, actor
Jason Kaplan, associate producer of The Howard Stern Show
Jane Krakowski, Class of 1988, actress (Ally McBeal, 30 Rock)
William Mastrosimone, Class of 1980, playwright, Golden Globe Award winner
Christopher McCulloch, creator of The Venture Bros.
Paolo Montalban, Broadway, television and film actor
Luis Moro, Class of 1987, actor, comic, filmmaker, writer, Independent Spirit Award Nominee, Best Actor Nominee ABFF (Love and Suicide)
Oswald "Ozzie" Nelson, Class of 1927, musician and actor (The Adventures of Ozzie and Harriet)
Scott Patterson, actor (Saw IV, Saw V)
Hasan Piker, Popular Twitch streamer and online personality
Matt Pinfield, radio DJ, host of MTV's 120 Minutes
Molly Price, actress
Robert Pulcini, Class of 1989 (Camden), Academy Award nominated documentary and feature filmmaker, co-director of American Splendor
Sheryl Lee Ralph, English Lit/Theatre degree, 1975, original Deena Jones in the Broadway smash hit musical Dreamgirls, winner of six Tony Awards
Roy Scheider, actor (Jaws, Sorcerer)
Henry Selick, attended for a year, director (The Nightmare Before Christmas, Coraline)
Michael Sorvino, actor, son of Paul Sorvino
Dina Spybey, actress (Disney's The Haunted Mansion)
Sebastian Stan, Class of 2005, actor (Captain America: The First Avenger, The Covenant)
Aaron Stanford, Class of 2000, actor (X2, Tadpole)
Kurt Sutter, Class of 1986, writer (The Shield), creator of Sons of Anarchy
Daniel Travis, actor (Open Water)
Paul Wesley, actor (Vampire Diaries)
Ashley Woodfolk, young adult fiction writer
Cary Woodworth, Class of 1999, actor (Mary and Rhoda), songwriter
Karen Young, actress (The Sopranos, Law & Order)
Ramy Youssef, attended, actor (Ramy)
Saul Zaentz, film producer (One Flew over the Cuckoo's Nest, Amadeus)
Daniel O'Brien, Class of 2008, comedian/writer (Cracked.com, How to Fight Presidents)
Journalism
Spencer Ackerman, Class of 2002, journalist for The Daily Beast
Joan Acocella, Class of 1984, journalist, author, dance critic for The New Yorker
Martin Agronsky, Class of 1936, pioneering TV journalist
Amanda Alcantara, Class of 2012, writer and activist
Carrie Budoff Brown, editor of Politico
Lisa Daftari, foreign affairs investigative journalist for "The Foreign Desk"
Stuart Diamond, journalist, New York Times, Pulitzer Prize. Author, Getting More, NY Times bestseller
Dylan Dreyer, meteorologist
Rich Edson, Class of 2003, Washington correspondent, Fox News Channel
Mike Emanuel, journalist, Chief Congressional Correspondent and former White House Correspondent for Fox News Channel
Nick Gillespie, Class of 1985, journalist, editor
Bernard Goldberg, Class of 1967, journalist
Jerry Izenberg, Class of 1952, Emmy-winning sports journalist
Amani Al-Khatahtbeh, Class of 2014, author and tech entrepreneur
Jeff Koyen, Class of 1991, journalist and entrepreneur
Gene Lyons, Class of 1952, political columnist
Natalie Morales, Class of 1994, journalist and correspondent for The Today Show
Richard Newcomb, Class of 1962, journalist and author, best-selling author of Iwo Jima! and Abandon Ship!
James O'Keefe, Class of 2006, political activist
Wendy Osefo, Class of 2016 (Camden, PhD), political commentator and assistant professor at Johns Hopkins University.
Rebecca Quick, Class of 1993, journalist and anchor (CNBC Squawk Box)
Larry Stark, Class of 1956, Boston journalist and theater critic, Theater Mirror
Mike Taibbi, Class of 1971, journalist and correspondent for NBC Nightly News
Milton Viorst, Class of 1951, journalist, author, Middle East scholar
Cathy Young, Class of 1988, journalist and non-fiction author
Music
Kenny Barron, jazz pianist in the Dizzy Gillespie quartet
Laurie Berkner, children's musician; Jack's Big Music Show
Regina Belle, singer (A Whole New World), plays during end credits of (Disney's Aladdin)
Just Blaze, Grammy Award-nominated hip hop producer
David Bryan, keyboardist and member of band Bon Jovi
Jim Conti, tenor saxophonist for the third wave ska band Streetlight Manifesto
Mike Glita, musician, producer, songwriter, manager, and former bassist for New Jersey post-hardcore band Senses Fail
Rasika Shekar, Indo-American flautist and singer who plays the bansuri, a bamboo flute
Roger Lee Hall, music preservationist, composer
Mark Helias, bassist, composer
Frank Iero, guitarist and backup vocals for the band My Chemical Romance; lead singer of post-hardcore/screamo band Leathermouth; co-founder of the Skeleton Crew company (dropped out, was on a scholarship)
Ben Jelen, musician
Brian Joo, Korean R&B singer; half of Fly to the Sky
Tomas Kalnoky, lead singer-songwriter and lead guitarist of third wave ska band Streetlight Manifesto; formed Catch 22 and Bandits of the Acoustic Revolution
Kenneth Lampl, Juilliard School faculty, film composer and professor
Dan Lavery, Grammy-nominated bass player for rock group Tonic and occasionally The Fray
Looking Glass, 1970s band, one-hit wonder with the song "Brandy"
Earl MacDonald, Class of 1995 (M.Mus.), Director of Jazz Studies at the University of Connecticut; former musical director; pianist with Maynard Ferguson
Marissa Paternoster, artist; lead singer-songwriter and lead guitarist of independent rock band Screaming Females and solo project Noun
Cristina Pato, Galician bagpiper
Pras, Grammy-winning rapper from the Fugees
James Romig, Class of 2000 (Ph.D.), composer. 2019 Pulitzer Prize in Music, finalist
Gabe Saporta, musician with Midtown, Cobra Starship, and Humble Beginnings
Sister Souljah, born Lisa Williamson, Class of 1986, author
Soraya, Colombian-American singer-songwriter, guitarist, arranger and record producer
Athletics
Baseball
Jason Bergmann, starting pitcher for the Washington Nationals
Joe Borowski, relief pitcher for the Cleveland Indians; played for the Chicago Cubs, Florida Marlins, New York Yankees, Atlanta Braves, Baltimore Orioles, and Tampa Bay Devil Rays
David DeJesus, center fielder for the Oakland Athletics
Tom Emanski, creator of Tom Emanski Instructional Videos
Jeff Frazier, plays for the Washington Nationals organization; brother of Todd Frazier
Todd Frazier, plays for the Texas Rangers; member of the 1998 LLWS champions, Toms River, New Jersey
Don Taussig (born 1932), Major League Baseball player
Jeff Torborg, Class of 1963, Major League Baseball catcher (Los Angeles Dodgers and California Angels); manager of several teams
Eric Young, Class of 1992, Major League Baseball player
Basketball
James Bailey, Class of 1978, NBA: 1979–1987
John Battle, guard for the Atlanta Hawks and Cleveland Cavaliers, 1985–1995
Hollis Copeland, NBA: 1979–1981
Waliyy Dixon, AND1 Mixtape Tour streetball legend
Quincy Douby, guard for the Toronto Raptors
Brian Ellerbe, Class of 1985, head coach of the Michigan Wolverines
Luis Flores, professional basketball player, 2009 top scorer in the Israel Basketball Premier League
Bob Greacen, NBA: 1969–1971
Art Hillhouse, NBA: 1946–1947
Roy Hinson, Class of 1983, NBA: 1983–1990
Charles Jones, NBA: 1999–1999
Dahntay Jones, NBA: 2003–2006
Eddie Jordan, Class of 1977, head coach of the Rutgers Men's Basketball team; former head coach of the Washington Wizards
Steve Kaplan, Class of 1972, American-Israeli basketball player in the Israel Basketball Premier League
Herve Lamizana, Class of 2004, power forward, Indios de Mayagüez
Bob Lloyd, NBA: 1967–1968 professional player with the New York Nets; CEO of Mindscape; Chairman of the V Foundation for Cancer Research which honors the memory of his former Rutgers backcourt teammate, Jim "Jimmy V." Valvano
Hamady N'Diaye, Class of 2010, 26th pick of the second round (56th selection overall) in the 2010 NBA Draft to play for the Minnesota Timberwolves; his draft rights have been traded to the Washington Wizards
Chelsea Newton, Class of 2004, Sacramento Monarchs of the WNBA
Arthur Perry, basketball player and coach
Cappie Pondexter, Class of 2006, 2nd overall pick in the 2006 WNBA Draft by the Phoenix Mercury; 2008 Summer Olympic gold medalist for United States Women's Basketball in Beijing
Phil Sellers, NBA: 1976–1976
David Stern, Class of 1963, Commissioner of the National Basketball Association
Tammy Sutton-Brown, Class of 2001, Charlotte Sting of the WNBA
Jim Valvano, Class of 1967, won NCAA Men's Basketball National Championship at N.C. State
Sue Wicks, Class of 1988, member of the 1988 Olympic team and New York Liberty (1997–2002) of the WNBA
Heather Zurich, Class of 2009, player; assistant coach of the UC Santa Barbara Gauchos team
Fencing
Alex Treves (born 1929), Italian-born American Olympic fencer; won the NCAA saber title in both 1949 and 1950 and was undefeated in three years of college competition
Football
Mike Barr, Class of 2004, NFL punter (Pittsburgh Steelers, Frankfurt Galaxy)
Marco Battaglia, Class of 1996, NFL tight end (Cincinnati Bengals, Pittsburgh Steelers)
Steve Belichick, Class of 2011, Assistant Coach for the New England Patriots
Jay Bellamy, Class of 1994, NFL safety (New Orleans Saints)
Brandon Bing, Class of 2011, safety for the New York Giants
Gary Brackett, Class of 2003, NFL linebacker (Indianapolis Colts)
Chris Brantley, Class of 1992, NFL player (Rams, Bills)
Kenny Britt, Class of 2010 (did not graduate), NFL player (Titans)
Frank Burns, Class of 1949, NFL quarterback (Philadelphia Eagles), Head Coach at Rutgers 1973–1983
Michael Burton, Class of 2010, fullback for the Detroit Lions
Deron Cherry, Class of 1980, safety with the Kansas City Chiefs; member of the NFL 1980s All-Decade Team
Anthony Davis, Class of 2010, NFL offensive tackle (San Francisco 49ers)
Jack Emmer, Class of 1967, NFL wide receiver (New York Jets); Hall of Fame college lacrosse coach; head coach of 2002 U.S. Lacrosse World Champions
Eric Foster, Class of 2008, NFL defensive tackle (Indianapolis Colts)
Gary Gibson, Class of 2005, NFL defensive tackle (Carolina Panthers)
Clark Harris, Class of 2007, NFL tight end (Houston Texans)
Homer Hazel, "Pop Hazel", All-American football star and member of the College Football Hall of Fame
Carl Howard, Class of 1984, NFL cornerback (New York Jets)
Jeremy Ito, Class of 2008
James Jenkins, Class of 1991, NFL tight end (Washington Redskins)
Ed Jones, Class of 1974, CFL All-Star
Nate Jones, Class of 2004, NFL cornerback (Miami Dolphins)
Rashod Kent, Class of 2003, NFL tight end (Houston Texans)
Alex Kroll, Class of 1962, AFL center (New York Titans), CEO of Young & Rubicam
Brian Leonard, Class of 2007, NFL running back (Cincinnati Bengals)
Steve Longa, linebacker (Detroit Lions)
Ray Lucas, Class of 1996, NFL quarterback 1996–2002 (New York Jets, Miami Dolphins), TV Football commentator
Dino Mangiero, Class of 1980, NFL defensive end (Seattle Seahawks)
Devin McCourty, Class of 2010, Pro Bowl NFL cornerback (New England Patriots)
Jason McCourty, Class of 2009, NFL cornerback (Tennessee Titans)
Mike McMahon, Class of 2001, NFL quarterback (Minnesota Vikings)
Robert Nash, "Nasty Nash", first football player traded in the NFL and first Captain of the New York Giants
Ryan Neill, Class of 2006, NFL defensive end (Buffalo Bills)
Shaun O'Hara, Class of 2000, NFL center (New York Giants)
Raheem Orr, Class of 2004, NFL defensive end, AFL DL/OL (Houston Texans, Philadelphia Soul)
J'Vonne Parker, Class of 2004, NFL defensive tackle (Cleveland Browns)
Bill Pickel, Class of 1982, NFL defensive tackle (Los Angeles Raiders)
Joe Porter, Class of 2007, NFL cornerback (Green Bay Packers)
Nick Prisco, NFL player
Ray Rice, NFL running back (Baltimore Ravens)
Paul Robeson, Class of 1919, athlete, actor, singer, political activist, NFL guard 1920–1922 (Akron Pros, Milwaukee Badgers)
Stan Rosen (1906–1984), NFL football player
Mohamed Sanu, Class of 2012, wide receiver (Cincinnati Bengals)
Tom Savage, attended, quarterback (Houston Texans)
L.J. Smith, Class of 2003, NFL tight end (Philadelphia Eagles)
Pedro Sosa, Class of 2008, offensive lineman (Miami Dolphins)
Darnell Stapleton, Class of 2007, NFL Guard (Pittsburgh Steelers)
Reggie Stephens, Class of 1999, cornerback (New York Giants)
Cameron Stephenson, Class of 2007, NFL Guard (Jacksonville Jaguars)
Tyronne Stowe, Class of 1987, linebacker (Phoenix Cardinals)
Harry Swayne, Class of 1986, NFL lineman 1987–2001
Rashod Swinger, NFL DT 1997–1999 (Arizona Cardinals)
Mike Teel, Class of 2009, NFL quarterback 2009–2011 (Seattle Seahawks), quarterbacks coach (Kean University, Wagner College)
Lou Tepper, Class of 1967, former head coach of Illinois
Tiquan Underwood, Class of 2009, wide receiver (New England Patriots)
Elnardo Webster, Class of 1992, NFL player, Pittsburgh Steelers
Sonny Werblin, Class of 1932, founder of the New York Jets; President and CEO Madison Square Garden Corporation; President of Music Corporation of America-TV
Jamaal Westerman, Class of 2009, NFL player, linebacker and defensive end (Jets)
Jeremy Zuttah, Class of 2008, offensive lineman (Tampa Bay Buccaneers)
Powerlifting
Lev Susany, Class of 2011, Australian powerlifter and Commonwealth record holder
Soccer
Jon Conway, Class of 1999, goalkeeper for Chicago Fire
Josh Gros, Class of 2003, midfielder for D.C. United
Nick LaBrocca, Class of 2006, midfielder for Colorado Rapids
Alexi Lalas, Class of 1991, former U.S. Soccer National Team member, former president and General Manager of the Los Angeles Galaxy
Carli Lloyd, midfielder for the United States women's national soccer team and the Manchester City W.F.C.
Steve Mokone, player for FC Barcelona and South Africa
Peter Vermes, Class of 1987, former U.S. Soccer National Team member, former professional player in Major League Soccer
Swimming
George Kojac, member of the International Swimming Hall of Fame; gold medalist in swimming at the 1928 Summer Olympics
Walter Spence, member of International Swimming Hall of Fame; broke five world records in his first year of competitive swimming (1925)
Wrestling
Nick Catone, retired professional mixed martial artist who competed in the UFC
Anthony Ashnault, 2019 NCAA Wrestling Champion, 149 lb weight class. 4-time NCAA All-American
Nick Suriano, 2019 NCAA Wrestling Champion, 133 lb weight class, first wrestling national champion for Rutgers
MMA
Mickey Gall, professional mixed martial arts fighter, currently fighting in the Welterweight Division of the UFC
Hockey
Andrew Barroway, majority owner and chairman of the Arizona Coyotes.
Business
Greg Brown, Class of 1982, President and Co-CEO of Motorola; CEO of the Broadband Mobility Solutions Business Unit
John Joseph "Jack" Byrne, Jr., chairman and CEO of GEICO, which he pulled from the brink of insolvency in the mid-1970s; chairman and CEO of White Mountains Insurance Group (formerly Fund American Enterprises, Inc.); chairman of the Board of Overstock.com 2005–06
Arturo L. Carrión Muñoz, former executive vice president of the Puerto Rico Bankers Association
Stephen Chazen, CEO of Occidental Petroleum
Jay Chiat, Class of 1953, founder of TBWA\Chiat\Day advertising
Nick Corcodilos, professional headhunter
Alvaro de Molina, Class of 1988, MBA, retired CFO of Bank of America
Marc Ecko, founder of Complex magazine and CEO of Marc Ecko Enterprises
Mark Fields, B.A. Economics, President and chief executive officer of Ford Motor Company
Sharon Fordham, Class of 1975, CEO of WeightWatchers.com, Inc.
Robert L. Fornaro, CEO of Spirit Airlines
Otto Hermann Kahn, Rutgers Trustee, financier, patron of the arts
Rana Kapoor, founder/CEO of Yes Bank
Maryann Keller, Class of 1966, B.S., former President of Priceline.com automotive services division
Leonor F. Loree, Class of 1877, President of the Pennsylvania Railroad
Walt MacDonald, Class of 1974 (Camden), CEO of Educational Testing Service
Duncan MacMillan, B.S. 1966, co-founder of Bloomberg L.P.
Bernard Marcus, Class of 1951, founder of Home Depot
Ernest Mario, Class of 1961, former CEO of GlaxoSmithKline
Sherilyn McCoy, Class of 1988, MBA, CEO of Avon Products
Gene Muller, Class of 1977 (Camden), founder and CEO of Flying Fish Brewing
Edward H. Murphy Ph.D., retired from American Petroleum Institute
George Norcross (Camden), insurance executive and chairman of Cooper Health System
Randal Pinkett, Class of 1994, winner of The Apprentice 4; chairman and CEO of BCT Partners
Robert C. Pruyn, Class of 1869, President of the Embossing Company, and the National Commercial Bank of Albany
Gary Rodkin, former ConAgra CEO
Bill Rasmussen, Class of 1960 MBA, managing director at CSFBdirect; founder of ESPN
Tom Renyi, Class of 1968 (BA) and 1969 (MBA), former chairman and CEO of Bank of New York
Barry Schuler, Class of 1976, former chairman and CEO of AOL
Bill Schultz, Class of 1971, MBA, former CEO of Fender Musical Instruments
Harvey Schwartz, Class of 1987, former president and Co-Chief Operating Officer of Goldman Sachs
Steven H. Temares, CEO of Bed Bath & Beyond
Sir William Cornelius Van Horne, former President of the Canadian Pacific Railway and builder of that country's Transcontinental railroad
William Bernard Ziff, Jr., Ziff Davis Inc. publishing executive
Crime
Melanie McGuire, convicted of murdering her husband, dismembering his body and putting it in suitcases
Jennifer San Marco, perpetrator of the shooting at the Goleta, California United States Postal Service center on January 30, 2006, when seven people were killed.
Rana Kapoor, convicted of embezzlement and fraud worth $100 million.
Education
Philip Milledoler Brett, A.B. 1892, Acting President of Rutgers University (1930–1931); corporate attorney
Carol T. Christ, A.B. 1966, Former President of Smith College and current Chancellor of U.C. Berkeley
Alvin S. Felzenberg, historian, political commentator, member of 9/11 Commission
Charles Ferster, B.S. 1947, behavioral psychologist, author and professor (deceased 1981)
Richard H. Fink, founder of Mercatus Center, current executive vice president at Koch Industries
Milton Friedman, A.B. 1932, economist; public intellectual; winner of the Nobel Prize in Economics (1976)
William H. S. Demarest, A.B. 1883, Professor of Theology and Church Government; President of Rutgers University (1906–1924), President of New Brunswick Theological Seminary
Brigid Callahan Harrison, political science professor and academic at Montclair State University
Jerome Kagan, B.S. 1950, psychologist
William English Kirwan, M.A. 1962, Ph.D. 1964, mathematician; Chancellor emeritus of the University System of Maryland (2002–2015); former President of Ohio State University (1998–2002)
Sarah-Jane Leslie, B.A., current Dean of Princeton University Graduate School
Earl MacDonald, Class of 1995 (M.Mus.), Associate Professor of Music at the University of Connecticut
Richard P. McCormick, A.B. 1938, M.A. 1940, historian; Professor of History and Dean of Faculty at Rutgers University; President of New Jersey Historical Society
John McWhorter, B.A. 1985, historian; author of books on linguistics and race relations; former professor of linguistics at University of California, Berkeley; Senior Fellow at Manhattan Institute
Roy Franklin Nichols, A.B. 1918, M.A. 1919, historian, winner of the Pulitzer Prize (1949)
John C. Norcross, B.S. 1980 (Camden), psychologist, university professor
Dennis A. Rondinelli, B.A. 1965, professor and researcher of public administration at the Sanford School of Public Policy.
Camilla Townsend, Ph.D. 1995, professor of history at Rutgers-New Brunswick
Selman Waksman, B.Sc. 1915 M.Sc. 1916, professor of microbiology, discovered 22 antibiotics (including Streptomycin) and winner of the Nobel Prize in Physiology or Medicine (1952)
Carl R. Woodward, B.Sc. 1914, President of the University of Rhode Island
Government, law, and public policy
Rosemary Alito, J.D. 1978, corporate and labor attorney for K&L Gates, sister of Samuel Alito
Curt Anderson, member of Maryland House of Delegates (1983–present); chair of Legislative Black Caucus of Maryland (1989–1991)
Stewart H. Appleby, Class of 1913, member of the United States House of Representatives from New Jersey (1925–1927)
Adam Leitman Bailey, lawyer, defended the Ground Zero Mosque and other prominent cases
Cheri Beasley, B.A. 1988, former Chief Justice of the North Carolina Supreme Court; candidate in the 2022 United States Senate election in North Carolina
Joseph P. Bradley, A.B. 1836, Associate Justice, United States Supreme Court (1870–1891)
Sam Brown, M.A. 1966, organizer of the Vietnam Moratorium and former state treasurer of Colorado
Wayne R. Bryant, J.D. 1972 (Camden), New Jersey Senator (1995-2008)
Donald Burdick, B.S. 1956, M.S., 1958, United States Army Major General who served as Director of the Army National Guard
Clifford P. Case, A.B. 1925, U.S. House of Representatives (1945–1953), United States Senate (1955–1979)
William T. Cahill, JD 1937 (Camden), 46th Governor of New Jersey
Simeon De Witt, A.B. 1776, Surveyor-General for the Continental Army, 1776–1783, and the State of New York, 1784–1834
Michael DuHaime, B.A., 1995, Campaign Manager, Rudy Giuliani for President, 2008; Political Director, Republican National Committee, 2005–2006; Regional Political Director, Bush-Cheney '04, 2003–2004
George S. Duryee B.A. 1872, Member of the New Jersey State Assembly and The United States Attorney for the District of New Jersey
Maria Fernanda Espinosa, former President of the United Nations General Assembly
Richard Fink, B.A. in Economics founded the Center for Study of Market Processes at Rutgers University. After the Koch brothers donated $30 million, it moved to George Mason University in the 1980s and in 1999 it became the Mercatus Center.
James J. Florio, J.D. 1967 (Camden), 49th Governor of New Jersey (1990–1994)
Louis Freeh, Class of 1971, Director of the FBI (1993–2001)
Frederick T. Frelinghuysen, A.B. 1836, United States Senate (1866–1869, 1871–1877); Secretary of State (1881–1885)
Scott Garrett, J.D. 1984 (Newark), U.S. House of Representatives (2003–2017)
Scott Gration, Obama nominee for NASA Administrator
John H. Griebel, B.S. 1926, Marine Corps General
Diane Gutierrez-Scaccetti, M.S. 1987, Nominee for the Commissioner of the New Jersey Department of Transportation
Garret A. Hobart, A.B. 1863, industrialist, Vice President of the United States (1897–1899)
James J. Howard, M.Ed. 1958, represented New Jersey's 3rd congressional district in the United States House of Representatives 1965–1988
Richard J. Hughes, J.D. 1931, New Jersey Governor, Chief State Supreme Court Justice
William Hughes, Class of 1955, Congressman, United States Ambassador to Panama
Jack H. Jacobs, Class of 1966, M.A. 1972, Medal of Honor recipient, military analyst for MSNBC
Robert E. Kelley, highly decorated and youngest Lieutenant General in USAF history; Superintendent of the United States Air Force Academy, 1981–83
Herbert Klein, member, United States House of Representatives
Stephanie Kusie, Member of Canadian Parliament for Calgary Midnapore
Joseph Lazarow, Mayor of Atlantic City, New Jersey 1976–1982
Kenneth LeFevre, B.S. 1976 (Camden), member of the New Jersey General Assembly 1996–2002
Wu Weihua, Vice Chairman of the Standing Committee of the National People's Congress of the People's Republic of China
Tim Louis, Member of the Parliament of Canada
George C. Ludlow, A.B. 1850, 25th Governor of New Jersey
Gail D. Mathieu, J.D. (Newark), current United States Ambassador to Namibia and former United States Ambassador to Niger
Dina Matos, former First Lady of New Jersey and ex-wife of former NJ governor Jim McGreevey
Ivy Matsepe-Casaburri, South African Minister of Communications (1999–2009)
D. Bennett Mazur (c. 1925–1994), member of the New Jersey General Assembly
Bob Menendez, J.D. (Newark), U.S. House of Representatives (1992–2005); United States Senator (2006–present)
Anne Milgram, Attorney General of New Jersey and first Assistant Attorney General of New Jersey
Geoffrey H. Moore, ninth U.S. Commissioner of the Bureau of Labor Statistics, known as the father of business cycle analysis; a graduate of the College of Agriculture, he entered Rutgers intent on a career in poultry after working after school and summers for a chicken farmer
A. Harry Moore, J.D., Governor of New Jersey, U.S. Senator from New Jersey
David A. Morse, A.B. 1929, Director-General of ILO who accepted the Nobel Peace Prize in 1969 on behalf of the ILO
Joseph A. Mussomeli, J.D. 1978 (Camden), former ambassador to Slovenia and Cambodia
William A. Newell, A.B. 1836, physician; Governor of New Jersey (1857–1860)
George Norcross (Camden, attended), Democratic Party fundraiser, insurance and media executive
Janet Norwood, Class of 1945 (New Jersey College for Women, now Douglass Residential College), first female Commissioner of the Bureau of Labor Statistics, appointed by President Jimmy Carter; inducted into the Rutgers Hall of Distinguished Alumni in 1987
Hazel O'Leary J.D., U.S. Secretary of Energy (1993–1997)
Edward J. Patten, J.D. 1927 (Newark), U.S. House of Representatives (1963–1980)
Clark V. Poling, A.B. 1933, one of the Four Chaplains killed on the troop transport Dorchester
Robert H. Pruyn, A.B. 1833, A.M. 1836, second United States Ambassador to Japan
Dana Redd, B.A. 1989 (Camden), Mayor of Camden, New Jersey.
Matthew John Rinaldo, B.S. 1953, represented New Jersey in the United States House of Representatives for twenty years, in the 12th congressional district (1973–1983) and in the 7th congressional district (1983–1993)
Norman M. Robertson, New Jersey State Senator
Eduardo Robreno, J.D. 1978 (Camden), Federal Judge for the United States District Court for the Eastern District of Pennsylvania
Richie Roberts, (Newark), prosecutor who took down Frank Lucas, portrayed in the movie American Gangster
Peter W. Rodino, Jr., J.D. 1937, Congressman
Maria Rodriguez-Gregg, B.A. 2013 (Camden), member of the New Jersey General Assembly
Esther Salas, J.D. 1991, United States District Judge of the United States District Court for the District of New Jersey
David Samson, B.A. 1961, New Jersey Attorney General from 2002 to 2003
Salvatore Eugene Scalia, law clerk and father of Supreme Court justice Antonin Scalia
Mike Schofield, B.A., Republican member of the Texas House of Representatives; former policy advisor to then-Governor Rick Perry
James Schureman, A.B. 1775, Continental Congress, Senator
Martin J. Silverstein, B.A. 1976, United States Ambassador to Uruguay from 2001 to 2005
Gregory M. Sleet, J.D. 1976 (Camden), Federal Judge for the United States District Court for the District of Delaware
Elliott F. Smith (1931–1987), politician who served in the New Jersey General Assembly from 1978 to 1984, where he represented the 16th Legislative District.
Jeremiah Smith, 6th governor of New Hampshire
Mark Sokolich, B.A., Mayor of Fort Lee, New Jersey
Danene Sorace, MPP, Mayor of Lancaster, Pennsylvania
Darren Soto, B.A. 2000, U.S. House of Representatives, Florida's 9th district (2017–present)
Charles C. Stratton, 15th Governor of New Jersey
Gary Stuhltrager B.A., J.D., eight-term member of the New Jersey General Assembly
Robert Torricelli, Class of 1974, United States Senator, Congressman
Foster M. Voorhees, A.B. 1876, Governor of New Jersey (1898, 1899–1902)
Elizabeth Warren (Newark), United States Senator (D-MA); Chair of the Congressional Troubled Asset Relief Program (TARP) oversight panel; author, contributing editor to the Huffington Post; former Harvard Law School professor
Jacob R. Wortendyke, Class of 1839, member of the United States House of Representatives from New Jersey (1857–1859)
Barbara Wright, M.Ed., member of the New Jersey General Assembly
Library and information science
William B. Brahms B.A. 1989, M.L.S. 2003, librarian and reference book writer
Ted Hines, M.L.S. 1958, Ph.D. 1960, librarian, pioneer in computer information cataloging systems
Literature
Janine Benyus, natural sciences writer
Holly Black, attended, author of The Spiderwick Chronicles
James Blish, Class of 1942, science fiction and fantasy author; wrote A Case of Conscience, winner of 1959 Hugo Award for Best Novel and 2004 Retrospective Hugo Award for Best Novella
Lester Brown, Class of 1955, environmental analyst and author
Denise Drace-Brownell, military writer
Marian Calabro, author and publisher of history books; founder and president of CorporateHistory.net
Jonathan Carroll, Class of 1971, author
Junot Díaz, Class of 1991, author of The Brief Wondrous Life of Oscar Wao, winner of 2008 Pulitzer Prize for Fiction and 2007 National Book Critics Circle Award
Janet Evanovich, Class of 1965, best-selling author
Michael Farber, sports journalist, Elmer Ferguson Memorial Award recipient, Hockey Hall of Fame selection committee member
Richard Florida, author and public intellectual
Alfred Joyce Kilmer, Class of 1908 (did not graduate), poet, died in France during World War I; author of "Trees"
Paul Lisicky, Class of 1983 (Camden), MFA 1986 (Camden), author, creative writing professor, 2016 Guggenheim Fellow
Lawrence Millman, Ph.D., travel writer and mycologist
Ankhi Mukherjee, Ph.D., professor of literature at the University of Oxford
Ira B. Nadel, Class of 1965, M.A. in 1967, biographer, literary critic, distinguished professor at University of British Columbia
Daniel Nester, Class of 1991 (Camden), poet and essayist
Fabian Nicieza, Class of 1983, comic book writer and editor; X-Men, X-Force, New Warriors, Cable and Deadpool, Thunderbolts
Daniel O'Brien, Class of 2008, humorist and novelist
Gregory Pardlo, Class of 1999 (Camden), poet, recipient of the 2015 Pulitzer Prize for Poetry
Robert Pinsky, Class of 1962, Poet Laureate of the United States, Pulitzer Prize nominee
Nina Raginsky, Class of 1962, photographer
Katherine Ramsland, true-crime author, professor of forensics psychology at DeSales University
Philip Roth, attended (Newark), author
Rudy Rucker, master's and Ph.D. in mathematics, author of science fiction as well as non-fiction books on mathematics, computer programming, and the future of technology
Michael Shaara, Class of 1951, author of The Killer Angels, winner of 1975 Pulitzer Prize for Fiction
Judith Viorst, children's literature author; Alexander and the Terrible, Horrible, No Good, Very Bad Day
Dave White, Class of 2001, Derringer Award-winning mystery author
Wesley Yang, essayist, columnist for Tablet magazine, author of The Souls of Yellow Folk
Mathematics
John Charles Martin Nash, mathematician, son of John F. Nash
Medicine
Michael S. Gottlieb, Class of 1969, first physician to identify acquired immune deficiency syndrome (AIDS) as a new disease
Howard Krein, otolaryngologist and plastic surgeon, husband of Ashley Biden and son-in-law of 46th United States President Joe Biden
Sandra Saouaf, immunologist
Albert Schatz, graduate assistant to Selman Waksman, co-discovered Streptomycin
Selman Waksman, Class of 1915, discovered 22 antibiotics, best known for streptomycin; Nobel laureate. Waksman Institute of Microbiology and Waksman Hall are named in his honor
Religion
Eugene Augustus Hoffman (A.B. 1847), Dean and "Our Most Munificent Benefactor" of The General Theological Seminary of the Episcopal Church (New York City)
Matthew Leydt (A.B. 1774), Rutgers' first alumnus and Dutch-Reformed Minister
William P. Merrill (D.D. 1904), first president of the Church Peace Union, writer of "Rise Up, O Men of God"
Clark V. Poling, Dutch-Reformed Army Chaplain among the "Four Chaplains" on the troop transport Dorchester during World War II
Vernon Grounds (B.A. 1937), theologian, Christian educator, Chancellor of Denver Seminary, one of the founders of American Evangelicalism
Michael Plekon (Master's in Sociology and Religion 1977), priest, author, sociologist and theologian
Royalty
Ewuare II, Oba of Benin
Science and technology
Angela Christiano, molecular geneticist in dermatology at Columbia University
Stanley N. Cohen, Class of 1956, geneticist, pioneer in gene splicing
Robert Cooke, first researcher to identify antihistamines
Simeon De Witt, A.B. 1776, geographer for George Washington and Continental Army during the American Revolution
Elma González, PhD 1972, plant cell biologist
Louis Gluck, Class of 1930, engineer; considered the father of neonatology, the science of caring for newborn infants
Thomas H. Haines, biochemist, father of Director of National Intelligence Avril Haines
Danielle Hairston, psychiatrist; faculty at Howard University College of Medicine
Terry Hart, Class of 1978, astronaut, president of LORAL Skynet
George William Hill, Class of 1859, mathematician and astronomer, first President of the American Mathematical Society
George Duryea Hulst, clergyman, botanist, entomologist
Mir Imran, Class of 1976, BS Electrical Engineering (1976), MS Bio Engineering (1978), winner of 2005 Rutgers University Distinguished Engineer Award
Jason Locasale, Class of 2003, scientist; pioneer in the area of modern metabolism research
Richard Swann Lull, paleontologist
George Willard Martin, mycologist and academic
Harry A. Marmer, oceanographer
Charles Molnar, co-inventor of the LINC (acknowledged by the IEEE as the first personal computer)
Nathan M. Newmark, Class of 1948, inventor of the Newmark-beta method of numerical integration used to solve differential equations; winner of the National Medal of Science
Daniel G. Nocera, Class of 1979, chemist noted for work on proton coupled electron transfer
Eva J. Pell, Class of 1972, plant pathologist
Edward Rebar, biologist
Carl Safina, writer and ecological scientist
Peter C. Schultz, Class of 1964, co-inventor of fiber optics
John Scudder, physician; research pioneer in the field of blood storage and replacement
Raymond Seeger, Class of 1926, physicist, fluid dynamics researcher, winner of the Navy Distinguished Public Service Award
Harold Hill Smith, geneticist, responsible for fusing human and plant cells
Evelyn M. Witkin, geneticist, 2015 Lasker Prize winner, awarded National Medal of Science in 2002
Heather Zichal, Deputy for Energy and Climate Change in Obama Administration
Social sciences
Dorothy Cantor, Psy.D. 1976, former president of the American Psychological Association
Notable faculty
Arts
Emma Amos, professor of fine arts; postmodernist painter and printmaker; member of Spiral; editorial board member of feminist journal Heresies; member of Fantastic Women in the Arts
Julianne Baird, professor of music (Camden), soprano
Vivian E. Browne, painter, professor of art
Angelin Chang, former associate professor of music; Grammy Award-winning classical pianist
Leon Golub, professor of fine arts
Al Hansen, professor of fine arts; a founder of Fluxus
Allan Kaprow, professor of fine arts
Roy Lichtenstein, professor of fine arts
Robert Moevs, professor of music
George Segal, professor of fine arts; Fluxus artist
Robert Watts, professor of fine arts
Charles Wuorinen, professor of music; Pulitzer Prize–winning composer and MacArthur fellow
Economics
Harry Gideonse (1901–1985), President of Brooklyn College, and Chancellor of the New School for Social Research
Library and information science
Marc Aronson, Professor of Library and Information Science, author and historian
Nicholas J. Belkin, Professor of Library and Information science
Paul S. Dunkin, Professor Emeritus of Library Services
Elizabeth Futas, Professor of Library and Information Science
Peggy Sullivan, Lecturer
Literature
Miguel Algarín, Professor of English
Giannina Braschi, Professor of Spanish, author of Yo-Yo Boing! and United States of Banana
John Ciardi, Professor of English, poet, translator of Dante's The Divine Comedy
Mark Doty, Professor of English, poet
William C. Dowling, Professor of English
Ralph Ellison, author of Invisible Man
Francis Fergusson, Professor of English, literary critic
H. Bruce Franklin, John Cotton Dana Professor of English and American Studies (Newark); expert on Herman Melville, science fiction, and prison literature
Joanna Fuhrman, poet
Paul Fussell, Professor of English, author, literary critic, social commentator
Rafey Habib, Professor of Literature (Camden), poet
Stanley Kunitz, Visiting Professor of Literature (Camden), poet
Paul Lisicky, Professor of English and Creative writing (Camden), author
Alicia Ostriker, Professor of English, poet
Gregory Pardlo, Professor of English (Camden), poet
David S. Reynolds, Professor of Literature (Camden), cultural critic
Medicine
Sidney Pestka, Professor of Microbiology and Immunology at the Robert Wood Johnson Medical School; the "father of interferon"; received the National Medal of Technology
Robert A. Schwartz, Professor and Head of Dermatology at the Rutgers New Jersey Medical School; co-discoverer of AIDS-associated Kaposi sarcoma and the Schwartz-Burgess syndrome
René Joyeuse MD, MS, FACS, Office of Strategic Services Allied intelligence agent during World War II, CMDNJ Assistant Professor of Surgery, co-founder of the American Trauma Society, involved in training physicians and EMS personnel in trauma care.
Law
Robert E. Andrews, adjunct professor at the School of Law in Camden, Congressman, U.S. House of Representatives
Ruth Bader Ginsburg, professor at the School of Law in Newark, Associate Justice of the Supreme Court of the United States
Arthur Kinoy, professor at the School of Law in Newark; civil rights litigator for leftist causes
Wendell Pritchett, Chancellor of Rutgers University–Camden, Interim Dean and Presidential Professor at the University of Pennsylvania Law School, and Provost of the University of Pennsylvania
Raphael Lemkin, Professor of International Law at the School of Law in Newark, Jurist who coined the term Genocide and key drafter and campaigner for the UN Genocide Convention
Mathematics
Abbas Bahri (1955–2016), professor of mathematics
József Beck, professor of mathematics
Haim Brezis, professor of mathematics
Israel Gelfand (1913–2009), professor of mathematics
Daniel Gorenstein (1923–1992), professor of mathematics
Samuel L. Greitzer (1905–1988), professor of mathematics, founding chairman of the United States of America Mathematical Olympiad
András Hajnal (1931–2016), professor of mathematics
Henryk Iwaniec, professor of mathematics
Jeffry Ned Kahn, professor of mathematics
János Komlós, professor of mathematics, winner of the Alfréd Rényi Prize (1975)
Michael Saks, professor of mathematics, winner of the Gödel Prize (2004)
Glenn Shafer (1992–present), professor of mathematical statistics, co-creator of the Dempster-Shafer theory
Saharon Shelah, professor of mathematics
Doron Zeilberger, professor of mathematics; winner of the Steele Prize for Seminal Contributions to Research (1998)
Philosophy
Elisabeth Camp, associate professor of philosophy
Ruth Chang, professor of philosophy
Frances Egan, professor of philosophy
Jerry Fodor, professor of philosophy and cognitive science
Alvin Goldman, professor of philosophy
Peter D. Klein, professor of philosophy
Brian Leftow, William P. Alston Chair in Philosophy of Religion
Ernest Lepore, professor of philosophy
Alan Prince, professor of linguistics and cognitive science, founder of Optimality Theory (OT)
Zenon Pylyshyn, professor of philosophy and cognitive science
Theodore Sider, professor of philosophy
Holly Martin Smith, Distinguished Professor of Philosophy
Stephen Stich, professor of philosophy
Robert Weingard, professor of philosophy
Samuel Merrill Woodbridge (1819–1905), professor of metaphysics and philosophy of the human mind (1857–1864)
Dean Zimmerman, professor of philosophy
Larry Temkin, professor of philosophy
Physics
Thomas Banks, professor of physics
Girsh Blumberg, professor of physics
Herman Carr, professor of physics, pioneer of magnetic resonance imaging
Piers Coleman, professor of physics
Michael R. Douglas, former professor of physics (now at Simons Center for Geometry and Physics, Stony Brook)
Daniel Friedan, professor of physics
Gabriel Kotliar, professor of physics
Joel Lebowitz, professor of mathematical physics
Gregory Moore, professor of physics
Nathan Seiberg, former professor of physics (now at Institute for Advanced Study, Princeton)
Stephen Shenker, former professor of physics (now at Stanford University)
Rachel Somerville, professor of physics and astronomy
David Vanderbilt, professor of physics
Alexander Zamolodchikov, professor of physics
Science and engineering
Jean Ruth Adams, entomologist and virologist
Willard H. Allen, poultry scientist and New Jersey secretary of agriculture
C. Olin Ball, professor of food engineering, chair of the Department of Food Science
Richard Bartha, professor of microbiology and biochemistry; discoverer of "oil eating bacteria"
Helen M. Berman, chemistry professor, former Director of the RCSB Protein Data Bank
Kenneth Breslauer, Linus C. Pauling professor of chemistry and chemical biology
Stephen K. Burley, Director of RCSB Protein Data Bank and the Center for Integrative Proteomics Research
Stephen S. Chang, professor of food science and Nicholas Appert Award winner
Albert Huntington Chester, mining engineer, professor of chemistry, mineralogy, and metallurgy, explorer, and namesake of Chester Peak
Hettie Morse Chute, professor of botany
Vašek Chvátal, professor of computer science
George Hammell Cook, State Geologist of New Jersey and Vice President of Rutgers College
Michael R. Douglas, Director of New High Energy Theory Center; Sackler Prize winner
Richard H. Ebright, professor of chemistry
Helen Fisher, research professor of anthropology
Robin Fox, professor of anthropology
Apostolos Gerasoulis, professor of computer science; creator of the Teoma/Ask search engine
Alan S. Goldman, professor of chemistry
Chi-Tang Ho, professor of food science and Stephen S. Chang Award for Lipid or Flavor Science winner
Tomasz Imielinski, professor of computer science
Yogesh Jaluria, Board of Governors Professor and Distinguished Professor of Mechanical and Aerospace Engineering.
Paul B. Kantor, professor of information science
Leonid Khachiyan, professor of computer science; creator of the first polynomial time algorithm for linear programming
Lisa C. Klein, Distinguished Professor of Materials Science and Engineering
Alan Leslie, professor of cognitive science and psychology
Jing Li, chemist
Paul J. Lioy, Professor of Environmental and Occupational Medicine, UMDNJ, Robert Wood Johnson Medical School
Michael L. Littman, professor of computer science
Wilma Olson, professor of chemistry and physics, BioMAPS Institute for Quantitative Biology
Lawrence Rabiner, professor of electrical and computer engineering
Robert Schommer, astronomer, professor of physics
Myron Solberg, professor of food science; founding director of the Center for Advanced Food Technology at Rutgers; Nicholas Appert Award winner
Mario Szegedy, professor of computer science; two-time winner of the Gödel Prize
Endre Szemerédi, professor of computer science
Lionel Tiger, professor of anthropology
Jay Tischfield, professor of genetics
Robert Trivers, professor of anthropology and biological sciences and winner of the Crafoord Prize in Biosciences (2007)
Kathryn Uhrich, professor of chemistry, Area Dean of Mathematical and Physical Sciences
Selman Waksman, professor of microbiology and winner of the Nobel Prize in Physiology or Medicine (1952)
Judith Weis, professor emeritus of marine biology
Martin Yarmush, professor of biomedical and chemical & biochemical engineering, Fellow: US National Academy of Inventors and US National Academy of Engineering
Lujendra Ojha, assistant professor of planetary sciences.
Social sciences
Stephen Bronner, professor of political science, comparative literature and German studies
Charlotte Bunch, founder and director of the Center for Women's Global Leadership; activist and author
Arthur F. Burns, professor of economics, 10th Chairman of the Federal Reserve
Mason W. Gross, professor of classics, President of Rutgers University (1959–1971)
Paul Lazarsfeld, prominent sociologist and pioneering communication theorist (Newark)
William D. Lutz, Professor of linguistics (Camden), leading theorist on doublespeak
Gerald M. Pomper, professor of political science, leading expert on election studies
History
Peter Charanis, Voorhees Professor of History; Byzantine historian
Lloyd Gardner, Mary and Charles Beard Professor of History and distinguished diplomatic historian
Annette Gordon-Reed, Professor of History (Newark), winner of the Pulitzer Prize for History 1999
Michael Kulikowski, Professor of History at the University of Tennessee and author of Late Roman Spain and Its Cities (Johns Hopkins University Press), 2004, and Rome's Gothic Wars from the Third Century to Alaric (Cambridge University Press)
David Levering Lewis, former Professor of History; twice winner of the Pulitzer Prize for Biography or Autobiography (1994 and 2001)
Tomás Eloy Martínez, Professor of Latin American studies; Argentinian journalist and writer
Phillip S. Paludan, Professor of History (Camden)
Said Sheikh Samatar, Professor of History (Newark)
Jacob Soll, Professor of History (Camden), MacArthur Fellow 2011
Traian Stoianovich, Professor of History
Camilla Townsend, Professor of History
Athletic coaches and staff
Dick Anderson, football coach (1984–1989); assistant coach at Lafayette College, University of Pennsylvania and Penn State
George Case, baseball coach (1950–1960), including 1950 College World Series berth; former Major League Baseball player with the Washington Senators and Cleveland Indians; four-time All-Star and six-time American League leader in stolen bases
Robert E. Mulcahy, athletic director
Stephen Peterson, men's rowing coach (1992-1995)
Mike Rice Jr., men's basketball coach (2010-2013)
George Sanford, football coach (1913–1923)
Greg Schiano, football coach (2001–2011, 2020–present)
Terry Shea, football coach (1996–2000); later a coach with Kansas City Chiefs, Chicago Bears, Miami Dolphins, and St. Louis Rams
C. Vivian Stringer
Dick Vitale, assistant basketball coach (1970–72); coach of the Detroit Pistons; sports commentator
Fictional characters
Todd Anderson, The Cookout
Jackie Aprile, Jr., The Sopranos
Lt. Joseph Cable, USMC, South Pacific
Richard Cooper, I Think I Love My Wife
Jason Gervasi, The Sopranos (Newark)
Harriet Hayes, Studio 60 on the Sunset Strip
Rufus Humphrey, Gossip Girl
Neil Klugman, protagonist and narrator of Philip Roth's novel Goodbye, Columbus, winner of the 1960 National Book Award
Liz Lemler, 30 Rock
Mr. Magoo, 1950s cartoon character
Lucy McClane, Live Free or Die Hard (Camden)
OSS Agent / German Mole Bill O'Connor, played by Richard Conte in the film 13 Rue Madeleine
Jason Parisi, The Sopranos (Newark)
Agent Dylan Rhodes, in the film Now You See Me
Agent Shavers, in the film Runner Runner
Oscar Wao, The Brief Wondrous Life of Oscar Wao
Notes and references
Online resources
Rutgers notable alumni
Rutgers Business School distinguished alumni
Scarlet Knights History Hall of Fame
Lists of people by university or college in New Jersey
|
52433684
|
https://en.wikipedia.org/wiki/Open%20manufacturing
|
Open manufacturing
|
Open manufacturing, also known as open production or maker manufacturing, and associated with the slogan "Design Global, Manufacture Local", is a new model of socioeconomic production in which physical objects are produced in an open, collaborative and distributed manner, based on open design and open source principles.
Open manufacturing combines the following elements of a production process: new open production tools and methods (such as 3D printers), new value-based movements (such as the maker movement), new institutions and networks for manufacturing and production (such as FabLabs), and open source methods, software and protocols.
Open manufacturing may also include digital modeling and fabrication and computer numeric control (CNC) of the machines used for production through open source software and open source hardware.
The philosophy of open manufacturing is close to the open-source movement, but aims at the development of physical products rather than software. The term is linked to the notion of democratizing technology as embodied in the maker culture, the DIY ethic, the open source appropriate technology movement, the Fablab-network and other rooms for grassroot innovation such as hackerspaces.
Principles
The openness of "open manufacturing" may relate to the nature of the product (open design), to the nature of the production machines and methods (e.g. open source 3D printers, open source CNC), to the process of production and innovation (commons-based peer production / collaborative / distributed manufacturing), or to new forms of value creation (network-based bottom-up or hybrid versus business-centric top-down). Jeremy Rifkin argues that open production through 3D printing "will eventually and inevitably reduce marginal costs to near zero, eliminate profit, and make property exchange in markets unnecessary for many (though not all) products".
Socioeconomic implications
The following points are seen as key implications of open manufacturing:
a democratization of (the means of) production,
a decentralization of production and local value creation (global cooperation – local manufacturing),
the possibility to produce high quality prototypes and products in small quantities at moderate (to increasingly low) prices,
the closing of the gap between the formal and informal sector and opportunities for bottom-up open innovation, and
a transition from consumer to producer for manufactured goods.
In the context of socioeconomic development, open manufacturing has been described as a path towards a more sustainable industrialization on a global scale, one that promotes "social sustainability" and provides the opportunity to shift to a "collaboration-oriented industrialization driven by stakeholders from countries with different development status connected in a global value creation at eye level".
For developing countries, open production could notably lead to products better adapted to local problems and local markets and reduce dependencies on foreign goods, as vital products could be manufactured locally. In such a context, open manufacturing is strongly linked to the broader Open Source Appropriate Technology movement.
Views
According to scholar Michel Bauwens, Open Manufacturing is "the expansion of peer production to the world of physical production".
Redlich and Bruns define "Open Production" as "a new form of coordination for production systems that implies a superior broker system coordinating the information and material flows between the stakeholders of production", and which will encompass the entire value creation process for physical goods: development, manufacturing, sales, support etc.
A policy paper commissioned by the European Commission uses the term "maker manufacturing" and positions it between social innovation, open source ICT and manufacturing.
Criticism
A number of factors are seen to hamper the broad-based application of the model of "open manufacturing" and the realization of its positive implications for a more sustainable global production pattern.
The first factor is the sustainability of commons-based peer production models: "Empowerment happens only, if the participants are willing to share their knowledge with their colleagues. The participation of the actors cannot be guaranteed, thus there are many cases known, where participation could only be insufficiently realized". Other problems include missing or inadequate systems of quality control, the persistent paradigm of high-volume manufacturing and its cost-efficiency, the lack of widely adopted platforms to share hardware designs, as well as challenges linked to the joint-ownership paradigm behind the open licences of open manufacturing and the fact that hardware is much more difficult to share and to standardize than software.
In developing countries, a number of factors need to be considered in addition to the points above. Scholar Waldman-Brown names the following: a lack of manufacturing expertise and the informality of current SMMs in emerging markets, which hamper quality control for final products and raw materials, as well as universities and vocational training programs that are not able to react rapidly enough to provide the necessary knowledge and qualifications.
Examples
Open Source Ecology, a project for designing and building open source industrial machines, fabricated by eXtreme Manufacturing
RepRap Project, a project to create an open-source self-copying 3D printer.
Wikispeed, an automotive manufacturer that produces modular-design cars using open source tools
Local Motors, applying open production to the field of transport and vehicles
Sensorica, a hardware development network-organization using the open value network model.
See also
Distributed manufacturing
Open design
Open source hardware
Open Source
Collaboration
Commons-based peer production
Open source appropriate technology
Collaborative software development model
Knowledge commons
Co-creation
Decentralized planning (economics)
Mass collaboration
Production for use
Prosumer
Gift economy
References
External links
The Emergence of Open Design and Open Manufacturing Michel Bauwens, We Magazine Volume 2
http://openmanufacturing.net/ Short introduction and online group.
Economic systems
Collaboration
Free software
3D printing
Public commons
Manufacturing
|
79977
|
https://en.wikipedia.org/wiki/IceWM
|
IceWM
|
IceWM is a stacking window manager for the X Window System graphical infrastructure, written by Marko Maček. It was written from scratch in C++ and is released under the terms of the GNU Lesser General Public License. It is relatively lightweight in terms of memory and CPU usage, and comes with themes that allow it to imitate the GUI of Windows 95, Windows XP, Windows 7, OS/2, Motif, and other graphical user interfaces. IceWM is meant to excel in look and feel while being lightweight and customizable.
IceWM can be configured from plain text files stored in a user's home directory, making it easy to customize and copy settings. IceWM has an optional, built-in taskbar with a dynamic start menu, tasks display, system tray, network and CPU meters, mail check and configurable clock. It features a task list window and an Alt+Tab task switcher. Official support for GNOME and KDE menus used to be available as a separate package. In recent IceWM versions, support for them is built-in as well. External graphical programs for editing the configuration and the menu are also available.
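As a sketch of that plain-text configuration, a user's ~/.icewm/preferences file might contain lines like the following. The option names follow IceWM's documented Option=value style, but the exact set of options varies between versions, so treat this as illustrative rather than authoritative:

```
#  ~/.icewm/preferences -- per-user IceWM settings (plain text)
TaskBarShowClock=1            #  show the configurable clock
TaskBarShowCPUStatus=1        #  show the CPU meter
TaskBarShowNetStatus=1        #  show the network meter
TaskBarShowMailboxStatus=1    #  enable mail check on the taskbar
#  The theme is typically selected in a companion file, ~/.icewm/theme:
#  Theme="win95/default.theme"
```

Because these are ordinary text files, copying a customized setup to another machine is just a matter of copying the ~/.icewm directory.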
Usage
IceWM is installed as the main window manager for Absolute Linux, antiX and the light version of VectorLinux.
The Easy mode default desktop of the Asus Eee PC uses IceWM.
openSUSE for Raspberry Pi uses IceWM by default as a lightweight GUI. The Raspberry Pi 3-only version of SUSE Linux Enterprise Server also uses IceWM.
Screenshots
See also
JWM
FVWM95
Comparison of X window managers
Spri, a former lightweight Linux distribution which used IceWM as its default user interface
References
External links
IceWM Themes
IceWM Control Panel
1997 software
Articles containing video clips
Free desktop environments
Free software programmed in C++
Free X window managers
Window managers that use GTK
|
42282970
|
https://en.wikipedia.org/wiki/Matthew%20Garrett
|
Matthew Garrett
|
Matthew Garrett is an Irish technologist, programmer, and free software activist who is a major contributor to a series of free software projects including Linux, GNOME, Debian, Ubuntu, and Red Hat. He has received the Free Software Award from the Free Software Foundation (FSF) for his work on Secure Boot, UEFI, and the Linux kernel.
Life and career
Garrett states that he was born in Galway, Ireland and has a PhD in Genetics from the University of Cambridge. He is the author of several articles on Drosophila melanogaster (i.e., fruit fly) genetics.
Garrett has been a contributor to the GNOME and the Debian Linux projects, was an early contributor to Ubuntu, was an initial member of the Ubuntu Technical Board, worked as a contractor at Canonical Ltd., and worked at Red Hat.
At Canonical Ltd. and Red Hat, Garrett worked on power management in Linux. While at Red Hat, Garrett also worked on issues relating to Secure Boot and UEFI and the Linux kernel to preserve users' ability to run the operating system of their choosing on hardware supporting Secure Boot. This work eventually led to his being awarded the 2013 FSF Free Software Award.
Garrett worked at the cloud computing platform company CoreOS and is cited in the press as an expert in cloud computing issues. From 2017 until 2021, he worked for Google and is currently employed at Aurora.
Advocacy
Garrett has been a strong advocate for software freedom and compliance with the GNU General Public License (GPL) in the Linux kernel. For example, Garrett filed a complaint with US Customs against Fusion Garage due to violations of the GPL.
In March 2021, Garrett, who had served on the Free Software Foundation's board of directors, signed an open letter to the FSF calling for the removal of its entire board and for Richard Stallman to be removed from all leadership positions.
References
External links
Garrett's blog on Dreamwidth
Alumni of the University of Cambridge
Irish computer programmers
Free software programmers
Linux kernel programmers
Living people
Open source people
People from Galway (city)
Debian people
Ubuntu (operating system) people
GNOME developers
Google employees
Year of birth missing (living people)
|
1715675
|
https://en.wikipedia.org/wiki/Apple%20Remote%20Desktop
|
Apple Remote Desktop
|
Apple Remote Desktop (ARD) is a Macintosh application produced by Apple Inc., first released on March 14, 2002, that replaced a similar product called Apple Network Assistant. Aimed at computer administrators responsible for large numbers of computers and teachers who need to assist individuals or perform group demonstrations, Apple Remote Desktop allows users to remotely control or monitor other computers over a network.
Releases
The original release, which used the User Datagram Protocol (UDP) on port 3283, allowed remote computers (running Mac OS 8.1 or later) to be observed or controlled from a computer running Mac OS X 10.1. It also allowed remote computers to be restarted or shut down, to have their screens locked or unlocked, or be put to sleep or awakened, all remotely. Version 1 also included simple file transfer abilities that would allow administrators to install simple applications remotely; however, to install applications that required the use of an installer, the administrator would have to run the installer manually through the client system's interface.
Version 1.1 (released August 20, 2002) introduced the ability to schedule remote tasks.
Version 1.2 (released April 2, 2003) added a number of features that were designed to ease the administration of a large number of computers. Software could now be installed remotely on a number of machines simultaneously, without using the client system's interface. The startup disk on remote computers can also be changed, setting them to boot from a NetBoot server, a Network Install image, or a partition on their own drives. The client ARD software could also now be upgraded remotely to allow administrators to take advantage of new features without having to visit each individual computer.
Apple released a minor update on December 16, 2003, that brought ARD to 1.2.4. This update concentrated on security, performance and reliability.
On June 21, 2004, Apple announced Apple Remote Desktop 2 (released in July), which was designed to use the VNC protocol instead of Apple's original ARD protocol. This allows the ARD administration software to observe and control any computer running VNC-compatible server software (such as Windows and Unix systems), not just Macs, and conversely allows standard VNC viewing software to connect to any Mac with the ARD 2 software installed and VNC access enabled. This version also uses the Transmission Control Protocol (TCP) for most functions (on ports 5900 and 5988), which is designed to be more reliable than the UDP used in ARD 1. Another significant addition in ARD 2 was the Task List, which allows remote tasks to be queued and monitored, reporting their status (such as Succeeded or Failed). This release also dropped support for older versions of the Mac OS, requiring 10.2.8 or higher.
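Because ARD 2 speaks standard VNC, the first bytes a server sends on TCP port 5900 are the RFB ProtocolVersion banner, e.g. "RFB 003.008\n". The Python sketch below reads and parses that banner; it illustrates the published RFB handshake rather than anything ARD-specific, and the host name passed to probe_vnc is a placeholder:

```python
import re
import socket

# A VNC/RFB server opens the session with a 12-byte ProtocolVersion
# message of the form b"RFB 003.008\n" (RFB specification, RFC 6143).
BANNER_RE = re.compile(rb"^RFB (\d{3})\.(\d{3})\n$")

def parse_rfb_banner(banner: bytes) -> tuple:
    """Return (major, minor) parsed from an RFB ProtocolVersion message."""
    m = BANNER_RE.match(banner)
    if not m:
        raise ValueError("not an RFB ProtocolVersion message")
    return int(m.group(1)), int(m.group(2))

def probe_vnc(host: str, port: int = 5900, timeout: float = 3.0) -> tuple:
    """Connect to a VNC-compatible server (such as a Mac with ARD's VNC
    access enabled) and read its advertised protocol version."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return parse_rfb_banner(sock.recv(12))
```

For example, probe_vnc("some-mac.local") against a Mac with VNC access enabled would return the (major, minor) protocol version the server offers.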
On October 11, 2004, Apple released version 2.1 which improved on a number of existing features while adding the ability to view observed or controlled computers in full-screen mode, the ability to see the displays of computers with more than one monitor and support for mouse right-click and scroll-wheels.
On April 29, 2005, Apple released version 2.2 which added support for Mac OS X 10.4 along with several other bug-fixes and improvements to reliability.
On April 11, 2006, Apple released version 3.0 which is now a Universal Binary and features improved software upgrade functionality, Spotlight searching, as well as increased throughput and encryption for file transfers, and Automator support.
On November 16, 2006, Apple released version 3.1 which provides support for the new Intel-based Xserve Lights Out Management feature.
On October 18, 2007, Apple released version 3.2 which introduced Mac OS X 10.5 support and compatibility for third party VNC viewers and servers.
On August 20, 2009, Apple released version 3.3 which fixed many bugs and allowed function keys and key combinations to be sent to the remote computer instead of the local machine.
On January 6, 2011, Apple released version 3.4 which provides compatibility with the Mac App Store.
On July 20, 2011, Apple released version 3.5 which provides compatibility with Mac OS X 10.7.
On October 22, 2013, Apple released version 3.7 which provides compatibility with OS X 10.9, multiple monitors, and enhancements to remote copy/paste.
On January 27, 2015, Apple released version 3.8, which primarily added support for OS X 10.10, while also including various user interface improvements, a new icon, stability improvements and the ability to update the application using the Mac App Store, even if the application was not originally installed from that source. This version now requires OS X 10.9 or later.
On February 21, 2017, Apple released version 3.9, which heightened communications security between local and remote computers (including a Preferences checkbox to allow communication with pre-3.9 clients), added support for the MacBook Pro Touch Bar, addressed various stability issues, allowed the user to export and import an encrypted list of computers with user credentials, and debuted the ability to use an "Assistance Cursor" to call attention to items for the remote user. This version now requires OS X/macOS 10.10.5 or later.
Encryption
Prior to version 3, ARD encrypted only passwords, mouse events and keystrokes; and not desktop graphics or file transfers. Apple therefore recommended that ARD traffic crossing a public network should be tunneled through a VPN, to avoid the possibility of someone eavesdropping on ARD sessions.
ARD 3.0 has the option of using AES 128-bit encryption, the same as a basic SSH server.
ARD 3.9 included as yet unspecified enhancements to communications security that made the native mode incompatible with previous-version clients. A Preferences checkbox was provided in the Apple Remote Desktop app to explicitly allow communications with older clients. ARD 3.9.2 made the use of this checkbox optional for seeing clients in the list.
Legal
In November 2017, the United States International Trade Commission announced an investigation into allegations of patent infringement with regard to Apple's remote desktop technology. Aqua Connect, a company that builds remote desktop software, has claimed that Apple infringed on two of its patents.
Restrictions
ARD does not support reverse connections to listening VNC viewers.
See also
Screen Sharing
Comparison of remote desktop software
RFB protocol
Remote Desktop Services
Notes
References
External links
Apple Remote Desktop
Remote desktop
Remote Desktop
Virtual Network Computing
Remote administration software
MacOS remote administration software
2002 software
|
52111378
|
https://en.wikipedia.org/wiki/Codentify
|
Codentify
|
Codentify is the name of a product serialization system developed and patented in 2005 by Philip Morris International (PMI) for tobacco product authentication, production volume verification and supply chain control. In the production process, each cigarette package is marked with a unique visible code (also called "Codentify") that allows the code to be authenticated against a central server.
In November 2010, PMI licensed this technology to its three main competitors, namely British American Tobacco (BAT), Imperial Tobacco Group (ITG), and Japan Tobacco International (JTI), and the four companies together formed the Digital Coding and Tracking Association (DCTA) which works to promote the system in order to replace governmental revenue stamps. Codentify was branded by its inventors as a “track & trace and product authentication technology”.
History
In July 2004, Philip Morris International and the European Union settled a 12-year legal dispute concerning cigarette smuggling allegations. PMI agreed to pay US$1.25 billion to the EU budget and its member states. In addition, PMI was legally obligated to mark its products with trackable serial codes. Agreements were subsequently signed with the other three major tobacco companies.
PMI's affiliate company, Philip Morris Products S.A. created and patented the Codentify system in 2005.
In late 2010, PMI licensed Codentify technology to its main competitors BAT, JTI, and ITG free of charge. The four companies, which together account for 71% of global cigarette sales (excluding China), agreed to use the PMI-developed system on all of their products to ensure “the adoption of a single industry standard, based on Codentify.” The Framework Convention on Tobacco Control (FCTC) immediately voiced concerns that “Codentify should never be used for tracking and tracing purposes as tracking and tracing provisions should be implemented under the strict control and management of governments.”
In 2011, the four companies formed the Digital Coding and Tracking Association (DCTA) to promote international standards and digital technologies to help governments fight smuggling, counterfeiting and tax evasion. The association was officially launched in 2013.
According to the DCTA, around 12% of the global cigarette market is illicit, depriving national governments of more than US$40 billion a year in lost tax revenues, and some say this is a serious underestimate. The agreements between the EU and the four major tobacco companies aim to stem the illicit trade of cigarettes, but some academics and the anti-tobacco movement have criticized them as a wholly inadequate deterrent. The EU has since declined to renew the deal after MEPs complained that it was inappropriate for governments and tobacco companies to have such an arrangement.
Inexto
In June 2016 the DCTA announced that it had transferred Codentify to Inexto, an affiliate of the French Group Impala. This was criticized by leading industry watchdogs such as the FCTC and academics such as Anna Gilmore, director of the tobacco control research group at the University of Bath. She said that "Inexto could not be considered sufficiently independent from the tobacco industry". Martyn Day, a Scottish National Party Member of Parliament, says that while Codentify was sold, "the new owner is merely a front company and that the system is still under the effective control of the tobacco firms". Other academics such as Luk Joossens, advocacy officer of the Association of European Cancer Leagues, said the sale was "predictable" and that tobacco companies will now "pretend" that Codentify is no longer part of the tobacco industry. PMI has rebutted that "Inexto is fully independent from the tobacco industry."
Technology
The Codentify system is based on a machine-created, unique, human-readable multi-digit alphanumeric code that is printed directly onto every individual product during the manufacturing process. A double-key encryption system, with separate central-authority-level and factory-level encryption keys stored on their respective servers, allows a factory line to produce a pre-defined number of Codentify codes that have been authorized by a central (e.g. government) server.
The system-generated 12-digit variant of the code is described as pseudo-random, offering 34¹² possible combinations. Data unique to each discrete item (product) is encrypted into the code, such as the date and exact time of manufacture, the machine count of the item, the specific machine line of manufacture, brand, variant, pack size, pack type, destination market, and price.
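To make the combinatorics concrete: a 12-position code over a 34-symbol alphabet yields 34¹², roughly 2.4 × 10¹⁸, distinct codes. The Python sketch below is a hypothetical illustration only (the actual Codentify algorithm is proprietary and unpublished), showing how a keyed MAC over an item's attributes can produce a pseudo-random yet verifiable code; the alphabet, key names and attribute string are all assumptions:

```python
import hashlib
import hmac

# Hypothetical sketch, not the real Codentify algorithm. A 34-symbol
# alphabet (digits plus uppercase letters without the easily confused
# I and O) gives 34**12 possible 12-character codes.
ALPHABET = "0123456789ABCDEFGHJKLMNPQRSTUVWXYZ"

def make_code(factory_key: bytes, attributes: str, length: int = 12) -> str:
    """Derive a pseudo-random code from an item's attributes (date and
    time of manufacture, machine line, brand, market, ...) under a
    factory-level secret key."""
    digest = hmac.new(factory_key, attributes.encode(), hashlib.sha256).digest()
    n = int.from_bytes(digest, "big")
    chars = []
    for _ in range(length):
        n, idx = divmod(n, len(ALPHABET))
        chars.append(ALPHABET[idx])
    return "".join(chars)

def verify_code(factory_key: bytes, attributes: str, code: str) -> bool:
    """A verifier holding the same key recomputes the expected code for
    the claimed attributes and compares in constant time."""
    return hmac.compare_digest(make_code(factory_key, attributes), code)
```

A central server holding the keys can thus authenticate any queried code without storing every issued code in the clear, which matches the verification-oriented design described above.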
Critics of this system have argued that this approach only allows for the verification of the code itself and not of the product the code is printed on; thereby leaving the potential for copying. A European Commission Assessment Report into Tracking and Tracing notes in section 5.1.2 that in addition to the Codentify code being easily copied, it also fails to link the cigarette packets to the master cases.
However, this critique fails to recognize that the system does allow copied codes to be detected when an illegitimate copy is queried. The system logic leverages geo-positioning data and will recognize whether a code has been previously queried, flagging the item as suspect. Illegitimate products are invariably developed and replicated in significant numbers from a legitimate example. Once the system logic recognizes an illegitimate code, it is able to notify the system authority that this code has been compromised and is no longer valid. Due to the pseudo-random, encrypted design of Codentify, illegitimate parties cannot predict codes; they can either default to replicating one legitimate code, which the system will identify from duplicate queries and flag, or generate random illegitimate codes, which the system will immediately recognize. Given the geo-positioning data at the time of query, both illegitimate approaches provide legitimate authorities with important data on the suspect illicit supply chain.
Furthermore, when the Codentify technology is coupled with aggregated data (parent-child packaging) and supply chain event tracking, the system can identify a suspect query immediately. In essence, event tracking along the legitimate supply chain generates additional data encompassing that legitimate product's specific provenance, which the illicit supply chain cannot replicate.
Criticism
Codentify has been the subject of harsh criticism as a tobacco-industry-promoted system aimed at undermining public health efforts and incapable of curbing the illicit trade of cigarettes. This criticism has come from some academics and pro-health groups, including the WHO.
The WHO FCTC Protocol on the Elimination of the Illicit Trade in Tobacco Products states in article 8, section 12 that tobacco tracking and regulation "shall not be performed by or delegated to the tobacco industry". Today, the Codentify technology is under fully independent ownership and management, with no capital or governance link to the tobacco industry, and its successor product (INEXTOR) is now applied across multiple industries outside tobacco, including beer, fine spirits, luxury goods, automotive parts, and pharmaceuticals.
Critics of the tobacco industry say Codentify is simply not good enough, "because it focuses too much on production and does not store product codes or track them." However, the Codentify system does not require the storage of codes in the clear for security purposes (though it is capable of performing this task if specified by the central authority), as mass storage of legitimate codes exposes them to potential compromise. The Codentify technology uses decryption processes to deliver authentication and validation of an item and can provide the item's descriptive and unique attributes in parallel, with a near-instant response time.
Heavy criticism has also been levelled at the factory-level keys the system uses to provide unique verification codes for the product. Since these secret keys are stored on company and government servers, abuse of privileges at this level would allow criminals to generate additional codes that would appear genuine to the system. However, this criticism does not take into account that the central server keys are not shared with the manufacturer, nor does it account for the dynamic and static key multi-level encryption method designed into the system, which jointly provide the legitimate authority with complete and secure control over the code authorization process.
The decentralized nature of the system accommodates the realities of imperfect connectivity across complex, cross-border supply chains: it ensures central control and oversight of secure code generation within production environments while giving legitimate manufacturing assets continuity of production under parameters dictated by the central authority.
Action on Smoking and Health (ASH) described the system as a black box created by the tobacco industry that uses unsecured equipment vulnerable to code recycling. However, this criticism does not take into account that: (i) codes can only be legitimized by the central authority server; (ii) the algorithm and both static and dynamic keys remain secret to and under strict control of the central authority; and (iii) the method applied in generating the codes is patented and therefore visible to the public. Under this strict control regime, illegitimate creation of codes is not possible. Illicit techniques such as "code recycling" (using codes of products rejected in quality control), "code cloning" (printing the same code on multiple products), and "code migration" (reprinting codes used in one country elsewhere), all of which allow genuine codes to be reused multiple times, are therefore rendered obsolete and defeated by this multi-layer encryption method.
Philip Morris has been accused, via its South American subsidiary Massalin Particulares, of using bribery and extortion to implement Codentify and Inexto in Argentina.
“The directors, managers and legal representatives of PMI and its Argentine subsidiary Massalin Particulares S.R.L. (MP) are being investigated within the framework of a criminal case in federal court…,” Attorney Alejandro Sánchez Kalbermatten wrote in a 2017 letter to the Securities and Exchange Commission in the United States.
A decision by the Federal Court of Argentina overseeing the case concluded that the plaintiff, Attorney Alejandro Sánchez Kalbermatten, had no standing and that the accusations made in the complaint were not substantiated by material facts. As a consequence, the case No.17.766/2016 was fully dismissed on September 28, 2017.
References
Tobacco industry
|
607133
|
https://en.wikipedia.org/wiki/Copyright%20misuse
|
Copyright misuse
|
Copyright misuse is an equitable defence to copyright infringement in the United States based upon the doctrine of unclean hands. The misuse doctrine provides that a copyright holder who has engaged in abusive or improper conduct in exploiting or enforcing the copyright will be precluded from enforcing his rights against the infringer. Copyright misuse is often comparable to, and draws from, the older and more established doctrine of patent misuse, which bars a patentee from obtaining relief for infringement when he extends his patent rights beyond the limited monopoly conferred by the law.
The doctrine forbids the copyright holder from attempting to extend the effect or operation of copyright beyond the scope of the statutory right, for example by engaging in restrictive licensing practices that are contrary to the public policy underlying copyright law. Indeed, the misuse doctrine is said to have evolved to tackle such aggressive licensing practices.
Requirements
Although the contours of the doctrine of copyright misuse have not yet been fully delineated, several circuits have upheld the defence on the following policy grounds –
The plaintiff has violated antitrust laws
The plaintiff used the copyright in a manner violative of the public policy of copyright law
Difference from Fair Use
Fair use defence to copyright infringement allows unauthorised use of copyrighted work in a reasonable manner under certain circumstances. The following are some of the facets that distinguish the misuse doctrine from fair use –
Fair use is statutorily recognised in 17 USC § 107, whereas copyright misuse is yet to receive statutory support; and
The defendant must prove that his unauthorised use of the copyrighted work qualifies for the fair use exception, whereas a defendant claiming misuse need not have suffered directly from the misuse to successfully raise it as a defence to copyright infringement.
In the United States
Although the misuse doctrine was a well-known defence in patent infringement cases, it was extended to copyright law in M. Witmark & Sons v Jensen for the first time. Consequently, copyright misuse was adopted by various circuit courts in recent years. However, the contours of the doctrine remain uncertain as it is yet to be explicitly recognised by the United States Supreme Court. Some scholars have even advocated for the codification of the misuse doctrine.
Lasercomb America, Inc. v Reynolds
In Lasercomb America, Inc. v Reynolds, the Fourth Circuit became the first appellate court to uphold a copyright misuse defence as analogous to the patent misuse defence. In this case, Lasercomb had sued Reynolds for making unauthorised copies of its die-making software, which was subject to copyright protection. Reynolds alleged that Lasercomb had misused its copyright by imposing, in its standard licensing agreement, an unreasonable non-compete clause that barred licensees from creating a competing product for a period of one hundred years. The Court ruled that although the inclusion of such a provision did not constitute an antitrust violation, it did violate the public policy underlying copyright and rendered Lasercomb's copyright unenforceable. The Court also held that the defendant need not be subjected to the purported misuse in order to set up a valid defence, as Reynolds had not signed the standard licensing agreement. Lastly, the Court clarified that Lasercomb was free to initiate a suit for infringement once it had purged itself of the misuse, and that its copyright was not invalidated.
Practice Management Information Corp. v American Medical Association
The Ninth Circuit was the next circuit to adopt the copyright misuse doctrine in Practice Management Information Corp. v American Medical Association. In this case, the American Medical Association granted the Health Care Financing Administration (now known as the Centers for Medicare & Medicaid Services) a non-exclusive, royalty-free perpetual license to use its coding system for medical procedures. However, the license was restrictive, as no other coding system could be used. Practice Management, a publisher and distributor of medical books, filed for declaratory relief to have the copyright invalidated when it failed to procure the volume discount it requested. The Court refused to invalidate the copyright but ruled that the licensing provision requiring exclusive use of the coding system constituted copyright misuse.
Alcatel USA Inc. v DGI Technologies Inc.
The Fifth Circuit upheld the defence of copyright misuse in Alcatel USA, Inc. v DGI Technologies, Inc. Alcatel licensed the use of its software only along with its manufactured equipment. The terms of the license banned downloading or copying Alcatel’s software. However, DGI downloaded and copied Alcatel’s software in violation of the licensing agreement in order to ensure compatibility with its product. In a suit for infringement of copyright, DGI claimed misuse of copyright by Alcatel. The Court found Alcatel to have exceeded the scope of the copyright grant to gain an extended monopoly, which effectively constituted a copyright misuse.
Assessment Technologies of WI, LLC v WIREdata Inc.
The doctrine of copyright misuse was upheld by the Seventh Circuit in Assessment Technologies of WI, LLC v WIREdata Inc, which is another case involving computer software. In this case, WIREdata sought public information about a number of properties from the Wisconsin municipalities. The information was collected by the municipalities and compiled using the plaintiff's software for tax assessment purposes. Hence, some municipalities refused to furnish the information for fear of infringing Assessment Technologies' copyright. WIREdata, the defendant in this case, sued in state court for the release of the information, and Assessment Technologies sued in federal court, claiming that the release would violate its copyright. The Court held that the conduct of the plaintiff amounted to copyright misuse, as the information withheld by the municipalities was beyond the purview of the said copyright.
Video Pipeline, Inc. v Buena Vista Home Entertainment, Inc.
In Video Pipeline, Inc. v Buena Vista Home Entertainment, Inc., the Third Circuit stated that a copyright holder might commit misuse in trying to enforce a license that prohibits criticism of copyright-protected works. Video Pipeline had an agreement with Disney, which allowed it to compile more than 500 movie trailers. When Video Pipeline started to post the trailers online, Disney asked Video Pipeline to remove the trailers, as they were not covered by the terms of the license. Although Video Pipeline complied with Disney's request, it sought declaratory relief that its online use of the trailers did not in any manner violate Disney's copyright. Additionally, Video Pipeline amended its complaint to seek declaratory relief to use the two-minute video clip previews it had created from sixty-two movies. Disney, which owned Buena Vista, filed a counterclaim for copyright infringement in response to the suit. The Court observed that the doctrine had yet to be affirmatively expressed by the United States Supreme Court and that the licensing terms were reasonable. Accordingly, the Court ultimately ruled that the doctrine was inapplicable to the factual matrix of this case.
This case has assumed significance because it was decided in a circuit wherein Redbox later sued three major studios, namely Universal, Warner and Fox.
In India
India has incorporated fair dealing provisions into its domestic law, which provides important limitations to copyright holders’ rights. The doctrine of copyright misuse has not received any statutory support much like in the United States.
Tekla Corporation v Survo Ghosh
In Tekla Corporation and Ors. v Survo Ghosh and Ors., the Delhi High Court ruled that the defence of copyright misuse was not available to the defendants either in a suit for permanent injunction from infringing a plaintiff's copyright or in an action for damages for copyright infringement. In 2011, the plaintiffs had initiated a suit for copyright infringement against the defendants for unauthorized usage of their software. Instead of invoking any of the exemptions under Section 52 of the Indian Copyright Act, 1957, the defendants contested that the plaintiffs were precluded from claiming remedies for infringement as their conduct constituted copyright misuse. It was alleged that the plaintiffs had been charging an exorbitant fee alongside imposing "unreasonable" conditions as a part of their licensing agreements. However, the Court was not persuaded by the American jurisprudence on this subject and refused to recognize the doctrine of copyright misuse, as it would amount to adding more grounds than what the statute already provided. The Court was also concerned about the implications of the doctrine on judicial delays in the enforcement of copyright if it were to be adopted in India. Nonetheless, the Court is said to have missed an opportunity to engage with the copyright policy prevalent in India.
See also
Patent misuse
Copyfraud
Kai Puolamäki, activist against copyright misuse in Finland
Copyright law of India
References
Misuse, Copyright
Equitable defenses
Anti-competitive practices
|
5716325
|
https://en.wikipedia.org/wiki/6LoWPAN
|
6LoWPAN
|
6LoWPAN is an acronym of IPv6 over Low-Power Wireless Personal Area Networks. 6LoWPAN is the name of a concluded working group in the Internet area of the IETF.
The 6LoWPAN concept originated from the idea that "the Internet Protocol could and should be applied even to the smallest devices," and that low-power devices with limited processing capabilities should be able to participate in the Internet of Things.
The 6LoWPAN group has defined encapsulation and header compression mechanisms that allow IPv6 packets to be sent and received over IEEE 802.15.4 based networks. IPv4 and IPv6 are the workhorses for data delivery in local-area networks, metropolitan-area networks, and wide-area networks such as the Internet. Likewise, IEEE 802.15.4 devices provide sensing and communication ability in the wireless domain. The inherent natures of the two networks, though, are different.
The base specification developed by the 6LoWPAN IETF group is RFC4944 (updated by RFC6282 with header compression, and by RFC6775 with neighbor discovery optimizations). The problem statement document is RFC4919. IPv6 over Bluetooth Low Energy (BLE) is defined in RFC7668.
Application areas
The target for IP networking for low-power radio communication is applications that need wireless internet connectivity at lower data rates for devices with very limited form factor. An example is automation and entertainment applications in home, office and factory environments. The header compression mechanisms standardized in RFC6282 can be used to provide header compression of IPv6 packets over such networks.
IPv6 is also in use on the smart grid enabling smart meters and other devices to build a micro mesh network before sending the data back to the billing system using the IPv6 backbone. Some of these networks run over IEEE 802.15.4 radios, and therefore use the header compression and fragmentation as specified by RFC6282.
Thread
Thread is an effort of over 50 companies to standardize on a protocol running over 6LoWPAN to enable home automation. The specification is available at no cost, subject to adherence to an EULA stipulating that Thread Group membership (in most cases paid) is required to implement the protocol. The protocol most directly competes with Z-Wave and Zigbee IP.
Matter
Matter, which started as Project CHIP (Connected Home over IP), is the latest effort to standardize on a protocol running over 6LoWPAN to enable home automation, by combining it with DTLS, CoAP and MQTT-SN.
Functions
As with all link-layer mappings of IP, RFC4944 provides a number of functions. Beyond the usual differences between L2 and L3 networks,
mapping from the IPv6 network to the IEEE 802.15.4 network poses additional design challenges (see RFC4919 for an overview).
Adapting the packet sizes of the two networks
IPv6 requires the maximum transmission unit (MTU) to be at least 1280 octets. In contrast, IEEE 802.15.4's standard packet size is 127 octets. A maximum frame overhead of 25 octets leaves 102 octets at the media access control layer. An optional but highly recommended security feature at the link layer imposes additional overhead. For example, 21 octets are consumed by AES-CCM-128, leaving only 81 octets for upper layers.
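The payload arithmetic above can be sketched as a quick sanity check, using the figures from the text. Note that the resulting fragment count is a lower bound, since real 6LoWPAN fragmentation adds a per-fragment header that is not modeled here:

```python
import math

IEEE802154_FRAME = 127   # maximum 802.15.4 physical-layer frame size (octets)
MAX_FRAME_OVERHEAD = 25  # worst-case MAC header/footer
AES_CCM_128 = 21         # optional link-layer security overhead
IPV6_MIN_MTU = 1280      # minimum MTU that IPv6 requires of a link

mac_payload = IEEE802154_FRAME - MAX_FRAME_OVERHEAD  # 102 octets for upper layers
secured_payload = mac_payload - AES_CCM_128          # 81 octets with AES-CCM-128 enabled

# A minimum-sized full IPv6 packet cannot fit in one frame, so the
# 6LoWPAN adaptation layer must fragment it across several frames.
fragments_needed = math.ceil(IPV6_MIN_MTU / secured_payload)
print(mac_payload, secured_payload, fragments_needed)  # 102 81 16
```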
Address resolution
IPv6 nodes are assigned 128-bit IP addresses in a hierarchical manner, through an arbitrary-length network prefix. IEEE 802.15.4 devices may use either IEEE 64-bit extended addresses or, after an association event, 16-bit addresses that are unique within a PAN. There is also a PAN-ID for a group of physically collocated IEEE 802.15.4 devices.
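Both address forms can be turned into IPv6 interface identifiers under RFC4944: an EUI-64 has its Universal/Local bit inverted, while a 16-bit short address is first expanded into a pseudo 48-bit address from the PAN-ID and then padded Ethernet-style with ff:fe. A minimal sketch (the example addresses are arbitrary):

```python
import ipaddress

def iid_from_eui64(eui64: bytes) -> bytes:
    """Interface identifier from a 64-bit extended address:
    invert the Universal/Local bit (RFC 4291, Appendix A)."""
    iid = bytearray(eui64)
    iid[0] ^= 0x02
    return bytes(iid)

def iid_from_short(pan_id: int, short_addr: int) -> bytes:
    """Interface identifier from a 16-bit short address (RFC4944, section 6):
    build the pseudo 48-bit address PAN:0000:short, insert ff:fe in the
    middle Ethernet-style, and clear the U/L bit (locally administered)."""
    mac48 = pan_id.to_bytes(2, "big") + b"\x00\x00" + short_addr.to_bytes(2, "big")
    iid = bytearray(mac48[:3] + b"\xff\xfe" + mac48[3:])
    iid[0] &= 0xFD  # clear the Universal/Local bit
    return bytes(iid)

def link_local(iid: bytes) -> ipaddress.IPv6Address:
    """Prepend the fe80::/64 link-local prefix to an 8-byte identifier."""
    return ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + iid)

print(link_local(iid_from_eui64(bytes.fromhex("0011223344556677"))))
print(link_local(iid_from_short(0xDCBA, 0x1234)))
```

For the two example inputs this yields `fe80::211:2233:4455:6677` and `fe80::dcba:ff:fe00:1234` respectively; header compression in RFC6282 exploits exactly this derivability of the address from link-layer state.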
Differing device designs
IEEE 802.15.4 devices are intentionally constrained in form factor to reduce costs (allowing for large-scale network of many devices), reduce power consumption (allowing battery powered devices) and allow flexibility of installation (e.g. small devices for body-worn networks). On the other hand, wired nodes in the IP domain are not constrained in this way; they can be larger and make use of mains power supplies.
Differing focus on parameter optimization
IPv6 nodes are geared towards attaining high speeds. Algorithms and protocols implemented at the higher layers, such as TCP in the TCP/IP suite, are optimized to handle typical network problems such as congestion. In IEEE 802.15.4-compliant devices, energy conservation and code-size optimization remain at the top of the agenda.
Adaptation layer for interoperability and packet formats
An adaptation mechanism to allow interoperability between the IPv6 domain and the IEEE 802.15.4 domain can best be viewed as a layer problem. Identifying the functionality of this layer and defining newer packet formats, if needed, is an enticing research area. RFC4944 proposes an adaptation layer to allow the transmission of IPv6 datagrams over IEEE 802.15.4 networks.
Addressing management mechanisms
The management of addresses for devices that communicate across the two dissimilar domains of IPv6 and IEEE 802.15.4 is cumbersome, if not exhaustingly complex.
Routing considerations and protocols for mesh topologies in 6LoWPAN
Routing per se is a two-phased problem that is being considered for low-power IP networking:
Mesh routing in the personal area network (PAN) space.
The routability of packets between the IPv6 domain and the PAN domain.
Several routing protocols have been proposed by the 6LoWPAN community, such as LOAD, DYMO-LOW, and HI-LOW. However, only two routing protocols are currently standardized for large-scale deployments: LOADng, standardized by the ITU under recommendation ITU-T G.9903, and RPL, standardized by the IETF ROLL working group.
Device and service discovery
Since IP-enabled devices may require the formation of ad hoc networks, the current state of neighboring devices and the services hosted by such devices will need to be known. IPv6 neighbour discovery extensions is an internet draft proposed as a contribution in this area.
Security
IEEE 802.15.4 nodes can operate in either secure mode or non-secure mode. Two security modes are defined in the specification in order to achieve different security objectives: Access Control List (ACL) mode and Secure mode.
Further reading
Interoperability of 6LoWPAN
6LoWPAN Ad Hoc On-Demand Distance Vector Routing (LOAD)
Dynamic MANET On-demand for 6LoWPAN (DYMO-low) Routing
Hierarchical Routing over 6LoWPAN (HiLow)
LowPan Neighbor Discovery Extensions
Serial forwarding approach to connecting TinyOS-based sensors to IPv6 Internet
GLoWBAL IPv6: An adaptive and transparent IPv6 integration in the Internet of Things Download
IETF Standardization in the Field of the Internet of Things (IoT): A Survey Download
See also
DASH7 active RFID standard
MyriaNed low power, biology inspired, wireless technology
Z-Wave designed to provide reliable, low-latency transmission of small data packets at data rates up to 100 kbit/s
ZigBee standards-based protocol based on IEEE 802.15.4.
LoRaWAN allows low-bit-rate communication from and to connected objects, thus participating in the Internet of Things, machine-to-machine (M2M) communication, and smart cities.
Thread (network protocol) standard suggested by Nest Labs based on IEEE 802.15.4 and 6LoWPAN.
Static Context Header Compression (SCHC)
References
External links
Internet Engineering Task Force (IETF)
6lowpan Working Group
6lowpan.tzi.org
DASH7
IPv6
Wireless networking standards
|
14332366
|
https://en.wikipedia.org/wiki/Anti-computer%20tactics
|
Anti-computer tactics
|
Anti-computer tactics are methods used by humans to try to beat computer opponents at various games, especially in board games such as chess and Arimaa. It often involves playing conservatively for a long-term advantage that the computer is unable to find in its game tree search. This will frequently involve selecting moves that appear sub-optimal in the short term in order to exploit known weaknesses in the way computer players evaluate positions.
In human–computer chess matches
One example of the use of anti-computer tactics was Brains in Bahrain, an eight-game chess match between human chess grandmaster, and then World Champion, Vladimir Kramnik and the computer program Deep Fritz 7, held in October 2002. The match ended in a 4–4 tie, with two wins for each participant and four draws, worth half a point each.
In the 1997 Deep Blue versus Garry Kasparov match, Kasparov played an anti-computer tactic move at the start of the game to get Deep Blue out of its opening book. Kasparov chose the unusual Mieses Opening and thought that the computer would play the opening poorly if it had to play itself (that is, rely on its own skills) rather than use its opening book. Kasparov played similar anti-computer openings in the other games of the match but the tactic backfired.
Anti-computer chess games
Garry Kasparov vs Deep Blue (Computer), IBM Man-Machine, New York USA 1997
Garry Kasparov vs X3D Fritz (Computer), Man-Machine World Chess Championship 2003
Rybka (Computer) vs Hikaru Nakamura ICC blitz 3 0 2008
See also
Arimaa – a chess variant designed to be difficult for computers, inspired by Kasparov's loss to Deep Blue in 1997.
Horizon effect
Human–computer chess matches
References
External links
Anticomputer Chess
Computer chess
|
5415565
|
https://en.wikipedia.org/wiki/Club%20Penguin
|
Club Penguin
|
Club Penguin was a massively multiplayer online game (MMO), involving a virtual world that contained a range of online games and activities. It was created by New Horizon Interactive (now known as Disney Canada Inc.). Players used cartoon penguin-avatars and played in an arctic-themed open-world. After beta-testing, Club Penguin was made available to the general public on October 24, 2005, and expanded into a large online community, such that by late 2007, it was claimed Club Penguin had over 30 million user accounts. In July 2013, Club Penguin had over 200 million registered user accounts.
While free memberships were available, revenue was predominantly raised through paid memberships, which allowed players to access a range of additional features, such as the ability to purchase virtual clothing, furniture, and in-game pets called "puffles" for their penguins through the usage of in-game currency. The success of Club Penguin led to New Horizon being purchased by the Walt Disney Company in August 2007 for the sum of 350 million dollars, with an additional 350 million dollars in bonuses should specific targets be met by 2009.
The game was specifically designed for children aged 6 to 14 (however, users of any age were allowed to play Club Penguin). Thus, a major focus of the developers was on child safety, with a number of features having been introduced to the game to facilitate this. These features included offering an "Ultimate Safe Chat" mode, whereby users selected their comments from a menu; filtering that prevented swearing and the revelation of personal information; and moderators who patrolled the game.
On January 30, 2017, it was announced that the game would be discontinued on March 29, 2017. Club Penguin later shut down its servers on March 30, 2017, at 12:01 AM PDT. The game was replaced by a successor, titled Club Penguin Island (which itself was discontinued the following year). Since being shut down, the original game has been hosted and recreated on a number of private servers using SWF files from the game's old website. Many of the private servers were shut down around May 15, 2020, after Digital Millennium Copyright Act filings by the Walt Disney Company were sent on May 13, 2020, initiated by concerns about Club Penguin Online, such as children being groomed by pedophiles and child pornography.
History
Predecessors (2000–2004)
The first seeds of what would become Club Penguin began as a Flash 4 web-based game called Snow Blasters that developer Lance Priebe had been developing in his spare time in July 2000. Priebe's attention was brought to penguins after he "happened to glance at a Far Side cartoon featuring penguins that was sitting on his desk." The project was never finished, and instead morphed into Experimental Penguins. Experimental Penguins was released through Priebe's company of employment, the Kelowna, British Columbia, Canada-based online game and comic developer Rocketsnail Games, in July 2000, though it ultimately went offline the following year. It was used as the inspiration for Penguin Chat (also known as Penguin Chat 1), a similar game which was released shortly after Experimental Penguins' removal. Released January 2003, Penguin Football Chat (also known as Penguin Chat 2) was the second attempt at a penguin-themed MMORPG, and was created on FLASH 5 and used the same interface as Experimental Penguins. The game contained various minigames; the premiere title of RocketSnail Games was Ballistic Biscuit, a game that would be placed into Experimental Penguins and eventually be adapted into Club Penguin's Hydro Hopper. RocketSnails Games' Mancala Classic would also be placed into the game as Mancala.
Lance Priebe, as well as co-workers Lane Merrifield and Dave Krysko, started to formulate the Club Penguin concept when the trio were unsuccessful in finding "something that had some social components but was safe, and not just marketed as safe" for their own children. Dave Krysko in particular wanted to build a safe social-networking site their kids could enjoy free of advertising. In 2003, Merrifield and Priebe approached their boss, with the idea of creating a spinoff company to develop the new product. The spin-off company would be known as New Horizon Interactive.
Early history (2004–2007)
Work commenced on the project in 2004, and the team settled on a name in the summer of 2005. The developers used the previous project Penguin Chat 2 – which was still online – as a jumping-off point in the design process, while incorporating concepts and ideas from Experimental Penguins. Penguin Chat's third version was released in April 2005, and was used to test the client and servers of Penguin Chat 4 (renamed Club Penguin). Variants of Penguin Chat 3 included Crab Chat, Chibi Friends Chat, Goat Chat, Ultra-Chat, and TV Chat. Users from Penguin Chat were invited to beta test Club Penguin. The original plan was to release Club Penguin in 2010, but since the team had decided to fast-track the project, the first version of Club Penguin went live on October 24, 2005, just after Penguin Chat servers were shut down in August 2005. While Penguin Chat used ElectroServer, Club Penguin would use SmartFoxServer. The developers financed their start-up entirely with their own credit cards and personal lines of credit, and maintained 100 percent ownership. Club Penguin started with 15,000 users, and by March that number had reached 1.4 million—a figure which almost doubled by September, when it hit 2.6 million. By the time Club Penguin was two years old, it had reached 3.9 million users, despite lacking a marketing budget. The first mention of the game in The New York Times was in October 2006. The following year, Club Penguin spokesperson Karen Mason explained: "We offer children the training wheels for the kinds of activities they might pursue as they get older."
Acquisition by Disney (2007)
Although the three Club Penguin co-creators had turned down lucrative advertising offers and venture capital investments in the past, in August 2007, they agreed to sell both Club Penguin and its parent company to Disney for the sum of $350.93 million. In addition, the owners were promised bonuses of up to $350 million if they were able to meet growth targets by 2009. Disney ultimately didn't pay the extra $350 million, as Club Penguin missed both profit goals. At the point when it was purchased by Disney, Club Penguin had 11–12 million accounts, of which 700,000 were paid subscribers, and was generating $40 million in annual revenue. In making the sale, Merrifield has stated that their main focus during negotiations was philosophical, and that the intent was to provide themselves with the needed infrastructure in order to continue to grow. By late 2007, it was claimed that Club Penguin had over 30 million user accounts. In December of that year, The New York Times asserted that the game "attracts seven times more traffic than Second Life." Club Penguin was the 8th top social networking site in April 2008, according to Nielsen.
After Disney's acquisition, Disney Interactive had four MMOs to simultaneously juggle: ToonTown, Pirates of the Caribbean Online, Pixie Hollow, and Club Penguin, with World of Cars set to follow soon. Lane Merrifield assured GlobalToyNews at the time that "it's a lot of worlds to manage, but we have really strong teams." Merrifield's role changed from taking a backseat in daily game design to focusing on overall branding and quality control of the virtual gaming properties. One of his roles was to merge the Club Penguin studio New Horizon Interactive in Kelowna (renamed to Disneyland Studios Canada) with Disneyland Studios LA. Disneyland Studios Canada focused its efforts on one product (with such features as multilingual versions), while Disneyland Studios LA focused on customer products and franchises of a wide selection of games. Merrifield was responsible for cross-pollinating both cultures.
Franchising and growth (2007–2015)
Since the Disney purchase, Club Penguin continued to grow, becoming part of a larger franchise including video games, books, a television special, an anniversary song, and an app MMO. Disney has often used the game as a cross-promotion opportunity when releasing new films such as Frozen, Zootopia, and Star Wars, having special themed events and parties to celebrate their releases. The game forged an ever-growing mythology of characters and plot elements, including: a pirate, a journalist, and a secret agent.
In 2008, the first international office opened in Brighton, England, to personalise the level of moderation and player support. Later international office locations included São Paulo and Buenos Aires. On March 11, 2008, Club Penguin released the Club Penguin Improvement Project. This project allowed players to be part of the testing of new servers, which were put into use in Club Penguin on April 14, 2008. Players had a "clone" of their penguin made, to test these new servers for bugs and glitches. The testing was ended on April 4, 2008.
On June 20, 2011, the game's website temporarily crashed after the company let the Club Penguin domain name expire. In September 2011, one of Club Penguin's minigames, Puffle Launch, was released on iOS as an app. Merrifield commented: "Kids are going mobile and have been asking for Club Penguin to go there with them."
In late 2012, Merrifield left Disney Interactive to focus on his family and a new educational product, Freshgrade. Chris Heatherly took Merrifield's former position. The company dropped the words "Online Studios" from its name in 2013. As of July 2013, Club Penguin had over 200 million registered user accounts. In 2013, Club Penguin hired singer and former Club Penguin player Jordan Fisher to record a song entitled It's Your Birthday, to commemorate Club Penguin's 8th anniversary.
Decline and discontinuation (2015–2017)
In April 2015, it was revealed that Disney Interactive had laid off 28 members of Club Penguin's Kelowna headquarters due to the game's declining popularity. The company's UK office in Brighton was shut down around April 17, 2015. Some employees in the Los Angeles office were also let go. Disney Interactive replied to Castanet on the layoffs: "Disney Interactive continually looks to find ways to create efficiencies and streamline our operations. As part of this ongoing process, we are consolidating a small number of teams and are undergoing a targeted reduction in workforce."
On September 2, 2015, Club Penguin closed down the German and Russian versions of the site. A spin-off mobile app, Puffle Wild, was removed from the App Store and Google Play the same day in order to allow Disney Interactive to focus on Club Penguin. On January 11, 2016, the Sled Racer and SoundStudio apps (the former being an original game and the latter being a port of a game on the website) followed suit. With the closure of Disney Interactive in 2015, Club Penguin side-projects wound down to allow a streamlined effort to focus on the core Club Penguin experience; this involved the layoffs of 30 Disney Studios Canada staff.
On January 30, 2017, Club Penguin announced that the current game would be discontinued on March 29, 2017, to make way for its successor, Club Penguin Island. Membership payments for the original game were no longer accepted as of January 31, 2017, with paid members slated to receive emails about membership and refunds.
It became popular in the final weeks of Club Penguin to attempt speedruns to see how fast users could get banned from the site; the fastest world record was a tool-assisted speedrun (TAS).
Days before the shut down, Club Penguin announced that on the final day of the game's operation, all users would be given a free membership until the servers were disconnected.
On March 30, 2017, at 12:01:39 AM PDT (7:01:39 AM UTC), Club Penguin's servers were officially shut down.
Design
Business model
Prior to being purchased by Disney, Club Penguin was almost entirely dependent on membership fees to produce a revenue stream. The vast majority of users (90% according to The Washington Post) chose not to pay, instead taking advantage of the free play on offer. Those who chose to pay did so because full (paid) membership was required to access all of the services, such as the ability to purchase virtual clothes for the penguins and buy decorations for igloos, and because peer pressure created a "caste system," separating paid from unpaid members. Advertising, both in-game and on-site, was not incorporated into the system, although some competitors chose to employ it, including: Whyville, which used corporate sponsorship, and Neopets, which incorporated product placements.
An alternative revenue stream came through the development of an online merchandise shop, which opened on the Club Penguin website in August 2006, selling stuffed Puffles and T-shirts. Key chains, gift cards, and more shirts were added on November 7, 2006. In October 2008, a series of plush toys based on characters from Club Penguin, were made available online (both through the Club Penguin store and Disney's online store), and in retail outlets.
As with one of its major rivals, Webkinz, Club Penguin traditionally relied almost entirely on word-of-mouth advertising to increase its membership base.
Child safety
Club Penguin was designed for the ages of 6–14. Thus, one of the major concerns when designing Club Penguin was how to improve both the safety of participants and the suitability of the game to children. As Lane Merrifield stated, "the decision to build Club Penguin grew out of a desire to create a fun, virtual world that I and the site's other two founders would feel safe letting our own children visit." As a result, Club Penguin maintained a strong focus on child safety, to the point where the security features were described as almost "fastidious" and "reminiscent of an Orwellian dystopia", although it was also argued that this focus might "reassure more parents than it alienate[d]."
The system employed a number of different approaches in an attempt to improve child safety. The key approaches included preventing the use of inappropriate usernames; providing an "Ultimate Safe Chat" mode, which limited players to selecting phrases from a list; using an automatic filter during "Standard Safe Chat" (which allowed users to generate their own messages) and blocked profanity even when users employed "creative" methods to insert it into sentences; filtering seemingly innocuous terms, such as "mom"; and blocking both telephone numbers and email addresses. It also included employing paid moderators; out of 100 staff employed in the company in May 2007, Merrifield estimated that approximately 70 staff were dedicated to policing the game. It also included promoting users to "EPF (Elite Penguin Force) Agent" status, and encouraging them to report inappropriate behavior.
Each game server offered a particular type of chat—the majority allowing either chat mode, but some servers allowed only the "Ultimate Safe Chat" mode. When using "Standard Safe Chat", all comments made by users were filtered. When a comment was blocked, the user who made the comment saw it, but other users were unaware that it was made—suggesting to the "speaker" that they were being ignored, rather than encouraging them to try to find a way around the restriction.
Beyond these primary measures, systems were in place to limit the amount of time spent online, and the site did not feature any advertisements, because, as described by Merrifield, "within two or three clicks, a kid could be on a gambling site or an adult dating site." Nevertheless, after Club Penguin was purchased by Disney, concerns were raised that this state of affairs might change, especially in regard to potential spin-off products, although Disney continued to insist that it believed advertising to be "inappropriate" for a young audience.
Players who used profanity were often punished by an automatic 24-hour ban, although not all vulgar language resulted in an immediate ban. Players found by moderators to have broken Club Penguin rules were punished by a ban lasting "from 24 hours to forever depending on the offense."
Education and charity
Research shows that the design of virtual worlds like Club Penguin provides children with opportunities to develop literacy and communication skills while having a powerful impact on their social relationships and identity formation. One literacy practice saw players frequently engage in semiotic analysis of other players' profiles, which served as a display of that player's identity within the game. Other literacy and communication practices included the use of the in-game postal service and of emoticons, which served to build social cohesion and structure.
Coins for Change was an in-game charity fund-raising event which first appeared in 2007. The fund-raising lasted for approximately two weeks each December during the game's annual "Holiday Party". Players could "donate" their virtual coins to vote for three charitable causes: kids who were sick, the environment, and kids in developing countries. Players were able to donate in increments of 100, 250, 500, 1,000, 5,000, or 10,000 virtual coins. At the end of the campaign, a set amount of real-world money was divided among the causes based on the amount of in-game currency each cause received. At the end of the first campaign, the New Horizon Foundation donated a total of $1 million to the World Wide Fund for Nature, the Elizabeth Glaser Pediatric AIDS Foundation, and Free The Children. In both the 2007 and 2008 campaigns, two-and-a-half million players participated. In 2009, Club Penguin donated CA$1,000,000 to charitable projects around the world. In 2010, Club Penguin donated $300,000 towards building safe places, $360,000 towards protecting the Earth, and $340,000 towards providing medical help. Lane Merrifield said: "Our players are always looking for ways to make a difference and help others, and over the past five years they've embraced the opportunity to give through Coins For Change. It was exciting to see kids from 191 countries participate together." In 2011, the amount of money donated was doubled to $2 million, ostensibly in response to an unexpected increase in participation.
Plot and gameplay
Club Penguin was divided into various rooms and distinct areas. Illustrator Chris Hendricks designed many of the first environments. Each player was provided with an igloo for a home. Members had the option of opening their igloo so other penguins could access it via the map, under "Member Igloos." Members could also purchase larger igloos and decorate their igloos with items bought with virtual coins earned by playing mini-games. At least one party per month was held on Club Penguin. In most cases, a free clothing item was available, both for paid members and free users. Some parties also provided member-only rooms which only paid members could access. Some major Club Penguin parties were its annual Halloween and Holiday parties. Other large parties included the Music Jam, the Adventure Party, the Puffle Party, and the Medieval Party.
Franchise
Disney's franchising of the brand began with its acquisition of Club Penguin in 2007. In addition to the Club Penguin Island web-based video game, the franchise has also included console video games for Nintendo DS and Wii, television specials in the UK, and a series of books.
Critical reception
Club Penguin received mixed reviews. The site was awarded a "kids' privacy seal of approval" from the Better Business Bureau. Similarly, Brian Ward, a Detective Inspector at the Child Abuse Investigation Command in the United Kingdom, stated that it was good for children to experience a restricted system such as Club Penguin before moving into social networking sites, which provide less protection. In terms of simple popularity, the rapid growth of Club Penguin suggested considerable success, although there were signs that this was leveling out. Nielsen figures released in April 2008 indicated that in the previous 12 months, Club Penguin traffic had shrunk by 7%.
A criticism expressed by commentators was that the game encouraged consumerism and allowed players to cheat. While Club Penguin did not require members to purchase in-game products with real money (instead relying on a set monthly fee), players were encouraged to earn coins within the game with which to buy virtual products. Furthermore, Club Penguin was full of advertisements for their paid membership that repeatedly encouraged children to subscribe in order to gain access to the full range of activities. These advertisements included notices that certain levels of games and items were reserved for paid members and even included paid members that unwittingly acted as recruiters. Additionally, Club Penguin merchandise was sold through the website and in Disney retail stores that would unlock items and coins in the game. In this way, critics believe that Disney positioned children as economic subjects that became acculturated to shopping as a key cultural practice. Others argue that the use of in-game money may help teach children how to save money, select what to spend it on, improve their abilities at math, and encourage them to "practice safe money-management skills".
In addition, the "competitive culture" that this could create led to concerns about cheating, as children looked for "shortcuts" to improve their standing. It was suggested that this might influence their real-world behavior. To counter this, Club Penguin added guidelines to prevent cheating, and banned players who were caught cheating or who encouraged cheating.
In spite of the attempts to create a safe space for children in Club Penguin, concerns about safety and behavior still arose within the media. While the language in-game was filtered, discussions outside of Club Penguin were beyond the owner's control, and thus it was stated that third-party Club Penguin forums could become "as bawdy as any other chat". Even within the game, Club Penguin had its own forms of anti-social behavior and cyberbullying, which presented as angry emoticons, relentless throwing of snowballs at other players, and messages that slipped through the filtered chat. Also, the "caste system" between those who had membership and exclusive items and those who lacked full membership (and therefore were unable to own the "coolest" items) could lead to players having a difficult time attracting friends. Furthermore, some researchers were concerned that due to the differing experiences and privileges between paid and non-paid members, children would be exposed to a class system where some would be competing for higher and higher status. Others worried that the display of class and promotion of consumerism within Club Penguin fostered the notion that the accumulation of wealth and possessions is the direct result of one's success and status. Additionally, some critics have noted the presence of strong sexual connotations due to the modeling of romantic relationships and behaviors.
One criticism came from Caitlin Flanagan in The Atlantic Monthly: in relation to the safety procedures, she noted that Club Penguin was "certainly the safest way for unsupervised children to talk to potentially malevolent strangers—but why would you want them to do that in the first place?" While views of the strength of this criticism might vary, the concern was mirrored by Lynsey Kiely in the Sunday Independent, who quoted Karen Mason, Communications Director for Club Penguin, as saying "we cannot guarantee that every person who visits the site is a child."
On August 20, 2013, Disney announced that Toontown Online, Pixie Hollow, and Pirates of the Caribbean Online were closing directly because of Club Penguin and Disney's mobile games. This caused major controversy between Club Penguin and fans of the three games, especially Toontown, where some users had played for more than 12 years (Toontown's alpha test started in August 2001).
Private servers
A Club Penguin Private Server (commonly abbreviated and known as a CPPS) is an online multiplayer game that is not part of Club Penguin, but uses unlicensed SWF files from Club Penguin, a database, and a server emulator in order to create a similar environment for the game. Many now use these environments in order to play the original game after its discontinuation. CPPSes often contain features that did not exist in the original game such as custom items and rooms, free membership, etc.
Throughout the official game's existence, various players created private servers of Club Penguin, and in response to its closure, more private servers were created. Club Penguin Rewritten, a popular remake launched on February 12, 2017, had reached a million players by October 12, 2017, though it was discontinued "permanently" on March 4, 2018. Citing community support and funding, however, it returned with all accounts intact on April 27, 2018. Club Penguin Rewritten reached eight million registered accounts on December 2, 2020; nearly twice as many as Club Penguin had 14 years earlier in December 2006.
During the COVID-19 pandemic, private servers experienced a surge in popularity, with between 6,000 and 8,000 new players signing up each day. On April 16, 2020, American artist Soccer Mommy collaborated with Club Penguin Rewritten to host a virtual concert for her new album Color Theory. The event had been rescheduled from April 2, 2020, due to higher than expected player counts that overloaded the server.
As of May 21, 2020, the collective number of player accounts registered on all private servers numbered over 15 million.
Legal status
Since private servers essentially copy materials copyrighted by Disney, there has been much controversy as to whether or not creating and hosting them is legal. Disney and Club Penguin have pursued numerous CPPSes and attempted to have them taken down with DMCA notices.
Vulnerabilities
Many private servers have become vulnerable to DDoS attacks and database leaks due to insufficient security measures. On January 21, 2018, the login data of over 1.7 million Club Penguin Rewritten users was stolen in a data breach, and on July 27, 2019, the private server suffered a second data breach in which over 4 million further accounts were stolen.
Shutdown
On May 14, 2020, it was announced that all private servers using the Club Penguin brand were given DMCA take-down notices after allegations emerged concerning child predation by an administrator of another popular private server, Club Penguin Online. According to an investigation by the BBC, one man involved with the site had been arrested on suspicion of possessing child pornography. Detectives said the man from London had been released on bail pending further inquiries. On May 15, 2020, the site was shut down after complying with the DMCA takedown notice by The Walt Disney Company. In a statement, Disney said, "Child safety is a top priority for the Walt Disney Company and we are appalled by the allegations of criminal activity and abhorrent behaviour on this unauthorised website that is illegally using the Club Penguin brand and characters for its own purposes. [...] We continue to enforce our rights against this, and other, unauthorised uses of the Club Penguin game."
Awards and nominations
References
External links
Official Club Penguin Island website. Archived on December 20, 2018.
The technology behind Disney's Club Penguin
Club Penguin
2005 video games
Browser games
Children's websites
Disney video games
Disney acquisitions
Fictional penguins
Inactive massively multiplayer online games
Internet properties disestablished in 2017
Internet properties established in 2005
Massively multiplayer online games
Miniclip games
Persistent worlds
Video games about birds
Video games developed in Canada
Webby Award winners
|
57592104
|
https://en.wikipedia.org/wiki/MacOS%20Mojave
|
MacOS Mojave
|
macOS Mojave (version 10.14) is the fifteenth major release of macOS, Apple Inc.'s desktop operating system for Macintosh computers. Mojave was announced at Apple's Worldwide Developers Conference on June 4, 2018, and was released to the public on September 24, 2018. The operating system's name refers to the Mojave Desert and is part of a series of California-themed names that began with OS X Mavericks. It succeeded macOS High Sierra and was followed by macOS Catalina.
macOS Mojave brings several iOS apps to the desktop operating system, including Apple News, Voice Memos, and Home. It also includes a much more comprehensive "dark mode", is the final version of macOS to support 32-bit application software, and is also the last version of macOS to support the iPhoto app, which had already been superseded in OS X Yosemite (10.10) by the newer Photos app.
Mojave was well received and was supplemented by point releases after launch.
Overview
macOS Mojave was announced on June 4, 2018, at Apple's annual Worldwide Developers Conference in San Jose, California. Apple pitched Mojave, named after the California desert, as adding "pro" features that would benefit all users. The developer preview of the operating system was released for developers the same day, followed by a public beta on June 26. The retail version of 10.14 was released on September 24. It was followed by several point updates and supplemental updates.
System requirements
Mojave requires a GPU that supports Metal, and the list of compatible systems is more restrictive than the previous version, macOS High Sierra. Compatible models are the following Macintosh computers running OS X Mountain Lion or later:
MacBook: Early 2015 or newer
MacBook Air: Mid 2012 or newer
MacBook Pro: Mid 2012 or newer, Retina display not needed
Mac Mini: Late 2012 or newer
iMac: Late 2012 or newer
iMac Pro: Late 2017
Mac Pro: Late 2013 or newer; Mid 2010 or Mid 2012 models require a Metal-capable GPU
macOS Mojave requires at least 2 GB of RAM as well as 12.5 GB of available disk space to upgrade from OS X El Capitan, macOS Sierra, or macOS High Sierra, or 18.5 GB of disk space to upgrade from OS X Yosemite and earlier releases. Some features are not available on all compatible models.
Changes
System updates
macOS Mojave deprecates support for several legacy features of the OS. The graphics frameworks OpenGL and OpenCL are still supported by the operating system, but will no longer be maintained; developers are encouraged to use Apple's Metal library instead.
OpenGL is a cross-platform graphics framework designed to support a wide range of processors. Apple chose OpenGL in the late 1990s to build support for software graphics rendering into the Mac, after abandoning QuickDraw 3D. At the time, moving to OpenGL allowed Apple to take advantage of existing libraries that enabled hardware acceleration on a variety of different GPUs. As time went on, Apple shifted its efforts towards building its hardware platforms for mobile and desktop use. Metal makes use of the homogenized hardware by abandoning the abstraction layer and running on the "bare metal". Metal reduces CPU load, shifting more tasks to the GPU. It reduces driver overhead and improves multithreading, allowing every CPU thread to send commands to the GPU.
macOS does not natively support Vulkan, the Khronos group's official successor to OpenGL. The MoltenVK library can be used as a bridge, translating most of the Vulkan 1.0 API into the Metal API.
Continuing the process started in macOS High Sierra (10.13), which issued warnings about compatibility with 32-bit applications, Mojave issues warnings when opening 32-bit apps that they will not be supported in future updates. In macOS Mojave 10.14, this alert appears once every 30 days when launching the app, as macOS 10.15 will not support 32-bit applications.
When Mojave is installed, it will convert solid-state drives (SSDs), hard disk drives (HDDs), and Fusion Drives, from HFS Plus to APFS. On Fusion Drives using APFS, files will be moved to the SSD based on the file's frequency of use and its SSD performance profile. APFS will also store all metadata for a Fusion Drive's file system on the SSD.
New data protections require applications to get permission from the user before using the Mac camera and microphone or accessing system data like user Mail history and Messages database.
Removed features
Mojave removes integration with Facebook, Twitter, Vimeo, and Flickr, which was added in OS X Mountain Lion.
The only supported Nvidia graphics cards are the Quadro K5000 and GeForce GTX 680 Mac Edition.
Applications
Mojave features changes to existing applications as well as new ones. Finder now has metadata preview accessed via View > Show Preview, and many other updates, including a Gallery View (replacing Cover Flow) that lets users browse through files visually. After a screenshot is taken, as with iOS, the image appears in the corner of the display. The screenshot software can now record video, choose where to save files, and be opened via Shift + Command + 5.
Safari's Tracking Prevention features now prevent social media "Like" or "Share" buttons and comment widgets from tracking users without permission. The browser also sends less information to web servers about the user's system, reducing the chance of being tracked based on system configuration. It can also automatically create, autofill, and store strong passwords when users create new online accounts; it also flags reused passwords so users can change them.
A new Screenshot app was added to macOS Mojave to replace the Grab app. Screenshot can capture a selected area, window or the entire screen as well as screen record a selected area or the entire display. The Screenshot app is located in the /Applications/Utilities/ folder, as was the Grab app. Screenshot can also be accessed by pressing Shift + Command + 5.
FaceTime
macOS 10.14.1, released on October 30, 2018, adds Group FaceTime, which lets users chat with up to 32 people at the same time, using video or audio from an iPhone, iPad or Mac, or audio from Apple Watch. Participants can join in mid-conversation.
App Store
The Mac App Store was rewritten from the ground up and features a new interface and editorial content, similar to the iOS App Store. A new 'Discover' tab highlights new and updated apps; Create, Work, Play and Develop tabs help users find apps for a specific project or purpose.
iOS apps ported to macOS
Four new apps (News, Stocks, Voice Memos and Home) are ported to macOS Mojave from iOS, with Apple implementing a subset of UIKit on the desktop OS. Third-party developers would be able to port iOS applications to macOS in 2019.
With Home, Mac users can control their HomeKit-enabled accessories to do things like turn lights off and on or adjust thermostat settings. Voice Memos lets users record audio (e.g., personal notes, lectures, meetings, interviews, or song ideas), and access them from iPhone, iPad or Mac. Stocks delivers curated market news alongside a personalized watchlist, with quotes and charts.
Other applications found on macOS 10.14 Mojave
Adobe Flash Player (installer)
AirPort Utility
Archive Utility
Audio MIDI Setup
Automator
Bluetooth File Exchange
Books
Boot Camp Assistant
Calculator
Calendar
Chess
ColorSync Utility
Console
Contacts
Dictionary
Digital Color Meter
Disk Utility
DVD Player
Font Book
GarageBand (may not be pre-installed)
Grab (still might be pre-installed)
Grapher
iMovie (may not be pre-installed)
iTunes
Image Capture
Ink (can only be accessed by connecting a graphics tablet to your Mac)
Keychain Access
Keynote (may not be pre-installed)
Mail
Migration Assistant
Notes, version 4.6
Numbers (may not be pre-installed)
Pages (may not be pre-installed)
Photo Booth
Preview
QuickTime Player
Reminders
Screenshot (replaced Grab in Mojave)
Script Editor
Siri
Stickies
System Information
Terminal
TextEdit
Time Machine
VoiceOver Utility
X11/XQuartz (may not be pre-installed)
User interface
Dark mode and accent colors
Mojave introduces "Dark Mode", a Light-on-dark color scheme that darkens the user interface to make content stand out while the interface recedes. Users can choose dark or light mode when installing Mojave, or any time thereafter from System Preferences.
Apple's built-in apps support Dark Mode. App developers can implement Dark mode in their apps via a public API.
A limited dark mode that affected only the Dock, menu bar, and drop-down menus was previously introduced in OS X Yosemite.
Desktop
Stacks, a feature introduced in Mac OS X Leopard, now lets users organize desktop files into groups based on attributes such as file kind, date last opened, date modified, date created, name and tags. This is accessed via View > Use Stacks.
macOS Mojave features a new Dynamic Desktop that automatically changes specially made desktop backgrounds (two of which are included) to match the time of day.
Dock
The Dock has a space for recently used apps that have not previously been added to the Dock.
Preferences
macOS update functionality has been moved back to System Preferences from the Mac App Store. In OS X Mountain Lion (10.8), system and app updates moved to the App Store from Software Update.
Reception
Mojave was generally well received by technology journalists and the press. The Verge's Jacob Kastrenakes considered Mojave a relatively minor update, but Kastrenakes and Jason Snell thought the release hinted at the future direction of macOS. In contrast, Ars Technica's Andrew Cunningham felt that "Mojave feels, if not totally transformative, at least more consequential than the last few macOS releases have felt." Cunningham highlighted productivity improvements and continued work on macOS's foundation.
TechCrunch's Brian Heater dubbed Mojave "arguably the most focused macOS release in recent memory", playing an important role in reassuring professional users that Apple was still committed to them.
Mojave's new features were generally praised. Critics welcomed the addition of Dark Mode.
Release history
References
External links
– official site
macOS Mojave download page at Apple
14
X86-64 operating systems
2018 software
Computer-related introductions in 2018
|
39052822
|
https://en.wikipedia.org/wiki/Shearwater%20Research
|
Shearwater Research
|
Shearwater Research is a Canadian manufacturer of dive computers and rebreather electronics for technical diving.
History
In 2004, Shearwater Research was founded by Bruce Partridge, who produced its first products in a spare bedroom at his home. As of 2014, Shearwater was producing thousands of dive computers per year in a manufacturing facility with twenty employees.
From the beginning the company sought to develop products that are simple to use and easy to read underwater.
Shearwater Research began by building controller boards for the Innerspace Systems Corp (ISC) Megalodon rebreathers in 2004. There was a problem with the configuration and by the end of 2005, ISC was no longer offering the Shearwater electronics package. Since that time, the initial issues have been resolved and Shearwater electronics are again available for use on the ISC Megalodons.
Shearwater decompression computers began with an implementation of the Bühlmann decompression algorithm with gradient factors in their Shearwater GF in the spring of 2006. It was available either as a version displaying oxygen partial pressure with decompression information or as a rebreather controller version. In January 2007, the Shearwater GF was the computer used with the JJ-CCR.
With the release of the Predator in 2009, Shearwater moved away from the older LCD display technology to the use of newer technology OLED displays in their computers. This was the first color OLED diving computer available in the market with a user replaceable battery. Power was a major limiting factor in the development process to include the OLED technology.
With the Predator, Shearwater also introduced bluetooth to allow easier syncing with their desktop software. Their reason for the move to bluetooth was to make a computer that could be used on multiple operating systems. The Predator's two button design has been called "intuitive and easy to use". The top-of-the-line Predator will also allow for up to five breathing gases for the rebreather and up to five bail-out gasses. The user can make gas switches on the computer at any point during the dive.
Shearwater received their certification for ISO 9001-2008 in 2010 and all their products are compliant with CE, Federal Communications Commission (FCC) and IC international standards.
In 2011, Shearwater announced that they had licensed a technique to thermally monitor the condition of rebreather carbon dioxide absorbent canisters developed by the United States Navy Experimental Diving Unit. In collaboration with rEvo rebreathers, they were able to show that the thermal canister CO2 monitor would work with Shearwater's Predator dive computer.
Shearwater has continued to develop new ways to calculate decompression in their equipment by releasing an implementation of the Varying Permeability Model (VPM-B/GFS) in 2011. The "GFS" is for Gradient Factor Surfacing and indicates the combination where VPM and GF models are compared and the longer time utilized for the displayed profile.
The Shearwater Petrel has been described as the "Predator with improvements". The Petrel was designed to run on a user-serviceable standard AA battery and features an OLED display that automatically adjusts its brightness to suit ambient lighting. The unit is 40% smaller than the Predator. The Petrel includes both the Bühlmann algorithm and their VPM-B/GFS algorithm. The Petrel also extends the profile data storage that was previously available from 200 to approximately 1000 hours.
With the release of the Petrel, Shearwater also improved the educational materials available to their owners.
In 2013, Shearwater was presented with the International System Safety Society Award for safety in "Scientific Research & Development" at the 31st International System Safety Conference in Boston.
Shearwater's NERD, or Near Eye Remote Display, is a head-up display that places the diver's information in front of their eyes. The Shearwater NERD was released at Dive 2013 in Birmingham, UK.
In 2015, the Perdix wrist mounted dive computer was released. The Perdix is similar to the Petrel but has a 30% longer battery life and a thinner and lower profile. The computer was named after a genus of partridge. Unlike the Petrel, the Perdix is only available in a stand-alone configuration and does not have a version that can be connected to a rebreather.
In 2016, the Perdix AI was released. It built on the success of the Perdix by adding air integration features designed to function in conjunction with Pelagic Pressure Systems wireless gas pressure transmitters. The Perdix AI allows for 2 cylinder pressures to be displayed simultaneously.
In 2017, Shearwater launched the NERD 2. A successor to the original NERD heads-up dive computer, the NERD 2 eliminated the brain box from the NERD system, incorporating all of the electronics into the eyepiece. The NERD 2 contains a rechargeable lithium ion battery, heads-up compass, and dual air integration capability. Unlike the original NERD, the NERD 2 is available in a stand-alone model, making it practical for open circuit diving for the first time.
The Teric which was launched in May 2018, is Shearwater's first dive computer in a watch format.
Safety outreach
In 2010, Shearwater was one of the founding manufacturers for the Rebreather Education and Safety Association. Shearwater's Bruce Partridge served as Secretary for the founding board of the organization.
Partridge also presented at the Rebreather Forum 3 meeting held in 2012. He presented on the use of information technology with focus on human factors in equipment design.
Shearwater is also a sponsor for the diving research efforts of the Rubicon Foundation.
In 2016 Shearwater funded a rebreather sorb absorption research study by Harvey and colleagues.
Exploration support
A Shearwater Predator was used to calculate decompression on a 2010 expedition that led to the identification of HMS Snaefell, which went down on July 5, 1941.
Lance Robb utilized an ISC Megalodon rebreather with a Shearwater Predator in a 2010 expedition to explore Osprey Reef at a depth of .
Shearwater also supported research by the University of Connecticut and Ocean Opportunity to explore the Tongue of the Ocean. This project, funded by the National Geographic Society/Waitt Grants Program to explore the mesophotic zone between and carried The Explorers Club flag number 172. The Shearwater electronics were utilized to record the diver profiles.
Awards
The EUROTEK 2014 Innovation Award, for manufacturing "an advanced or technical diving product or service that has enabled you to further your diving or made your diving safer" was granted to Bruce and Lynn Partridge of Shearwater Research for the Petrel and NERD.
References
Diving equipment manufacturers
|
381842
|
https://en.wikipedia.org/wiki/OpenMP
|
OpenMP
|
OpenMP (Open Multi-Processing) is an application programming interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran, on many platforms, instruction-set architectures and operating systems, including Solaris, AIX, FreeBSD, HP-UX, Linux, macOS, and Windows. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.
OpenMP is managed by the nonprofit technology consortium OpenMP Architecture Review Board (or OpenMP ARB), jointly defined by a broad swath of leading computer hardware and software vendors, including Arm, AMD, IBM, Intel, Cray, HP, Fujitsu, Nvidia, NEC, Red Hat, Texas Instruments, and Oracle Corporation.
OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the standard desktop computer to the supercomputer.
An application built with the hybrid model of parallel programming can run on a computer cluster using both OpenMP and Message Passing Interface (MPI), such that OpenMP is used for parallelism within a (multi-core) node while MPI is used for parallelism between nodes. There have also been efforts to run OpenMP on software distributed shared memory systems, to translate OpenMP into MPI
and to extend OpenMP for non-shared memory systems.
Design
OpenMP is an implementation of multithreading, a method of parallelizing whereby a primary thread (a series of instructions executed consecutively) forks a specified number of sub-threads and the system divides a task among them. The threads then run concurrently, with the runtime environment allocating threads to different processors.
The section of code that is meant to run in parallel is marked accordingly, with a compiler directive that will cause the threads to form before the section is executed. Each thread has an id attached to it which can be obtained using a function (called omp_get_thread_num()). The thread id is an integer, and the primary thread has an id of 0. After the execution of the parallelized code, the threads join back into the primary thread, which continues onward to the end of the program.
By default, each thread executes the parallelized section of code independently. Work-sharing constructs can be used to divide a task among the threads so that each thread executes its allocated part of the code. Both task parallelism and data parallelism can be achieved using OpenMP in this way.
The runtime environment allocates threads to processors depending on usage, machine load and other factors. The runtime environment can assign the number of threads based on environment variables, or the code can do so using functions. The OpenMP functions are included in a header file labelled omp.h in C/C++.
History
The OpenMP Architecture Review Board (ARB) published its first API specifications, OpenMP for Fortran 1.0, in October 1997. In October the following year they released the C/C++ standard. 2000 saw version 2.0 of the Fortran specifications with version 2.0 of the C/C++ specifications being released in 2002. Version 2.5 is a combined C/C++/Fortran specification that was released in 2005.
Up to version 2.0, OpenMP primarily specified ways to parallelize highly regular loops, as they occur in matrix-oriented numerical programming, where the number of iterations of the loop is known at entry time. This was recognized as a limitation, and various task parallel extensions were added to implementations. In 2005, an effort to standardize task parallelism was formed, which published a proposal in 2007, taking inspiration from task parallelism features in Cilk, X10 and Chapel.
Version 3.0 was released in May 2008. Included in the new features in 3.0 is the concept of tasks and the task construct, significantly broadening the scope of OpenMP beyond the parallel loop constructs that made up most of OpenMP 2.0.
Version 4.0 of the specification was released in July 2013. It adds or improves the following features: support for accelerators; atomics; error handling; thread affinity; tasking extensions; user defined reduction; SIMD support; Fortran 2003 support.
The current version is 5.1, released in November 2020.
Note that not all compilers (and OSes) support the full set of features for the latest version(s).
Core elements
The core elements of OpenMP are the constructs for thread creation, workload distribution (work sharing), data-environment management, thread synchronization, user-level runtime routines and environment variables.
In C/C++, OpenMP uses #pragmas. The OpenMP specific pragmas are listed below.
Thread creation
The pragma omp parallel is used to fork additional threads to carry out the work enclosed in the construct in parallel. The original thread will be denoted as master thread with thread ID 0.
Example (C program): Display "Hello, world." using multiple threads.
#include <stdio.h>
#include <omp.h>
int main(void)
{
#pragma omp parallel
printf("Hello, world.\n");
return 0;
}
Use flag -fopenmp to compile using GCC:
$ gcc -fopenmp hello.c -o hello
Output on a computer with two cores, and thus two threads:
Hello, world.
Hello, world.
However, the output may also be garbled because of the race condition caused from the two threads sharing the standard output.
Hello, wHello, woorld.
rld.
Whether printf is atomic depends on the underlying implementation, unlike C++'s std::cout.
Work-sharing constructs
Used to specify how to assign independent work to one or all of the threads.
omp for or omp do: used to split up loop iterations among the threads, also called loop constructs.
sections: assigning consecutive but independent code blocks to different threads
single: specifying a code block that is executed by only one thread, a barrier is implied in the end
master: similar to single, but the code block will be executed by the master thread only and no barrier implied in the end.
Example: initialize the value of a large array in parallel, using each thread to do part of the work
int main(int argc, char **argv)
{
int a[100000];
#pragma omp parallel for
for (int i = 0; i < 100000; i++) {
a[i] = 2 * i;
}
return 0;
}
This example is embarrassingly parallel, and depends only on the value of i. The OpenMP parallel for directive tells the OpenMP system to split this task among its working threads. The threads will each receive a unique and private version of the variable. For instance, with two worker threads, one thread might be handed a version of i that runs from 0 to 49999 while the second gets a version running from 50000 to 99999.
Variant directives
Variant directives are one of the major features introduced in the OpenMP 5.0 specification to help programmers improve performance portability. They enable adaptation of OpenMP pragmas and user code at compile time. The specification defines traits to describe active OpenMP constructs, execution devices, and functionality provided by an implementation, context selectors based on the traits and user-defined conditions, and the metadirective and declare variant directives for users to program the same code region with variant directives.
The metadirective is an executable directive that conditionally resolves to another directive at compile time by selecting from multiple directive variants based on traits that define an OpenMP condition or context.
The declare variant directive has similar functionality as metadirective but selects a function variant at the call-site based on context or user-defined conditions.
The mechanism provided by the two variant directives for selecting variants is more convenient to use than the C/C++ preprocessing since it directly supports variant selection in OpenMP and allows an OpenMP compiler to analyze and determine the final directive from variants and context.
// code adaptation using preprocessing directives
int v1[N], v2[N], v3[N];
#if defined(nvptx)
#pragma omp target teams distribute parallel loop map(to:v1,v2) map(from:v3)
for (int i= 0; i< N; i++)
v3[i] = v1[i] * v2[i];
#else
#pragma omp target parallel loop map(to:v1,v2) map(from:v3)
for (int i= 0; i< N; i++)
v3[i] = v1[i] * v2[i];
#endif
// code adaptation using metadirective in OpenMP 5.0
int v1[N], v2[N], v3[N];
#pragma omp target map(to:v1,v2) map(from:v3)
#pragma omp metadirective \
when(device={arch(nvptx)}: target teams distribute parallel loop)\
default(target parallel loop)
for (int i= 0; i< N; i++)
v3[i] = v1[i] * v2[i];
Clauses
Since OpenMP is a shared memory programming model, most variables in OpenMP code are visible to all threads by default. But sometimes private variables are necessary to avoid race conditions and there is a need to pass values between the sequential part and the parallel region (the code block executed in parallel), so data environment management is introduced as data sharing attribute clauses by appending them to the OpenMP directive. The different types of clauses are:
Data sharing attribute clauses
shared: the data declared outside a parallel region is shared, which means visible and accessible by all threads simultaneously. By default, all variables in the work sharing region are shared except the loop iteration counter.
private: the data declared within a parallel region is private to each thread, which means each thread will have a local copy and use it as a temporary variable. A private variable is not initialized and the value is not maintained for use outside the parallel region. By default, the loop iteration counters in the OpenMP loop constructs are private.
default: allows the programmer to state that the default data scoping within a parallel region will be either shared, or none for C/C++, or shared, firstprivate, private, or none for Fortran. The none option forces the programmer to declare each variable in the parallel region using the data sharing attribute clauses.
firstprivate: like private except initialized to original value.
lastprivate: like private except original value is updated after construct.
reduction: a safe way of joining work from all threads after construct.
Synchronization clauses
critical: the enclosed code block will be executed by only one thread at a time, and not simultaneously executed by multiple threads. It is often used to protect shared data from race conditions.
atomic: the memory update (write, or read-modify-write) in the next instruction will be performed atomically. It does not make the entire statement atomic; only the memory update is atomic. A compiler might use special hardware instructions for better performance than when using critical.
ordered: the structured block is executed in the order in which iterations would be executed in a sequential loop
barrier: each thread waits until all of the other threads of a team have reached this point. A work-sharing construct has an implicit barrier synchronization at the end.
nowait: specifies that threads completing assigned work can proceed without waiting for all threads in the team to finish. In the absence of this clause, threads encounter a barrier synchronization at the end of the work sharing construct.
Scheduling clauses
schedule (type, chunk): This is useful if the work sharing construct is a do-loop or for-loop. The iterations in the work sharing construct are assigned to threads according to the scheduling method defined by this clause. The three types of scheduling are:
static: Here, all the threads are allocated iterations before they execute the loop iterations. The iterations are divided among threads equally by default. However, specifying an integer for the parameter chunk will allocate chunk number of contiguous iterations to a particular thread.
dynamic: Here, some of the iterations are allocated to a smaller number of threads. Once a particular thread finishes its allocated iteration, it returns to get another one from the iterations that are left. The parameter chunk defines the number of contiguous iterations that are allocated to a thread at a time.
guided: A large chunk of contiguous iterations are allocated to each thread dynamically (as above). The chunk size decreases exponentially with each successive allocation to a minimum size specified in the parameter chunk
IF control
if: This will cause the threads to parallelize the task only if a condition is met. Otherwise the code block executes serially.
Initialization
firstprivate: the data is private to each thread, but initialized using the value of the variable using the same name from the master thread.
lastprivate: the data is private to each thread. The value of this private data will be copied to a global variable using the same name outside the parallel region if current iteration is the last iteration in the parallelized loop. A variable can be both firstprivate and lastprivate.
threadprivate: The data is a global data, but it is private in each parallel region during the runtime. The difference between threadprivate and private is the global scope associated with threadprivate and the preserved value across parallel regions.
Data copying
copyin: similar to firstprivate for private variables, threadprivate variables are not initialized, unless using copyin to pass the value from the corresponding global variables. No copyout is needed because the value of a threadprivate variable is maintained throughout the execution of the whole program.
copyprivate: used with single to support the copying of data values from private objects on one thread (the single thread) to the corresponding objects on other threads in the team.
Reduction
reduction (operator | intrinsic : list): the variable has a local copy in each thread, but the values of the local copies will be summarized (reduced) into a global shared variable. This is very useful if a particular operation (specified in operator for this particular clause) on a variable runs iteratively, so that its value at a particular iteration depends on its value at a prior iteration. The steps that lead up to the operational increment are parallelized, but each thread updates the global variable in a thread-safe manner. This would be required in parallelizing numerical integration of functions and differential equations, as a common example.
Others
flush: The value of this variable is restored from the register to the memory for using this value outside of a parallel part
master: Executed only by the master thread (the thread which forked off all the others during the execution of the OpenMP directive). No implicit barrier; other team members (threads) are not required to reach it.
User-level runtime routines
Used to modify/check the number of threads, detect whether the execution context is in a parallel region, query how many processors are in the current system, set/unset locks, use timing functions, etc.
Environment variables
A method to alter the execution features of OpenMP applications. Used to control loop iterations scheduling, default number of threads, etc. For example, OMP_NUM_THREADS is used to specify number of threads for an application.
Implementations
OpenMP has been implemented in many commercial compilers. For instance, Visual C++ 2005, 2008, 2010, 2012 and 2013 support it (OpenMP 2.0, in Professional, Team System, Premium and Ultimate editions), as well as Intel Parallel Studio for various processors. Oracle Solaris Studio compilers and tools support the latest OpenMP specifications with productivity enhancements for Solaris OS (UltraSPARC and x86/x64) and Linux platforms. The Fortran, C and C++ compilers from The Portland Group also support OpenMP 2.5. GCC has also supported OpenMP since version 4.2.
Compilers with an implementation of OpenMP 3.0:
GCC 4.3.1
Mercurium compiler
Intel Fortran and C/C++ versions 11.0 and 11.1 compilers, Intel C/C++ and Fortran Composer XE 2011 and Intel Parallel Studio.
IBM XL compiler
Sun Studio 12 update 1 has a full implementation of OpenMP 3.0
Multi-Processor Computing
Several compilers support OpenMP 3.1:
GCC 4.7
Intel Fortran and C/C++ compilers 12.1
IBM XL C/C++ compilers for AIX and Linux, V13.1 & IBM XL Fortran compilers for AIX and Linux, V14.1
LLVM/Clang 3.7
Absoft Fortran Compilers v. 19 for Windows, Mac OS X and Linux
Compilers supporting OpenMP 4.0:
GCC 4.9.0 for C/C++, GCC 4.9.1 for Fortran
Intel Fortran and C/C++ compilers 15.0
IBM XL C/C++ for Linux, V13.1 (partial) & XL Fortran for Linux, V15.1 (partial)
LLVM/Clang 3.7 (partial)
Several Compilers supporting OpenMP 4.5:
GCC 6 for C/C++
Intel Fortran and C/C++ compilers 17.0, 18.0, 19.0
LLVM/Clang 12
Partial support for OpenMP 5.0:
GCC 9 for C/C++
Intel Fortran and C/C++ compilers 19.1
LLVM/Clang 12
Auto-parallelizing compilers that generate source code annotated with OpenMP directives:
iPat/OMP
Parallware
PLUTO
ROSE (compiler framework)
S2P by KPIT Cummins Infosystems Ltd.
Several profilers and debuggers expressly support OpenMP:
Intel VTune Profiler - a profiler for the x86 CPU and Xe GPU architectures
Intel Advisor - a design assistance and analysis tool for OpenMP and MPI codes
Allinea Distributed Debugging Tool (DDT) – debugger for OpenMP and MPI codes
Allinea MAP – profiler for OpenMP and MPI codes
TotalView - debugger from Rogue Wave Software for OpenMP, MPI and serial codes
ompP – profiler for OpenMP
VAMPIR – profiler for OpenMP and MPI code
Pros and cons
Pros:
Portable multithreading code (in C/C++ and other languages, one typically has to call platform-specific primitives in order to get multithreading).
Simple: need not deal with message passing as MPI does.
Data layout and decomposition is handled automatically by directives.
Scalability comparable to MPI on shared-memory systems.
Incremental parallelism: can work on one part of the program at one time, no dramatic change to code is needed.
Unified code for both serial and parallel applications: OpenMP constructs are treated as comments when sequential compilers are used.
Original (serial) code statements need not, in general, be modified when parallelized with OpenMP. This reduces the chance of inadvertently introducing bugs.
Both coarse-grained and fine-grained parallelism are possible.
In irregular multi-physics applications which do not adhere solely to the SPMD mode of computation, as encountered in tightly coupled fluid-particulate systems, the flexibility of OpenMP can have a big performance advantage over MPI.
Can be used on various accelerators such as GPGPU and FPGAs.
Cons:
Risk of introducing difficult to debug synchronization bugs and race conditions.
Only runs efficiently on shared-memory multiprocessor platforms (see however Intel's Cluster OpenMP and other distributed shared memory platforms).
Requires a compiler that supports OpenMP.
Scalability is limited by memory architecture.
No support for compare-and-swap.
Reliable error handling is missing.
Lacks fine-grained mechanisms to control thread-processor mapping.
High chance of accidentally writing false sharing code.
Performance expectations
One might expect to get an N-times speedup when running a program parallelized using OpenMP on an N-processor platform. However, this seldom occurs, for these reasons:
When a dependency exists, a process must wait until the data it depends on is computed.
When multiple processes share a resource that cannot be accessed in parallel (like a file to write to), their requests are executed sequentially. Therefore, each thread must wait until the other thread releases the resource.
A large part of the program may not be parallelized by OpenMP, which means that the theoretical upper limit of speedup is limited according to Amdahl's law.
N processors in a symmetric multiprocessing (SMP) may have N times the computation power, but the memory bandwidth usually does not scale up N times. Quite often, the original memory path is shared by multiple processors and performance degradation may be observed when they compete for the shared memory bandwidth.
Many other common problems affecting the final speedup in parallel computing also apply to OpenMP, like load balancing and synchronization overhead.
Compiler optimisation may not be as effective when invoking OpenMP. This can commonly lead to a single-threaded OpenMP program running slower than the same code compiled without an OpenMP flag (which will be fully serial).
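The Amdahl's law bound mentioned in the list above can be written explicitly. If a fraction p of the program's execution time is parallelizable, the speedup on N processors is at most

```latex
S(N) = \frac{1}{(1 - p) + \frac{p}{N}}
```

Even as N grows without bound, S approaches 1/(1 − p); a program that is 90% parallelizable can therefore never run more than 10 times faster, no matter how many processors are used.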
Thread affinity
Some vendors recommend setting the processor affinity on OpenMP threads to associate them with particular processor cores.
This minimizes thread migration and context-switching cost among cores. It also improves the data locality and reduces the cache-coherency traffic among the cores (or processors).
Benchmarks
A variety of benchmarks has been developed to demonstrate the use of OpenMP, test its performance and evaluate correctness.
Simple examples
OmpSCR: OpenMP Source Code Repository
Performance benchmarks include:
NAS Parallel Benchmark
Barcelona OpenMP Task Suite, a collection of applications that allow testing of OpenMP tasking implementations.
SPEC series
SPEC OMP 2012
The SPEC ACCEL benchmark suite testing OpenMP 4 target offloading API
The SPEChpc 2002 benchmark
CORAL benchmarks
Exascale Proxy Applications
Rodinia focusing on accelerators.
Problem Based Benchmark Suite
Correctness benchmarks include:
OpenMP Validation Suite
OpenMP Validation and Verification Testsuite
DataRaceBench is a benchmark suite designed to systematically and quantitatively evaluate the effectiveness of OpenMP data race detection tools.
AutoParBench is a benchmark suite to evaluate compilers and tools which can automatically insert OpenMP directives.
See also
Concurrency (computer science)
Heterogeneous System Architecture
Parallel programming model
POSIX Threads
Unified Parallel C
Bulk synchronous parallel
Partitioned global address space
SequenceL
References
Further reading
Quinn Michael J, Parallel Programming in C with MPI and OpenMP McGraw-Hill Inc. 2004.
R. Chandra, R. Menon, L. Dagum, D. Kohr, D. Maydan, J. McDonald, Parallel Programming in OpenMP. Morgan Kaufmann, 2000.
R. Eigenmann (Editor), M. Voss (Editor), OpenMP Shared Memory Parallel Programming: International Workshop on OpenMP Applications and Tools, WOMPAT 2001, West Lafayette, IN, USA, July 30–31, 2001. (Lecture Notes in Computer Science). Springer 2001.
B. Chapman, G. Jost, R. van der Pas, D.J. Kuck (foreword), Using OpenMP: Portable Shared Memory Parallel Programming. The MIT Press (October 31, 2007).
Parallel Processing via MPI & OpenMP, M. Firuziaan, O. Nommensen. Linux Enterprise, 10/2002
MSDN Magazine article on OpenMP
SC08 OpenMP Tutorial (PDF) – Hands-On Introduction to OpenMP, Mattson and Meadows, from SC08 (Austin)
OpenMP Specifications
Parallel Programming in Fortran 95 using OpenMP (PDF)
External links
, includes the latest OpenMP specifications, links to resources, forums where questions can be asked and are answered by OpenMP experts and implementors
OpenMPCon, website of the OpenMP Developers Conference
IWOMP, website for the annual International Workshop on OpenMP
UK OpenMP Users, website for the UK OpenMP Users group and conference
Blaise Barney, Lawrence Livermore National Laboratory site on OpenMP
Combining OpenMP and MPI (PDF)
Measure and visualize OpenMP parallelism by means of a C++ routing planner calculating the Speedup factor
Intel Advisor
Application programming interfaces
Articles with example Fortran code
C programming language family
Fortran
Parallel computing
|
861745
|
https://en.wikipedia.org/wiki/Wendy%20M.%20Grossman
|
Wendy M. Grossman
|
Wendy M. Grossman (born January 26, 1954 in New York City) is a journalist, blogger, and folksinger. Her writing has been published in several newspapers, magazines, and specialized publications. She is the recipient of the 2013 Enigma Award for information security reporting.
Education
Grossman graduated from Cornell University in 1975.
Career
Writer and editor
In 1987, she founded the magazine The Skeptic in the United Kingdom and edited it for two years, resuming the editorship from 1999 to 2001. As founder and editor, she has appeared on numerous UK TV and radio programmes. Her credits since 1990 include work for Scientific American, The Guardian, and the Daily Telegraph, as well as New Scientist, Wired and Wired News, and The Inquirer for which she wrote a regular weekly net.wars column. That column continues in NewsWireless and on her own site every Friday. She was a columnist for Internet Today from July 1996 until it closed in April 1997, and together with Dominic Young ran the Fleet Street Forum on CompuServe UK in the mid-1990s.
She edited an anthology of interviews with leading computer industry figures taken from the pages of the British computer magazine Personal Computer World. Entitled Remembering the Future, it was published in January 1997 by Springer Verlag. Her 1998 book net.wars was one of the first to have its full text published on the Web.
She was a member of an external board that advised Edinburgh University on the creation of the Intellectual Property and Law Centre.
She sits on the executive committee of the Association of British Science Writers and the Advisory Councils of the Open Rights Group and Privacy International.
In February 2011 Grossman was elected as a Fellow of the Committee for Skeptical Inquiry.
Folksinger
She was a full-time folksinger from 1975–83 and her folk album Roseville Fair was released in 1980. She also played on Archie Fisher's 1976 LP The Man With a Rhyme.
She was president of the Cornell Folk Song Club, the oldest university-affiliated, student-run folk song club in the US, from 1973 to 1975.
TV appearances
In 2005, Grossman featured on an episode of the BBC Three comedy spoof series High Spirits with Shirley Ghostman.
Awards
In 2013, Grossman was the winner of the Enigma Award, part of the BT Information Security Journalism Awards, "for her dedication and outstanding contribution to information security journalism, recognising her extensive writing on the subject for several publications over a number of years".
Works
Remembering the Future: Interviews from Personal Computer World (1996)
Net.wars (1998)
From Anarchy to Power: The Net Comes of Age (2001)
The Daily Telegraph A-Z Guide to the Internet (2001)
The Daily Telegraph Small Business Guide to Computer Networking (2003)
Why Statues Weep: The Best of the "Skeptic" (2010) – with Chris French
References
External links
Official website
Wendy Grossman on LiveJournal
Wendy Grossman in The Guardian
NewsWirelessNet, where her column net.wars appears every Friday
Full text of net.wars, Wendy Grossman, 1997–99 NYU Press,
1954 births
Living people
21st-century American non-fiction writers
American bloggers
American folk singers
American technology writers
American women bloggers
American women journalists
Cornell University alumni
Riverdale Country School alumni
Women technology writers
Writers from New York City
20th-century American non-fiction writers
20th-century American women writers
21st-century American women writers
20th-century American journalists
21st-century American journalists
|
16742756
|
https://en.wikipedia.org/wiki/Rich%20Communication%20Services
|
Rich Communication Services
|
Rich Communication Services (RCS) is a communication protocol between mobile telephone carriers and between phone and carrier, aiming to replace SMS messages with a text-message system that is richer, provides phonebook polling (for service discovery), and can transmit in-call multimedia. It is part of the broader IP Multimedia Subsystem. Google added support for end-to-end encryption for one-on-one conversations in its own extension.
It is also marketed as Advanced Messaging, Chat, joyn, SMSoIP, Message+, and SMS+.
In early 2020, it was estimated that RCS was available from 88 operators in 59 countries with approximately 390 million users per month.
History
The Rich Communication Suite industry initiative
was formed by a group of industry promoters in 2007. In February 2008 the GSM Association officially became the project 'home' of RCS and an RCS steering committee was established by the organisation.
The steering committee specified the definition, testing, and integration of the services in the application suite known as RCS. Three years later, the RCS project released a new specification – RCS-e (e = 'enhanced'), which included various iterations of the original RCS specifications. The GSMA program is now called Rich Communication Services.
The GSMA published the Universal Profile in November 2016. The Universal Profile is a single GSMA specification for advanced communications. Carriers that deploy the Universal Profile guarantee interconnection with other carriers. 47 mobile network operators, 11 manufacturers, and 2 OS providers (Google and Microsoft) have announced their support. Google's Jibe Cloud platform is an implementation of the RCS Universal Profile, designed to help carriers launch RCS quickly and scale easily.
Samsung is the major device original equipment manufacturer (OEM) to support RCS. Samsung RCS capable devices have been commercially launched in Europe since 2012 and in the United States since 2015.
Google supports RCS on Android devices with its Android SMS app Messages. In April 2018, it was reported that Google would be transferring the team that was working on its Google Allo messaging service to work on a wider RCS implementation. In June 2019, Google announced that it would begin to deploy RCS on an opt-in basis via the Messages app (branded as chat features), with service compliant with the Universal Profile and hosted by Google rather than the user's carrier. The rollout of this functionality began in France and the United Kingdom. In response to concerns over the lack of end-to-end encryption in RCS, Google stated that it would only retain message data in transit until it is delivered to the recipient. In November 2020, Google announced that it would begin to roll out end-to-end encryption for one-on-one conversations between Messages users, beginning with the beta version of the app. In December 2020, Samsung updated its Samsung Experience messages app to also allow users to opt into Chat.
In October 2019, the four major U.S. carriers announced an agreement to form the "Cross-Carrier Messaging Initiative" to jointly implement RCS using a newly developed app. This service will be compatible with the Universal Profile. However, both T-Mobile and AT&T later signed deals with Google to replace their messaging app with Google's own Messages app.
RCS specifications
RCS combines different services defined by 3GPP and Open Mobile Alliance (OMA) with an enhanced phonebook. Another phone's capabilities and presence information can be discovered and displayed by a mobile phone.
RCS reuses 3GPP specified IMS core system as the underlying service platform taking care of issues such as authentication, authorization, registration, charging and routing.
Release 1 Version 1.0 (15.12.2008) Offered the first definitions for the enrichment of voice and chat with content sharing, driven from an RCS enhanced address book (EAB).
Release 2 Version 1.0 (31.08.2009) Added broadband access to RCS features: enhancing the messaging and enabling sharing of files.
Release 3 Version 1.0 (25.02.2010) Focused on the broadband device as a primary device.
Release 4 Version 1.0 (14.02.2011) Included support for LTE.
Release 5 Version 1.0 (19.04.2012) RCS 5.0 is completely backwards-compatible with RCS-e V1.2 specifications and also includes features from RCS 4 and new features such as IP video call, IP voice call and Geo-location exchange. RCS5.0 supports both OMA CPM and OMA SIMPLE IM. RCS 5.0 includes the following features.
Standalone Messaging
1-2-1 Chat
Group Chat
File Transfer
Content Sharing
Social Presence Information
IP Voice call (IR92 and IR.58)
IP Video call (IR.94)
Geolocation Exchange
Capability Exchange based on Presence or SIP OPTIONS
Release 5.1 5.1 is completely backwards compatible with the RCS-e V1.2 and RCS 5.0 specifications. It introduces additional new features such as Group Chat Store & Forward, File Transfer in Group Chat, File Transfer Store & Forward, and Best Effort Voice Call, as well as lessons-learnt and bug fixes from the V1.2 interoperability testing efforts. RCS 5.1 supports both OMA CPM and OMA SIMPLE IM.
Version 1.0 (13.08.2012)
Version 2.0 (03.05.2013)
Version 3.0 (25.09.2013)
Version 4.0 (28.11.2013)
Release 5.2 Version 5.0 (07.05.2014) Improved central message store and introduced service extension tags into the specification. It also introduced a number of incremental improvements and bug fixes to RCS 5.1 V4.0 that improve the user experience and resolve issues that were noticed in deployed RCS networks
Release 5.3 Version 6.0 (28.02.2015)
Release 6.0 Version 7.0 (21.03.2016) Support for Visual Voice Mail and more
Release 7.0 Version 8.0 (28.06.2017) Support for Chatbots, SMS fallback features and more
Release 8.0 Version 9.0 (16.05.2018) Support for additional Chatbots features and vCard 4.0
RCS-e (enhanced)
Initial Version (May 2011)
Version 1.2 (28.11.2011)
Version 1.2.2 (04.07.2012)
Joyn
The GSMA defined a series of specific implementations of the RCS specifications. The RCS specifications often define a number of options for implementing individual communications features, resulting in challenges in delivering interoperable services between carriers. The Joyn specifications aim to define a more specific implementation that promotes standardization and simplifies interconnection between carriers.
At this time there are two major relevant releases:
Joyn Hot Fixes - based upon the RCS 1.2.2 specification (previously known as RCS-e), this includes 1:1 chat, group chat, MSRP file sharing and video sharing (during a circuit-switched call). Services based upon this specification are live in Spain, France and Germany.
Joyn Blackbird Drop 1 - based upon the RCS 5.1 specification, this extends the Joyn Hot Fixes service to include HTTP file sharing, location sharing, group file sharing, and other capabilities such as group chat store and forward. Joyn Blackbird Drop 1 is backwards compatible with Joyn Hot Fixes. Vodafone Spain's network is accredited for Joyn Blackbird Drop 1, and Telefónica and Orange Spain have also been involved in interoperability testing with vendors of Joyn Blackbird Drop 1 clients. A number of client vendors are accredited to Joyn Blackbird Drop 1.
Two or more future releases are planned:
Joyn Blackbird Drop 2 - also based upon the RCS 5.1 specification, this will primarily add IP voice and video calling. The test cases for Joyn Blackbird Drop 2 have yet to be released by the GSMA.
Joyn Crane - already available on the GSMA web page.
RCS Universal Profile
The GSMA's Universal Profile is a single set of features for product development and operator deployment of RCS.
Version 1.0 (November 2016) Includes core features such as capability discovery which will be interoperable between regions, chat, group chat, file transfer, audio messaging, video share, multi-device, enriched calling, location share and live sketching.
Version 2.0 (July 2017) Includes Messaging as a Platform, APIs, plug-in integration and improved authentication and app security.
Version 2.1 (December 2017)
Version 2.2 (May 2018)
Version 2.3 (December 2018)
Version 2.4 (October 2019) Removes plug-in integration and includes integrated seamless web-view.
RCS Business Messaging
RCS Business Messaging (RBM) is the B2C (A2P in telecoms terminology) version of RCS. This is supposed to be an answer to third-party messaging apps (or OTTs) absorbing mobile operators' messaging traffic and associated revenues. While RCS is designed to win back Person-to-Person (P2P) traffic, RBM is intended to retain and grow this A2P traffic. RCS offers "rich" features similar to those of messaging apps, but delivered (in theory) via the preloaded SMS messaging app - for example Google Messages or Samsung Messages. By making these features available in a B2C setting, RBM is expected to attract marketing and customer service spend from enterprises, thanks to improved customer engagement and interactive features that facilitate new use cases. This was the primary reason for the development of RCS by the GSMA.
RBM includes features not available to ordinary users, including predefined quick-reply suggestions, rich cards, carousels, and branding. This last feature is intended to increase consumer confidence and reduce fraud through the implementation of a verified sender system. These additional features are only available with the use of a messaging-as-a-platform (MaaP) server integrated with the operator's network. The MaaP controls the verified sender details, unlocking RBM features, while also segregating P2P and A2P RCS messages, aiding monetisation of the latter (SMS currently suffers from grey routes, where A2P messages are sent over P2P connections, which are cheaper or often free).
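As a rough illustration of the structured payloads RBM enables, the sketch below models a rich card with suggested quick replies as a Python dict. The field names are modeled loosely on the payload shape of Google's RCS Business Messaging API and are illustrative, not normative.

```python
# Illustrative sketch only: the shape of a rich-card A2P message with
# suggested quick replies, loosely modeled on Google's RCS Business
# Messaging API. Field names are for illustration, not normative.
rich_card_message = {
    "contentMessage": {
        "richCard": {
            "standaloneCard": {
                "cardOrientation": "VERTICAL",
                "cardContent": {
                    "title": "Your parcel is on its way",
                    "description": "Estimated delivery: today, 14:00-16:00",
                    "suggestions": [
                        {"reply": {"text": "Track parcel",
                                   "postbackData": "track"}},
                        {"reply": {"text": "Reschedule",
                                   "postbackData": "reschedule"}},
                    ],
                },
            }
        }
    }
}

# A plain P2P text message carries none of this structure; a MaaP
# server unlocks these fields only for verified A2P senders.
card = rich_card_message["contentMessage"]["richCard"]["standaloneCard"]
print(card["cardContent"]["title"])
```

The quick-reply `suggestions` are what render as tappable chips in the user's messaging app, and `postbackData` is what the agent receives when one is tapped.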
Status
According to GSMA PR in 2012, Rich Communication Services (RCS) carriers from around the globe supporting the RCS standard included AT&T, Bell Mobility, Bharti Airtel, Deutsche Telekom, Jio, KPN, KT Corporation, LG U+, Orange, Orascom Telecom, Rogers Communications, SFR, SK Telecom, Telecom Italia, Telefónica, Telia Company, Telus, Verizon and Vodafone.
Universal Profile is currently backed by "a large and growing ecosystem" (68 supporters in 2019). Universal Profile support is optional in 4G, but mandatory in 5G networks and devices.
55 operators: Advanced Info Service, América Móvil, AT&T Mobility, Axiata, Beeline (brand), Bell Mobility, Bharti Airtel, China Telecom, China Unicom, Claro Americas, Deutsche Telekom, Etisalat, Globe Telecom, Ice, Indosat Ooredoo, Jio, KDDI, KPN, M1 Limited, MegaFon, Millicom, MTN Group, MTS (network provider), NTT Docomo, Optus, Orange S.A., Personal, Rogers Communications, Singtel, Smart Communications, Sprint Corporation, T-Mobile US, Telcel, Tele2, Telefónica, Telenor, Telia Company, Telkomsel, Telstra, Telus, TIM (brand), Turkcell, Verizon Communications, VEON, and Vodafone.
11 OEMs: TCL (Alcatel Mobile), Asus, General Mobile, HTC, Huawei, Intex Technologies, Lava International, LG Electronics, Lenovo (Motorola), Samsung Electronics and ZTE.
2 mobile OS providers: Google and Microsoft.
Interconnect and hubs
Like SMS, RCS requires national and international interconnects to enable roaming. As with SMS, this will be accomplished with hubbing - where third-party providers complete agreements with individual operators to interwork their systems. Each subsequent operator that connects to a hub is therefore connected automatically to all other connected operators. This eliminates the need for each operator to connect to all the others to which it may need to send messages. RCS hubs are provided by stakeholders with a vested interest in increasing RCS use. These include traditional SMS hub providers (e.g. Global Message Services and Sinch), software and hardware vendors (e.g. Interop Technologies, Mavenir, and ZTE), and also Google via its Jibe Cloud platform.
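The advantage of hubbing over pairwise interconnects is simple combinatorics: a full mesh of n operators needs n(n-1)/2 bilateral agreements, while a hub needs only one connection per operator. A minimal sketch:

```python
# Minimal sketch of why hubbing scales: a full mesh of n operators
# needs n*(n-1)/2 bilateral agreements, while a hub needs only n
# (one connection per operator).

def full_mesh_links(n: int) -> int:
    """Bilateral agreements for every operator pair: n choose 2."""
    return n * (n - 1) // 2

def hub_links(n: int) -> int:
    """With a hub, each operator maintains a single connection."""
    return n

for n in (10, 50, 100):
    print(n, full_mesh_links(n), hub_links(n))  # 100 operators: 4950 vs 100
```

The gap widens quadratically, which is why each operator joining a hub is automatically reachable by all others already connected.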
Accreditation
The RCS interop and testing (IOT) accreditation process was started by the GSMA in order to improve the quality of testing, increase transparency, drive scale, minimize complexity and accelerate time-to-market (TTM) of joyn services. Companies need to undertake the IOT process from the GSMA to apply for a license to use the service mark joyn.
"Accredited" means that the device, client or network has undertaken a series of test cases (150 to 300) in a specific set of conditions, provided test results and traces that have been analysed by the GSMA RCS IOT team and any IOT issues arising resolved with the submitter.
"Accreditation Ready" is the designation awarded to a hosted RCS service that has undertaken the same series of test cases as the mobile network operator, provided test results and traces that have been analysed by the GSMA RCS IOT team and any IOT issues arising resolved with the submitter.
Reception
Amnesty International researcher Joe Westby criticized RCS for not allowing end-to-end encryption, because it is treated as a service of carriers and thus subject to lawful interception.
The Verge criticized the inconsistent support of RCS in the United States, with carriers not supporting RCS in all markets, not certifying service on all phones, or not yet supporting the Universal Profile. Concerns were shown over Google's decision to run its own RCS service due to the possibility of antitrust scrutiny, but it was acknowledged that Google had to do so in order to bypass the carriers' inconsistent support of RCS, as it wanted to have a service more comparable to Apple's iMessage service available on Android.
Ars Technica also criticized Google's move to launch a direct-to-consumer RCS service, considering it a contradiction of RCS being native to the carrier to provide features reminiscent of messaging apps, counting it as being among various past and unsuccessful attempts by Google to develop an in-house messaging service (including Google Talk, Google+ Messenger, Hangouts, and Allo), and noting limitations such as its dependence on phone numbers as the identity (whereas email-based accounts are telco-agnostic), its inability to be readily synchronized between multiple devices, and the aforementioned lack of end-to-end encryption. In June 2021 Google introduced end-to-end encryption in Google Messages, an app supporting RCS. Encryption is supported only if two users are on Messages in a 1:1 chat (not group chat), both with RCS turned on.
See also
Matrix communication protocol
References
External links
Specifications
Mobile technology
|
323149
|
https://en.wikipedia.org/wiki/Matter%20of%20Rome
|
Matter of Rome
|
According to the medieval poet Jean Bodel, the Matter of Rome was the literary cycle made up of Greek and Roman mythology, together with episodes from the history of classical antiquity, focusing on military heroes like Alexander the Great and Julius Caesar. Bodel divided all the literary cycles he knew best into the Matter of Britain, the Matter of France and the Matter of Rome (although "non-cyclical" romance also existed). The Matter of Rome also included what is referred to as the Matter of Troy, consisting of romances and other texts based on the Trojan War and its after-effects, including the adventures of Aeneas.
Subject matter
Classical topics were the subjects of a good deal of Old French literature, which, in the case of Trojan subject matter ultimately derived from Homer, was built on scant sources; since the Iliad and the Odyssey were unknown, medieval Western poets had to make do with two short prose narratives based on Homer, ascribed to Dictys Cretensis and Dares Phrygius. The paucity of original text did not prevent the 12th-century Norman poet Benoît de Sainte-Maure from writing a lengthy adaptation, Le Roman de Troie, running 40,000 lines. The poems that were written on these topics were called the romans d'antiquité, the "romances of antiquity". This name presages the anachronistic approach the medieval poets used in dealing with these subjects. For example, in the epic poems Roman d'Alixandre and the Roman de Troie, Alexander the Great, and Achilles and his fellow heroes of the Trojan War, were treated as knights of chivalry, not much different from the heroes of the chansons de geste. Elements of courtly love were introduced into the poems; in the Roman de Thèbes, a romantic relationship absent from the Greek sources is introduced into the tale of Parthenopæus and Antigone. Military episodes in these tales were also multiplied, and used to introduce scenes of knight-errantry and tournaments.
Another example of French medieval poetry in this genre is the Eneas, a treatment of the Aeneid that comes across as being a sort of burlesque of Virgil's poem. Sentimental and fantasy elements in the source material were multiplied, and incidents from Ovid, the most popular Latin poet of the Middle Ages, were mixed into the pastiche. The Philomela attributed to Chrétien de Troyes, a retelling of the story of Philomela and Procne, also takes its source from Ovid's Metamorphoses.
Geoffrey Chaucer's Troilus and Criseyde is an English example, with Chaucer adding many elements to emphasize its connection with the matter. He also brought the story into line with the precepts of courtly love.
This anachronistic treatment of elements from Greek mythology is similar to that of the Middle English narrative poem "Sir Orfeo", where the Greek Orpheus becomes the knight Sir Orfeo who rescues his wife Heurodis (i.e. Eurydice) from the fairy king.
Principal texts
Some principal texts of the Matter of Rome include:
The Alexander Romance, probably written in Greek during the 4th century, adapted into European vernaculars in the 12th century.
The Seven Wise Masters, a collection of stories of Middle Eastern and Indian origin.
The Romance of Thebes, a 1155 French retelling of the ancient Greek myth of Eteocles and Polynices, based on the 1st-century Latin poem Thebaid by Statius.
The Roman d'Enéas, written in French in 1156, based on the myth of Aeneas as recounted by Virgil in the Aeneid.
The Roman de Troie, a French poem written in the 1150s by Benoît de Sainte-Maure. A prose version of the story dates from 1225.
The Roman d'Éracle, by the French writer Gautier d'Arras in 1177.
See also
Classical mythology
References
Medieval literature
Medieval legends
History of literature
Romance (genre)
Metanarratives
Works based on classical literature
Works based on classical mythology
|
2050611
|
https://en.wikipedia.org/wiki/AnoNet
|
AnoNet
|
anoNet is a decentralized friend-to-friend network built using VPNs and software BGP routers. anoNet works by making it difficult to learn the identities of others on the network, allowing users to host IPv4 and IPv6 services anonymously.
Motivation
Implementing an anonymous network on a service-by-service basis has its drawbacks, and it is debatable whether such work should be done at the application level. A simpler approach could be to design an IPv4/IPv6 network whose participants enjoy strong anonymity. Doing so allows the use of any number of applications and services already written and available on the internet at large.
IPv4 networks do not preclude anonymity by design; it is only necessary to decouple the identity of the owner of an IP address from the address itself. Commercial internet connectivity and its need for billing records make this impossible, but private IPv4 networks do not share that requirement. Assuming that a router administrator on such a metanet knows only information about the adjacent routers, standard routing protocols can take care of finding the proper path for a packet to reach its destination. All destinations further than one hop away can, for most threat models, be considered anonymous, because only a node's immediate peers know its real IP. Anyone not directly connected knows a node only by an IP in the 21.0.0.0/8 range, and that IP is not necessarily tied to any identifiable information.
anoNet is pseudonymous
Everyone can build a profile of an anoNet IP address: what kind of documents it publishes or requests, in which language, about which countries or towns, etc. If this IP ever publishes a document that can lead to its owner's identity, then all other documents ever published or requested can be tied to this identity. Unlike some other Friend to Friend (F2F) programs, there is no automatic forwarding in anoNet that hides the IP of a node from all nodes that are not directly connected to it.
However, all existing F2F programs can be used inside anoNet, making it harder to detect that someone uses one of these F2F programs (only a VPN connection can be seen from the outside, but traffic analysis remains possible).
Architecture
Since running fiber to distant hosts is prohibitively costly for the volunteer nature of such a network, the network uses off-the-shelf VPN software for both router to router, and router to user links. This offers other advantages as well, such as invulnerability to external eavesdropping and the lack of need for unusual software which might give notice to those interested in who is participating.
To avoid addressing conflict with the internet itself, anoNet initially used the IP range 1.0.0.0/8. This was to avoid conflicting with internal networks such as 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16, as well as assigned Internet ranges. In January 2010 IANA allocated 1.0.0.0/8 to APNIC. In March 2017 anoNet changed the network to use the 21.0.0.0/8 subnetwork, which is assigned to the United States Department of Defense but is not currently in use on the internet.
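The addressing constraints described above can be checked mechanically. The sketch below uses Python's standard ipaddress module to confirm that 21.0.0.0/8 does not overlap the RFC 1918 private ranges that internal networks commonly use:

```python
import ipaddress

# Sketch: verify that anoNet's chosen range does not collide with the
# private ranges (RFC 1918) that internal networks commonly use.
anonet = ipaddress.ip_network("21.0.0.0/8")
private_ranges = [ipaddress.ip_network(n) for n in
                  ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

assert not any(anonet.overlaps(p) for p in private_ranges)

# The earlier range became unusable once IANA allocated 1.0.0.0/8 to
# APNIC in 2010, after which any address in it could belong to a real
# Internet host.
print(ipaddress.ip_address("21.3.3.1") in anonet)  # True
```

The same overlap check applies to any candidate replacement range, which is part of why unallocated-but-assigned space like the DoD's 21.0.0.0/8 was chosen.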
The network itself is not arranged in any regular, repeating pattern of routers, although redundant (>1) links are desired. This serves to make it more decentralized, reduces choke points, and the use of BGP allows for redundancy.
Suitable VPN choices are available, if not numerous. Any robust IPsec package is acceptable, such as FreeS/WAN or Greenbow. Non-IPsec solutions also exist, such as OpenVPN and SSH tunneling. There is no requirement for a homogeneous network; each link could in fact use a different VPN daemon.
Goals
One of the primary goals of anoNet is to protect its participants' rights of speech and expression, especially those that have come under attack of late. Some examples of what might be protected by anoNet include:
Fan fiction
DeCSS
Criticisms of electronic voting machines.
Bnetd and similar software
Song of the South and other films of historical interest unavailable due to political controversy
How it works
It is impossible on the Internet to communicate with another host without knowing its IP address. Thus, anoNet accepts that each node will be known to its immediate peers, along with the subnet mask used for communicating with them. A routing protocol, BGP, allows any node to advertise any routes it likes, and this seemingly chaotic method is what provides users with anonymity. Once a node advertises a new route, it is hard for anyone else to determine whether it is a route to another machine in another country via VPN, or just a dummy interface on that user's machine.
It is possible that certain analysis could be used to determine whether a subnet is remote (as in another country) or local (either a dummy interface or a machine connected via Ethernet). Such analysis includes TCP timestamps, ping times, OS identification, user agents, and traffic analysis. Most of these can be mitigated through action on the user's part.
Scaling
There are 65536 ASNs available in BGP v4. Long before anoNet reaches that number of routers, the network will have to be split into OSPF clouds, switched to a completely different routing protocol, or the BGP protocol altered to use a 32-bit integer for ASNs, as the rest of the Internet does now that 32-bit AS numbers are standardised.
There are also only 65536 /24 subnets in the 21.0.0.0/8 subnet. This would be easier to overcome by adding a new unused /8 subnet if there were any.
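Both limits in this section are powers of two and easy to confirm; a short sketch:

```python
import ipaddress

# 16-bit AS numbers (original BGP-4) allow 2**16 distinct ASNs.
asn_space_16bit = 2 ** 16

# A /8 contains 2**(24 - 8) distinct /24 subnets; count them directly.
net = ipaddress.ip_network("21.0.0.0/8")
n_24s = sum(1 for _ in net.subnets(new_prefix=24))

print(asn_space_16bit, n_24s)  # 65536 65536

# 32-bit AS numbers (RFC 6793) raise the ASN limit to 2**32.
print(2 ** 32)  # 4294967296
```

So the two ceilings coincide at 65536, and only the ASN limit has a standardised escape hatch (32-bit ASNs); the address-space limit would require another /8.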
Allocated Subnets
Below is the list of allocated IPv4 and IPv6 subnets as of 4 March 2020.
21.3.3.0/24
21.3.37.0/24
21.4.9.200/30
21.3.4.0/24
21.0.0.0/24
21.22.1.0/24
21.4.9.153/32
21.3.3.96/30
21.50.0.0/24
21.71.12.0/24
21.41.41.0/24
21.3.3.8/32
21.0.99.11/32
21.63.70.0/24
21.4.9.53/32
21.3.3.1/32
21.78.0.53/32
21.3.3.7/32
21.255.222.0/24
21.3.3.10/32
21.255.112.0/24
21.255.113.0/24
21.79.3.153/32
21.3.3.3/32
21.104.100.0/24
21.255.114.0/24
fd63:1e39:6f73:ff72::/64
fd63:1e39:6f73:ff75::/64
fd63:1e39:6f73:325::/64
fd63:1e39:6f73:1601::/64
fd63:1e39:6f73:304::/64
fd63:1e39:6f73:2929::/64
fd63:1e39:6f73:1c6a::/64
fd63:1e39:6f73:303::/64
fd63:1e39:6f73:3f46::/64
fd63:1e39:6f73:3f45::/64
fd63:1e39:6f73:470c::/64
Security concerns
Since there is no identifiable information tied to a user of anoNet, one might assume that the network would drop into complete chaos. Unlike other anonymous networks, on anoNet if a particular router or user is causing a problem it is easy to block them with a firewall. In the event that they are affecting the entire network, their peers would drop their tunnel.
With the chaotic nature of random addressing, it is not necessary to hide link IP addresses; these are already known. If, however, a user wants to run services or participate in discussions anonymously, they can advertise a new route and bind their services or clients to the new IP addresses.
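Binding a service to only the newly advertised address, rather than to all interfaces, is a one-line decision at the socket level. A sketch, demonstrated on loopback since the actual anoNet address would depend on whatever subnet the user advertised:

```python
import socket

def serve_on(ip: str, port: int = 0) -> socket.socket:
    """Bind a listening TCP socket to one specific address, so the
    service answers only on that IP (e.g. an address inside a newly
    advertised 21.x.y.0/24) and not on the user's link address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((ip, port))   # port 0 lets the OS pick a free port
    s.listen()
    return s

# Demonstrated on loopback; on anoNet, ip would be the anonymous address.
srv = serve_on("127.0.0.1")
print(srv.getsockname())
srv.close()
```

Because the socket is bound to a single address, the service is unreachable via the node's known link IPs, which is what separates the pseudonymous service identity from the peering identity.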
See also
Anonymous P2P
Crypto-anarchism
DarkNET Conglomeration
Darknet
Similar software :
Freenet
GNUnet
I2P
RetroShare
References
External links
https://web.archive.org/web/20140127020051/http://anonet2.biz/ anonet wiki
http://wiki.ucis.nl/Anonet Another informative page (including information on connecting)
Anonymity networks
|
39801728
|
https://en.wikipedia.org/wiki/Aarhus%20University%20Department%20of%20Computer%20Science
|
Aarhus University Department of Computer Science
|
The Department of Computer Science at Aarhus University is, with its 1,000 students, the largest computer science department in Denmark. Earlier, the department abbreviation was 'DAIMI', but after a restructuring and internationalization, the abbreviation became CS AU, short for Department of Computer Science, Aarhus University.
History
Originally, the Department of Computer Science was a section within the Department of Mathematics at Aarhus University. There, the department was abbreviated DAIMI, short for Datalogisk Afdeling i Matematisk Institut, and by 1998 the name DAIMI had become so well known that it was kept when the department became independent.
The computer science course started at Aarhus University in 1971 as part of the Department of Mathematics. During the period 1993–1998 the field of computer science underwent rapid growth, and the department's total number of staff members rose from 80 to 160, primarily because of an increase in external funding.
An independent Department of Computer Science was founded in 1998. Over the following 5–6 years the department continuously moved more sections to new buildings as part of Aarhus University's plan to concentrate IT activities within the IT City Katrinebjerg. Close working relations with other organizations within the IT City have been established, e.g. with the Department of Aesthetics and Communication and the Alexandra Institute.
Well-known computer scientists from Department of Computer Science, Aarhus University include:
Bjarne Stroustrup (inventor of C++)
Jakob Nielsen (expert in usability)
Lars Bak (inventor of the V8 JavaScript Engine)
Education
Degree programmes at Bachelor level:
Bachelor in Computer Science
Bachelor in IT (Information Technology)
Degree programmes at Master level:
MSc in Computer Science
MSc in IT Product Development
MSc in Information Technology
Additionally, the department offers a number of continuing and further education courses.
Research
Algorithms and Data Structures
Bioinformatics
Complexity Theory
Computer Graphics and Scientific Computing
Cryptography and Security
Human Computer Interaction
Modelling and Validation of Distributed Systems
Object-Oriented Software Systems
Programming Languages and Formal Models
Professors
Lars Arge
Susanne Bødker
Lars Birkedal
Ivan Bjerre Damgård
Kaj Grønbæk
Christian S. Jensen
Kurt Jensen (CPN Tools)
Morten Kyng
Ole Lehrmann Madsen
Brian H. Mayoh
Peter Bro Miltersen (P/poly)
Mogens Nielsen (Petri net)
Michael I. Schwartzbach
The IT City Katrinebjerg
The department is located in the Aarhus region named Katrinebjerg. The area also hosts many IT companies as well as other institutes of education and is known as the IT City Katrinebjerg.
External links
Department of Computer Science, Aarhus University
Magazine about the Dept. of Computer Science, Aarhus University (2009)
Research areas at CS
Founded: 1971 (section) / 1998 (department)
Head of Department: Lars Birkedal
City: Aarhus
Country: Denmark
Number of students: Approximately 1000
Website: cs.au.dk
Aarhus University
|
42742497
|
https://en.wikipedia.org/wiki/Pulsonix
|
Pulsonix
|
Pulsonix is an electronic design automation (EDA) software suite for schematic capture and PCB design. It is produced by WestDev, which is headquartered in Gloucestershire, England, with additional sales and distribution offices overseas. It was first released in 2001, and runs on Windows.
Development
The British software house WestDev created electronic design automation (EDA) software Pulsonix in 2001. Some development team members had formerly worked at Racal–Redac on computer-aided design tools. A key aim of the developers was that the software be easy to use, without the need for extensive training that they believed existing EDA products at that time required.
The software formed part of the €14 million EU-funded project HERMES. The three-year project (2008–11) sought to embed components within a circuit board's inner layers, to minimize use of design space.
Traditionally, wire leads of components were inserted through holes in a circuit board then soldered in place; more recently, components and chips are surface-mounted flush with the board and heat-set. Although the concept of embedding components directly within layers of a circuit board itself had existed for some time, technical difficulties meant it was experimental, unsuited for use in mass production.
An increased demand for miniaturization, for products such as smartphones or medical devices that have to be swallowed to explore inside the body, led the EU to create the three-year project to develop embedding for "industrialization" (mass production use). No EDA software was suitable for embedding. The taskforce working on HERMES approached the "most important EDA tool suppliers and convinced them to support [the project]". Pulsonix was one of those contacted, the others being Cadence, Mentor Graphics, and Zuken.
Features
Pulsonix is a Windows application for schematic capture and layout design. It is produced in three variants, from a 1,000-pin version up to one with unlimited component pins suited to larger designs. All three have autorouter capability. Within a dual-monitor setup, the schematic and layout design processes can each be assigned to a single screen, with changes synchronized as needed.
Schematic capture
Schematic capture functionality includes:
Hierarchical schematic design
Mixed-signal circuit simulation
Netlist export
Reporting and BOM creation
Sketch routing
PCB design
Push, shove and hug routing
Manual routing, with support for differential pairs, multi-trace routing, pin-swapping and gate-swapping
Automatic trace routing
Apply layout pattern for component placement
Layer spans
Via stitching within custom shapes and pads
Component footprint library management
Support for Flexi-Rigid Design
Support for Embedded Components
STEP support
Manufacturing files generation with support for Gerber and ODB++ formats
Import and export among various file formats
3D visualisation and clash detection
Reception
Chris Anderson, then Wired editor-in-chief, gave it a generally positive review at DIY Drones – an online portal for unmanned aerial vehicle ("drone") enthusiasts – in which he praised its user interface and range of features, such as 3D views, and, while noting it is an expensive product, deemed it "the best competitor to the aging Cadsoft Eagle" software.
Neil Gruending, a columnist on board design among the maker subculture at the long-running electronics magazine Elektor, reviewed around seven EDA products on his blog in late 2012. Gruending found Pulsonix's user interface straightforward, singling out for praise how "copper pours work properly". He considered that, in range of features and cost, its closest relation was Altium. Contrasting the two products, Gruending wrote that Pulsonix had comparatively low market share in North America, though he found vendor support for it there significantly better.
See also
Comparison of EDA software
List of EDA companies
Comparison of CAD software
List of CAx companies
References
External links
Official site
Companies based in Gloucestershire
Companies established in 2001
Computer-aided design software
Computer-aided design software for Windows
Electronic design automation companies
Electronic design automation software
Windows-only software
|
31766333
|
https://en.wikipedia.org/wiki/List%20of%20moths%20of%20Yemen
|
List of moths of Yemen
|
There are about 550 known moth species in Yemen. The moths (mostly nocturnal) and butterflies (mostly diurnal) together make up the taxonomic order Lepidoptera.
This is a list of moth species which have been recorded in Yemen.
Alucitidae
Alucita nannodactyla (Rebel, 1907)
Arctiidae
Amatula kikiae Wiltshire, 1983
Creataloum arabicum (Hampson, 1896)
Creatonotos leucanioides Holland, 1893
Eilema sokotrensis (Hampson, 1900)
Nyctemera torbeni Wiltshire, 1983
Secusio strigata Walker, 1854
Siccia butvillai Ivinskis & Saldaitis, 2008
Spilosoma yemenensis (Hampson, 1916)
Utetheisa lotrix (Cramer, 1779)
Utetheisa pulchella (Linnaeus, 1758)
Autostichidae
Hesperesta arabica Gozmány, 2000
Turatia argillacea Gozmány, 2000
Turatia striatula Gozmány, 2000
Turatia yemenensis Derra, 2008
Choreutidae
Choreutis aegyptiaca (Zeller, 1867)
Coleophoridae
Ishnophanes bifucata Baldizzone, 1994
Coleophora aegyptiacae Walsingham, 1907
Coleophora eilatica Baldizzone, 1994
Coleophora himyarita Baldizzone, 2007
Coleophora jerusalemella Toll, 1942
Coleophora lasloella Baldizzone, 1982
Coleophora longiductella Baldizzone, 1989
Coleophora recula Baldizzone, 2007
Coleophora sabaea Baldizzone, 2007
Coleophora semicinerea Staudinger, 1859
Coleophora sudanella Rebel, 1916
Coleophora taizensis Baldizzone, 2007
Coleophora yemenita Baldizzone, 2007
Cosmopterigidae
Alloclita gambiella (Walsingham, 1891)
Cossidae
Aethalopteryx diksami Yakovlev & Saldaitis, 2010
Azygophleps larseni Yakovlev & Saldaitis, 2011
Azygophleps sheikh Yakovlev & Saldaitis, 2011
Meharia acuta Wiltshire, 1982
Meharia philbyi Bradley, 1952
Meharia hackeri Saldaitis, Ivinskis & Yakovlev, 2011
Meharia semilactea (Warren & Rothschild, 1905)
Meharia yakovlevi Saldaitis & Ivinskis, 2010
Mormogystia brandstetteri Saldaitis, Ivinskis & Yakovlev, 2011
Mormogystia proleuca (Hampson in Walsingham & Hampson, 1896)
Paropta frater Warnecke, 1930
Crambidae
Achyra nudalis (Hübner, 1796)
Amselia leucozonellus (Hampson, 1896)
Ancylolomia chrysographellus (Kollar, 1844)
Antigastra catalaunalis (Duponchel, 1833)
Aplectropus leucopis Hampson, 1896
Autocharis fessalis (Swinhoe, 1886)
Bocchoris inspersalis (Zeller, 1852)
Bocchoris onychinalis (Guenée, 1854)
Cotachena smaragdina (Butler, 1875)
Cybalomia albilinealis (Hampson, 1896)
Diaphana indica (Saunders, 1851)
Dolicharthria paediusalis (Walker, 1859)
Eoophyla peribocalis (Walker, 1859)
Euchromius ocellea (Haworth, 1811)
Euclasta varii Popescu-Gorj & Constantinescu, 1973
Heliothela ophideresana (Walker, 1863)
Hellula undalis (Fabricius, 1781)
Herpetogramma licarsisalis (Walker, 1859)
Hodebertia testalis (Fabricius, 1794)
Lamprosema inglorialis Hampson, 1918
Loxostege albifascialis (Hampson, 1896)
Metasia profanalis (Walker, 1865)
Noctuelia floralis (Hübner, [1809])
Nomophila noctuella ([Denis & Schiffermüller], 1775)
Noorda blitealis Walker, 1859
Palepicorsia ustrinalis (Christoph, 1877)
Palpita unionalis (Hübner, 1796)
Ptychopseustis pavonialis (Hampson, 1896)
Pyrausta arabica Butler, 1884
Spoladea recurvalis (Fabricius, 1775)
Synclera traducalis (Zeller, 1852)
Tegostoma bipartalis Hampson, 1896
Tegostoma comparalis (Hübner, 1796)
Thyridiphora furia (Swinhoe, 1884)
Galacticidae
Galactica inornata (Walsingham, 1900)
Homadaula maritima Mey, 2007
Homadaula montana Mey, 2007
Homadaula submontana Mey, 2007
Gelechiidae
Anarsia acaciae Walsingham, 1896
Parapsectris amseli (Povolny, 1981)
Phthorimaea molitor (Walsingham, 1896)
Sitotroga cerealella (Olivier, 1789)
Geometridae
Acidaliastis micra Hampson, 1896
Brachyglossina tibbuana Herbulot, 1965
Casilda kikiae (Wiltshire, 1982)
Charissa lequatrei (Herbulot, 1988)
Cleora rostella D. S. Fletcher, 1967
Cyclophora staudei Hausmann, 2006
Disclisioprocta natalata (Walker, 1862)
Eucrostes disparata Walker, 1861
Glossotrophia jacta (Swinhoe, 1884)
Idaea granulosa (Warren & Rothschild, 1905)
Idaea illustrior (Wiltshire, 1952)
Idaea tahamae Wiltshire, 1983
Idaea testacea Swinhoe, 1885
Isturgia catalaunaria (Guenée, 1858)
Isturgia disputaria (Guenée, 1858)
Isturgia sublimbata (Butler, 1885)
Neromia pulvereisparsa (Hampson, 1896)
Oar pratana (Fabricius, 1794)
Omphacodes directa (Walker, 1861)
Pseudosterrha rufistrigata (Hampson, 1896)
Scopula actuaria (Walker, 1861)
Scopula rufinubes (Warren, 1900)
Traminda mundissima (Walker, 1861)
Zamarada anacantha D. S. Fletcher, 1974
Zamarada latilimbata Rebel, 1948
Zamarada minimaria Swinhoe, 1895
Zamarada torrida D. S. Fletcher, 1974
Zygophyxia retracta Hausmann, 2006
Gracillariidae
Phyllocnistis citrella Stainton, 1856
Phyllonorycter aarviki de Prins, 2012
Phyllonorycter grewiella (Vári, 1961)
Phyllonorycter maererei de Prins, 2012
Phyllonorycter mida de Prins, 2012
Hyblaeidae
Hyblaea puera (Cramer, 1777)
Lasiocampidae
Braura desdemona Zolotuhin & Gurkovich, 2009
Odontocheilopteryx myxa Wallengren, 1860
Limacodidae
Parasa fulvicorpus Hampson, 1896
Lymantriidae
Euproctis erythrosticta (Hampson, 1910)
Knappetra fasciata (Walker, 1855)
Micronoctuidae
Micronola wadicola Amsel, 1935
Micronola yemeni Fibiger, 2011
Noctuidae
Acantholipes aurea Berio, 1966
Acantholipes canofusca Hacker & Saldaitis, 2010
Acantholipes circumdata (Walker, 1858)
Achaea catella Guenée, 1852
Achaea finita (Guenée, 1852)
Achaea lienardi (Boisduval, 1833)
Achaea mercatoria (Fabricius, 1775)
Acontia akbar Wiltshire, 1985
Acontia albarabica Wiltshire, 1994
Acontia antica Walker, 1862
Acontia basifera Walker, 1857
Acontia binominata (Butler, 1892)
Acontia chiaromontei Berio, 1936
Acontia crassivalva (Wiltshire, 1947)
Acontia dichroa (Hampson, 1914)
Acontia hoppei Hacker, Legrain & Fibiger, 2008
Acontia hortensis Swinhoe, 1884
Acontia imitatrix Wallengren, 1856
Acontia insocia (Walker, 1857)
Acontia karachiensis Swinhoe, 1889
Acontia lactea Hacker, Legrain & Fibiger, 2008
Acontia manakhana Hacker, Legrain & Fibiger, 2010
Acontia melaena (Hampson, 1899)
Acontia minuscula Hacker, Legrain & Fibiger, 2010
Acontia mukalla Hacker, Legrain & Fibiger, 2008
Acontia opalinoides Guenée, 1852
Acontia peksi Hacker, Legrain & Fibiger, 2008
Acontia philbyi Wiltshire, 1988
Acontia porphyrea (Butler, 1898)
Acontia transfigurata Wallengren, 1856
Acontia trimaculata Aurivillius, 1879
Acontia yemenensis (Hampson, 1918)
Aegocera bettsi Wiltshire, 1988
Aegocera brevivitta Hampson, 1901
Aegocera rectilinea Boisduval, 1836
Agoma trimenii (Felder, 1874)
Agrotis acronycta (Rebel, 1907)
Agrotis biconica Kollar, 1844
Agrotis brachypecten Hampson, 1899
Agrotis herzogi Rebel, 1911
Agrotis ipsilon (Hufnagel, 1766)
Agrotis medioatra Hampson, 1918
Agrotis segetum ([Denis & Schiffermüller], 1775)
Agrotis sesamioides (Rebel, 1907)
Aletia consanguis (Guenée, 1852)
Amefrontia purpurea Hampson, 1899
Amyna axis Guenée, 1852
Amyna delicata Wiltshire, 1994
Amyna punctum (Fabricius, 1794)
Anarta endemica Hacker & Saldaitis, 2010
Anarta trifolii (Hufnagel, 1766)
Androlymnia clavata Hampson, 1910
Anoba socotrensis Hampson, 1926
Anoba triangularis (Warnecke, 1938)
Anomis erosa (Hübner, 1818)
Anomis flava (Fabricius, 1775)
Anomis mesogona (Walker, 1857)
Anomis sabulifera (Guenée, 1852)
Antarchaea conicephala (Staudinger, 1870)
Antarchaea digramma (Walker, 1863)
Antarchaea erubescens (Bang-Haas, 1910)
Antarchaea flavissima Hacker & Saldaitis, 2010
Antarchaea fragilis (Butler, 1875)
Anticarsia rubricans (Boisduval, 1833)
Anumeta atrosignata Walker, 1858
Anumeta spilota (Erschoff, 1874)
Argyrogramma signata (Fabricius, 1775)
Asplenia melanodonta (Hampson, 1896)
Athetis partita (Walker, 1857)
Attatha metaleuca Hampson, 1913
Aucha polyphaenoides (Wiltshire, 1961)
Autoba abrupta (Walker, 1865)
Autoba admota (Felder & Rogenhofer, 1874)
Brevipecten bischofi Hacker & Fibiger, 2007
Brevipecten biscornuta Wiltshire, 1985
Brevipecten calimanii (Berio, 1939)
Brevipecten confluens Hampson, 1926
Brevipecten hypocornuta Hacker & Fibiger, 2007
Brevipecten marmoreata Hacker & Fibiger, 2007
Brevipecten tihamae Hacker & Fibiger, 2007
Brithys crini (Fabricius, 1775)
Callopistria latreillei (Duponchel, 1827)
Callopistria maillardi (Guenée, 1862)
Callopistria yerburii Butler, 1884
Callyna gaedei Hacker & Fibiger, 2006
Calophasia platyptera (Esper, [1788])
Caradrina soudanensis (Hampson, 1918)
Caranilla uvarovi (Wiltshire, 1949)
Carcharoda yemenicola Wiltshire, 1983
Catamecia minima (Swinhoe, 1889)
Cerocala sokotrensis Hampson, 1899
Chasmina vestae (Guenée, 1852)
Chrysodeixis acuta (Walker, [1858])
Chrysodeixis chalcites (Esper, 1789)
Clytie devia (Swinhoe, 1884)
Clytie infrequens (Swinhoe, 1884)
Clytie sancta (Staudinger, 1900)
Clytie tropicalis Rungs, 1975
Condica capensis (Guenée, 1852)
Condica conducta (Walker, 1857)
Condica illecta Walker, 1865
Condica pauperata (Walker, 1858)
Condica viscosa (Freyer, 1831)
Cortyta canescens Walker, 1858
Ctenoplusia dorfmeisteri (Felder & Rogenhofer, 1874)
Ctenoplusia fracta (Walker, 1857)
Ctenoplusia limbirena (Guenée, 1852)
Ctenoplusia phocea (Hampson, 1910)
Cyligramma latona (Cramer, 1775)
Diparopsis watersi (Rothschild, 1901)
Drasteria kabylaria (Bang-Haas, 1906)
Drasteria yerburyi (Butler, 1892)
Dysgonia algira (Linnaeus, 1767)
Dysgonia angularis (Boisduval, 1833)
Dysgonia torrida (Guenée, 1852)
Dysmilichia flavonigra (Swinhoe, 1884)
Epharmottomena albiluna (Hampson, 1899)
Epharmottomena sublimbata Berio, 1894
Ericeia congregata (Walker, 1858)
Eublemma anachoresis (Wallengren, 1863)
Eublemma baccalix (Swinhoe, 1886)
Eublemma bifasciata (Moore, 1881)
Eublemma bulla (Swinhoe, 1884)
Eublemma cochylioides (Guenée, 1852)
Eublemma cornutus Fibiger & Hacker, 2004
Eublemma ecthaemata Hampson, 1896
Eublemma gayneri (Rothschild, 1901)
Eublemma khonoides Wiltshire, 1980
Eublemma odontophora Hampson, 1910
Eublemma parva (Hübner, [1808])
Eublemma scitula (Rambur, 1833)
Eublemma seminivea Hampson, 1896
Eublemma subflavipes Hacker & Saldaitis, 2010
Eublemma thermobasis Hampson, 1910
Eublemmoides apicimacula (Mabille, 1880)
Eudocima materna (Linnaeus, 1767)
Eulocastra alfierii Wiltshire, 1948
Eulocastra diaphora (Staudinger, 1878)
Eulocastra insignis (Butler, 1884)
Eutelia amatrix Walker, 1858
Eutelia bowkeri (Felder & Rogenhofer, 1874)
Eutelia discitriga Walker, 1865
Eutelia polychorda Hampson, 1902
Feliniopsis africana (Schaus & Clements, 1893)
Feliniopsis connivens (Felder & Rogenhofer, 1874)
Feliniopsis consummata (Walker, 1857)
Feliniopsis hosplitoides (Laporte, 1979)
Feliniopsis minnecii (Berio, 1939)
Feliniopsis opposita (Walker, 1865)
Feliniopsis sabaea Hacker & Fibiger, 2001
Feliniopsis talhouki (Wiltshire, 1983)
Feliniopsis viettei Hacker & Fibiger, 2001
Fodina legrainei Hacker & Saldaitis, 2010
Gesonia obeditalis Walker, 1859
Gnamptonyx innexa (Walker, 1858)
Grammodes exclusiva Pagenstecher, 1907
Grammodes stolida (Fabricius, 1775)
Hadjina tyriobaphes Wiltshire, 1983
Helicoverpa armigera (Hübner, [1808])
Helicoverpa assulta (Guenée, 1852)
Heliothis nubigera Herrich-Schäffer, 1851
Heliothis peltigera ([Denis & Schiffermüller], 1775)
Heteropalpia acrosticta (Püngeler, 1904)
Heteropalpia exarata (Mabille, 1890)
Heteropalpia robusta Wiltshire, 1988
Heteropalpia rosacea (Rebel, 1907)
Heteropalpia vetusta (Walker, 1865)
Hiccoda dosaroides Moore, 1882
Hipoepa fractalis (Guenée, 1854)
Honeyia clearchus (Fawcett, 1916)
Hypena abyssinialis Guenée, 1854
Hypena laceratalis Walker, 1859
Hypena lividalis (Hübner, 1790)
Hypena obacerralis Walker, [1859]
Hypena obsitalis (Hübner, [1813])
Hypena senialis Guenée, 1854
Hypocala rostrata (Fabricius, 1794)
Hypotacha indecisa Walker, [1858]
Hypotacha isthmigera Wiltshire, 1968
Hypotacha ochribasalis (Hampson, 1896)
Hypotacha raffaldii Berio, 1939
Iambiodes incerta (Rothschild, 1913)
Iambiodes postpallida Wiltshire, 1977
Idia fumosa (Hampson, 1896)
Leucania loreyi (Duponchel, 1827)
Lophoptera arabica Hacker & Fibiger, 2006
Lyncestoides kruegeri (Hacker & Fibiger, 2006)
Lyncestoides unilinea (Swinhoe, 1885)
Marathyssa cuneata (Saalmüller, 1891)
Matopo socotrensis Hacker & Saldaitis, 2010
Maxera marchalii (Boisduval, 1833)
Maxera nigriceps (Walker, 1858)
Melanephia nigrescens (Wallengren, 1856)
Metachrostis quinaria (Moore, 1881)
Metachrostis subvelox Hacker & Saldaitis, 2010
Metopoceras kneuckeri (Rebel, 1903)
Mocis frugalis (Fabricius, 1775)
Mocis mayeri (Boisduval, 1833)
Mocis proverai Zilli, 2000
Mocis repanda (Fabricius, 1794)
Mythimna diopis (Hampson, 1905)
Mythimna languida (Walker, 1858)
Mythimna sokotrensis Hreblay, 1996
Mythimna umbrigera (Saalmüller, 1891)
Mythimna unipuncta (Haworth, 1809)
Nagia natalensis (Hampson, 1902)
Nimasia brachyura Wiltshire, 1982
Ophiusa dianaris (Guenée, 1852)
Ophiusa mejanesi (Guenée, 1852)
Ophiusa tirhaca (Cramer, 1777)
Oraesia emarginata (Fabricius, 1794)
Oraesia intrusa (Krüger, 1939)
Oraesia isolata Hacker & Saldaitis, 2010
Ozarba atrifera Hampson, 1910
Ozarba nyanza (Felder & Rogenhofer, 1874)
Ozarba perplexa Saalmüller, 1891
Ozarba simplex (Rebel, 1907)
Ozarba socotrana Hampson, 1910
Ozarba terminipuncta (Hampson, 1899)
Ozarba varia (Walker, 1865)
Pandesma quenavadi Guenée, 1852
Pandesma robusta (Walker, 1858)
Pericyma mendax (Walker, 1858)
Pericyma metaleuca Hampson, 1913
Phytometra subflavalis (Walker, 1865)
Plecoptera butkevicii Hacker & Saldaitis, 2010
Plusiopalpa dichora Holland, 1894
Polydesma umbricola Boisduval, 1833
Polytela cliens (Felder & Rogenhofer, 1874)
Prionofrontia ochrosia Hampson, 1926
Pseudomicrodes decolor Rebel, 1907
Pseudozarba mesozona (Hampson, 1896)
Rhabdophera clathrum (Guenée, 1852)
Rhesala moestalis (Walker, 1866)
Rhynchina albiscripta Hampson, 1916
Rhynchina coniodes Vári, 1962
Sesamia nonagrioides (Lefèbvre, 1827)
Simplicia extinctalis (Zeller, 1852)
Simplicia robustalis Guenée, 1854
Simyra confusa (Walker, 1856)
Sphingomorpha chlorea (Cramer, 1777)
Spodoptera cilium Guenée, 1852
Spodoptera exempta (Walker, 1857)
Spodoptera exigua (Hübner, 1808)
Spodoptera littoralis (Boisduval, 1833)
Spodoptera mauritia (Boisduval, 1833)
Stenosticta grisea Hampson, 1912
Stenosticta sibensis Wiltshire, 1977
Stenosticta wiltshirei Hacker, Saldaitis & Ivinskis, 2010
Syngrapha circumflexa (Linnaeus, 1767)
Tathorhynchus exsiccata (Lederer, 1855)
Tathorhynchus stenoptera (Rebel, 1907)
Thiacidas cerurodes (Hampson, 1916)
Thiacidas roseotincta (Pinhey, 1962)
Thysanoplusia chalcedona (Hampson, 1902)
Thysanoplusia cupreomicans (Hampson, 1909)
Thysanoplusia exquisita (Felder & Rogenhofer, 1874)
Thysanoplusia rostrata (D. S. Fletcher, 1963)
Thysanoplusia sestertia (Felder & Rogenhofer, 1874)
Thysanoplusia tetrastigma (Hampson, 1910)
Trichoplusia ni (Hübner, [1803])
Trichoplusia orichalcea (Fabricius, 1775)
Trigonodes hyppasia (Cramer, 1779)
Tytroca balnearia (Distant, 1898)
Tytroca leucoptera (Hampson, 1896)
Ulotrichopus stertzi (Püngeler, 1907)
Ulotrichopus tinctipennis (Hampson, 1902)
Vittaplusia vittata (Wallengren, 1856)
Nolidae
Archinola pyralidia Hampson, 1896
Bryophilopsis tarachoides Mabille, 1900
Churia gallagheri Wiltshire, 1985
Earias biplaga Walker, 1866
Earias cupreoviridis (Walker, 1862)
Earias insulana (Boisduval, 1833)
Giaura dakkaki Wiltshire, 1986
Negeta luminosa (Walker, 1858)
Nola pumila Snellen, 1875
Nola socotrensis (Hampson, 1901)
Odontestis murina Wiltshire, 1988
Odontestis socotrensis Hacker & Saldaitis, 2010
Odontestis striata Hampson, 1912
Pardasena minorella Walker, 1866
Pardasena virgulana (Mabille, 1880)
Pardoxia graellsii (Feisthamel, 1837)
Selepa celtis (Moore, 1858)
Xanthodes albago (Fabricius, 1794)
Xanthodes brunnescens (Pinhey, 1968)
Xanthodes gephyrias (Meyrick, 1902)
Notodontidae
Macrosenta purpurascens Hacker, Fibiger & Schreier, 2007
Oecophoridae
Stathmopoda diplaspis (Meyrick, 1887)
Plutellidae
Genostele renigera Walsingham, 1900
Paraxenistis africana Mey, 2007
Plutella xylostella (Linnaeus, 1758)
Pterophoridae
Agdistis adenensis Amsel, 1961
Agdistis arabica Amsel, 1958
Agdistis bellissima Arenberger, 1975
Agdistis cathae Arenberger, 1999
Agdistis hakimah Arenberger, 1985
Agdistis insidiatrix Meyrick, 1933
Agdistis minima Walsingham, 1900
Agdistis nanodes Meyrick, 1906
Agdistis obstinata Meyrick, 1920
Agdistis riftvalleyi Arenberger, 2001
Agdistis tamaricis (Zeller, 1847)
Agdistis tenera Arenberger, 1976
Agdistis tihamae Arenberger, 1999
Agdistis yemenica Arenberger, 1999
Arcoptilia gizan Arenberger, 1985
Deuterocopus socotranus Rebel, 1907
Diacrotricha lanceatus (Arenberger, 1986)
Emmelina monodactyla (Linnaeus, 1758)
Exelastis ebalensis (Rebel, 1907)
Hellinsia bawana Arenberger, 2010
Megalorhipida angusta Arenberger, 2002
Megalorhipida fissa Arenberger, 2002
Megalorhipida leptomeres (Meyrick, 1886)
Megalorhipida leucodactylus (Fabricius, 1794)
Megalorhipida parvula Arenberger, 2010
Merrifieldia malacodactylus (Zeller, 1847)
Platyptilia albifimbriata Arenberger, 2002
Platyptilia dschambiya Arenberger, 1999
Porrittia imbecilla (Meyrick, 1925)
Procapperia hackeri Arenberger, 2002
Pterophorus ischnodactyla (Treitschke, 1833)
Pterophorus rhyparias (Meyrick, 1908)
Puerphorus olbiadactylus (Millière, 1859)
Stangeia siceliota (Zeller, 1847)
Stenodacma wahlbergi (Zeller, 1852)
Stenoptilia amseli Arenberger, 1990
Stenoptilia aridus (Zeller, 1847)
Stenoptilia balsami Arenberger, 2010
Stenoptilia elkefi Arenberger, 1984
Stenoptilia sanaa Arenberger, 1999
Pyralidae
Achroia grisella (Fabricius, 1794)
Ancylosis faustinella (Zeller, 1867)
Ancylosis limoniella (Chrétien, 1911)
Ancylosis nigripunctella (Staudinger, 1879)
Cadra cautella (Walker, 1863)
Candiope erubescens (Hampson, 1896)
Candiope joannisella Ragonot, 1888
Endotricha erythralis Mabille, 1900
Ephestia elutella (Hübner, 1796)
Etiella zinckenella (Treitschke, 1832)
Nephopterix divisella (Duponchel, 1842)
Nephopterix metamelana Hampson, 1896
Nephopterix nigristriata Hampson, 1896
Phycita phoenicocraspis Hampson, 1896
Phycita poteriella Zeller, 1846
Polyocha depressella (Swinhoe, 1885)
Pyralis galactalis Hampson, 1916
Pyralis obsoletalis Mann, 1864
Raphimetopus ablutella (Zeller, 1839)
Staudingeria proniphea (Hampson, 1896)
Staudingeria suboblitella (Ragonot, 1888)
Staudingeria yerburii (Butler, 1884)
Saturniidae
Yatanga arabica (Rougeot, 1977)
Yatanga smithi (Holland, 1892)
Sesiidae
Crinipus leucozonipus Hampson, 1896
Sphingidae
Acherontia styx (Westwood, 1848)
Agrius convolvuli (Linnaeus, 1758)
Basiothia medea (Fabricius, 1781)
Batocnema cocquerelii (Boisduval, 1875)
Cephonodes hylas (Linnaeus, 1771)
Daphnis nerii (Linnaeus, 1758)
Euchloron megaera (Linnaeus, 1758)
Hippotion celerio (Linnaeus, 1758)
Hippotion rosae (Butler, 1882)
Hippotion socotrensis (Rebel, 1899)
Hyles livornica (Esper, 1780)
Nephele vau (Walker, 1856)
Sphingonaepiopsis nana (Walker, 1856)
Tineidae
Perissomastix taeniaecornis (Walsingham, 1896)
Phthoropoea carpella Walsingham, 1896
Tinea messalina Robinson, 1979
Trichophaga abruptella (Wollaston, 1858)
Trichophaga swinhoei (Butler, 1884)
Tortricidae
Cryptophlebia socotrensis Walsingham, 1900
Dasodis cladographa Diakonoff, 1983
Xyloryctidae
Enolmis jemenensis Bengtsson, 2002
Eretmocera bradleyi Amsel, 1961
Eretmocera jemensis Rebel, 1930
Scythris abyanensis Bengtsson, 2002
Scythris albiangulella Bengtsson, 2002
Scythris albocanella Bengtsson, 2002
Scythris albogrammella Bengtsson, 2002
Scythris amplexella Bengtsson, 2002
Scythris badiella Bengtsson, 2002
Scythris basilicella Bengtsson, 2002
Scythris beccella Bengtsson, 2002
Scythris biacutella Bengtsson, 2002
Scythris bicuspidella Bengtsson, 2002
Scythris bispinella Bengtsson, 2002
Scythris camelella Walsingham, 1907
Scythris canella Bengtsson, 2002
Scythris capnofasciae Bengtsson, 2002
Scythris ceratella Bengtsson, 2002
Scythris cinisella Bengtsson, 2002
Scythris consimilella Bengtsson, 2002
Scythris cucullella Bengtsson, 2002
Scythris cuneatella Bengtsson, 2002
Scythris curvipilella Bengtsson, 2002
Scythris fibigeri Bengtsson, 2002
Scythris fissurella Bengtsson, 1997
Scythris galeatella Bengtsson, 2002
Scythris indigoferivora Bengtsson, 2002
Scythris iterella Bengtsson, 2002
Scythris jemenensis Bengtsson, 2002
Scythris meraula Meyrick, 1916
Scythris nigrogrammella Bengtsson, 2002
Scythris nigropterella Bengtsson, 2002
Scythris nipholecta Meyrick, 1924
Scythris nivicolor Meyrick, 1916
Scythris ochrea Walsingham, 1896
Scythris pangalactis Meyrick, 1933
Scythris paralogella Bengtsson, 2002
Scythris parenthesella Bengtsson, 2002
Scythris pollicella Bengtsson, 2002
Scythris pterosaurella Bengtsson, 2002
Scythris reflectella Bengtsson, 2002
Scythris sanae Bengtsson, 2002
Scythris scyphella Bengtsson, 2002
Scythris sinuosella Bengtsson, 2002
Scythris sordidella Bengtsson, 2002
Scythris strabella Bengtsson, 2002
Scythris subgaleatella Bengtsson, 2002
Scythris subparachalca Bengtsson, 2002
Scythris taizzae Bengtsson, 2002
Scythris tenebrella Bengtsson, 2002
Scythris valgella Bengtsson, 2002
Scythris valvaearcella Bengtsson, 2002
4060 Deipylos
4060 Deipylos is a large Jupiter trojan from the Greek camp, approximately in diameter. It was discovered on 17 December 1987, by astronomers Eric Elst and Guido Pizarro at ESO's La Silla Observatory in northern Chile. The transitional C-type asteroid belongs to the 40 largest Jupiter trojans and has a rotation period of 9.3 hours. It was named after Deipylos from Greek mythology.
Orbit and classification
Deipylos is a dark Jovian asteroid orbiting in the leading Greek camp at Jupiter's Lagrangian point, 60° ahead of its orbit in a 1:1 resonance (see Trojans in astronomy). It is a non-family asteroid in the Jovian background population. It orbits the Sun at a distance of 4.4–6.1 AU once every 12.03 years (4,392 days; semi-major axis of 5.25 AU). Its orbit has an eccentricity of 0.15 and an inclination of 16° with respect to the ecliptic.
The body's observation arc begins with its first observation at Turku Observatory in March 1942, 45 years prior to its official discovery observation.
Physical characteristics
Deipylos has been characterized as a carbonaceous C-type asteroid in the Tholen-like taxonomy of the Small Solar System Objects Spectroscopic Survey (S3OS2). In their SMASS-like taxonomy, S3OS2 classified Deipylos as a Cb-subtype that transitions to the somewhat brighter B-type asteroids. The Collaborative Asteroid Lightcurve Link also assumes it to be of carbonaceous composition.
Rotation period
In December 2010, a rotational lightcurve of Deipylos was obtained from photometric observations in the R-band by astronomers at the Palomar Transient Factory in California. Lightcurve analysis gave a rotation period of 11.490 hours with a brightness amplitude of 0.11 magnitude. Between 2015 and 2017, several observations by Robert Stephens in collaboration with Daniel Coley and Brian Warner at the Center for Solar System Studies in California gave a more refined period between 9.19 and 9.38 hours and an amplitude of 0.07–0.13 magnitude. The best-rated result gave a period of 9.298 hours.
Diameter and albedo
According to the surveys carried out by the Infrared Astronomical Satellite IRAS, the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Deipylos measures between 79.21 and 86.79 kilometers in diameter and its surface has an albedo between 0.043 and 0.078. CALL assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a diameter of 66.34 kilometers based on an absolute magnitude of 9.62.
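The CALL diameter quoted above follows the standard relation between an asteroid's diameter D (in km), geometric albedo p_V, and absolute magnitude H. A minimal sketch, assuming the conventional constant of 1329 km used in asteroid photometry:

```python
import math

# Standard asteroid size relation: D = 1329 / sqrt(p_V) * 10 ** (-H / 5)
def diameter_km(p_v, h):
    """Diameter in km from geometric albedo p_v and absolute magnitude h."""
    return 1329.0 / math.sqrt(p_v) * 10 ** (-h / 5)

# Inputs quoted in the text for CALL's estimate
print(round(diameter_km(0.057, 9.62), 1))  # ≈ 66.3 km, matching the quoted 66.34 km
```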
Naming
This minor planet was named after Deipylos, a Greek hero in the Trojan War. He was a companion of Sthenelus (Sthenelos), who ordered him to bring the horses captured from Aeneas to the Greek vessels. The official naming citation was published by the Minor Planet Center on 15 September 1989.
Notes
References
External links
Asteroid Lightcurve Database (LCDB), query form (info)
Dictionary of Minor Planet Names, Google books
Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center
Discoveries by Eric Walter Elst
Discoveries by Guido Pizarro (astronomer)
Minor planets named from Greek mythology
Named minor planets
Roger Stone
Roger Jason Stone (born Roger Joseph Stone Jr.; August 27, 1952) is an American conservative political consultant and lobbyist.
Since the 1970s, Stone has worked on the campaigns of Republican politicians, including Richard Nixon, Ronald Reagan, Jack Kemp, Bob Dole, George W. Bush, and Donald Trump. In addition to frequently serving as a campaign adviser, Stone was a political lobbyist. In 1980, he co-founded a Washington, D.C.–based lobbying firm with Paul Manafort and Charles R. Black Jr. The firm recruited Peter G. Kelly and was renamed Black, Manafort, Stone and Kelly in 1984. During the 1980s, BMSK became a top lobbying firm by leveraging its White House connections to attract high-paying clients, including U.S. corporations and trade associations, as well as foreign governments. By 1990, it was one of the leading lobbyists for American companies and foreign organizations.
A longtime friend of Donald Trump, Stone has been variously described as a "renowned infighter", a "seasoned practitioner of hard-edged politics", a "mendacious windbag", a "veteran Republican strategist", and a political fixer. Over the course of the 2016 Trump presidential campaign, Stone promoted a number of falsehoods and conspiracy theories. He has described his political modus operandi as "Attack, attack, attack - never defend" and "Admit nothing, deny everything, launch counterattack." Stone first suggested Trump run for president in early 1998 while he was Trump's casino business lobbyist in Washington. The Netflix documentary film Get Me Roger Stone focuses on Stone's past and role in Trump's presidential campaign.
Stone officially left the Trump campaign on August 8, 2015. However, two associates of Stone have said he collaborated with WikiLeaks founder Julian Assange during the 2016 presidential campaign to discredit Hillary Clinton. Stone and Assange have denied these claims. Nearly three dozen search warrants were unsealed in April 2020 which revealed contacts between Stone and Assange, and that Stone orchestrated hundreds of fake Facebook accounts and bloggers to run a political influence scheme on social media.
On January 25, 2019, Stone was arrested at his Fort Lauderdale, Florida, home in connection with Robert Mueller's Special Counsel investigation and charged in an indictment with witness tampering, obstructing an official proceeding, and five counts of making false statements. In November 2019, a jury convicted him on all seven felony counts. He was sentenced to 40 months in prison. On July 10, 2020, days before Stone was scheduled to report to prison, Trump commuted his sentence. On August 17, 2020, he dropped the appeal of his convictions. Trump pardoned Stone on December 23, 2020.
Early life and political work
Stone was born on August 27, 1952, in Norwalk, Connecticut, to Gloria Rose (Corbo) and Roger J. Stone. He grew up in the community of Vista, part of the town of Lewisboro, New York, on the Connecticut border. His mother was the president of Meadow Pond Elementary School PTA, a Cub Scout den mother, and occasionally a small-town reporter; his father "Chubby" (also Roger J. Stone) was a well driller and sometime chief of the Vista volunteer Fire Department. He has described his family as middle-class, blue-collar Catholics.
Stone said that as an elementary school student in 1960, he broke into politics to further John F. Kennedy's presidential campaign: "I remember going through the cafeteria line and telling every kid that Nixon was in favor of school on Saturdays ... It was my first political trick."
When he was a junior and vice president of student government at John Jay High School in northern Westchester County, New York, he manipulated the ouster of the president and succeeded him. Stone recalled how he ran for election as president for his senior year: "I built alliances and put all my serious challengers on my ticket. Then I recruited the most unpopular guy in the school to run against me. You think that's mean? No, it's smart."
Given a copy of Barry Goldwater's The Conscience of a Conservative, Stone became a convert to conservatism as a child and a volunteer in Goldwater's 1964 campaign. In 2007, Stone indicated he was a staunch conservative but with libertarian leanings.
As a student at George Washington University in 1972, Stone invited Jeb Magruder to speak at a Young Republicans Club meeting, then asked Magruder for a job with Richard Nixon's Committee to Re-elect the President. Magruder agreed and Stone then left college to work for the committee.
Career
1970s: Nixon campaign, Watergate and Reagan 1976
Stone's political career began in earnest on the 1972 Nixon campaign, with activities such as contributing money to a possible rival of Nixon in the name of the Young Socialist Alliance and then slipping the receipt to the Manchester Union-Leader. He also hired a spy in the Hubert Humphrey campaign who became Humphrey's driver. According to Stone, during the day he was officially a scheduler in the Nixon campaign, but "By night, I'm trafficking in the black arts. Nixon's people were obsessed with intelligence." Stone maintains he never did anything illegal during Watergate. The Richard Nixon Foundation later clarified that Stone had been a 20-year-old junior scheduler on the campaign, and that to characterize Stone as one of Nixon's aides or advisers was a "gross misstatement".
After Nixon won the 1972 presidential election, Stone worked for the administration in the Office of Economic Opportunity. After Nixon resigned, Stone went to work for Bob Dole, but was later fired after columnist Jack Anderson publicly identified Stone as a Nixon "dirty trickster".
In 1975, Stone helped found the National Conservative Political Action Committee, a New Right organization that helped to pioneer independent expenditure political advertising.
In 1976, he worked in Ronald Reagan's campaign for U.S. President. In 1977, at age 24, Stone won the presidency of the Young Republicans in a campaign managed by his friend Paul Manafort; they had compiled a dossier on each of the 800 delegates that gathered, which they called "whip books".
1980s: Reagan 1980, lobbying, Bush 1988
Stone went on to serve as chief strategist for Thomas Kean's campaign for Governor of New Jersey in 1981 and for his reelection campaign in 1985.
Stone, the "keeper of the Nixon flame", was an adviser to the former President in his post-presidential years, serving as "Nixon's man in Washington". Stone was a protégé of former Connecticut Governor John Davis Lodge, who introduced the young Stone to former Vice President Nixon in 1967. After Stone was indicted in 2019, the Nixon Foundation released a statement distancing Stone's ties to Nixon. John Sears recruited Stone to work in Ronald Reagan's 1980 presidential campaign, coordinating the Northeast. Stone said that Roy Cohn helped him arrange for John B. Anderson to get the nomination of the Liberal Party of New York, a move that would help split the opposition to Reagan in the state. Stone said Cohn gave him a suitcase that Stone avoided opening and that, as instructed by Cohn, he dropped off at the office of a lawyer influential in Liberal Party circles. Reagan carried the state with 46% of the vote. Speaking after the statute of limitations for bribery had expired, Stone later said, "I paid his law firm. Legal fees. I don't know what he did for the money, but whatever it was, the Liberal party reached its right conclusion out of a matter of principle."
In 1980, after their key roles in the Reagan campaign, Stone and Manafort decided to go into business together, with partner Charlie Black, creating a political consulting and lobbying firm to cash in on their relationships within the new administration. Black, Manafort & Stone (BMS), became one of Washington D.C.'s first mega-lobbying firms and was described as instrumental to the success of Ronald Reagan's 1984 campaign. Republican political strategist Lee Atwater joined the firm in 1985, after serving in the #2 position on Reagan-Bush 1984.
Because of BMS's willingness to represent brutal third-world dictators like Mobutu Sese Seko in Zaire and Ferdinand Marcos in the Philippines, the firm was branded "The Torturers' Lobby". BMS also represented a host of high-powered corporate clients, including Rupert Murdoch's News Corp, The Tobacco Institute and, starting in the early 1980s, Donald Trump.
In 1987 and 1988, Stone served as senior adviser to Jack Kemp's presidential campaign, which was managed by consulting partner Charlie Black. In that same election, his other partners worked for George H. W. Bush (Lee Atwater as campaign manager, and Paul Manafort as director of operations in the fall campaign).
In April 1992, Time alleged that Stone was involved with the controversial Willie Horton advertisements to aid George H. W. Bush's 1988 presidential campaign, which were targeted against Democratic opponent Michael Dukakis. Stone has said that he urged Lee Atwater not to include Horton in the ad. Stone denied making or distributing the advertisement, and said it was Atwater's doing.
In the 1990s, Stone and Manafort sold their business. Although their careers went in different directions, their relationship remained close. Stone married his first wife Anne Elizabeth Wesche in 1974. Using the name Ann E.W. Stone, she founded the group Republicans for Choice in 1989. They divorced in 1990.
1990s: Early work with Donald Trump, Dole 1996
In 1995, Stone was the president of Republican Senator Arlen Specter's campaign for the 1996 Republican presidential nomination. Specter withdrew early in the campaign season with less than 2% support.
Stone was for many years a lobbyist for Donald Trump on behalf of his casino business and also was involved in opposing expanded casino gambling in New York State, a position that brought him into conflict with Governor George Pataki.
Stone resigned from a post as a consultant to the 1996 presidential campaign for Senator Bob Dole after The National Enquirer reported that Stone had placed ads and pictures on websites and swingers' publications seeking sexual partners for himself and Nydia Bertran Stone, his second wife. Stone initially denied the report. On the Good Morning America program he falsely stated, "An exhaustive investigation now indicates that a domestic employee, who I discharged for substance abuse on the second time that we learned that he had a drug problem, is the perpetrator who had access to my home, access to my computer, access to my password, access to my postage meter, access to my post-office box key." In a 2008 interview with The New Yorker, Stone admitted that the ads were authentic.
2000–2008: Florida recount, Killian memos, conflict with Eliot Spitzer
In 2000, Stone served as the campaign manager for Donald Trump's aborted campaign for President in the Reform Party primary. Investigative journalist Wayne Barrett accused Stone of persuading Trump to publicly consider a run for the Reform nomination to sideline Pat Buchanan and sabotage the Reform Party in an attempt to lower their vote total to benefit George W. Bush.
Later that year, according to Stone and the film Recount, Stone was recruited by James Baker to assist with public relations during the Florida recount. According to reporter Greg Palast, Stone was a key figure in organizing the so-called Brooks Brothers riot, the demonstration by Republican operatives against the recount.
In 2002, Stone was associated with the campaign of businessman Thomas Golisano for governor of New York State.
During the 2004 presidential campaign, Stone was an advisor (apparently unpaid) to Al Sharpton, a candidate in the Democratic primaries. Defending Stone's involvement, Sharpton said, "I've been talking to Roger Stone for a long time. That doesn't mean that he's calling the shots for me. Don't forget that Bill Clinton was doing more than talking to Dick Morris." Critics suggested that Stone was only working with Sharpton as a way to undermine the Democratic Party's chances of winning the election. Sharpton denies that Stone had any influence over his campaign.
In that election a blogger accused Stone of responsibility for the Kerry–Specter campaign materials which were circulated in Pennsylvania. Such signs were considered controversial because they were seen as an effort to get Democrats who supported Kerry to vote for then Republican Senator Arlen Specter in heavily Democratic Philadelphia.
During the 2004 general election, Stone was accused by then-DNC Chairman Terry McAuliffe of forging the Killian memos that led CBS News to report that President Bush had not fulfilled his service obligations while enlisted in the Texas Air National Guard. McAuliffe cited a report in the New York Post in his accusations. For his part, Stone denied having forged the documents.
In 2007, Stone, a top adviser at the time to Joseph Bruno (the Majority Leader of the New York State Senate), was forced to resign by Bruno after allegations that Stone had threatened Bernard Spitzer, the then-83-year-old father of Democratic gubernatorial candidate Eliot Spitzer. On August 6, 2007, an expletive-laced message was left on the elder Spitzer's answering machine threatening to prosecute the elderly man if he did not implicate his son in wrongdoing. Bernard Spitzer hired a private detective agency that traced the call to the phone of Roger Stone's wife. Roger Stone denied leaving the message, despite the fact that his voice was recognized, claiming he was at a movie that was later shown not to have been screened that night. Stone was accused on an episode of Hardball with Chris Matthews on August 22, 2007, of being the voice on an expletive-laden voicemail threatening Bernard Spitzer, father of Eliot, with subpoenas. Donald Trump is quoted as saying of the incident, "They caught Roger red-handed, lying. What he did was ridiculous and stupid."
Stone consistently denied the reports. Thereafter, however, he resigned from his position as a consultant to the New York State Senate Republican Campaign Committee at Bruno's request.
In January 2008, Stone founded Citizens United Not Timid, an anti-Hillary Clinton 527 group with an intentionally obscene acronym.
Stone is featured in Boogie Man: The Lee Atwater Story, a 2008 documentary about Lee Atwater. He also appeared in Client 9: The Rise and Fall of Eliot Spitzer, the 2010 documentary about the Eliot Spitzer prostitution scandal.
Former Trump aide Sam Nunberg considered Stone his mentor during this period, as well as a "surrogate father".
2010–2014: Libertarian Party involvement and other political activity
In February 2010, Stone became campaign manager for Kristin Davis, a madam linked with the Eliot Spitzer prostitution scandal, in her bid for the Libertarian Party nomination for governor of New York in the 2010 election. Stone said that the campaign "is not a hoax, a prank or a publicity stunt. I want to get her a half-million votes." However, he later was spotted at a campaign rally for Republican gubernatorial candidate Carl Paladino, of whom Stone has spoken favorably. Stone admittedly had been providing support and advice to both campaigns on the grounds that the two campaigns had different goals: Davis was seeking to gain permanent ballot access for her party, and Paladino was in the race to win (and was Stone's preferred candidate). As such, Stone did not believe he had a conflict of interest in supporting both candidates. While working for the Davis campaign, Warren Redlich, the Libertarian nominee for Governor, alleged that Stone collaborated with a group entitled "People for a Safer New York" to send a flyer labeling Redlich a "sexual predator" and "sick, twisted pervert" based on a blog post Redlich had made in 2008. Redlich later sued Stone in a New York court for defamation over the flyers, and sought $20 million in damages. However, the jury in the case returned a verdict in favor of Stone in December 2017, finding that Redlich failed to prove Stone was involved with the flyers.
Stone volunteered as an unpaid adviser to comedian Steve Berke ("a libertarian member of his so-called After Party") in his 2011 campaign for mayor of Miami Beach, Florida. Berke lost the race to incumbent Mayor Matti Herrera Bower.
In February 2012, Stone said that he had changed his party affiliation from the Republican Party to the Libertarian Party. Stone predicted a "Libertarian moment" in 2016 and the end of the Republican Party.
In June 2012, Stone said that he was running a super PAC in support of former New Mexico governor and Libertarian presidential candidate Gary Johnson, whom he had met at a Reason magazine Christmas party two years earlier. Stone told the Huffington Post that Johnson had a real role to play, although "I have no allusions of him winning."
Stone considered running as a Libertarian candidate for governor of Florida in 2014, but in May 2013, he said in a statement that he would not run, and that he wanted to devote himself to campaigning in support of a 2014 constitutional amendment on the Florida ballot to legalize medical marijuana.
2016: Donald Trump campaign and media commentary
Stone served as an adviser to the 2016 presidential campaign of Donald Trump. Stone left the campaign on August 8, 2015, amid controversy, with Stone claiming he quit and Trump claiming that Stone was fired. Despite this, Stone still supported Trump. A few days later, Stone wrote an op-ed called "The man who just resigned from Donald Trump's campaign explains how Trump can still win" for Business Insider.
Despite calling Stone a "stone-cold loser" in a 2008 interview and accusing him of seeking too much publicity in a statement shortly after Stone left the campaign, Donald Trump praised him during an appearance in December 2015 on Alex Jones' radio show that was orchestrated by Stone. "Roger's a good guy," Trump said. "He's been so loyal and so wonderful." Stone remained an informal adviser to and media surrogate for Trump throughout the campaign.
Stone had considered entering the 2016 US Senate race in Florida to challenge white nationalist Augustus Invictus for the Libertarian nomination. He ultimately did not enter the race.
During the course of the 2016 campaign, Stone was banned from appearing on CNN and MSNBC after making a series of offensive Twitter posts disparaging television personalities. Stone specifically referred to a CNN commentator as an "entitled diva bitch" and imagined her "killing herself", and called another CNN personality a "stupid negro" and a "fat negro". Erik Wemple, media writer for The Washington Post, described Stone's tweets as "nasty" and "bigoted". In February 2016, CNN said that it would no longer invite Stone to appear on its network, and MSNBC followed suit, confirming in April 2016 that Stone had also been banned from that network. In a June 2016 appearance on On Point, Stone told Tom Ashbrook: "I would have to admit that calling Roland Martin a 'fat negro' was a two-martini tweet, and I regret that. As for my criticism of Ana Navarro not being qualified ... I don't understand why she's there, given her lack of qualifications."
In March 2016, an article in the tabloid magazine National Enquirer stated that Ted Cruz, Trump's Republican primary rival, had extramarital affairs with five women. The article quoted Stone as saying, "These stories have been swirling about Cruz for some time. I believe where there is smoke there is fire." Cruz denied the allegations (calling it "garbage" and a "tabloid smear") and accused the Trump campaign, and Stone specifically, of planting the story as part of an orchestrated smear campaign against him. Cruz stated, "It is a story that quoted one source on the record, Roger Stone, Donald Trump's chief political adviser. And I would note that Mr. Stone is a man who has 50 years of dirty tricks behind him. He's a man for whom a term was coined for copulating with a rodent." In April 2016, Cruz again criticized Stone, saying on Sean Hannity's radio show of Stone: "He is pulling the strings on Donald Trump. He planned the Trump campaign, and he is Trump's henchman and dirty trickster. And this pattern, Donald keeps associating himself with people who encourage violence." Stone responded by comparing Cruz to Richard Nixon and accusing him of being a liar.
In April 2016, Stone formed a pro-Trump activist group, Stop the Steal, and threatened "Days of Rage" if Republican party leaders tried to deny the nomination to Trump at the Republican National Convention in Cleveland. The Washington Post reported that Stone "is organizing [Trump] supporters as a force of intimidation", noting that Stone "has ... threatened to publicly disclose the hotel room numbers of delegates who work against Trump". Republican National Committee Chairman Reince Priebus said that Stone's threat to publicize the hotel room numbers of delegates was "just totally over the line".
After Trump had been criticized at the Democratic National Convention for his comments on Muslims by Khizr Khan, a Pakistani American whose son received a posthumous Bronze Star Medal and Purple Heart in Operation Iraqi Freedom in 2004, Stone made headlines defending Trump's criticism by accusing Khan of sympathizing with the enemy.
In 2017, Stone was the subject of a Netflix documentary film, titled Get Me Roger Stone, which focuses on his past and on his role in the 2016 presidential campaign of Donald Trump. Stone first suggested Trump run for President in early 1998 while Stone was Trump's casino business lobbyist in Washington.
Stone called Saudi Arabia "an enemy" and criticized Trump's visit to Riyadh in May 2017. He suggested that the Saudi government or members of the Saudi royal family directly supported or financed the September 11 attacks, tweeting that "Instead of meeting with the Saudis @realDonaldTrump should be demanding they pay for the attack on America on 9/11 which they financed."
During the campaign, Stone frequently promoted conspiracy theories, including the false claim that Clinton aide Huma Abedin was connected to the Muslim Brotherhood. In December 2018, as part of a defamation settlement, Stone agreed to retract a false claim he had made during the campaign: that Guo Wengui had donated to Hillary Clinton.
On September 10, 2020, Stone told InfoWars host Alex Jones that if Trump appeared to lose the 2020 United States presidential election, he should consider invoking the Insurrection Act to declare martial law, confiscate ballots, shut down the opinion website The Daily Beast, and arrest its staff for "seditious" activities, among other things.
As numerous false and unsubstantiated allegations of voting fraud spread after the 2020 presidential election, Stone asserted he had "learned of absolute incontrovertible evidence of North Korean boats delivering ballots through a harbor in Maine." Matthew Dunlap, the Maine secretary of state, said the "vague rumor has absolutely no validity."
Proud Boys ties
In early 2018, ahead of an appearance at the annual Republican Dorchester Conference in Salem, Oregon, Stone sought out the Proud Boys, a right-wing group known for street violence, to act as his "security" for the event; photos posted online showed Stone drinking with several Proud Boys. After his arraignment at the Miami federal courthouse in January 2019, they joined him on its steps holding signs that said, "Roger Stone is innocent," and promoting right-wing conspiracy theorist Alex Jones and his InfoWars website. Proud Boys founder Gavin McInnes said Stone was "one of the three approved media figures allowed to speak" about the group. When a local reporter asked Stone about the Proud Boys' claim that he had been initiated as a member of the group, he responded by calling the reporter a member of the Communist Party. He is particularly close to the group's leader, Enrique Tarrio, who has monetized his position. At a televised Trump rally in Miami, Florida, on February 18, 2019, Tarrio was seated directly behind President Trump wearing a "Roger Stone did nothing wrong" T-shirt.
The Washington Post reported in February 2021 that the FBI was investigating any role Stone might have had in influencing the Proud Boys and Oath Keepers in their participation in the 2021 storming of the United States Capitol.
Relations with Israel before the 2016 United States elections
According to The Times of Israel, Roger Stone "was in contact with one or more apparently well-connected Israelis at the height of the 2016 US presidential campaign, one of whom warned Stone that Trump was “going to be defeated unless we intervene” and promised “we have critical intell[sic].” The exchange between Stone and this Jerusalem-based contact appears in FBI documents made public".
Relations with Wikileaks and Russia before the 2016 United States elections
During the 2016 campaign, Stone was accused by Hillary Clinton campaign chairman John Podesta of having prior knowledge of the publishing by WikiLeaks of Podesta's private emails obtained by a hacker. Stone tweeted before the leak, "It will soon the Podesta's time in the barrel". Five days before the leak, Stone tweeted, "Wednesday Hillary Clinton is done. #Wikileaks." Stone has denied having any advance knowledge of the Podesta email hack or any connection to Russian intelligence, stating that his earlier tweet was referring to reports of the Podesta Group's own ties to Russia. In his opening statement before the United States House Permanent Select Committee on Intelligence on September 26, 2017, Stone reiterated this claim: "Note that my tweet of August 21, 2016, makes no mention, whatsoever, of Mr. Podesta's email, but does accurately predict that the Podesta brothers' business activities in Russia ... would come under public scrutiny."
Stone repeatedly acknowledged that he had established a back-channel with WikiLeaks founder Julian Assange to obtain information on Hillary Clinton and pointed to this intermediary as the source for his advance knowledge about the release of Podesta's e-mails by WikiLeaks. Stone ultimately named Randy Credico, who had interviewed both Assange and Stone for a radio show, as his intermediary with Assange. A January 2019 indictment claimed Stone communicated with additional contacts knowledgeable about WikiLeaks' plans.
In February 2017, The New York Times reported that as part of its investigation into the Trump campaign, the FBI was looking into any contacts Stone may have had with Russian operatives. The following month The Washington Times reported that Stone had direct-messaged alleged DNC hacker Guccifer 2.0 on Twitter. Stone acknowledged contacts with the mysterious persona and made public excerpts of the messages. Stone said the messages were just innocent praise of the hacking. U.S. intelligence agencies believe Guccifer 2.0 to be a persona created by Russian intelligence to obscure its role in the DNC hack. The Guccifer 2.0 persona was ultimately linked with an IP address associated with the Russian intelligence agency, GRU, in Moscow when a user with a Moscow IP address logged into one of the Guccifer social media accounts without using a VPN.
In March 2017, the Senate Intelligence Committee asked Stone to preserve all documents related to any Russian contacts. The Committee Vice Chair, Senator Mark Warner (D-VA), called on Stone to testify before the committee, saying he "hit the trifecta" of shady dealings with Russia. Stone denied any wrongdoing in an interview on Real Time with Bill Maher on March 31, 2017, and said he was willing to testify before the committee. The Committee's final report of August 2020 found that Stone did have access to WikiLeaks and that Trump had spoken to Stone and other associates about it multiple times. Immediately after the Access Hollywood tape was released in October 2016, Stone directed his associate Jerome Corsi to tell Julian Assange to "drop the Podesta emails immediately," which WikiLeaks leaked minutes later. However, the drop had been announced three days earlier, and the Mueller investigation was only able to establish that Corsi had talked to Ted Malloch, who was not an Assange associate. The Committee also found that WikiLeaks "very likely knew it was assisting a Russian intelligence influence effort." In written responses to the Mueller investigation, Trump stated he did not recall such discussions with Stone.
On September 26, 2017, Stone testified before the House Intelligence Committee behind closed doors. He also provided a statement to the Committee and the press. The Washington Post annotated Stone's statement by noting his affiliations with InfoWars, Breitbart, and Barack Obama citizenship conspiracy theories promulgator, Jerome Corsi. Stone also made personal attacks on Democratic committee members Adam Schiff, Eric Swalwell and Dennis Heck.
On October 28, 2017, following a news report by CNN that indictments would be announced within a few days, Stone's Twitter account was suspended by Twitter for what it called "targeted abuse" of various CNN personnel in a series of derogatory, threatening and obscenity-filled tweets.
On December 1, 2017, Stone texted Randy Credico, a prosecution witness: "If you testify you're a fool. Because of tromp (sic), I could never get away with a certain (sic) my Fifth Amendment rights but you can. I guarantee you you (sic) are the one who gets indicted for perjury if you're stupid enough to testify." According to page 20 of his indictment, on April 9, 2018, Stone emailed threats to the witness, including a comment regarding Credico's dog that he would "...take that dog away from you," as well as: "You are a rat. A stoolie. You backstab your friends-run your mouth my lawyers are dying Rip you to shreds." "I am so ready. Let's get it on. Prepare to die cock sucker." In a May 21, 2018 email, Stone wrote: "You are so full of shit. You got nothing. Keep running your mouth and I'll file a bar complaint against your friend."
In a December 2018 interview with the Florida television station WBBH-TV, following the sentencing of Michael Cohen, Stone said that Cohen shouldn't have lied under oath, and that Cohen was a "rat" because he turned on the president, something Stone said he would never do.
On March 13, 2018, two sources close to Stone, former Trump aide Sam Nunberg and a person speaking on condition of anonymity, acknowledged to The Washington Post that Stone had established contact with WikiLeaks founder Julian Assange and that the two had a telephone conversation discussing emails related to the Clinton campaign which had been leaked to WikiLeaks. According to Nunberg, who claimed he spoke to the paper after being asked to do so by Special Counsel Robert Mueller, Stone joked to him that he had taken a trip to London to personally meet with Assange, but declined to do so, had only wanted to have telephone conversations to remain undetected and did not have advance notice of the leaked emails. The other source, who spoke on anonymity, stated that the conversation occurred before it was publicly known that hackers had obtained the emails of Podesta and of the Democratic National Committee, documents that WikiLeaks released in July and October 2016. Stone afterwards denied that he had contacted Assange or had known in advance about the leaked emails.
In May 2018, Stone's social media consultant, Jason Sullivan, was issued grand jury subpoenas from the Mueller investigation.
On July 3, 2018, U.S. District Judge Ellen Huvelle dismissed a lawsuit brought by political activist group Protect Democracy, alleging that Trump's campaign and Stone conspired with Russia and WikiLeaks to publish hacked Democratic National Committee emails during the 2016 presidential election race. The judge found that the suit was brought in the wrong jurisdiction. The next week, Stone was identified by two government officials as the anonymous person mentioned in the indictment released by Deputy Attorney General Rod Rosenstein that charged twelve Russian military intelligence officials with conspiring to interfere in the 2016 elections, as somebody the Russian hackers operating the online persona Guccifer 2.0 communicated with, and who the indictment alleged was in regular contact with senior members of the presidential campaign.
Charges
Arrest and indictment
On January 25, 2019, in a pre-dawn raid by 29 FBI agents acting on both an arrest warrant and a search warrant at his Fort Lauderdale, Florida home, Stone was arrested on seven criminal charges of an indictment in the Mueller investigation: one count of obstructing an official proceeding, five counts of false statements, and one count of witness tampering. The same day, a federal magistrate judge released Stone on a US$250,000 signature bond and declared that he was not a flight risk. Stone said he would fight the charges, which he called politically motivated, and would refuse to "bear false witness" against Trump. He called Robert Mueller a "rogue prosecutor". In the charging document, prosecutors alleged that after the first WikiLeaks release of hacked DNC emails in July 2016, a senior Trump campaign official was directed to contact Stone about any additional releases and determine what other damaging information WikiLeaks had regarding the Clinton campaign. Stone thereafter told the Trump campaign about potential future releases of damaging material by WikiLeaks, the indictment alleged. The indictment also alleged that Stone had discussed WikiLeaks releases with multiple senior Trump campaign officials.
On February 18, 2019, Stone posted on Instagram a photo of the federal judge overseeing his case, Amy Berman Jackson, with what resembled rifle scope crosshairs next to her head. Later that day, Stone filed an apology with the court. Jackson then imposed a full gag order on Stone, citing her belief that Stone would "pose a danger" to others without the order.
Trial and conviction
Stone's trial began on November 6, 2019. Randy Credico testified that Stone urged and threatened him to prevent him testifying to Congress. Stone had testified to Congress that Credico was his WikiLeaks go-between, but prosecutors said this was a lie in order to protect Jerome Corsi. During the November 12 testimony, former Trump campaign deputy chairman Rick Gates testified that Stone told campaign associates in April 2016 of WikiLeaks' plans to release documents, far earlier than previously known. Gates also testified that Trump had spoken with Stone about the forthcoming releases. After a week-long trial and two days of deliberations, the jury convicted Stone on all counts – obstruction, making false statements, and witness tampering – on November 15, 2019. After the trial, one of the jurors emphasized that the jury did not convict Stone based on his political beliefs. On November 25, a decision denying a defense motion for acquittal was released. The judge wrote that the testimony of Steve Bannon and Rick Gates was sufficient to conclude that Stone lied to Congress.
Sentencing
Intervention by Trump and Justice Department officials
On February 10, 2020, prosecutors from the U.S. Attorney's Office for the District of Columbia requested that Stone be sentenced to seven to nine years in prison for his crimes after securing convictions on all seven charges. Around midnight, Trump characterized the sentencing recommendation as a "horrible and very unfair situation" and tweeted, "Cannot allow this miscarriage of justice!" The next morning a senior Justice Department official said the department would recommend a lighter sentence, adding that the decision had been made before Trump commented. That afternoon the Department of Justice filed a revised sentencing memorandum, saying the initial recommendation could be "considered excessive and unwarranted under the circumstances." All four Assistant U.S. Attorneys prosecuting the case (Jonathan Kravis, Aaron Zelinsky, Adam Jed, and Michael Marando) withdrew from it, and Kravis resigned from the U.S. Attorney's Office altogether. Senate Minority Leader Chuck Schumer sent a letter to the Department of Justice Inspector General requesting a probe into the reduced sentencing recommendation, over fears of potential improper political interference in the process. Trump later said he had not asked the Justice Department to recommend a lighter sentence, but also asserted he had an "absolute right" to intervene. The next day he praised U.S. Attorney General William Barr for "taking charge" of the case and thanked Justice Department officials for recommending a lesser sentence than was proposed by the prosecutors who tried the case.
The politicization of Stone's sentencing by Trump and senior Trump administration officials at the Justice Department caused controversy and prompted allegations of political interference; the Justice Department's unusual decision to overrule the prosecutors on the case, as well as Stone's close association with Donald Trump, led to the affair being described as a crisis in the rule of law in the U.S. More than 2,000 former employees of the Department of Justice signed an open letter calling on Barr to resign, and the Federal Judges Association convened an emergency meeting on the matter. In testimony before the House Judiciary Committee, Zelinsky, one of the prosecutors who withdrew from the case after the Justice Department intervened to recommend a lighter sentence for Stone, said that the "highest levels" of the Justice Department had been "exerting significant pressure" on prosecutors "to cut Stone a break" and "water down and in some cases outright distort" Stone's conduct. Zelinsky testified that "What I heard, repeatedly, was that Roger Stone was being treated differently from any other defendant because of his relationship to the president." Zelinsky also testified that acting U.S. Attorney Timothy Shea made the request for a lighter sentence for Stone after coming under "heavy pressure from the highest levels of the Department of Justice" and out of fear of Trump. Zelinsky testified that in his career as a prosecutor, United States v. Roger Stone was the sole occasion in which he witnessed "political influence play any role in prosecutorial decision making," and that he opted to resign from the case and his temporary appointment in the U.S. Attorney's Office in D.C. "rather than be associated with the Department of Justice's actions at sentencing."
Former Attorney General Eric Holder tweeted, "do not underestimate the danger of this situation: the political appointees in the DOJ are involving themselves in an inappropriate way in cases involving political allies of the President"; former director of the Office of Government Ethics Walter Shaub tweeted, "a corrupt authoritarian and his henchmen are wielding the Justice Department as a shield for friends and a sword for political rivals. It is impossible to overstate the danger." Channing D. Phillips, who previously served as U.S. Attorney for D.C., said that the events were "deeply troubling" and that the withdrawal of all four line prosecutors suggested "undue meddling by higher ups at DOJ or elsewhere." CNN reported that other prosecutors in the U.S. Attorney's Office for D.C. had discussed resigning over the matter. The New York Times reported that federal prosecutors around the nation – already leery of taking cases that might catch Trump's attention – had become increasingly concerned after the Stone developments. In late June, Attorney General Barr agreed to testify before the House Judiciary Committee at an oversight hearing on July 28, 2020, which would be Barr's first congressional testimony since his confirmation in early 2019. Barr agreed to appear before the committee one day after Chairman Jerry Nadler said he would issue a subpoena to compel Barr's testimony if he did not appear voluntarily.
On February 11, 2020, the same day the four Stone prosecutors withdrew from the case after the Justice Department intervened in the sentencing recommendation, Trump withdrew the nomination of Jessie K. Liu, former U.S. Attorney for the District of Columbia, to become an Under Secretary of the Treasury, two days before her scheduled confirmation hearing. As U.S. attorney, Liu had overseen some ancillary cases referred by the Mueller investigation, including the Stone prosecution, as well as a politically charged case involving former FBI deputy director Andrew McCabe, until Attorney General Barr replaced her with his close advisor Shea in January 2020. CNN reported the next day that Liu's nomination was withdrawn because she was perceived to be insufficiently involved in the Stone and McCabe cases.
Post-trial motions and sentencing
On February 12, Judge Amy Berman Jackson denied Stone's motion for a new trial. Stone had asserted that a juror was biased against him. Stone again requested a new trial on February 14, after the jury foreperson of his trial publicly voiced support for the four prosecutors who withdrew from the Stone case. All jurors in the Stone trial had been vetted for potential bias by Judge Jackson, the defense team, and prosecutors.
On February 20, 2020, Judge Jackson sentenced Stone to 40 months in federal prison and a $20,000 fine for his crimes, but allowed him to delay the start of his sentence pending resolution of his post-trial motions. At the sentencing hearing, Jackson stated that "the truth still exists" and "the truth still matters," and that Stone's insistence that it does not posed "a threat to our most fundamental institutions, to the very foundation of our democracy." Jackson also rejected Trump's attacks on the investigators and prosecutors, saying, "There was nothing unfair, phony, or disgraceful about the investigation or the prosecution," and added that "Roger Stone will not be sentenced for who his friends are, or who his enemies are."
On February 23, 2020, Judge Jackson rejected a request by Stone's lawyers that she be removed from the case.
On April 16, Judge Jackson denied Stone's motion for a new trial and ordered Stone to report to federal prison within two weeks. On April 30, ABC News reported that it had learned through sources that the Federal Bureau of Prisons planned to delay Stone's surrender date by at least 30 days due to concerns relating to the COVID-19 pandemic. On May 28, Stone was ordered by Judge Jackson to report to prison by June 30. On June 24, Stone filed a motion to delay his transfer to prison, citing potential health concerns connected to the COVID-19 pandemic. On June 27, Judge Jackson rescheduled Stone's surrender date to July 14, but also ordered him to immediately begin serving time in home confinement before reporting to prison.
Commutation and pardon
After Stone's conviction, Trump repeatedly indicated that he was considering a pardon for Stone. Trump also repeatedly attacked the prosecutors, judge, and jury in Stone's trial, contending, without evidence, that the foreperson of the jury (which unanimously convicted Stone) had been dishonest in her jury questionnaire; she had previously made anti-Trump social media posts and had retweeted a post about Stone's initial arrest shortly after it happened, before the trial. Another juror stated that had the forewoman not been there, the jury would have returned the same verdict, only faster, insisting that she was impartial and focused on process. Stone publicly lobbied for clemency, stressing his loyalty to the president, saying: "He knows I was under enormous pressure to turn on him. It would have eased my situation considerably. But I didn't." Within Trump's circle, Fox News commentator Tucker Carlson, Trump aide Larry Kudlow, and Republican congressman Matt Gaetz urged Trump to grant clemency to Stone, as did Republican Senator Lindsey Graham. Other Trump advisers, including chief of staff Mark Meadows, son-in-law and senior adviser Jared Kushner, and White House Counsel Pat A. Cipollone, were concerned about granting clemency to Stone, viewing it as a political liability for Trump.
On July 10, 2020, Trump commuted Stone's sentence, entirely removing his jail time a few days before he was to report to prison. Trump personally called Stone to inform him that his sentence was being commuted. In a lengthy statement containing an array of grievances, Trump attacked the prosecutors as "overzealous" and said, "Roger Stone has already suffered greatly. He was treated very unfairly, as were many others in this case. Roger Stone is now a free man!" The White House statement made multiple claims regarding Stone's prosecution and the Mueller investigation. The commutation was announced late on a Friday evening, a common time for the release of prospectively damaging news. Stone's commutation followed a number of occasions in which Trump granted executive clemency to his supporters or political allies, or following personal appeals or campaigns in conservative media, as in the cases of Rod Blagojevich, Michael Milken, Joe Arpaio, Dinesh D'Souza, and Clint Lorance, as well as Bernard Kerik. Trump's grant of clemency to Stone, however, marked "the first figure directly connected to the president's campaign to benefit from his clemency power." On July 15, 2020, counsel for two constitutional law professors sought leave of Judge Jackson to file an amicus brief addressing whether the commutation "may not be constitutionally valid". Judge Jackson denied their motion on July 30, saying that the matter was no longer in her court, so she lacked jurisdiction.
In rare public comments, former special counsel Robert Mueller forcefully rebutted Trump's claims in an op-ed in The Washington Post. Democrats condemned Trump's commutation of Stone's sentence, viewing it as an abuse of the rule of law that distorted the U.S. justice system to protect Trump's friends and undermine Trump's rivals. Representatives Jerrold Nadler and Carolyn B. Maloney, who chair two House committees, said that "No other president has exercised the clemency power for such a patently personal and self-serving purpose" and said that they would investigate whether Stone's commutation was a reward for protecting Trump. Most Republican elected officials remained silent on Trump's commutation of Stone. Exceptions were Republican Senators Mitt Romney, who termed the commutation "unprecedented, historic corruption," and Pat Toomey, who called the commutation a "mistake" due in part to the severity of the crimes of which Stone was convicted.
On December 23, 2020, President Trump issued a full pardon to Stone.
2021 storming of the United States Capitol
After Trump's loss in the November 2020 U.S. presidential election, Stone urged followers to "fight until the bitter end". He appeared at the "Stop the Steal" rally at Freedom Plaza on January 5, telling the crowd that the president's enemies sought "nothing less than the heist of the 2020 election and we say, No way!" and that "we will win this fight or America will step off into a thousand years of darkness. We dare not fail. I will be with you tomorrow shoulder to shoulder."
On November 22, 2021, the House Select Committee on the January 6 Attack subpoenaed Stone and Alex Jones for testimony and documents by December 17 and 6, respectively.
On December 23, 2021, Stone urged a judge to dismiss a lawsuit filed against him by eight Capitol Police officers, alleging that he is responsible for inciting a crowd of former President Donald Trump's supporters to riot on January 6, 2021.
Federal civil suit
In April 2021, the Justice Department filed a civil suit against Stone and his wife to recover about $2 million in alleged unpaid federal taxes, asserting they had used a commercial entity to shield their income and fund their personal expenses.
Books and other writings
Since 2010, Stone has been an occasional contributor to the conservative website The Daily Caller, serving as a "male fashion editor". Stone also writes for his own fashion blog, Stone on Style.
Stone has written several books, all published by Skyhorse Publishing of New York City. His books have been described as "hatchet jobs" by the Miami Herald and Tampa Bay Times.
The Man Who Killed Kennedy: The Case Against LBJ (with Mike Colapietro contributing) (Skyhorse Publishing, 2013): Stone contends that Lyndon B. Johnson was behind a conspiracy to kill John F. Kennedy and was complicit in at least six other murders. In a review for The Washington Times, Hugh Aynesworth wrote: "The title pretty much explains the book's theory. If a reader doesn't let facts get in the way, it could be an interesting adventure." Aynesworth, who covered the assassination for the Dallas Morning News, said that the book "is totally full of all kinds of crap".
Nixon's Secrets: The Rise, Fall and Untold Truth about the President, Watergate, and the Pardon (Skyhorse Publishing, 2014): Stone discusses Richard Nixon and his career. About two-thirds of the book "is a conventional biography that is by no means a whitewash of Nixon. Stone writes that the President took campaign money from the mob, had a long-running affair with a Hong Kong woman who may have been a Chinese spy, and even once unwittingly smuggled three pounds of marijuana into the United States when carrying the suitcase of jazz great Louis Armstrong." The remaining one-third of the book is an unconventional account of the Watergate scandal. Stone portrays Nixon as a "confused victim" and claims that John Dean orchestrated the break-in (which he depicts as ordinary politics of the time) to cover up involvement in a prostitution ring. This account is rejected by experts, such as Watergate researchers Anthony Summers and Max Holland. Holland said of Stone: "He's out of his ever-lovin' mind." Dean said in 2014 that Stone's book and his defense of Nixon are "typical of the alternative universe out there" and "pure bullshit".
The Clintons' War on Women (with Robert Morrow of Austin, Texas) (Skyhorse Publishing, 2015): This book, according to Politico, is a "sensational" work that contains "explosive, but highly dubious, revelations about both Bill Clinton and Hillary Clinton", with a focus on Bill Clinton sexual misconduct allegations, and a claim that Webster Hubbell is the biological father of Chelsea Clinton. This book was promoted by Trump, who posted a Twitter message containing the book's Amazon.com page. David Corn, writing in Mother Jones, writes that the book is "apparently designed to smear the Clintons – by depicting Bill as a serial rapist, Hillary as an enabler, and both members of the power couple as a diabolical duo bent on destroying anyone who stands in their way" and said that the book was part of a wider "extreme anti-Clinton project" by Stone.
Jeb! and the Bush Crime Family (with Saint John Hunt) (Skyhorse Publishing, 2016): The book focuses on Jeb Bush and the Bush family.
The Making of the President 2016: How Donald Trump Orchestrated a Revolution (Skyhorse Publishing, 2017): Susan J. McWilliams, Professor of Politics at Pomona College, wrote in her review of the book that "[a]side from some minor revelations about how long Trump planned what would later appear to be spontaneous decisions (he trademarked the slogan "Make America Great Again" in 2013), there's very little Trump, doing very little orchestrating, in these pages" and that "[t]here are many provocative political musings here, but they get lost in Stone's avaricious appetite for self-promotion and grudge-holding."
Stone's Rules: How to Win at Politics, Business, and Style (Skyhorse Publishing, 2018)
The Myth of Russian Collusion: The Inside Story of How Donald Trump REALLY Won (Skyhorse Publishing, 2019) (paperback edition of Stone's 2016 book The Making of the President 2016 with an added "Introduction 2019")
Personal style and habits
Stone's personal style has been described as flamboyant. In a 2007 Weekly Standard profile written by Matt Labash, Stone was described as a "lord of mischief" and the "boastful black prince of Republican sleaze". Labash wrote that Stone "often sets his pronouncements off with the utterance 'Stone's Rules,' signifying to listeners that one of his shot-glass commandments is coming down, a pithy dictate uttered with the unbending certitude one usually associates with the Book of Deuteronomy." Examples of Stone's Rules include "Politics with me isn't theater. It's performance art, sometimes for its own sake."
Stone does not wear socks, a fact that Nancy Reagan brought to her husband's attention during his 1980 presidential campaign. Labash described him as "a dandy by disposition who boasts of having not bought off-the-rack since he was 17", who has "taught reporters how to achieve perfect double-dimples underneath their tie knots". Washington journalist Victor Gold has noted Stone's reputation as one of the "smartest dressers" in Washington. Stone's longtime tailor is Alan Flusser. Stone dislikes single-vent jackets (describing them as the sign of a "heathen"); says he owns 100 silver-colored neckties; and has 100 suits in storage. Fashion stories have been written about him in GQ and Penthouse. Stone has written of his dislike for jeans and ascots and has praised seersucker three-piece suits, as well as Madras jackets in the summertime and velvet blazers in the winter.
In 1999, Stone credited his facial appearance to "decades of following a regimen of Chinese herbs, breathing therapies, tai chi, and acupuncture." Stone wears a diamond pinky ring in the shape of a horseshoe and in 2007 he had Richard Nixon's face tattooed on his back. He has said: "I like English tailoring, I like Italian shoes. I like French wine. I like vodka martinis with an olive, please. I like to keep physically fit." Stone's office in Florida has been described as a "Hall of Nixonia" with framed pictures, posters, and letters associated with Nixon.
See also
Criminal charges brought in the Special Counsel investigation (2017–2019)
Links between Trump associates and Russian officials
List of people granted executive clemency by Donald Trump
Timeline of Russian interference in the 2016 United States elections
Notes
References
External links
1952 births
Living people
21st-century American criminals
1972 United States presidential election
American conspiracy theorists
American lobbyists
American people of Hungarian descent
American politicians of Italian descent
American political consultants
Donald Trump 2016 presidential campaign
Florida Libertarians
George Washington University alumni
InfoWars people
John F. Kennedy conspiracy theorists
John Jay High School (Cross River, New York) alumni
Members of the Committee for the Re-Election of the President
Members of the Libertarian Party (United States)
New York (state) Libertarians
New York (state) Republicans
People associated with Russian interference in the 2016 United States elections
People associated with the 2016 United States presidential election
People convicted of making false statements
People convicted of obstruction of justice
People from Lewisboro, New York
People from Norwalk, Connecticut
Recipients of American presidential clemency
Recipients of American presidential pardons
Researchers of the assassination of John F. Kennedy
Watergate scandal
|
10520936
|
https://en.wikipedia.org/wiki/Ural%20%28computer%29
|
Ural (computer)
|
Ural () is a series of mainframe computers built in the former Soviet Union.
History
The Ural was developed at the Electronic Computer Producing Manufacturer of Penza in the Soviet Union and was produced between 1956 and 1964. The computer was widely used in the 1960s, mainly in the socialist countries, though some were also exported to Western Europe and Latin America. The Indian Statistical Institute purchased an Ural-1 in 1958.
When the University of Tartu received a new computer, its old computer, the Ural-1, was moved to a science-based secondary school, the Nõo Reaalgümnaasium. That event took place in 1965 and made the Nõo Reaalgümnaasium one of the first secondary schools in the Soviet Union to own a computer. The name of the computer was also used to coin raal, the first Estonian word for "computer", which remained in use until the 1990s, when it was replaced by arvuti. School 444 in Moscow, Russia, began graduating programmers in 1960 and by 1965 had a Ural computer on its premises, operated by its students.
Attributes
Models Ural-1 to Ural-4 were based on vacuum tubes (valves), with the hardware able to perform 12,000 floating-point calculations per second. One word consisted of 40 bits and could contain either one numeric value or two instructions. Ferrite core memory was used as main memory beginning with the Ural-2. A new series (Ural-11, Ural-14, produced between 1965 and 1971) was based on semiconductors.
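The two-instructions-per-word layout can be illustrated with a short sketch, assuming each instruction occupied exactly half of the 40-bit word (the exact field split is an assumption for illustration, not documented Ural instruction encoding):

```python
WORD_BITS = 40
HALF = WORD_BITS // 2  # two instructions share one word, 20 bits each

def pack_instructions(a, b):
    """Pack two 20-bit instruction fields into a single 40-bit word."""
    assert 0 <= a < (1 << HALF) and 0 <= b < (1 << HALF)
    return (a << HALF) | b

def unpack_instructions(word):
    """Recover the two instruction fields from a 40-bit word."""
    return word >> HALF, word & ((1 << HALF) - 1)

word = pack_instructions(0x12345, 0x0ABCD)
assert word < (1 << WORD_BITS)
assert unpack_instructions(word) == (0x12345, 0x0ABCD)
```

Alternatively, the same 40-bit word could hold a single numeric value, as the text notes.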
The Ural could perform mathematical tasks at computer centres, industrial facilities and research facilities. The machine occupied approximately 90-100 square metres of floor space. It ran on three-phase electric power and had a three-phase magnetic voltage stabiliser with a 30 kVA capacity.
The main units of the system were: keyboard, controlling-reading unit, input punched tape, output punched tape, printer, magnetic tape memory, ferrite memory, ALU (arithmetical logical unit), CPU (central processing unit), and power supply.
Models
Several models were released:
Ural-1 – 1956
Ural-2 – 1959
Ural-3 – 1964
Ural-4 – 1962
Ural-11 – 1965
Ural-14 – 1965
Ural-16 – 1969
Trivia
Charles Simonyi, who was the second Hungarian in space, stated that he would take old paper tapes from his Soviet-built Ural-2 computer into space with him: he kept them to remind him of his past.
See also
Bashir Rameev, chief designer of the Ural series
History of computing hardware
List of vacuum tube computers
References
External links
Soviet inventions
Soviet brands
Ministry of Radio Industry (USSR) computers
|
40914533
|
https://en.wikipedia.org/wiki/Vicarious%20%28company%29
|
Vicarious (company)
|
Vicarious is an artificial intelligence company based in the San Francisco Bay Area, California. It uses the theorized computational principles of the brain to build software that can think and learn like a human.
Founders
The company was founded in 2010 by D. Scott Phoenix and Dileep George. Before co-founding Vicarious, Phoenix was Entrepreneur in Residence at Founders Fund and CEO of Frogmetrics, a touchscreen analytics company he co-founded through the Y Combinator incubator program. Previously, George was Chief Technology Officer at Numenta, a company he co-founded with Jeff Hawkins and Donna Dubinsky (PALM, Handspring) while completing his PhD at Stanford University.
Funding
The company launched in February 2011 with funding from Founders Fund, Dustin Moskovitz, Adam D’Angelo (former Facebook CTO and co-founder of Quora), Felicis Ventures, and Palantir co-founder Joe Lonsdale. In August 2012, in its Series A round of funding, it raised an additional $15 million. The round was led by Good Ventures; Founders Fund, Open Field Capital and Zarco Investment Group also participated.
The company received $40 million in its Series B round of funding. The round was led by such notables as Mark Zuckerberg, Elon Musk, Peter Thiel, Vinod Khosla, and Ashton Kutcher. An additional undisclosed amount was later contributed by Amazon.com CEO Jeff Bezos, Yahoo! co-founder Jerry Yang, Skype co-founder Janus Friis and Salesforce.com CEO Marc Benioff.
Recursive Cortical Network
Vicarious is developing machine learning software based on the computational principles of the human brain. One such piece of software is a vision system known as the Recursive Cortical Network (RCN), a generative graphical visual perception system that interprets the contents of photographs and videos in a manner similar to humans. The system takes a balanced approach that weighs sensory data, mathematics, and biological plausibility.
On October 22, 2013, Vicarious announced that its model could reliably solve modern CAPTCHAs, with character recognition rates of 90% or better when trained on a single style. However, Luis von Ahn, a pioneer of early CAPTCHA and founder of reCAPTCHA, expressed skepticism, stating: "It's hard for me to be impressed since I see these every few months." He pointed out that 50 similar claims had been made since 2003. Vicarious later published its findings in the peer-reviewed journal Science.
Vicarious has indicated that its AI was not specifically designed to complete CAPTCHAs and its success at the task is a product of its advanced vision system. Because Vicarious's algorithms are based on insights from the human brain, it is also able to recognize photographs, videos, and other visual data.
See also
Artificial intelligence
Glossary of artificial intelligence
References
External links
Applied machine learning
Software companies based in the San Francisco Bay Area
Software companies of the United States
Software companies established in 2010
Computer vision
American companies established in 2010
2010 establishments in California
|
33702099
|
https://en.wikipedia.org/wiki/Yellowstone%20%28supercomputer%29
|
Yellowstone (supercomputer)
|
Yellowstone was the inaugural supercomputer at the NCAR-Wyoming Supercomputing Center (NWSC) in Cheyenne, Wyoming. It was installed, tested, and readied for production in the summer of 2012. The Yellowstone supercomputing cluster was decommissioned on December 31, 2017, being replaced by its successor Cheyenne.
Yellowstone was a highly capable petascale system designed for conducting breakthrough scientific research in the interdisciplinary field of Earth system science. Scientists used the computer and its associated resources to model and analyze complex processes in the atmosphere, oceans, ice caps, and throughout the Earth system, accelerating scientific research in climate change, severe weather, geomagnetic storms, carbon sequestration, aviation safety, wildfires, and many other topics. Funded by the National Science Foundation and the State and University of Wyoming, and operated by the National Center for Atmospheric Research, Yellowstone's purpose was to improve the predictive power of Earth system science simulation to benefit decision-making and planning for society.
System description
Yellowstone was a 1.5-petaflops IBM iDataPlex cluster computer with 4,536 dual-socket compute nodes containing 9,072 2.6-GHz Intel Xeon E5-2670 8-core processors (72,576 cores in total); its aggregate memory size was 145 terabytes. The nodes were interconnected in a full fat tree network via a Mellanox FDR InfiniBand switching fabric. System software included the Red Hat Enterprise Linux operating system for Scientific Computing, the LSF batch subsystem and resource manager, and the IBM General Parallel File System (GPFS).
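The quoted core count and peak rate are consistent with the per-node figures; a quick arithmetic check (the 8 double-precision flops per cycle per core is an assumption based on the Sandy Bridge AVX pipeline, not a figure from the text):

```python
nodes = 4536
cores = nodes * 2 * 8            # dual-socket nodes, 8-core E5-2670 processors
assert cores == 72_576           # matches the quoted total

flops_per_core = 2.6e9 * 8       # 2.6 GHz x 8 DP flops/cycle (AVX, assumption)
peak_pflops = cores * flops_per_core / 1e15
print(f"{peak_pflops:.2f} PF")   # ~1.51 PF, consistent with the quoted 1.5 petaflops
```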
Yellowstone was integrated with many other high-performance computing resources at the NWSC. The central feature of this supercomputing architecture was its shared file system, which streamlined science workflows by providing computation, analysis, and visualization work spaces common to all resources. This common data storage pool, called the GLobally Accessible Data Environment (GLADE), provided 36.4 petabytes of online disk capacity shared by the supercomputer, two data analysis and visualization (DAV) cluster computers (Geyser and Caldera), data servers for both local and remote users, and a data archive with the capacity to store 320 petabytes of research data. High-speed networks connected this Yellowstone environment to science gateways, data transfer services, remote visualization resources, Extreme Science and Engineering Discovery Environment (XSEDE) sites, and partner sites around the world.
This integration of computing resources, file systems, data storage, and broadband networks allowed scientists to simulate future geophysical scenarios at high resolution, then analyze and visualize them on one computing complex. This improved scientific productivity by avoiding the delays associated with moving large quantities of data between separate systems, and it reduced the volume of data that needed to be transferred to researchers at their home institutions. The Yellowstone environment at NWSC made more than 600 million processor-hours available each year to researchers in the Earth system sciences.
See also
Supercomputer architecture
Supercomputer operating systems
References
External links
"Yellowstone"
"Wyoming supercomputer moves in"
"Supercomputer will help researchers map climate change down to the local level"
"Yellowstone Super First to Crunch Local Climate Models"
"NCAR's Data-Centric Supercomputing Environment: Yellowstone"
"IBM Yellowstone Supercomputer To Study Climate Change"
"IBM Installs Sandy Bridge EP Supercomputer for NCAR"
"IBM working on NCAR supercomputer."
"U.S. weather boffins tap IBM for 1.6 petaflops super"
"NCAR-Wyoming Supercomputing Center website"
"NCAR-Wyoming Supercomputing Center Fact Sheet"
"NCAR-Wyoming Supercomputing Center - UW website"
X86 supercomputers
IBM supercomputers
iDataPlex supercomputers
|
36277736
|
https://en.wikipedia.org/wiki/Google%20Compute%20Engine
|
Google Compute Engine
|
Google Compute Engine (GCE) is the Infrastructure as a Service (IaaS) component of Google Cloud Platform which is built on the global infrastructure that runs Google's search engine, Gmail, YouTube and other services. Google Compute Engine enables users to launch virtual machines (VMs) on demand. VMs can be launched from the standard images or custom images created by users. GCE users must authenticate based on OAuth 2.0 before launching the VMs. Google Compute Engine can be accessed via the Developer Console, RESTful API or command-line interface (CLI).
History
Google announced Compute Engine on June 28, 2012 at Google I/O 2012 in a limited preview mode. In April 2013, GCE was made available to customers with the Gold Support Package. On February 25, 2013, Google announced that RightScale was its first reseller. During Google I/O 2013, many features were announced, including sub-hour billing, shared-core instance types, larger persistent disks, enhanced SDN-based networking capabilities and ISO/IEC 27001 certification. GCE became available to everyone on May 15, 2013. Layer 3 load balancing came to GCE on August 7, 2013. On December 2, 2013, Google announced that GCE was generally available; it also expanded OS support, enabled live migration of VMs, added 16-core instances and faster persistent disks, and lowered the price of standard instances.
At the Google Cloud Platform Live event on March 25, 2014, Urs Hölzle, Senior VP of technical infrastructure announced sustained usage discounts, support for Microsoft Windows Server 2008 R2, Cloud DNS and Cloud Deployment Manager. On May 28, 2014, Google announced optimizations for LXC containers along with dynamic scheduling of Docker containers across a fleet of VM instances.
Google Compute Engine Unit
Google Compute Engine Unit (GCEU), pronounced "GQ", is an abstraction of computing resources. According to Google, 2.75 GCEUs represent the minimum power of one logical core (a hardware hyper-thread) on the Sandy Bridge platform. The GCEU was created by Anthony F. Voellm out of a need to compare the performance of virtual machines offered by Google. It is approximated by the CoreMark benchmark, run as part of the open-source PerfKitBenchmarker suite created by Google in partnership with many cloud providers.
Persistent disks
Every Google Compute Engine instance starts with a disk resource called a persistent disk. The persistent disk provides the disk space for an instance and contains the root filesystem from which the instance boots. Persistent disks can be used as raw block devices. By default, Google Compute Engine uses SCSI for attaching persistent disks. Persistent disks provide straightforward, consistent and reliable storage at a consistent and reliable price, removing the need for a separate local ephemeral disk. Persistent disks must be created before launching an instance. Once attached to an instance, they can be formatted with a native filesystem. A single persistent disk can be attached to multiple instances in read-only mode. Each persistent disk can be up to 10 TB in size. Google Compute Engine encrypts persistent disks with AES-128-CBC, and this encryption is applied before the data leaves the virtual machine monitor and hits the disk. Encryption is always enabled and is transparent to Google Compute Engine users. The integrity of persistent disks is maintained via an HMAC scheme.
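The HMAC-based integrity check mentioned above can be sketched generically; the key handling and choice of SHA-256 here are purely illustrative, since the actual keys and algorithms are internal to Google:

```python
import hashlib
import hmac
import os

key = os.urandom(32)                      # per-disk key (illustrative only)
block = b"persistent disk block contents"

# On write: store an authentication tag alongside the (encrypted) block.
tag = hmac.new(key, block, hashlib.sha256).digest()

# On read: recompute the tag and compare in constant time; a mismatch
# would indicate the block was corrupted or tampered with.
assert hmac.compare_digest(tag, hmac.new(key, block, hashlib.sha256).digest())
assert not hmac.compare_digest(tag, hmac.new(key, b"tampered", hashlib.sha256).digest())
```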
On June 18, 2014, Google announced support for SSD persistent disks. These disks deliver up to 30 IOPS per GB which is 20x more write IOPS and 100x more read IOPS than the standard persistent disks.
Images
An image is a persistent disk that contains the operating system and root file system that is necessary for starting an instance. An image must be selected while creating an instance or during the creation of a root persistent disk. By default, Google Compute Engine installs the root filesystem defined by the image on a root persistent disk. Google Compute Engine provides CentOS and Debian images as standard Linux images. Red Hat Enterprise Linux (RHEL) and Microsoft Windows Server 2008 R2 images are a part of the premier operating system images which are available for an additional fee. Container Linux (formerly CoreOS), the lightweight Linux OS based on Chromium OS is also supported on Google Compute Engine.
Machine types
Google Compute Engine uses KVM as the hypervisor, and supports guest images running Linux and Microsoft Windows which are used to launch virtual machines based on the 64 bit x86 architecture. VMs boot from a persistent disk that has a root filesystem. The number of virtual CPUs, amount of memory supported by the VM is dependent on the machine type selected.
Billing and discounts
Google Compute Engine offers sustained use discounts. Once an instance is run for over 25% of a billing cycle, the price starts to drop:
If an instance is used for 50% of the month, one will get a 10% discount over the on-demand prices
If an instance is used for 75% of the month, one will get a 20% discount over the on-demand prices
If an instance is used for 100% of the month, one will get a 30% discount over the on-demand prices
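The published tiers can be read as a simple step function. This is an illustrative simplification: Google's actual billing applied incremental per-tier rates rather than a single flat discount.

```python
def net_discount(usage_fraction):
    """Net discount over on-demand pricing at the published usage levels."""
    if usage_fraction >= 1.00:
        return 0.30
    if usage_fraction >= 0.75:
        return 0.20
    if usage_fraction >= 0.50:
        return 0.10
    return 0.0  # below 50% usage the table above lists no net discount figure

# A VM priced at $100/month on demand, running the entire month:
print(100 * (1 - net_discount(1.0)))  # 70.0
```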
Machine type comparison
Google provides certain types of machine:
Standard machine: 3.75 GB of RAM per virtual CPU
High-memory machine: 6.5 GB of RAM per virtual CPU
High-CPU machine: 0.9 GB of RAM per virtual CPU
Shared machine: CPU and RAM are shared between customers
Memory-optimized machine: greater than 14 GB RAM per vCPU.
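The per-vCPU memory ratios above translate directly into total instance memory. A small sketch (the family names and the 4-vCPU example are illustrative; shared and memory-optimized machines do not follow a single fixed ratio and are omitted):

```python
RAM_PER_VCPU_GB = {
    "standard": 3.75,
    "high-memory": 6.5,
    "high-cpu": 0.9,
}

def total_ram_gb(family, vcpus):
    """Total instance memory implied by the family's per-vCPU ratio."""
    return RAM_PER_VCPU_GB[family] * vcpus

print(total_ram_gb("standard", 4))     # 15.0
print(total_ram_gb("high-memory", 8))  # 52.0
```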
The prices mentioned below are based on running standard Debian or CentOS Linux virtual machines (VMs). VMs running proprietary operating systems will be charged more.
Resources
Compute Engine connects various entities called resources that will be a part of the deployment. Each resource performs a different function. When a virtual machine instance is launched, an instance resource is created that uses other resources, such as disk resources, network resources and image resources. For example, a disk resource functions as data storage for the virtual machine, similar to a physical hard drive, and a network resource helps regulate traffic to and from the instances.
Image
An image resource contains an operating system and root file system necessary for starting the instance. Google maintains and provides images that are ready-to-use or users can customize an image and use that as an image of choice for creating instances. Depending on the needs, users can also apply an image to a persistent disk and use the persistent disk as the root file system.
Machine type
An instance's machine type determines the number of cores, the memory, and the I/O operations supported by the instance.
Disk
Persistent disks are independent of the virtual machines and outlive an instance's lifespan. All information stored on the persistent disks is encrypted before being written to physical media, and the keys are tightly controlled by Google.
Each instance can attach only a limited amount of total persistent disk space (one can have up to 64 TB on most instances) and a limited number of individual persistent disks (one can attach up to 16 independent persistent disks to most instances).
Regional persistent disks can be replicated between two zones in a region for higher availability.
Snapshot
Persistent disk snapshots lets the users copy data from existing persistent disk and apply them to new persistent disks. This is especially useful for creating backups of the persistent disk data in cases of unexpected failures and zone maintenance events.
Instance
A Google Compute Engine instance is a virtual machine running on a Linux or Microsoft Windows configuration. Users can choose to modify the instances including customizing the hardware, OS, disk, and other configuration options.
Network
A network defines the address range and gateway address of all instances connected to it. It defines how instances communicate with each other, with other networks, and with the outside world. Each instance belongs to a single network and any communication between instances in different networks must be through a public IP address.
A Cloud Platform Console project can contain multiple networks, and each network can have multiple instances attached to it. A network allows users to define a gateway IP and the network range for the instances attached to that network. By default, every project is provided with a default network with preset configurations and firewall rules. Users can customize the default network by adding or removing rules, or create new networks in the project. Generally, most users need only one network, although a project can have up to five networks by default.
A network belongs to only one project, and each instance can only belong to one network. All Compute Engine networks use the IPv4 protocol. Compute Engine currently does not support IPv6. However, Google is a major advocate of IPv6 and it is an important future direction.
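How a network's IPv4 range and gateway define instance addressing can be shown with the standard library; the 10.240.0.0/16 range and first-usable-host gateway convention are assumptions for illustration:

```python
import ipaddress

net = ipaddress.ip_network("10.240.0.0/16")  # illustrative network range
gateway = net.network_address + 1            # first usable host as the gateway

print(net.num_addresses)   # 65536 addresses in a /16
print(gateway)             # 10.240.0.1

# An instance address inside the range belongs to this network;
# anything outside must be reached via a public IP.
assert ipaddress.ip_address("10.240.5.7") in net
assert ipaddress.ip_address("192.168.1.1") not in net
```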
Address
When an instance is created, an ephemeral external IP address is automatically assigned to the instance by default. This address is attached to the instance for the life of the instance and is released once the instance has been terminated. GCE also provides mechanism to reserve and attach static IPs to the VMs. An ephemeral IP address can be promoted to a static IP address.
Firewall
A firewall resource contains one or more rules that permit connections into instances. Every firewall resource is associated with one and only one network. It is not possible to associate one firewall with multiple networks. No communication is allowed into an instance unless a firewall resource permits the network traffic, even between instances on the same network.
Route
Google Compute Engine offers a routing table to manage how traffic destined for a certain IP range should be routed. Similar to a physical router in the local area network, all outbound traffic is compared to the routes table and forwarded appropriately if the outbound packet matches any rules in the routes table.
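The routes-table matching described above amounts to longest-prefix matching, which a short sketch makes concrete (the table entries and next-hop names are illustrative):

```python
import ipaddress

routes = [  # (destination range, next hop) - illustrative table
    (ipaddress.ip_network("10.240.0.0/16"), "local-network"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-internet-gateway"),
]

def next_hop(dst):
    """Forward to the most specific matching route, as a physical router would."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routes if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

assert next_hop("10.240.1.5") == "local-network"          # internal traffic
assert next_hop("8.8.8.8") == "default-internet-gateway"  # everything else
```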
Regions and zones
A region refers to a geographic location of Google's infrastructure facilities. Users can choose to deploy their resources in one of the available regions based on their requirements. As of June 1, 2014, Google Compute Engine is available in the central US, Western Europe and Asia East regions.
A zone is an isolated location within a region. Zones have high-bandwidth, low-latency network connections to other zones in the same region. In order to deploy fault-tolerant applications with high availability, Google recommends deploying applications across multiple zones in a region. This helps protect against unexpected failures of components, up to and including a single zone. As of August 5, 2014, there are eight zones: three each in the central US and Asia East regions, and two in the Western Europe region.
Scope of resources
All resources within GCE belong to the global, regional, or zonal plane. Global resources are accessible from all the regions and zones. For example, images are a global resource so users can launch a VM in any region based on a global image. But an address is a regional resource that is available only to the instances launched in one of the zones within the same region. Instances are launched in a specific zone that requires the zone specification as a part of all requests made to that instance.
The table below summarises the scope of GCE resources:
Features
Billing and pricing model
Google charges VMs for a minimum of 10 minutes. After the 10th minute, instances are charged in 1-minute increments, rounded up to the nearest minute. Sustained-use pricing credits discounts to customers based on monthly utilisation; users need not pay an upfront commitment fee to get discounts on the regular, on-demand pricing.
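The rounding rules translate into a one-line billing function:

```python
import math

def billed_minutes(runtime_minutes):
    """10-minute minimum charge, then 1-minute increments rounded up."""
    return max(10, math.ceil(runtime_minutes))

assert billed_minutes(3.5) == 10    # under the 10-minute minimum
assert billed_minutes(10.2) == 11   # rounded up to the next whole minute
assert billed_minutes(45) == 45
```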
VM performance
Compute Engine VMs boot within 30 seconds, which is considered 4-10x faster than the competition.
Disk performance
The persistent disks of Compute Engine deliver higher IOPS consistently. With the cost of provisioned IOPS included within the cost of storage, users need not pay separately for the IOPS.
Global scope for images and snapshots
Images and disk snapshots belong to the global scope which means they are implicitly available across all the regions and zones of Google Cloud Platform. This avoids the need for exporting and importing images and snapshots between regions.
Transparent maintenance
During the scheduled maintenance of Google data center, Compute Engine can automatically migrate the VMs from one host to the other without involving any action from the users. This delivers better uptime to applications.
References
External links
Cloud computing
Cloud computing providers
Cloud infrastructure
Cloud platforms
Compute Engine
Web services
Computer-related introductions in 2012
|
12783542
|
https://en.wikipedia.org/wiki/MEHARI
|
MEHARI
|
MEHARI (MEthod for Harmonized Analysis of RIsk) is a free, open-source information risk analysis assessment and risk management method, for the use of information security professionals.
MEHARI enables business managers, information security/risk management professionals and other stakeholders to evaluate and manage the organization's risks relating to information, information systems and information processes (not just IT). It is designed to align with and support information security risk management according to ISO/IEC 27005, particularly in the context of an ISO/IEC 27001-compliant Information Security Management System (ISMS) or a similar overarching security management or governance framework.
History
MEHARI has steadily evolved since the mid-1990s to support standards such as ISO/IEC 27001, ISO/IEC 27002, ISO/IEC 27005 and NIST's SP 800-30.
The current version of MEHARI Expert (2010) includes links and support for ISO 27001/27002:2013 revision ISMS.
Description
MEHARI Expert (2010) combines a powerful and extensible knowledge base with a flexible suite of tools supporting the following information security risk analysis and management activities:
Threat analysis: top business managers describe the organization's activities, list the potential issues or concerns that might adversely affect those activities, and assign values to the business impacts.
The business processes are analyzed further in order to identify and map out the associated organizational, human and technical assets.
The assets are classified according to three classic security criteria (confidentiality, integrity, availability) plus the need for compliance to applicable laws and regulations (e.g. to protect personal information or the environment).
The intrinsic likelihood/probability of representative threat event types is considered.
These elements are combined automatically to analyze and assess the intrinsic severity of risks (based on 800 'scenarios' in the knowledge base), highlighting the most critical and serious ones according to the projected business consequences.
Diagnostic questionnaires help users evaluate the ability of their existing information security measures/controls to mitigate risks.
Security measures (organizational and technical) are grouped into services for discussion with the relevant managers and professionals.
The current severity level of each risk scenario is displayed, taking account of the effectiveness of existing security measures, giving an indication of the current information security risk landscape and suggesting the prioritization of remedial work.
Action plans and security projects can be selected to manage the risks, based on the expected effectiveness of additional security measures and the timescales for their implementation. The preceding analysis enables management to appreciate the business benefits of, and hence justify, appropriate investment in information security: the entire process is business-driven.
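The scoring logic behind these steps can be illustrated with a toy example. The scales and the reduction rule below are invented for illustration only and do not reproduce the actual MEHARI knowledge-base tables:

```python
def intrinsic_severity(impact: int, likelihood: int) -> int:
    """Toy intrinsic severity of a risk scenario.
    impact and likelihood on illustrative 1-4 scales."""
    return impact * likelihood

def residual_severity(impact: int, likelihood: int,
                      control_effectiveness: int) -> int:
    """Toy residual severity: existing security measures reduce the
    likelihood. control_effectiveness runs 0 (none) to 3 (strong);
    both the scale and the subtraction rule are hypothetical."""
    reduced = max(1, likelihood - control_effectiveness)
    return impact * reduced
```

In this sketch, a scenario with high impact and moderate likelihood keeps its full severity when no controls exist, and drops sharply once effective controls are in place, mirroring the prioritization step described above.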
MEHARI Expert (2010)'s comprehensive knowledge base, built in Excel, is available in both English and French as an interactive tool, or more accurately a suite of tools that can be used individually but are designed as a coherent whole. As the process proceeds, the knowledge base automatically expands with the information obtained, providing inputs for subsequent steps. Consistent analysis of the risks and controls enables large, diverse organizations to compare and contrast operating units on an even footing.
Additional applications and tools, based on the same principles, may be developed under Creative Commons license.
See also
Attack (computing)
Computer security
Information security
Information security management system
IT risk
Methodology
Threat (computer)
Vulnerability (computing)
References
home page
for MEHARI tool download
guides
External links
ENISA information on MEHARI
Risk analysis methodologies
|
63429
|
https://en.wikipedia.org/wiki/Atari%208-bit%20family
|
Atari 8-bit family
|
The Atari 8-bit family is a series of 8-bit home computers introduced by Atari, Inc. in 1979 as the Atari 400 and Atari 800 and manufactured until 1992. All of the machines in the family are technically similar and differ primarily in packaging. They are based on the MOS Technology 6502 CPU running at 1.79 MHz, and were the first home computers designed with custom coprocessor chips. This architecture enabled graphics and sound more advanced than contemporary machines, and gaming was a major draw. First-person space combat simulator Star Raiders is considered the platform's killer app. The systems launched with plug-and-play peripherals using the Atari SIO serial bus, an early analog of USB.
The Atari 400 and 800 differ primarily in packaging. The 400 has a pressure-sensitive, spillproof membrane keyboard and initially shipped with 8 KB of RAM. The 800 has a conventional keyboard, a second (rarely used) cartridge slot, and slots that allow easy RAM upgrades to 48 KB. Both models were replaced by the XL series in 1983 and then, after the company was sold and re-established as Atari Corporation, by the XE models in 1985. The XL and XE are lighter in construction, have two joystick ports instead of four, and Atari BASIC is built in. The 130XE has 128 KB of bank-switched RAM.
Two million Atari 8-bit computers were sold during its major production run between late 1979 and mid-1985. They were sold through dedicated computer retailers and department stores, such as Sears, using an in-store demo to attract customers. The primary competition in the worldwide market came when the Commodore 64, with similar graphics performance, was introduced in 1982. In 1992, Atari Corporation officially dropped all remaining support for the 8-bit line.
The "Atari 8-bit family" label was not contemporaneous. Atari, Inc., used the term "Atari 800 [or 400] home computer system", often combining the model names into "Atari 400/800" or simply "Atari home computers".
History
Design of the 8-bit series of machines started at Atari as soon as the Atari Video Computer System was released in late 1977. While designing the VCS in 1976, the engineering team from Atari Grass Valley Research Center (originally Cyan Engineering) felt the system would have a three-year lifespan before becoming obsolete. They started blue sky designs for a new console that would be ready to replace it around 1979.
What they ended up with was essentially a greatly updated version of the VCS, fixing its major limitations but sharing a similar design philosophy. The newer design would be faster, with better graphics and sound hardware. Work on the chips for the new system continued throughout 1978 and focused on a much-improved video coprocessor known as the CTIA (the VCS version was the TIA).
During the early development period, the home computer era began in earnest with the TRS-80, Commodore PET, and Apple II—what Byte magazine dubbed the "1977 Trinity." Nolan Bushnell sold Atari to Warner Communications for $28 million in 1976 in order to raise funds for the launch of the VCS. Warner had recently hired Ray Kassar to act as the CEO of the company. Kassar felt the chipset should be used in a home computer to challenge Apple. To adapt the machine to this role, it needed to support character graphics, some form of expansion for peripherals, and run the then-universal BASIC programming language.
The VCS lacks bitmap graphics and a character generator. All on-screen graphics are created using sprites and a simple background generated by data loaded by the CPU into single-scan-line video registers. The then-Atari engineer Jay Miner developed the multimedia chips for the Atari 8-bit family. The CTIA display chip was designed on the same principle, including sprites and background (playfield) graphics, but to reduce load on the main CPU, the task of loading video registers/buffers on the fly was delegated to a newly designed dedicated graphics microprocessor, the Alphanumeric Television Interface Controller, or ANTIC. The CTIA and ANTIC work together to produce a complete display, with ANTIC fetching and buffering per-scan-line video data from the video frame buffer and sprite memory in RAM, plus character set memory (for character modes), and feeding these data on the fly to the CTIA, which processes the sprite and playfield data in light of its own color, sprite, and graphics handling registers to produce the final color video output.
The resulting system was far in advance of anything then available on the market. Commodore was developing their own video driver in-house at the time, but Chuck Peddle, lead designer of the 6502 used in the VCS and the new machines, saw the Atari work during a visit to Grass Valley. He realized the Commodore design would not be competitive but he was under a strict non-disclosure agreement with Atari, and was unable to tell anyone at Commodore to give up on their own design. Peddle later commented that "the thing that Jay did, just kicked everybody's butt."
Development
Management identified two sweet spots for the new computers: a low-end version known internally as "Candy", and a higher-end machine known as "Colleen" (named after two Atari secretaries). Atari would market Colleen as a computer and Candy as a game machine or hybrid game console. Colleen included user-accessible expansion slots for RAM and ROM, two 8 KB ROM cartridge slots, RF and monitor output (including two pins for separate luma and chroma, allowing a complete S-Video output) and a full keyboard. Candy was initially designed as a game console, lacking a keyboard and input/output ports, although an external keyboard was planned that could be plugged into joystick ports 3 and 4. At the time, plans called for both to have a separate audio port supporting cassette tapes as a storage medium.
A goal for the new systems was user-friendliness. One executive stated, "Does the end user care about the architecture of the machine? The answer is no. 'What will it do for me?' That's his major concern. ... why try to scare the consumer off by making it so he or she has to have a double E or be a computer programmer to utilize the full capabilities of a personal computer?" Cartridges, for example, would in Atari's view make the computers easier to use. To minimize handling of bare circuit boards or chips, as was common with other systems of that period, the computers were designed with enclosed modules for memory and ROM cartridges, with keyed connectors to prevent them being plugged into the wrong slot. The operating system boots automatically, loading drivers from devices on the serial bus (SIO). The DOS system for managing floppy storage was menu-driven. When no software is loaded, rather than leaving the user at a blank screen or machine language monitor, the OS enters "Memo Pad" mode, allowing the user to type using the built-in full-screen editor.
As the design process for the new machines continued, there were questions about what the Candy should be. There was a running argument about whether the keyboard would be external or built in. By the summer of 1978, education had become a focus for the new systems. While the Colleen design was largely complete by May 1978, it was not until early 1979 that the decision was made that Candy would also be a complete computer, but one intended for children. As such, it would feature a new keyboard designed to be resistant to liquid spills.
Atari intended to port Microsoft BASIC to the machine as an 8 KB ROM cartridge. However, the existing 6502 version from Microsoft was around 7,900 bytes, leaving no room for extensions for graphics and sound. The company contracted with local consulting firm Shepardson Microsystems to complete the port. They recommended writing a new version from scratch, resulting in Atari BASIC.
FCC issues
At the time, televisions normally offered only one way to get a signal into them, using the antenna connections on the back of the set. For devices like a computer, the video is generated and then sent to an RF modulator to convert it to antenna-like output. The introduction of many games consoles during this era had led to cases where poorly designed modulators would give off so much signal they would cause interference with other nearby televisions, even in other houses. In response to complaints, the Federal Communications Commission (FCC) introduced new testing standards to reduce these problems. The new standards were extremely exacting and difficult to meet.
Other manufacturers avoided the problem by using built-in composite monitors, as in the Commodore PET and TRS-80. Apple Computer famously left off the modulator, which was sold separately through a third-party company as the Sup'R'Mod, so the computer itself did not have to be tested.
In a July 1977 visit with the engineering staff, a TI salesman presented a new possibility in the form of an inexpensive fibre optic cable with built-in transceivers. During the meeting, Joe Decuir proposed placing an RF modulator on one end, thereby completely isolating any electrical signals so that the computer itself would have no RF components. This would mean the computer itself would not have to meet the FCC requirements, yet users could still attach a television simply by plugging it in. His manager, Wade Tuma, later shot down the idea saying "The FCC would never let us get away with that stunt." Unknown to Atari, TI decided to use Decuir's idea. As Tuma had predicted, the FCC rejected the design and this led to delays in that machine's release. TI ultimately ended up shipping early machines with a custom television as the testing process dragged on.
To meet the off-the-shelf requirement while including internal TV circuitry, the new machines needed to be heavily shielded. Both were built around very strong cast aluminum shields forming a partial Faraday cage, with the various components screwed down onto this internal framework. This resulted in an extremely sturdy computer, at the disadvantage of added manufacturing expense and complexity.
The FCC ruling also made it difficult to have any sizable holes in the case, which would have led to RF leakage. This eliminated expansion slots or cards that communicated with the outside world via their own connectors. Instead, Atari designed the Serial Input/Output (SIO) computer bus, a daisy-chainable system that allowed multiple, auto-configuring devices to connect to the computer through a single shielded connector. The internal slots were reserved for ROM and RAM modules; they did not have the control lines necessary for a fully functional expansion card, nor room to route a cable outside the case to communicate with external devices.
400/800 release
After Atari announced its intent to enter the home computer market in December 1978, the Atari 400 and Atari 800 were presented at the Winter CES in January 1979 and shipped in November of the same year.
The names originally referred to the amount of memory: 4 KB of RAM in the 400 and 8 KB in the 800. By the time they were released, RAM prices had started to fall, so both machines shipped with 8 KB, using 4Kx1 DRAMs. The user-installable RAM modules in the 800 initially had plastic casings, but these proved to cause overheating, so the casings were removed. Later, the expansion cover was held down with screws instead of the easier-to-open plastic latches. The computers eventually shipped with maxed-out RAM: 16 KB and 48 KB, respectively, using 16Kx1 DRAMs.
Both models have four joystick ports, permitting four simultaneous players, but only a few games (such as M.U.L.E.) make use of them all. Paddle controllers are wired in pairs, and eight players can play Super Breakout. The Atari 400, despite its membrane keyboard and single internal ROM slot, outsold the Atari 800 by a 2-to-1 margin. Only one cartridge for the 800's right slot was produced by March 1983, and later machines in the family omitted the slot.
Reception
Creative Computing mentioned the Atari machines in an April 1979 overview of the CES show. Calling Atari "the videogame people", it went on to state they came with "some fantastic educational, entertainment and home applications software". In an August 1979 interview Atari's Peter Rosenthal suggested that demand might be low until the 1980-81 time frame, when he predicted about one million home computers being sold. The April 1980 issue compared the machines with the Commodore PET, focused mostly on the BASIC dialects.
Ted Nelson reviewed the computer in the magazine in June 1980, calling it "an extraordinary graphics box". Describing his and a friend's "shouting and cheering and clapping" during a demo of Star Raiders, Nelson wrote that he was so impressed that "I've been in computer graphics for twenty years, and I lay awake night after night trying to understand how the Atari machine did what it did". He described the machine as "something else" before criticizing the company for a lack of developer documentation. Nelson concluded by stating "The Atari is like the human body - a terrific machine, but (a) they won't give you access to the documentation, and (b) I'd sure like to meet the guy that designed it".
Kilobaud Microcomputing wrote in September 1980 that the Atari 800 "looks deceptively like a video game machine, [but had] the strongest and tightest chassis I have seen since Raquel Welch. It weighs about ten pounds ... The large amount of engineering and design in the physical part of the system is evident". The reviewer also praised the documentation as "show[ing] the way manuals should be done", and the "excellent 'feel'" of the keyboard.
InfoWorld favorably reviewed the 800's performance, graphics, and ROM cartridges, but disliked the documentation and cautioned that the unusual right Shift key location might make the computer "unsuitable for serious word processing". Noting that the amount of software and hardware available for the computer "is no match for that of the Apple II or the TRS-80", the magazine concluded that the 800 "is an impressive machine that has not yet reached its full computing potential".
Follow-up systems
Liz project
Despite planning an extensive advertising campaign for 1980, Atari found competing with microcomputers from market leaders Commodore, Apple, and Tandy difficult. By mid-1981 it had reportedly lost $10 million on sales of $10–13 million from more than 50,000 computers.
In 1982, Atari started the Sweet 8 (or "Liz NY") and Sweet 16 projects to create an upgraded set of machines that were easier to build and less costly to produce. Atari ordered a custom 6502, initially labelled 6502C, but eventually known as SALLY to differentiate it from a standard 6502C. SALLY was incorporated into late-production 400/800 machines, all XL/XE models, and the Atari 5200 and 7800 consoles. SALLY adds logic to disable the clock signal, called HALT. This lets ANTIC shut off the CPU to access the data/address bus, allowing them to coexist.
Like the earlier machines, the Sweet 8/16 was intended to be released in two versions: the 1000 with 16 KB, and the 1000X with 64 KB. To support expansion, similar to the card slots used in the Apple II, the 1000 series also supported the Parallel Bus Interface (PBI), a single expansion slot on the back of the machine. An external chassis could be plugged into the PBI, supporting card slots for further expansion.
1200XL
The original Liz plans were dropped and only one machine using the new design was released. Announced at a New York City press conference on December 13, 1982, the rechristened 1200XL was presented at the Winter CES on January 6–9, 1983. It shipped in March 1983 with 64 KB of RAM, built-in self test, a redesigned keyboard (with four function keys and a HELP key), and redesigned cable port layout.
Announced with a $1000 price, the 1200XL was released at $899. This was $100 less than the announced price of the 800 at its release in 1979, but by this time the 800 was available for much less.
The 1200XL omitted several features, or they were poorly implemented. The PBI expansion connector from the original 1000X design was omitted, making the design rely entirely on the SIO port again. The +12V pin in the SIO port was left unconnected; only +5V power was available which made a few devices stop working. An improved video circuit provided more chroma for a more colorful image, but the chroma line was not connected to the monitor port, the only place that could make use of it. The rearrangement of the ports made some joysticks and cartridges difficult or impossible to use. Changes made to the operating system resulted in compatibility problems with some older software.
It was discontinued in June 1983. There was no PAL version of the 1200XL.
Reception
The press warned that the 1200XL was too expensive. Compute! stated in an early 1983 editorial:
John J. Anderson, writing in Creative Computings Outpost: Atari column, echoed these comments:
Bill Wilkinson, author of Atari BASIC, co-founder of Optimized Systems Software, and columnist for Compute!, in May 1983 criticized the computer's features and price:
Newer XL machines
By this point Atari was involved in what would soon develop into a full-blown price war. Several years earlier, Commodore had been a major calculator vendor, selling designs based on a Texas Instruments (TI) chipset. TI decided to enter the market itself and suddenly raised the prices it charged other vendors, nearly putting Commodore out of business.
When TI introduced the TI-99, Tramiel turned the tables on them by pricing his machines below theirs. A price war ensued, causing a dramatic decline in home computer prices, cutting them by as much as a factor of eight over a period of a few months.
In May 1981, the Atari 800's price was $1,050, but by mid-1983 it was $165 and the 400 was under $150. Although Atari had never been a deliberate target of Tramiel's wrath, the Commodore/TI price war affected the entire market. The timing was particularly bad for Atari; the 1200XL was a flop, and the earlier machines were too expensive to produce to be able to compete at the rapidly falling price points.
A new lineup was announced at the 1983 Summer CES, closely following the original Liz/Sweet concepts. The 600XL was essentially the Liz NY model, and the spiritual replacement for the 400, while the 800XL would replace both the 800 and 1200XL. The machines looked similar to the 1200XL, but were smaller back to front, the 600 being somewhat smaller as it lacked one row of memory chips on the circuit board. The high-end 1400XL added a built-in 300 baud modem and a voice synthesizer, and the 1450XLD also included a built-in double-sided floppy disk drive in an enlarged case, with a slot for a second drive. The machines had Atari BASIC built into the ROM of the computer and the PBI at the back that allowed external expansion.
Atari had difficulty in transitioning manufacturing to Asia after closing its US factory. Originally intended to replace the 1200XL in mid-1983, the new models did not arrive until late that year. Although the 600XL/800XL were well positioned in terms of price and features, during the critical Christmas season they were available only in small numbers while the Commodore 64 was widely available. Brian Moriarty stated in ANALOG Computing that Atari "fail[ed] to keep up with Christmas orders for the 600 and 800XLs", reporting that as of late November 1983 the 800XL had not appeared in Massachusetts stores while 600XL "quantities are so limited that it's almost impossible to obtain".
Although the 800XL would ultimately be the most popular computer sold by Atari, the company was unable to defend its market share, and the ongoing race to the bottom reduced Atari's profits. Prices continued to erode; by November 1983 one toy store chain sold the 800XL for $149.97, $10 above the wholesale price. After losing $563 million in the first nine months of the year, Atari announced that month that prices would rise in January, stating that it "has no intention of participating in these suicidal price wars". The 600XL and 800XL's prices in early 1984 were $50 higher than those of the Commodore VIC-20 and 64, and a rumor stated that the company planned to discontinue hardware and only sell software. Combined with the simultaneous effects of the video game crash of 1983, Atari was soon losing millions of dollars a day. Its owner, Warner Communications, became desperate to sell off the division.
The 1400XL and the 1450XLD had their delivery dates pushed back, and in the end, the 1400XL was cancelled outright, and the 1450XLD so delayed that it would never ship. Other prototypes which never made it to market include the 1600XL, 1650XLD, and 1850XLD. The 1600XL was to have been a dual-processor model capable of running 6502 and 80186 code, while the 1650XLD was a similar machine in the 1450XLD case. These were canceled when James J. Morgan became CEO and wanted Atari to return to its video game roots. The 1850XLD was to have been based on the custom chipset in the Amiga Lorraine (later to become the Commodore Amiga).
Reception
ANALOG Computing, writing about the 600XL in January 1984, stated that "the Commodore 64 and Tandy CoCo look like toys by comparison." The magazine approved of its not using the 1200XL's keyboard layout, and predicted that the XL's parallel bus "actually makes the 600 more expandable than a 400 or 800". While disapproving of the use of an operating system closer to the 1200XL's than the 400 and 800's, and the "inadequate and frankly disappointing" documentation, ANALOG concluded that "our first impression ... is mixed but mostly optimistic". The magazine warned, however, that because of "Atari's sluggish marketing", unless existing customers persuaded others to buy the XL models, "we'll all end up marching to the beat of a drummer whose initials are IBM".
Tramiel takeover, declining market
Commodore founder Jack Tramiel resigned in January 1984 and in July, he purchased the Atari consumer division from Warner for an extremely low price. When Tramiel took over, the high-end XL models were canceled and the low-end XLs were redesigned into the XE series. Nearly all research, design, and prototype projects were cancelled, including the Amiga-based 1850XLD. Tramiel focused on developing the 68000-based Atari ST computer line and recruiting ex-Commodore engineers to work on it.
Atari sold about 700,000 computers in 1984 compared to Commodore's two million. As his new company prepared to ship the Atari ST in 1985, Tramiel stated that sales of Atari 8-bit computers were "very, very slow". They were never an important part of Atari's business compared to video games, and it is possible that the 8-bit line was never profitable for the company despite selling almost 1.5 million computers by early 1986.
By that year, the Atari software market was decreasing in size. Antic magazine stated in an editorial in May 1985 that it had received many letters complaining that software companies were ignoring the Atari market, and urged readers to contact the companies' leaders. "The Atari 800 computer has been in existence since 1979. Six years is a pretty long time for a computer to last. Unfortunately, its age is starting to show", ANALOG Computing wrote in February 1986. The magazine stated that while its software library was comparable in size to that of other computers, "now—and even more so in the future—there is going to be less software being made for the Atari 8-bit computers", warning that 1985 only saw a "trickle" of major new titles and that 1986 "will be even leaner".
Computer Gaming World that month stated "games don't come out for the Atari first anymore". In April, the magazine published a survey of ten game publishers which found that they planned to release 19 Atari games in 1986, compared to 43 for Commodore 64, 48 for Apple II, 31 for IBM PC, 20 for Atari ST, and 24 for Amiga. Companies stated that one reason for not publishing for Atari was the unusually high amount of software piracy on the computer, partly caused by the Happy Drive. The magazine warned later that year, "Is this the end for Atari 800 games? It certainly looks like it might be from where I write". In 1987, MicroProse confirmed that it would not release Gunship for the Atari 8-bits, stating that the market was too small.
XE series
The 65XE and 130XE (XE stood for XL-Compatible Eight Bit) were announced in 1985 at the same time as the initial models in the Atari ST series, and they visually resembled the ST. The 65XE has 64 KB of RAM and is functionally equivalent to the 800XL minus the PBI connection. The 130XE has 128 KB of memory, accessible through bank selection, and was aimed at the mass market.
The 130XE added the Enhanced Cartridge Interface (ECI), which is almost compatible with the Parallel Bus Interface (PBI), but physically smaller, since it is located next to the standard 400/800-compatible Cartridge Interface. It provides only those signals that did not exist in the latter. ECI peripherals were expected to plug into both the standard Cartridge Interface and the ECI port. Later revisions of the 65XE contain the ECI port as well.
The 65XE was marketed as 800XE in Germany and Czechoslovakia, to ride on the popularity of the 800XL in those markets. All 800XE units contain the ECI port.
XE Game System
Atari released the XE Game System, or Atari XEGS, in 1987. A repackaged 65XE with a removable keyboard, it boots to the 1981 port of Missile Command instead of BASIC if the keyboard isn't connected.
End of support and legacy
With the beginning of 1992, Atari Corp. officially dropped all remaining support for the 8-bit family.
In 2006, Curt Vendel, who designed the Atari Flashback for Atari, Inc. in 2004, claimed that Atari released the 8-bit chipset into the public domain.
There is agreement in the community that Atari authorized the distribution of the Atari 800's ROM with the Xformer 2.5 emulator, which makes the ROM legally available today as freeware.
Design
The Atari machines consist of a 6502 as the main processor, a combination of ANTIC and GTIA chips to provide graphics, and the POKEY chip to handle sound and serial input/output. These support chips are controlled via a series of registers that can be user-controlled via memory load/store instructions running on the 6502. For example, the GTIA uses a series of registers to select colors for the screen; these colors can be changed by inserting the correct values into its registers, which are mapped into the address space that is visible to the 6502. Some of the coprocessors use data stored in RAM, notably ANTIC's display buffer and Display List, as well as GTIA's Player/Missile (sprite) information.
The custom hardware features enable the computers to perform many functions directly in hardware, such as smooth background scrolling, that would need to be done in software in most other computers. Graphics and sound demos were part of Atari's earliest developer information and used as marketing materials with computers running in-store demos.
ANTIC
ANTIC is a microprocessor which processes a sequence of instructions known as a display list. An instruction adds one row of the specified graphics mode to the display. Each mode varies based on whether it represents text or a bitmap, the resolution and number of colors, and its vertical height in scan lines. An instruction also indicates if it contains an interrupt, if fine scrolling is enabled, and optionally where to fetch the display data from memory.
Since each row can be specified individually, the programmer can create displays containing different text or bitmapped graphics modes on one screen, where the data can be fetched from arbitrary, non-sequential memory addresses.
ANTIC reads this display list and the display data using DMA (Direct Memory Access), then translates the result into a pixel data stream representing the playfield text and graphics. This stream then passes to GTIA which applies the playfield colors and incorporates Player/Missile graphics (sprites) for final output to a TV or composite monitor. Once the display list is set-up, the display is generated without any CPU intervention.
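As a concrete sketch, the classic 24-row text display (ANTIC mode 2) is produced by a display list like the one built below. The addresses passed in are hypothetical; real programs place the list and screen memory wherever RAM allows:

```python
def text_mode_display_list(screen_addr: int, dlist_addr: int) -> bytes:
    """Build the standard 24-row ANTIC mode-2 (40x24 text) display list."""
    dl = bytearray([0x70, 0x70, 0x70])   # 3 x 8 blank scan lines (skip overscan)
    dl += bytes([0x42])                   # mode 2 with LMS (Load Memory Scan)...
    dl += bytes([screen_addr & 0xFF, (screen_addr >> 8) & 0xFF])  # ...screen address
    dl += bytes([0x02] * 23)              # 23 more mode-2 rows
    dl += bytes([0x41])                   # JVB: jump and wait for vertical blank...
    dl += bytes([dlist_addr & 0xFF, (dlist_addr >> 8) & 0xFF])    # ...back to list start
    return bytes(dl)

dl = text_mode_display_list(0xBC40, 0xBC20)  # example (hypothetical) addresses
```

Because each row is its own instruction, replacing any of the 0x02 bytes with another mode byte mixes text and bitmap rows on the same screen, exactly as described above.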
There are 15 character and bitmap modes. In low-resolution modes, 2 or 4 colors per display line can be set. In high-resolution mode, one color can be set per line, but the luminance values of the foreground and background can be adjusted. High resolution bitmap mode (320x192 graphics) produces NTSC artifacts which are "tinted" depending on the color values; it was normally impossible to get color with this mode on PAL machines.
For text modes, the character set data is pointed to by a register. It defaults to an address in ROM, but if pointed to RAM then a programmer can create custom characters. Depending on the text mode, this data can be on any 1K or 512 byte boundary. Additional register controls allow flipping all characters upside down and toggling inverse video.
CTIA/GTIA
The Color Television Interface Adaptor (CTIA) is the graphics chip originally used in the Atari 400 and 800. It is the successor to the TIA chip of the 1977 Atari VCS. According to Joe Decuir, George McLeod designed the CTIA in 1977. It was replaced with the Graphic Television Interface Adaptor (GTIA) in later revisions of the 400 and 800 and all later 8-bit models. GTIA, also designed by McLeod, adds three new playfield graphics modes to ANTIC which allow more colors than previously available.
The CTIA/GTIA receives Playfield graphics information from ANTIC and applies colors to the pixels from a 128 or 256 color palette depending on the color interpretation mode in effect. CTIA/GTIA also controls Player/Missile Graphics (sprites) including collision detection between players, missiles, and the playfield; display priority for objects; and color/luminance control of all displayed objects. CTIA/GTIA outputs separate digital luminance and chroma signals, which are mixed to form an analog composite video signal.
CTIA/GTIA also reads the joystick triggers and the console keys (Start, Select, and Option) and operates the keyboard speaker in the Atari 400/800. In later computer models the audio output for the keyboard speaker is mixed with the audio out for transmission to the TV/video monitor.
POKEY
The third custom support chip, named POKEY, is responsible for reading the keyboard, generating sound, and handling serial communications. It works in conjunction with the PIA chip (the 6520 Peripheral Interface Adapter), which manages commands and IRQs, the four joystick ports on the 400/800, and the RAM bank and ROM (OS/BASIC/Self-test) enables on the XL/XE lines. POKEY also provides timers, a random number generator (used for generating acoustic noise as well as random numbers), and maskable interrupts. It has four semi-independent audio channels, each with its own frequency, noise, and volume control; each 8-bit channel has its own audio control register which selects the noise content and volume. For higher sound frequency resolution (quality), two of the audio channels can be combined, so that the frequency is defined with a 16-bit value instead of the usual 8-bit one. The name POKEY comes from the words "POtentiometer" and "KEYboard", two of the I/O devices that POKEY interfaces with (the potentiometer is the mechanism used by the paddle controllers). The POKEY chip, as well as its dual- and quad-core versions, was used in many Atari coin-op arcade machines of the 1980s, including Centipede and Millipede, Missile Command, Asteroids Deluxe, Major Havoc, and Return of the Jedi.
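As a concrete illustration of the per-channel control layout, the sketch below packs an AUDC-style audio-control byte (top three bits selecting the distortion/noise type, low four bits the volume). The bit layout follows common POKEY documentation; the helper name and the example values are illustrative, not from any Atari source.

```python
def pack_audc(distortion: int, volume: int) -> int:
    """Pack a POKEY audio-control (AUDC) byte for one channel.

    Bits 7-5: distortion/noise selector (0-7); value 5 (binary 101)
              selects a pure tone on real hardware.
    Bit 4:    forced-volume flag (left clear here).
    Bits 3-0: volume (0-15).
    """
    if not (0 <= distortion <= 7 and 0 <= volume <= 15):
        raise ValueError("distortion must be 0-7, volume 0-15")
    return (distortion << 5) | volume

# Pure tone (distortion 5) at half volume -> 0xA8
assert pack_audc(5, 8) == 0xA8
```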
Models
400 and 800 (1979) – original machines in beige cases. The 400 has a membrane keyboard. The 800 has full-travel keys, two cartridge ports, and monitor output. Both have expandable memory (up to 48 KB); the slots are easily accessible in the 800. Later PAL versions have the 6502C processor.
1200XL (1983) – new aluminum and smoked plastic case. Includes 64 KB of RAM, two joystick ports, a Help key, and four function keys. Some older software was incompatible with the new OS.
600XL and 800XL (1983) – The 600XL has 16 KB of memory, and PAL versions have a monitor port. The 800XL has 64 KB and monitor output. Both have built-in BASIC and a Parallel Bus Interface (PBI) expansion port. The last produced PAL units contain the Atari FREDDIE chip and Atari BASIC revision C.
65XE and 130XE (1985) – The 130XE has 128 KB of bank-switched RAM and an Enhanced Cartridge Interface (ECI) instead of a PBI. The first revisions of the 65XE have no ECI or PBI, while the later ones contain the ECI. The 65XE was relabelled as 800XE in some European markets.
XE Game System (1987) – A 65XE styled as a game console. The basic version of the system shipped without the detachable keyboard. With the keyboard it operates just like other Atari 8-bit computer models.
Production timeline
Production timeline dates retrieved from Atari 8-Bit Computers F.A.Q., and Chronology of Personal Computers.
Prototypes/vaporware
1400XL – Similar to the 1200XL but with a PBI, FREDDIE chip, built-in modem and a Votrax SC-01 speech synthesis chip. Cancelled.
1450XLD – a 1400XL with built-in 5¼″ disk drive and expansion bay for a second 5¼″ disk drive. Code named Dynasty. Made it to pre-production, but was abandoned by Tramiel.
1600XL – codenamed Shakti, this was dual-processor system with 6502 and 80186 processors and two built-in 5¼″ floppy disk drives.
1850XL – codenamed Mickey, this was to use the "Lorraine" (aka "Amiga") custom graphics chips.
900XLF – redesigned 800XLF. Became the 65XE.
65XEM – 65XE with AMY sound synthesis chip. Cancelled.
65XEP – "portable" 65XE with 3.5" disk drive, 5" green CRT and battery pack.
Peripherals
During the lifetime of the 8-bit series, Atari released a large number of peripherals including cassette tape drives, 5.25-inch floppy drives, printers, modems, a touch tablet, and an 80-column display module.
Atari's peripherals used the proprietary Atari SIO port, which allowed them to be daisy chained together into a single string. A primary goal of the Atari computer design was user-friendliness which was assisted by the SIO bus. Since only one kind of connector plug is used for all devices the Atari computer was easy for novice users to expand. Atari SIO devices used an early form of plug-n-play. Peripherals on the bus have their own IDs, and can deliver downloadable drivers to the Atari computer during the boot process. However, the additional electronics in these peripherals made them cost more than the equivalent "dumb" devices used by other systems of the era.
Software
Atari did not initially disclose technical information for its computers, except to software developers who agreed to keep it secret, possibly to increase its own software sales. Cartridge software was so rare at first that InfoWorld joked in 1980 that Atari owners might have considered turning the slot "into a fancy ashtray". The magazine advised them to "clear out those cobwebs" for Atari's Star Raiders, which became the platform's killer app, akin to VisiCalc for the Apple II in its ability to persuade customers to buy the computer.
Chris Crawford and others at Atari published detailed technical information in De Re Atari. In 1982 Atari published both the Atari Home Computer System Hardware Manual and an annotated source listing of the operating system. These resources resulted in many books and articles about programming the computer's custom hardware.
Because of graphics superior to those of the Apple II and Atari's home-oriented marketing, games dominated its software library. A 1984 compendium of reviews used 198 pages for games compared to 167 for all others.
Built-in operating system
The Atari 8-bit computers come with an operating system built into the ROM. The Atari 400/800 has two versions:
OS Rev. A – 10 KB ROM (3 chips) early machines
OS Rev. B – 10 KB ROM (3 chips) most common
The XL/XE all have OS revisions, which created compatibility issues with certain software. Atari responded with the Translator Disk, a floppy disk which loads the older 400/800 Rev. 'B' or Rev. 'A' OS into the XL/XE computers.
OS Rev. 10 – 16 KB ROM (2 chips) for 1200XL Rev A
OS Rev. 11 – 16 KB ROM (2 chips) for 1200XL Rev B (bug fixes)
OS Rev. 1 – 16 KB ROM for 600XL
OS Rev. 2 – 16 KB ROM for 800XL
OS Rev. 3 – 16 KB ROM for 800XE/130XE
OS Rev. 4 – 32 KB ROM (16 KB OS + 8 KB BASIC + 8 KB Missile Command) for XEGS
The XL/XE models that followed the 1200XL also have the Atari BASIC ROM built-in, which can be disabled at startup by holding down the silver OPTION key. Originally this was revision B, which has some serious bugs. Later models have revision C.
Disk Operating System
The standard Atari OS only contained very low-level routines for accessing floppy disk drives. An extra layer, a disk operating system, was required to assist in organizing file system-level disk access. This was known as Atari DOS, and like most home computer DOSes of the era, had to be booted from floppy disk at every power-on or reset. Atari DOS was entirely menu-driven.
DOS 1.0
DOS 2.0S – Improved over DOS 1.0; became the standard for the 810 disk drive.
DOS 3.0 – Came with 1050 drive. Uses a different disk format which is incompatible with DOS 2.0, making it unpopular.
DOS 2.5 – Replaced DOS 3.0 with later 1050s. Functionally identical to DOS 2.0S, but able to read and write enhanced density disks.
DOS XE – Designed for the XF551 drive.
Third-party replacement DOSes were also available.
Playfield graphics
While the ANTIC chip allows a variety of different Playfield modes and widths, the original Atari Operating System included with the Atari 800/400 computers provides easy access to a limited subset of these graphics modes. These are exposed to users through Atari BASIC via the "GRAPHICS" command, and to some other languages, via similar system calls. Oddly, the modes not directly supported by the original OS and BASIC are the modes most useful for games. The later version of the OS used in the Atari 8-bit XL/XE computers added support for most of these "missing" graphics modes.
ANTIC text modes support soft, redefinable character sets. ANTIC has four different methods of glyph rendering related to the text modes: Normal, Descenders, Single color character matrix, and Multiple colors per character matrix.
The ANTIC chip uses a Display List and other settings to create these modes. Any graphics mode in the default CTIA/GTIA color interpretation can be freely mixed without CPU intervention by changing instructions in the Display List.
The actual ANTIC screen geometry is not fixed. The hardware can be directed to display a narrow Playfield (128 color clocks/256 hi-res pixels wide), the normal width Playfield (160 color clocks/320 hi-res pixels wide), and a wide, overscan Playfield (192 color clocks/384 hi-res pixels wide) by setting a register value. While the Operating System's default height for creating graphics modes is 192 scan lines, ANTIC can display vertical overscan up to 240 TV scan lines tall by creating a custom Display List.
The Display List capabilities provide horizontal and vertical coarse scrolling requiring minimal CPU direction. Furthermore, the ANTIC hardware supports horizontal and vertical fine scrolling—shifting the display of screen data incrementally by single pixels (color clocks) horizontally and single scan lines vertically.
The video display system was designed with careful consideration of the NTSC video timing for color output. The system CPU clock and video hardware are synchronized to one-half the NTSC clock frequency. Consequently, the pixel output of all display modes is based on the size of the NTSC color clock which is the minimum size needed to guarantee correct and consistent color regardless of the pixel location on the screen. The fundamental accuracy of the pixel color output allows horizontal fine scrolling without color "strobing"—unsightly hue changes in pixels based on horizontal position caused when signal timing does not provide the TV/monitor hardware adequate time to reach the correct color.
Character modes
Map modes
GTIA modes
GTIA modes are ANTIC Mode F displays with an alternate color interpretation option enabled via a GTIA register. The full color expression of these GTIA modes can also be engaged in ANTIC text modes 2 and 3, though these also require a custom character set to achieve practical use of the colors.
See also
List of Atari 8-bit family emulators
Notes
References
Bibliography
The Atari 800 Personal Computer System, by the Atari Museum, accessed November 13, 2008
External links
Atari 8-Bit Computers: Frequently Asked Questions
Atari 400/800 Peripherals
"A History of Gaming Platforms: Atari 8-bit Computers" at Gamasutra
Atari XL Series Systems & Prototypes
Technical chipset information
Atari Mania database of Atari 8-bit family games and other software
Atari Archives text of Atari 8-bit family books
Atari SAP Music Archive POKEY music and players
More K's. Less £'s – British brochure for the Atari 400 and 800.
6502-based home computers
Home computers
Computer-related introductions in 1979
|
55716973
|
https://en.wikipedia.org/wiki/Decoding%20Chomsky
|
Decoding Chomsky
|
Decoding Chomsky: Science and Revolutionary Politics is a 2016 book by the linguistic anthropologist Chris Knight on Noam Chomsky's approach to science and politics. Knight admires Chomsky's politics, but argues that his linguistic theories were influenced in damaging ways by his immersion since the early 1950s in an intellectual culture heavily dominated by US military priorities, an immersion deepened when he secured employment in a Pentagon-funded electronics laboratory in the Massachusetts Institute of Technology.
In October 2016, Chomsky dismissed the book, telling The New York Times that it was based on a false assumption since, in fact, no military "work was being done on campus" during his time at MIT. In a subsequent public comment, Chomsky on similar grounds denounced Knight's entire narrative as a "wreck ... complete nonsense throughout". In contrast, a reviewer for the US Chronicle of Higher Education described Decoding Chomsky as perhaps "the most in-depth meditation on 'the Chomsky problem' ever published". In the UK, the New Scientist described Knight's account as "trenchant and compelling". The controversy continued in the London Review of Books, where the sociologist of science Hilary Rose cited Decoding Chomsky approvingly, provoking Chomsky to denounce what he called "Knight's astonishing performance" in two subsequent letters. The debate around Decoding Chomsky then continued in Open Democracy, with contributions from Frederick Newmeyer, Randy Allen Harris and others.
Since the book was published, Knight has published what he claims is evidence that Chomsky worked on a military sponsored "command and control" project for the MITRE Corporation in the early 1960s.
The argument
Decoding Chomsky begins with Chomsky's claim that his political and scientific outputs have little connection with each other. For example, asked in 2006 whether his science and his politics are related, Chomsky replied that the connection is "almost non-existent ... There is a kind of loose, abstract connection in the background. But if you look for practical connections, they're non-existent."
Knight accepts that scientific research and political involvement are distinct kinds of activity serving very different purposes. But he claims that, in Chomsky's case, the conflicts intrinsic to his institutional situation forced him to drive an unusually deep and damaging wedge between his politics and his science.
Knight points out that Chomsky began his career working in an electronics laboratory whose primary technological mission he detested on moral and political grounds. Funded by the Pentagon, the Research Laboratory of Electronics at MIT was involved in contributing to the basic research required for hi-tech weapons systems. Suggesting that he was well aware of MIT's role at the time, Chomsky himself recalls:
It was because of his anti-militarist conscience, Knight argues, that such research priorities were experienced by him as deeply troubling. By way of evidence, Knight cites George Steiner in a 1967 The New York Review of Books article, "Will Noam Chomsky announce that he will stop teaching at MIT or anywhere in this country so long as torture and napalm go on? ... Will he even resign from a university very largely implicated in the kind of 'strategic studies' he so rightly scorns?" Chomsky said, "I have given a good bit of thought to the specific suggestions that you put forth... leaving the country or resigning from MIT, which is, more than any other university, associated with activities of the department of 'defense.' ... As to MIT, I think that its involvement in the war effort is tragic and indefensible."
Chomsky's situation at MIT, according to Knight, is summed up by Chomsky when he describes some of his colleagues this way:
In order to maintain his moral and political integrity, Knight argues, Chomsky resolved to limit his cooperation to pure linguistic theory of such an abstract kind that it could not conceivably have any military use.
With this aim in mind, Chomsky's already highly abstract theoretical modelling became so unusually abstract that not even language's practical function in social communication could be acknowledged or explored. One damaging consequence, according to Knight, was that scientific investigation of the ways in which real human beings use language became divorced from what quickly became the prevailing MIT school of formal linguistic theory.
Knight argues that the conflicting pressures Chomsky experienced had the effect of splitting his intellectual output in two, prompting him to ensure that any work he conducted for the military was purely theoretical—of no practical use to anyone—while his activism, being directed relentlessly against the military, was preserved free of any obvious connection with his science.
To an unprecedented extent, according to Knight, mind in this way became divorced from body, thought from action, and knowledge from its practical applications, these disconnects characterizing a philosophical paradigm which came to dominate much of intellectual life for half a century across the Western world.
Reception
Decoding Chomsky has been both criticised and acclaimed by a wide variety of commentators.
Norbert Hornstein and Nathan J. Robinson dismiss the book as betraying a complete misunderstanding of Chomsky's linguistic theories and beliefs. They question the motives of Yale University Press, asking why Yale considered it appropriate to publish Knight's critique, which they say attacks Chomsky through political conjecture rather than addressing his linguistic or political ideas. Comparing Knight's Marxist criticism to a conservative criticism that was released in the same year by Tom Wolfe, they speculate that both were published with similar motivations – that Chomsky's criticisms were a threat to the power behind the publishers. (Current Affairs)
Robert Barsky argues that since Knight was never formally trained in Chomsky's conception of theoretical linguistics, he has no right to comment on whether it stands up as science. Decoding Chomsky, claims Barsky, offers no original insights, consisting only of "a weak rehash of critiques from naysayers to Chomsky's approach". While Barsky concedes that Chomsky did work in a military laboratory, he argues that this cannot be significant since virtually all US scientists receive Pentagon funding one way or another. (Moment).
Peter Stone claims that Knight hates Chomsky and "for that reason, he wrote Decoding Chomsky – a nasty, mean-spirited, vitriolic, ideologically-driven hatchet job". Stone states that, although Knight is on the Left, "the level of venom on display here exceeds that of all but the most unhinged of Chomsky’s detractors on the Right." He goes on to state that "Knight spares no opportunity to paint Chomsky’s every thought and deed in the blackest possible terms" and that: "Decoding Chomsky is not a critique of a body of work in linguistics; it is an attempt to demonise a man for his perceived political deviations, even though that man happens to be on the same side of the political spectrum as the man who is demonising him. Reading Decoding Chomsky taught me something about the mindset of the prosecutors in the Moscow Show Trials."
Decoding Chomsky was positively received by various scientists and commentators including: Michael Tomasello, Daniel Everett, David Hawkes, Luc Steels, Sarah Blaffer Hrdy and Frederick Newmeyer. Reviewing the book in The Times Literary Supplement, Houman Barekat commended Knight for an “engaging and thought-provoking intellectual history”. In The American Ethnologist Sean O'Neill said of the book: “History comes alive via compelling narrative. ... Knight is indeed an impressive historian when it comes to recounting the gripping personal histories behind Chomsky's groundbreaking contributions to science and philosophy.”
The linguist Daniel Everett wrote that "Knight's exploration is unparalleled. No other study has provided such a full understanding of Chomsky's background, intellectual foibles, objectives, inconsistencies, and genius." The linguist Gary Lupyan wrote that Knight “makes a compelling case for the scientific vacuousness of [Chomsky’s linguistic] ideas.” Another linguist, Bruce Nevin, wrote that Knight “shows how Chomsky has acquiesced in—more than that, has participated in and abetted—a radical post-war transformation of the relation of science to society, legitimating one of the significant political achievements of the right, the pretense that science is apolitical.”
The philosopher Thomas Klikauer wrote that Decoding Chomsky is "an insightful book and, one might say, a-pleasure-to-read kind of book." Another philosopher, Rupert Read described the book as “a brilliant, if slightly harsh, disquisition”. In the Chronicle of Higher Education Tom Bartlett described the book as a "compelling read". In Anarchist Studies Peter Seyferth said the book "focuses on all the major phases of Chomsky's linguistic theories, their institutional preconditions and their ideological and political ramifications. And it is absolutely devastating."
David Golumbia has described himself as “a huge admirer of Decoding Chomsky” and Les Levidow said the book was “impressive”. The linguist Randy Allen Harris said “It’s a good and interesting book ... which everyone who is interested in Chomsky’s impact on contemporary culture should read.” Harris disagrees with some aspects of the book’s thesis. However, he describes Chomsky's misrepresentation of the book as absurd and, much like his fellow expert in Chomskyan linguistics Frederick Newmeyer, he does agree that:
Further research on Chomsky at MIT
In his book, Knight writes that the US military initially funded Chomsky's linguistics because they were interested in machine translation. Later their focus shifted and Knight cites Air Force Colonel Edmund Gaines’ statement that: "We sponsored linguistic research in order to learn how to build command and control systems that could understand English queries directly."
From 1963, Chomsky worked as a consultant to the MITRE Corporation, a military research institute set up by the US Air Force. According to one of Chomsky's former students, Barbara Partee, MITRE's justification for sponsoring Chomsky's approach to linguistics was "that in the event of a nuclear war, the generals would be underground with some computers trying to manage things, and that it would probably be easier to teach computers to understand English than to teach the generals to program."
Chomsky made his most detailed response to Knight in the 2019 book, The Responsibility of Intellectuals: Reflections by Noam Chomsky and others after 50 years. In this response, Chomsky dismissed Knight’s claims as a "vulgar exercise of defamation" and a "web of deceit and misinformation".
Knight, in turn, responded to Chomsky citing more documents, including one that states that MITRE's work to support "US Air Force-supplied command and control systems ... involves the application of a logico-mathematical formulation of linguistic structure developed by Noam Chomsky." Knight cites other documents that he claims show that Chomsky's student, Lieutenant Samuel Jay Keyser, did apply Chomskyan theory to the control of military aircraft, including the B-58 nuclear-armed bomber.
References
External links
Science and Revolution – Chris Knight's website on his Chomsky research.
2016 non-fiction books
American non-fiction books
Books about the politics of science
Books by Chris Knight
English-language books
Works about Noam Chomsky
Yale University Press books
|
371410
|
https://en.wikipedia.org/wiki/David%20Parnas
|
David Parnas
|
David Lorge Parnas (born February 10, 1941) is a Canadian early pioneer of software engineering, who developed the concept of information hiding in modular programming, which is an important element of object-oriented programming today. He is also noted for his advocacy of precise documentation.
Life
Parnas earned his Ph.D. at Carnegie Mellon University in electrical engineering. Parnas also earned a professional engineering license in Canada and was one of the first to apply traditional engineering principles to software design.
He worked there as a professor for many years. He also taught at the University of North Carolina at Chapel Hill (U.S.), at the Department of Computer Science of the Technische Universität Darmstadt (Germany), the University of Victoria (British Columbia, Canada), Queen's University in Kingston, Ontario, McMaster University in Hamilton, Ontario, and University of Limerick (Republic of Ireland).
David Parnas received a number of awards and honors:
ACM "Best Paper" Award, 1979
Norbert Wiener Award for Social and Professional Responsibility, 1987
Two "Most Influential Paper" awards International Conference on Software Engineering, 1991 and 1995
Doctor honoris causa of the Computer Science Department, ETH Zurich, Switzerland, 1986
Fellow of the Royal Society of Canada, 1992
Fellow of the Association for Computing Machinery, 1994
Doctor honoris causa of the Louvain School of Engineering, University of Louvain (UCLouvain), Belgium, 1996
ACM SIGSOFT's "Outstanding Research" award, 1998
IEEE Computer Society's 60th Anniversary Award, 2007
Doctor honoris causa of the Faculty of Informatics, University of Lugano, Switzerland, 2008
Fellow of the Gesellschaft für Informatik, 2008
Fellow of the Institute of Electrical and Electronics Engineers (IEEE), 2009
Doctor honoris causa of the Vienna University of Technology (Dr. Tech.H.C.), Vienna Austria, 2011
Work
Modular design
His double dictum of high cohesion within modules and loose coupling between modules is fundamental to modular design in software. However, in Parnas's seminal 1972 paper On the Criteria to Be Used in Decomposing Systems into Modules, this principle is expressed in terms of information hiding; the terms cohesion and coupling themselves do not appear in his work.
Technical activism
Dr Parnas took a public stand against the US Strategic Defense Initiative (also known as "Star Wars") in the mid 1980s, arguing that it would be impossible to write an application of sufficient quality that it could be trusted to prevent a nuclear attack. He has also been in the forefront of those urging the professionalization of "software engineering" (a term that he characterizes as "an unconsummated marriage"). Dr. Parnas is also a heavy promoter of ethics in the field of software engineering.
Stance on academic evaluation methods
Parnas has joined the group of scientists who openly criticize the publication-count approach to ranking academic production. In his November 2007 paper Stop the Numbers Game, he elaborates several reasons why the number-based academic evaluation systems used in many fields by universities all over the world (whether oriented toward the number of publications or the number of citations they receive) are flawed and, instead of contributing to scientific progress, lead to knowledge stagnation.
Bibliography
See also
Automatic programming
References
Further reading
External links
McMaster University (Hamilton, Ontario, Canada)
University of Limerick profile and CV (links broken as of 2013-04-26)
IEEE Computer Society's 60th Anniversary Award
1941 births
Living people
People from Plattsburgh, New York
American computer scientists
Carnegie Mellon University College of Engineering alumni
Carnegie Mellon University faculty
Formal methods people
Fellows of the Association for Computing Machinery
McMaster University faculty
Canadian software engineers
Software engineering researchers
Academics of the University of Limerick
Scientists from New York (state)
Technische Universität Darmstadt faculty
|
24306
|
https://en.wikipedia.org/wiki/Portable%20Network%20Graphics
|
Portable Network Graphics
|
Portable Network Graphics (PNG, officially pronounced "ping", colloquially spelled out as "P-N-G") is a raster-graphics file format that supports lossless data compression. PNG was developed as an improved, non-patented replacement for Graphics Interchange Format (GIF) — unofficially, the initials PNG stood for the recursive acronym "PNG's not GIF".
PNG supports palette-based images (with palettes of 24-bit RGB or 32-bit RGBA colors), grayscale images (with or without an alpha channel for transparency), and full-color non-palette-based RGB or RGBA images. The PNG working group designed the format for transferring images on the Internet, not for professional-quality print graphics; therefore non-RGB color spaces such as CMYK are not supported. A PNG file contains a single image in an extensible structure of chunks, encoding the basic pixels and other information such as textual comments and integrity checks documented in RFC 2083.
PNG files use the file extension PNG or png and are assigned MIME media type image/png.
PNG was published as informational RFC 2083 in March 1997 and as an ISO/IEC 15948 standard in 2004.
History and development
The motivation for creating the PNG format was the realization that, on 28 December 1994, the Lempel–Ziv–Welch (LZW) data compression algorithm used in the Graphics Interchange Format (GIF) format was patented by Unisys. The patent required that all software supporting GIF pay royalties, leading to a flurry of criticism from Usenet users. One of them was Thomas Boutell, who on 4 January 1995 posted a precursory discussion thread on the Usenet newsgroup "comp.graphics" in which he devised a plan for a free alternative to GIF. Other users in that thread put forth many propositions that would later be part of the final file format. Oliver Fromme, author of the popular JPEG viewer QPEG, proposed the PING name, eventually becoming PNG, a recursive acronym meaning PING is not GIF, and also the .png extension. Other suggestions later implemented included the Deflate compression algorithm and 24-bit color support, the lack of the latter in GIF also motivating the team to create their file format. The group would become known as the PNG Development Group, and as the discussion rapidly expanded, it later used a mailing list associated with a CompuServe forum.
The full specification of PNG was released under the approval of W3C on 1 October 1996, and later as RFC 2083 on 15 January 1997. The specification was revised on 31 December 1998 as version 1.1, which addressed technical problems for gamma and color correction.
Version 1.2, released on 11 August 1999, added the iTXt chunk as the specification's only change, and a reformatted version of 1.2 was released as a second edition of the W3C standard on 10 November 2003, and as an International Standard (ISO/IEC 15948:2004) on 3 March 2004.
Although GIF allows for animation, it was decided that PNG should be a single-image format. In 2001, the developers of PNG published the Multiple-image Network Graphics (MNG) format, with support for animation. MNG achieved moderate application support, but not enough among mainstream web browsers and no usage among web site designers or publishers. In 2008, certain Mozilla developers published the Animated Portable Network Graphics (APNG) format with similar goals. APNG is a format that is natively supported by Gecko- and Presto-based web browsers and is also commonly used for thumbnails on Sony's PlayStation Portable system (using the normal PNG file extension). In 2017, Chromium-based browsers adopted APNG support. In January 2020, Microsoft Edge became Chromium-based, thus inheriting support for APNG. With this, all major browsers now support APNG.
PNG Working Group
The original PNG specification was authored by an ad hoc group of computer graphics experts and enthusiasts. Discussions and decisions about the format were conducted by email. The original authors listed on RFC 2083 are:
Editor: Thomas Boutell
Contributing Editor: Tom Lane
Authors (in alphabetical order by last name): Mark Adler, Thomas Boutell, Christian Brunschen, Adam M. Costello, Lee Daniel Crocker, Andreas Dilger, Oliver Fromme, Jean-loup Gailly, Chris Herborth, Aleks Jakulin, Neal Kettler, Tom Lane, Alexander Lehmann, Chris Lilley, Dave Martindale, Owen Mortensen, Keith S. Pickens, Robert P. Poole, Glenn Randers-Pehrson, Greg Roelofs, Willem van Schaik, Guy Schalnat, Paul Schmidt, Tim Wegner, Jeremy Wohl
File format
File header
A PNG file starts with an 8-byte signature (refer to hex editor image on the right):
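As an illustration, the well-known 8-byte signature (hex 89 50 4E 47 0D 0A 1A 0A) can be checked in a few lines of Python; the helper name is illustrative:

```python
# The 8-byte PNG signature: a high-bit byte, the letters "PNG", CRLF,
# a DOS end-of-file character (0x1A), and a final LF.
PNG_SIGNATURE = bytes([0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A])

def has_png_signature(data: bytes) -> bool:
    """Return True if the byte stream starts with the PNG signature."""
    return data[:8] == PNG_SIGNATURE

print(has_png_signature(b"\x89PNG\r\n\x1a\n" + b"..."))  # True
print(has_png_signature(b"GIF89a"))                      # False
```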
"Chunks" within the file
After the header comes a series of chunks, each of which conveys certain information about the image. Chunks declare themselves as critical or ancillary, and a program encountering an ancillary chunk that it does not understand can safely ignore it. This chunk-based storage layer structure, similar in concept to a container format or to Amiga's IFF, is designed to allow the PNG format to be extended while maintaining compatibility with older versions—it provides forward compatibility, and this same file structure (with different signature and chunks) is used in the associated MNG, JNG, and APNG formats.
A chunk consists of four parts: length (4 bytes, big-endian), chunk type/name (4 bytes), chunk data (length bytes) and CRC (cyclic redundancy code/checksum; 4 bytes). The CRC is a network-byte-order CRC-32 computed over the chunk type and chunk data, but not the length.
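A minimal chunk walker in Python illustrates this layout; `zlib.crc32` computes the same CRC-32 that PNG specifies, and the helper names are illustrative:

```python
import struct
import zlib

def make_chunk(ctype: bytes, cdata: bytes) -> bytes:
    """Serialize one chunk: big-endian length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(cdata)) + ctype + cdata
            + struct.pack(">I", zlib.crc32(ctype + cdata) & 0xFFFFFFFF))

def iter_chunks(data: bytes):
    """Yield (type, data) for every chunk after the 8-byte signature."""
    pos = 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])   # length excludes itself
        ctype = data[pos + 4:pos + 8]
        cdata = data[pos + 8:pos + 8 + length]
        (crc,) = struct.unpack(">I", data[pos + 8 + length:pos + 12 + length])
        if zlib.crc32(ctype + cdata) & 0xFFFFFFFF != crc:    # CRC skips the length
            raise ValueError(f"bad CRC in {ctype!r} chunk")
        yield ctype, cdata
        pos += 12 + length

blob = b"\x89PNG\r\n\x1a\n" + make_chunk(b"IEND", b"")
chunks = list(iter_chunks(blob))
print(chunks)  # [(b'IEND', b'')]
```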
Chunk types are given a four-letter case sensitive ASCII type/name; compare FourCC. The case of the different letters in the name (bit 5 of the numeric value of the character) is a bit field that provides the decoder with some information on the nature of chunks it does not recognize.
The case of the first letter indicates whether the chunk is critical or not. If the first letter is uppercase, the chunk is critical; if not, the chunk is ancillary. Critical chunks contain information that is necessary to read the file. If a decoder encounters a critical chunk it does not recognize, it must abort reading the file or supply the user with an appropriate warning.
The case of the second letter indicates whether the chunk is "public" (either in the specification or the registry of special-purpose public chunks) or "private" (not standardised). Uppercase is public and lowercase is private. This ensures that public and private chunk names can never conflict with each other (although two private chunk names could conflict).
The third letter must be uppercase to conform to the PNG specification. It is reserved for future expansion. Decoders should treat a chunk with a lower case third letter the same as any other unrecognised chunk.
The case of the fourth letter indicates whether the chunk is safe to copy by editors that do not recognize it. If lowercase, the chunk may be safely copied regardless of the extent of modifications to the file. If uppercase, it may only be copied if the modifications have not touched any critical chunks.
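These four case bits can be read directly from the ASCII values, since bit 5 (value 0x20) distinguishes lowercase from uppercase; a sketch:

```python
def chunk_properties(ctype: bytes) -> dict:
    """Decode the four case bits of a chunk type name (0x20 set = lowercase)."""
    return {
        "ancillary":    bool(ctype[0] & 0x20),   # lowercase 1st: may be ignored
        "private":      bool(ctype[1] & 0x20),   # lowercase 2nd: not standardised
        "reserved_ok":  not (ctype[2] & 0x20),   # 3rd must be uppercase today
        "safe_to_copy": bool(ctype[3] & 0x20),   # lowercase 4th: copy blindly
    }

print(chunk_properties(b"IHDR"))  # critical, public chunk
print(chunk_properties(b"tEXt"))  # ancillary, public, safe to copy
```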
Critical chunks
A decoder must be able to interpret critical chunks to read and render a PNG file.
IHDR must be the first chunk; it contains (in this order) the image's
width (4 bytes)
height (4 bytes)
bit depth (1 byte, values 1, 2, 4, 8, or 16)
color type (1 byte, values 0, 2, 3, 4, or 6)
compression method (1 byte, value 0)
filter method (1 byte, value 0)
interlace method (1 byte, values 0 "no interlace" or 1 "Adam7 interlace") (13 data bytes total).
As stated by the World Wide Web Consortium, bit depth is defined as "the number of bits per sample or per palette index (not per pixel)".
PLTE contains the palette: a list of colors.
IDAT contains the image, which may be split among multiple IDAT chunks. Such splitting increases filesize slightly, but makes it possible to generate a PNG in a streaming manner. The IDAT chunk contains the actual image data, which is the output stream of the compression algorithm.
IEND marks the image end; the data field of the IEND chunk has 0 bytes/is empty.
The PLTE chunk is essential for color type 3 (indexed color). It is optional for color types 2 and 6 (truecolor and truecolor with alpha) and it must not appear for color types 0 and 4 (grayscale and grayscale with alpha).
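The fixed 13-byte IHDR layout maps directly onto a `struct` format string; a sketch (the 800×600 truecolor example values are hypothetical):

```python
import struct

def parse_ihdr(cdata: bytes) -> dict:
    """Unpack the 13-byte IHDR payload into its seven fields."""
    width, height, depth, color_type, compression, filt, interlace = \
        struct.unpack(">IIBBBBB", cdata)
    return {"width": width, "height": height, "bit_depth": depth,
            "color_type": color_type, "compression": compression,
            "filter": filt, "interlace": interlace}

# A hypothetical 800x600 truecolor image: 8 bits per channel, color type 2,
# compression method 0, filter method 0, no interlacing.
ihdr = struct.pack(">IIBBBBB", 800, 600, 8, 2, 0, 0, 0)
print(parse_ihdr(ihdr))
```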
Ancillary chunks
Other image attributes that can be stored in PNG files include gamma values, background color, and textual metadata information. PNG also supports color management through the inclusion of ICC color space profiles.
bKGD gives the default background color. It is intended for use when there is no better choice available, such as in standalone image viewers (but not web browsers; see below for more details).
cHRM gives the chromaticity coordinates of the display primaries and white point.
dSIG is for storing digital signatures.
eXIf stores Exif metadata.
gAMA specifies gamma. The gAMA chunk contains only 4 bytes, and its value represents the gamma value multiplied by 100,000; for example, the gamma value 1/3.4 calculates to 29411.7647059 ((1/3.4)*(100,000)) and is converted to an integer (29412) for storage.
hIST can store the histogram, or total amount of each color in the image.
iCCP is an ICC color profile.
iTXt contains a keyword and UTF-8 text, with encodings for possible compression and translations marked with language tag. The Extensible Metadata Platform (XMP) uses this chunk with a keyword 'XML:com.adobe.xmp'
pHYs holds the intended pixel size (or pixel aspect ratio); the pHYs contains "Pixels per unit, X axis" (4 bytes), "Pixels per unit, Y axis" (4 bytes), and "Unit specifier" (1 byte) for a total of 9 bytes.
sBIT (significant bits) indicates the color-accuracy of the source data; this chunk contains a total of between 1 and 13 bytes.
sPLT suggests a palette to use if the full range of colors is unavailable.
sRGB indicates that the standard sRGB color space is used; the sRGB chunk contains only 1 byte, which is used for "rendering intent" (4 values—0, 1, 2, and 3—are defined for rendering intent).
sTER stereo-image indicator chunk for stereoscopic images.
tEXt can store text that can be represented in ISO/IEC 8859-1, with one key–value pair for each chunk. The "key" must be between 1 and 79 characters long; the separator is a null character. The "value" can be any length from zero up to the maximum permissible chunk size minus the length of the keyword and separator. Neither "key" nor "value" can contain a null character. Leading or trailing spaces are also disallowed.
tIME stores the time that the image was last changed.
tRNS contains transparency information. For indexed images, it stores alpha channel values for one or more palette entries. For truecolor and grayscale images, it stores a single pixel value that is to be regarded as fully transparent.
zTXt contains compressed text (and a compression method marker) with the same limits as tEXt.
The lowercase first letter in these chunks indicates that they are not needed for the PNG specification. The lowercase last letter in some chunks indicates that they are safe to copy, even if the application concerned does not understand them.
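Two of the simpler ancillary payloads above can be constructed by hand; a sketch in Python (the keyword and text values are illustrative):

```python
def encode_gama(gamma: float) -> int:
    """gAMA payload value: the gamma value times 100,000, stored as an integer."""
    return round(gamma * 100_000)

def text_chunk_data(keyword: str, value: str) -> bytes:
    """tEXt payload: Latin-1 keyword (1-79 chars), NUL separator, Latin-1 text."""
    if not 1 <= len(keyword) <= 79:
        raise ValueError("keyword must be 1 to 79 characters long")
    if "\x00" in keyword or "\x00" in value:
        raise ValueError("null characters are not allowed")
    return keyword.encode("latin-1") + b"\x00" + value.encode("latin-1")

print(encode_gama(1 / 3.4))                      # 29412, matching the gAMA example
print(text_chunk_data("Title", "A tiny example"))
```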
Pixel format
Pixels in PNG images are numbers that may be either indices of sample data in the palette or the sample data itself. The palette is a separate table contained in the PLTE chunk. Sample data for a single pixel consists of a tuple of between one and four numbers. Whether the pixel data represents palette indices or explicit sample values, the numbers are referred to as channels and every number in the image is encoded with an identical format.
The permitted formats encode each number as an unsigned integer value using a fixed number of bits, referred to in the PNG specification as the bit depth. Notice that this is not the same as color depth, which is commonly used to refer to the total number of bits in each pixel, not each channel. The permitted bit depths are summarized in the table along with the total number of bits used for each pixel.
The number of channels depends on whether the image is grayscale or color and whether it has an alpha channel. PNG allows the following combinations of channels, called the color type.
The color type is specified as an 8-bit value; however, only the low 3 bits are used, and even then only the five combinations listed above are permitted. So long as the color type is valid, it can be considered as a bit field, as summarized in the adjacent table:
bit value 1: the image data stores palette indices. This is only valid in combination with bit value 2;
bit value 2: the image samples contain three channels of data encoding trichromatic colors, otherwise the image samples contain one channel of data encoding relative luminance,
bit value 4: the image samples also contain an alpha channel expressed as a linear measure of the opacity of the pixel. This is not valid in combination with bit value 1.
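Reading the bit field this way yields the channel count for each valid color type; a sketch:

```python
def channel_count(color_type: int) -> int:
    """Number of channels implied by the PNG color type bit field."""
    if color_type & 1:                       # palette indices: one index channel
        return 1
    channels = 3 if color_type & 2 else 1    # trichromatic vs. grayscale
    if color_type & 4:                       # alpha channel present
        channels += 1
    return channels

# Color types 0, 2, 3, 4, 6: grayscale, truecolor, indexed,
# grayscale+alpha, truecolor+alpha.
print([channel_count(t) for t in (0, 2, 3, 4, 6)])  # [1, 3, 1, 2, 4]
```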
With indexed color images, the palette always stores trichromatic colors at a depth of 8 bits per channel (24 bits per palette entry). Additionally, an optional list of 8-bit alpha values for the palette entries may be included; if not included, or if shorter than the palette, the remaining palette entries are assumed to be opaque. The palette must not have more entries than the image bit depth allows for, but it may have fewer (for example, if an image with 8-bit pixels only uses 90 colors then it does not need palette entries for all 256 colors). The palette must contain entries for all the pixel values present in the image.
The standard allows indexed color PNGs to have 1, 2, 4 or 8 bits per pixel; grayscale images with no alpha channel may have 1, 2, 4, 8 or 16 bits per pixel. Everything else uses a bit depth per channel of either 8 or 16. The combinations this allows are given in the table above. The standard requires that decoders can read all supported color formats, but many image editors can only produce a small subset of them.
Transparency of image
PNG offers a variety of transparency options. With true-color and grayscale images either a single pixel value can be declared as transparent or an alpha channel can be added (enabling any percentage of partial transparency to be used). For paletted images, alpha values can be added to palette entries. The number of such values stored may be less than the total number of palette entries, in which case the remaining entries are considered fully opaque.
The scanning of pixel values for binary transparency is supposed to be performed before any color reduction to avoid pixels becoming unintentionally transparent. This is most likely to pose an issue for systems that can decode 16-bits-per-channel images (as is required for compliance with the specification) but only output at 8 bits per channel (the norm for all but the highest end systems).
Alpha storage can be "associated" ("premultiplied") or "unassociated", but PNG standardized on "unassociated" ("non-premultiplied") alpha: the stored RGB samples are not multiplied by the alpha value. This means that the over compositing operation must multiply the RGB emissions by the alpha at render time, and that the format cannot represent emission and occlusion properly.
Compression
PNG uses a 2-stage compression process:
pre-compression: filtering (prediction)
compression: DEFLATE
PNG uses DEFLATE, a non-patented lossless data compression algorithm involving a combination of LZ77 and Huffman coding. Permissively-licensed DEFLATE implementations, such as zlib, are widely available.
Compared to formats with lossy compression such as JPEG, choosing a compression setting higher than average delays processing, but often does not result in a significantly smaller file size.
Filtering
Before DEFLATE is applied, the data is transformed via a prediction method: a single filter method is used for the entire image, while for each image line, a filter type is chosen to transform the data to make it more efficiently compressible. The filter type used for a scanline is prepended to the scanline to enable inline decompression.
There is only one filter method in the current PNG specification (denoted method 0), and thus in practice the only choice is which filter type to apply to each line. For this method, the filter predicts the value of each pixel based on the values of previous neighboring pixels, and subtracts the predicted color of the pixel from the actual value, as in DPCM. An image line filtered in this way is often more compressible than the raw image line would be, especially if it is similar to the line above, since the differences from prediction will generally be clustered around 0, rather than spread over all possible image values. This is particularly important in relating separate rows, since DEFLATE has no understanding that an image is a 2D entity, and instead just sees the image data as a stream of bytes.
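The effect is easy to demonstrate with filter type 1 ("Sub"), which subtracts the byte one pixel to the left: a smooth gradient that DEFLATE alone handles poorly turns into a run of near-identical small differences. A sketch (the gradient data is illustrative):

```python
import zlib

def sub_filter(row: bytes, bpp: int = 1) -> bytes:
    """PNG filter type 1 ("Sub"): each byte minus the byte bpp to its left, mod 256."""
    return bytes((row[i] - (row[i - bpp] if i >= bpp else 0)) & 0xFF
                 for i in range(len(row)))

row = bytes(range(256)) * 4        # a smooth horizontal gradient, repeated
raw_size = len(zlib.compress(row))
filtered_size = len(zlib.compress(sub_filter(row)))
print(raw_size, filtered_size)     # the filtered data compresses far better
```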
There are five filter types for filter method 0; each type predicts the value of each byte (of the image data before filtering) based on the corresponding byte of the pixel to the left (A), the pixel above (B), and the pixel above and to the left (C) or some combination thereof, and encodes the difference between the predicted value and the actual value. Filters are applied to byte values, not pixels; pixel values may be one or two bytes, or several values per byte, but never cross byte boundaries. The filter types are:
0 (None): the byte is left unchanged.
1 (Sub): the difference between the byte and A is stored.
2 (Up): the difference between the byte and B is stored.
3 (Average): the difference between the byte and the floored mean of A and B is stored.
4 (Paeth): the difference between the byte and the Paeth predictor of A, B, and C is stored.
The Paeth filter is based on an algorithm by Alan W. Paeth.
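The Paeth predictor itself is only a few lines; a sketch following the algorithm as specified:

```python
def paeth_predictor(a: int, b: int, c: int) -> int:
    """Paeth predictor: a = left, b = above, c = upper-left byte value."""
    p = a + b - c                            # initial estimate
    pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
    # Return the neighbour nearest the estimate; ties break left, then above.
    if pa <= pb and pa <= pc:
        return a
    if pb <= pc:
        return b
    return c

print(paeth_predictor(10, 20, 10))   # 20: the estimate p = 20 matches "above"
```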
Compare to the version of DPCM used in lossless JPEG, and to the discrete wavelet transform using 1×2, 2×1, or (for the Paeth predictor) 2×2 windows and Haar wavelets.
Compression is further improved by choosing filter types adaptively on a line-by-line basis. This improvement, and a heuristic method of implementing it commonly used by PNG-writing software, were created by Lee Daniel Crocker, who tested the methods on many images during the creation of the format; the choice of filter is a component of file size optimization, as discussed below.
If interlacing is used, each stage of the interlacing is filtered separately, meaning that the image can be progressively rendered as each stage is received; however, interlacing generally makes compression less effective.
Interlacing
PNG offers an optional 2-dimensional, 7-pass interlacing scheme—the Adam7 algorithm. This is more sophisticated than GIF's 1-dimensional, 4-pass scheme, and allows a clearer low-resolution image to be visible earlier in the transfer, particularly if interpolation algorithms such as bicubic interpolation are used.
However, the 7-pass scheme tends to reduce the data's compressibility more than simpler schemes.
Animation
PNG itself does not support animation. MNG is an extension to PNG that does; it was designed by members of the PNG Group. MNG shares PNG's basic structure and chunks, but it is significantly more complex and has a different file signature, which automatically makes it incompatible with standard PNG decoders; as a result, MNG has almost no support, or had support dropped, in most web browsers and applications.
The complexity of MNG led to the proposal of APNG by developers of the Mozilla Foundation. It is based on PNG, supports animation, and is simpler than MNG. APNG offers fallback to single-image display for PNG decoders that do not support APNG. The APNG format is now widely supported: Firefox has supported it since version 3.0, as have all versions of Pale Moon; Opera has supported APNG since its engine was changed to Blink; Safari on iOS 8 and Safari 8 for OS X Yosemite use the WebKit engine, which supports APNG; Chromium 59.0 added APNG support, followed by Google Chrome; and Microsoft Edge supports APNG with its new Chromium-based engine.
The PNG Group decided in April 2007 not to embrace APNG. Several alternatives were under discussion: ANG, aNIM/mPNG, "PNG in GIF" and its subset "RGBA in GIF". However, only APNG is currently supported by all major web browsers.
Examples
The file is displayed in the fashion of hex editors, with byte values shown in hex format on the left side, and their equivalent characters from ISO-8859-1 (with unrecognized and control characters replaced with periods) on the right side. Additionally, the PNG signature and individual chunks are marked with colors. Note that they are easy to identify because of their human-readable type names (in this example PNG, IHDR, IDAT, and IEND).
Advantages
Reasons to use this International Standard may be:
Portability: Transmission is independent of the software and hardware platform.
Completeness: it is possible to represent truecolor, indexed-color, and grayscale images.
Serial coding and decoding: data streams can be generated and read serially, so that images can be produced and displayed while the stream is being received.
Progressive presentation: data streams can be transmitted that are initially an approximation of the entire image and that improve progressively as the stream is received.
Robustness to transmission errors: transmission errors in a data stream can be detected reliably.
Losslessness: filtering and compression preserve all information.
Efficiency: progressive image presentation, compression, and filtering are all designed for efficient decoding and presentation.
Compression: images can be compressed efficiently and consistently.
Simplicity: the standard is easy to implement.
Interchangeability: any standard-conforming PNG decoder can read all PNG data streams.
Flexibility: future extensions and private additions are allowed without affecting the previous point.
Freedom from legal restrictions: the algorithms used are free and accessible.
Comparison with other file formats
Graphics Interchange Format (GIF)
On small images, GIF can achieve greater compression than PNG (see the section on filesize, below).
On most images, except for the above case, a GIF file has a larger size than an indexed PNG image.
PNG gives a much wider range of transparency options than GIF, including alpha channel transparency.
Whereas GIF is limited to 8-bit indexed color, PNG gives a much wider range of color depths, including 24-bit (8 bits per channel) and 48-bit (16 bits per channel) truecolor, allowing for greater color precision, smoother fades, etc. When an alpha channel is added, up to 64 bits per pixel (before compression) are possible.
When converting an image from the PNG format to GIF, the image quality may suffer due to posterization if the PNG image has more than 256 colors.
GIF intrinsically supports animated images. PNG supports animation only via unofficial extensions (see the section on animation, above).
PNG images are less widely supported by older browsers. In particular, IE6 has limited support for PNG.
JPEG
The JPEG (Joint Photographic Experts Group) format can produce a smaller file than PNG for photographic (and photo-like) images, since JPEG uses a lossy encoding method specifically designed for photographic image data, which is typically dominated by soft, low-contrast transitions, and an amount of noise or similar irregular structures. Using PNG instead of a high-quality JPEG for such images would result in a large increase in filesize with negligible gain in quality. In comparison, when storing images that contain text, line art, or graphics – images with sharp transitions and large areas of solid color – the PNG format can compress image data more than JPEG can. Additionally, PNG is lossless, while JPEG produces visual artifacts around high-contrast areas. (Such artifacts depend on the settings used in the JPG compression; they can be quite noticeable when a low-quality [high-compression] setting is used.) Where an image contains both sharp transitions and photographic parts, a choice must be made between the two effects. JPEG does not support transparency.
JPEG's lossy compression also suffers from generation loss, where repeatedly decoding and re-encoding an image to save it again causes a loss of information each time, degrading the image. Because PNG is lossless, it is suitable for storing images to be edited. While PNG is reasonably efficient when compressing photographic images, there are lossless compression formats designed specifically for photographic images, lossless WebP and Adobe DNG (digital negative) for example. However these formats are either not widely supported, or are proprietary. An image can be stored losslessly and converted to JPEG format only for distribution, so that there is no generation loss.
While the PNG specification does not explicitly include a standard for embedding Exif image data from sources such as digital cameras, the preferred method for embedding EXIF data in a PNG is to use the non-critical ancillary chunk label eXIf.
Early web browsers did not support PNG images; JPEG and GIF were the main image formats. JPEG was commonly used when exporting images containing gradients for web pages, because of GIF's limited color depth. However, JPEG compression causes a gradient to blur slightly. A PNG format reproduces a gradient as accurately as possible for a given bit depth, while keeping the file size small. PNG became the optimal choice for small gradient images as web browser support for the format improved. No images at all are needed to display gradients in modern browsers, as gradients can be created using CSS.
JPEG-LS
JPEG-LS is an image format by the Joint Photographic Experts Group, though far less widely known and supported than the lossy JPEG format discussed above. It is directly comparable with PNG. On the Waterloo Repertoire ColorSet, a standard set of test images (unrelated to the JPEG-LS conformance test set), JPEG-LS generally performs better than PNG by 10–15%, but on some images PNG performs substantially better, on the order of 50–75%. Thus, if both of these formats are options and file size is an important criterion, they should both be considered, depending on the image.
TIFF
Tagged Image File Format (TIFF) is a format that incorporates an extremely wide range of options. While this makes TIFF useful as a generic format for interchange between professional image editing applications, it makes adding support for it to applications a much bigger task and so it has little support in applications not concerned with image manipulation (such as web browsers). The high level of extensibility also means that most applications provide only a subset of possible features, potentially creating user confusion and compatibility issues.
The most common general-purpose, lossless compression algorithm used with TIFF is Lempel–Ziv–Welch (LZW). This compression technique, also used in GIF, was covered by patents until 2003. TIFF also supports the compression algorithm PNG uses (i.e. Compression Tag 0x0008, 'Adobe-style') with medium usage and support by applications. TIFF also offers special-purpose lossless compression algorithms like CCITT Group IV, which can compress bilevel images (e.g., faxes or black-and-white text) better than PNG's compression algorithm.
PNG supports non-premultiplied alpha only whereas TIFF also supports "associated" (premultiplied) alpha.
Software support
The official reference implementation of the PNG format is the programming library libpng. It is published as free software under the terms of a permissive free software license. Therefore, it is usually found as an important system library in free operating systems.
Bitmap graphics editor support for PNG
The PNG format is widely supported by graphics programs, including Adobe Photoshop, Corel's Photo-Paint and Paint Shop Pro, the GIMP, GraphicConverter, Helicon Filter, ImageMagick, Inkscape, IrfanView, Pixel image editor, Paint.NET and Xara Photo & Graphic Designer and many others. Some programs bundled with popular operating systems which support PNG include Microsoft's Paint and Apple's Photos/iPhoto and Preview, with the GIMP also often being bundled with popular Linux distributions.
Adobe Fireworks (formerly by Macromedia) uses PNG as its native file format, allowing other image editors and preview utilities to view the flattened image. However, Fireworks by default also stores metadata for layers, animation, vector data, text and effects. Such files should not be distributed directly. Fireworks can instead export the image as an optimized PNG without the extra metadata for use on web pages, etc.
Web browser support for PNG
PNG support first appeared in 1997, in Internet Explorer 4.0b1 (32-bit only for NT), and in Netscape 4.04.
Despite calls by the Free Software Foundation and the World Wide Web Consortium (W3C), tools such as gif2png, and campaigns such as Burn All GIFs, PNG adoption on websites was fairly slow due to late and buggy support in Internet Explorer, particularly regarding transparency.
PNG compatible browsers include: Apple Safari, Google Chrome, Mozilla Firefox, Opera, Camino, Internet Explorer 7 (still numerous issues), Internet Explorer 8 (still some issues), Internet Explorer 9 and many others. For the complete comparison, see Comparison of web browsers (Image format support).
Versions of Internet Explorer (Windows) below 9.0 (released 2011) in particular have numerous problems that prevent them from correctly rendering PNG images.
4.0 crashes on large PNG chunks.
4.0 does not include the functionality to view .png files, but there is a registry fix.
5.0 and 5.01 have broken OBJECT support.
5.01 prints palette images with black (or dark gray) backgrounds under Windows 98, sometimes with radically altered colors.
6.0 fails to display PNG images of 4097 or 4098 bytes in size.
6.0 cannot open a PNG file that contains one or more zero-length IDAT chunks. This issue was first fixed in security update 947864 (MS08-024). For more information, see this article in the Microsoft Knowledge Base: 947864 MS08-024: Cumulative Security Update for Internet Explorer.
6.0 sometimes completely loses ability to display PNGs, but there are various fixes.
6.0 and below have broken alpha-channel transparency support (will display the default background color instead).
7.0 and below cannot combine 8-bit alpha transparency AND element opacity (CSS – filter: Alpha (opacity=xx)) without filling partially transparent sections with black.
8.0 and below have inconsistent/broken gamma support.
8.0 and below don't have color-correction support.
Operating system support for PNG icons
PNG icons have been supported in most distributions of Linux since at least 1999, in desktop environments such as GNOME. In 2006, Microsoft Windows support for PNG icons was introduced in Windows Vista. PNG icons are supported in AmigaOS 4, AROS, macOS, iOS and MorphOS as well. In addition, Android makes extensive use of PNGs.
File size and optimization software
PNG file size can vary significantly depending on how it is encoded and compressed; this is discussed and a number of tips are given in PNG: The Definitive Guide.
Compared to GIF
Compared to GIF files, a PNG file with the same information (256 colors, no ancillary chunks/metadata), compressed by an effective compressor, is normally smaller than the GIF image. Depending on the file and the compressor, PNG may range from somewhat smaller (10%) to significantly smaller (50%) to somewhat larger (5%), but is rarely significantly larger for large images. This is attributed to the performance of PNG's DEFLATE compared to GIF's LZW, and because the added precompression layer of PNG's predictive filters takes account of the 2-dimensional image structure to further compress files; as filtered data encodes differences between pixels, it will tend to cluster closer to 0, rather than being spread across all possible values, and thus be more easily compressed by DEFLATE. However, some versions of Adobe Photoshop, CorelDRAW and MS Paint provide poor PNG compression, creating the impression that GIF is more efficient.
File size factors
PNG files vary in size due to a number of factors:
color depth Color depth can range from 1 to 64 bits per pixel.
ancillary chunks PNG supports metadata—this may be useful for editing, but unnecessary for viewing, as on websites.
interlacing As each pass of the Adam7 algorithm is separately filtered, this can increase file size.
filter As a precompression stage, each line is filtered by a predictive filter, which can change from line to line. As the ultimate DEFLATE step operates on the whole image's filtered data, one cannot optimize this row-by-row; the choice of filter for each row is thus potentially very variable, though heuristics exist.
compression With additional computation, DEFLATE compressors can produce smaller files.
There is thus a filesize trade-off between, on the one hand, high color depth, maximal metadata (including color space information, together with information that does not affect display), interlacing, and speed of compression, which all yield large files, and, on the other hand, lower color depth, fewer or no ancillary chunks, no interlacing, and tuned but computationally intensive filtering and compression, which yield small files. For different purposes, different trade-offs are chosen: a maximal file may be best for archiving and editing, while a stripped down file may be best for use on a website; similarly, fast but poor compression is preferred when repeatedly editing and saving a file, while slow but high compression is preferred when a file is stable: when archiving or posting.
Interlacing is a trade-off: it dramatically speeds up early rendering of large files (improves latency), but may increase file size (decrease throughput) for little gain, particularly for small files.
Lossy PNG compression
Although PNG is a lossless format, PNG encoders can preprocess image data in a lossy fashion to improve PNG compression. For example, quantizing a truecolor PNG to 256 colors allows the indexed color type to be used for a likely reduction in file size.
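The potential saving is visible even before compression: a hypothetical 100×100 image stored as 8-bit truecolor needs three bytes per pixel, while the same image quantized to 256 colors needs one index byte per pixel plus a 768-byte palette. A sketch:

```python
def raw_image_bytes(width: int, height: int, channels: int, bit_depth: int) -> int:
    """Pre-compression image data size: one filter byte per row plus packed samples."""
    row = (width * channels * bit_depth + 7) // 8
    return height * (1 + row)

truecolor = raw_image_bytes(100, 100, 3, 8)            # color type 2
indexed = raw_image_bytes(100, 100, 1, 8) + 256 * 3    # color type 3 plus PLTE
print(truecolor, indexed)  # 30100 10868
```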
Image editing software
Some programs are more efficient than others when saving PNG files; this relates to the implementation of PNG compression used by the program.
Many graphics programs (such as Apple's Preview software) save PNGs with large amounts of metadata and color-correction data that are generally unnecessary for Web viewing. Unoptimized PNG files from Adobe Fireworks are also notorious for this since they contain options to make the image editable in supported editors. Also CorelDRAW (at least version 11) sometimes produces PNGs which cannot be opened by Internet Explorer (versions 6–8).
Adobe Photoshop's performance on PNG files has improved in the CS Suite when using the Save For Web feature (which also allows explicit PNG/8 use).
Adobe's Fireworks saves larger PNG files than many programs by default. This stems from the mechanics of its Save format: the images produced by Fireworks' save function include large, private chunks, containing complete layer and vector information. This allows further lossless editing. When saved with the Export option, Fireworks' PNGs are competitive with those produced by other image editors, but are no longer editable as anything but flattened bitmaps. Fireworks is unable to save size-optimized vector-editable PNGs.
Other notable examples of poor PNG compressors include:
Microsoft's Paint for Windows XP
Microsoft Picture It! Photo Premium 9
Poor compression increases the PNG file size but does not affect the image quality or compatibility of the file with other programs.
When the color depth of a truecolor image is reduced to an 8-bit palette (as in GIF), the resulting image data is typically much smaller. Thus a truecolor PNG is typically larger than a color-reduced GIF, although PNG could store the color-reduced version as a palettized file of comparable size. Conversely, some tools, when saving images as PNGs, automatically save them as truecolor, even if the original data use only 8-bit color, thus bloating the file unnecessarily. Both factors can lead to the misconception that PNG files are larger than equivalent GIF files.
Optimizing tools
Various tools are available for optimizing PNG files; they do this by:
(optionally) removing ancillary chunks,
reducing color depth, either:
use a palette (instead of RGB) if the image has 256 or fewer colors,
use a smaller palette, if the image has 2, 4, or 16 colors, or
(optionally) lossily discard some of the data in the original image,
optimizing line-by-line filter choice, and
optimizing DEFLATE compression.
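The steps above can be sketched in a few lines with the Pillow library (an assumption for illustration — real optimizers such as pngcrush operate at the chunk level). Re-saving drops most ancillary chunks, quantizing reduces color depth when the image has few enough colors, and `optimize=True` makes zlib search harder for a small DEFLATE stream:

```python
from PIL import Image

def optimize_png(src_path, dst_path, max_colors=256):
    # Minimal optimization pass: strip ancillary data, reduce color
    # depth if possible, and recompress with more DEFLATE effort.
    img = Image.open(src_path)
    if img.mode == "RGB" and img.getcolors(max_colors):
        # getcolors() returns None when the image has more than
        # max_colors distinct colors, so this only palettizes when safe.
        img = img.quantize(max_colors)
    img.save(dst_path, format="PNG", optimize=True)
```

Note this pass is lossless only for images that already fit in the palette; the dedicated tools below go further by trying multiple filter and compression strategies.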
Tool list
pngcrush is the oldest of the popular PNG optimizers. It allows for multiple trials on filter selection and compression arguments, and finally chooses the smallest one. This working model is used in almost every PNG optimizer.
advpng and the similar advdef utility in the AdvanceCOMP package recompress the PNG IDAT. Different DEFLATE implementations are applied depending on the selected compression level, trading between speed and file size: zlib at level 1, libdeflate at level 2, 7-zip's LZMA DEFLATE at level 3, and zopfli at level 4.
pngout uses the author's own deflater (also used in the author's zip utility, kzip), while retaining full color-reduction and filtering facilities. However, pngout does not allow several filter trials in a single run. To automate trials, it can be used through its commercial GUI version, pngoutwin, or with a wrapper; it can also act as a re-deflater that recompresses with its own deflater while keeping the existing line-by-line filters.
zopflipng likewise uses its own deflater, zopfli. It has all the optimizing features pngcrush has (including automated trials) while providing a very effective, but slow, deflater.
A simple comparison of their features is listed below.
Before zopflipng was available, a good practical approach to PNG optimization was to use a combination of two tools in sequence for optimal compression: one which optimizes filters (and removes ancillary chunks), and one which optimizes the DEFLATE stream. Although pngout offers both, only one type of filter can be specified in a single run; it can therefore be combined with a wrapper tool, or with pngcrush, with pngout acting as a re-deflater, like advdef.
Ancillary chunk removal
For removing ancillary chunks, most PNG optimization tools have the ability to remove all color correction data from PNG files (gamma, white balance, ICC color profile, standard RGB color profile). This often results in much smaller file sizes. For example, the following command line options achieve this with pngcrush:
pngcrush -rem gAMA -rem cHRM -rem iCCP -rem sRGB InputFile.png OutputFile.png
Filter optimization
pngcrush, pngout, and zopflipng all offer options applying one of the filter types 0–4 globally (using the same filter type for all lines) or with a "pseudo filter" (numbered 5), which for each line chooses one of the filter types 0–4 using an adaptive algorithm. Zopflipng offers three different adaptive methods, including a brute-force search that attempts to optimize the filtering.
pngout and zopflipng provide an option to preserve/reuse the line-by-line filter set present in the input image.
pngcrush and zopflipng provide options to try different filter strategies in a single run and choose the best. The freeware command line version of pngout doesn't offer this, but the commercial version, pngoutwin, does.
DEFLATE optimization
Zopfli and the LZMA SDK provide DEFLATE implementations that can produce higher compression ratios than the zlib reference implementation at the cost of performance. AdvanceCOMP's advpng and advdef can use either of these libraries to re-compress PNG files. Additionally, PNGOUT contains its own proprietary DEFLATE implementation.
advpng doesn't have an option to apply filters and always uses filter 0 globally (leaving the image data unfiltered); therefore it should not be used where the image benefits significantly from filtering. By contrast, advdef from the same package doesn't deal with PNG structure and acts only as a re-deflater, retaining any existing filter settings.
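The speed-versus-size trade-off these re-deflaters exploit can be illustrated with Python's built-in zlib module: the same input, compressed with zlib's fastest and strongest effort settings, decodes identically but occupies different amounts of space.

```python
import zlib

# Compress identical data at DEFLATE effort levels 1 (fastest) and 9 (best).
data = b"PNG scanline data " * 5000
fast = zlib.compress(data, 1)  # speed-oriented encoding
best = zlib.compress(data, 9)  # size-oriented encoding

# Both streams decompress to the exact same bytes; only the size differs.
assert zlib.decompress(fast) == zlib.decompress(best) == data
print(len(fast), len(best))  # the level-9 stream is usually smaller
```

Tools like zopfli and the LZMA SDK's DEFLATE push this further than zlib's level 9, at a much higher CPU cost.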
Icon optimization
Since icons intended for Windows Vista and later versions may contain PNG subimages, the optimizations can be applied to them as well. At least one icon editor, Pixelformer, is able to perform a special optimization pass while saving ICO files, thereby reducing their sizes. FileOptimizer (mentioned above) can also handle ICO files.
Icons for macOS may also contain PNG subimages, but no equivalent optimization tool is available for them.
See also
Computer graphics, including:
Comparison of browser engines (graphics support)
Image editing
Image file formats
Related graphics file formats
APNG Animated PNG
JPEG Network Graphics (JNG)
Multiple-image Network Graphics (MNG)
Similar file formats
X PixMap for portable icons
Scalable Vector Graphics
WebP
IrfanView
Notes
References
Further reading
External links
PNG Home Site
libpng Home Page
The Story of PNG by Greg Roelofs
Test inline PNG images
More information about PNG color correction
The GD-library to generate dynamic PNG-files with PHP
PNG Adam7 interlacing
Encoding Web Shells in PNG files: Encoding human readable data inside an IDAT block.
Portable Network Graphics
Computer-related introductions in 1996
Graphics standards
Image compression
ISO standards
Open formats
Raster graphics file formats
World Wide Web Consortium standards
|
25476897
|
https://en.wikipedia.org/wiki/XV%20Gymnasium
|
XV Gymnasium
|
Fifteenth Gymnasium (), previously called, and still better known as MIOC (Matematičko informatički obrazovni centar; Mathematical Informatical Educational Center) is a public high school in Zagreb, Croatia. It specializes in mathematics and computer science.
History
The school was founded as Fifteenth Mathematical Gymnasium (XV. matematička gimnazija) in 1964. It was among the first schools in the former Yugoslavia specializing in mathematics, along with the Mathematical Gymnasium (Matematička gimnazija) in Belgrade.
The first principal was Stefanija Bakarić, sister of Vladimir Bakarić, one of the leading politicians in the ruling League of Communists of Yugoslavia and the chairman of the League of Communists of Croatia at the time. The original curriculum was composed with help from acclaimed university professors Svetozar Kurepa, Branislav Marković and Vladimir Devide. At the beginning, most of the teachers were university professors.
In 1965, it became the first school in Croatia to have information science as a school subject. Students first got the chance to work on actual computers in 1980.
In 1977, the school, now in a new building, merged with Seventh Gymnasium (VII. gimnazija) and Fourteenth Gymnasium (XIV. gimnazija, also then known as 25. maj). The newly founded school was named Education Center for Mathematics and Computer Science (Matematičko informatički obrazovni centar), abbreviated as MIOC. The school is still informally widely known under that name.
Denis Kuljiš, a well-known Croatian political columnist and opinion maker, himself an alumnus of XV. gimnazija, argues that at this point the quality of the school started to decline, since the teachers at the other schools involved in the merger were not up to its standards.
In 1982, MIOC was renamed to MIOC Vladimir Popović.
In 1991, after the fall of Communism, the school changed its name and was once again known as Fifteenth Gymnasium.
In 2007, the management of the school planned to hold a celebration of its thirty years of existence which sparked strong protests from alumni who graduated before 1977. In the end, the school held the celebration while mentioning both 1964 and 1977 as important dates in the history of the school.
Building
The school moved to the current building in Jordanovac, which is in the Maksimir neighborhood of Zagreb, in the seventies. Before that, it was located in an older building in Sutlanska street. The new building was built especially for this purpose and thus contains some of the infrastructure that schools in Zagreb, Croatia and the whole Balkan region lack. There are two gyms, an outdoor sports center, a cafeteria and a movie theater.
In 2008, the third wing of the building was opened. With an increased number of classrooms, the classes now take place only in the morning while afternoons are reserved for extracurricular activities, a relative rarity among Croatian schools.
Curriculum
There are around 1200 students divided into two parts: the so-called "national program" and the International Baccalaureate program.
In the national program, the students follow the curriculum of the mathematical - natural scientific gymnasium, as outlined by the Ministry of Science, Education and Sports (Ministarstvo znanosti, obrazovanja i športa). There are three sub-programs: the "Information Science" program, which has an additional weekly hour of mathematics and an additional weekly hour of computer science; the "Mathematics" program, with two additional weekly hours of mathematics; and the "General" program, with two weekly hours of a second foreign language, usually German.
The International Baccalaureate program, implemented at the school in 1991, is not publicly funded but is instead financed by student tuitions. In it, the school follows the usual IB curriculum, divided into two segments: IB Middle Years Programme (grades 9 and 10) and IB Diploma Programme (grades 11 and 12). Around 200 Croatian and foreign students are in IB classes. All the classes are conducted in English.
Extracurricular Activities and Successes
University students work alongside full-time teachers to prepare the students, a rather uncommon arrangement used in only a few Croatian schools and a few more schools in the Balkans.
Among the most notable international results are multiple successes, both in team and individual events, at:
International Olympiad in Informatics
International Mathematical Olympiad
International Physics Olympiad
International Astronomy Olympiad
International Junior Science Olympiad.
Also noted in the local media, arguably even more than the more important successes mentioned above, were the successes in American Computer Science League, controversially painted in the media as "triumph of knowledge over wealth".
Most of the students participating in the international and top-tier national competitions come from the publicly funded national program.
Cooperation
Besides cooperating with many governmental and non-governmental organizations dealing with education in Zagreb, the school is noted for its long-standing friendship with Second Gymnasium (II. gimnazija) in Maribor, Slovenia. The students of Second Gymnasium participated in the school celebrations in 2007 with the performance of the musical We Will Rock You.
Also, the school runs an exchange and cooperation program with Kasetsart University Laboratory School, one of the more notable schools in Bangkok, Thailand.
Alumni
As Fifteenth Gymnasium specializes in mathematics and computer science, it is understandable that most alumni of the school continue their studies at the Faculty of Electrical Engineering and Computing and at the Faculty of Science at the University of Zagreb. Some of them continue working in the science industry after university graduation. There are many alumni working at aforementioned faculties ranging from teaching assistants to academics, such as Marko Tadić, a professor at the Department of Mathematics. Branko Jeren, who was the Minister of Science and Technology of Croatia in the mid-nineties, who is also currently a professor at Faculty of Electrical Engineering and Computing, is also an alumnus of the school.
Many MIOC graduates went abroad, either immediately after finishing high school or later. Perhaps the most notable scientist who graduated from MIOC is Marin Soljačić, a physicist currently residing in the United States. Some later returned to Croatia, but continued working internationally, such as Bojan Žagrović.
As alumni culture is not well-developed in the Balkans, it is difficult to compose a complete list of notable alumni of Fifteenth Gymnasium, especially in areas other than science. Besides Denis Kuljiš, a known political columnist and reporter from Zagreb, some national TV personalities and actors also graduated from Fifteenth Gymnasium. Examples include Filip Brajković, Amar Bukvić (who graduated in the International Baccalaureate program) and Domagoj Novokmet, who acted as a host for the celebration of the school held in 2007 in Vatroslav Lisinski Concert Hall.
Football player Niko Kranjčar also graduated from MIOC. He is probably the most notable sportsperson known to be an alumnus of the school. Andrej Kramarić, another professional footballer from Zagreb, graduated from the school in 2010.
MIOC Alumni Foundation
In 2020, five of the school's alumni, along with the school itself, established the MIOC Alumni Foundation. The Foundation's main goal is to provide students with financial and non-financial support, as well as insight into the colleges the school's students most often apply to, both in Croatia and abroad. The Foundation's vision and mission can be summarized in four words: "To change someone's life".
References
Educational institutions established in 1964
Gymnasium, 15
Education in Zagreb
1964 establishments in Croatia
Gymnasiums in Croatia
Buildings and structures in Zagreb
|
27782728
|
https://en.wikipedia.org/wiki/Irit%20Dinur
|
Irit Dinur
|
Irit Dinur (Hebrew: אירית דינור) is an Israeli mathematician. She is professor of computer science at the Weizmann Institute of Science. Her research is in foundations of computer science and in combinatorics, and especially in probabilistically checkable proofs and hardness of approximation.
Biography
Irit Dinur earned her doctorate in 2002 from the school of computer science in Tel Aviv University, advised by Shmuel Safra; her thesis was entitled On the Hardness of Approximating the Minimum Vertex Cover and The Closest Vector in a Lattice. She joined the Weizmann Institute after visiting the Institute for Advanced Study in Princeton, New Jersey, NEC, and the University of California, Berkeley.
In 2006, Dinur published a new proof of the PCP theorem that was significantly simpler than previous proofs of the same result.
Awards and recognition
In 2007, she was given the Michael Bruno Memorial Award in Computer Science by Yad Hanadiv. She was a plenary speaker at the 2010 International Congress of Mathematicians. In 2012, she won the Anna and Lajos Erdős Prize in Mathematics, given by the Israel Mathematical Union. She was the William Bentinck-Smith Fellow at Harvard University in 2012–2013. In 2019, she won the Gödel Prize for her paper "The PCP theorem by gap amplification".
References
External links
Personal HomePage
Turing Centennial Post 1: Irit Dinur, guest post on Luca Trevisan's blog "in theory" concerning Dinur's experiences as a lesbian academic
Israeli mathematicians
Weizmann Institute of Science faculty
Living people
Tel Aviv University alumni
21st-century mathematicians
21st-century women mathematicians
Year of birth missing (living people)
Gödel Prize laureates
|
8245873
|
https://en.wikipedia.org/wiki/The%20Sims%203
|
The Sims 3
|
The Sims 3 is a 2009 life simulation video game developed by the Redwood Shores studio of Maxis and published by Electronic Arts. Part of The Sims series, it is the sequel to The Sims 2. It was released on June 2, 2009, for macOS, Microsoft Windows and smartphone versions. Console versions were released for PlayStation 3, Xbox 360, and Nintendo DS in October 2010 and a month later for Wii. The Windows Phone version was released on October 15, 2010. A Nintendo 3DS version, released on March 27, 2011, was one of the platform's launch titles.
The game follows the same premises as its predecessors The Sims and The Sims 2 and is based around a life simulation where the player controls the actions and fates of its characters, the Sims, as well as their houses and neighbourhoods. The Sims 3 expands on previous games in having an open world system, where neighbourhoods are completely open for the sims to move around without any loading screens. A new design tool is introduced, the Create-a-Style tool, which allows every object, clothing and hair to be redesigned in any color, material or design pattern.
The Sims 3 was a commercial success, selling 1.4 million copies in its first week. It received mostly positive reviews from critics, with an 86/100 score from aggregator Metacritic indicating "generally favorable" reviews. The game has sold over ten million copies worldwide since its 2009 release, making it one of the best-selling PC games of all time. The Sims 3 has additionally received eleven expansion packs and nine stuff packs. A sequel, The Sims 4, was released in September 2014 for PC and in November 2017 for consoles to mixed reviews, largely due to the removal of the open world and Create-a-Style tool and lack of content.
Gameplay
As in previous games of the franchise, in The Sims 3 players control their own Sims' activities and relationships. The gameplay is open-ended and does not have a defined goal. The sims live in neighbourhoods, now being officially referred to as 'worlds', which can be customized, allowing the player to create their houses, community lots, and sims, although many of these come with the core game.
These worlds are now 'seamless', allowing all sims to move around freely without any loading screen in between lots, as happened in the previous games. Thus, the neighbourhood includes community lots which can be leisure lots (such as parks, gyms, and movie theatres) and job lots (town hall, hospital, businesses). Since the neighbourhood is open, the game includes the "Story Progression" mechanic, which allows all Sims in the neighborhood to autonomously continue their lives without the player ever controlling them. This helps to advance the story of the whole neighbourhood instead of only the active playing units. Sims live for a set duration of time that is adjustable by the player and advances through several life stages (baby, toddler, child, teen, young adult, adult, and elder). Sims can die of old age or they can die prematurely from causes such as fire, starvation, drowning, and electrocution.
The primary world in the game is Sunset Valley (in the console version, the main world is Moonlight Bay), while an additional world called Riverview can be obtained for free. All expansion packs to date (except Generations and Seasons) have included a world, and additional worlds can be bought at The Sims 3 Store for SimPoints. Additionally, Sunset Valley and a few of the other worlds available have some degree of connection to the storyline set up by The Sims and The Sims 2. In-game, Sunset Valley is stated to be the same town as the default neighborhood in The Sims, and Pleasantview from The Sims 2, although set twenty-five and fifty years earlier, respectively. Several pre-made characters from other Sims games appear throughout The Sims 3's worlds, many of them in younger form.
Career opportunities like working overtime or completing tasks can yield a pay raise, cash bonus, or relationship boost. Challenges occur randomly based on each Sim's lifestyle, like relationships, skills, and jobs. Skill opportunities are requests from neighbors or community members for Sims to solve problems using their acquired skills, in exchange for cash or relationship rewards.
The new Wishes reward system replaces the Wants and Fears system in its predecessor The Sims 2. Fulfilling a Sim's wishes contributes to the Sim's Lifetime Happiness score, allowing players to purchase Lifetime Rewards for the cost of those Lifetime Happiness points.
The game introduces a big change in terms of customization with the "Create-a-Style" tool. In this way, every object or piece of clothing in the game is completely customizable in terms of color (which can be picked from a color wheel), material (plastic, stone, fabric, wood...) or design pattern.
Create-a-Sim
The Sims 3 introduces many more character customization options than its predecessor The Sims 2. Like the previous game, the player can customize age, body build, skin color, hairstyles, clothing and personality. A new life stage is included between adolescence and adulthood: young adulthood. This stage was introduced in The Sims 2 only during university period, but is now the main life stage for the game. Additional options were added in expansions and updates, such as tattoos, breast size, and muscle definition. The Sims 3 offers a wider range of skin tones than its predecessors, ranging from realistic light and dark skin tones to fantasy green and purple colors.
The game builds upon a new personality system. As opposed to previous games, where personalities consisted of sliders and a limited set of personality points to distribute among them, The Sims 3 introduces a trait system: adult sims can have up to 5 personality traits to pick from a list. These traits can be mental, physical, social or influenced by lifestyle and jobs. The traits will determine different actions the sims can make, as well as behaviors and wishes.
Skills
The sims can learn skills from interacting with different objects. Skills improve gradually in 10 levels. Skill improvements are useful for achieving career goals, as well as unlocking new possibilities for those activities which require the skills, for example, a high gardening level allows the sims to plant different rare seeds. The basic skills include Logic, Cooking, Painting, Gardening, Writing, Guitar, Athletic, Handiness, Charisma and Fishing. New skills were later added in expansion packs.
Careers
Many of the careers from The Sims 2 and The Sims are back in The Sims 3. The careers in the core game are Business, Culinary, Criminal, Education, Journalism, Law Enforcement, Medical, Military, Music, Political, Science, and Professional Sports, as well as part-time jobs in the book shop, supermarket or spa, which can be accomplished by both adults and teenagers. Each one of the jobs takes place in a community lot of the neighbourhood. However, these lots are only "rabbit-hole" buildings, with an external façade, but the player cannot access them and is not able to see what happens inside. Thus, jobs are automatic in the game, even if the player will sometimes receive challenges and questions with different options to have more control over the sims' career performance. Advancing in a career still depends on mood and skills, but with the addition that relationships with colleagues/boss and even certain goals that have to be fulfilled. Players can control if the sims "Work Hard", "Take It Easy", "Suck Up To Boss", etc., thus affecting their performance. A new feature The Sims 3 offers is branching careers, which allows Sims to choose a certain path in their career (such as a Sim in the Music career can eventually choose to specialize in Symphonic music or Rock). These branches are generally offered around level 6 of a career, depending on which career the Sim is working.
The Ambitions expansion pack includes brand new professions that are actually playable: Firefighter, Ghost Hunter, Investigator, Architectural Designer, and Stylist. Some of them take place in a playable community lot, such as Firefighter or Stylist, while the others are freelance jobs. Players can search for gigs in the neighbourhood and actually accomplish them. For example, an Architectural Designer can visit other sims' houses and redecorate them in exchange for money and career performance.
Sims are also able to make a living at home through their skills such as selling their own paintings, writing novels, playing guitar for tips, or growing fruit and vegetables. Sims can also buy out businesses and receive a percentage of the profits they earn.
Build/Buy modes
As in previous games, a build/buy tool is included to design houses and community lots. The two modes retain most of the main fundamental tools from the previous games.
Build mode is used to add walls, paint them, add stairs, doors and windows, lay down flooring, create foundations, basements, pools and ponds. Some expansion packs add extra build mode features such as terrain design. Players cannot build or place objects outside the limits of the lot.
In Buy mode, the player can purchase and place down new objects, such as appliances, electronics, furniture and vehicles. Buy mode largely focuses on providing objects that are useful or necessary for the sims, allowing them to build skills, provide some sort of utility, or purely to act as house decoration. The descriptions of many of the objects available for purchase in the game involve humor, sarcasm, insults towards the player, and wit, and serves as comic relief in the game.
The build and buy modes have received their own makeover. The modes maintain the grid building system from the previous game, however, this grid is more flexible now, allowing the objects to be laid down in the middle of the tiles or without any grid help at all. A blueprint mode is added in further expansions, where pre-designed rooms are available to lay down as-is. The Create-a-Style tool can also be applied to redesign every single piece of furniture or building, changing to any color, material or design pattern.
Create-a-World
On October 29, 2009, Electronic Arts announced "Create-a-World" (CAW), which is a game world editor that allows players to create their own custom cities from scratch for use within the game. Players can customize lots, choose terrain patterns and add roads, vegetation and neighborhood accents (such as water towers and lighthouses). CAW also allows players to import designs from PNG files for use in their worlds. Users can upload their worlds to The Sims 3 Exchange for download by other players. The editor tool is offered to players as a separate download, and was released on December 16, 2009, as a beta version. EA will offer technical support and updates. Players are able to share their neighborhoods as with other content. The Create-a-World tool is available for Windows-based PCs.
Family
As this game is a life simulation, sims can have families. Players can create a family in Create-a-Sim and edit their relationships, or have their sims meet others and have children.
Young adults and adults can try for a baby. There is no "Try for Baby" option for any younger or older sims. The only exception is senior males - they can try for a baby with a female of any age, though conception can be significantly harder. In order to try for a baby, two sims of the opposite gender must have a relationship of "Romantic Interest" or higher. A lullaby-like melody will play if the sim is pregnant, although sometimes it may not play.
Sims will get the Nauseous moodlet when pregnant, and these symptoms will persist for about one sim day until they discover they are pregnant and change into default pregnancy clothes. From then until after the birth, the sim will not be able to wear their usual clothes. Pregnancy symptoms sims can experience include nausea, backache, reduced or increased appetite, and inability to partake in certain actions or exercise.
When a sim goes into labour, they can either deliver the baby at the hospital or have a home birth. Most sims go to the hospital by default, but this action can be cancelled. Furniture for babies and children can be bought in Buy mode. Babies, toddlers, children and teenagers require a lot of attention and care, and can be taken away by social services if it is not given to them. Babysitters can be hired to look after children while their parents are busy.
Development
Electronic Arts announced The Sims 3 on March 19, 2008. On January 15, 2009, Maxis invited "some of the best" custom content creators to their campus at Redwood Shores where they were hosting a Creator's Camp. Creators have been invited to spend the week exploring and creating content like Sims, houses and customized content. The Creators' work was used to pre-populate The Sims 3 Exchange.
On May 8, 2009, Maxis announced that The Sims 3 had gone gold, meaning that the game had finished the beta testing stage and was off for manufacturing ahead of its June 2009 release. On May 15, 2009, Maxis released several online interactive teaser experiences on The Sims 3 Website, including 'SimFriend', which allows users to choose a virtual Sim Friend who would email them throughout the day; 'SimSocial', which allows users to create their own Sim online and have an adventure with them; and 'SimSidekick', which allows users to surf the web with a Sim. Two weeks before the game was scheduled to be released, an unauthorized copy of the digital distribution version of the game leaked onto the Internet. Maxis later commented that the leak was a "buggy, pre-final" version, claiming that more than half of the game was missing and that the build was susceptible to crashes or worse. Reportedly, the title has seen higher copyright infringement rates than the most torrented game of 2008, Spore.
Maxis relied on user feedback from previous games. In order to create the animations in the game, so they look believable but goofy, they shot real life references of people doing tasks in outrageous ways until satisfied with the outcome.
Each character in the game was specifically created by the developers to have their own life story, wishes, dreams, and personalities. The developers spend a lot of time trying to get the world to feel seamless and the characters to feel real.
Marketing
On October 31, 2008, two teaser trailers were released by Electronic Arts featuring a comical view on the 2008 presidential election in the United States. Candidates John McCain and Barack Obama were included along with respective running mates Sarah Palin and Joe Biden.
In April 2009, Electronic Arts began to post billboards in many areas in advertisement for the game. Many of the billboards covered skyscrapers in densely populated areas, most notably Times Square in New York City. The costs of these billboards was estimated to be $10 million a month.
On March 23, 2009, The Sims 3 was threaded throughout the storyline of an episode of One Tree Hill.
On April 19, 2009, Target released a promotional disc of The Sims 3 that features a Ladytron band poster, The Sims 3 theme song music download, and a $5 off coupon. The main menu includes screensaver downloads, videos, Create-a-Sim, Create-A-House, and much more. There is no actual gameplay involved, but it describes what playing feels like.
On July 14, 2010, Ford began a promotion at The Sims 3 Store by allowing players to download their newest car at the time, the Ford Fiesta Mark VII. The car also came with a collection of street signs. On October 27, 2010, the download was updated to include the Fiesta Hatchback. The 2012 Ford Focus was made available to download on June 8, 2011. The car included one male Ford T-shirt, one female Ford T-shirt, a stereo, and a set of neon lights, all for use in-game. The Focus pack was available to download on Mac, PC, Xbox, and PlayStation platforms.
In 2012, EA partnered with American singer Katy Perry to promote The Sims 3. As part of the promotion, a special Katy Perry Collector's Edition of the Showtime expansion pack was released, as well as a Katy Perry's Sweet Treats stuff pack. Both packs incorporate concept elements from Perry's third studio album Teenage Dream (2010), with the latter including a Simlish rendition of the album's fifth single, "Last Friday Night (T.G.I.F.)", in the in-game radio.
Audio
Music for The Sims 3 was composed by Steve Jablonsky. Scores were recorded with the Hollywood Studio Symphony at Newman Scoring Stage at 20th Century Fox. Music for the game's stereo and guitar objects was produced by others, including Ladytron, Darrell Brown, Rebeca Mauleon, and Peppino D'Agostino. Additional music was produced by APM Music. Two soundtracks have been released for The Sims 3 base game, The Sims 3 Soundtrack and The Sims 3 – Stereo Jams. The soundtrack includes theme music and the Stereo Jams album includes music from stereos in game. All songs on Stereo Jams are in Simlish.
Several musical artists partnered with EA to perform some of their songs in Simlish, the language of the Sims. These artists include Katy Perry, Lady Antebellum, the Flaming Lips, Damian Marley, Depeche Mode, Nelly Furtado, and Flo Rida.
Release
On February 3, 2009, it was announced that the release date of The Sims 3 would be delayed from February 20, 2009, to June 2, 2009, in the US, and June 5, 2009, in the UK.
EA Singapore launched The Sims 3 with a large launch party which was held on June 2, 2009, at the Bugis+ shopping mall in Singapore. At the event, The Sims 3 T-shirts were available for purchase. In Sydney, Australia on June 4, 2009, a fashion event to show off the freedom and self-expression in The Sims 3 was held by Electronic Arts Australia, and included a performance by Jessica Mauboy.
The game was released as both a standard edition and a Collector's Edition. Both editions come with a coupon for 1000 Sim Points to spend at The Sims 3 Store. The standard edition contains the first release of the core game, while the Collector's Edition includes the Sims 3 core game, a 2GB The Sims Plumbob USB flash drive (preloaded with wallpapers and screensavers of the game, and the main theme as an MP3 file) with a matching green carabiner, an exclusive European-styled sports car download, a Prima tips and hints guide (not the actual Sims 3 Prima Guide), and Plumbob stickers. Those who pre-ordered the game also received a vintage sports car download, The Sims 3 Neighborhood Poster, and a quick-start reference guide. A preview CD with more information about The Sims 3, such as music samples, family descriptions, and career information, was also released.
When the game was released on June 2, 2009, it featured both versions for Microsoft Windows and macOS on the same disc, unlike the previous games in The Sims series, which were ported to Mac by Aspyr and released several months later after the initial release date. The Mac version was created with the help of Transgaming, Inc., who licensed Cider to developers in order to make their games Mac compatible by emulating Windows APIs. However, playing the game on Mac often results in poorer performance than in Windows, especially on higher-end systems. As it is a 32-bit application, it is not compatible with macOS Catalina or later. On October 2, 2019, Maxis announced that they would release an updated 64-bit version of the game, titled The Sims 3 (64-Bit & Metal), with compatibility for macOS Catalina or later. Players who register the game on Origin would get the new version for free. The Sims 3 (64-Bit & Metal) was released on October 28, 2020.
Ports
Smartphone
A now-discontinued version of The Sims 3 was released on iOS, Android, Bada, Symbian, BlackBerry OS and Windows Phone on June 2, 2009. The iPhone game works similarly to the PC version. In Create-a-Sim, instead of Lifetime Wishes, there are personas. A persona determines which lifetime wishes a Sim will have, as it is the largest factor in a Sim's personality. Sims start out with a small house, which can be expanded every five Sim days if the player can afford it. There are four careers in the town: biology, politics, business, and culinary. As in the PC version, Sims can also learn skills. There are nearly 75 wishes in the game; when all of them are fulfilled, Sims unlock the criminal career and the ability to purchase a car. In some events, such as appliances breaking down, the player must play a minigame to complete the action. The game was updated on November 30, 2010, to add support for the Retina display of newer devices.
A standalone expansion pack for the iOS version, World Adventures, was released on April 2, 2010. World Adventures adds tombs, new challenges, personas, and careers, new places to explore (Egypt, China and France), clothes and new furniture. A second standalone expansion pack, Ambitions, was released on September 16, 2010. Ambitions added new skills (firefighting, painting, parenting and sports), new community buildings, and the ability to have children. On November 6, 2009, EA announced the release of a vampire theme pack for the iPhone. The pack included Live it or Wear it Sets with Vampires and Werewolves, Castle and Campus Life themes. "Live it" sets contain car, furniture, decoration, wallpaper, and flooring. "Wear it" sets contain clothing, new CAS options, and hair styles.
Console
The Sims 3 was released to game consoles on October 26, 2010, for PlayStation 3, Xbox 360, and Nintendo DS. It was later released for Wii on November 15, 2010 and Nintendo 3DS on March 25, 2011.
The game allows the player to take on up to three friends in the Life Moments Game on the Wii, upload and download content on Xbox Live and PlayStation Network, including furnishings, houses, and player creations or experience a full life simulation on a handheld with Nintendo DS. Reviews for the game ranged from average to moderately positive. Sims can age and die, but life cycles can be disabled optionally as well. The Sims 3 features a new Karma system (similar to the influence system in The Sims 2). Sims can interact with child Sims around the neighborhood, or have children of their own. Unlike the PC version of the game, the console versions have loading times when moving from one area to another, and when accessing build/buy modes.
The PlayStation 3 and Xbox 360 versions received mostly positive reviews. On Metacritic, it holds an average score of 77 and 76 out of 100 on the PS3 and Xbox 360 versions, respectively. Game Informer gave the 360 and PS3 versions a 9/10, praising the new Karma system and The Exchange. GameSpot gave the game a 7/10, noting that "the game lacks fluidity, but is fun in its own right." In a positive review, IGN praised the game for its controls on consoles, but said they were disappointed by the fact that there is only one town in the game, as well as bugs, including a glitch where the game will not save once a certain week has been reached. The Wii version received mixed reviews and was criticized by reviewers and players for poor performance and glitches, due to the weaker Wii hardware.
Expansions, add-ons, and editions
Expansion packs
Stuff packs
Stuff Packs only include new items, e.g. furniture, clothing, and hairstyles. They do not add any new functionality to the game. Stuff Packs are compatible with both Windows and macOS, as with the main game and expansion packs.
Editions
Reception
EA reported that in its first week, The Sims 3 sold 1.4 million copies. According to EA, this was the most successful PC game launch the company had had to date. According to retail data trackers GfK Australia, The Sims 3 was the top selling game in Australia from release until June 30, 2009. Response from critics and gamers alike was generally favorable, with Metacritic calculating a score of 86/100 based on 75 reviews. PC Gamer awarded The Sims 3 a 92% and an Editor's Choice badge, calling it "The best Sims game yet". IGN PC awarded The Sims 3 an 8.9/10, stating:
GameSpot awarded The Sims 3 a score of 9.0/10, the review praised the game: "The latest Sims game is also the greatest, striking a terrific balance between the fresh and the familiar."
The game was ranked #91 in IGN's "Top 100 Modern Games". In a special edition of Edge magazine listing their 100 top videogames of all-time, The Sims 3 was number 89 on the list.
References
External links
Official website
2009 video games
Android (operating system) games
Bada games
BlackBerry games
Electronic Arts games
IOS games
Life simulation games
MacOS games
N-Gage service games
Nintendo DS games
Nintendo 3DS games
Open-world video games
PlayStation 3 games
Social simulation video games
The Sims
Video game prequels
Video game sequels
Video games developed in the United States
Video games featuring protagonists of selectable gender
Video games scored by Steve Jablonsky
Video games with expansion packs
Video games with downloadable content
Wii games
Windows games
Windows Phone games
Xbox 360 games
Video games about ghosts
Video games with alternative versions
Video games with custom soundtrack support
Shorty Award winners
|
53373895
|
https://en.wikipedia.org/wiki/Circle%20%28TV%20series%29
|
Circle (TV series)
|
Circle () is a 2017 South Korean science fiction television series starring Yeo Jin-goo, Kim Kang-woo, Gong Seung-yeon, and Lee Gi-kwang. The series features two parallel plots set in the years 2017 and 2037, both centered on twin brothers' struggle with the discovery and development of an advanced alien technology that could either be a boon or bane for the entire humanity.
It aired on tvN for 12 episodes at 23:00 KST on Mondays and Tuesdays from May 22 to June 27, 2017.
Synopsis
In the year 2007, before the main events of the drama, 11-year-old fraternal twin brothers Kim Woo-jin (Jung Ji-hoon) and Kim Bum-gyun (Kim Ye-joon), along with their neuroscientist father Dr. Kim Kyu-chul (Kim Joong-ki), witness the arrival of a female humanoid alien (Gong Seung-yeon). Out of pity and curiosity, the family brings the alien home and adopts her as a member of the family. The twins become fond of her, especially Woo-jin, who names her Byul. By chance, Dr. Kim discovers Byul's secret: she has brought with her an advanced form of technology that can record, and even lock, memories and convert them into video. In order to closely study this alien technology, Dr. Kim isolates himself from his family to work secretly on Byul in what he calls the "Beta Project." Dr. Kim never returns home, making Bum-gyun think that Byul has taken their father captive, while Woo-jin believes their father has abandoned them.
In 2017, Woo-jin (Yeo Jin-goo), now a 21-year-old college student in neuroscience, feels that a series of suicides in his university is somehow linked to Bum-gyun (An Woo-yeon), who has been searching for aliens, particularly Byul. While in pursuit of the case, Woo-jin meets Han Jung-yeon (Gong Seung-yeon), a computer science student who, to his great shock, looks very much like Byul. She is also investigating the multiple suicides, all of which she thinks are actually murder.
In 2037, South Korea is now divided into Normal Earth, a heavily polluted place where crimes are rampant, and Smart Earth, a clean, peaceful, and crime-free city. Kim Joon-hyuk (Kim Kang-woo) is a Normal Earth crimes detective who tries to get into Smart Earth to investigate a case of twin brothers who went missing in 2017. In doing so, he starts to uncover the dark truth lurking behind Smart Earth being “crime-free.”
Each episode of the drama, except the last, contains two parts: Part 1: Beta Project, which contains a plot set in 2017, and Part 2: A Great New World, which contains a plot set in 2037. The two plots merge in the twelfth and final episode of the series, titled Circle: One World.
Cast
Main
Yeo Jin-goo as Kim Woo-jin (Beta Project) / Circulate 3 (A Great New World)
Jung Ji-hoon as young Woo-jin
a 21-year-old second-year university student in the Department of Neuroscience in Handam University of Science and Technology in the year 2017; Bum-gyun/Joon-hyuk's "younger" twin brother and son of Dr. Kim Kyu-chul
In his younger years, Woo-jin was enthusiastic about UFOs and extraterrestrial life. He witnessed (with Bum-gyun and Dr. Kim) the arrival on Earth of a female humanoid alien, whom he names "Byul." His passion faded when his father disappeared with Byul. In 2017, he meets Byul (now Han Jung-yeon) once again and together they investigate the serial suicides in the university, with the help of Detective Hong Jin-hong.
Kim Kang-woo as Kim Joon-hyuk (born Kim Bum-gyun)
Ahn Woo-yeon as 21-year-old Kim Bum-gyun
Kim Ye-joon as young Bum-gyun
(A Great New World) a 41-year-old violent crimes detective from Normal Earth in the year 2037; Woo-jin's "elder" twin brother and son of Dr. Kim Kyu-chul
Joon-hyuk enters Smart Earth to investigate cases happening within the supposedly "crime-free" city, with the help of Detective Hong Jin-hong and the alien hacker Jung-yeon, a.k.a. Bluebird. Twenty years earlier, as Kim Bum-gyun, he was obsessed with aliens on Earth and had in fact spent time in a psychiatric hospital and in prison for his seemingly mad behaviour. By 2017, he is investigating the series of suicides at Handam University of Science and Technology that he believes were brought about by aliens. He loses his memories later on; Detective Hong gives him his new name, "Joon-hyuk," and he enters the police force.
Gong Seung-yeon as Han Jung-yeon (a.k.a. Byul, Bluebird)
a humanoid extraterrestrial being whose arrival on Earth was witnessed by the Kim twins and their father.
During her arrival in 2007, Byul appeared naked to the Kims and was mute, having no knowledge of any human language. She becomes close to Woo-jin, who gives her the name "Byul," and she learns to speak Korean. She is a semi-immortal being, i.e., she never ages and will look the same as long as she lives. She had brought with her an advanced form of alien technology that can record, and even lock, memories and convert them into video. She was taken for scientific research by the twins' neuroscientist father and later lost her memories. She was then adopted by Professor Han Yong-woo, who named her Han Jung-yeon.
In 2037 (in A Great New World), she becomes a hacker under the codename Bluebird, based on Maurice Maeterlinck's L'oiseau bleu.
Lee Gi-kwang as Lee Ho-soo
(A Great New World) an intelligent 26-year-old government employee in Smart Earth in the year 2037
Ho-soo at first does not believe in the possibility of crime in Smart Earth and sides with the government. He is tasked with keeping watch on Joon-hyuk, who is investigating cases in the "crime-free" city. Upon learning the truth from Joon-hyuk, he becomes confused, but later sides with Joon-hyuk and Bluebird to take down the unjust powers controlling Smart Earth.
Supporting
Part 1: Beta Project
People around Woo-jin
Jung In-sun as Park Min-young
a 21-year-old medical student who fell in love with Bum-gyun in 2017. She helped Bum-gyun, Woo-jin, Jung-yeon, and Detective Hong Jin-hong in investigating the serial suicides in the university.
Seo Hyun-chul as Detective Hong Jin-hong
He is a police detective who began investigating the serial suicides in Handam University of Science and Technology, along with Woo-jin, Jung-yeon, and Min-young. He is suspended due to his involvement with the case.
Kim Joong-ki as Dr. Kim Kyu-chul
He is the father of the twins Kim Woo-jin and Kim Bum-gyun (Kim Joon-hyuk). He is a neuroscientist specializing in treating trauma, working together with fellow neuroscientist Han Yong-woo, taking advantage of Byul's special memory-controlling technology.
Shin Dam-soo as Detective Choi
Detective Hong's junior who is working for another boss
Handam University people
Song Young-gyu as Han Yong-woo
He is a professor and the dean of the Department of Neuroscience in Handam University of Science and Technology. He once worked together with Dr. Kim Kyu-chul in treating traumas using Byul's special technology. He became Byul's adoptive father a year after she arrived on Earth.
Han Sang-jin as Park Dong-gun
an associate professor of the Department of Neuroscience in Handam University of Science and Technology
Shin Joo-hwan as Lee Hyun-suk
a third-year student and research assistant in the Department of Neuroscience in Handam University of Science and Technology.
Part 2: A Great New World
Normal Earth people
Kim Min-kyung as Dr. Park Min-young
In 2037, she has become a skilled doctor and Joon-hyuk/Bum-gyun's girlfriend. She would always help him during investigations, especially in times when her expertise is needed.
Seo Hyun-chul as Detective Hong Jin-hong
He is reinstated in the police force as a member of the cybercrimes unit. In 2037, he helps Joon-hyuk in his investigations in Smart City.
Oh Eui-shik as Lee Dong-soo
Jung Joon-won as young Dong-soo (in Beta Project)
a skilled hacker whom Joon-hyuk usually enlists during investigations. He was Woo-jin's former tutee.
Kwon Hyuk-soo as Detective Oh
Smart Earth people
Min Sung-Wook as Lee Hyun-suk
In 2037, he is now an employee in Human B
Han Sang-jin as Park Dong-gun
In 2037, he becomes South Korea's Minister of Science and Economy
Nam Myung-ryul as Yoon Hak-joo
Mayor of Smart City
Lee Hwa-kyum as Secretary Shin
Choi Ji-hun as Kim Min-ji
Others
Cha Myung-wook as Park Jin-gyu
Jeon Suk-kyung (Voice)
Kang Chung-hoon (Voice)
Ryu Eun-ji (Voice)
Park Young-jae (Voice)
Park Ji-a as Woman Waiting for Bus (Ep. 1, Part 2)
Park Eun-ji
Johyun as Announcer in A Great New World (Ep. 1)
Jang Ji-woo as Smart City Security Agent (Ep. 1, Part 2)
Jang Joo-hee
Kim Sa-hee as Doctor in Smart City
Choi Sung-jae as Humans B Central Control Room Agent
Kim Ji-sung as Kim Nan-hee
Tae-ha as Choi Soo-bin (Ep. 3 and 7, Part 2)
Lee Ho-soo's deceased girlfriend.
Ha Yoon-seo as Kang So-yoon
Noh Haeng-ha
Lee Tae-kyung as Section Chief Go
Original soundtrack
Part 1
Part 2
Ratings
In this table, the lowest and highest ratings the series received are highlighted.
Notes
References
External links
TVN (South Korean TV channel) television dramas
South Korean science fiction television series
South Korean mystery television series
2017 South Korean television series debuts
Television series by Studio Dragon
Dystopian television series
Fiction set in 2017
Fiction set in 2037
Television series about extraterrestrial life
Television series about brothers
Fiction about memory erasure and alteration
Works about twin brothers
2017 South Korean television series endings
|
27681836
|
https://en.wikipedia.org/wiki/Unofficial%20patch
|
Unofficial patch
|
An unofficial patch is a patch for a piece of software, created by a third party such as a user community without the involvement of the original developer. Similar to an ordinary patch, it alleviates bugs or shortcomings. Unofficial patches do not usually change the intended usage of the software, in contrast to other third-party software adaptations such as mods or cracks.
Motivation
A common motivation for the creation of unofficial patches is a lack of technical support from the original software developer or provider. Reasons may include:
the software product reached its defined end-of-life and/or was superseded by a successor product (planned obsolescence)
the software was originally designed to operate in a substantially different environment and may require improvement/optimization (porting)
the developer has gone out of business and is not available anymore (abandonware)
support is not economically viable (e.g. localization for small markets)
a fast solution for a time-critical problem (e.g. security holes) when an official one takes too long
the official developer is unable to cope with the problems
Types
Unofficial patches are also sometimes called fan patches or community patches, and are typically intended to repair unresolved bugs and provide technical compatibility fixes, e.g. for newer operating systems, increased display resolutions or new display formats.
While unofficial patches are most common on the PC platform, they can also be found for console games, e.g. in the context of the emulation community.
Translations
Unofficial patches are not limited to technical fixes; fan translations of software, especially games, are often created if the software has not been released locally. Fan translations are most common for Japanese role-playing games which are often not localized for Western markets.
Another variant of unofficial patches are slipstream like patches which combine official patches together, when individual patches are only available online or as small incremental updates.
Methods
The most common case is that the source code and the original development tools are not available for the software. Therefore, the faulty software's binary must be analyzed at run time by reverse engineering and debugging. Once the problem is found, a fix must be applied to the program. Sometimes only small changes to configuration files or the registry are required; sometimes binary hacks on the executable itself are needed to fix bugs. If a software development kit (e.g. for modding) is available, fixes to the content can be produced easily; otherwise the community needs to create its own tools. The resulting fixes are typically packaged into user-deployable patches (e.g. with NSIS or Inno Setup).
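As a minimal sketch of the binary-hack approach, an unofficial patch is often distributed as a small list of byte-level edits rather than as a whole modified executable, verifying the original bytes first so the fix is never applied to the wrong version of the binary. The offsets and byte values below are hypothetical, for illustration only:

```python
# Sketch: an unofficial patch as a list of (offset, expected, replacement) edits.
# All offsets and byte values are hypothetical examples, not from a real program.
PATCH = [
    (0x10, b"\x74", b"\xeb"),          # e.g. turn a conditional jump (JE) into JMP
    (0x2C, b"\x00\x01", b"\x00\x02"),  # e.g. bump a hard-coded version field
]

def apply_patch(data: bytes, patch) -> bytes:
    """Return a patched copy of `data`, checking the expected bytes first."""
    buf = bytearray(data)
    for offset, expected, replacement in patch:
        # In-place binary hacks must not change the file size.
        if len(expected) != len(replacement):
            raise ValueError("patch entries must keep the binary the same size")
        if bytes(buf[offset:offset + len(expected)]) != expected:
            raise ValueError(f"unexpected bytes at 0x{offset:x}: wrong binary version?")
        buf[offset:offset + len(replacement)] = replacement
    return bytes(buf)
```

Installer frameworks such as NSIS or Inno Setup then wrap edits like these in a setup program that locates the installed executable and applies them for the user.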
If the source code is available, support can be provided most effectively. Sometimes the source code is released intentionally, sometimes through a leak or by mistake, as happened with the game engine of the Thief series. Sometimes fans even completely reverse-engineer the source code from the original program binary. With the source code available, even the support of completely different but more recent platforms becomes possible through source ports.
Law
While no court cases have directly addressed the legal ramifications of unofficial patches, similar cases have been tried on related issues. The case of Galoob v. Nintendo found that it was not copyright infringement by a user to apply an unauthorized patch to a system (while the scope was very specific to the Game Genie). On the other hand, the case Micro Star v. FormGen Inc. found that user-generated maps were derivative works of the original game. In Sega v. Accolade, the 9th Circuit held that making copies in the course of reverse engineering is a fair use, when it is the only way to get access to the "ideas and functional elements" in the copyrighted code, and when "there is a legitimate reason for seeking such access". According to Copyright law of the United States 17 U.S. Code § 117, the owner of a copy of a program can modify it as necessary for "Maintenance or Repair", without permission from the copyright holder; an argument also raised by Daniel J. Bernstein, a professor at the University of Illinois at Chicago.
Similar user rights are also given under European copyright laws. The question of whether unauthorized changes of lawfully obtained copyright-protected software qualify as fair use is an unsettled area of law. An article from the Helbraun law firm remarks, in the context of fan translations, that while redistributing complete games with adaptations most likely does not fall under fair use, distributing the modifications as a patch might be legally permissible; however, that conclusion has not been tested in court.
Reception
Reception of unofficial patches is mixed, but by and large, copyright holders are ambivalent. When the software is not considered commercially viable, unofficial patches are ignored by the copyright holder, as they are not seen as a source of lost revenue.
There have been occasional cases of cease-and-desist letters sent to unofficial patch and fan translation projects.
Sometimes the copyright holder actively supports the patching and fixing efforts of a software community, in some cases even releasing the source code under a software license that allows the community to continue supporting the software themselves. Examples of such software are in the List of commercial video games with later released source code.
The free and open source software movement was founded in the 1980s to solve the underlying problem of unofficial patches: the limited possibility for user self-support with binary-only distributed software due to missing source code. Free and open source software requires that distributed software come with its source code, which prevents the technical problems and legal uncertainties of binary-only user patching of proprietary software.
Examples in video games
Examples in general software
See also
Fan labor
Server emulator
Source port
Right to repair
References
Software maintenance
Software release
Unofficial adaptations
Video game development
Fan labor
|
54250222
|
https://en.wikipedia.org/wiki/Axway%20Software
|
Axway Software
|
Axway Software is an American publicly held information technology company that provides software tools for enterprise software, Enterprise Application Integration, business activity monitoring, business analytics, mobile application development and web API management. It has been listed on Compartment B (for companies with market capitalizations between €150 million and €1 billion) of Euronext Paris since June 2011.
History
Axway Software was incorporated on 28 December 2000 when the software infrastructure division of the French IT services company Sopra was spun-out as a subsidiary. (Sopra subsequently merged with another French IT services company Steria to form Sopra Steria in 2014.)
Sopra used Axway as a vehicle for expansion into the Enterprise Application Integration market. Subsequently, a number of acquisitions have been made by Axway. The Swedish company Viewlocity was acquired in early 2002.
Christophe Fabre became CEO of Axway in 2005 and remained in that position until 2015. Axway acquired the US company Cyclone Commerce in January 2006, after which much of the executive management of Axway relocated to Phoenix, Arizona. In February 2007, Axway acquired Atos's B2B software business in Germany. US company Tumbleweed Communications was acquired in June 2008.
In June 2011, Axway was spun out of Sopra Group and listed on the Paris Euronext. In November 2012, the Irish company Vordel, an API Management vendor, was acquired. The Brazilian company, SCI Soluções, was acquired in September 2013. In January 2014, Axway acquired the assets of Information Gateway in Australia. Axway acquired French company Systar, a developer of Business Activity Monitoring software, in June 2014.
In January 2016, Axway acquired US company Appcelerator, creator of the Appcelerator Titanium open-source framework for multiplatform native mobile app development. Axway acquired US company Syncplicity, developer of a file share and synchronization service, in February 2017.
As of 2017, Sopra Steria holds 33.52% of Axway, and Sopra GMT, a holding company for Sopra Steria and Axway, holds 21.65%. In May 2016, Sopra Steria acquired the 8.62% stake in Axway formerly held by Société Générale. Axway is a component of the CAC Small.
In 2016, Axway had more than 11,000 customers in 100 countries. Since going public in 2011, annual revenues have grown from €217.2 million in 2011 to €301.1 million in 2016, and profits have grown from €35.3 million in 2011 to €50.8 million in 2016.
Company locations
Axway is headquartered in Phoenix, Arizona and Puteaux, Paris. The company has development centers in France in Lyon and Paris, in Romania in Bucharest, in Bulgaria in Sofia and in the United States in Scottsdale, Arizona and Santa Clara, California. It also has various support centers, including one in Noida, India. The acquisition of SCI in 2013 led to the establishment of the Axway South America regional headquarters in São Paulo, Brazil.
Products
As of 2017, Axway's main product is the AMPLIFY platform. AMPLIFY acts as a data integration platform for a number of formerly independent legacy products, including Appcelerator Titanium. It provides a more uniform user interface to the various components.
References
External links
Axway Software corporate site
Cloud applications
Software companies based in Arizona
Software companies of France
Software performance management
Development software companies
Enterprise application integration
Companies based in Phoenix, Arizona
Companies based in Paris
2011 initial public offerings
Software companies established in 2000
Companies listed on Euronext Paris
Software companies of the United States
|
34546345
|
https://en.wikipedia.org/wiki/Redbooth
|
Redbooth
|
Redbooth (formerly Teambox) is a web-based workplace collaboration tool and communication platform.
History
Redbooth, previously known as Teambox Technologies S.L., the developer of Teambox, was founded in 2008 and provided both commercial and free hosting for Teambox. The company also offered installation and customization of the software.
In February 2010, Teambox secured $193,400 (€140,000), followed by an additional US$250,000 as part of a seed funding round in November 2010. In April 2010, Talker announced that it had been acquired by Teambox. In June 2013, Teambox partnered with Zoom Video Communications to provide HD videoconferencing to its users.
On 21 January 2014, after gaining 650,000 users, Teambox announced that it had rebranded as "Redbooth", and the company was renamed "Redbooth, Inc.". Later that year, on 18 November 2014, Redbooth announced an $11 million Series B round, led by Altpoint Ventures and Avalon Ventures, bringing its total funding to $17.5 million.
In August 2016, Redbooth released an exclusive app for Apple TV, available in the Apple TV App Store.
On 13 September 2017, Redbooth announced that it had merged with AeroFS, a company that develops collaboration applications. The new combined company is called Redbooth, and products from both companies will be supported going forward.
Features
Status updates and conversations — Status updates are registered as conversations within projects. Conversations can later be organized by giving them headlines. There are options to notify other project members via email and to attach files from one's computer or Google Docs.
Task management — Tasks are organized into task lists under projects. The task system is closely related to the conversation system, and conversations can be converted into tasks. A task's status can be changed when commenting on it. Tasks have time-tracking, delegation, and due-date properties.
File and content management — Share, find, and work on current documents; comes with free file storage and integrates with Dropbox, Box, and Google Drive.
Real-time communication — HD video conferencing for up to 100 people, screen sharing, and group chat for communicating with a team in real time.
Role-based permissions for access to projects
Integration with other systems (CRM, ERP, etc.)
HD Video conferencing
Pages — Pages are a wiki type documentation feature.
Discussion forums
Chat
Contacts on the project
Time tracking — Time spent on tasks can be tracked
Phone and tablet clients for iOS and Android
Language support for English, French, German, Italian, Spanish, Portuguese, Simplified Chinese, and Japanese.
See also
List of collaborative software
Comparison of project management software
References
External links
Teambox on GitHub
Project management software
Free software programmed in Ruby
Software using the GNU AGPL license
|
3596373
|
https://en.wikipedia.org/wiki/Eclipse%20Public%20License
|
Eclipse Public License
|
The Eclipse Public License (EPL) is a free and open source software license most notably used for the Eclipse IDE and other projects by the Eclipse Foundation. It replaces the Common Public License (CPL) and removes certain terms relating to litigations related to patents.
The Eclipse Public License is designed to be a business-friendly free software license, and features weaker copyleft provisions than licenses such as the GNU General Public License (GPL). The receiver of EPL-licensed programs can use, modify, copy and distribute the work and modified versions, in some cases being obligated to release their own changes.
The EPL is listed as a free software license by the Free Software Foundation (FSF) and approved by the Open Source Initiative (OSI).
Discussion of a new version of the EPL began in May 2013. Version 2.0 was announced on 24 August 2017.
On January 20, 2021, the license steward for the license was changed from Eclipse.org Foundation, Inc. (Delaware, USA) to Eclipse Foundation AISBL (Brussels, Belgium).
Compatibility
The EPL 1.0 is not compatible with the GPL, and a work created by combining a work licensed under the GPL with a work licensed under the EPL cannot be lawfully distributed. The GPL requires that "[any distributed work] that ... contains or is derived from the [GPL-licensed] Program ... be licensed as a whole ... under the terms of [the GPL]", and that the distributor not "impose any further restrictions on the recipients' exercise of the rights granted". The EPL, however, requires that anyone distributing the work grant every recipient a license to any patents that they might hold that cover the modifications they have made. Because this is a "further restriction" on the recipients, distribution of such a combined work does not satisfy the GPL.
The EPL, in addition, does not contain a patent retaliation clause.
Derivative works
According to article 1(b) of the EPL, additions to the original work may be licensed independently, including under a proprietary license, provided such additions are "separate modules of software" and do not constitute a derivative work. Changes and additions which do constitute a derivative work must be licensed under the same terms and conditions of the EPL, which includes the requirement to make source code available.
Linking to code (for example a library) licensed under the EPL does not automatically mean that your program is a derivative work. The Eclipse Foundation interprets the term "derivative work" in a way that is consistent with the definition in the U.S. Copyright Act, as applicable to computer software.
Later versions
If a new version of the EPL is published, the user or contributor can choose to distribute the software under the version with which they received it or upgrade to the new version.
Comparison with the CPL
The EPL was based on the Common Public License (CPL), but there are some differences between the two licenses:
The Eclipse Foundation replaces IBM as the Agreement Steward in the EPL
The EPL patent clause is revised by deleting the sentence from section 7 of the CPL
The Eclipse Foundation sought permission from contributors to re-licence their CPL code under the EPL.
Version 2.0
Version 2.0 of the Eclipse Public License was announced on 24 August 2017.
The Eclipse Foundation maintains an FAQ.
The FSF has analyzed the license in relation to GPL license compatibility and added it to their official list.
The bare license notice is available in several formats, including plain text.
In terms of GPL compatibility, the new license allows the initial contributor to a new project to opt in to a secondary license that provides explicit compatibility with the GNU General Public License version 2.0, or any later version. If this optional designation is absent, then the Eclipse license remains source incompatible with the GPL (any version).
Other changes include:
the license now applies to "files" not "modules"
the new license is international because the choice of law provision has been removed
the new license is suitable for scripting languages, including JavaScript
The Eclipse Foundation advises that version 1.0 is deprecated and that projects should migrate to version 2.0. Relicensing is a straightforward matter and does not require the consent of all contributors, past and present. Rather, the version 1.0 license allows a project (preferably after forming a consensus) to adopt any new version by simply updating the relevant file headers and license notices.
Notable projects
In addition to the Eclipse Foundation, the EPL is used in some other projects, especially those running on the Java virtual machine.
Licensed solely under the EPL
AT&T KornShell
Clojure (and ClojureScript)
Graphviz
Jikes RVM
JUnit
Mondrian
OpenDaylight Project
UWIN
Multi-licensed under the EPL and one or more other licenses
Eclipse OMR
Eclipse OpenJ9
Jetty
JRuby
See also
Software using the EPL (category)
References
External links
The Eclipse Public License, version 1.0
The Eclipse Public License, version 2.0
Eclipse Public License FAQ
EPL v1.0 on OSI
EPL v2.0 on OSI
Free and open-source software licenses
Copyleft software licenses
|
60298986
|
https://en.wikipedia.org/wiki/Bioctl
|
Bioctl
|
The bio(4) pseudo-device driver and the bioctl(8) utility implement a generic RAID volume management interface in OpenBSD and NetBSD. The idea behind this software is similar to ifconfig: a single utility from the operating system can be used to control any RAID controller through a generic interface, instead of relying on many proprietary and custom RAID management utilities specific to each hardware RAID manufacturer. Features include monitoring the health status of the arrays, controlling identification by blinking LEDs, managing sound alarms, and specifying hot spare disks. Additionally, softraid configuration in OpenBSD is delegated to bioctl as well, whereas the initial creation of volumes and configuration of hardware RAID is left to the card BIOS, being non-essential once the operating system has booted. Interfacing between the kernel and userland is performed with the ioctl system call through the /dev/bio pseudo-device.
Overview
The bio/bioctl subsystem is deemed an important part of OpenBSD's advocacy for open hardware documentation, and the 3.8 release title and the titular song, Hackers of the Lost RAID, were dedicated to the topic.
The development took place during a time of controversy in which Adaptec refused to release the hardware documentation necessary to make the aac(4) driver work reliably, which was followed by OpenBSD disabling support for the driver.
In the commentary to the 3.8 release, the developers note the irony that hardware RAID controllers are supposed to provide reliability through redundancy and repair, whereas in reality many vendors expect system administrators to install and depend on huge binary blobs in order to assess volume health and service their disk arrays.
Specifically, OpenBSD is making a reference to the modus operandi of FreeBSD, where the documentation of the aac(4) driver for Adaptec specifically suggests enabling Linux compatibility layer in order to use the management utilities (where the documentation even fails to explain where exactly these utilities must be obtained from, or which versions would be compatible, evidently because the proprietary tools may have expired).
Likewise, OpenBSD developers intentionally chose to concentrate on supporting only the most basic features of each controller which are uniform across all the brands and variations; specifically, the fact that initial configuration of each controller must still be made through card BIOS was never kept secret from any bio/bioctl announcement.
This can be contrasted with the approach taken by FreeBSD, for example, where individual utilities exist for several independent RAID drivers, each with an interface independent of the others. Specifically, FreeBSD includes separate device-specific utilities called mfiutil, mptutil, mpsutil/mprutil and sesutil, each of which provides many options with at least subtle differences in the interface for configuration and management of the controllers; this duplication contributes to code bloat, not to mention additional drivers for which no such open-source tool exists at all.
In OpenBSD 6.4 (2018), a dozen drivers register with the bio framework.
The drive sensors
Monitoring of the state of each logical drive is also duplicated into the hardware monitoring frameworks and their corresponding utilities on both systems where bioctl is available — hw.sensors with sensorsd in OpenBSD and sysmon envsys with envstat and powerd in NetBSD. For example, on OpenBSD since 4.2 release, the status of the drive sensors could be automatically monitored simply by starting sensorsd without any specific configuration being required. More drivers are being converted to use the bio and sensors frameworks with each release.
SES/SAF-TE
In OpenBSD, both SCSI Enclosure Services (SES) and SAF-TE are supported since OpenBSD 3.8 (2005) as well, both of which feature LED blinking through bio and bioctl (by implementing the BIOCBLINK ioctl), helping system administrators identify devices within the enclosures to service. Additionally, both the SES and SAF-TE drivers in OpenBSD feature support for a combination of temperature and fan sensors, PSU, doorlock and alarm indicators; all of this auxiliary sensor data is exported into the hw.sensors framework in OpenBSD, and can be monitored through familiar tools like sysctl, SNMP and sensorsd.
In NetBSD, an older SES/SAF-TE driver, written at NASA in 2000, is still in place; it is not integrated with bio or envsys, but has its own device files with a unique ioctl interface, featuring its own custom SCSI-specific userland tooling. This older implementation was also available in OpenBSD between 2000 and 2005, and was removed in 2005 (together with its userland tools) just before the new, leaner bio- and hw.sensors-based alternative drivers were introduced. SES and SAF-TE are now kept as two separate drivers in OpenBSD, but no longer require any separate custom userland utilities, reducing code bloat and the number of source lines of code.
References
2005 software
2007 software
BSD software
Computer data storage
Computer hardware tuning
Computer performance
Free software programmed in C
Free system software
Motherboard
NetBSD
OpenBSD
RAID
SCSI
Storage software
System administration
System monitors
Volume manager
|
30171577
|
https://en.wikipedia.org/wiki/Fang%20Binxing
|
Fang Binxing
|
Fang Binxing is a former Principal of Beijing University of Posts and Telecommunications. He is also known for his substantial contribution to China's Internet censorship infrastructure, and has been dubbed "Father of China's Great Fire Wall".
Biography
Fang was born on 17 July 1960 in Harbin, Heilongjiang province. Fang went to university at Harbin Institute of Technology, where he earned a PhD in computer science and became a lecturer. He began working at the National Computer Network Emergency Response Technical Team / Coordination Center of China in 1999 as deputy chief engineer; from 2000 he was chief engineer and director. It was in this position that he oversaw the development of the filtering and blocking technology that has become known as the Great Firewall, and thus, he has been dubbed "Father of China's Great Fire Wall".
Fang has defended the Great Firewall in the media, stating that it is a "natural reaction to something newborn and unknown" and that web censoring is a "common phenomenon around the world". Appearing on China Central Television in March 2010, Fang accused Google of conducting censorship such as Chilling Effects.
Fang has helped create a major electronic surveillance operation in Chongqing for party secretary Bo Xilai. The system involved wiretaps, eavesdropping, and monitoring of internet communications.
Incidents
2011 shoe throwing incident
On 19 May 2011, Fang was hit on the chest by a shoe thrown at him by a Huazhong University of Science and Technology student calling himself "Hanjunyi" () while Fang was giving a lecture at Wuhan University. According to RFI, the student discussed the planned shoe attack on Twitter and, with the help of other bloggers, was able to find the exact whereabouts and time of Fang's lecture. After the shoe-throwing incident, "Hanjunyi" was able to walk out while other students obstructed school teachers who were going to detain him. "Hanjunyi" has since become an instant internet hero of the Chinese blogosphere, with bloggers offering him a large number of presents, such as cash, airline tickets, buffet dinners at Hong Kong five-star hotels, tours of various sex parlors, sight-seeing tours, a virtual private network, an iPad 2, an admission ticket to Hong Kong Disneyland, an escorted tour of Singapore, free hotel rooms, free sex with admiring female bloggers, and free shoes and designer clothes. An anonymous blogger even promised him a position in his company should "Hanjunyi" ever be in trouble with the authorities.
During an interview with CNN, "Hanjunyi" said: "I'm not happy about what he (Fang) does. His work made me spend unnecessary money to get access to the website that is supposed to be free... He makes my online surfing very inconvenient."
2016 VPN incident
In April 2016, reports of a botched presentation by Fang went viral. Fang was speaking at his alma mater, the Harbin Institute of Technology, and reportedly planned to display some South Korean web sites as part of the presentation. After his initial attempts were blocked by the Great Firewall, Fang publicly attempted, with mixed success, to bypass the firewall with a VPN. The question-and-answer session following the presentation was cancelled. According to Ming Pao, Fang was later resoundingly mocked online.
References
External links
News about Fang Binxing on China Digital Times
Father of Great Firewall forced to remove microblog
Living people
1960 births
Chinese Internet celebrities
Tsinghua University alumni
Members of the Chinese Academy of Engineering
Chinese computer scientists
Delegates to the 11th National People's Congress
Politicians from Harbin
Harbin Institute of Technology alumni
Educators from Heilongjiang
Harbin Institute of Technology faculty
Beijing University of Posts and Telecommunications faculty
Presidents of Beijing University of Posts and Telecommunications
Chinese Communist Party politicians from Heilongjiang
People's Republic of China politicians from Heilongjiang
Scientists from Harbin
|
40875802
|
https://en.wikipedia.org/wiki/ServiceMax
|
ServiceMax
|
ServiceMax is a Service Execution Management company. ServiceMax provides a cloud-based software platform designed to improve the productivity of complex, equipment-centric service execution for OEMs, operators, and 3rd-party service providers.
Products & Services
ServiceMax's platform is a SaaS (Software as a Service) product running on Salesforce force.com cloud technology. ServiceMax's cloud-based, mobile-ready field service software supports companies across industries in managing work orders, planning and scheduling work assignments, and providing mobile technician enablement, contracts and entitlements, proactive maintenance, and parts inventory management. The ServiceMax platform is designed to optimize service execution processes and is used by service technicians, dispatchers, service planners, and their managers. Customers include the medical device manufacturing, industrial manufacturing, food and beverage equipment, buildings and construction, technology, oil and gas, and power and utilities industries. ServiceMax software is primarily used by enterprise-size customers.
The most recent areas of innovation include capabilities for complex jobs with functionality such as crew management and shift planning.
History
Maxplore Technologies was founded by Athani Krishnaprasad and Hari Subramanian as a consulting company focusing on customer relationship management. A client requested that Maxplore build a field service module on the Salesforce platform. The project took two weeks and, in 2007, was entered in the "AppExchange Challenge" at the Dreamforce conference. The project won $2 million in funding from Emergence Capital and went on to win the Force Million Dollar Challenge in 2008, when the company changed its name to ServiceMax. In February 2019, ServiceMax acquired Zinc, a company providing a frictionless way for service workers to get and share knowledge in real time.
Investors
The company completed an $82 million Series F round in August 2015 led by PremjiInvest and GE. In November 2012, ServiceMax received a $27 million Series D funding round. On November 14, 2016, General Electric Co.'s GE Digital unit announced a deal to buy ServiceMax for $915 million. The acquisition was completed on January 10, 2017. On December 13, 2018, GE Digital and Silver Lake announced an agreement for GE Digital to sell a majority stake in ServiceMax. GE retained a 10% equity ownership in the company.
See also
Software as a service
Field service management
Cloud Analytics
Salesforce.com
References
Cloud computing providers
Project management software
Companies based in Pleasanton, California
Silver Lake (investment firm) companies
|
33839932
|
https://en.wikipedia.org/wiki/Android%20Cloud%20to%20Device%20Messaging
|
Android Cloud to Device Messaging
|
Android Cloud to Device Messaging (commonly referred to as Cloud to Device Messaging), or C2DM, is a defunct mobile notification service that was developed by Google and replaced by the Google Cloud Messaging service. It enabled developers to send data from servers to Android applications and Chrome extensions. C2DM originally launched in 2010 and was available beginning with version 2.2 of Android. On June 27, 2012, Google unveiled the Google Cloud Messaging service aimed at replacing C2DM, citing improvements to authentication and delivery, new API endpoints and messaging parameters, and the removal of API rate limits and maximum message sizes. Google announced official deprecation of the C2DM service in August 2012, and released documentation to assist developers with migrating to the new service. The C2DM service was discontinued for existing applications and completely shut down on October 20, 2015.
Technical details
The C2DM service consisted of the sub-services and interfaces necessary for maintaining security and reliability. When an application registered for C2DM messages and data, it received a C2DM Registration ID from the service. This identifier was unique to the application on the device, and was used to identify the device that the data or message request was intended for. This identifier was typically sent by the client application to a server owned by the developer or creator for tracking and statistical purposes. Upon sending a data or push request, the server sent an authentication request and the C2DM Registration ID of the device to the C2DM authentication service, which responded with an authentication token upon success. The third-party server then submitted both identifiers within the final data request to be enqueued and sent to the device. When the device received the information from the C2DM, the request was removed from the C2DM queue.
Migration to the Google Cloud Messaging service
Shortly after announcing the Google Cloud Messaging service, Google published documentation to guide application developers in migrating from C2DM to the new service. Migrating to the service required SDK and code changes, as well as the release of an application update to the publishing repository (such as Google Play) for downloading and updating. C2DM and the Google Cloud Messaging service were not interoperable: a data request sent through one service could not be received and processed by a client app using the other. The migration also required changes on the third-party server operated by the developer (depending on the complexity and use case of the data sent).
References
External links
Mobile telecommunications
Google services
|
7386159
|
https://en.wikipedia.org/wiki/Hash%20chain
|
Hash chain
|
A hash chain is the successive application of a cryptographic hash function to a piece of data. In computer security, a hash chain is a method to produce many one-time keys from a single key or password. For non-repudiation a hash function can be applied successively to additional pieces of data in order to record the chronology of data's existence.
Definition
A hash chain is a successive application of a cryptographic hash function h to a string x.
For example,
h(h(h(h(x))))
gives a hash chain of length 4, often denoted h^4(x).
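A length-n hash chain can be computed in shell as follows (a sketch: SHA-256 stands in for the hash function, coreutils' sha256sum is assumed, and each step hashes the hex digest of the previous one — a real scheme would fix the exact encoding):

```shell
#!/bin/sh
# hash_chain X N: apply SHA-256 N times, feeding each hex digest back in.
hash_chain() {
  x=$1 n=$2
  while [ "$n" -gt 0 ]; do
    x=$(printf '%s' "$x" | sha256sum | awk '{print $1}')
    n=$((n - 1))
  done
  printf '%s\n' "$x"
}

hash_chain "x" 4   # prints the length-4 chain value as 64 hex characters
```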
Applications
Leslie Lamport suggested the use of hash chains as a password protection scheme in an insecure environment. A server which needs to provide authentication may store a hash chain rather than a plain text password and prevent theft of the password in transmission or theft from the server. For example, a server begins by storing h^1000(pw), which is provided by the user. When the user wishes to authenticate, they supply h^999(pw) to the server. The server computes h(h^999(pw)) = h^1000(pw) and verifies this matches the hash chain it has stored. It then stores h^999(pw) for the next time the user wishes to authenticate.
An eavesdropper seeing h^999(pw) communicated to the server will be unable to re-transmit the same hash chain to the server for authentication, since the server now expects h^998(pw). Due to the one-way property of cryptographically secure hash functions, it is infeasible for the eavesdropper to reverse the hash function and obtain an earlier piece of the hash chain. In this example, the user could authenticate 1000 times before the hash chain is exhausted. Each time the hash value is different, and thus cannot be duplicated by an attacker.
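A single authentication step of Lamport's scheme can be sketched in shell (illustrative only; the function and variable names are ours, and SHA-256 stands in for the hash function):

```shell
#!/bin/sh
H() { printf '%s' "$1" | sha256sum | awk '{print $1}'; }

# The server holds stored = h^i(w); it accepts a candidate c iff
# H(c) equals the stored value, then keeps c as the next expectation.
verify_and_advance() {   # verify_and_advance STORED CANDIDATE
  [ "$(H "$2")" = "$1" ] || return 1
  stored=$2              # server's expectation for the next round
}

w='correct horse'                # the user's secret
h1=$(H "$w"); stored=$(H "$h1")  # server initially stores h^2(w)
# The user authenticates by revealing h^1(w):
if verify_and_advance "$stored" "$h1"; then
  echo "accepted"
fi
```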
Binary hash chains
Binary hash chains are commonly used in association with a hash tree. A binary hash chain takes two hash values as inputs, concatenates them and applies a hash function to the result, thereby producing a third hash value.
The above diagram shows a hash tree consisting of eight leaf nodes and the hash chain for the third leaf node. In addition to the hash values themselves, the order of concatenation (right or left, recorded as 1 or 0), the "order bits", is necessary to complete the hash chain.
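One step of a binary hash chain can be sketched as follows (illustrative; SHA-256 on the concatenated hex digests stands in for the hash function):

```shell
#!/bin/sh
# A binary hash-chain step: concatenate two digests and hash the result.
# The order bit records whether a sibling sat on the left or the right.
H() { printf '%s' "$1" | sha256sum | awk '{print $1}'; }
step() { H "$1$2"; }     # step LEFT RIGHT

a=$(H "leaf A"); b=$(H "leaf B")
parent=$(step "$a" "$b")
swapped=$(step "$b" "$a")
# Swapping the inputs yields a different parent, hence the order bits:
[ "$parent" != "$swapped" ] && echo "order matters"
```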
Hash chain vs. blockchain
A hash chain is similar to a blockchain, as they both utilize a cryptographic hash function for creating a link between two nodes. However, a blockchain (as used by Bitcoin and related systems) is generally intended to support distributed consensus around a public ledger (data), and incorporates a set of rules for encapsulation of data and associated data permissions.
See also
Challenge–response authentication
Hash list – In contrast to the recursive structure of hash chains, the elements of a hash list are independent of each other.
One-time password
Key stretching
Linked timestamping – Binary hash chains are a key component in linked timestamping.
X.509
References
Cryptographic algorithms
|
4502447
|
https://en.wikipedia.org/wiki/Hybrid%20disc
|
Hybrid disc
|
A hybrid disc is a disc, such as CD-ROM or Blu-ray, which contains multiple types of data which can be used differently on different devices. These include CD-ROM music albums containing video files viewable on a personal computer, or feature film Blu-rays containing interactive content when used with a PlayStation 3 game console.
Multiple file systems
A hybrid disc is an optical disc that has multiple file systems installed on it, typically ISO 9660 and HFS+ (or HFS on older discs). One reason for the hybrid format is the restrictions of ISO 9660 (filenames of only eight characters, and a maximum depth of eight directories, similar to the Microsoft FAT file system). Another key factor is that ISO 9660 does not support resource forks, which are critical to the classic Mac OS' software design (OS X or macOS removed much of the need for resource forks in application design). Companies that released products for both DOS (later Windows) and the classic Mac OS (later macOS) could release a CD containing software for both, natively readable on either system. Data files can even be shared by both partitions, while keeping the platform-specific data separate. In a "true" (or "shared") hybrid HFS filesystem, files common to both the ISO 9660 and HFS partitions are stored only once, with the ISO 9660 partition pointing to file content in the HFS area (or vice versa). Blizzard Entertainment has released most of their computer games on hybrid CDs. By default, Mac OS 9 and macOS burn hybrid discs.
An ISO 9660/HFS hybrid disc has an ISO 9660 primary volume descriptor, which makes it a valid ISO 9660 disc, and an Apple partition. It may also have an Apple partition map, although this is not necessary. The ISO 9660 portion of the disc can co-exist with an Apple partition because the header areas which define the contents of the disc are located in different places. The ISO 9660 primary volume descriptor begins 32,768 bytes (32 KB) into the disc. If present, an Apple partition map begins 512 bytes into the disc; if there is no partition map, the header for an Apple HFS partition (known as a Master Directory Block, or MDB) begins 1,024 bytes into the disc.
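These offsets can be probed directly in a disc image. A minimal sketch, assuming a POSIX shell with dd, the well-known magic values "CD001" (ISO 9660 PVD), "PM" (Apple partition map entry) and "BD" (HFS MDB), and a 512-byte block size for the partition map; "disc.img" is a placeholder path:

```shell
#!/bin/sh
img=${1:-disc.img}

probe() {  # probe OFFSET LENGTH -> raw bytes from the image
  dd if="$img" bs=1 skip="$1" count="$2" 2>/dev/null
}

# The PVD type byte sits at 32768, so "CD001" starts one byte later.
if [ "$(probe 32769 5)" = "CD001" ]; then
  echo "ISO 9660 primary volume descriptor at 32768"
fi
if [ "$(probe 512 2)" = "PM" ]; then
  echo "Apple partition map at 512"
fi
if [ "$(probe 1024 2)" = "BD" ]; then
  echo "HFS master directory block at 1024"
fi
```

Because the three headers live at disjoint offsets, all three tests can succeed on the same hybrid image.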
Audio CD with added data tracks
Hybrid-CD also refers to an audio CD which also includes a data track storing MP3 (or another compressed digital audio format) copies of its CD-DA tracks. Before the introduction and subsequent popularization of iTunes and the iPod, such discs were popular for sharing music on compact disc without requiring the recipient to extract and encode the CD-DA tracks themselves — a technical and perhaps time-consuming process on older computing hardware. However, with the advent of faster computing hardware, vastly simplified automated extraction and encoding tools (e.g. iTunes, Rhythmbox, etc.), and the lack of an automated hybrid feature in that same software, the popularity of such hybrid CDs has declined. They do remain in commercial use as a digital rights management enforcement technique, where encrypted compressed copies of the digital audio are provided with proprietary software for listening in a computer disc drive, while the CD-DA is included for playback in stand-alone CD players.
Hybrid Blu-rays
In recent years, some Blu-rays have been released as Hybrid discs, demonstrating its compatibility with the PlayStation 3 console, which uses BD as its primary disc format. These discs generally include a movie or feature that can be played on any Blu-ray Disc player and playable content which is accessible when run on a PlayStation 3 console. Examples include Tekken Hybrid, Macross Frontier: Itsuwari no Utahime and Lagrange: The Flower of Rin-ne -Kamogawa Days-.
See also
Universal binary
References
Audio storage
Compact disc
Consumer electronics
Disk file systems
Rotating disc computer storage media
Video storage
|
41654854
|
https://en.wikipedia.org/wiki/Blackphone
|
Blackphone
|
The Blackphone is a smartphone built to ensure privacy, developed by SGP Technologies, a wholly owned subsidiary of Silent Circle. Originally, SGP Technologies was a joint venture between the makers of GeeksPhone and Silent Circle. Marketing is focused upon business users, stressing that employees often conduct business using private devices and services that are not secure and that the Blackphone service readily provides users with options that ensure confidentiality when needed. Blackphone provides Internet access through VPN. The device runs a modified version of Android called SilentOS that comes with a bundle of security-minded tools. On 30 June 2014, the Blackphone began to ship advance orders.
Background
The concept of an encrypted telephone has been an interest of Silent Circle founder and PGP creator, Phil Zimmermann, for a long time. In a video on the Blackphone web site, Zimmermann said,
Aaron Souppouris of The Verge stated:
The Blackphone also allows insecure communications. Mike Janke, CEO and co-founder of Silent Circle, has suggested there are certain calls people want to encrypt, but "if you're ordering a pizza or calling your grandma, it's unlikely you'll feel the weight of criminals on your shoulders. This is why Blackphone is unique: it gives the user the chance to choose the level of privacy."
Blackphone runs a custom-built Android OS called SilentOS. The operating system essentially "closes all backdoors" usually found open on major mobile operating systems. Some major features of SilentOS are anonymous search, privacy-enabled bundled apps, smart disabling of Wi-Fi except trusted hotspots, more control in app permissions, and private communication (calling, texting, video chat, browsing, file sharing, and conference calls). Geeksphone also claims the telephone will receive frequent secure updates from Blackphone directly.
It supports the following 2G, 3G, and 4G bands, respectively:
In North America (Region 2): GSM: 850 / 900 / 1800 / 1900 MHz; HSPA+/WCDMA: 850 / 1700 / 1900 / 2100 MHz (42 Mbit/s); LTE FDD bands: 4/7/17 (Cat. 3 100 Mbit/s)
Rest of world (Region 1): GSM: 850 / 900 / 1800 / 1900 MHz; HSPA+/WCDMA: 850 / 900 / 1900 / 2100 MHz (42 Mbit/s); LTE FDD bands 3/7/20 (Cat. 3 100 Mbit/s)
LTE Cat. 4 (150 Mbit/s) is under development.
In early 2015, Geeksphone sold its part of SGP Technologies to Silent Circle to focus on wearables sold under the brand Geeksme.
14 engineers from Geeksphone, including Javier Agüera, remained in SGP.
In the summer of 2015, Silent Circle announced that they would be releasing a successor to the Blackphone, the Blackphone 2, in September 2015. It has a 5.5-inches full HD screen with Gorilla glass, and a faster Qualcomm Snapdragon Octa-Core Processor. The price also has been increased to US$799.00. Blackphone 2 does not have a removable battery.
Services bundled
A one-year subscription to Silent Circle’s secure voice and video calling and text messaging services, plus a one-year "Friend and Family" Silent Circle subscription that allows others to install the service on their smartphones.
One year of SpiderOak cloud file storage and sharing, limited to 5 GB per month.
Kismet Smart Wi-Fi Manager comes pre-installed. It also includes an international power adapter kit and a headset.
Reception
Ars Technica praised the Blackphone's Security Center in PrivatOS, which gives control over app permissions, and the bundled Silent Phone and Silent Text services, which anonymise and encrypt communications so no one can eavesdrop on voice, video, and text calls. They also praised the Disconnect VPN and Search, which keep web trackers off the telephone and anonymise web searches and Internet traffic. The Ars Technica reviewer did not like the telephone's mediocre performance, noting that using a custom OS means no Google Play or any of the other benefits of the Google ecosystem, spotty support for sideloaded apps, and reliance on Amazon or other third-party app stores.
The telephone's original launch quantity is unknown, but was reported to have sold out shortly after the launch began. Since then, Blackphone has resumed normal sales.
A Blackphone has been on exhibit at the Victoria and Albert Museum, and one has been added to the collection of the International Spy Museum.
Financial difficulties
In 2016, Silent Circle had significant financial problems caused by a large overestimate of how many phones it could sell. This led to the near bankruptcy of the company.
References
Further reading
External links
Mobile phones introduced in 2014
Smartphones
Android (operating system) devices
Secure telephones
|
18438887
|
https://en.wikipedia.org/wiki/Getopts
|
Getopts
|
getopts is a built-in Unix shell command for parsing command-line arguments. It is designed to process command line arguments that follow the POSIX Utility Syntax Guidelines, based on the C interface of getopt.
The predecessor to getopts was the external program getopt, by Unix System Laboratories.
History
The original getopt had several problems: it could not handle whitespace or shell metacharacters in arguments, and there was no ability to disable the output of error messages.
getopts was first introduced in 1986 in the Bourne shell shipped with Unix SVR3. It uses the shell's own variables, OPTIND and OPTARG, to track the current position and the option's argument, and returns the option name in a shell variable. Earlier versions of the Bourne shell did not have getopts.
In 1995, getopts was included in the Single UNIX Specification version 1 / X/Open Portability Guidelines Issue 4. As a result, getopts is now available in shells including the Bourne shell, KornShell, Almquist shell, Bash and Zsh.
The command has also been ported to the IBM i operating system.
The modern usage of getopt was partially revived mainly due to an enhanced implementation in util-linux. This version, based on the BSD getopt, not only fixed the two complaints around the old getopt, but also introduced the capability for parsing GNU-style long options and optional arguments for options, features that getopts lacks. The various BSD distributions, however, stuck to the old implementation.
Usage
The usage synopsis of getopt and getopts is similar to that of their C sibling:
getopt optstring [parameters]
getopts optstring varname [parameters]
The optstring part has the same format as the C sibling.
The parameters part simply accepts whatever one wants getopt to parse. A common value is "$@", all the positional parameters, in POSIX shell.
This value exists in getopts but is rarely used, since getopts can simply access the shell's parameters. It is useful for resetting the parser, however.
The varname part of getopts names a shell variable to store the option parsed into.
The way one uses the commands however varies a lot:
getopt simply returns a flat string containing whitespace-separated tokens representing the "normalized" arguments. One then uses a while-loop to parse it natively.
getopts is meant to be repeatedly called like the C getopt. When it hits the end of arguments, it returns 1 (shell false).
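The repeated-call pattern described above can be sketched as a minimal POSIX loop; the option names -n and -q here are invented for illustration:

```shell
#!/bin/sh
# Minimal getopts loop: -n takes an argument, -q is a flag
# (both option names are made up for this sketch).
quiet=0 name=''
while getopts 'n:q' opt; do
    case $opt in
        (n) name=$OPTARG;;
        (q) quiet=1;;
        (?) echo 'usage: demo [-q] [-n name] [file...]' >&2; exit 2;;
    esac
done
shift $((OPTIND - 1))   # discard the parsed options, keep the operands
echo "name=$name quiet=$quiet operands=$*"
```

Invoked as `sh demo.sh -q -n foo bar`, this would print `name=foo quiet=1 operands=bar`; setting `OPTIND=1` by hand resets the parser for a second pass.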
Enhancements
In various getopts
In spring 2004 (Solaris 10 beta development), the libc implementation of getopt was enhanced to support long options. As a result, this new feature was also available in the built-in command getopts of the Bourne Shell. This is triggered by parenthesized suffixes in the optstring specifying long aliases.
KornShell and Zsh both have an extension for long arguments. The former is defined as in Solaris, while the latter is implemented via a separate command.
KornShell additionally implements optstring extensions for options beginning with + instead of -.
In Linux getopt
An alternative to getopts is the Linux enhanced version of getopt, the external command line program.
The Linux enhanced version of getopt has the extra safety of getopts plus more advanced features. It supports long option names (e.g. --help) and the options do not have to appear before all the operands (e.g. command operand1 operand2 -a operand3 -b is permitted by the Linux enhanced version of getopt but does not work with getopts). It also supports escaping metacharacters for shells (like tcsh and POSIX sh) and optional arguments.
Comparison
Examples
Suppose we are building a Wikipedia downloader in bash that takes three options and zero extra arguments:
wpdown -a article name -l [language] -v
When possible, we allow the following long arguments:
-a --article
-l --language, --lang
-v --verbose
For clarity, no help text is included, and we assume there is a program that downloads any webpage. In addition, all programs are of the form:
#!/bin/bash
VERBOSE=0
ARTICLE=''
LANG=en
# [EXAMPLE HERE]
if ((VERBOSE > 2)); then
printf '%s\n' 'Non-option arguments:'
printf '%q ' "${remaining[@]}"
fi
if ((VERBOSE > 1)); then
printf 'Downloading %s:%s\n' "$LANG" "$ARTICLE"
fi
if [[ ! $ARTICLE ]]; then
printf '%s\n' "No articles!">&2
exit 1
fi
save_webpage "https://${LANG}.wikipedia.org/wiki/${ARTICLE}"
Using old getopt
The old getopt does not support optional arguments:
# parse everything; if it fails we bail
args=`getopt 'a:l:v' $*` || exit
# now we have the sanitized args... replace the original with it
set -- $args
while true; do
case $1 in
(-v) ((VERBOSE++)); shift;;
(-a) ARTICLE=$2; shift 2;;
(-l) LANG=$2; shift 2;;
(--) shift; break;;
(*) exit 1;; # error
esac
done
remaining=("$@")
This script will also break with any article title with a space or a shell metacharacter (like ? or *) in it.
Using getopts
getopts gives the script the look and feel of the C interface, although in POSIX optional arguments are still absent:
while getopts ':a:l:v' opt; do
case $opt in
(v) ((VERBOSE++));;
(a) ARTICLE=$OPTARG;;
(l) LANG=$OPTARG;;
(:) # "optional arguments" (missing option-argument handling)
case $OPTARG in
(a) exit 1;; # error, according to our syntax
(l) :;; # acceptable but does nothing
esac;;
esac
done
shift "$((OPTIND - 1))"
# remaining is "$@"
Since getopts does not consume the positional parameters itself, there is no shifting inside the case branches. However, a final slicing operation (a shift by OPTIND - 1) is required to get at the remaining arguments.
It is possible, but tedious, to emulate long argument support by treating each --option as an argument to the option -.
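One commonly used trick (a sketch, reusing the wpdown variables from above) adds `-:` to the optstring so that `-` itself is reported as an option taking an argument; getopts then delivers `--article=Foo` as option `-` with OPTARG `article=Foo`, which the script splits by hand. POSIX does not guarantee this behaviour (a bare `--` still terminates parsing), so treat it as a portability gamble:

```shell
#!/bin/sh
ARTICLE='' LANG=en VERBOSE=0
while getopts ':a:l:v-:' opt; do
    case $opt in
        (-) # everything after the leading "--" arrives in OPTARG
            case $OPTARG in
                (article=*)           ARTICLE=${OPTARG#*=};;
                (lang=* | language=*) LANG=${OPTARG#*=};;
                (verbose)             VERBOSE=$((VERBOSE + 1));;
                (*) exit 1;;          # unknown long option
            esac;;
        (a) ARTICLE=$OPTARG;;
        (l) LANG=$OPTARG;;
        (v) VERBOSE=$((VERBOSE + 1));;
    esac
done
```

This handles `--name=value` spellings only; supporting `--name value` as well would require inspecting the next positional parameter manually, which is where the tedium comes in.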
Using Linux getopt
Linux getopt escapes its output and an "eval" command is needed to have the shell interpret it. The rest is unchanged:
# We use "$@" instead of $* to preserve argument-boundary information
ARGS=$(getopt -o 'a:l::v' --long 'article:,language::,lang::,verbose' -- "$@") || exit
eval "set -- $ARGS"
while true; do
case $1 in
(-v|--verbose)
((VERBOSE++)); shift;;
(-a|--article)
ARTICLE=$2; shift 2;;
(-l|--lang|--language)
# handle optional: getopt normalizes it into an empty string
if [ -n "$2" ]; then
LANG=$2
fi
shift 2;;
(--) shift; break;;
(*) exit 1;; # error
esac
done
remaining=("$@")
See also
List of Unix commands
References
External links
Unix SUS2008 utilities
|
1560437
|
https://en.wikipedia.org/wiki/Portable%20media%20player
|
Portable media player
|
A portable media player (PMP) or digital audio player (DAP) is a portable consumer electronics device capable of storing and playing digital media such as audio, images, and video files. The data is typically stored on a compact disc (CD), Digital Video Disc (DVD), Blu-ray Disc (BD), flash memory, microdrive, or hard drive; most earlier PMPs used physical media, but modern players mostly use flash memory. In contrast, analogue portable audio players play music from non-digital media that use analogue signal storage, such as cassette tapes or vinyl records.
Digital audio players are often marketed and sold as "MP3 players", even if they also support other file formats and media types. The PMP term was introduced later for devices that had additional capabilities such as video playback. Generally speaking, they are portable, employing internal or replaceable batteries, equipped with a 3.5 mm headphone jack which users can plug headphones into or connect to a boombox or shelf stereo system, or may be connected to car and home stereos via a wireless connection such as Bluetooth. Some players also include FM radio tuners, voice recording and other features. Increasing sales of smartphones and tablet computers have led to a decline in sales of portable media players, leading to most devices being phased out, though certain flagship devices like the Apple iPod and Sony Walkman are still in production. Portable DVD/BD players are still manufactured by brands across the world.
This article focuses on portable devices that have the main function of playing media.
Types
Digital audio players are generally categorised by storage media:
Flash-based players: These are non-mechanical solid state devices that hold digital audio files on internal flash memory or removable flash media called memory cards. Due to technological advances in flash memory, these originally low-storage devices are now available commercially ranging up to 128 GB. Because they are solid state and do not have moving parts they require less battery power, are less likely to skip during playback, and may be more resilient to hazards such as dropping or fragmentation than hard disk-based players. Some of these may be styled just as USB flash drives.
Hard drive-based players or digital jukeboxes: Devices that read digital audio files from a hard disk drive (HDD). These players have higher capacities ranging up to 500 GB. At typical encoding rates, this means that tens of thousands of songs can be stored on one player. The disadvantage of these units is that a hard drive consumes more power, is larger and heavier, and is inherently more fragile than solid-state storage, so more care is required not to drop or otherwise mishandle them.
MP3 CD/DVD players: Portable CD players that can decode and play MP3 audio files stored on CDs. Such players were typically a less expensive alternative to either hard drive or flash-based players when the first units were released. The blank CD-R media they use is very inexpensive, typically costing less than US$0.15 per disc. These devices can also play standard "Red Book" CD-DA audio CDs. A disadvantage is that, due to the low rotational disc speed of these devices, they are even more susceptible to skipping or other misreads if subjected to uneven acceleration (shaking) during playback. The mechanics of the player itself can be quite sturdy, however, and are generally not as prone to permanent damage from being dropped as hard drive-based players. Since a CD typically holds only around 700 megabytes of data, a large library requires multiple discs. However, some higher-end units are also capable of reading and playing back files stored on larger-capacity DVDs; some also have the ability to play back and display video content, such as movies. An additional consideration can be the relatively large width of these devices, since they have to be able to fit a CD.
Networked audio players: Players that connect over a network (typically Wi-Fi) to receive and play audio. These types of units typically do not have any local storage of their own and must rely on a server, typically a personal computer on the same network, to provide the audio files for playback.
USB host/memory card audio players: Players that rely on USB flash drives or other memory cards to read data.
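The storage figures in the list above are easy to sanity-check. For the MP3 CD case, a rough calculation (a sketch assuming a 700 MB disc and 128 kbit/s encoding, the "reasonable" rate cited later in this article):

```shell
# Hours of 128 kbit/s MP3 audio that fit on a 700 MB CD-R (rough figures).
cd_bytes=700000000
bytes_per_second=$((128000 / 8))        # 128 kbit/s = 16000 bytes/s
seconds=$((cd_bytes / bytes_per_second)) # 43750 s
echo "about $((seconds / 3600)) hours of audio per disc"
```

That works out to about 12 hours, on the order of 180 four-minute tracks per disc, versus a dozen or so tracks on an uncompressed audio CD.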
History
The immediate predecessor in the market place of the digital audio player was the portable CD player and prior to that, the personal stereo. In particular, Sony's Walkman and Discman are the ancestors of digital audio players such as Apple's iPod.
There are several types of MP3 players:
Devices that play CDs. Often, they can be used to play both audio CDs and homemade data CDs containing MP3 or other digital audio files.
Pocket devices. These are solid-state devices that hold digital audio files on internal or external media, such as memory cards. These are generally low-storage devices, typically ranging from 128MB to 1GB, which can often be extended with additional memory. As they are solid state and do not have moving parts, they can be very resilient. Such players are generally integrated into USB keydrives.
Devices that read digital audio files from a hard drive. These players have higher capacities, ranging from 1.5GB to 100GB, depending on the hard drive technology. At typical encoding rates, this means that thousands of songs—perhaps an entire music collection—can be stored in one MP3 player. Apple's popular iPod player is the best-known example.
Early digital audio players
British scientist Kane Kramer invented the first digital audio player, which he called the IXI. His 1979 prototypes were capable of approximately one hour of audio playback but did not enter commercial production. His UK patent application was not filed until 1981 and was issued in 1985 in the UK and 1987 in the US. Apple Inc. hired Kramer as a consultant and presented his work as an example of prior art in the field of digital audio players during their litigation with Burst.com almost two decades later. In 2008, Apple acknowledged Kramer as the inventor of the digital audio player. The player was as big as a credit card and had a small LCD screen, navigation and volume buttons, and would have held at least 8 MB of data in a solid-state bubble memory chip with a capacity of 3½ minutes' worth of audio. Plans were made for a 10-minute stereo memory card, and the system was at one time fitted with a hard drive which would have enabled over an hour of recorded digital music. Kramer later set up a company to promote the IXI, and five working prototypes were produced with 16-bit sampling at 44.1 kilohertz, the pre-production prototype being unveiled at the APRS Audio/Visual trade exhibition in October 1986. However, in 1988 Kramer's failure to raise the £60,000 required to renew the patent meant that it entered the public domain, though he still owns the designs.
The Listen Up Player was released in 1996 by Audio Highway, an American company led by Nathan Schulhof. It could store up to an hour of music, but despite getting an award at CES 1997 only 25 copies were made. That same year AT&T developed the FlashPAC digital audio player which initially used AT&T Perceptual Audio Coder (PAC) for music compression, but in 1997 switched to AAC. At about the same time AT&T also developed an internal Web based music streaming service that had the ability to download music to FlashPAC. AAC and such music downloading services later formed the foundation for the Apple iPod and iTunes.
The first production-volume portable digital audio player was The Audible Player (also known as MobilePlayer, or Digital Words To Go) from Audible.com available for sale in January 1998, for US$200. It only supported playback of digital audio in Audible's proprietary, low-bitrate format which was developed for spoken word recordings. Capacity was limited to 4 MB of internal flash memory, or about 2 hours of play, using a custom rechargeable battery pack. The unit had no display and rudimentary controls.
The MP3 standard
MP3 was introduced as an audio coding standard in 1994. It was based on several audio data compression techniques, including the modified discrete cosine transform (MDCT), FFT and psychoacoustic methods. The first portable MP3 player was launched in 1997 by Saehan Information Systems, which sold its "MPMan F10" player in parts of Asia in spring 1998. In mid-1998, the South Korean company licensed the players for North American distribution to Eiger Labs, which rebranded them as the EigerMan F10 and F20. The flash-based players were available in 32 MB or 64 MB (6 or 12 songs) storage capacities and had an LCD screen to tell the user the song currently playing. The first car audio hard drive-based MP3 player was also released in 1997 by MP32Go and was called the MP32Go Player. It consisted of a 3 GB IBM 2.5" hard drive housed in a trunk-mounted enclosure connected to the car's radio system. It retailed for $599 and was a commercial failure.
MP3 became a popular standard format and as a result most digital audio players after this supported it and hence were often called "MP3 players". The Rio PMP300 from Diamond Multimedia was introduced in September 1998, a few months after the MPMan, and also featured a 32 MB storage capacity. It was a success during the holiday season, with sales exceeding expectations. Interest and investment in digital music were subsequently spurred from it. Because of the player's notoriety as the target of a major lawsuit, the Rio is erroneously assumed to be the first digital audio player. The RIAA soon filed a lawsuit alleging that the device abetted illegal copying of music, but Diamond won a legal victory on the shoulders of Sony Corp. v. Universal City Studios and MP3 players were ruled legal devices. Eiger Labs and Diamond went on to establish a new segment in the portable audio player market and the following year saw several new manufacturers enter this market. The player would be the start of the Rio line of players.
Other early MP3 portables include Sensory Science's Rave MP2100, the I-Jam IJ-100 and the Creative Labs Nomad. These portables were small and light, but had only enough memory to hold around 7 to 20 songs at normal 128 kbit/s compression rates. They also used slower parallel port connections to transfer files from PC to player, necessary as most PCs then used the Windows 95 and NT operating systems, which did not have native support for USB connections. As more users migrated to Windows 98 by 2000, most players transitioned to USB. In 1999 the first hard drive based DAP using a 2.5" laptop drive was made, the Personal Jukebox (PJB-100) designed by Compaq and released by Hango Electronics Co with 4.8 GB storage, which held about 1,200 songs, and invented what would be called the jukebox segment of digital music portables. This segment eventually became the dominant type of digital music player.
Also at the end of 1999 the first in-dash MP3 player appeared. The Empeg Car and Rio Car (renamed after it was acquired by SONICblue and added to its Rio line of MP3 products) offered players in several capacities ranging from 5 to 28 GB. The unit didn't catch on as SONICblue had hoped, though, and was discontinued in the fall of 2001.
Sony entered the digital audio player market in 1999 with the Vaio Music Clip and Memory Stick Walkman; however, they were technically not MP3 players, as they did not support the MP3 format but instead Sony's own ATRAC format and WMA. The company's first MP3-supporting Walkman player did not come until 2004. The new Walkman players were originally referred to as "Network Walkman", with the introduction of the NW-MS7. This DAP plays audio files using ATRAC compression stored on a removable Memory Stick. Over the years, various hard-drive-based and flash-based DAPs and PMPs have been released under the Walkman range, albeit MP3 support only came in 2004.
Designed by Samsung Electronics, the Samsung YEPP line were first released in 1999 with the aim of making the smallest music players on the market. In 2000, Creative released the 6GB hard drive based Creative NOMAD Jukebox. The name borrowed the jukebox metaphor popularised by Remote Solution, also used by Archos. Later players in the Creative NOMAD range used microdrives rather than laptop drives. In October 2000, South Korean software company Cowon Systems released their first MP3 player, the CW100, under the brand name iAUDIO. Since then the company has released many different players. In December 2000, some months after the Creative's NOMAD Jukebox, Archos released its Jukebox 6000 with a 6GB hard drive.
While popularly being called MP3 players at the time, most players could play more than just the MP3 file format, for example Windows Media Audio (WMA), Advanced Audio Coding (AAC), Vorbis, FLAC, Speex and Ogg. Many MP3 players can encode directly to MP3 or other digital audio format directly from a line in audio signal (radio, voice, etc.). Devices such as CD players can be connected to the MP3 player (using the USB port) in order to directly play music from the memory of the player without the use of a computer.
Modular MP3 keydrive players are composed of two detachable parts: the head (or reader/writer) and the body (the memory). They can be independently obtained and upgradable (one can change the head or the body; i.e. to add more memory).
Growth of market
On 23 October 2001, Apple Computer unveiled the first generation iPod, a 5 GB hard drive based DAP with a 1.8" hard drive and a 2" monochrome display. With the development of a spartan user interface and a smaller form factor, the iPod was initially popular within the Macintosh community. In July 2002, Apple introduced the second generation update to the iPod, which was compatible with Windows computers through Musicmatch Jukebox. iPods quickly became the most popular DAP product and led the fast growth of this market during the early and mid 2000s.
In 2002, Archos released the first "portable media player" (PMP), the Archos Jukebox Multimedia with a little 1.5" colour screen. Manufacturers have since implemented abilities to view images and play videos into their devices. The next year, Archos released another multimedia jukebox, the AV300, with a 3.8" screen and a 20GB hard drive. In the same year, Toshiba released the first Gigabeat. In 2003, Dell launched a line of portable digital music players called Dell DJ. They were discontinued by 2006.
The name "MP4 player" was a marketing term for inexpensive portable media players, usually from little-known or generic device manufacturers. The name itself is a misnomer, since most MP4 players through 2007 were incompatible with the MPEG-4 Part 14 or the .mp4 container format. Instead, the term refers to their ability to play more file types than just MP3. In this sense, in some markets such as Brazil, any new function added to a given media player is followed by an increase in the number, for example an "MP5" or "MP12" player, despite there being no corresponding MPEG standard beyond MPEG-4 in this naming line.
iriver of South Korea originally made portable CD players and then started making digital audio players and portable media players from 2002. Creative also introduced the ZEN line. Both of these attained high popularity in some regions.
In 2004, Microsoft attempted to take advantage of the growing PMP market by launching the Portable Media Center (PMC) platform. It was introduced at the 2004 Consumer Electronics Show with the announcement of the Zen Portable Media Center, which was co-developed by Creative. The Microsoft Zune series would later be based on the Gigabeat S, one of the PMC-implemented players.
In May 2005, flash memory maker SanDisk entered the PMP market with the Sansa line of players, starting with the e100 series, and then following up with the m200 series, and c100 series.
In 2007, Apple introduced the iPod Touch, the first iPod with a multi-touch screen. Some similar products existed before such as the iriver clix in 2006.
PMPs in other categories
Samsung SPH-M2100, the first mobile phone with built-in MP3 player was produced in South Korea in August 1999. Samsung SPH-M100 (UpRoar) launched in 2000 was the first mobile phone to have MP3 music capabilities in the US market. The innovation spread rapidly across the globe and by 2005, more than half of all music sold in South Korea was sold directly to mobile phones and all major handset makers in the world had released MP3 playing phones. By 2006, more MP3 playing mobile phones were sold than all stand-alone MP3 players put together. The rapid rise of the media player in phones was quoted by Apple as a primary reason for developing the iPhone. In 2007, the number of phones that could play media was over 1 billion. Some companies have created music-centric sub-brands for mobile phones, for example the former Sony Ericsson's Walkman range or Nokia's XpressMusic range, which have extra emphasis on music playback and typically have features such as dedicated music buttons.
Mobile phones with PMP functionalities such as video playback also started appearing in the 2000s. Other non-phone products such as the PlayStation Portable have also been considered to be PMPs.
Contemporary
DAPs and PMPs have declined in popularity after the late 2000s due to increasing worldwide adoption of smartphones that already come with PMP functionalities. DAPs continue to be made in lower volumes by manufacturers such as SanDisk, Sony, IRIVER, Philips, Apple, Cowon, and a range of Chinese manufacturers namely Aigo, Newsmy, PYLE and ONDA. They often have specific selling points in the smartphone era, such as portability (for small sized players) or for high quality sound suited for audiophiles.
Typical features
PMPs are capable of playing digital audio, images, and/or video. Usually, a colour liquid crystal display (LCD) or organic light-emitting diode (OLED) screen is used as a display for PMPs that have a screen. Various players include the ability to record video, usually with the aid of optional accessories or cables, and audio, with a built-in microphone or from a line-out cable or FM tuner. Some players include readers for memory cards, which are advertised as giving players extra storage or a means of transferring media. In some players, features of a personal organiser are emulated, or support for video games, like the iriver clix (through compatibility with Adobe Flash Lite) or the PlayStation Portable, is included. Only mid-range to high-end players remember the playback position across power-off (resuming a song or video in progress, much like tape-based media).
Audio playback
Nearly all players are compatible with the MP3 audio format, and many others support Windows Media Audio (WMA), Advanced Audio Coding (AAC) and WAV. Some players are compatible with open-source formats like Ogg Vorbis and the Free Lossless Audio Codec (FLAC). Audio files purchased from online stores may include digital rights management (DRM) copy protection, which many modern players support.
Image viewing
The JPEG format is widely supported by players. Some players, like the iPod series, provide compatibility to display additional file formats like GIF, PNG, and TIFF, while others are bundled with conversion software.
Video playback
Most newer players support the MPEG-4 Part 2 video format, and many other players are compatible with Windows Media Video (WMV) and AVI. Software included with the players may be able to convert video files into a compatible format.
Recording
Many players have a built-in electret microphone which allows recording. Usually recording quality is poor, suitable for speech but not music. There are also professional-quality recorders suitable for high-quality music recording with external microphones, at prices starting at a few hundred dollars.
Radio
Some DAPs have FM radio tuners built in. Many also have an option to change the band from the usual 87.5–108.0 MHz to the Japanese band of 76.0–90.0 MHz. DAPs rarely include an AM band or HD Radio, since such features would either be cost-prohibitive for the application or hampered by AM's sensitivity to interference.
Internet access
Newer portable media players are now coming with Internet access via Wi-Fi. Examples of such devices are Android OS devices by various manufacturers, and iOS devices on Apple products like the iPhone, iPod Touch, and iPad. Internet access has even enabled people to use the Internet as an underlying communications layer for their choice of music for automated music randomisation services like Pandora, to on-demand video access (which also has music available) such as YouTube. This technology has enabled casual and hobbyist DJs to cue their tracks from a smaller package from an Internet connection, sometimes they will use two identical devices on a crossfade mixer. Many such devices also tend to be smartphones.
Last position memory
Many mobile digital media players have last-position memory: when the player is powered off and on again, playback resumes where it left off, so the user does not have to start at the first track again or hear repeats of other songs when a playlist, album, or whole library is cued for shuffle play (itself a common feature). Among earlier playback devices, only tape-based media had anything resembling last-position memory, though tapes suffered from having to be rewound, while disc-based media had no native last-position memory unless the disc player provided its own. Solid-state flash players (and hard-drive players, despite some moving parts) thus offer something of the best of both worlds.
Miscellaneous
Media players' firmware may be equipped with a basic file manager and a text reader.
Common audio formats
There are three categories of audio formats:
Uncompressed PCM audio: Most players can also play uncompressed PCM in a container such as WAV or AIFF.
Lossless audio formats: These compress the audio without discarding any information, so the full fidelity of every song or disc is preserved and the original CD-quality signal can be reconstructed exactly. Apple Lossless (a proprietary format) and FLAC (royalty-free) are increasingly popular formats for lossless compression.
Lossy compression formats: Most audio formats use lossy compression, to produce as small as possible a file compatible with the desired sound quality. There is a trade-off between size and sound quality of lossily compressed files; most formats allow different combinations—e.g., MP3 files may use between 32 (worst), 128 (reasonable) and 320 (best) kilobits per second.
There are also royalty free lossy formats like Vorbis for general music and Speex and Opus used for voice recordings. When "ripping" music from CDs, many people recommend the use of lossless audio formats to preserve the CD quality in audio files on a desktop, and to transcode the music to lossy compression formats when they are copied to a portable player. The formats supported by a particular audio player depends upon its firmware; sometimes a firmware update adds more formats. MP3 and AAC are dominant formats, and are almost universally supported.
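The size/quality trade-off described above is easy to quantify, since a lossy file's size is essentially bitrate times duration. A sketch for a hypothetical four-minute track at the three MP3 bitrates mentioned:

```shell
# Approximate size of a 4-minute track at common MP3 bitrates.
seconds=$((4 * 60))
for kbps in 32 128 320; do
    bytes=$((kbps * 1000 / 8 * seconds))
    echo "$kbps kbit/s -> $((bytes / 1000)) kB"
done
```

That gives roughly 1 MB, 4 MB and 10 MB respectively; for comparison, uncompressed CD-quality PCM (1411 kbit/s) would occupy about 42 MB for the same track.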
Software
PMPs were earlier packaged with an installation CD/DVD that inserts device drivers (and for some players, software that is capable of seamlessly transferring files between the player and the computer). For later players, however, these are usually available online via the manufacturers' websites, or increasingly natively recognised by the operating system through Universal Mass Storage (UMS) or Media Transfer Protocol (MTP).
Hardware
Storage
As with DAPs, PMPs come in either flash or hard disk storage. Storage capacities have reached up to 64 GB for flash memory based PMPs, first reached by the 3rd Generation iPod Touch, and up to 1 TB for hard disk drive PMPs, first achieved by the Archos 5 Internet Tablet.
A number of players support memory card slots, including CompactFlash (CF), Secure Digital (SD), and Memory Sticks. They are used to directly transfer content from external devices, and expand the storage capacity of PMPs.
Interface
A standard PMP uses a 5-way D-pad to navigate. Many alternatives have been used, most notably the wheel and touch mechanisms seen on players from the iPod and Sansa series. Another popular mechanism is the swipe-pad, or 'squircle', first seen on the Zune. Additional buttons are commonly seen for features such as volume control.
Screen
Sizes range all the way up to 7 inches (18 cm). Resolutions also vary, going up to WVGA. Most screens come with a colour depth of 16-bit, but higher quality video-oriented devices may range all the way to 24-bit, otherwise known as true colour, with the ability to display 16.7 million distinct colours. Screens commonly have a matte finish but may also come in glossy to increase colour intensity and contrast. More and more devices are now also coming with touch screen as a form of primary or alternate input. This can be for convenience and/or aesthetic purposes. Certain devices, on the other hand, have no screen whatsoever, reducing costs at the expense of ease of browsing through the media library.
Radio
Some portable media players include a radio receiver, most frequently for FM. FM reception on MP3 players is most common on premium models.
Other features
Some portable media players have recently added features such as simple camera, built-in game emulation (playing Nintendo Entertainment System or other game formats from ROM images) and simple text readers and editors. Newer PMPs have been able to tell time, and even automatically adjust time according to radio reception, and some devices like the 6th-gen iPod Nano even have wristwatch bands available.
Modern MP4 players can play video in a multitude of video formats without the need to pre-convert them or downsize them prior to playing them. Some MP4 Players possess USB ports, to allow users to connect it to a personal computer to sideload files. Some models also have memory card slots to expand the memory of the player instead of storing files in the built-in memory.
Chipsets
Chipsets and file formats that are particular to some PMPs:
Anyka is a chip that's used by many MP4 Players. It supports the same formats as Rockchip.
Fuzhou Rockchip Electronics's video-processing Rockchip chipset has been incorporated into many MP4 players, supporting AVI with no B-frames in MPEG-4 Part 2 (not Part 14), with MP2 audio compression. The clip must be padded out, if necessary, to fit the resolution of the display. Any slight deviation from the supported format results in a "Format Not Supported" error message.
Some players, like the Onda VX979+, have started to use chipsets from Ingenic, which are capable of supporting RealNetworks's video formats. Also, players with SigmaTel-based technology are compatible with SMV (SigmaTel Video).
AMV
The image compression algorithm of this format is inefficient by modern standards (about 4 pixels per byte, compared with over 10 pixels per byte for MPEG-2). There is a fixed range of resolutions (96 × 96 to 208 × 176 pixels) and frame rates (12 or 16 frames per second) available. A 30-minute video would have a file size of approximately 100 MB at a 160 × 120 resolution.
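The quoted figure can be reproduced from the numbers above (a sketch assuming the lower 12 frame/s rate and ignoring the audio track):

```shell
# 160 x 120 pixels at ~4 pixels/byte, 12 frames/s, for 30 minutes.
bytes=$((160 * 120 / 4 * 12 * 30 * 60))
echo "$bytes bytes (~$((bytes / 1000000)) MB)"
```

This yields about 103 MB of video data, matching the "approximately 100 MB" figure; the accompanying MP3-style audio adds a few megabytes more.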
MTV
The MTV video format (no relation to the cable network) consists of a 512-byte file header that operates by displaying a series of raw image frames during MP3 playback. During this process, audio frames are passed to the chipset's decoder, while the memory pointer of the display's hardware is adjusted to the next image within the video stream. This method does not require additional hardware for decoding, though it will lead to a higher amount of memory consumption. For that reason, the storage capacity of an MP4 player that uses MTV files is effectively less than that of a player that decompresses files on the fly.
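A rough, back-of-the-envelope sketch of that storage penalty; the frame size, colour depth and frame rate here are illustrative assumptions, not values from any MTV specification:

```python
# Hypothetical calculation: storage cost of raw (uncompressed) 16-bit
# frames, as used by header-plus-raw-frame formats. All parameters
# below are illustrative assumptions.

def raw_video_bytes(width, height, fps, seconds, bytes_per_pixel=2):
    """Bytes needed to store raw frames at the given size and rate."""
    return width * height * bytes_per_pixel * fps * seconds

# One minute of 128 x 96 video at 12 frames per second:
mb = raw_video_bytes(128, 96, 12, 60) / 1e6
print(f"{mb:.1f} MB per minute of raw frames")  # ~17.7 MB
```

Even at these small resolutions, raw frames consume storage far faster than a compressed stream would, which is the effective-capacity trade-off described above.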
Operation
Digital sampling is used to convert an audio wave to a sequence of binary numbers that can be stored in a digital format, such as MP3. Common features of all MP3 players are a memory storage device, such as flash memory or a miniature hard disk drive, an embedded processor, and an audio codec microchip to convert the compressed file into an analogue sound signal. During playback, audio files are read from storage into a RAM based memory buffer, and then streamed through an audio codec to produce decoded PCM audio. Typically audio formats decode at double to more than 20 times real speed on portable electronic processors, requiring that the codec output be stored for a time until the DAC can play it. To save power, portable devices may spend much or nearly all of their time in a low power idle state while waiting for the DAC to deplete the output PCM buffer before briefly powering up to decode additional audio.
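The decode-ahead strategy can be sketched numerically; the decode speed and buffer size below are illustrative assumptions, not figures from any particular player:

```python
# Sketch of the decode-ahead power strategy: the CPU decodes in short
# bursts, fills a PCM buffer, then idles while the DAC drains it.

SAMPLE_RATE = 44_100      # PCM frames per second (assumed)
DECODE_SPEED = 20         # decoder runs ~20x faster than real time (assumed)

def awake_fraction(decode_speed):
    """Fraction of wall-clock time spent decoding rather than idling."""
    return 1.0 / decode_speed

def burst_seconds(buffer_frames, decode_speed, sample_rate=SAMPLE_RATE):
    """How long one decode burst takes to refill an empty PCM buffer."""
    return buffer_frames / (decode_speed * sample_rate)

# With a 1-second buffer, each refill burst lasts 1/20 s and the CPU
# can stay in its low-power state for the remaining 95% of the time.
print(f"awake {awake_fraction(DECODE_SPEED):.0%} of the time")
```

The faster the codec decodes relative to real time, the smaller the awake fraction, which is why fast decoding directly extends battery life.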
Most DAPs are powered by rechargeable batteries, some of which are not user-replaceable. They have a 3.5 mm stereo jack; music can be listened to with earbuds or headphones, or played via an external amplifier and speakers. Some devices also contain internal speakers, through which music can be listened to, although these built-in speakers are typically of very low quality.
Nearly all DAPs consist of some kind of display screen, although there are exceptions, such as the iPod Shuffle, and a set of controls with which the user can browse through the library of music contained in the device, select a track, and play it back. The display, if the unit has one, can be anything from a simple one- or two-line monochrome LCD, similar to those found on typical pocket calculators, to a large, high-resolution, full-color display capable of showing photographs or video content. The controls can range from the simple buttons found on most typical CD players, such as for skipping through tracks or stopping/starting playback, to full touch-screen controls, such as those found on the iPod Touch or the Zune HD. One of the more common methods of control is some type of scroll wheel with associated buttons. This method of control was first introduced with the Apple iPod, and many other manufacturers have created variants of this control scheme for their respective devices.
Content is typically placed on DAPs through a process called "syncing": connecting the device to a personal computer, typically via USB, and running any special software that is often provided with the DAP on a CD-ROM included with the device, or downloaded from the manufacturer's website. Some devices simply appear as an additional disk drive on the host computer, to which music files are simply copied like any other type of file. Other devices, most notably the Apple iPod and Microsoft Zune, require the use of special management software, such as iTunes or the Zune software, respectively. The music, or other content such as TV episodes or movies, is added to the software to create a "library". The library is then "synced" to the DAP via the software. The software typically provides options for managing situations when the library is too large to fit on the device being synced to. Such options include allowing manual syncing, whereby the user can manually "drag and drop" the desired tracks to the device, or allowing the creation of playlists. In addition to the USB connection, some of the more advanced units now allow syncing through a wireless connection, such as via Wi-Fi or Bluetooth.
Content can also be obtained and placed on some DAPs, such as the iPod Touch or Zune HD by allowing access to a "store" or "marketplace", most notably the iTunes Store or Zune Marketplace, from which content, such as music and video, and even games, can be purchased and downloaded directly to the device.
Digital signal processing
A growing number of portable media players are including audio processing chips that allow digital effects like 3D audio effects, dynamic range compression and equalisation of the frequency response. Some devices adjust loudness based on Fletcher–Munson curves. Some media players are used with Noise-cancelling headphones that use Active noise reduction to remove background noise.
De-noise mode
De-noise mode is an alternative to Active noise reduction. It provides for relatively noise-free listening to audio in a noisy environment. In this mode, audio intelligibility is improved through selective gain reduction of the ambient noise. This method splits external signals into frequency components by a "filterbank" (according to the peculiarities of human perception of specific frequencies) and processes them using adaptive audio compressors. The operation thresholds of the adaptive audio compressors (in contrast to "ordinary" compressors) are regulated depending on the ambient noise level in each specific band. Reshaping of the processed signal from the adaptive compressor outputs is realised in a synthesis filterbank. This method improves the intelligibility of speech signals and music. The best effect is obtained while listening to audio in environments with constant noise (in trains, automobiles, planes) or with a fluctuating noise level (e.g. in a metro). Improving signal intelligibility under ambient noise allows users to hear audio well while preserving their hearing, in contrast to simple volume amplification.
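A minimal sketch of the per-band idea, with compressor thresholds that track the ambient noise; the specific compressor law, ratio and margin are assumptions, since real implementations are not specified here:

```python
def adaptive_compress(band, threshold, ratio=4.0):
    """One band's compressor: attenuate the part of each sample that
    exceeds the threshold; the threshold tracks that band's noise level."""
    out = []
    for x in band:
        mag = abs(x)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if x >= 0 else -mag)
    return out

def denoise(bands, noise_levels, margin=0.1):
    """Per-band processing: each compressor threshold is set just above
    the measured ambient noise in that frequency band."""
    return [adaptive_compress(band, noise + margin)
            for band, noise in zip(bands, noise_levels)]
```

A synthesis filterbank would then recombine the processed bands into the output signal, as described above.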
Natural mode
Natural mode is characterised by a subjective effect of balance between sounds of different frequencies, regardless of the level of distortion appearing in the reproduction device and of the user's personal ability to perceive specific sound frequencies (excluding obvious hearing loss). The natural effect is obtained with a special sound-processing algorithm (a "formula of subjective equalisation of the frequency-response function"). Its principle is to assess the frequency response function (FRF) of the media player or other sound-reproduction device against the user's subjective audibility threshold in silence, and to apply a gain-modifying factor. The factor is determined with the help of an integrated function that tests the audibility threshold: the program generates tone signals (with divergent oscillations – from minimum volume at 30–45 Hz to maximum volume at approximately 16 kHz), and the user assesses their subjective audibility. The principle is similar to in-situ audiometry, used in medicine to prescribe a hearing aid. However, the results of the test can be used only to a limited extent, since the FRF of a sound device depends on reproduction volume. This means the correction coefficient should be determined several times – for various signal strengths – which is not a particular problem from a practical standpoint.
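The gain-correction step can be sketched as follows; the reference curve and the 12 dB cap are illustrative assumptions, not values from any player:

```python
def correction_gains_db(user_thresholds_db, reference_thresholds_db,
                        max_boost_db=12.0):
    """For each test frequency, boost by the amount the user's measured
    audibility threshold exceeds a reference curve (capped for safety)."""
    return [min(max(float(user) - ref, 0.0), max_boost_db)
            for user, ref in zip(user_thresholds_db, reference_thresholds_db)]

# A user who hears low tones normally but needs +10 dB near the top
# of the tested range:
print(correction_gains_db([20, 22, 35], [20, 22, 25]))  # [0.0, 0.0, 10.0]
```

As the text notes, such a correction is volume-dependent, so in practice a table of gain sets would be measured at several signal strengths.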
Sound around mode
Sound around mode allows for real time overlapping of music and the sounds surrounding the listener in their environment, which are captured by a microphone and mixed into the audio signal. As a result, the user may hear playing music and external sounds of the environment at the same time. This can increase user safety (especially in big cities and busy streets), as a user can hear a mugger following them or hear an oncoming car.
Controversy
Digital audio players themselves are not usually controversial, but they are involved in matters of continuing controversy and litigation, including but not limited to content distribution and protection, and digital rights management (DRM).
Lawsuit with RIAA
The Recording Industry Association of America (RIAA) filed a lawsuit in late 1998 against Diamond Multimedia for its Rio players, alleging that the device encouraged copying music illegally. But Diamond won a legal victory on the shoulders of the Sony Corp. v. Universal City Studios case and DAPs were legally ruled as electronic devices.
Risk of hearing damage
According to the Scientific Committee on Emerging and Newly Identified Health Risks, the risk of hearing damage from digital audio players depends on both sound level and listening time. The listening habits of most users are unlikely to cause hearing loss, but some people are putting their hearing at risk, because they set the volume control very high or listen to music at high levels for many hours per day. Such listening habits may result in temporary or permanent hearing loss, tinnitus, and difficulties understanding speech in noisy environments.
The World Health Organization warns that increasing use of headphones and earphones puts 1.1 billion teenagers and young adults at risk of hearing loss due to unsafe use of personal audio devices. Many smartphones and personal media players are sold with earphones that do a poor job of blocking ambient noise, leading some users to turn up the volume to the maximum level to drown out street noise. People listening to their media players on crowded commutes sometimes play music at high volumes to feel a sense of separation, freedom and escape from their surroundings.
The World Health Organization recommends that "the highest permissible level of noise exposure in the workplace is 85 dB up to a maximum of eight hours per day" and that time in "nightclubs, bars and sporting events" should be limited, because these venues can expose patrons to noise levels of 100 dB.
The report also recommends that governments raise awareness of hearing loss, and that people visit a hearing specialist if they experience symptoms of hearing loss, which include pain, ringing or buzzing in the ears.
A study by the National Institute for Occupational Safety and Health found that employees at bars, nightclubs and other music venues were exposed to noise levels above the internationally recommended limit of 82–85 dB(A) per eight hours. This growing phenomenon has led to the coining of the term music-induced hearing loss, which includes hearing loss as a result of overexposure to music on personal media players.
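The interplay of sound level and listening time follows an equal-energy rule; the sketch below uses an 85 dB / 8-hour criterion with a 3 dB exchange rate (a NIOSH-style assumption, not a figure taken from the cited study):

```python
def permissible_hours(level_db, criterion_db=85.0, exchange_db=3.0):
    """Equal-energy rule: the allowed listening time halves for every
    +3 dB above the 85 dB / 8-hour criterion."""
    return 8.0 / (2.0 ** ((level_db - criterion_db) / exchange_db))

print(permissible_hours(85))   # 8.0 hours
print(permissible_hours(100))  # 0.25 hours, i.e. 15 minutes at club levels
```

This is why the recommendations above pair a level limit with a time limit: at 100 dB the same daily noise dose is reached in minutes rather than hours.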
FCC issues
Some MP3 players have electromagnetic transmitters as well as receivers. Many MP3 players have built-in FM radios, but FM transmitters are not usually built in because of the liability of transmitter feedback from simultaneous transmission and reception of FM. Also, certain features like Wi-Fi and Bluetooth can interfere with professional-grade communications systems, such as those used by aircraft at airports.
See also
Comparison of portable media players
Digital video recorder
Internet radio device
Mixtape
Notel
Portable DVD player
Walkman Circ
References
External links
Collecting MP3 Portables – Part I, Part II and Part III – Richard Menta's three-part article covers the first digital audio players on the market with pictures of each player.
MP3
Boombox culture
Audio hobbies
|
55015457
|
https://en.wikipedia.org/wiki/PureOS
|
PureOS
|
PureOS is a Linux distribution focusing on privacy and security, using the GNOME desktop environment. It is maintained by Purism for use in the company's Librem laptop computers as well as the Librem 5 smartphone.
PureOS is designed to include only free/libre and open-source software (FOSS/FLOSS), and is included in the list of Free Linux distributions published by the Free Software Foundation.
PureOS is a Debian-based Linux distribution, merging open-source software packages from the Debian “testing” main archive using a hybrid point release and rolling release model. The default web browser in PureOS is called PureBrowser, a variant of GNOME Web focusing on privacy. The default search engine in PureBrowser is DuckDuckGo.
See also
Librem (computer)
Librem 5 (phone)
Purism (company)
GNU Free System Distribution Guidelines
List of Linux distributions based on Debian testing
branch
References
External links
PureOS at DistroWatch
Debian-based distributions
Mobile operating systems
ARM operating systems
GNOME Mobile
Mobile Linux
Mobile/desktop convergence
Free mobile software
Free software only Linux distributions
Linux distributions
|
1199082
|
https://en.wikipedia.org/wiki/Directory%20Opus
|
Directory Opus
|
Directory Opus (or "DOpus" as its users tend to call it) is a file manager program, originally written for the Amiga computer system in the early to mid-1990s. Commercial development on the version for the Amiga ceased in 1997. Directory Opus is still being actively developed and sold for the Microsoft Windows operating system by GPSoftware and there are open source releases of Directory Opus 4 and 5 for Amiga.
Directory Opus was originally developed by, and is still written by, Australian Jonathan Potter. Until 1994, it was published by well-known Amiga software company Inovatronics, when Potter joined with Greg Perry and the Australian-based GPSoftware to continue its development, and has since been published by GPSoftware.
Features
Directory Opus has evolved since its first release in 1990 as a basic two-panel file manager. The interface has evolved significantly due to feedback given by its users. Some of the features include:
Single or dual-panel exploring.
Folder tree (either shared or separate for dual-display).
Tabbed explorer panels.
Ability to maintain date created/modified timestamps for both files and folders.
Internal handling of ZIP, RAR, 7Zip and other archive formats (browse them like folders).
Internal FTP handling, including (for a small extra fee) advanced FTP and SSH (browse these like folders also).
Internal MTP handling for portable devices like phones and cameras.
Flat-file display, where you can flatten a folder tree and even hide the folders themselves.
Powerful file selection and renaming tools, with advanced regex.
User-definable toolbars, menus, filetypes and filetype groups.
Preview panel, with preview of thumbnails (including animated avi thumbnails).
File collections. These are like virtual folders that contain links to the original files (unlike shortcuts, these actually deal with the files directly).
History
Release history
Amiga release history
Opus 1: January 1990
Opus 2: February 1991
Opus 3: 1991-12-01
Opus 4: 1992-12-04
Opus 5: 1995-04-12
Opus 5.5: 1996-08-01
Opus Magellan (5.6): 1997-05-17
Opus Magellan II (5.8): 1998-11-01
Opus Magellan II GPL (5.90): 2014-05-11
Versions 1 and 2 were only available direct from the author. Versions 3 and 4 were published by Inovatronics. Versions since 5 have been published by GPSoftware (German versions were published by Stefan Ossowskis Schatztruhe). The full version of Magellan II is included for free with the AmiKit package.
Windows major release history
Opus 6: 2001-06-18
Opus 8: 2004-10-04
Opus 9: 2007-04-27
Opus 10: 2011-04-30
Opus 11: 2014-03-03
Opus 12: 2016-09-05
All Windows versions published by GPSoftware. (German versions published by Haage & Partner Computer GmbH.)
Open source release history
GPSoftware released the older Amiga Directory Opus 4 source code in 2000 as open-source under the GNU General Public License. AmigaOS4, AROS and MorphOS ports of this version were made available. Magellan II was released as open source under the AROS Public License in December 2012.
The open source 'Worker' file manager is heavily inspired by the Directory Opus 4 series.
See also
Comparison of file managers
References
External links
Directory Opus 4 Research Project
Opus Resource Centre Forum
Getting to know Directory Opus (Guide)
Orthodox file managers
Amiga software
Utilities for Windows
|
937514
|
https://en.wikipedia.org/wiki/SURAnet
|
SURAnet
|
SURAnet was a pioneer in scientific computer networks and one of the regional backbone computer networks that made up the National Science Foundation Network (NSFNET). Many later Internet communications standards and protocols were developed by SURAnet.
How SURAnet started
The Southeastern Universities Research Association was created in December 1980 by scientists and university administrators throughout the southeastern United States, primarily led by the University of Virginia, the College of William & Mary, and the University of Maryland, College Park. The chief goal of SURA was the development of a particle accelerator for research in nuclear physics; this facility is now known as the Thomas Jefferson National Accelerator Facility. By the mid-1980s it was clear that access to high-capacity computer resources would be needed to facilitate collaboration among the SURA member institutions. A high-performance network to provide this access was essential, but no single institution could afford to develop such a system. SURA itself stepped up to the challenge and, with support from the U.S. National Science Foundation (NSF) and SURA universities, SURAnet was up and running in 1987, and was part of the first phase of National Science Foundation Network (NSFNET) funding as the agency built a network to facilitate scientific collaboration. SURAnet was one of the first and one of the largest Internet providers in the United States. SURA sites first used a 56 kbit/s backbone in 1987 which was upgraded to 1.5 Mbit/s (DS1) in 1989, and to a 45 Mbit/s (DS3) backbone in 1991. FIX East and MAE-East, both major peering points, were located at the main SURA facilities. Large-scale collaboration among SURA-affiliated scientists became an everyday reality.
Role of SURAnet in the development of the Internet
SURAnet participated in the development of Internet communications standards and telecommunications protocols that enabled researchers and federal agencies to communicate and work in this early Internet environment. SURAnet was one of the first NSFNET regional networks to become operational. SURAnet provided networking services for universities and industry, and was one of the first TCP/IP networks to sell commercial connections, when IBM Research in Raleigh-Durham, North Carolina was connected in 1987–1988. It was also the first network to attempt to convert to OSPF in 1990.
Beyond SURAnet
SURAnet was so successful that it outgrew SURA's primary mission, and the SURA Board approved its sale to Bolt, Beranek and Newman in 1995.
Many of the protocols and procedures created under SURAnet are still in use in the commercial Internet today. SURA continues to be a force in the information technology community, participating in projects such as the Extreme Science and Engineering Discovery Environment (XSEDE), Earthcube, and AtlanticWave.
References
Computer networking
|
17086566
|
https://en.wikipedia.org/wiki/SUN%20workstation
|
SUN workstation
|
The SUN workstation was a modular computer system designed at Stanford University in the early 1980s. It became the seed technology for many commercial products, including the original workstations from Sun Microsystems.
History
In 1979 Xerox donated some Alto computers, developed at their Palo Alto Research Center, to Stanford's Computer Science Department, as well as other universities that were developing the early Internet. The Altos were connected using Ethernet to form several local area networks. The SUN's design was inspired by that of the Alto, but used lower-cost modular components. The project name was derived from the initials of the campus' Stanford University Network.
Professor Forest Baskett suggested the best-known configuration: a relatively low-cost personal workstation for computer-aided logic design work. The design created a 3M computer: a 1 million instructions per second (MIPS) processor, 1 Megabyte of memory and a 1 Megapixel raster scan bit-map graphics display. Sometimes the $10,000 estimated price was called the fourth "M" — a "Megapenny".
Director of Computer Facilities Ralph Gorin suggested other configurations and initially funded the project.
Graduate student Andy Bechtolsheim designed the hardware, with several other students and staff members assisting with software and other aspects of the project. Vaughan Pratt became unofficial faculty leader of the project in 1980.
Three key technologies made the SUN workstation possible: very large-scale integration (VLSI) integrated circuits, Multibus and ECAD.
ECAD (Electronic Computer Assisted Design, now known as Electronic design automation) allowed a single designer to quickly develop systems of greater complexity.
The Stanford Artificial Intelligence Laboratory (SAIL) had pioneered personal display terminals, but the 1971 system was showing its age. Bechtolsheim used the Stanford University Drawing System (SUDS) to design the SUN boards on the SAIL system. SUDS had been originally developed for the Foonly computer.
The Structured Computer Aided Logic Design (SCALD) package was then used to verify the design, automate layout and produce wire-wrapped prototypes and then printed circuit boards.
VLSI integrated circuits finally allowed for a high-level of hardware functionality to be included in a single chip. The graphics display controller was the first board designed, published in 1980. A Motorola 68000 CPU, along with memory, a parallel port controller and a serial port controller, were included on the main CPU board designed by Bechtolsheim. The third board was an interface to the 2.94 Mbits/second experimental Ethernet (before the speed was standardized at 10 Mbits/second).
The Multibus computer interface made it possible to use standard enclosures, and to use circuit boards made by different vendors to create other configurations.
For example, the CPU board combined with a multi-port serial controller created a terminal server (called a TIP, for Terminal Interface Processor) which connected many terminals to the Digital Equipment Corporation time-sharing systems at Stanford or anywhere on the Internet.
Configuring multiple Ethernet controllers (including commercial ones, once they were available) with one CPU board created a router. William Yeager wrote the software, which was later adopted and evolved by Cisco Systems on its version of the hardware.
Les Earnest licensed the CPU board for one of the first commercial low-cost laser printer controllers at a company called Imagen.
The processor board was combined with a prototype high performance graphics display by students of James H. Clark.
That group later formed Silicon Graphics Incorporated.
Eventually about ten SUN workstations were built during 1981 and 1982, after which Stanford declined to build any more. Bechtolsheim then licensed the hardware design to several vendors, but was frustrated that none of them had chosen to build a workstation.
Vinod Khosla, also from Stanford, convinced Bechtolsheim along with Scott McNealy to found Sun Microsystems in order to build the Sun-1 workstation, which included some improvements to the earlier design.
Other faculty members who did research using SUN workstations included David Cheriton, Brian Reid, and John Hennessy.
See also
NuMachine, a similar MIT project
References
External links
History of computing hardware
Sun Microsystems
Stanford University
Computer workstations
68k architecture
|
1999738
|
https://en.wikipedia.org/wiki/SIGCSE
|
SIGCSE
|
SIGCSE is the Association for Computing Machinery's (ACM) Special Interest Group (SIG) on Computer Science Education (CSE), which provides a forum for educators to discuss issues related to the development, implementation, and/or evaluation of computing programs, curricula, and courses, as well as syllabi, laboratories, and other elements of teaching and pedagogy. SIGCSE is also the name of one of the four annual conferences organized by SIGCSE.
The main focus of SIGCSE is higher education, but discussions also include improving computer science education at the high school level and below. The membership level has held steady at around 3300 members for several years. The current chair of SIGCSE is Adrienne Decker for July 1, 2019 to June 30, 2022.
Conferences
SIGCSE has four annual conferences.
The SIGCSE Technical Symposium on Computer Science Education is held in the United States with an average annual attendance of approximately 1800 in recent years. The next conference will be held March 11 through March 14, 2020 in Portland, Oregon.
The annual conference on Innovation and Technology in Computer Science Education (ITiCSE). The next ITiCSE will be held June 26 - July 1 virtually, hosted by Paderborn University in Paderborn, Germany. This conference is attended by about 200-300 and is mainly held in Europe, but has also been held in countries outside of Europe (Turkey - 2010), (Israel - 2012), and (Peru - 2016).
The International Computing Education Research (ICER) conference. This conference has about 70 attendees and is held in the United States every other year. On the alternate years it rotates between Europe and Australasia. The next conference will be held August 10-12, 2020 in Dunedin, New Zealand.
The ACM Global Computing Education (CompEd) conference. This conference will be held at locations outside of the typical North American and European locations. The first annual conference was held in Chengdu, China between the 17th and 19th of May 2019.
Newsletter/Bulletin
The SIGCSE Bulletin is a quarterly newsletter that was first published in 1969. It evolved from an informal gathering of news and ideas to a venue for columns, editor-reviewed articles, research announcements, editorials, symposium proceedings, etc.
In 2010, with the inception of ACM Inroads magazine, the Bulletin was transformed into an electronic newsletter sent to all SIGCSE members providing communications about SIGCSE: announcing activities, publicizing events, and highlighting topics of interest. In other words, it has returned to its roots.
Awards
SIGCSE has two main awards that are given out annually.
Outstanding Contribution Award
The SIGCSE Award for Outstanding Contribution to Computer Science Education is given annually since 1981.
Lifetime Service Award
The SIGCSE Award for Lifetime Service to Computer Science Education has been awarded annually since 1997.
SIGCSE Board
The current SIGCSE Board for July 1, 2019 – June 30, 2022 is:
Adrienne Decker, Chair
Amber Settle, Past Chair
Dan Garcia, Vice-Chair
Andrew Luxton-Reilly, Treasurer
Leo Porter, Secretary
Mary Anne Egan, at-large member
Laurie Murphy, at-large member
Manuel Perez-Quinones, at-large member
SIGCSE Chairs over the years:
Amber Settle, 2016-19
Susan H. Rodger, 2013–16
Renee McCauley, 2010-2013
Barbara Boucher Owens, 2007–10
Henry Walker, 2001-2007
Bruce Klein, 1997-01
, 1993–97
Nell B. Dale, 1991–93
References
Association for Computing Machinery Special Interest Groups
Computer science education
|
27968475
|
https://en.wikipedia.org/wiki/PocketMac
|
PocketMac
|
PocketMac Software is a small software developer and publisher that produces software primarily for Macintosh-based systems. The company, founded in 2000, is run by two brothers, Terence Goggin and Tim Goggin. It is headquartered in San Diego, California.
History
One of their best-known products, PocketMac Pro, was released in 2002. It was the first software able to sync Pocket PCs to Mac computers.
In 2006, Research In Motion licensed PocketMac for BlackBerry and started offering it as a free download. This license remained in place until 2009.
Products
Software
2002- PocketMac Pro
2004- PocketMac for Blackberry
2004- iPod addition offering PDA-capabilities for both Mac- and Windows-based iPod users.
2005- PocketMac Lite - a lite version of PocketMac Pro, ppcTunes, a Windows utility to sync iTunes to Pocket PCs, iCalPrinter, a utility to print iCal appointments like Entourage.
2008- PocketMac for iPhones
2008- Ringtone Studio for iPhone and Blackberry
2010- PlayNice Syncs between Mac and PC
iPhone Apps/Games
2009- Shivering Kittens - a tetris-like game
2009- Puzzlicious - a jigsaw puzzle game
2009- Rock Paper Airplane - a 1950s-style paper airplane flight simulator
2010- Uniformity -an app to help Navy sailors build their uniform
External links
Uniformity
References
Macintosh software companies
|
415847
|
https://en.wikipedia.org/wiki/Rewriting
|
Rewriting
|
In mathematics, computer science, and logic, rewriting covers a wide range of methods of replacing subterms of a formula with other terms. Such methods may be achieved by rewriting systems (also known as rewrite systems, rewrite engines, or reduction systems). In their most basic form, they consist of a set of objects, plus relations on how to transform those objects.
Rewriting can be non-deterministic. One rule to rewrite a term could be applied in many different ways to that term, or more than one rule could be applicable. Rewriting systems then do not provide an algorithm for changing one term to another, but a set of possible rule applications. When combined with an appropriate algorithm, however, rewrite systems can be viewed as computer programs, and several theorem provers and declarative programming languages are based on term rewriting.
Example cases
Logic
In logic, the procedure for obtaining the conjunctive normal form (CNF) of a formula can be implemented as a rewriting system. The rules of an example of such a system would be:
¬¬A → A   (double negation elimination)
¬(A ∧ B) → ¬A ∨ ¬B,   ¬(A ∨ B) → ¬A ∧ ¬B   (De Morgan's laws)
(A ∧ B) ∨ C → (A ∨ C) ∧ (B ∨ C),   A ∨ (B ∧ C) → (A ∨ B) ∧ (A ∨ C)   (distributivity)
where the symbol (→) indicates that an expression matching the left-hand side of the rule can be rewritten to one formed by the right-hand side, and the symbols A, B and C each denote a subexpression. In such a system, each rule is chosen so that the left side is equivalent to the right side, and consequently when the left side matches a subexpression, performing a rewrite of that subexpression from left to right maintains logical consistency and the value of the entire expression.
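The CNF rules (double negation elimination, De Morgan's laws, distributivity) can be sketched as an executable rewriting system; representing formulas as nested tuples is an illustrative encoding choice, not part of the standard presentation:

```python
# Formulas are variables (strings) or tuples: ('not', A), ('and', A, B),
# ('or', A, B). Each rule rewrites a matching subexpression left to right.

def step(f):
    """Apply one rule at the top of f, or return None if none matches."""
    if isinstance(f, str):
        return None
    if f[0] == 'not' and not isinstance(f[1], str):
        inner = f[1]
        if inner[0] == 'not':                         # double negation
            return inner[1]
        if inner[0] == 'and':                         # De Morgan
            return ('or', ('not', inner[1]), ('not', inner[2]))
        if inner[0] == 'or':                          # De Morgan
            return ('and', ('not', inner[1]), ('not', inner[2]))
    if f[0] == 'or':
        a, b = f[1], f[2]
        if not isinstance(b, str) and b[0] == 'and':  # distributivity
            return ('and', ('or', a, b[1]), ('or', a, b[2]))
        if not isinstance(a, str) and a[0] == 'and':  # distributivity
            return ('and', ('or', a[1], b), ('or', a[2], b))
    return None

def step_anywhere(f):
    """Apply one rule somewhere in f, or None if f is in normal form."""
    r = step(f)
    if r is not None:
        return r
    if isinstance(f, str):
        return None
    for i in range(1, len(f)):
        r = step_anywhere(f[i])
        if r is not None:
            return f[:i] + (r,) + f[i + 1:]
    return None

def to_cnf(f):
    """Rewrite until no rule applies; the normal form is in CNF."""
    while (g := step_anywhere(f)) is not None:
        f = g
    return f
```

For example, `to_cnf(('not', ('and', 'p', 'q')))` yields `('or', ('not', 'p'), ('not', 'q'))`, the CNF of ¬(p ∧ q).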
Arithmetic
Term rewriting systems can be employed to compute arithmetic operations on natural numbers.
To this end, each such number has to be encoded as a term.
The simplest encoding is the one used in the Peano axioms, based on the constant 0 (zero) and the successor function S.
For example, the numbers 0, 1, 2, and 3 are represented by the terms 0, S(0), S(S(0)), and S(S(S(0))), respectively.
The following term rewriting system can then be used to compute the sum and product of given natural numbers:
(1)   A + 0 → A
(2)   A + S(B) → S(A + B)
(3)   A ⋅ 0 → 0
(4)   A ⋅ S(B) → A + (A ⋅ B)
For example, the computation of 2+2 to result in 4 can be duplicated by term rewriting as follows:
S(S(0)) + S(S(0))  →(2)  S( S(S(0)) + S(0) )  →(2)  S(S( S(S(0)) + 0 ))  →(1)  S(S(S(S(0)))),
where the applied rule numbers are given in parentheses at each rewrites-to arrow.
As another example, the computation of 2⋅2 looks like:
S(S(0)) ⋅ S(S(0))  →(4)  S(S(0)) + ( S(S(0)) ⋅ S(0) )  →(4)  S(S(0)) + ( S(S(0)) + ( S(S(0)) ⋅ 0 ) )  →(3)  S(S(0)) + ( S(S(0)) + 0 )  →(1)  S(S(0)) + S(S(0))  →*  S(S(S(S(0)))),
where the last step comprises the previous example computation.
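A minimal executable sketch of this Peano-style system (rules: A+0 → A, A+S(B) → S(A+B), A⋅0 → 0, A⋅S(B) → A+(A⋅B)); encoding terms as nested tuples is an illustrative representation choice:

```python
# Terms are '0', ('S', t), ('+', a, b), or ('*', a, b); the four rules
# are applied until a normal form (a pure successor numeral) remains.

def step(t):
    """Apply one rewrite rule at the top of t, or return None."""
    if not isinstance(t, tuple):
        return None
    if t[0] == '+':
        a, b = t[1], t[2]
        if b == '0':                              # (1) A + 0    -> A
            return a
        if isinstance(b, tuple) and b[0] == 'S':  # (2) A + S(B) -> S(A + B)
            return ('S', ('+', a, b[1]))
    if t[0] == '*':
        a, b = t[1], t[2]
        if b == '0':                              # (3) A * 0    -> 0
            return '0'
        if isinstance(b, tuple) and b[0] == 'S':  # (4) A * S(B) -> A + (A * B)
            return ('+', a, ('*', a, b[1]))
    return None

def normalize(t):
    """Rewrite innermost-first until no rule applies."""
    if isinstance(t, tuple):
        t = (t[0],) + tuple(normalize(x) for x in t[1:])
    while (r := step(t)) is not None:
        t = normalize(r)
    return t

def encode(n):
    return '0' if n == 0 else ('S', encode(n - 1))

def decode(t):
    n = 0
    while t != '0':
        t = t[1]
        n += 1
    return n
```

For instance, `decode(normalize(('+', encode(2), encode(2))))` reproduces the 2+2 derivation and returns 4.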
Linguistics
In linguistics, phrase structure rules, also called rewrite rules, are used in some systems of generative grammar, as a means of generating the grammatically correct sentences of a language. Such a rule typically takes the form , where A is a syntactic category label, such as noun phrase or sentence, and X is a sequence of such labels or morphemes, expressing the fact that A can be replaced by X in generating the constituent structure of a sentence. For example, the rule means that a sentence can consist of a noun phrase (NP) followed by a verb phrase (VP); further rules will specify what sub-constituents a noun phrase and a verb phrase can consist of, and so on.
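Generating sentences with such rewrite rules can be sketched as follows; the miniature grammar is an invented example, not from the text:

```python
import random

# Rewrite rules: each category label expands to one of its alternatives.
# This miniature grammar is an illustrative assumption.
RULES = {
    'S':  [['NP', 'VP']],                    # a sentence is NP followed by VP
    'NP': [['the', 'dog'], ['the', 'cat']],
    'VP': [['sleeps'], ['sees', 'NP']],
}

def generate(symbol, rng):
    """Keep rewriting category labels until only words (terminals) remain."""
    if symbol not in RULES:
        return [symbol]                      # a terminal word
    expansion = rng.choice(RULES[symbol])
    return [word for part in expansion for word in generate(part, rng)]

print(' '.join(generate('S', random.Random(0))))
```

Every generated string is grammatical with respect to the rule set, which is exactly the sense in which phrase structure rules "generate" the sentences of a language.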
Abstract rewriting systems
From the above examples, it is clear that we can think of rewriting systems in an abstract manner. We need to specify a set of objects and the rules that can be applied to transform them. The most general (unidimensional) setting of this notion is called an abstract reduction system or abstract rewriting system (abbreviated ARS). An ARS is simply a set A of objects, together with a binary relation → on A called the reduction relation, rewrite relation or just reduction.
Many notions and notations can be defined in the general setting of an ARS. is the reflexive transitive closure of . is the symmetric closure of . is the reflexive transitive symmetric closure of . The word problem for an ARS is determining, given x and y, whether . An object x in A is called reducible if there exists some other y in A such that ; otherwise it is called irreducible or a normal form. An object y is called a "normal form of x" if , and y is irreducible. If the normal form of x is unique, then this is usually denoted with . If every object has at least one normal form, the ARS is called normalizing. or x and y are said to be joinable if there exists some z with the property that . An ARS is said to possess the Church–Rosser property if implies . An ARS is confluent if for all w, x, and y in A, implies . An ARS is locally confluent if and only if for all w, x, and y in A, implies . An ARS is said to be terminating or noetherian if there is no infinite chain . A confluent and terminating ARS is called convergent or canonical.
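These notions can be made concrete for a finite ARS. The sketch below represents the reduction relation as a set of pairs; the helper names are ad hoc, chosen only to mirror the definitions in the text:

```python
# A toy finite ARS given as a set of (x, y) pairs meaning x -> y.

def reachable(rel, x):
    """All y with x ->* y: elements reachable from x under the reflexive
    transitive closure of the reduction relation."""
    seen, frontier = {x}, [x]
    while frontier:
        a = frontier.pop()
        for (u, v) in rel:
            if u == a and v not in seen:
                seen.add(v)
                frontier.append(v)
    return seen

def is_normal_form(rel, x):
    """x is irreducible: there is no y with x -> y."""
    return not any(u == x for (u, _) in rel)

def normal_forms(rel, x):
    return {y for y in reachable(rel, x) if is_normal_form(rel, y)}

def joinable(rel, x, y):
    """x and y are joinable if some z is reachable from both."""
    return bool(reachable(rel, x) & reachable(rel, y))

# b <- a -> c with b -> d and c -> d: terminating and (locally) confluent.
R = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}
print(normal_forms(R, "a"))    # {'d'}
print(joinable(R, "b", "c"))   # True
```

The example relation R is terminating and locally confluent, so by Newman's lemma it is confluent, and every object has the unique normal form d.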
Important theorems for abstract rewriting systems are that an ARS is confluent iff it has the Church-Rosser property, Newman's lemma which states that a terminating ARS is confluent if and only if it is locally confluent, and that the word problem for an ARS is undecidable in general.
String rewriting systems
A string rewriting system (SRS), also known as semi-Thue system, exploits the free monoid structure of the strings (words) over an alphabet to extend a rewriting relation, , to all strings in the alphabet that contain left- and respectively right-hand sides of some rules as substrings. Formally a semi-Thue system is a tuple where is a (usually finite) alphabet, and is a binary relation between some (fixed) strings in the alphabet, called the set of rewrite rules. The one-step rewriting relation induced by on is defined as: if are any strings, then if there exist such that , , and . Since is a relation on , the pair fits the definition of an abstract rewriting system. Obviously is a subset of . If the relation is symmetric, then the system is called a Thue system.
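The induced one-step rewriting relation can be sketched directly: a rule (u, v) applies wherever u occurs as a substring. The rule set below is a toy assumption of this sketch:

```python
def one_step_rewrites(rules, s):
    """All strings t with s -> t, obtained by replacing one occurrence of
    a left-hand side u with the corresponding right-hand side v."""
    results = set()
    for lhs, rhs in rules:
        start = s.find(lhs)
        while start != -1:
            results.add(s[:start] + rhs + s[start + len(lhs):])
            start = s.find(lhs, start + 1)
    return results

# One rule "ba" -> "ab": repeatedly applying it sorts a's before b's.
rules = [("ba", "ab")]
print(one_step_rewrites(rules, "abba"))   # {'abab'}
print(one_step_rewrites(rules, "baba"))   # {'abba', 'baab'}
```

A string with no occurrence of any left-hand side (here, any string of the form a…ab…b) is irreducible, i.e. a normal form of this system.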
In a SRS, the reduction relation is compatible with the monoid operation, meaning that implies for all strings . Similarly, the reflexive transitive symmetric closure of , denoted , is a congruence, meaning it is an equivalence relation (by definition) and it is also compatible with string concatenation. The relation is called the Thue congruence generated by . In a Thue system, i.e. if is symmetric, the rewrite relation coincides with the Thue congruence .
The notion of a semi-Thue system essentially coincides with the presentation of a monoid. Since is a congruence, we can define the factor monoid of the free monoid by the Thue congruence. If a monoid is isomorphic with , then the semi-Thue system is called a monoid presentation of .
We immediately get some very useful connections with other areas of algebra. For example, the alphabet with the rules , where is the empty string, is a presentation of the free group on one generator. If instead the rules are just , then we obtain a presentation of the bicyclic monoid. Thus semi-Thue systems constitute a natural framework for solving the word problem for monoids and groups. In fact, every monoid has a presentation of the form , i.e. it may always be presented by a semi-Thue system, possibly over an infinite alphabet.
The word problem for a semi-Thue system is undecidable in general; this result is sometimes known as the Post-Markov theorem.
Term rewriting systems
A term rewriting system (TRS) is a rewriting system whose objects are terms, which are expressions with nested sub-expressions. For example, the system shown under above is a term rewriting system. The terms in this system are composed of binary operators and and the unary operator . Also present in the rules are variables, which represent any possible term (though a single variable always represents the same term throughout a single rule).
In contrast to string rewriting systems, whose objects are sequences of symbols, the objects of a term rewriting system form a term algebra. A term can be visualized as a tree of symbols, the set of admitted symbols being fixed by a given signature.
Formal definition
A rewrite rule is a pair of terms, commonly written as , to indicate that the left-hand side can be replaced by the right-hand side . A term rewriting system is a set of such rules. A rule can be applied to a term if the left term matches some subterm of , that is, if there is some substitution such that the subterm of rooted at some position is the result of applying the substitution to the term . The subterm matching the left hand side of the rule is called a redex or reducible expression. The result term of this rule application is then the result of replacing the subterm at position in by the term with the substitution applied, see picture 1. In this case, is said to be rewritten in one step, or rewritten directly, to by the system , formally denoted as , , or as by some authors.
If a term can be rewritten in several steps into a term , that is, if , the term is said to be rewritten to , formally denoted as . In other words, the relation is the transitive closure of the relation ; often, also the notation is used to denote the reflexive-transitive closure of , that is, if or A term rewriting given by a set of rules can be viewed as an abstract rewriting system as defined above, with terms as its objects and as its rewrite relation.
For example, is a rewrite rule, commonly used to establish a normal form with respect to the associativity of .
That rule can be applied at the numerator in the term with the matching substitution , see picture 2. Applying that substitution to the rule's right-hand side yields the term , and replacing the numerator by that term yields , which is the result term of applying the rewrite rule. Altogether, applying the rewrite rule has achieved what is called "applying the associativity law for to " in elementary algebra. Alternately, the rule could have been applied to the denominator of the original term, yielding .
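A one-step rule application at the root of a term can be sketched as pattern matching followed by substitution. Terms are nested tuples here, and the "?"-prefix convention for variables is an assumption of this sketch:

```python
def match(pattern, term, subst=None):
    """Try to extend subst so that pattern, with its variables replaced
    according to subst, equals term; return None if matching fails."""
    subst = dict(subst or {})
    if isinstance(pattern, str) and pattern.startswith("?"):   # variable
        if pattern in subst:
            return subst if subst[pattern] == term else None
        subst[pattern] = term
        return subst
    if isinstance(pattern, str):                               # constant
        return subst if pattern == term else None
    if (not isinstance(term, tuple) or term[0] != pattern[0]
            or len(term) != len(pattern)):
        return None
    for p, t in zip(pattern[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def substitute(pattern, subst):
    """Apply a substitution to a term with variables."""
    if isinstance(pattern, str):
        return subst.get(pattern, pattern)
    return (pattern[0],) + tuple(substitute(p, subst) for p in pattern[1:])

def rewrite_at_root(lhs, rhs, term):
    """One rewrite step at the root, or None if the rule does not apply."""
    subst = match(lhs, term)
    return substitute(rhs, subst) if subst is not None else None

# Associativity rule  x*(y*z) -> (x*y)*z  applied to  a*(b*c):
lhs = ("*", "?x", ("*", "?y", "?z"))
rhs = ("*", ("*", "?x", "?y"), "?z")
print(rewrite_at_root(lhs, rhs, ("*", "a", ("*", "b", "c"))))
# ('*', ('*', 'a', 'b'), 'c')
```

Applying the rule at an inner position, as in the fraction example above, would additionally require locating the redex and rebuilding the surrounding context; the root case shown here is the core of the mechanism.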
Termination
Termination issues of rewrite systems in general are handled in Abstract rewriting system#Termination and convergence. For term rewriting systems in particular, the following additional subtleties are to be considered.
Termination even of a system consisting of one rule with a linear left-hand side is undecidable. Termination is also undecidable for systems using only unary function symbols; however, it is decidable for finite ground systems.
The following term rewrite system is normalizing, but not terminating, and not confluent:
The following two examples of terminating term rewrite systems are due to Toyama:
and
Their union is a non-terminating system, since
This result disproves a conjecture of Dershowitz, who claimed that the union of two terminating term rewrite systems and is again terminating if all left-hand sides of and right-hand sides of are linear, and there are no "overlaps" between left-hand sides of and right-hand sides of . All these properties are satisfied by Toyama's examples.
See Rewrite order and Path ordering (term rewriting) for ordering relations used in termination proofs for term rewriting systems.
Higher-order rewriting systems
Higher-order rewriting systems are a generalization of first-order term rewriting systems to lambda terms, allowing higher order functions and bound variables. Various results about first-order TRSs can be reformulated for HRSs as well.
Graph rewriting systems
Graph rewrite systems are another generalization of term rewrite systems, operating on graphs instead of (ground-) terms / their corresponding tree representation.
Trace rewriting systems
Trace theory provides a means for discussing multiprocessing in more formal terms, such as via the trace monoid and the history monoid. Rewriting can be performed in trace systems as well.
Philosophy
Rewriting systems can be seen as programs that infer end-effects from a list of cause-effect relationships. In this way, rewriting systems can be considered to be automated causality provers.
See also
Critical pair (logic)
Compiler
Knuth–Bendix completion algorithm
L-systems specify rewriting that is done in parallel.
Referential transparency in computer science
Regulated rewriting
Rho calculus
Notes
Further reading
316 pages. A textbook suitable for undergraduates.
Marc Bezem, Jan Willem Klop, Roel de Vrijer ("Terese"), Term Rewriting Systems ("TeReSe"), Cambridge University Press, 2003, . This is the most recent comprehensive monograph. It uses however a fair deal of not-yet-standard notations and definitions. For instance, the Church–Rosser property is defined to be identical with confluence.
Nachum Dershowitz and Jean-Pierre Jouannaud "Rewrite Systems", Chapter 6 in Jan van Leeuwen (Ed.), Handbook of Theoretical Computer Science, Volume B: Formal Models and Semantics., Elsevier and MIT Press, 1990, , pp. 243–320. The preprint of this chapter is freely available from the authors, but it is missing the figures.
Nachum Dershowitz and David Plaisted. "Rewriting", Chapter 9 in John Alan Robinson and Andrei Voronkov (Eds.), Handbook of Automated Reasoning, Volume 1.
Gérard Huet and Derek Oppen, Equations and Rewrite Rules, A Survey (1980) Stanford Verification Group, Report N° 15, Computer Science Department Report N° STAN-CS-80-785
Jan Willem Klop. "Term Rewriting Systems", Chapter 1 in Samson Abramsky, Dov M. Gabbay and Tom Maibaum (Eds.), Handbook of Logic in Computer Science, Volume 2: Background: Computational Structures.
David Plaisted. "Equational reasoning and term rewriting systems", in Dov M. Gabbay, C. J. Hogger and John Alan Robinson (Eds.), Handbook of Logic in Artificial Intelligence and Logic Programming, Volume 1.
Jürgen Avenhaus and Klaus Madlener. "Term rewriting and equational reasoning". In Ranan B. Banerji (Ed.), Formal Techniques in Artificial Intelligence: A Sourcebook, Elsevier (1990).
String rewriting
Ronald V. Book and Friedrich Otto, String-Rewriting Systems, Springer (1993).
Benjamin Benninghofen, Susanne Kemmerich and Michael M. Richter, Systems of Reductions. LNCS 277, Springer-Verlag (1987).
Other
Martin Davis, Ron Sigal, Elaine J. Weyuker, (1994) Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science – 2nd edition, Academic Press, .
External links
The Rewriting Home Page
IFIP Working Group 1.6
Researchers in rewriting by Aart Middeldorp, University of Innsbruck
Termination Portal
Maude System — a software implementation of a generic term rewriting system.
References
Formal languages
Logic in computer science
Mathematical logic
Rewriting systems
|
480685
|
https://en.wikipedia.org/wiki/Paillier%20cryptosystem
|
Paillier cryptosystem
|
The Paillier cryptosystem, invented by and named after Pascal Paillier in 1999, is a probabilistic asymmetric algorithm for public key cryptography. The problem of computing n-th residue classes is believed to be computationally difficult. The decisional composite residuosity assumption is the intractability hypothesis upon which this cryptosystem is based.
The scheme is an additive homomorphic cryptosystem; this means that, given only the public key and the
encryption of and , one can compute the encryption of .
Algorithm
The scheme works as follows:
Key generation
Choose two large prime numbers p and q randomly and independently of each other such that gcd(pq, (p − 1)(q − 1)) = 1. This property is assured if both primes are of equal length.
Compute n = pq and λ = lcm(p − 1, q − 1). lcm means Least Common Multiple.
Select random integer g where g ∈ Z*_{n²}
Ensure n divides the order of g by checking the existence of the following modular multiplicative inverse: μ = (L(g^λ mod n²))⁻¹ mod n,
where the function L is defined as L(x) = (x − 1) / n.
Note that the notation a / n does not denote the modular multiplication of a times the modular multiplicative inverse of n but rather the quotient of a divided by n, i.e., the largest integer value v ≥ 0 to satisfy the relation a ≥ v·n.
The public (encryption) key is (n, g).
The private (decryption) key is (λ, μ).
If using p, q of equivalent length, a simpler variant of the above key generation steps would be to set g = n + 1, λ = φ(n), and μ = φ(n)⁻¹ mod n, where φ(n) = (p − 1)(q − 1).
Encryption
Let m be a message to be encrypted where 0 ≤ m < n
Select random r where 0 < r < n and gcd(r, n) = 1
Compute ciphertext as: c = g^m · r^n mod n²
Decryption
Let c be the ciphertext to decrypt, where c ∈ Z*_{n²}
Compute the plaintext message as: m = L(c^λ mod n²) · μ mod n
As the original paper points out, decryption is "essentially one exponentiation modulo n²."
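The scheme can be sketched end to end in a few lines. This is a toy illustration of the simpler key-generation variant (g = n + 1, λ = lcm(p − 1, q − 1)); the tiny fixed primes are assumptions chosen purely so the example runs instantly — real keys require large random primes:

```python
import math
import random

# Toy key generation (simpler variant): tiny fixed primes for illustration only.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n          # quotient of integer division, as in the text

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m):
    """c = g^m * r^n mod n^2 for random r coprime to n."""
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """m = L(c^lambda mod n^2) * mu mod n."""
    return (L(pow(c, lam, n2)) * mu) % n

m = 12345
assert decrypt(encrypt(m)) == m
```

Because a fresh random r is drawn for each call, encrypting the same message twice yields different ciphertexts, illustrating the probabilistic nature of the scheme.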
Homomorphic properties
A notable feature of the Paillier cryptosystem is its homomorphic properties along with its non-deterministic encryption (see Electronic voting in Applications for usage). As the encryption function is additively homomorphic, the following identities can be described:
Homomorphic addition of plaintexts
The product of two ciphertexts will decrypt to the sum of their corresponding plaintexts: D(E(m1, r1) · E(m2, r2) mod n²) = m1 + m2 mod n.
The product of a ciphertext with g raised to a plaintext will decrypt to the sum of the corresponding plaintexts: D(E(m1, r1) · g^m2 mod n²) = m1 + m2 mod n.
Homomorphic multiplication of plaintexts
A ciphertext raised to the power of a plaintext will decrypt to the product of the two plaintexts: D(E(m1, r1)^m2 mod n²) = m1·m2 mod n.
More generally, a ciphertext raised to a constant k will decrypt to the product of the plaintext and the constant: D(E(m1, r1)^k mod n²) = k·m1 mod n.
However, given the Paillier encryptions of two messages there is no known way to compute an encryption of the product of these messages without knowing the private key.
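These identities can be checked numerically. The sketch below is self-contained, again using the simpler variant g = n + 1 with tiny fixed primes as an illustrative assumption:

```python
import math
import random

# Toy parameters for a numeric check of the homomorphic identities.
p, q = 293, 433
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)   # L((n+1)^lam mod n^2) = lam mod n, so mu = lam^-1 mod n

def encrypt(m):
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

m1, m2 = 17, 25

# Product of ciphertexts decrypts to the sum of the plaintexts.
assert decrypt(encrypt(m1) * encrypt(m2) % n2) == (m1 + m2) % n

# Ciphertext times g^m2 likewise decrypts to the sum.
assert decrypt(encrypt(m1) * pow(n + 1, m2, n2) % n2) == (m1 + m2) % n

# Ciphertext raised to a plaintext (or constant) decrypts to the product.
assert decrypt(pow(encrypt(m1), m2, n2)) == (m1 * m2) % n
print("homomorphic identities verified")
```

Note that all three checks operate on ciphertexts only; the private values lam and mu are used solely to decrypt and confirm the results.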
Background
The Paillier cryptosystem exploits the fact that certain discrete logarithms can be computed easily.
For example, by the binomial theorem,
(1 + n)^x = Σ_{k=0..x} C(x, k)·n^k = 1 + n·x + C(x, 2)·n² + higher powers of n.
This indicates that:
(1 + n)^x ≡ 1 + n·x (mod n²).
Therefore, if:
y = (1 + n)^x mod n²,
then
x ≡ (y − 1) / n (mod n).
Thus:
L((1 + n)^x mod n²) ≡ x (mod n),
where the function L is defined as L(u) = (u − 1) / n (quotient of integer division) and x ∈ Z_n.
Semantic security
The original cryptosystem as shown above does provide semantic security against chosen-plaintext attacks (IND-CPA). The ability to successfully distinguish the challenge ciphertext essentially amounts to the ability to decide composite residuosity. The so-called decisional composite residuosity assumption (DCRA) is believed to be intractable.
Because of the aforementioned homomorphic properties however, the system is malleable, and therefore does not enjoy the highest level of semantic security, protection against adaptive chosen-ciphertext attacks (IND-CCA2).
Usually in cryptography the notion of malleability is not seen as an "advantage," but under certain applications such as secure electronic voting and threshold cryptosystems, this property may indeed be necessary.
However, Paillier and Pointcheval went on to propose an improved cryptosystem that incorporates the combined hashing of message m with random r. Similar in intent to the Cramer–Shoup cryptosystem, the hashing prevents an attacker, given only c, from being able to change m in a meaningful way. Through this adaptation the improved scheme can be shown to be IND-CCA2 secure in the random oracle model.
Applications
Electronic voting
Semantic security is not the only consideration. There are situations under which malleability may be desirable. The above homomorphic properties can be utilized by secure electronic voting systems. Consider a simple binary ("for" or "against") vote. Let m voters cast a vote of either 1 (for) or 0 (against). Each voter encrypts their choice before casting their vote. The election official takes the product of the m encrypted votes and then decrypts the result and obtains the value n, which is the sum of all the votes. The election official then knows that n people voted for and m-n people voted against. The role of the random r ensures that two equivalent votes will encrypt to the same value only with negligible likelihood, hence ensuring voter privacy.
Electronic cash
Another feature noted in the paper is the notion of self-blinding. This is the ability to change one ciphertext into another without changing the content of its decryption. This has application to the development of ecash, an effort originally spearheaded by David Chaum. Imagine paying for an item online without the vendor needing to know your credit card number, and hence your identity. The goal in both electronic cash and electronic voting is to ensure the e-coin (likewise e-vote) is valid, while at the same time not disclosing the identity of the person with whom it is currently associated.
See also
The Naccache–Stern cryptosystem and the Okamoto–Uchiyama cryptosystem are historical antecedents of Paillier.
The Damgård–Jurik cryptosystem is a generalization of Paillier.
References
Notes
External links
The Homomorphic Encryption Project implements the Paillier cryptosystem along with its homomorphic operations.
Encounter: an open-source library providing an implementation of Paillier cryptosystem and a cryptographic counters construction based on the same.
python-paillier a library for Partially Homomorphic Encryption in Python, including full support for floating point numbers.
The Paillier cryptosystem interactive simulator demonstrates a voting application.
An interactive demo of the Paillier cryptosystem.
A proof-of-concept Javascript implementation of the Paillier cryptosystem with an interactive demo.
A googletechtalk video on voting using cryptographic methods.
A Ruby implementation of Paillier homomorphic addition and a zero-knowledge proof protocol (documentation)
Public-key encryption schemes
|
68136100
|
https://en.wikipedia.org/wiki/Enter%20Museum
|
Enter Museum
|
Enter is a museum for computer and consumer electronics in the Swiss town of Solothurn. Now a non-profit foundation ("Stiftung ENTER"), it originated as the project of Swiss entrepreneur Felix Kunz. It is the largest private technology collection open to the public in Switzerland. Its current location in Solothurn opened in 2011.
History
The museum originated in the private collection of the Swiss entrepreneur Felix Kunz, who has been collecting computers and electronics since the mid-1960s. In 2010, Kunz established a foundation for the museum jointly with Peter Regenass, a collector of calculators. In 2011, the Enter museum moved into a building right at the train station in Solothurn, with a surface area of 1800 square metres. In 2022, the museum is to be closed and transferred to the nearby village of Derendingen, re-opening there on a larger scale in 2023 with a surface area of over 5000 square metres.
Collection
The collection has a focus on the history of technology made in Switzerland, with products of Studer-Revox, Paillard, Bolex, Crypto AG, and Gretag. It also shows the main stages of computer history with examples of IBM mainframes, Cray supercomputers, Commodore home computers, and personal computers from Apple and IBM. It claims to feature "the largest physical collection of working Apple devices in Europe".
Part of the museum is a collection of 300 mechanical calculators assembled by the Swiss collector Peter Regenass. Furthermore, it holds a large collection on the history of radio and television, including a vast number of radio and TV sets, recording devices for audio and video, and projectors, among them the Eidophor projectors used from 1958 to 1999. Over time the Enter Museum has integrated other collections, such as that of the Audiorama Montreux, which closed its doors in 2010, and the computer collections of the Swiss collectors Robert Weiss and Peter Beck. Its collection has been described as outstanding by the media; there is hardly a computer of the past 50 years that is not on display there, according to a 2013 Neue Zürcher Zeitung article.
Selected exhibits
Switzerland's first radio station, which started regular broadcasting in Lausanne as early as 26 February 1923.
Early home computers such as the Mark-8 minicomputer, the Commodore PET 2001, and the Apple 1
Mechanical calculators such as The Millionaire calculator and Curta.
Cryptographic devices of Crypto AG, Gretag and others including the Swiss Nema and the Russian Fialka
The Smaky-computer of Swiss engineer Jean-Daniel Nicoud and the Lilith Computer by Niklaus Wirth
The video projector Eidophor, used from 1958 to 1999.
The outdoor projector Spitlight, used at the 1956 Winter Olympics in Cortina d'Ampezzo.
Museum shop for spare parts
The museum endeavors to keep as many of its artifacts as possible in working order, and has a number of veteran engineers and specialists who support the museum as volunteers. The museum keeps a stock of 1.5 million electronic and mechanical spare parts, including 40'000 radio valves, which can also be purchased at their nominal price from the museum shop.
References
External links
Enter Museum Solothurn
Vintage Electronic Shop
Solothurn Tourism Organization: Museum Enter
Museums in Switzerland
Computer museums
Technology museums in Switzerland
|
596688
|
https://en.wikipedia.org/wiki/Integrated%20Windows%20Authentication
|
Integrated Windows Authentication
|
Integrated Windows Authentication (IWA)
is a term associated with Microsoft products that refers to the SPNEGO, Kerberos, and NTLMSSP authentication protocols with respect to SSPI functionality introduced with Microsoft Windows 2000 and included with later Windows NT-based operating systems. The term is used more commonly for the automatically authenticated connections between Microsoft Internet Information Services, Internet Explorer, and other Active Directory aware applications.
IWA is also known by several names, such as HTTP Negotiate authentication, NT Authentication, NTLM Authentication, Domain authentication, Windows Integrated Authentication, Windows NT Challenge/Response authentication, or simply Windows Authentication.
Overview
Integrated Windows Authentication uses the security features of Windows clients and servers. Unlike Basic or Digest authentication, initially, it does not prompt users for a user name and password. The current Windows user information on the client computer is supplied by the web browser through a cryptographic exchange involving hashing with the Web server. If the authentication exchange initially fails to identify the user, the web browser will prompt the user for a Windows user account user name and password.
Integrated Windows Authentication itself is not a standard or an authentication protocol. When IWA is selected as an option of a program (e.g. within the Directory Security tab of the IIS site properties dialog) this implies that underlying security mechanisms should be used in a preferential order. If the Kerberos provider is functional and a Kerberos ticket can be obtained for the target, and any associated settings permit Kerberos authentication to occur (e.g. Intranet sites settings in Internet Explorer), the Kerberos 5 protocol will be attempted. Otherwise NTLMSSP authentication is attempted. Similarly, if Kerberos authentication is attempted, yet it fails, then NTLMSSP is attempted. IWA uses SPNEGO to allow initiators and acceptors to negotiate either Kerberos or NTLMSSP. Third party utilities have extended the Integrated Windows Authentication paradigm to UNIX, Linux and Mac systems.
Supported web browsers
Integrated Windows Authentication works with most modern web browsers, but does not work over some HTTP proxy servers. Therefore, it is best for use in intranets where all the clients are within a single domain. It may work with other web browsers if they have been configured to pass the user's logon credentials to the server that is requesting authentication. Where a proxy itself requires NTLM authentication, some applications like Java may not work because the protocol is not described in RFC-2069 for proxy authentication.
Internet Explorer 2 and later versions.
In Mozilla Firefox on Windows operating systems, the names of the domains/websites to which the authentication is to be passed can be entered (comma-delimited for multiple domains) in the "network.negotiate-auth.trusted-uris" (for Kerberos) or "network.automatic-ntlm-auth.trusted-uris" (NTLM) preference on the about:config page. On Macintosh operating systems this works if you have a Kerberos ticket (use Negotiate). Some websites may also require configuring "network.negotiate-auth.delegation-uris".
Opera 9.01 and later versions can use NTLM/Negotiate, but will use Basic or Digest authentication if that is offered by the server.
Google Chrome works as of 8.0.
Safari works, once you have a Kerberos ticket.
Microsoft Edge 77 and later.
Supported mobile browsers
Bitzer Secure Browser supports Kerberos and NTLM SSO from iOS and Android. Both KINIT and PKINIT are supported.
See also
SSPI (Security Support Provider Interface)
NTLM (NT Lan Manager)
SPNEGO (Simple and Protected GSSAPI Negotiation Mechanism)
GSSAPI (Generic Security Services Application Program Interface)
References
External links
Discussion of IWA in Microsoft IIS 6.0 Technical Reference
Microsoft Windows security technology
Internet Explorer
Computer access control
|
53391134
|
https://en.wikipedia.org/wiki/Hollow%20Knight
|
Hollow Knight
|
Hollow Knight is a 2017 Metroidvania action-adventure game developed and published by Team Cherry. It was released for Microsoft Windows, macOS, and Linux in early 2017 and for the Nintendo Switch, PlayStation 4, and Xbox One in 2018. Team Cherry wanted to create a game inspired by older platformers that replicated the exploration aspects of its influences. Development was partially funded through a Kickstarter crowdfunding campaign, which had exceeded its funding goal by the end of 2014.
Players control the Knight, a nameless warrior uncertain of their own identity or origin. The Knight explores Hallownest, a once thriving kingdom whose inhabitants were deprived of their minds. The game itself is set in diverse locations, and it features friendly and hostile bug-like characters and numerous bosses. Players have the opportunity to unlock new abilities as they explore each location, along with pieces of lore that are spread throughout the world.
Hollow Knight was well received by critics, who highlighted the game's worldbuilding, character design, and combat mechanics, with emphasis on how extensive the story is. The game had sold 3 million copies as of December 2020. A sequel, Hollow Knight: Silksong, is in development.
Gameplay
Hollow Knight is a 2D Metroidvania action-adventure game that takes place in Hallownest, a fictional ancient kingdom. The player controls an insect-like, silent, nameless knight while exploring the underground world. The Knight wields a Nail, which is a type of sword, that is used in both combat and environmental interaction.
In most areas of the game, players encounter hostile bugs and other creatures. Melee combat involves using the Nail to strike enemies from a short distance. The player can learn spells, allowing for long-range attacks. Defeated enemies drop currency called Geo. The Knight starts with a limited number of masks, which represent the hit points of the character. "Mask shards" can be collected throughout the game to increase the player's maximum health. When the Knight takes damage from an enemy or from the environment, a mask is reduced. By striking enemies, the Knight gains Soul, which is stored in the Soul Vessel. If all masks are lost, the Knight dies and a Shade appears where they died. The player loses all Geo and can hold a reduced amount of Soul. Players need to defeat the Shade to recover the lost currency and to carry the normal amount of Soul. The game continues from the last bench the player sat on; benches are scattered throughout the game world and act as save points. Initially the player can only use Soul to "Focus" and regenerate masks, but as the game progresses players unlock several offensive spells, which consume Soul.
Many areas feature more challenging enemies and bosses which the player may need to defeat in order to progress further. Defeating some bosses grants the player new abilities. Later in the game, players acquire the "Dream Nail", a special blade that can "cut through the veil between dreams and waking". It enables the player to face more challenging versions of a few bosses, and to break what is sealing the path to the final boss. If the player defeats the final boss of the game, called "The Hollow Knight", they are given access to a mode called "Steel Soul". In this mode, dying is permanent, and if the Knight loses all of their masks, the save slot will be reset.
During the game, the player encounters non-player characters (NPCs) with whom they can interact. These characters provide information about the game's plot and lore, offer aid, and sell items or services. The player can upgrade the Knight's Nail to deal more damage or find Soul Vessels to carry more Soul. During the course of the game, players acquire items that provide new movement abilities including an additional mid-air jump (Monarch Wings), adhering to walls and jumping off them (Mantis Claw), and a quick dash (Mothwing Cloak). The player can learn other combat abilities, known as Nail Arts, and the aforementioned spells. To further customize the Knight, players can equip various charms, which can be found or purchased from NPCs. Some of their effects include improved combat abilities or skills, more masks without regeneration, better movement skills, easier collecting of currency or Soul, and transformation. Equipping a charm takes up a certain number of limited slots, called notches.
Hallownest consists of several large, inter-connected areas with unique themes. With its nonlinear gameplay design, Hollow Knight does not bind the player to one path through the game nor require them to explore the whole world, though there are obstacles that limit the player's access to various areas. The player may need to progress in the story of the game, or acquire a specific movement ability, skill, or item to progress further. To fast travel through the game's world, the player can utilise Stag Stations, terminals of a network of tunnels; players can only travel to previously visited and unlocked stations. Other fast travel methods, such as trams, lifts, and the "Dreamgate", are encountered later in the game.
As the player enters a new area, they do not have access to the map of their surroundings. They must find Cornifer, the cartographer, in order to buy a rough map. As the player explores an area, the map becomes more accurate and complete, although it is updated only when sitting on a bench. The player will need to buy specific items to complete maps, to see points of interest, and to place markers. The Knight's position on the map can only be seen if the player is carrying a specific charm.
Plot
At the outset of the game, the Knight arrives in Dirtmouth, a quiet town that sits just above the ruined remains of Hallownest. As the Knight ventures through the ruins, they discover that Hallownest was once a flourishing kingdom which fell to ruin after becoming overrun with "The Infection", which drove its citizens to madness and undeath. The first and last ruler of Hallownest, the Pale King, attempted to seal away the Infection; however, it becomes increasingly clear that the seal is failing. The Knight's mission is to find and kill the three Dreamers who act as living locks to the seal. This quest brings the Knight into conflict with Hornet, a warrior and protector of Hallownest who tests the Knight's resolve in several battles.
Through dialogue with certain characters as well as cutscenes, the Knight receives insight into the origin of the Infection and itself. In ancient times, some of the bugs of Hallownest worshipped a higher being called the Radiance: a primordial, mothlike creature whose mere presence could sway the denizens of Hallownest to mindless obedience. One day, another higher being named the Pale King arrived at Hallownest and used his power to "free" the minds of the bugs of Hallownest, granting them sapience. Thankful for the minds they had been given, the bugs of Hallownest revered and worshipped the Pale King, causing the Radiance to fall into obscurity. This allowed him to establish the grand kingdom of Hallownest; however, some bugs continued to worship the Radiance in secret, which allowed its continued existence even as its influence and power waned to almost nothing.
Hallownest prospered until the Radiance began invading the dreams of its citizens, driving them to madness in the form of an affliction known as the Infection. In an attempt to contain the spreading Infection, the Pale King used the power of Void (a living darkness found in an area known as the Abyss) to create the Vessels: completely hollow beings with no will for the Infection to corrupt. The process of creating these Vessels involved much trial and error, leaving countless dead vessel shells locked away in the Abyss. Eventually the Pale King chose what he found to be the most suitable Vessel, deemed the "Hollow Knight", and trained it to contain the Infection. He then sealed the Hollow Knight within the Temple of the Black Egg with the aid of the three Dreamers. However, the Hollow Knight was not able to hold the Radiance, as it had been tarnished by an "idea instilled", presumably its parental bond with the Pale King, effectively compromising the seal and causing the Infection to rage on.
Depending on the player's actions, Hollow Knight has multiple endings. In the first ending, "Hollow Knight", the Knight defeats the Hollow Knight and takes its place in the Temple of the Black Egg, though the Radiance still survives. The second ending, "Sealed Siblings", occurs if the player collects the Void Heart charm before fighting the Hollow Knight: this is roughly similar to the first ending, except Hornet arrives to help during the final battle and is sealed with the Knight, becoming the Dreamer that locks the door.
The third ending, "Dream No More", occurs if the player collects the Void Heart and uses the Awakened Dream Nail (an upgraded version of the previously mentioned Dream Nail) to gain entry to the Hollow Knight's dreams when Hornet arrives to help. This allows the Knight to challenge the Radiance directly. The battle ends when the Knight commands the complete power of the Void and the remaining vessels to consume the Radiance utterly and thus end the threat of the Infection, though the Knight is destroyed in the process, leaving behind their shell split in two.
Godmaster expansion
Two additional endings were added with the Godmaster content update, in which the Knight can battle harder versions of all of the bosses in the game in a series of challenges. The main hub of the expansion is known as Godhome, and is accessed by using the Dream Nail on a new NPC called the Godseeker. There are five "pantheons", each being a "boss rush" containing a set of bosses that must all be defeated without dying. The final pantheon, the Pantheon of Hallownest, contains every boss in the game. If the Knight completes the Pantheon of Hallownest, the Absolute Radiance, a more powerful version of the Radiance, appears, acting as the new final boss. Upon defeating her, the Knight transforms into a massive entity consisting of Void and annihilates the Absolute Radiance. Godhome is consumed by darkness as the Godseeker begins oozing Void, which eventually erupts and destroys her as well before appearing to spread out into Hallownest. The game then cuts to Hornet, seen standing by the Temple of the Black Egg as infected vines shrivel up and turn black. A chained creature wielding a nail, appearing to be the freed Hollow Knight, moves to confront her.
The fifth ending is unlocked if the Knight has given the Godseeker a Delicate Flower item before defeating the Absolute Radiance. The ending is identical to the fourth, but as the Godseeker is oozing the void, a drop of it touches the delicate flower, a flash is seen, and then the Godseeker and the Void vanish, leaving only the wilting flower behind.
The Grimm Troupe expansion
In the second expansion to Hollow Knight, the Knight lights a "Nightmare Lantern" found hidden in the Howling Cliffs after using the Dream Nail on a dead bug wearing a strange mask, after which a mysterious group of circus performers known as the Grimm Troupe arrives in Dirtmouth. Their leader, Troupe Master Grimm, gives the Knight a quest to collect magic flames throughout Hallownest in order to take part in a "twisted ritual". He gives the player the Grimmchild charm, which absorbs the flames into itself, progressing the ritual, and, once upgraded, attacks enemies. Eventually, the Knight must choose either to complete the ritual by fighting Nightmare King Grimm, fully upgrading the Grimmchild charm, or to banish the Troupe from Hallownest with the help of Brumm, a traitor to the Grimm Troupe, which replaces the Grimmchild charm with the Carefree Melody charm.
Development
The idea that prompted the creation of Hollow Knight originated in a game jam, Ludum Dare 2013, in which two of the game's developers, Ari Gibson and William Pellen, developed a game called Hungry Knight, in which the character that would later become the Knight kills bugs to stave off starvation. The game, considered "not very good", used to hold a 1/5 star rating on Newgrounds, but has since increased to 4/5. The developers decided to work on another game jam with the theme "Beneath the Surface", but missed the deadline. However, the concept gave them the idea to create a game with an underground setting, a "deep, old kingdom", and insect characters.
Influences for the game include Faxanadu, Metroid, Zelda II, and Mega Man X. Team Cherry noted that Hallownest was in some ways the inverse of the world tree setting in Faxanadu. The team also noted that they wanted to replicate the sense of wonder and discovery of the games of their childhood, in which "[t]here could be any crazy secret or weird creature."
Believing that control of the character was most important for the player's enjoyment of the game, the developers based the Knight's movement on Mega Man X. They gave the character no acceleration or deceleration when moving horizontally, as well as a large amount of aerial control and the ability to interrupt one's jump with a dash. This was meant to make the player feel that any hit they took could have been avoided right up until the last second.
To create the game's art, Gibson's hand-drawn sketches were scanned directly into the game engine, creating a "vivid sense of place". The developers decided to "keep it simple" in order to prevent the development time from becoming extremely protracted. The complexity of the world was based on Metroid, which allows players to become disoriented and lost, focusing on the enjoyment of finding one's way. Only basic signs are placed throughout the world to direct players to important locations. The largest design challenge for the game was creating the mapping system and finding a balance between not divulging the world's secrets while not being too player-unfriendly.
Hollow Knight was revealed on Kickstarter in November 2014, seeking a "modest" sum of . The game passed this goal, raising more than from 2,158 backers, allowing its scope to be expanded and another developer to be hired—technical director David Kazi—as well as composer Christopher Larkin. The game reached a beta state in September 2015 and continued to achieve numerous stretch goals to add in more content after an engine switch from Stencyl to Unity.
Release
Hollow Knight was officially released for Microsoft Windows on 24 February 2017, with versions for macOS and Linux being released on 11 April of the same year.
The Nintendo Switch version of Hollow Knight was announced in January 2017 and released on 12 June 2018. Team Cherry originally planned to make their game available on the Wii U. Development of the Wii U version began in 2016, alongside the PC version, and it eventually shifted to Switch. The creators of Hollow Knight worked with another Australian developer, Shark Jump Studios, to speed up the porting process. Initially, Team Cherry planned the Switch version to arrive "not too long after the platform's launch"; subsequently they delayed it to early 2018. A release date was not announced until the Nintendo Direct presentation at E3 2018 on 12 June 2018, when it was unveiled the game would be available later that day via Nintendo eShop.
On 3 August 2017, the "Hidden Dreams" DLC was released, featuring two new optional boss encounters, two new songs in the soundtrack, a new fast travel system, and a new Stag Station to discover. On 26 October 2017, "The Grimm Troupe" was released, adding new major quests, new boss fights, new charms, new enemies, and other content. The update also added support for Russian, Portuguese, and Japanese languages. On 20 April 2018, "Lifeblood" was released, bringing various optimizations, changes to the color palette, bug fixes, minor additions as well as a new boss fight. On 23 August 2018, the final DLC, "Godmaster" was released, containing new characters, boss fights, music, a new game mode as well as two new endings. It was renamed from its former title of "Gods and Glory" due to trademark concerns.
Reception
Hollow Knight's PC and PlayStation 4 versions received "generally favorable" reviews and the Nintendo Switch and Xbox One versions received "universal acclaim", according to review aggregator website Metacritic. Jed Whitaker of Destructoid praised it as a "masterpiece of gaming" and, on PC Gamer, Tom Marks called it a "new classic". Reviewers spoke highly of Hollow Knight's atmosphere, visuals, sound and music, noting the vastness of the game's world.
Critics recognized the combat system as simple yet nuanced; they praised its responsiveness, or "tightness", similarly to the movement system. On IGN, Marks stated: "The combat in Hollow Knight is relatively straightforward, but starts out tricky ... It rewards patience and skill massively". In his review on PC Gamer, Marks praised the "brilliant" charm system: "What's so impressive about these charms is that I could never find a 'right' answer when equipping them. There were no wrong choices." Adam Abou-Nasr from Nintendo World Report stated: "Charms offer a huge variety of upgrades ... removing them felt like trading a part of myself for a better chance at an upcoming battle."
The difficulty of Hollow Knight received attention from reviewers and was described as challenging; Vikki Blake of Eurogamer called the game "ruthlessly tough, even occasionally unfair". For Nintendo World Report's Adam Abou-Nasr it also seemed unfair—he had "so frustratingly hard that I cannot recommend this game" angrily scrawled in his notes—but "it eventually clicked". Whitaker "never found any of the bosses to be unfair". Destructoid and Nintendo World Report reviewers felt a sense of accomplishment after difficult fights. Critics also made comparisons to Dark Souls, noting the mechanic of losing currency on death and having to defeat a Shade to regain it. Destructoid praised this feature, as well as holding down a button to heal, because "[t]hey circumvent a couple of issues games have always had, namely appropriate punishment for failing, and a risk-reward system".
An official Hollow Knight Piano Collections sheet music book and album was released in 2019 by video game music label Materia Collective, arranged by David Peacock and performed by Augustine Mayuga Gonzales.
Sales
Hollow Knight had sold over 500,000 copies by November 2017 and surpassed 1,000,000 in sales on PC platforms on 11 June 2018, one day before releasing on Nintendo Switch, where it had sold over 250,000 copies in the two weeks after its launch. By July 2018 it had sold over 1,250,000 copies. As of February 2019, Hollow Knight had sold over 2,800,000 copies.
Awards
The game was nominated for "Best PC Game" in Destructoid's Game of the Year Awards 2017, and for "Best Platformer" in IGN's Best of 2017 Awards. It won the award for "Best Platformer" in PC Gamer's 2017 Game of the Year Awards. Polygon later named the game among the decade's best.
Sequel
A sequel, Hollow Knight: Silksong, is in development and is set to be released on Microsoft Windows, Mac, Linux, and Nintendo Switch, with Team Cherry stating that "more platforms may happen in the future". Team Cherry had previously planned this game as a piece of downloadable content for its predecessor. Kickstarter backers of Hollow Knight will receive Silksong for free when it is released. The sequel will revolve around Hornet exploring the kingdom of Pharloom. The game was announced in February 2019 but has not yet received a release date.
Notes
References
External links
2017 video games
Cancelled Wii U games
Dark fantasy video games
Fictional knights in video games
Indie video games
Kickstarter-funded video games
Linux games
MacOS games
Metroidvania games
Nintendo Switch games
PlayStation 4 games
Single-player video games
Soulslike video games
Video games about insects
Video games developed in Australia
Video games scored by Christopher Larkin (composer)
Video games with alternate endings
Windows games
Xbox Cloud Gaming games
Xbox One games
|
7613634
|
https://en.wikipedia.org/wiki/Title%2021%20CFR%20Part%2011
|
Title 21 CFR Part 11
|
Title 21 CFR Part 11 is the part of Title 21 of the Code of Federal Regulations that establishes the United States Food and Drug Administration (FDA) regulations on electronic records and electronic signatures (ERES). Part 11, as it is commonly called, defines the criteria under which electronic records and electronic signatures are considered trustworthy, reliable, and equivalent to paper records (Title 21 CFR Part 11 Section 11.1 (a)).
Coverage
Practically speaking, Part 11 applies to drug makers, medical device manufacturers, biotech companies, biologics developers, CROs, and other FDA-regulated industries, with some specific exceptions. It requires that they implement controls, including audits, system validations, audit trails, electronic signatures, and documentation for software and systems involved in processing the electronic data that FDA predicate rules require them to maintain. A predicate rule is any requirement set forth in the Federal Food, Drug and Cosmetic Act, the Public Health Service Act, or any FDA regulation other than Part 11.
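Among the controls named above, Part 11 requires "secure, computer-generated, time-stamped audit trails" but does not prescribe how they are built. A minimal sketch of one common approach, an append-only hash-chained log, is shown below; the record fields and the hash-chaining scheme are illustrative assumptions, not anything the regulation mandates:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_entry(trail, user, action, record_id):
    """Append a computer-generated, time-stamped audit entry.

    Each entry is chained to the previous one by a SHA-256 hash,
    so any later edit or deletion is detectable (an illustrative
    scheme -- Part 11 itself does not mandate hash chaining).
    """
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    entry = {
        "user": user,            # who performed the action
        "action": action,        # what was done (create/modify/delete)
        "record_id": record_id,  # which record was affected
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    trail.append(entry)
    return entry

def verify(trail):
    """Re-derive every hash; returns True only if the chain is intact."""
    prev = "0" * 64
    for e in trail:
        if e["prev_hash"] != prev:
            return False
        body = {k: v for k, v in e.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

trail = []
append_entry(trail, "jdoe", "create", "batch-0042")
append_entry(trail, "asmith", "modify", "batch-0042")
assert verify(trail)
trail[0]["action"] = "delete"   # tampering breaks verification
assert not verify(trail)
```

In a real system the trail would live in protected storage with its own access controls; the point of the sketch is only that retroactive changes to any entry become detectable.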
The rule also applies to submissions made to the FDA in electronic format (e.g., a New Drug Application) but not to paper submissions by electronic methods (i.e., faxes). Part 11 specifically exempts record-retention requirements for tracebacks by food manufacturers. Most food manufacturers are not otherwise explicitly required to keep detailed records, but electronic documentation kept for HACCP and similar requirements must meet these requirements.
Broad sections of the regulation have been challenged as "very expensive and for some applications almost impractical", and the FDA has stated in guidance that it will exercise enforcement discretion on many parts of the rule. This has led to confusion on exactly what is required, and the rule is being revised. In practice, the requirements on access controls are the only part routinely enforced. The "predicate rules", which required organizations to keep records in the first place, are still in effect. If electronic records are illegible, inaccessible, or corrupted, manufacturers are still subject to those requirements.
If a regulated firm keeps "hard copies" of all required records, those paper documents can be considered the authoritative document for regulatory purposes, and the computer system is not in scope for electronic records requirements—though systems that control processes subject to predicate rules still require validation. Firms should be cautious in claiming that the "hard copy" of required records is the authoritative document. For the "hard copy" produced from an electronic source to be the authoritative document, it must be a complete and accurate copy of the electronic source, and the manufacturer must use the hard copy (rather than electronic versions stored in the system) for regulated activities. The current technical architecture of computer systems makes this "complete and accurate copy" requirement increasingly difficult to satisfy.
Content
Subpart-A – General Provisions
Scope
Implementation
Definitions
Subpart-B – Electronic Records
Controls for closed systems
Controls for open systems
Signature manifestations
Signature/record linking
Subpart-C – Electronic Signatures
General requirements
Electronic signatures and controls
Controls for identification codes/passwords
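Two of the Subpart-B items above, signature manifestations (§ 11.50) and signature/record linking (§ 11.70), can be sketched concretely. § 11.50 requires a signed electronic record to show the signer's printed name, the date and time of signing, and the meaning of the signature; § 11.70 requires the signature to be linked to its record so it cannot be excised or copied onto another record. The field layout and the hash-based linking below are illustrative assumptions, not mechanisms the rule names:

```python
import hashlib

def manifest_signature(record_text, printed_name, when_utc, meaning):
    """Render the three manifestation items Sec. 11.50 requires on a
    signed record: printed name, date/time, and meaning of the
    signature (e.g. review, approval, authorship)."""
    return (f"{record_text}\n"
            f"Signed by: {printed_name}\n"
            f"Date/time: {when_utc}\n"
            f"Meaning:   {meaning}")

def link_signature(record_text, signed_block):
    """Bind the signature to its record (Sec. 11.70) by hashing both
    together; copying the signature block onto a different record
    changes the digest. Hashing is one possible linking mechanism,
    not one the rule prescribes."""
    return hashlib.sha256(
        (record_text + "\n" + signed_block).encode()
    ).hexdigest()

record = "Batch 0042 release decision: PASS"
block = manifest_signature(record, "Jane Doe",
                           "2017-02-24T10:00:00Z", "Approval")
digest = link_signature(record, block)

# Verification recomputes the digest from the stored record + block
assert link_signature(record, block) == digest
# A signature "excised" onto another record no longer verifies
assert link_signature("Batch 0043 release decision: PASS", block) != digest
```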
History
FDA Title 21 CFR Part 11: Electronic Records; Electronic Signatures; Final Rule (1997)
Various keynote speeches by FDA insiders early in the 21st century (in addition to high-profile audit findings focusing on computer system compliance) resulted in many companies scrambling to mount a defense against rule enforcement that they were procedurally and technologically unprepared for. Many software and instrumentation vendors released Part 11 "compliant" updates that were either incomplete or insufficient to fully comply with the rule. Complaints about the waste of critical resources and non-value-added work, in addition to confusion within the drug, medical device, biotech/biologic and other industries about the true scope and enforcement of Part 11, resulted in the FDA release of:
FDA Guidance for Industry Part 11, Electronic Records: Electronic Signatures – Scope and Application (2003)
This document was intended to clarify how Part 11 should be implemented and would be enforced. But, as with all FDA guidances, it was not intended to convey the full force of law—rather, it expressed the FDA's "current thinking" on Part 11 compliance. Many within the industry, while pleased with the more limited scope defined in the guidance, complained that, in some areas, the 2003 guidance contradicted requirements in the 1997 Final Rule.
Guidance for Industry Computerized Systems Used in Clinical Investigations
In May 2007, the FDA issued the final version of their guidance on computerized systems in clinical investigations. This guidance supersedes the guidance of the same name dated April 1999, and supplements the guidance for industry on Part 11, Electronic Records; Electronic Signatures — Scope and Application and the Agency's international harmonization efforts when applying these guidances to source data generated at clinical study sites.
The FDA had previously announced that a new Part 11 would be released in late 2006, but has since pushed that release date back and has not announced a revised time of release. John Murray, a member of the Part 11 Working Group (the team at the FDA developing the new Part 11), has publicly stated that the timetable for release is "flexible".
See also
Electronic lab notebook
Electronic medical record
Electronic Signatures in Global and National Commerce Act (ESIGN, United States)
References
External links
FDA:
FDA Regulation 21 CFR Part 11 - Electronic Records; Electronic Signatures (1997)
FDA announcement 08-July-2010 (21 CFR Part 11)
General Principles of Software Validation; Final Guidance for Industry and FDA Staff (2002)
FDA Guidance for Industry Part 11, Electronic Records: Electronic Signatures – Scope and Application (2003)
Scientific documents
Computer law
Food and Drug Administration
Regulation of medical devices
Pharmaceuticals policy
Pharmaceutical industry
Code of Federal Regulations
United States administrative law
United States federal law
Clinical data management
|
43151528
|
https://en.wikipedia.org/wiki/Nikon%20D810
|
Nikon D810
|
The Nikon D810 is a 36.3-megapixel professional-grade full-frame digital single-lens reflex camera produced by Nikon. The camera was officially announced in June 2014, and became available in July 2014.
Compared to the preceding D800/D800E, it offers an image sensor with a base sensitivity of ISO 64 and an extended range of ISO 32 to 51,200, an Expeed 4 processor with a claimed one-stop noise-reduction improvement, doubled buffer size, increased frame rate, extended battery life, improved autofocus (now similar to the D4S), improved video with 1080p at 60 fps, and many software improvements.
The D810 was succeeded by the Nikon D850 in August 2017 and was listed as discontinued in December 2019.
Features
New 37.09 megapixel (36.3 effective) full-frame (35.9×24 mm) sensor with sensitivity of ISO 64–12,800 (ISO 32–51,200 boost) and no optical low-pass filter (OLPF, anti-aliasing filter)
Improved microlenses with increased light gathering
Nikon Expeed 4 image processor with improved noise reduction, moiré (aliasing) reduction, and battery life increased to 1,200 shots / 40 minutes of video despite 30% higher processing speed
Roughly doubled buffer size of D800/D800E
Frame rate (photo) increased to 5 fps FX (full-frame, DX up to 7 fps). Videos up to 1080p 60p / 50p fps
Simultaneous video recording on an external recorder (uncompressed video, clean HDMI; up to 1080p60) and compressed recording on the memory card
Autofocus equivalent to the D4S, including Group Area mode, which uses five AF sensors together. Face detection switchable with custom settings
Highlight-weighted metering preventing blown highlights or underexposed shadows. Also Highlight Display with Zebra Stripes and full aperture metering during live view and video
Kevlar/carbon fiber composite shutter with reduced lag, vibrations and shutter noise. Redesigned Sequencer / Balancer Mechanism for Quiet and Quiet Continuous modes
Electronic front curtain shutter for further reduced vibrations enabling higher resolutions
OLED viewfinder display
Timelapse up to 9,999 frames, additionally timelapse videos. Timelapse / Interval Timer Exposure Smoothing
Customizable 'Picture Control 2.0' options: Flat affecting dynamic range (preserve highlights and shadows), Clarity affecting details. Other settings affecting exposure, white balance, sharpness, brightness, saturation, hue; allowing custom curves to be created, edited, saved, exported and imported
3.2" 1229k-dot (RGBW, four dots per pixel: extra white dot) VGA LCD display with "Split-screen display zoom" function
USB 3.0, HDMI C (mini), Nikon 10-Pin interfaces and 3.5 mm / 1/8″ stereo headphone + 3.5 mm / 1/8″ stereo microphone connectors
"Superior" resistance to dust and water (Nikon claim)
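The "1229k-dot" display figure above counts sub-pixel dots rather than pixels, which the quoted RGBW layout makes easy to check (a quick arithmetic sketch, assuming the standard VGA pixel grid):

```python
# A VGA panel is 640x480 pixels; the D810's RGBW layout adds a white
# dot, giving four dots per pixel, which yields the quoted ~1229k dots.
pixels = 640 * 480
dots = pixels * 4

assert pixels == 307_200      # actual pixel count
assert dots == 1_228_800      # rounds to the advertised "1229k-dot"
```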
Accessories
Nikon WT-4/WT-4A or WT-5/WT-5A (also UT-1 network) Wireless Transmitter for WLAN. Third-party solutions available.
Nikon Wireless remote control or third-party solutions.
Nikon GP-1 or GP-1A GPS Unit for direct GPS geotagging. Third-party solutions partly with three-axis compass, data-logger, bluetooth and support for indoor use are available from Solmeta, Dawn, Easytag, Foolography, Gisteq and Phottix. See comparisons/reviews.
Nikon Battery grip or third-party solutions
Various Nikon Speedlight or third-party flash units. Also working as commander for Nikon Creative Lighting System wireless (slave) flash.
Third-party radio (wireless) flash control triggers
Tethered shooting with Nikon Camera Control Pro 2, third-party solutions or open-source software and apps
Other accessories from Nikon and third parties, including protective cases and bags, eyepiece adapters and correction lenses, and underwater housings.
Nikon D810 animator's kit including the AF-S VR Micro-NIKKOR 105mm f/2.8G, Dragonframe 3.5 software, power supply and cables
Nikon D810 DSLR Filmmaker's Kit including three fast prime lenses, a portable HDMI recorder using "Pro" codecs but not capable of storing uncompressed video, the ME-1 Stereo Microphone, filters, batteries and cables
Reception
At the time of its release, the Nikon D810 topped the DxOMark image sensor ranking ahead of the Nikon D800E and received many reviews.
Service advisory
On August 19, 2014, Nikon acknowledged a problem reported by some users, of bright spots appearing in long-exposure photographs, as well as "in some images captured at an Image area setting of 1.2× (30×20)." Existing owners of D810 cameras were asked to visit a website to determine whether their camera could be affected, on the basis of serial numbers. Repairs would be made by Nikon free of charge. If bright spots still appear in images after servicing, Nikon recommends enabling Long exposure NR. Products already serviced have a black dot inside the tripod socket.
Nikon D810A
An astrophotography variant with a special infrared filter capable of deep red / near infrared and with special software tweaks like long-exposure modes up to 15 minutes, a virtual horizon indicator and a special Astro Noise Reduction mode was announced on February 10, 2015. The D810A's IR filter is optimized for H-alpha (Hα) red tones, resulting in four times greater sensitivity to the 656 nm wavelength than the D810. In comparison, the Hα sensitivity of Canon's astrophotography DSLRs, the 20Da and 60Da, was 2.5 and 3 times (respectively) that of the standard 20D / 60D. The D810A additionally has a 1.39-stop advantage due to its larger image sensor format, resulting in a better-than-2-stop sensitivity advantage and over four times faster exposure times compared to the Canon 20Da/60Da.
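The stop figures quoted above follow directly from base-2 logarithms of the sensitivity ratios; a quick check (the 1.39-stop format figure is taken from the text, and the comparison is against the 20Da):

```python
from math import log2

# Hα sensitivity gains over each camera's standard sibling, in stops
d810a_gain = log2(4.0)    # D810A vs D810: 4x   -> 2.0 stops
canon_20da = log2(2.5)    # 20Da vs 20D:   2.5x -> ~1.32 stops
canon_60da = log2(3.0)    # 60Da vs 60D:   3x   -> ~1.58 stops

# Full-frame vs APS-C sensor-format advantage quoted as 1.39 stops
format_advantage = 1.39

# Net advantage over the 20Da: more than 2 stops,
# i.e. more than 4x faster exposure times
net_vs_20da = (d810a_gain - canon_20da) + format_advantage
assert net_vs_20da > 2.0
assert 2 ** net_vs_20da > 4.0
```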
Although the D810A can be used for normal photography, due to the deep red / near infrared sensitivity the in-camera white balance may fail under fluorescent light or in difficult cases with very strong infrared light, requiring an external infrared filter. Nikon published a D810A astrophotography guide that recommends live view focusing with 23× enlarged selected areas, and a gallery showing the mostly small effects on the color reproduction in "normal" photos.
A review concludes that the D810A's long-exposure noise in particular is superior to that of the D800E and other Nikon full-frames, and shows the effects of the increased H-alpha sensitivity. Color balance of "normal" photos seems mostly correct, except for comparatively hot objects with strong infrared radiation and a bit more purple in sunsets.
References
External links
Nikon D810, Nikon USA
Nikon D810A, Nikon USA
Nikon D810 - D800/D800E Comparison Sheet Nikon
D810-D810A Comparison Sheet Nikon
Nikon D810 User Manuals, Guides and Software Nikon Download-center
Nikon D810A User Manuals, Guides and Software Nikon Download-center
Nikon D810 Review Imaging Resource
Best camera for astrophotography? Nikon D810A review, Skies & Scopes
D810
D810
Live-preview digital cameras
Cameras introduced in 2014
Full-frame DSLR cameras
|
17979308
|
https://en.wikipedia.org/wiki/QorIQ
|
QorIQ
|
QorIQ is a brand of ARM-based and Power ISA-based communications microprocessors from NXP Semiconductors (formerly Freescale). It is the evolutionary step from the PowerQUICC platform, and initial products were built around one or more e500mc cores and came in five different product platforms, P1, P2, P3, P4, and P5, segmented by performance and functionality. The platform keeps software compatibility with older PowerPC products such as the PowerQUICC platform. In 2012 Freescale announced ARM-based QorIQ offerings beginning in 2013.
The QorIQ brand and the P1, P2 and P4 product families were announced in June 2008. Details of P3 and P5 products were announced in 2010.
QorIQ P Series processors were manufactured on a 45 nm fabrication process and became available at the end of 2008 (P1 and P2), mid-2009 (P4) and 2010 (P5).
QorIQ T Series is based on a 28 nm process and targets an aggressive power envelope, capped at 30 W. These parts use the e6500 core with AltiVec and were expected to ship in 2013.
QorIQ LS-1 and LS-2 families are ARM-based processors using the Cortex-A7, Cortex-A9, A15, A53 and A72 cores upon the ISA-agnostic Layerscape architecture. They have been available since 2013 and target low- and mid-range networking and wireless infrastructure applications.
Layerscape
The Layerscape (LS) architecture is the latest evolution of the QorIQ family, in that features previously provided by DPAA (like compression) may be implemented in software or hardware, depending on the specific chip, but transparent to application programmers.
LS-1 and LS-2 are announced to use Cortex A7, A9, A15, A53 and A72 cores.
The initial LS-1 series does not include any accelerated packet-processing layer, focusing instead on typical power consumption of less than 3 W using two Cortex-A7 cores, while providing ECC for caches and DDR3/4 at 1000 to 1600 MT/s, dual PCI Express controllers in x1/x2/x4 operation, SD/MMC, SATA 1/2/3, USB 2/3 with integrated PHY, and virtualized eTSEC Gigabit Ethernet controllers.
LS1 means the LS1XXX series (e.g., LS1021A); LS2 means the LS2XXX series. LS2 denotes a higher performance level than LS1, not a second generation. The middle two digits of the product name give the core count; the last digit distinguishes models, with, in most but not all cases, a higher digit meaning greater performance. An "A" at the end indicates an Arm processor. LX designates the 16 nm FinFET generation.
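The naming scheme described above can be expressed as a small decoder; this sketch is based only on the rules stated in the text and is illustrative, not an official NXP parsing specification:

```python
import re

def decode_qoriq_ls(part):
    """Decode a Layerscape part number (e.g. "LS1021A") per the
    naming rules described above: LS + performance level digit +
    two-digit core count + model digit + optional "A" for Arm."""
    m = re.fullmatch(r"LS(\d)(\d\d)(\d)(A?)", part)
    if not m:
        raise ValueError(f"unrecognized part number: {part}")
    level, cores, model, suffix = m.groups()
    return {
        "performance_level": f"LS{level}",  # LS2 > LS1; not a generation
        "cores": int(cores),                # middle two digits
        "model": int(model),                # last digit
        "arm": suffix == "A",               # trailing "A" = Arm processor
    }

info = decode_qoriq_ls("LS1021A")
assert info == {"performance_level": "LS1", "cores": 2,
                "model": 1, "arm": True}
```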
The LS1 family is built on the Layerscape architecture, a programmable data-plane engine networking architecture. Both the LS1 and LS2 families of processors offer advanced, high-performance datapaths and network peripheral interfaces, features frequently required for networking, telecom/datacom, wireless infrastructure, military and aerospace applications.
Initial announcement
Freescale Semiconductor Inc. (acquired by NXP Semiconductors in late 2015) announced a network processor system architecture said to give the flexibility and scalability required by network infrastructure OEMs to handle the market trends of connected devices, massive datasets, tight security, real-time service and increasingly unpredictable network traffic patterns.
Layerscape product family list
P Series
The QorIQ P Series processors are based on e500 or e5500 cores. The P10xx series, P2010 and P2020 are based on the e500v2 core; P204x, P30xx and P40xx on the e500mc core; and P50xx on the e5500 core. Features include 32/32 kB data/instruction L1 caches and 36-bit physical memory addressing (the extra bits are supplied from the process context, so each process still sees a 32-bit virtual address space). A double-precision floating-point unit is present on some, but not all, cores, and products featuring the e500mc or e5500 support virtualization through a hypervisor layer. The dual- and multi-core devices support both symmetric and asymmetric multiprocessing, and can run multiple operating systems in parallel.
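The 36-bit physical addressing scheme can be illustrated numerically: each process keeps a 32-bit virtual address space, and the MMU supplies four extra physical-address bits from the translation context. The concatenation below is a deliberate simplification of the real e500 page-based MMU, for illustration only:

```python
# 32-bit virtual address + 4 context-supplied bits -> 36-bit physical
def to_physical(upper_bits, virt_addr):
    assert 0 <= upper_bits < 16          # 4 extra bits (36 - 32)
    assert 0 <= virt_addr < 2 ** 32      # per-process space stays 32-bit
    return (upper_bits << 32) | virt_addr

# Total addressable physical memory: 2^36 bytes = 64 GiB
assert 2 ** 36 == 64 * 2 ** 30

# The same virtual address in different contexts maps to
# different physical addresses
assert to_physical(0, 0x8000_0000) == 0x0_8000_0000
assert to_physical(5, 0x8000_0000) == 0x5_8000_0000
```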
P1
The P1 series is tailored for gateways, Ethernet switches, wireless LAN access points, and general-purpose control applications. It is the entry-level platform, ranging from 400 to 800 MHz devices, and is designed to replace the PowerQUICC II Pro and PowerQUICC III platforms. The chips include, among other integrated functionality, Gigabit Ethernet controllers, two USB 2.0 controllers, a security engine, a 32-bit DDR2 and DDR3 memory controller with ECC support, dual four-channel DMA controllers, a SD/MMC host controller, and high-speed SerDes lanes which can be configured as PCIe and SGMII interfaces. The chips are packaged in 689-pin packages which are pin-compatible with the P2 family processors.
P1011 – Includes one 800 MHz e500 core, 256 kB L2 cache, four SerDes lanes, three Gbit Ethernet controllers and a TDM engine for legacy phone applications.
P1020 – includes two 800 MHz e500 cores, 256 kB shared L2 cache, four SerDes lanes, three Gbit Ethernet controllers and a TDM engine.
P2
The P2 series is designed for a wide variety of applications in the networking, telecom, military and industrial markets. It is available in special high-quality parts with junction temperature tolerances from −40 to 125 °C, especially suited for demanding outdoor environments. It is the mid-level platform, with devices ranging from 800 MHz up to 1.2 GHz, and is designed to replace the PowerQUICC II Pro and PowerQUICC III platforms. The chips include, among other integrated functionality, a 512 kB L2 cache, a security engine, three Gigabit Ethernet controllers, a USB 2.0 controller, a 64-bit DDR2 and DDR3 memory controller with ECC support, dual four-channel DMA controllers, a SD/MMC host controller, and high-speed SerDes lanes which can be configured as three PCIe interfaces, two RapidIO interfaces and two SGMII interfaces. The chips are packaged in 689-pin packages which are pin-compatible with the P1 family processors.
P2010 – Includes one 1.2 GHz core
P2020 – Includes two 1.2 GHz cores, with shared L2 cache
P3
The P3 series is a mid performance networking platform, designed for switching and routing. The P3 family offers a multi-core platform, with support for up to four e500mc cores at frequencies up to 1.5 GHz on the same chip, connected by the CoreNet coherency fabric. The chips include among other integrated functionality, integrated L3 caches, memory controller, multiple I/O-devices such as DUART, GPIO and USB 2.0, security and encryption engines, a queue manager scheduling on-chip events and a SerDes based on-chip high speed network configurable as multiple Gigabit Ethernet, 10 Gigabit Ethernet, RapidIO or PCIe interfaces.
The P3 family processors share the same physical package with, and are also software backwards compatible with, P4 and P5. The P3 processors have 1.3 GHz 64-bit DDR3 memory controllers, 18 SerDes lanes for networking, hardware accelerators for packet handling and scheduling, regular expressions, RAID, security, cryptography and RapidIO.
The cores are supported by a hardware hypervisor and can be run in symmetric or asymmetric mode meaning that the cores can run and boot operating systems together or separately, resetting and partitioning cores and datapaths independently without disturbing other operating systems and applications.
P2040
P2041
P3041 – Quad 1.5 GHz cores, 128 kB L2 cache per core, single 1.3 GHz 64-bit DDR3 controller. Manufactured on a 45 nm process operating in a 12W envelope.
P4
The P4 series is a high performance networking platform, designed for backbone networking and enterprise level switching and routing. The P4 family offers an extreme multi-core platform, with support for up to eight e500mc cores at frequencies up to 1.5 GHz on the same chip, connected by the CoreNet coherency fabric. The chips include among other integrated functionality, integrated L3 caches, memory controllers, multiple I/O-devices such as DUART, GPIO and USB 2.0, security and encryption engines, a queue manager scheduling on-chip events and a SerDes based on-chip high speed network configurable as multiple Gigabit Ethernet, 10 Gigabit Ethernet, RapidIO or PCIe interfaces.
The cores are supported by a hardware hypervisor and can be run in symmetric or asymmetric mode meaning that the cores can run and boot operating systems together or separately, resetting and partitioning cores and datapaths independently without disturbing other operating systems and applications.
P4080 – Includes eight e500mc cores, each with 32/32kB instruction/data L1 caches and a 128 kB L2 cache. The chip has dual 1 MB L3 caches, each connected to a 64-bit DDR2/DDR3 memory controller. The chip contains a security and encryption module, capable of packet parsing and classification, and acceleration of encryption and regexp pattern matching. The chip can be configured with up to eight Gigabit and two 10 Gigabit Ethernet controllers, three 5 GHz PCIe ports and two RapidIO interfaces. It also has various other peripheral connectivity such as two USB2 controllers. It is designed to operate below 30 W at 1.5 GHz. The processor is manufactured on a 45 nm SOI process and began sampling to customers in August 2009.
To help software developers and system designers get started with the QorIQ P4080, Freescale worked with Virtutech to create a virtual platform for the P4080 that can be used prior to silicon availability to develop, test, and debug software for the chip. Currently, the simulator is only for the P4080, not the other chips announced in 2008.
Because of its complete set of network engines, this processor can be used for telecommunication systems (LTE eNodeB, EPC, WCDMA, BTS), so Freescale and 6WIND ported 6WIND's packet processing software to the P4080.
P5
The P5 series is based on the high performance 64-bit e5500 core scaling up to 2.5 GHz and allowing numerous auxiliary application processing units as well as multi core operation via the CoreNet fabric. The P5 series processors share the same physical package and are also software backwards compatible with P3 and P4. The P5 processors have 1.3 GHz 64-bit DDR3 memory controllers, 18 SerDes lanes for networking, hardware accelerators for packet handling and scheduling, regular expressions, RAID, security, cryptography and RapidIO.
Introduced in June 2010; samples were available in late 2010 and full production was expected in 2011.
Applications range from high end networking control plane infrastructure, high end storage networking and complex military and industrial devices.
P5010 – Single e5500 2.2 GHz core, 1 MB L3 cache, single 1.333 GHz DDR3 controller, manufactured on a 45 nm process and operating in a 30W envelope.
P5020 – Dual e5500 2.2 GHz cores, dual 1 MB L3 caches, dual 1.333 GHz DDR3 controllers, manufactured on a 45 nm process and operating in a 30W envelope.
P5021 – Dual e5500 2.4 GHz cores, 1.6 GHz DDR3/3L. Sampling since March 2012; production expected in 4Q12.
P5040 – Quad e5500 2.4 GHz cores, 1.6 GHz DDR3/3L. Sampling since March 2012; production expected in 4Q12.
Qonverge
In February 2011 Freescale introduced the QorIQ Qonverge platform, a series of combined CPU and DSP SoC processors targeting wireless infrastructure applications. The PSC913x family chips, which use an e500-based CPU and StarCore SC3850 DSPs, became available in 2011 and are manufactured on a 45 nm process, with e6500 and SC3900 core based 28 nm parts following in 2012.
AMP Series
The QorIQ Advanced Multiprocessing, AMP Series, processors are all based on the multithreaded 64-bit e6500 core with integrated AltiVec SIMD processing units, except the lowest end T1 family, which uses the older e5500 core. Products range from single core versions up to parts with 12 cores or more, with frequencies ranging all the way up to 2.5 GHz. The processors are divided into five classes according to performance and features, named T1 through T5, and are manufactured on a 28 nm process beginning in 2012.
T4
The T4 family uses the e6500 64-bit dual threaded core.
T4240 – The first product announced and incorporates twelve cores, three memory controllers and various other accelerators.
T4160 – A feature reduced version of the T4240 with only eight cores, and less I/O options and just two memory controllers.
T4080 – A feature reduced version of the T4240 with only four cores, and less I/O options and just two memory controllers.
T2
The T2 family uses the e6500 64-bit dual threaded core.
T2080 and T2081 – Processors with four cores running at speeds of 1.5 to 1.8 GHz. The T2081 comes in a smaller package with slightly different I/O options and therefore fewer I/O pins, and is pin compatible with the lower end T104x and T102x parts.
T1
The T1 family uses the e5500 64-bit single threaded core at 1.2 to 1.5 GHz with 256 kB L2 cache per core and a 256 kB shared CoreNet L3 cache.
T1040 – Quad-core, four Gbit Ethernet ports and an 8 port Ethernet switch
T1042 – Quad-core, five Gbit Ethernet ports, no Ethernet switch.
T1020 – Dual-core, four Gbit Ethernet ports and an 8 port Ethernet switch
T1022 – Dual-core, five Gbit Ethernet ports, no Ethernet switch.
System design
Networking, IT and telecommunication systems
The QorIQ products bring new challenges to the design of telecommunication systems' control and data planes. For instance, when four or eight cores are used, as in the P4080, achieving millions of processed packets per second does not scale with a regular software stack, because that many cores require a different system design. To restore simplicity while still reaching the highest level of performance, telecommunication systems are based on a segregation of the cores: some cores are used for the control plane, while others run a re-designed data plane based on a fast path.
Freescale has partnered with networking company 6WIND to provide software developers with a high-performance commercial packet processing solution for the QorIQ platform.
See also
PowerQUICC
PowerPC e500
PowerPC e6500
References
External links
NXP Semiconductors QorIQ website
EE Times, NXP Processor Powers IoT, Networks
Migrating PowerQUICC® III Processors to QorIQ™ Platforms
QorIQ 45-nm communications MPUs feature dual cores, low power – ElectronicProducts.com
MontaVista Provides First No-Cost Evaluation of Commercial Linux for Freescale QorIQ P4080 Multicore Processor – Money.AOL.com
NXP Semiconductors
Power microprocessors
PowerPC microprocessors
|
27654374
|
https://en.wikipedia.org/wiki/Next3
|
Next3
|
Next3 is a journaled file system for Linux based on ext3 which adds snapshots support, yet retains compatibility to the ext3 on-disk format. Next3 is implemented as open-source software, licensed under the GPL license.
Background
A snapshot is a read-only copy of the file system frozen at a point in time. Versioning file systems like Next3 can internally track old versions of files and make snapshots available through a special namespace.
Features
Snapshots
An advantage of copy-on-write is that when Next3 writes new data, the blocks containing the old data can be retained, allowing a snapshot version of the file system to be maintained. Next3 snapshots are created quickly, since all the data composing the snapshot is already stored; they are also space efficient, since any unchanged data is shared among the file system and its snapshots.
Dynamically Provisioned Snapshots Space
The traditional Linux Logical Volume Manager volume level snapshots implementation requires that storage space be allocated in advance. Next3 uses Dynamically provisioned snapshots, meaning it does not require pre-allocation of storage space for snapshots, instead allocating space as it is needed. Storage space is conserved by sharing unchanged data among the file system and its snapshots.
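The snapshot mechanics described above can be illustrated with a toy copy-on-write model (illustrative only, not Next3's actual ext3-compatible on-disk format): taking a snapshot freezes only the logical-to-physical block map, while unchanged data blocks remain shared between the live file system and every snapshot.

```python
class CowStore:
    """Toy copy-on-write snapshot model (not Next3's on-disk layout)."""

    def __init__(self):
        self.blocks = {}     # physical block id -> data
        self.live = {}       # logical block number -> physical block id
        self.snapshots = []  # frozen copies of the block map
        self._next_id = 0

    def write(self, blockno, data):
        # Copy-on-write: allocate a new physical block; the old one
        # is retained so any snapshot still referencing it stays valid.
        self.blocks[self._next_id] = data
        self.live[blockno] = self._next_id
        self._next_id += 1

    def snapshot(self):
        # A snapshot copies only the mapping, never the data, so it is
        # fast to create and shares all unchanged blocks with the live FS.
        self.snapshots.append(dict(self.live))
        return len(self.snapshots) - 1

    def read(self, blockno, snap=None):
        mapping = self.live if snap is None else self.snapshots[snap]
        return self.blocks[mapping[blockno]]
```

Overwriting a block after a snapshot leaves the snapshot's read-only view intact, which is the property that makes snapshot creation cheap and space usage proportional only to changed data.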
Compatibility
Since Next3 aims to be both forward and backward compatible with the earlier ext3, all of the on-disk structures are identical to those of ext3. The file system can be mounted for read by existing ext3 implementations with no modification. Because of that, Next3, like ext3, lacks a number of features of more recent designs, such as extents.
Performance
When there are no snapshots, Next3 performance is equivalent to ext3 performance. With snapshots, there is a minor overhead per write of metadata block (copy-on-write) and a smaller overhead (~1%) per write of data block (move-on-write).
Next4
As of 2011, Next4, a project for porting of Next3 snapshot capabilities to the Ext4 file system, is mostly completed. The porting is attributed to members of the Pune Institute of Computer Technology (PICT) and the Chinese Academy of Sciences.
See also
ext3cow
List of file systems
Comparison of file systems
References
Disk file systems
File systems supported by the Linux kernel
2010 software
Computer file systems
|
24945887
|
https://en.wikipedia.org/wiki/Time-based%20one-time%20password
|
Time-based one-time password
|
Time-based one-time password (TOTP) is a computer algorithm that generates a one-time password (OTP) that uses the current time as a source of uniqueness. As an extension of the HMAC-based one-time password algorithm (HOTP), it has been adopted as Internet Engineering Task Force (IETF) standard .
TOTP is the cornerstone of Initiative for Open Authentication (OATH), and is used in a number of two-factor authentication (2FA) systems.
History
Through the collaboration of several OATH members, a TOTP draft was developed in order to create an industry-backed standard. It complements the event-based one-time standard HOTP, and it offers end user organizations and enterprises more choice in selecting technologies that best fit their application requirements and security guidelines. In 2008, OATH submitted a draft version of the specification to the IETF. This version incorporates all the feedback and commentary that the authors received from the technical community based on the prior versions submitted to the IETF. In May 2011, TOTP officially became RFC 6238.
Algorithm
To establish TOTP authentication, the authenticatee and authenticator must pre-establish both the HOTP parameters and the following TOTP parameters:
T0, the Unix time from which to start counting time steps (default is 0),
TX, an interval which will be used to calculate the value of the counter CT (default is 30 seconds).
Both the authenticator and the authenticatee compute the TOTP value, then the authenticator checks whether the TOTP value supplied by the authenticatee matches the locally generated TOTP value. Some authenticators allow values that should have been generated before or after the current time in order to account for slight clock skews, network latency and user delays.
TOTP uses the HOTP algorithm, substituting the counter with a non-decreasing value based on the current time:

    TOTP value(K) = HOTP value(K, CT),

where the counter value is

    CT = floor((T − T0) / TX),

and where
CT is the count of the number of durations TX between T0 and T,
T is the current time in seconds since a particular epoch,
T0 is the epoch as specified in seconds since the Unix epoch (e.g. if using Unix time, then T0 is 0),
TX is the length of one time duration (e.g. 30 seconds).
Unix time is not strictly increasing. When a leap second is inserted into UTC, Unix time repeats one second. But a single leap second does not cause the integer part of Unix time to decrease, and CT is non-decreasing as well so long as TX is a multiple of one second.
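The counter derivation above translates directly into code. The following is a minimal Python sketch of RFC 6238 TOTP layered on RFC 4226 HOTP with HMAC-SHA-1 (the function names hotp, totp, and totp_verify are illustrative, not a standard API); totp_verify also sketches the ±1-step acceptance window commonly used to tolerate clock skew.

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA-1 over the 8-byte big-endian counter."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks a 4-byte window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key, t=None, t0=0, tx=30, digits=6):
    """RFC 6238 TOTP: HOTP with counter CT = floor((T - T0) / TX)."""
    if t is None:
        t = int(time.time())
    return hotp(key, (t - t0) // tx, digits)

def totp_verify(key, candidate, t=None, t0=0, tx=30, skew=1):
    """Accept codes from the current time step and up to `skew` adjacent steps."""
    if t is None:
        t = int(time.time())
    return any(
        hmac.compare_digest(totp(key, t + s * tx, t0, tx), candidate)
        for s in range(-skew, skew + 1)
    )
```

With the RFC 6238 test key (the ASCII bytes of "12345678901234567890") and T = 59, this yields "287082", the standard's SHA-1 test vector truncated to six digits.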
Security
TOTP values can be phished like passwords, though this requires attackers to proxy the credentials in real time.
An attacker who steals the shared secret can generate new, valid TOTP values at will. This can be a particular problem if the attacker breaches a large authentication database.
Because of latency, both network and human, and unsynchronised clocks, the one-time password must validate over a range of times between the authenticator and the authenticatee. Here, time is downsampled into larger durations (e.g., 30 seconds) to allow for validity between the parties. For subsequent authentications to work, the clocks of the authenticatee and the authenticator need to be roughly synchronized (the authenticator will typically accept one-time passwords generated from timestamps that differ by ±1 time interval from the authenticatee's timestamp). TOTP values are typically valid for longer than 30 seconds so that client and server time delays are accounted for.
See also
Botan (programming library)
FreeOTP
Google Authenticator
multiOTP
References
External links
Step by step Python implementation in a Jupyter Notebook
Designing Docker Hub Two-Factor Authentication, (section "Using Time-Based One-Time Password (TOTP) Authentication").
Internet protocols
Computer access control
Cryptographic algorithms
|
3252631
|
https://en.wikipedia.org/wiki/Middlebox
|
Middlebox
|
A middlebox is a computer networking device that transforms, inspects, filters, and manipulates traffic for purposes other than packet forwarding. These extraneous functions have interfered with application performance and have been criticized for violating "important architectural principles" such as the end-to-end principle. Examples of middleboxes include firewalls, network address translators (NATs), load balancers, and deep packet inspection (DPI) boxes.
UCLA computer science professor Lixia Zhang coined the term middlebox in 1999.
Usage
Middleboxes are widely deployed across both private and public networks. Dedicated middlebox hardware is widely deployed in enterprise networks to improve network security and performance; even home network routers often have integrated firewall, NAT, or other middlebox functionality. One 2017 study counted more than 1,000 middlebox deployments in autonomous systems, in both directions of traffic flows, and across a wide range of networks, including mobile operators and data center networks.
Examples
The following are examples of commonly deployed middleboxes:
Firewalls filter traffic based on a set of predefined security rules defined by a network administrator. IP firewalls reject packets "based purely on fields in the IP and transport headers (e.g., disallow incoming traffic to certain port numbers, disallow any traffic to certain subnets etc.)" Other types of firewalls may use more complex rulesets, including those that inspect traffic at the session or application layer.
Intrusion detection systems (IDSs) monitor traffic and collect data for offline analysis of security anomalies. Unlike firewalls, IDSs do not filter packets in real time; because they sit off the forwarding path, they can perform more complex inspection than a firewall, which must decide whether to accept or reject each packet as it arrives.
Network address translators (NATs) replace the source and/or destination IP addresses of packets that traverse them. Typically, NATs are deployed to allow multiple end hosts to share a single IP address: hosts "behind" the NAT are assigned a private IP address and their packets destined to the public Internet traverse a NAT, which replaces their internal private address with a shared public address. These are widely used by cellular network providers to manage scarce resources.
WAN optimizers improve bandwidth consumption and perceived latency between endpoints. Typically deployed in large enterprises, WAN optimizers are deployed near both sending and receiving endpoints of communication; the devices then coordinate to cache and compress traffic that traverses the Internet.
Load balancers provide one point of entry to a service, but forward traffic flows to one or more hosts that actually provide the service.
Cellular networks use middleboxes to ensure scarce network resources are used efficiently as well as to protect client devices.
Criticism and challenges
Middleboxes have generated technical challenges for application development and have incurred "scorn" and "dismay" in the network architecture community for violating the end-to-end principle of computer system design.
Application interference
Some middleboxes interfere with application functionality, restricting or preventing end host applications from performing properly.
In particular, network address translators (NATs) present a challenge in that NAT devices divide traffic destined to a public IP address across several receivers. When connections between a host on the Internet and a host behind the NAT are initiated by the host behind the NAT, the NAT learns that traffic for that connection belongs to the local host. Thus, when traffic coming from the Internet is destined to the public (shared) address on a particular port, the NAT can direct the traffic to the appropriate host. However, connections initiated by a host on the Internet do not present the NAT any opportunity to "learn" which internal host the connection belongs to. Moreover, the internal host itself may not even know its own public IP address to announce to potential clients what address to connect to. To resolve this issue, several new protocols have been proposed.
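The NAT "learning" behaviour described above can be sketched as a toy port-translating NAT (illustrative only; real NATs also track protocols, destination endpoints, and mapping timeouts): outbound flows create mappings, and unsolicited inbound packets find no mapping and are dropped.

```python
class ToyNat:
    """Minimal port-translating NAT sketch, not a real implementation."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.out_map = {}       # (private_ip, private_port) -> public_port
        self.in_map = {}        # public_port -> (private_ip, private_port)
        self.next_port = 40000  # arbitrary starting port for translations

    def outbound(self, src, dst):
        # A connection initiated from inside teaches the NAT which
        # internal host owns this flow, creating a bidirectional mapping.
        if src not in self.out_map:
            self.out_map[src] = self.next_port
            self.in_map[self.next_port] = src
            self.next_port += 1
        return (self.public_ip, self.out_map[src]), dst

    def inbound(self, dst_port):
        # Traffic from the Internet is deliverable only if some internal
        # host previously initiated a flow; otherwise there is no mapping.
        return self.in_map.get(dst_port)
```

A connection initiated from the Internet never triggers outbound(), so inbound() returns nothing for it, which is exactly why unsolicited inbound connections fail without helper protocols.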
Additionally, because middlebox deployments by cell operators such as AT&T and T-Mobile are opaque, application developers are often "unaware of the middlebox policies enforced by operators" while operators lack full knowledge about application behavior and requirements. For example, one carrier set an "aggressive timeout value to quickly recycle the resources held by inactive TCP connections in the firewall, unexpectedly causing frequent disruptions to long-lived and occasionally idle connections maintained by applications such as push-based email and instant messaging".
Other common middlebox-induced application challenges include web proxies serving "stale" or out-of-date content, and firewalls rejecting traffic on desired ports.
Internet extensibility and design
One criticism of middleboxes is that they can limit the choice of transport protocols, thus limiting application or service designs. Middleboxes may filter or drop traffic that does not conform to expected behaviors, so new or uncommon protocols or protocol extensions may be filtered out. Specifically, the fact that middleboxes leave hosts in private address realms unable to "pass handles allowing other hosts to communicate with them" has hindered the spread of newer protocols like the Session Initiation Protocol (SIP) as well as various peer-to-peer systems. This progressive reduction in flexibility has been described as protocol ossification.
Conversely, some middleboxes can assist in protocol deployment by providing a translation between new and old protocols. For example, IPv6 can be deployed on public endpoints such as load balancers, proxies, or other forms of NAT, with backend traffic routed over IPv4 or IPv6.
See also
End-to-end connectivity
Interactive Connectivity Establishment (ICE)
Session Traversal Utilities for NAT (STUN)
Traversal Using Relay NAT (TURN)
Multilayer switch
References
Computer network security
1990s neologisms
|
42027929
|
https://en.wikipedia.org/wiki/Nokia%20Lumia%20Icon
|
Nokia Lumia Icon
|
The Nokia Lumia Icon (originally known as the Lumia 929) is a high-end smartphone developed by Nokia that runs Microsoft's Windows Phone 8 operating system. It was announced on February 12, 2014, and released on Verizon Wireless in the United States on February 20, 2014. It is currently exclusive to Verizon and the U.S. market; its international counterpart is the Nokia Lumia 930.
On February 11, 2015, Verizon released the Windows Phone 8.1 operating system and Lumia Denim firmware update for the Icon. On June 23, 2016, Verizon released the Windows 10 Mobile operating system update for the Icon.
Primary features
The primary features of the Lumia Icon are:
5-inch 1920×1080 AMOLED touchscreen display (441 PPI)
Qualcomm Snapdragon 800 Processor
2GB of LPDDR3 RAM
20 MP PureView camera with Carl Zeiss optics and pixel oversampling
Optical Image Stabilization
2160p (4K UHD) video recording at 30fps
Quad microphones with noise reduction
Wireless AC Wi-Fi
4G LTE support
Microsoft Cortana Voice Assistant with "Hey Cortana" voice activation (with Lumia Denim update)
Availability
The phone was released for sale exclusively through Verizon in the United States for $199.99 with a 2-year contract or $549.99 with no contract. The Lumia Icon has almost identical internal specifications to the larger Nokia Lumia 1520 with the primary difference being that it has a smaller screen of 5 inches compared with the Lumia 1520's 6 inches.
The Nokia Lumia 930, released in April 2014, is nearly identical to the Icon in both appearance and specifications. However, the 930 uses GSM radios and comes with Windows Phone 8.1 and the Cyan firmware, and is the worldwide variant of the Icon. While the 930 has since been updated to Denim (which contains the Windows Phone 8.1 Update), Verizon previously faced criticism for not releasing the Cyan update for the Icon. Now that Verizon Wireless has updated the Icon directly to Denim, skipping Cyan, the OS and firmware distinctions have largely been eliminated.
Naming
While in development, the Nokia Lumia Icon was known by its model number. Early development screenshots and prototype accessories referred to the phone as the Lumia 929. This was in keeping with Nokia's previous branding practice of assigning a corresponding number to the place where the phone would sit in Nokia's lineup, with higher numbers indicating higher-end models and lower numbers indicating lower-end products. Upon release, the phone kept the model number 929, but was the first Lumia to utilize a name other than its model number for branding.
Reception
The Lumia Icon received fairly positive reviews, with some reviewers calling it the best Windows Phone released, praising the phone's camera quality, display, and overall speed, but criticizing its exclusivity to a single carrier and the camera's slow transition time between photographs. Reviewers were split on the design of the phone, with some praising its metal build quality as solid and premium, and others criticizing it for being too utilitarian and conservative.
Brad Molen of Engadget called the Lumia Icon "the solid high-end Windows Phone that we've wanted for a long time. It has an amazing display, great performance and solid imaging capability, but its exclusivity to Verizon will severely limit its appeal." and Mark Hachman of PCWorld said "If you’re an app fiend, you’d still be better off buying an iPhone or Android phone, which dependably receive third-party apps. But the Icon and Lumia 1520 are clearly the best Windows Phones on the market. Deciding between them simply depends on which size you prefer." Christina Bonnington from Wired said that the best Windows Phone ever still disappoints, and mentioned poor call quality as one of the detractors, but praised the solid build quality, inclusion of wireless charging, and powerful processor.
See also
Microsoft Lumia
Nokia Lumia 1520
Nokia Lumia 930
References
Lumia Icon
Microsoft Lumia
Mobile phones
Mobile phones introduced in 2014
Discontinued smartphones
Windows Phone devices
PureView
|
6343076
|
https://en.wikipedia.org/wiki/USC%20Physical%20Education%20building
|
USC Physical Education building
|
The Physical Education building is the University of Southern California's oldest on-campus athletic building. It is home to the 1,000-seat North Gym as well as the campus's first indoor swimming facilities.
The North Gym was the USC Trojans men's volleyball and USC Trojans women's volleyball teams' home court from 1970 until 1988. From 1989 to 2006, the North Gym and the Lyon Center split time as the teams' home courts. In 2007, the teams moved to the Galen Center, but use the old venues if the Galen Center is reserved for other events.
Until 2006, the Trojans basketball and volleyball teams held practice in the North Gym.
The Physical Education building is home to USC's Air Force, Army, and Navy ROTC programs, and has been used as a filming location for many films, including Love & Basketball and Swimfan.
References
Basketball venues in Los Angeles
College basketball venues in the United States
College swimming venues in the United States
College volleyball venues in the United States
Swimming venues in Los Angeles
Physical Education Building
Volleyball venues in Los Angeles
|
4910705
|
https://en.wikipedia.org/wiki/Navy%20Enlisted%20Classification
|
Navy Enlisted Classification
|
The Navy Enlisted Classification (NEC) system supplements the rating designators for enlisted members of the United States Navy. A naval rating and NEC designator are similar to the Military Occupational Specialty (MOS) designators used in the U.S. Army and U.S. Marine Corps and the Air Force Specialty Code (AFSC) used in the U.S. Air Force.
The U.S. Navy has several ratings or job specialties for its enlisted members. An enlisted member is known by the enlisted rating, for example Machinist's Mate (MM), or by the enlisted rate, for example Petty Officer First Class (PO1). Often Navy enlisted members are addressed by a combination of rating and rate; in this example, a machinist's mate who is a petty officer first class may be addressed as Machinist's Mate 1st Class (MM1).
However, the NEC designator is a four-digit code that identifies skills and abilities beyond the standard (or outward) rating designator. According to the Military Personnel Manual (MILPERSMAN) 1221-010, the NEC designator facilitates personnel planning, procurement, and selection for training; development of training requirements; promotion, distribution, assignment and the orderly call to active duty of inactive duty personnel in times of national emergency or mobilization.
For example, a person holding the MM-3385 is a nuclear-trained machinist's mate for surface ships, and a person with an MM-3355 is a nuclear-trained machinist's mate for submarines.
In the U.S. Navy's officer ranks, the naval officer designator serves a similar purpose.
01 Deck Department
0160 - Causeway Barge Ferry Pilot PO2-MCPO
0161 - YTB/YT Tugmaster PO1-MCPO
0164 - Patrol Boat Coxswain SN-PO1
0167 - LCAC Operator CPO-MCPO
0169 - Causeway Barge Ferry Coxswain PO3-PO1
0170 - Surface Rescue Swimmer SN-MCPO
0171 - Landing Craft Utility Craftmaster PO1-MCPO
0172 - LCAC Loadmaster SN-PO1
0181 - Navy Lighterage Deck Supervisor PO3-PO1
0190 - Force Protection Boat Coxswain PO3-PO1
0199 - Boatswain's Mate Basic
02 Navigation Department
0202 - Assistant Navigator PO1-MCPO
0215 - Harbor/Docking Pilot PO1-MCPO
0299 - Quartermaster Basic
03 Operations Department
0302 - AN/SYS-2 Integrated Automatic Detection and Tracking (IADT) Systems Operator SN-CPO
0304 - LCAC Radar Operator/Navigator PO2-CPO
0318 - Air Intercept Controller PO2-PO1
0319 - Supervisory Air Intercept Controller PO1-MCPO
0324 - ASW/SUW Tactical Air Controller (ASTAC) PO2-MCPO
0327 - Sea Combat Air Controller (SCAC) SN-CPO
0328 - ASW/ASUW Tactical Air Control (ASTAC) Leadership CPO-MCPO
0334 - HARPOON (AN/SWG-1A) Engagement Planning Operator PO3-MCPO
0336 - Tactical/Mobile (TacMobile) Operations Control (OPCON) Operator SN-CPO
0340 - Global Command and Control System Common Operational Picture/Maritime 4.X (GCCS COP/M 4.X) Operator SN-MCPO
0342 - Global Command and Control System Common Operational Picture/Maritime (GCCS COP/M) Operator SN-MCPO
0345 - Joint Tactical Ground Station (JTAGS)/Multi-Mission Mobile Processor (M3P) System Operator/Maintainer PO3-CPO
0346 - AEGIS Console Operator Track 3 SN-MCPO
0347 - Ship Self Defense System (SSDS) MK1 Operator SN-PO1
0348 - Multi-Tactical Digital Information Link Operator (TADIL) PO3-MCPO
0349 - SSDS MK 2 Advanced Operator SN-CPO
0350 - Interface Control Officer (ICO) PO1-MCPO
0356 - Global Command and Control System-Maritime (4.1) Increment 2 (GCCS-M 4.1 Inc 2) Operator SR-MCPO
0399 - Operations Specialist Basic
1523 - AN/SPN-35 Amphibious Air Traffic Control Radar Technician
04 Sonar Technician (Submarine/Surface)
0402 - AN/SQQ-89(V)2/9 Active Sonar Level II Technician/Operator Petty Officer -Chief
0410 - AN/SLQ-48(V) Mine Neutralization Systems (MNS) Operator/Maintenance Technician SN-MCPO
0411 - AN/SQQ-89(V)4/6 Sonar Subsystem Level I Operator SN-PO1
0414 - AN/SQQ-89(V)3/5 Active Sonar Level II Technician/Operator PO3-SCPO
0415 - AN/SQQ-89(V) 2/3/4/6/7/8/9/12 Passive Sonar Level II Technician/Operator PO3-SCPO
0416 - Acoustic Intelligence Specialist PO1-MCPO
0417 - ASW Specialist CPO-MCPO
0425 - AN/BQQ-6 TRIDENT LEVEL III Master Operation and Maintenance Technician PO2-MCPO
0430 - Underwater Fire Control System MK-116 MOD 7 Anti-Submarine Warfare Control System Operator PO2-MCPO
0450 - Journeyman Level Acoustic Analyst PO2-MCPO
0455 - AN/SQQ-89(V) 4/6 Active Sonar Level II Technician PO3-SCPO
0461 - AN/BSY-2(V) Advanced Maintainer PO3-MCPO
0466 - Journeyman Surface Ship USW Supervisor PO2-SCPO
0501 - Sonar (Submarines) Leading Chief Petty Officer PO1-MCPO
0505 - Integrated Undersea Surveillance System (IUSS) Analyst SN-SCPO
0506 - Integrated Undersea Surveillance System (IUSS) Maintenance Technician SN-SCPO
0507 - Integrated Undersea Surveillance System (IUSS) Master Analyst PO2-SCPO
0509 - AN/SQQ-89 (V) Adjunct Subsystem Level II Technician PO3-SCPO
0510 - AN/SQS-53D Sensor Subsystem Level II Technician/Operator SN-CPO
0511 - AN/SQQ-89(V) 11/12 Sonar Subsystem Level I Operator SN-PO1
0512 - AN/BSY-1 and AN/BQQ-5E Combined Retained Equipment Maintenance Technician PO3-SCPO
0518 - Sonar Technician AN/BQQ-10(V) Operator/Maintainer PO3-SCPO
0520 - Sonar, Combat Control and Architecture Equipment Technician PO2-SCPO
0521 - AN/SQQ-89(V)15 Sonar System Level I Operator SN-PO1
0522 - AN/SQQ-89(V)15 Sonar System Level II Technician PO3-MCPO
0523 - AN/SQQ-89(V)15 Sonar System Journeyman PO2-SCPO
0524 - AN/SQQ-89A(V)15/(V)15 EC204 Surface Ship USW Combat Systems Senor Operator SN-PO1
0525 - AN/SQQ-89A(V)15 Surface Ship USW Combat Systems Maintenance Technician PO3-SCPO
0527 - AN/SQQ-89A(V)15/(V)15 EC204 Surface Ship USW Combat Systems Journeyman PO2-CPO
0530 - AN/BQQ-10(V) TI-10/12 Operator/Maintainer PO3-SCPO
0540 - AN/SQQ-34C (V) 2 Aircraft Carrier Tactical Support Center (CV-TSC) Operator PO3-SCPO
0541 - AN/SQQ-34C (V) 2 Aircraft Carrier Tactical Support Center (CV-TSC) Maintenance Technicians PO3-PO1
0550 - Integrated Undersea Surveillance System (IUSS) Passive Sensor Operator (PSO) SN-SCPO
0551 - Integrated Undersea Surveillance System (IUSS) Supervisor PO2-SCPO
0552 - Integrated Undersea Surveillance System (IUSS) Low Frequency Active (LFA)/Compact Low Frequency Active (CLFA) Operator SN-SCPO
0553 - Integrated Undersea Surveillance System (IUSS) SURTASS Mission Commander CPO-MCPO
08 Weapons Department
0746 - Advanced Undersea MK-46 Maintenance Weapons Smith (SN-SCPO)
0812 - Small Arms Marksmanship Instructor (PO2-MCPO)
0814 - Crew Served Weapons (CSW) Instructor (PO2-MCPO)
0857 - 25mm Machine Gun System (MGS) MK 38 MOD Gun Weapon System (GWS) Technician (SN-PO1)
0870 - MK 46 MOD 2 Gun Weapon System (GWS) Technician (SN-SCPO)
0878 - MK-75 Operator/Maintainer (SN-CPO)
0879 - 5"/54 Caliber Gun System MK-45 MOD 1 and 2 Operator/Maintainer (SN-SCPO)
0880 - 5”/62-Caliber MK 45 MOD 4 Gun Mount Maintenance (SN-SCPO)
0979 - MK-41 VLS Baseline IV Through VII Technician (SN-MCPO)
0981 - MK-41 VLS Maintainer Technician (SN-MCPO)
References
NAVPERS 18068F Volume II the official manual of Navy Enlisted Classifications (NECs) published in BUPERS. April 2021
United States Navy