iOS 15
iOS 15 is the fifteenth and current major release of the iOS mobile operating system developed by Apple for its iPhone and iPod Touch lines of products. It was announced at the company's Worldwide Developers Conference on June 7, 2021, as the successor to iOS 14, and released to the public on September 20, 2021.
History
Updates
The first developer beta of iOS 15 was released on June 7, 2021, and the first public beta was released on June 30, 2021, six days after the release of the second developer beta. iOS 15 was officially released on September 20, 2021.
Features
Focus
Focus allows users to set their "state," such as Work, Sleep, Do Not Disturb, or a custom focus. Based on the selected state, users can choose which types of notifications they want to receive and from which applications. It is also possible to choose which Home Screen pages, and therefore which apps, are shown for each state. The state can change automatically based on the user's location or the time of day.
Focus also controls the interactions with Contacts, so it is possible to decide which specific contacts can "disturb" the user.
Some Lock Screen settings can also be controlled by the state: for example, the Dim Lock Screen feature, which darkens the Lock Screen and keeps notifications from appearing on it, can be turned on or off automatically depending on the state.
Focus is synchronized automatically across different iOS and macOS devices on the same iCloud account, as well as any paired watchOS devices.
Notifications
Notifications receive a new look, with contact photos for communication apps and larger app icons. When a notification arrives, the user can mute the corresponding app for one hour or for the rest of the day.
The Summary allows the user to group and postpone notifications from chosen apps, delivering them at a scheduled time in a single grouped notification called the notification summary.
Live Text
On devices with an A12 chip or later, Live Text can recognize and transcribe text in any app, whether it appears in the live camera view, in images, or in photos, using on-device text recognition powered by the Neural Engine.
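Live Text itself is a system feature rather than a developer API in iOS 15. The sketch below only illustrates the same kind of on-device text recognition that third-party apps can perform with the Vision framework; the helper name is hypothetical.

```swift
// Hedged sketch: on-device text recognition with the Vision framework
// (an illustration of the underlying capability, not the Live Text API).
import Vision
import UIKit

func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else { return completion([]) }

    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        // Keep the best candidate string for each detected text region.
        completion(observations.compactMap { $0.topCandidates(1).first?.string })
    }
    request.recognitionLevel = .accurate       // favor accuracy over speed
    request.usesLanguageCorrection = true

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```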
Smart stacks with suggested Widgets
The Widgets in iOS 15 are more dynamic: depending on the context, the system can add or remove widgets to existing stacks. For example, near the start of a certain event in the Calendar, the system can decide to add the Calendar Widget to an existing smart stack, if it is not already present, and then remove it at the end of the event.
Cross-App Drag and Drop
Users can now drag pictures and text from one app to another. This feature was previously only available in iPadOS.
Home
Users can now reorder or delete the various Home screens and hide or limit selected Home screens using Focus mode.
Per-App Text Size
From the Control Center, the text size can be set per app.
Spotlight
The global search function has been enhanced, and it is now also available on the Lock Screen by swiping down on the page.
Dictation
Keyboard dictation, formerly limited to 60 seconds per session, no longer has a time limit.
System-wide translation
System-wide translation allows the user to translate text in all apps by selecting it and tapping on the Translate option.
Adjust video playback speed
The system default player, used for videos and by many apps, now has a control to adjust the playback speed.
Video Effects and Mic Mode
Video Effects and Mic Mode are two new controls in Control Center that allow the user to apply the Portrait effect to the camera and set the microphone mode to Voice Isolation in any app.
Accessibility improvements
Per-App Accessibility: each app can have its own accessibility settings, such as text customization (Bold Text, Larger Text, Button Shapes, On/Off Labels, Reduce Transparency), increased contrast, reduced motion, and auto-playing video previews.
Image Exploration with VoiceOver: photos are described to give users with low vision more context about what is displayed in them.
Audible charts: the Audio Graphs accessibility framework lets apps represent chart data with audio for blind and low-vision users.
iCloud
Backups to iCloud can now also be made on 5G cellular networks.
RealityKit 2
The new version allows apps to build more immersive AR experiences using new APIs for faster object capture, custom shaders, dynamic assets, custom systems, and character control.
Accounts for School and Work
Users can add managed accounts from their school or company to the iPhone without having to use external apps or configuration profiles.
StoreKit 2
StoreKit 2 allows apps to implement the "Request a Refund" option in-app. The users can tap this option, select a specific in-app purchase and identify the problem that led to the refund request.
It also allows developers to monitor the purchases made by their users without relying on third-party solutions.
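A minimal sketch of how an app might call these StoreKit 2 APIs, assuming a hypothetical product identifier and leaving the surrounding UI to the app; it illustrates the API surface rather than any reference implementation.

```swift
// Hedged sketch: in-app refund requests and purchase monitoring with StoreKit 2 (iOS 15+).
// The product identifier is a hypothetical placeholder.
import StoreKit
import UIKit

enum StoreHelper {
    /// Present the system refund sheet for the latest transaction of a product.
    static func requestRefund(for productID: String, in scene: UIWindowScene) async {
        guard let result = await Transaction.latest(for: productID),
              case .verified(let transaction) = result else {
            return // nothing verified to refund
        }
        do {
            let status = try await transaction.beginRefundRequest(in: scene)
            print("Refund request finished with status: \(status)")
        } catch {
            print("Refund request failed: \(error)")
        }
    }

    /// Observe transaction updates directly, without a third-party service.
    static func observePurchases() -> Task<Void, Never> {
        Task {
            for await update in Transaction.updates {
                if case .verified(let transaction) = update {
                    // Grant or revoke entitlements here, then mark the transaction handled.
                    await transaction.finish()
                }
            }
        }
    }
}
```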
App improvements
FaceTime
iOS 15 adds several new features to FaceTime including:
Grid view for group conversations
Portrait mode, requires A12 Bionic chip or later
Spatial Audio
Voice isolation mode: remove background noise during calls
Wide spectrum mode
FaceTime links and web integration: Allow Android and Microsoft Windows users to join calls
Calendar integration
Mute alerts, which let users know when they are talking while muted
FaceTime can now take advantage of all rear cameras (on compatible devices)
SharePlay allows you to share your content from compatible video and music apps to a FaceTime call. You can also share your screen to a FaceTime call.
Memoji
Memoji in iOS 15 have more customization options, including new clothing, two different eye colors, new glasses, new stickers, multicolored headwear, and new accessibility options.
Messages
Multiple images in stack: Messages now displays multiple images in a stack, making them easier to navigate.
Pinned content: it is possible to pin any content, text or link you receive from a contact.
Shared with You: Messages also introduces a new feature called "Shared with You," which organizes links and other content shared via Messages in a dedicated section in their native apps for later viewing (for instance, a news article shared via messages is shown in the News app).
Maps
Apple Maps receives several new features:
Greater depth has been added to driving maps with the use of 3D modeling of features such as buildings, bridges, and trees, making it easier to interpret directions when roads pass over or under the one being driven on.
3D globe with a new color palette and increased mountain, desert, and forest detail
Increased traffic information, turn lanes, bike, bus and taxi lanes, medians, crosswalks
Walking directions in augmented reality on A12 devices or later
Redesigned place cards
Improved filtering for search
The night mode in Maps now follows the system-wide dark mode setting instead of activating only at night, and its colors have been improved.
Public Transport Info: public transport routes and times with ability to pin favorite routes to the top. In-app notifications will alert users when they need to get off a bus or train. Transit information can be visible on a connected Apple Watch.
Reports and Reviews: you can report incidents, write reviews and add photos to points of interest, etc.
New features for driving, including a new map where details such as traffic and incidents are highlighted, as well as an itinerary planner that lets you view a future journey by selecting departure or arrival time.
Photos
The Photos app now lets the user manually set the time, date, and location of a photo. The app also shows information about the photo, such as the camera used to take it and the file size. The user can also look up places that appear inside images, although this feature only works on devices with the Apple A12 chip or later.
Camera
Improved panoramic shooting mode on iPhone 12 and above: less geometric distortion in panoramas with elongated fields of view, reduced noise and banding caused by changes in brightness and contrast as the camera sweeps from side to side, and a sharper, less blurry image even when capturing moving subjects within the panorama.
Safari
Safari was completely redesigned, moving the tab bar and address bar to the bottom of the screen. It now has tab groups, allowing users to organize tabs and share entire groups of tabs. The user can now use a pull-to-refresh gesture to reload a webpage. Browser extensions are available for the first time in Safari for iOS; they are the same extensions available in Safari for the Mac. Safari will automatically upgrade HTTP URLs to HTTPS where supported. The WebM audio codec is now supported.
Safari opens with a new start page; it is possible to have a custom page on startup that contains sections including favorites, most frequently visited sites, Siri suggestions, etc.
App Clips more discoverable: it is possible to show a full-screen preview of the app clip in Safari.
Weather
Weather received an overhaul, with new animations and full-screen weather maps, and the weather icon was updated. The app also has a next-hour precipitation notification feature, which alerts the user whenever rain or snow is about to start or stop within the next hour, independently for the current location and each saved location.
Siri
Siri now works offline, offering shorter response times for the most common requests that do not require an Internet connection. This feature requires a device with an A12 Bionic chip or later.
News
Apple News has been completely redesigned, featuring more rounded corners.
Share with Siri
The user can ask Siri “Hey Siri, share this with [name]” (or something similar like “send this to [name]”) and Siri will share the content on the screen to that person using Messages. Items like images, web pages, Apple Music or Podcasts, Apple News stories, and Maps locations will share the actual content (or a link to it). For content Siri can't share, it will warn you that it can only send a screenshot—but Siri will still automatically take that screenshot and drop it in a Message to that person.
Announce Notifications: with announce notifications update in iOS 15, Siri can now read all incoming notifications and even allow users to respond to them using their voice. The user can choose to enable this for specific apps.
Health
Health data can now be shared.
A new monitored parameter called "Walking Steadiness" has been added, which determines the risk of falling using gyroscopic sensors that measure balance, stability, and coordination.
Trend analysis has been added, i.e. trend lines that show how the various health metrics change over the long term.
Lab Results allows you to import laboratory results into the Health app from a healthcare provider.
Files
Groups is a new view mode that groups files of the same type.
The built-in PDF editor can insert pages from existing files or scans, remove pages, and rotate pages. PDFs can also be locked with a password.
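The Files editor is a built-in user feature; for developers, equivalent page-level operations are available through PDFKit. A minimal sketch, assuming hypothetical input and output URLs and an illustrative password:

```swift
// Hedged sketch: page insertion, removal, rotation, and password protection with PDFKit.
// The URLs, password, and helper name are hypothetical placeholders.
import PDFKit

func editPDF(at url: URL, insertingFrom otherURL: URL, savingTo outputURL: URL) {
    guard let document = PDFDocument(url: url),
          let other = PDFDocument(url: otherURL),
          let importedPage = other.page(at: 0) else { return }

    document.insert(importedPage, at: document.pageCount)   // append a page from another file
    if document.pageCount > 1 {
        document.removePage(at: 0)                           // remove the original first page
    }
    document.page(at: 0)?.rotation += 90                     // rotate the (new) first page

    // Lock the result with a password when writing it out.
    let saved = document.write(to: outputURL, withOptions: [
        .userPasswordOption: "secret",
        .ownerPasswordOption: "secret"
    ])
    print(saved ? "Saved locked PDF" : "Failed to save PDF")
}
```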
Notes
New #tags allow classifying, organizing, then finding your notes faster. Smart Folders automatically group various notes based on tags.
Ability to share notes with other collaborators and work on them together. The activity view shows a summary of the changes made by other collaborators since the note was last viewed, along with a day-by-day list of the activities carried out by each collaborator. It is possible to mention @someone in a note, and that person will be notified.
Reminders
Ability to insert #tags in reminders to classify them.
Shortcuts
Sound recognition has been added to the automations, so it is possible to execute a customized command when a certain sound is recognized.
New automation triggers based on the current reading of a HomeKit-enabled humidity, air quality, or light level sensor.
Voice Memos
Added new playback options to adjust speed and skip silence.
Wallet
Keys: iPhone is able to unlock select HomeKit-enabled smart locks. Requires an iPhone with an A12 chip or newer.
Identification cards and driver's licenses: iPhone can store a copy of a U.S. user's state-issued identity card or driver's license. Arizona, Georgia, Connecticut, Iowa, Kentucky, Maryland, Oklahoma, and Utah will be the first states to support the feature.
Security and privacy
App Privacy Report
By activating this logging, the user can record a 7-day summary of when each app accessed certain data and which domains or websites it contacted.
Siri Improved Privacy
On devices with an Apple A12 chip or later, Siri now converts audio into words on the device itself instead of sending it to Apple servers.
Hide my IP for Trackers in Safari
Safari's anti-tracking now prevents known trackers from reading your real IP address.
Hide my IP for External Content in Mail
In the Mail app, the user can enable a setting that hides their IP address when downloading external content embedded in a message. This makes it possible to download such content privately, without being tracked by the spammers or commercial companies that inserted it without the user's knowledge.
Hide My Email
Hide My Email creates random email addresses that forward to the inbox so e-mail can be sent and received anonymously.
iCloud Private Relay
Private Relay masks the user's IP address in Safari, preserving the region without revealing the actual location. It also protects the DNS query resolution and insecure HTTP traffic in all apps.
Built-in one-time password authenticator With Autofill
The built-in authenticator allows iOS devices to generate verification codes for additional sign-in security on accounts. There is no need to download a separate app because it is integrated into the OS, and the verification codes are filled in automatically when the user signs in to the site.
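Apple does not expose its code generator as a developer API; the sketch below only illustrates the standard time-based one-time password (TOTP) algorithm from RFC 6238 that such verification codes are based on, assuming a raw shared secret (real authenticator setup typically starts from a Base32-encoded secret).

```swift
// Illustrative sketch of the TOTP algorithm (RFC 6238) behind time-based verification
// codes. This is not Apple's implementation or API; the function name is hypothetical.
import Foundation
import CryptoKit

func totpCode(secret: Data, date: Date = Date(), period: TimeInterval = 30, digits: Int = 6) -> String {
    // 1. Convert the current time into a 30-second counter, encoded as 8 big-endian bytes.
    let counter = UInt64(date.timeIntervalSince1970 / period)
    let counterData = withUnsafeBytes(of: counter.bigEndian) { Data($0) }

    // 2. Compute HMAC-SHA1 over the counter using the shared secret.
    let mac = HMAC<Insecure.SHA1>.authenticationCode(for: counterData,
                                                     using: SymmetricKey(data: secret))
    let bytes = Array(mac)

    // 3. Dynamic truncation: read 4 bytes at an offset given by the low nibble of the last byte.
    let offset = Int(bytes[bytes.count - 1] & 0x0F)
    let truncated = (UInt32(bytes[offset] & 0x7F) << 24)
        | (UInt32(bytes[offset + 1]) << 16)
        | (UInt32(bytes[offset + 2]) << 8)
        | UInt32(bytes[offset + 3])

    // 4. Reduce to the requested number of decimal digits, zero-padded.
    let code = truncated % UInt32(pow(10, Double(digits)))
    return String(format: "%0\(digits)d", code)
}
```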
WPA3 Hotspot
Hotspot connections now can also use the WPA3 security protocol.
Tethering to or from older iOS devices is not possible, as WPA2 compatibility is not fully supported. Apple Support has said that the issue might be fixed if enough people submit a feature request.
CSAM detection
CSAM detection, a feature that identifies known Child Sexual Abuse Material (CSAM) in photos stored in iCloud Photos, was originally intended to be included in iOS 15; its implementation has been delayed indefinitely. The recognition is based on a perceptual hash called NeuralHash.
Siri and Search are also being updated to intervene when users perform searches for queries related to CSAM. The Messages app will warn children and their parents when receiving or sending sexually explicit photos, blurring sent and received photos.
Other changes
New widgets have been added: Mail, Contacts, Game Center, Find My, App Store, Sleep, Apple Card (upcoming widget)
New Emojis will be added in iOS 15.4.
iOS 13 wallpapers were removed in the first beta of iOS 15.
iOS 15 features a new wallpaper in two modes: light and dark.
Keyboard Brightness to Control Center will be added in iOS 15.4.
Game Controllers Support for App Store will be added in iOS 15.4.
SharePlay was officially released in iOS 15.1. It was initially present in the iOS 15.0 betas but was disabled and hidden behind a developer profile before the final release.
Support for SharePlay will be added in iOS 15.4.
"Other" storage was renamed to "System Data".
The ability to add notes to iCloud Keychain password entries will be added in iOS 15.4.
iOS 15 includes a new feature called "Record App Activity", and in iOS 15.2 this functionality was extended and is now called "App Privacy Report".
Support for Custom Email Domains for iCloud will be expanded and added in iOS 15.4.
‘Cosmetic Scan’ Trade-In Tool for iPhone will be added in iOS 15.4.
“Corner Gestures” for Notes in Settings will be added in iOS 15.4.
Passkey Website Sign-in will be added in iOS 15.4.
Emergency SOS was improved in iOS 15.2.
The ability to turn off notifications for personal automations in Shortcuts will be added in iOS 15.4.
TV app customization, letting users choose between "Still Frame" and "Poster Art" options for the Up Next display under Preferences, will be added in iOS 15.4.
On iOS 15.0-15.1, Face ID was disabled on the iPhone 13 series after a third-party screen replacement. This is no longer the case starting from iOS 15.2. Face ID will be fixed and re-enabled in iOS 15.4.
120Hz Animations on iPhone 13 will be fixed in iOS 15.4.
Face Mask and Glasses for Face ID will be added in iOS 15.4.
Vaccination Records in the Health App will be added in iOS 15.4.
Supported devices
All the devices supporting iOS 13 and iOS 14 also support iOS 15. The following is a list of devices that support iOS 15:
iPhone
iPhone 6S
iPhone 6S Plus
iPhone 7
iPhone 7 Plus
iPhone 8
iPhone 8 Plus
iPhone X
iPhone XS
iPhone XS Max
iPhone XR
iPhone 11
iPhone 11 Pro
iPhone 11 Pro Max
iPhone SE (1st generation)
iPhone SE (2nd generation)
iPhone 12 Mini
iPhone 12
iPhone 12 Pro
iPhone 12 Pro Max
iPhone 13 Mini
iPhone 13
iPhone 13 Pro
iPhone 13 Pro Max
iPod Touch
iPod Touch (7th generation)
See also
iPadOS 15
macOS Monterey
tvOS 15
watchOS 8
Handheld game console
A handheld game console, or simply handheld console, is a small, portable self-contained video game console with a built-in screen, game controls and speakers. Handheld game consoles are smaller than home video game consoles and contain the console, screen, speakers, and controls in one unit, allowing people to carry them and play them at any time or place.
In 1976, Mattel introduced the first handheld electronic game with the release of Auto Race. Later, several companies—including Coleco and Milton Bradley—made their own single-game, lightweight table-top or handheld electronic game devices. The first commercially successful handheld console was Merlin, released in 1978, which sold more than 5 million units. The first handheld game console with interchangeable cartridges was the Milton Bradley Microvision, released in 1979.
Nintendo is credited with popularizing the handheld console concept with the release of the Game Boy in 1989, and it continues to dominate the handheld console market. The first internet-enabled handheld console, and the first with a touchscreen, was the Game.com, released by Tiger Electronics in 1997. The Nintendo DS, released in 2004, introduced touchscreen controls and wireless online gaming to a wider audience, becoming the best-selling handheld console to date.
History
Timeline
This table describes handheld game consoles with over 1 million sales, organized by video game generation.
Origins
The origins of handheld game consoles are found in the handheld and tabletop electronic game devices of the 1970s and early 1980s. These electronic devices are capable of playing only a single game, they fit in the palm of the hand or on a tabletop, and they may make use of a variety of video displays such as LED, VFD, or LCD. In 1978, handheld electronic games were described by Popular Electronics magazine as "nonvideo electronic games" and "non-TV games" as distinct from devices that required use of a television screen. Handheld electronic games, in turn, find their origins in the synthesis of earlier handheld and tabletop electro-mechanical devices, such as Waco's Electronic Tic-Tac-Toe (1972) and Cragstan's Periscope-Firing Range (1951), and the emerging optoelectronic-display-driven calculator market of the early 1970s. This synthesis happened in 1976, when "Mattel began work on a line of calculator-sized sports games that became the world's first handheld electronic games. The project began when Michael Katz, Mattel's new product category marketing director, told the engineers in the electronics group to design a game the size of a calculator, using LED (light-emitting diode) technology."
our big success was something that I conceptualized—the first handheld game. I asked the design group to see if they could come up with a game that was electronic that was the same size as a calculator.
—Michael Katz, former marketing director, Mattel Toys.
The result was the 1976 release of Auto Race. It was followed by Football in 1977, and the two games were so successful that, according to Katz, "these simple electronic handheld games turned into a '$400 million category.'" Mattel would later be recognized by the industry for innovation in handheld game device displays. Soon, other manufacturers including Coleco, Parker Brothers, Milton Bradley, Entex, and Bandai began following up with their own tabletop and handheld electronic games.
In 1979 the LCD-based Microvision, designed by Smith Engineering and distributed by Milton-Bradley, became the first handheld game console and the first to use interchangeable game cartridges. The Microvision game Cosmic Hunter (1981) also introduced the concept of a directional pad on handheld gaming devices, and is operated by using the thumb to manipulate the on-screen character in any of four directions.
In 1979, Gunpei Yokoi, traveling on a bullet train, saw a bored businessman playing with an LCD calculator by pressing the buttons. Yokoi then thought of an idea for a watch that doubled as a miniature game machine for killing time. Starting in 1980, Nintendo began to release a series of electronic games designed by Yokoi called the Game & Watch games. Taking advantage of the technology used in the credit-card-sized calculators that had appeared on the market, Yokoi designed the series of LCD-based games to include a digital time display in the corner of the screen. For later, more complicated Game & Watch games, Yokoi invented a cross-shaped directional pad, or "D-pad", for control of on-screen characters. Yokoi also included his directional pad on the NES controllers, and the cross-shaped thumb controller soon became standard on game console controllers and has remained ubiquitous across the video game industry since. When Yokoi began designing Nintendo's first handheld game console, he came up with a device that married the elements of his Game & Watch devices and the Famicom console, including both items' D-pad controller. The result was the Nintendo Game Boy.
In 1982, the Bandai LCD Solarpower was the first solar-powered gaming device. Some of its games, such as the horror-themed Terror House, featured two LCD panels, one stacked on the other, for an early 3D effect. In 1983, Takara Tomy's Tomytronic 3D simulated 3D by having two LCD panels that were lit by external light through a window on top of the device, making it the first dedicated home video 3D hardware.
Beginnings
The late 1980s and early 1990s saw the beginnings of the modern-day handheld game console industry, after the demise of the Microvision. Because backlit LCD game consoles with color graphics consumed a lot of power, they were not as battery-friendly as the non-backlit original Game Boy, whose monochrome graphics allowed longer battery life. By this point, rechargeable battery technology had not yet matured, and so the more advanced game consoles of the time, such as the Sega Game Gear and Atari Lynx, did not have nearly as much success as the Game Boy.
Even though third-party rechargeable batteries were available for the battery-hungry alternatives to the Game Boy, these batteries used nickel-cadmium cells and had to be completely discharged before being recharged to ensure maximum efficiency; lead-acid batteries could be used with automobile circuit limiters (cigarette-lighter plug devices), but they had mediocre portability. The later NiMH batteries, which do not share this requirement for maximum efficiency, were not released until the late 1990s, years after the Game Gear, Atari Lynx, and original Game Boy had been discontinued. During the time when technologically superior handhelds faced strict technical limitations, batteries had very low mAh ratings, since batteries with high energy density were not yet available.
Modern game systems such as the Nintendo DS and PlayStation Portable have rechargeable Lithium-Ion batteries with proprietary shapes. Other seventh-generation consoles such as the GP2X use standard alkaline batteries. Because the mAh rating of alkaline batteries has increased since the 1990s, the power needed for handhelds like the GP2X may be supplied by relatively few batteries.
Game Boy
Nintendo released the Game Boy on April 21, 1989 (September 1990 for the UK). The design team headed by Gunpei Yokoi had also been responsible for the Game & Watch system, as well as the Nintendo Entertainment System games Metroid and Kid Icarus. The Game Boy came under scrutiny by Nintendo president Hiroshi Yamauchi, saying that the monochrome screen was too small, and the processing power was inadequate. The design team had felt that low initial cost and battery economy were more important concerns, and when compared to the Microvision, the Game Boy was a huge leap forward.
Yokoi recognized that the Game Boy needed a killer app—at least one game that would define the console, and persuade customers to buy it. In June 1988, Minoru Arakawa, then-CEO of Nintendo of America saw a demonstration of the game Tetris at a trade show. Nintendo purchased the rights for the game, and packaged it with the Game Boy system as a launch title. It was almost an immediate hit. By the end of the year more than a million units were sold in the US. As of March 31, 2005, the Game Boy and Game Boy Color combined to sell over 118 million units worldwide.
Atari Lynx
In 1987, Epyx created the Handy Game, a device that would become the Atari Lynx in 1989. It is the first color handheld console ever made, as well as the first with a backlit screen. It also features networking support with up to 17 other players, and advanced hardware that allows the zooming and scaling of sprites. The Lynx can also be turned upside down to accommodate left-handed players. However, all these features came at a very high price point, which drove consumers to seek cheaper alternatives. The Lynx is also very unwieldy, consumes batteries very quickly, and lacked the third-party support enjoyed by its competitors. Due to its high price, short battery life, production shortages, a dearth of compelling games, and Nintendo's aggressive marketing campaign, and despite a redesign in 1991, the Lynx became a commercial failure. Despite this, companies like Telegames helped to keep the system alive long past its commercial relevance, and after new owner Hasbro released the rights to develop for the system into the public domain, independent developers like Songbird managed to release new commercial games for it every year until 2004's Winter Games.
TurboExpress
The TurboExpress is a portable version of the TurboGrafx, released in 1990 for $249.99. Its Japanese equivalent is the PC Engine GT.
It is the most advanced handheld of its time and can play all of the TurboGrafx-16's games, which come on small, credit-card-sized media called HuCards. It has a 66 mm (2.6 in) screen, the same size as the original Game Boy's, but at a much higher resolution, and it can display 64 sprites at once, 16 per scanline, from a palette of 512 colors, although the hardware can only handle 481 simultaneous colors. It has 8 kilobytes of RAM. The TurboExpress runs its HuC6280 CPU at 1.79 or 7.16 MHz.
The optional "TurboVision" TV tuner includes RCA audio/video input, allowing users to use TurboExpress as a video monitor. The "TurboLink" allowed two-player play. Falcon, a flight simulator, included a "head-to-head" dogfight mode that can only be accessed via TurboLink. However, very few TG-16 games offered co-op play modes especially designed with the TurboExpress in mind.
Bitcorp Gamate
The Bitcorp Gamate is one of the first handheld game systems created in response to the Nintendo Game Boy. It was released in Asia in 1990 and distributed worldwide by 1991.
Like the Sega Game Gear, it was horizontal in orientation and, like the Game Boy, required 4 AA batteries. Unlike many later Game Boy clones, its internal components were professionally assembled (no "glop-top" chips). Unfortunately, the system's fatal flaw was its screen: even by the standards of the day it was rather difficult to use, suffering from ghosting problems similar to those commonly complained about on first-generation Game Boys. Likely because of this, sales were quite poor, and Bitcorp closed by 1992. However, new games continued to be published for the Asian market, possibly as late as 1994. The total number of games released for the system remains unknown.
Gamate games were designed for stereo sound, but the console is only equipped with a mono speaker.
Sega Game Gear
The Game Gear is the third color handheld console, after the Lynx and the TurboExpress, and was produced by Sega. Released in Japan in 1990 and in North America and Europe in 1991, it is based on the Master System, which gave Sega the ability to quickly create Game Gear games from its large library of Master System titles. While it never reached the level of success enjoyed by Nintendo, the Game Gear proved to be a fairly durable competitor, lasting longer than any other Game Boy rival.
While the Game Gear is most frequently seen in black or navy blue, it was also released in a variety of additional colors: red, light blue, yellow, clear, and violet. All of these variations were released in small quantities and frequently only in the Asian market.
Following its success with the Game Gear, Sega began development on a successor during the early 1990s, which was intended to feature a touchscreen interface, many years before the Nintendo DS. However, such technology was very expensive at the time, and the handheld itself was estimated to cost around $289 had it been released. Sega eventually chose to shelve the idea and instead released the Genesis Nomad, a handheld version of the Genesis, as the successor.
Watara Supervision
The Watara Supervision was released in 1992 in an attempt to compete with the Nintendo Game Boy. The first model was designed very much like a Game Boy, but it is grey in color and has a slightly larger screen. The second model was made with a hinge across the center and can be bent slightly to provide greater comfort for the user. While the system did enjoy a modest degree of success, it never impacted the sales of Nintendo or Sega. The Supervision was redesigned a final time as "The Magnum". Released in limited quantities, it was roughly equivalent to the Game Boy Pocket. It was available in three colors: yellow, green and grey. Watara designed many of the games itself, but did receive some third-party support, most notably from Sachen.
A TV adapter was available in both PAL and NTSC formats that could convert the Supervision's black-and-white palette to 4 colors on a television, similar in some regards to Nintendo's Super Game Boy.
Hartung Game Master
The Hartung Game Master is an obscure handheld released at an unknown point in the early 1990s. Its graphics fidelity was much lower than most of its contemporaries, displaying just 64x64 pixels. It was available in black, white, and purple, and was frequently rebranded by its distributors, such as Delplay, Videojet and Systema.
The exact number of games released is not known, but is likely around 20. The system most frequently turns up in Europe and Australia.
Late 1990s
By this time, the lack of significant development in Nintendo's product line began allowing more advanced systems such as the Neo Geo Pocket Color and the WonderSwan Color to be developed.
Sega Nomad
The Nomad was released in October 1995 in North America only. The release was five years into the market span of the Genesis, with an existing library of more than 500 Genesis games. According to former Sega of America research and development head Joe Miller, the Nomad was not intended to be the Game Gear's replacement; he believed that there was little planning from Sega of Japan for the new handheld. Sega was supporting five different consoles: Saturn, Genesis, Game Gear, Pico, and the Master System, as well as the Sega CD and 32X add-ons. In Japan, the Mega Drive had never been successful and the Saturn was more successful than Sony's PlayStation, so Sega Enterprises CEO Hayao Nakayama decided to focus on the Saturn. By 1999, the Nomad was being sold at less than a third of its original price.
Game Boy Pocket
The Game Boy Pocket is a redesigned version of the original Game Boy with the same features, released in 1996. Notably, this variation is smaller and lighter. It comes in several colors: red, yellow, green, black, clear, silver, blue, and pink. It has space for two AAA batteries, which provide approximately 10 hours of game play. The screen was changed to a true black-and-white display, rather than the "pea soup" monochromatic display of the original Game Boy. Although, like its predecessor, the Game Boy Pocket has no backlight to allow play in a darkened area, it did notably improve visibility and pixel response time (mostly eliminating ghosting).
The first model of the Game Boy Pocket did not have an LED to show battery levels, but the feature was added due to public demand. The Game Boy Pocket was not a new software platform and played the same software as the original Game Boy model.
Game.com
The Game.com (pronounced in TV commercials as "game com", not "game dot com", and not capitalized in marketing material) is a handheld game console released by Tiger Electronics in September 1997. It featured many new ideas for handheld consoles and was aimed at an older target audience, sporting PDA-style features and functions such as a touch screen and stylus. However, Tiger hoped it would also challenge Nintendo's Game Boy and gain a following among younger gamers too. Unlike other handheld game consoles, the first game.com consoles included two slots for game cartridges, which would not happen again until the Tapwave Zodiac, the DS and DS Lite, and could be connected to a 14.4 kbit/s modem. Later models had only a single cartridge slot.
Game Boy Color
The Game Boy Color (also referred to as GBC or CGB) is Nintendo's successor to the Game Boy and was released on October 21, 1998, in Japan and in November of the same year in the United States. It features a color screen, and is slightly bigger than the Game Boy Pocket. The processor is twice as fast as a Game Boy's and has twice as much memory. It also had an infrared communications port for wireless linking which did not appear in later versions of the Game Boy, such as the Game Boy Advance.
The Game Boy Color was a response to pressure from game developers for a new system, as they felt that the Game Boy, even in its latest incarnation, the Game Boy Pocket, was insufficient. The resulting product was backward compatible, a first for a handheld console system, and leveraged the large library of games and great installed base of the predecessor system. This became a major feature of the Game Boy line, since it allowed each new launch to begin with a significantly larger library than any of its competitors. As of March 31, 2005, the Game Boy and Game Boy Color combined to sell 118.69 million units worldwide.
The console is capable of displaying up to 56 different colors simultaneously on screen from its palette of 32,768, and can add basic four-color shading to games that had been developed for the original Game Boy. It can also give the sprites and backgrounds separate colors, for a total of more than four colors.
Neo Geo Pocket Color
The Neo Geo Pocket Color (or NGPC) was released in 1999 in Japan, and later that year in the United States and Europe. It is a 16-bit color handheld game console designed by SNK, the maker of the Neo Geo home console and arcade machine. It came after SNK's original Neo Geo Pocket monochrome handheld, which debuted in 1998 in Japan.
In 2000 following SNK's purchase by Japanese Pachinko manufacturer Aruze, the Neo Geo Pocket Color was dropped from both the US and European markets, purportedly due to commercial failure.
The system seemed well on its way to being a success in the U.S. It was more successful than any Game Boy competitor since Sega's Game Gear, but was hurt by several factors, such as SNK's infamous lack of communication with third-party developers, and anticipation of the Game Boy Advance. The decision to ship U.S. games in cardboard boxes in a cost-cutting move rather than hard plastic cases that Japanese and European releases were shipped in may have also hurt US sales.
Wonderswan Color
The WonderSwan Color is a handheld game console designed by Bandai. It was released on December 9, 2000, in Japan. Although the WonderSwan Color was slightly larger and heavier (by 7 mm and 2 g) than the original WonderSwan, the color version featured 512 kB of RAM and a larger color LCD screen. In addition, the WonderSwan Color is compatible with the original WonderSwan library of games.
Prior to the WonderSwan's release, Nintendo had a virtual monopoly on the Japanese handheld video game market. After the release of the WonderSwan Color, Bandai took approximately 8% of the market share in Japan, partly due to its low price of 6,800 yen (approximately US$65). Another reason for the WonderSwan's success in Japan was that Bandai managed to strike a deal with Square to port the original Famicom Final Fantasy games to the system with improved graphics and controls. However, with the popularity of the Game Boy Advance and the reconciliation between Square and Nintendo, the WonderSwan Color and its successor, the SwanCrystal, quickly lost their competitive advantage.
Early 2000s
The 2000s saw a major leap in innovation, particularly in the second half with the release of the DS and PSP.
Game Boy Advance
In 2001, Nintendo released the Game Boy Advance (GBA or AGB), which added two shoulder buttons, a larger screen, and more computing power than the Game Boy Color.
The design was revised two years later when the Game Boy Advance SP (GBA SP), a more compact version, was released. The SP features a "clamshell" design (folding open and closed, like a laptop computer), as well as a frontlit color display and rechargeable battery. Despite the smaller form factor, the screen remained the same size as that of the original. In 2005, the Game Boy Micro was released. This revision sacrifices screen size and backwards compatibility with previous Game Boys for a dramatic reduction in total size and a brighter backlit screen. A new SP model with a backlit screen was released in some regions around the same time.
Along with the Nintendo GameCube, the GBA also introduced the concept of "connectivity": using a handheld system as a console controller. A handful of games use this feature, most notably Animal Crossing, Pac-Man Vs., Final Fantasy Crystal Chronicles, The Legend of Zelda: Four Swords Adventures, The Legend of Zelda: The Wind Waker, Metroid Prime, and Sonic Adventure 2: Battle.
As of December 31, 2007, the GBA, GBA SP, and the Game Boy Micro combined have sold 80.72 million units worldwide.
Game Park 32
The original GP32 was released in 2001 by the South Korean company Game Park, a few months after the launch of the Game Boy Advance. It featured a 32-bit CPU running at 133 MHz, an MP3 and DivX player, and an e-book reader. SmartMedia cards were used for storage and could hold up to 128 MB of content downloaded from a PC over a USB cable. The GP32 was redesigned in 2003: a front-lit screen was added and the new version was called the GP32 FLU (Front Light Unit). In summer 2004, another redesign, the GP32 BLU, added a backlit screen. This version of the handheld was planned for release outside South Korea, in Europe, and was sold, for example, in Spain (with VirginPlay as the distributor). While not a commercial success on the level of mainstream handhelds (only 30,000 units were sold), it ended up being used mainly as a platform for user-made applications and emulators of other systems, and was popular with developers and more technically adept users.
N-Gage
Nokia released the N-Gage in 2003. It was designed as a combination MP3 player, cellphone, PDA, radio, and gaming device. The system received much criticism alleging defects in its physical design and layout, including its vertically oriented screen and requirement of removing the battery to change game cartridges. The most well known of these was "sidetalking", or the act of placing the phone speaker and receiver on an edge of the device instead of one of the flat sides, causing the user to appear as if they are speaking into a taco.
The N-Gage QD was later released to address the design flaws of the original. However, certain features available in the original N-Gage, including MP3 playback, FM radio reception, and USB connectivity were removed.
A second generation of N-Gage launched on April 3, 2008, in the form of a service for selected Nokia smartphones.
Cybiko
The Cybiko is a Russian hand-held computer introduced in May 2000 by David Yang's company and designed for teenage audiences, featuring its own two-way radio text messaging system. It has over 430 "official" freeware games and applications. Because of the text messaging system, it features a QWERTY keyboard that was used with a stylus. An MP3 player add-on was made for the unit as well as a SmartMedia card reader. The company stopped manufacturing the units after two product versions and only a few years on the market. Cybikos can communicate with each other up to a maximum range of 300 metres (0.19 miles). Several Cybikos can chat with each other in a wireless chatroom.
Cybiko Classic:
There were two models of the Classic Cybiko. Visually, the only difference was that the original version had a power switch on the side, whilst the updated version used the "escape" key for power management. Internally, the differences between the two models were in the internal memory, and the location of the firmware.
Cybiko Xtreme:
The Cybiko Xtreme was the second-generation Cybiko handheld. It featured various improvements over the original Cybiko, such as a faster processor, more RAM, more ROM, a new operating system, a new keyboard layout and case design, greater wireless range, a microphone, improved audio output, and smaller size.
Tapwave Zodiac
In 2003, Tapwave released the Zodiac. It was designed to be a PDA-handheld game console hybrid and supported photos, movies, music, Internet, and documents. The Zodiac ran a special version of Palm OS 5 (5.2T) that supported its dedicated gaming buttons and graphics chip. Two versions were available, the Zodiac 1 and Zodiac 2, differing in memory and appearance. The Zodiac line ended in July 2005 when Tapwave declared bankruptcy.
Mid 2000s
Nintendo DS
The Nintendo DS was released in November 2004. Among its new features were the incorporation of two screens, a touchscreen, wireless connectivity, and a microphone port. As with the Game Boy Advance SP, the DS features a clamshell design, with the two screens aligned vertically on either side of the hinge.
The DS's lower screen is touch sensitive, designed to be pressed with a stylus, a user's finger or a special "thumb pad" (a small plastic pad attached to the console's wrist strap, which can be affixed to the thumb to simulate an analog stick). More traditional controls include four face buttons, two shoulder buttons, a D-pad, and "Start" and "Select" buttons. The console also features online capabilities via the Nintendo Wi-Fi Connection and ad-hoc wireless networking for multiplayer games with up to sixteen players. It is backwards-compatible with all Game Boy Advance games, but like the Game Boy Micro, it is not compatible with games designed for the Game Boy or Game Boy Color.
In January 2006, Nintendo revealed an updated version of the DS: the Nintendo DS Lite (released on March 2, 2006, in Japan) with an updated, smaller form factor (42% smaller and 21% lighter than the original Nintendo DS), a cleaner design, longer battery life, and brighter, higher-quality displays, with adjustable brightness. It is also able to connect wirelessly with Nintendo's Wii console.
On October 2, 2008, Nintendo announced the Nintendo DSi, with larger, 3.25-inch screens and two integrated cameras. It has an SD card storage slot in place of the Game Boy Advance slot, plus internal flash memory for storing downloaded games. It was released on November 1, 2008, in Japan, April 2, 2009 in Australia, April 3, 2009 in Europe, and April 5, 2009 in North America. On October 29, 2009, Nintendo announced a larger version of the DSi, called the DSi XL, which was released on November 21, 2009 in Japan, March 5, 2010 in Europe, March 28, 2010 in North America, and April 15, 2010 in Australia.
As of December 31, 2009, the Nintendo DS, Nintendo DS Lite, and Nintendo DSi combined have sold 125.13 million units worldwide.
Game King
The GameKing is a handheld game console released by the Chinese company TimeTop in 2004. The first model, while original in design, owes a large debt to Nintendo's Game Boy Advance. The second model, the GameKing 2, is believed to have been inspired by Sony's PSP; it was also upgraded with a backlit screen, albeit with a distracting background transparency (which can be removed by opening up the console). A color model, the GameKing 3, apparently exists, but was only made for a brief time and was difficult to purchase outside of Asia. Whether intentionally or not, the GameKing has the most primitive graphics of any handheld released since the Game Boy of 1989.
As many of the games have an "old school" simplicity, the device has developed a small cult following. The Gameking's speaker is quite loud and the cartridges' sophisticated looping soundtracks (sampled from other sources) are seemingly at odds with its primitive graphics.
TimeTop made at least one additional device sometimes labeled as "GameKing", but while it seems to possess more advanced graphics, is essentially an emulator that plays a handful of multi-carts (like the GB Station Light II). Outside of Asia (especially China) however the Gameking remains relatively unheard of due to the enduring popularity of Japanese handhelds such as those manufactured by Nintendo and Sony.
PlayStation Portable
The PlayStation Portable (officially abbreviated PSP) is a handheld game console manufactured and marketed by Sony Computer Entertainment. Development of the console was first announced during E3 2003, and it was unveiled on May 11, 2004, at a Sony press conference before E3 2004. The system was released in Japan on December 12, 2004, in North America on March 24, 2005, and in the PAL region on September 1, 2005.
The PlayStation Portable is the first handheld video game console to use an optical disc format, Universal Media Disc (UMD), for distribution of its games. UMD Video discs with movies and television shows were also released. The PSP utilized the Sony/SanDisk Memory Stick Pro Duo format as its primary storage medium. Other distinguishing features of the console include its large viewing screen, multi-media capabilities, and connectivity with the PlayStation 3, other PSPs, and the Internet.
Gizmondo
Tiger's Gizmondo came out in the UK in March 2005 and was released in the U.S. in October 2005. It is designed to play music, movies, and games, has a camera for taking and storing photos, and has GPS functions. It also has Internet capabilities and a phone for sending text and multimedia messages. Email was promised at launch but was never released before the downfall of the Gizmondo, and ultimately of Tiger Telematics, in early 2006. Users obtained an unreleased second service pack hoping to find such functionality; however, Service Pack B did not activate the e-mail functionality.
GP2X Series
The GP2X is an open-source, Linux-based handheld video game console and media player created by GamePark Holdings of South Korea, designed for homebrew developers as well as commercial developers. It is commonly used to run emulators for game consoles such as Neo-Geo, Genesis, Master System, Game Gear, Amstrad CPC, Commodore 64, Nintendo Entertainment System, TurboGrafx-16, MAME and others.
A new version called the "F200" was released on October 30, 2007, and features a touchscreen, among other changes. It was followed by the GP2X Wiz (2009) and the GP2X Caanoo (2010).
Late 2000s
Dingoo
The Dingoo A-320 is a micro-sized gaming handheld that resembles the Game Boy Micro and is open to game development. It also supports music, radio, emulators (8 bit and 16 bit) and video playing capabilities with its own interface much like the PSP. There is also an onboard radio and recording program. It is currently available in two colors — white and black. Other similar products from the same manufacturer are the Dingoo A-330 (also known as Geimi), Dingoo A-360, Dingoo A-380 (available in pink, white and black) and the recently released Dingoo A-320E.
PSP Go
The PSP Go is a version of the PlayStation Portable handheld game console manufactured by Sony. It was released on October 1, 2009, in American and European territories, and on November 1 in Japan. It was revealed prior to E3 2009 through Sony's Qore VOD service. Although its design is significantly different from other PSPs, it is not intended to replace the PSP 3000, which Sony continued to manufacture, sell, and support. On April 20, 2011, the manufacturer announced that the PSP Go would be discontinued so that they may concentrate on the PlayStation Vita. Sony later said that only the European and Japanese versions were being cut, and that the console would still be available in the US.
Unlike previous PSP models, the PSP Go does not feature a UMD drive, but instead has 16 GB of internal flash memory to store games, video, pictures, and other media. This can be extended by up to 32 GB with the use of a Memory Stick Micro (M2) flash card. Also unlike previous PSP models, the PSP Go's rechargeable battery is not removable or replaceable by the user. The unit is 43% lighter and 56% smaller than the original PSP-1000, and 16% lighter and 35% smaller than the PSP-3000. It has a 3.8" 480 × 272 LCD (compared to the larger 4.3" 480 × 272 pixel LCD on previous PSP models). The screen slides up to reveal the main controls. The overall shape and sliding mechanism are similar to that of Sony's mylo COM-2 internet device.
Pandora
The Pandora is a handheld game console/UMPC/PDA hybrid designed to take advantage of existing open source software and to be a target for home-brew development. It runs a full distribution of Linux, and in functionality is like a small PC with gaming controls. It is developed by OpenPandora, which is made up of former distributors and community members of the GP32 and GP2X handhelds.
OpenPandora began taking pre-orders for one batch of 4000 devices in November 2008 and after manufacturing delays, began shipping to customers on May 21, 2010.
FC-16 Go
The FC-16 Go is a portable Super NES hardware clone manufactured by Yobo Gameware in 2009. It features a 3.5-inch display, two wireless controllers, and CRT cables that allow cartridges to be played on a television screen. Unlike other Super NES clone consoles, it has region tabs that only allow NTSC North American cartridges to be played. Later revisions feature stereo sound output, larger shoulder buttons, and a slightly re-arranged button, power, and A/V output layout.
2010s
Nintendo 3DS
The Nintendo 3DS is the successor to Nintendo's DS handheld. The autostereoscopic device is able to project stereoscopic three-dimensional effects without requirement of active shutter or passive polarized glasses, which are required by most current 3D televisions to display the 3D effect. The 3DS was released in Japan on February 26, 2011; in Europe on March 25, 2011; in North America on March 27, 2011, and in Australia on March 31, 2011. The system features backward compatibility with Nintendo DS series software, including Nintendo DSi software except those that require the Game Boy Advance slot. It also features an online service called the Nintendo eShop, launched on June 6, 2011, in North America and June 7, 2011, in Europe and Japan, which allows owners to download games, demos, applications and information on upcoming film and game releases. On November 24, 2011, a limited edition Legend of Zelda 25th Anniversary 3DS was released that contained a unique Cosmo Black unit decorated with gold Legend of Zelda related imagery, along with a copy of The Legend of Zelda: Ocarina of Time 3D.
There are also other models including the Nintendo 2DS and the New Nintendo 3DS, the latter with a larger (XL/LL) variant, like the original Nintendo 3DS, as well as the New Nintendo 2DS XL.
Xperia Play
The Sony Ericsson Xperia PLAY is a handheld game console smartphone produced by Sony Ericsson under the Xperia smartphone brand. The device runs Android 2.3 Gingerbread and is the first to be part of the PlayStation Certified program, which means that it can play PlayStation Suite games. The device is a horizontally sliding phone whose original form resembles the Xperia X10, while the slide-out section below resembles the slider of the PSP Go. The slider features a D-pad on the left side, the set of standard PlayStation face buttons on the right, a long rectangular touchpad in the middle, start and select buttons on the bottom right corner, a menu button on the bottom left corner, and two shoulder buttons (L and R) on the back of the device. It is powered by a 1 GHz Qualcomm Snapdragon processor with a Qualcomm Adreno 205 GPU, and features a display measuring 4.0 inches (100 mm) at 854 × 480, an 8-megapixel camera, 512 MB of RAM, 8 GB of internal storage, and a micro-USB connector. It supports microSD cards, versus the Memory Stick variants used in PSP consoles. The device was revealed officially for the first time in a Super Bowl ad on Sunday, February 6, 2011. On February 13, 2011, at Mobile World Congress (MWC) 2011, it was announced that the device would ship globally in March 2011, with a launch lineup of around 50 software titles.
PlayStation Vita
The PlayStation Vita is the successor to Sony's PlayStation Portable (PSP) handheld series. It was released in Japan on December 17, 2011, and in Europe, Australia, and North and South America on February 22, 2012.
The handheld includes two analog sticks, a 5-inch (130 mm) OLED/LCD multi-touch capacitive touchscreen, and supports Bluetooth, Wi-Fi and optional 3G. Internally, the PS Vita features a 4 core ARM Cortex-A9 MPCore processor and a 4 core SGX543MP4+ graphics processing unit, as well as LiveArea software as its main user interface, which succeeds the XrossMediaBar.
The device is fully backwards-compatible with PlayStation Portable games digitally released on the PlayStation Network via the PlayStation Store. However, PSone Classics and PS2 titles were not compatible at the time of the primary public release in Japan. The Vita's dual analog sticks will be supported on selected PSP games. The graphics for PSP releases will be up-scaled, with a smoothing filter to reduce pixelation.
On September 20, 2018, Sony announced at Tokyo Game Show 2018 that the Vita would be discontinued in 2019, ending its hardware production. Production of Vita hardware officially ended on March 1, 2019.
Razer Switchblade
The Razer Switchblade was a prototype pocket-sized device, roughly the size of a Nintendo DSi XL, designed to run Windows 7. It featured a multi-touch LCD screen and an adaptive keyboard whose keys changed depending on the game being played, and it was also to feature a full mouse.
It was first unveiled on January 5, 2011, at the Consumer Electronics Show (CES), where it won The Best of CES 2011 People's Voice award. It remained in development with no announced release date, and the device has likely been suspended indefinitely.
Nvidia Shield
Project Shield is a handheld system developed by Nvidia and announced at CES 2013. It runs Android 4.2 and uses the Nvidia Tegra 4 SoC. The hardware includes a 5-inch multitouch screen with support for HD (720p) graphics. The console also allows streaming of games running on a compatible desktop PC or laptop.
The Nvidia Shield Portable received a mixed reception from critics. Generally, reviewers praised the performance of the device but criticized its cost and the lack of worthwhile games. Engadget's review noted the system's "extremely impressive PC gaming", but also that, due to its high price, the device was "a hard sell as a portable game console", especially when compared to similar handhelds on the market. CNET's Eric Franklin stated in his review that "The Nvidia Shield is an extremely well made device, with performance that pretty much obliterates any mobile product before it; but like most new console launches, there is currently a lack of available games worth your time." Eurogamer's comprehensive review provided a detailed account of the device and its features, concluding: "In the here and now, the first-gen Shield Portable is a gloriously niche, luxury product - the most powerful Android system on the market by a clear stretch and possessing a unique link to PC gaming that's seriously impressive in beta form, and can only get better."
Nintendo Switch
The Nintendo Switch is a hybrid console that can either be used in a handheld form, or inserted into a docking station attached to a television to play on a bigger screen. The Switch features two detachable wireless controllers, called Joy-Con, which can be used individually or attached to a grip to provide a traditional gamepad form. A handheld-only revision named Nintendo Switch Lite was released on September 20, 2019.
The Switch Lite had sold about 1.95 million units worldwide by September 30, 2019, only 10 days after its launch.
Evercade
Evercade is a handheld game console developed and manufactured by UK company Blaze Entertainment. It focuses on retrogaming with ROM cartridges that each contain a number of emulated games. Development began in 2018, and the console was released in May 2020, after a few delays. Upon its launch, the console offered 10 game cartridges with a combined total of 122 games.
Arc System Works, Atari, Data East, Interplay Entertainment, Bandai Namco Entertainment and Piko Interactive have released emulated versions of their games for the Evercade. Pre-existing homebrew games have also been re-released for the console by Mega Cat Studios. The Evercade is capable of playing games originally released for the Atari 2600, the Atari 7800, the Atari Lynx, the NES, the SNES, and the Sega Genesis/Mega Drive.
2020s
Analogue Pocket
The Analogue Pocket is an FPGA-based handheld game console designed and manufactured by Analogue, Inc. It is designed to play games made for handhelds of the fourth, fifth, and sixth generations of video game consoles. The console features a design reminiscent of the Game Boy, with additional buttons for the supported platforms. It features a 3.5" 1600x1440 LTPS LCD display, an SD card port, and a link cable port compatible with Game Boy link cables. The Analogue Pocket uses an Altera Cyclone V processor and is compatible with original Game Boy, Game Boy Color, and Game Boy Advance cartridges out of the box. With cartridge adapters (sold separately), the Analogue Pocket can play Game Gear, Neo Geo Pocket, Neo Geo Pocket Color, and Atari Lynx game cartridges. The Analogue Pocket includes an additional FPGA, allowing third-party FPGA development. The Analogue Pocket was released in December 2021.
Steam Deck
The Steam Deck is a handheld computer device, developed by Valve, which runs SteamOS 3.0, a tailored distro of Arch Linux and includes support for Proton, a compatibility layer that allows most Microsoft Windows games to be played on the Linux-based operating system. In terms of hardware, the Deck includes a custom accelerated processing unit (APU) built by AMD based on their Zen 2 and RDNA 2 architectures, with the CPU running a four-core/eight-thread unit and the GPU running on eight compute units with a total estimated performance of 1.6 TFLOPS. Both the CPU and GPU use variable timing frequencies, with the CPU running between 2.4 and 3.5 GHz and the GPU between 1.0 and 1.6 GHz based on current processor needs. Valve stated that the CPU has comparable performance to Ryzen 3000 desktop computer processors and the GPU performance to the Radeon RX 6000 series. The Deck includes 16 GB of LPDDR5 RAM in a quad channel configuration.
Valve revealed the Steam Deck on July 15, 2021, with pre-orders opening the next day. The Deck was expected to ship in December 2021 to the US, Canada, the EU, and the UK, but was delayed to February 2022, with other regions to follow later in 2022. Pre-orders were limited to those with Steam accounts opened before June 2021 to prevent resellers from controlling access to the device. Pre-order reservations made through the Steam storefront on July 16, 2021, briefly crashed the servers due to demand. While initial shipments were still planned for February 2022, Valve told new purchasers that wider availability would come later, with the 64 GB and 256 GB NVMe models due in Q2 2022 and the 512 GB NVMe model by Q3 2022. The Steam Deck was released on February 25, 2022.
List of handheld consoles
See also
Comparison of handheld game consoles
List of handheld game consoles
Video game console emulator
Handheld electronic game
Handheld television
Linux gaming
Cloud gaming
Mobile game
References
Video game terminology
Handheld game consoles |
23515402 | https://en.wikipedia.org/wiki/Francis%20Hacker | Francis Hacker | Colonel Francis Hacker (died 19 October 1660) was an English soldier who fought for Parliament during the English Civil War and one of the Regicides of King Charles I of England.
Biography
Hacker was the third son of Francis Hacker of East Bridgford and Colston Basset, Nottinghamshire, by Margaret, daughter of Walter Whalley of Cotgrave. From the outbreak of the English Civil War Hacker vehemently supported the Parliamentary cause, though the rest of his family seem to have been royalists. On 10 July 1644 he was appointed to the militia committee for the county of Leicester, the scene of most of his exploits during the Civil War. On 27 November 1643 he and several others of the Leicestershire committee had been surprised and taken prisoner at Melton Mowbray by Gervase Lucas, the Royalist governor of Belvoir Castle; a month later Parliament ordered that he should be exchanged for Colonel Sands.
At the capture of Leicester by the king in May 1645 Hacker, who distinguished himself in the defence, was again taken prisoner. Hacker was nevertheless attacked for his conduct during the defence, but he was warmly defended in a pamphlet published by the Leicester committee. His services are there enumerated at length, and special commendation is bestowed on his conduct at the taking of Bagworth House and his defeat of the enemy at Belvoir, where he was in command of the Leicester, Nottingham, and Derby horse (cavalry). Hacker is further credited with having freely given "all the prizes that ever he took" to the state and to his soldiers, and with having, while prisoner at Belvoir, refused with scorn an offer of "pardon and the command of a regiment of horse to change his side". "At the king's taking of Leicester", the pamphleteer proceeds, he "was so much prized by the enemy as they offered him the command of a choice regiment of horse to serve the king". At the defeat of the Royalists at the Battle of Willoughby Field in Nottinghamshire (5 July 1648) Hacker commanded the left wing of the Parliamentary forces.
During the trial of Charles I, Hacker was one of the officers specially charged with the custody of the King, and usually commanded the guard of halberdiers which escorted Charles to and from Westminster Hall. He was one of the three officers to whom the warrant for the King's execution was addressed, was present himself on the scaffold, supervised the execution, and signed the order to the executioner. According to Herbert he treated the King respectfully.
Hacker commanded a regiment under Cromwell during the Invasion of Scotland. Cromwell wrote to Hacker, 25 December 1650, rebuking him for slightingly describing one of his subalterns as a better preacher than fighter, and telling him that he expected him and all the chief officers of the army to encourage preaching. Hacker was a religious man, but a strict Presbyterian and a persecutor of the Quakers. He confessed shortly before his death "that he had formerly born too great a prejudice in his heart towards the good people of God that differed from him in judgement". While Cromwell lived he was a staunch supporter of the Protectorate, arrested Lord Grey in February 1655, and was employed in the following year to suppress the intrigues of the Cavaliers and Fifth Monarchists in Leicestershire and Nottinghamshire. In Richard Cromwell's Parliament Hacker represented Leicestershire, but was a silent member. "All that have known me", he said at his execution, "in my best estate have not known me to have been a man of oratory, and God hath not given me the gift of utterance as to others".
During the Second Commonwealth (the unstable period preceding the Restoration) he generally followed the leadership of his neighbour Sir Arthur Haslerig, whose "creature" he was (as Mrs. Hutchinson terms him). By Haslerig's persuasion he, first of all the colonels of the army, accepted a new commission from the hands of the speaker of the restored Long Parliament, and was among the first to own the supremacy of the civil power over the army. He opposed the mutinous petitions of Lambert's partisans in September 1659, and, after they had expelled the Rump Parliament from Westminster, entered into communication with Hutchinson and Haslerig for armed opposition.
After the triumph of the Rump he was again confirmed in the command of his regiment, and seems to have been still in the army when the Restoration took place. On 5 July 1660 he was arrested and sent to the Tower of London, and his regiment was given to Lord Hawley. The House of Commons did not at first except him from the Indemnity and Oblivion Act, but during the debates upon it in the Lords the fact came out that the warrant for the execution of the King had been in Hacker's possession. The Lords desired to use it as evidence against the regicides, and ordered him to produce it. Mrs. Hacker was sent to fetch it, and, in the hope of saving her husband, delivered up the strongest testimony against himself and his associates. The next day (1 August 1660) the Lords added Hacker's name to the list of those excepted, and a fortnight later (13 August) the House of Commons accepted this amendment.
Hacker's trial took place on 15 October 1660. He made no serious attempt to defend himself: "I have no more to say for myself but that I was a soldier, and under command, and what I did was by the commission you have read". The particulars of the share Colonel Hacker had in the trial and execution were related by Colonel Tomlinson at Hacker's trial:
Colonel Tomlinson further deposed, "that Colonel Hacker led the King forth on the day of his execution, followed by the bishop of London, and was there in prosecution of that warrant, and upon the same their orders were at an end". This evidence of Tomlinson was corroborated by Colonel Huncks, who stated that:
Hacker was sentenced to death, and was hanged at Tyburn on 19 October 1660. His body, instead of being quartered, was given to his friends for burial, and is said to have been interred in the church of St. Nicholas Cole Abbey, London, the advowson of which was at one time vested in the Hacker family. As with all convicted traitors, his property was forfeited to the Crown. His estate passed to the Duke of York, but was bought back by Rowland Hacker, and was still in the possession of the Hacker family in 1890.
Notes
References
Attribution
Further reading
1660 deaths
English soldiers
Executed regicides of Charles I
Roundheads
Executed English people
People executed under the Stuarts for treason against England
Year of birth unknown
People executed by the Kingdom of England by hanging
People executed at Tyburn |
29048960 | https://en.wikipedia.org/wiki/Greenpois0n | Greenpois0n | Greenpois0n is a name shared by a series of iOS jailbreaking tools developed by Chronic Dev Team (sometimes called the Greenpois0n team) that use exploits to remove software restrictions on iPhones, iPads, iPod touches, and Apple TVs. Greenpois0n's initial release in October 2010 jailbroke iOS 4.1, and its second version in February 2011 jailbroke iOS 4.2.1 as well as iOS 4.2.6 on CDMA iPhones. The second generation of the tool, Greenpois0n Absinthe, was developed with iPhone Dev Team members and jailbroke iOS 5.0.1 in January 2012 (providing the first jailbreak of the iPhone 4S), and a second version jailbroke iOS 5.1.1 in May 2012 (providing the first jailbreak of the third generation iPad).
Jailbreaking enables root access to the iOS operating system, allowing the installation of applications and customizations that are unavailable through the official App Store for iOS. Jailbreaking voids the device's warranty, and Apple releases iOS updates to make jailbreaking more difficult.
Greenpois0n for iOS 4
On October 12, 2010, Chronic Dev Team released Greenpois0n, a desktop-based tool for jailbreaking iOS 4.1 on iPhone 4, iPhone 3GS, iPod touch third and fourth generation, and iPad 1. During its development, Apple released the second generation Apple TV, and Greenpois0n's developers reported that it could jailbreak the Apple TV as well. The developers announced plans to release it on October 10, but after news spread of another jailbreak developer, George Hotz, preparing to release a jailbreaking tool called limera1n that would perform a similar function with a different exploit, the Greenpois0n developers delayed in order to integrate the limera1n exploit, which supported more devices. Using limera1n also meant that the original Greenpois0n exploit (SHAtter) could be saved for use in later jailbreaks. Both SHAtter and limera1n are boot ROM exploits, which means they cannot be patched by iOS updates because boot ROM code is embedded in iOS devices during manufacturing.
In February 2011, Chronic Dev Team released a new version of Greenpois0n to jailbreak iOS 4.2.1 and to jailbreak iOS 4.2.6 on CDMA (Verizon) iPhone 4, with desktop-based tools for OS X, Microsoft Windows, and Linux. It provides an "untethered" jailbreak, which means that the jailbroken device can be rebooted without computer assistance. It supports iPad, iPhone, iPod touch, and Apple TV. Chronic Dev Team announced support for the newly released CDMA iPhone 4 before the devices were in stores.
Chronic Dev Team
As of late 2011, Joshua Hill was described as a "head honcho" and principal of the Chronic Dev Team. Other members in early 2012 included Cyril and Nikias Bassen.
Greenpois0n Absinthe for iOS 5
Developers from Chronic Dev Team and iPhone Dev Team released Greenpois0n Absinthe (sometimes simply known as just "Absinthe") in January 2012, a desktop-based tool (for OS X, Microsoft Windows, and Linux) to jailbreak the iPhone 4S for the first time and the iPad 2 for the second time, on iOS 5.0.1 for both devices and also iOS 5.0 for iPhone 4S. Absinthe provides an "untethered" jailbreak, which means that the patched device can be rebooted directly into a jailbroken state without computer assistance (or, as with a semi-untethered jailbreak, without requiring an application to be launched on the device following startup, in order to reactivate the jailbreak exploit). It incorporated the untether exploit called Corona that pod2g had released in December for older iOS devices. The Next Web said that the jailbreak took a long time to be released, and VentureBeat said Absinthe wasn't as easy to use as the earlier jailbreaking tool JailbreakMe. According to iPhone Dev Team, approximately one million devices were newly jailbroken in the three days after Absinthe's release. The developers called their joint effort the Jailbreak Dream Team, which Apple credited in its document listing security patches in the subsequent version of iOS.
In May 2012, developers from Chronic Dev Team and iPhone Dev Team released Absinthe 2.0 (for OS X, Windows, and Linux), which can jailbreak iOS 5.1.1 untethered on all iPhone, iPad, and iPod touch models that support iOS 5.1.1, including jailbreaking the third generation iPad for the first time. They announced it at the Hack In The Box security conference in Amsterdam at the end of a presentation about the earlier Absinthe jailbreak, and it did not initially support a recently released model of iPad 2. According to Chronic Dev Team, approximately one million devices were jailbroken over the weekend after its Friday release. PC World noted that devices jailbroken with tools such as Absinthe 2.0 can be a security concern for companies that have "bring your own device" policies.
References
External links
Greenpois0n code on GitHub
Conference presentation by "Dream Team" members on technical details of Absinthe for iOS 5.0.1
Homebrew software
IOS software
IOS jailbreaks
Free software |
532303 | https://en.wikipedia.org/wiki/Topcoder | Topcoder | Topcoder (formerly TopCoder) is a crowdsourcing company with an open global community of designers, developers, data scientists, and competitive programmers. Topcoder pays community members for their work on the projects and sells community services to corporate, mid-size, and small-business clients. Topcoder also organizes the annual Topcoder Open tournament and a series of smaller regional events.
History
Topcoder was founded in 2001 by Jack Hughes, chairman and founder of the Tallan company. The name was spelt "TopCoder" until 2013. Topcoder ran regular competitive programming challenges, known as Single Round Matches or "SRMs", where each SRM was a timed 1.5-hour algorithm competition in which contestants competed against each other to solve the same set of problems. The contestants were students from different secondary schools or universities. Cash prizes ranging from $5,000 to $10,000 per match were secured from corporate sponsors and awarded to tournament winners to generate interest from the student community.
As the community of designers, developers, data scientists, and competitive programmers involved in Topcoder grew, the company started to offer software development services to 3rd party clients, contracting individual community members to work on specific tasks. Most of the revenue, though, still came from consulting services provided to clients by Topcoder employees. From 2006 onwards, Topcoder held design competitions, thus offering design services to their clients. In 2006 Topcoder also started to organize Marathon Matches (MM) – one week long algorithmic contests.
In an attempt to optimize expenses, Topcoder introduced new competition tracks in 2007-2008 and delegated more work from its employees to the community. By 2009, the size of Topcoder's staff had been reduced to 16 project managers servicing 35 clients, while the community did most of the actual work via crowdsourcing. Topcoder representatives claim that at this point their community had about 170k registered members, and the company's annual revenue was approximately $19 million.
In 2013, Topcoder was acquired by Appirio, and the Topcoder community (of around 500 thousand members at the time) was merged, under the Topcoder brand, with the 75-thousand-member crowdsourcing community CloudSpokes, created and managed by Appirio.
In 2016, Topcoder, along with Appirio, was acquired by Wipro as a part of a $500 million deal and continued to operate as a separate company under its brand.
Since the end of 2017, Topcoder has continued to offer its enterprise clients the Hybrid Crowd platform, as a way to protect intellectual property in crowdsourcing projects. In addition to the public Topcoder community, the Hybrid Crowd platform allows for the creation of certified and private crowdsourcing communities. Its certified communities include members of public Topcoder communities who are vetted for a customer's specific requirements, such as signing an additional NDA, completing a background check, or meeting any other particular certifications. The private communities may include an enterprise's employees and contractors. As the first user of Hybrid Crowd, Wipro integrated its internal (employee-only) crowdsourcing platform TopGear with Topcoder.
Topcoder community
The Topcoder community is the primary source of the workforce behind all Topcoder projects. It is open and global: anybody can join and compete without any financial commitment to Topcoder, subject to a few legal restrictions dictated by US law and listed in the Community Terms. Participation in challenges organized in the interests of commercial clients generally requires the community member to sign a non-disclosure agreement. Intellectual property for the winning submissions to commercial challenges is passed to the client, in exchange for monetary prizes paid to the winners.
While the majority of community members participate in Topcoder challenges as regular competitors, those who become recognized for their performance and involvement in community life (via communication in Topcoder forums, attendance at Topcoder events, etc.) are offered additional roles in the community, including copilots (technical coordinators of challenges), problem writers, and reviewers. From the end of 2014 until the end of 2017, a Community Advisory Board (CAB) was selected from active community members for a one-year term to help improve communications between the Topcoder company and its community. In 2018 the CAB was replaced by the Topcoder MVP (Most Valuable Player) program.
There are four primary segments of each Topcoder community, open to every member: Design, Development, Data Science, and Competitive Programming. Also, since the end of 2017, Topcoder, as a part of their Hybrid Crowd offering, creates sub-communities dedicated to specific clients/projects. The sub-communities may require members to meet additional eligibility criteria before joining.
Design
Topcoder design community is focused on:
Information Architecture
Wireframes – With customer ideas and application and business requirements as input, competitors are challenged to create a black-and-white interactive user experience guide that showcases the logic and user experience of the future application, without spending time on the exact look and feel.
Idea Generation – Competitors are asked to develop an idea proposed by the customer, with a written report or visual presentation as deliverables.
UI/UX/CX Design
Applications and Web Design – Competitors develop graphical designs for customer application or website; the deliverables are the actual design specifications (graphical images with associated measurements, font details, etc.) for software developers.
Design Concept – More informal design challenges, where participants should turn client idea into a design, which is not meant to be used for the actual development without further processing.
Icons design
Presentation Design – Infographics, print materials, PowerPoint presentations.
Two particular types of Topcoder design challenges are LUX (Live User Experience, 24–48 hours long) and RUX (Rapid User Experience, three days long). In both cases, larger prizes than in regular design challenges with similar goals are offered in exchange for the shorter timeline. Short timelines allow Topcoder managers to demonstrate to customers how crowdsourcing works on real cases during live, few-day meetings with clients.
Development
Software development segment of Topcoder community is focused on:
Bug Bash – Challenges concentrate on fixes of numerous small bugs in an existing software product.
Code – Generic software development challenges, typically with a five-day competition phase and four more days for review, appeals, and appeal responses. Usually, two prizes are offered: roughly $600–$1200 for the winner, and half of that for second place.
First-to-Finish (F2F) – Rapid software development challenges with no fixed timeline for the competition phase. The first participant who submits a solution satisfying the specifications wins the only prize. In case of defects in a submission, that competitor is provided with review feedback as soon as possible, and allowed to submit again, with no penalty for the failed submission. Typically, such challenges have a small scope, compared to other challenge types.
Quality Assurance – Challenges focused on testing and search for bugs in the provided software products.
UI Prototype – Challenges focused on frontend development. Typically, they are reviewed by scorecards paying more attention to the exact match with provided visual design specifications, and include additional phases for final fixes, compared to the regular code challenges.
Data science
There are several types of data science challenges at Topcoder; typically, they are longer than software development challenges and focused on data science and algorithms, rather than on end-user software products:
Marathon Match (MM) – A week-long algorithmic contest in which submissions are judged objectively by an automated scoring function that feeds a live leaderboard; multiple submissions from the same competitor are encouraged during the match with no penalty. Programming languages allowed in MMs are C++, Java, Python, C#.NET, and VB.NET. Topcoder has organized Marathon Matches since 2006, and the 100th MM was held in April 2018. There are a few similar types of challenges (Banner Match, Mini-Marathon Match), differing in length and allowed programming languages.
Data Science First to Finish – Algorithmic contests scored by an automated scoring function, where the first competitor that reaches the specified score thresholds wins.
Data Science Sprint – A series of rapid data-science challenges, scored by a manual scoring function, and with no leaderboard.
Data Visualization – Subjectively-judged competition that asks to analyze data and propose the best way to visualize them, along with trends and/or peculiarities in data that should be highlighted. The output of such challenges serves as input into design competition that outputs the actual visualizations of the data.
Data Science Ideation – A challenge to discover new data/approaches/ideas for a problem with the help of a community.
Competitive programming
The Competitive Programming track of Topcoder community rotates around Single Round Matches (SRMs) – timed 1.5-hour competitions in which all participants compete online trying to solve the same set of problems as fast as possible. These were the first type of challenges at Topcoder.
Specialized sub-communities
The following table includes the list of Topcoder sub-communities dedicated to specific technologies and/or clients (within their Hybrid Crowd offering). See section for further information on these sub-communities.
Topcoder Open
Topcoder Open (TCO) is an annual design, software development, data science, and competitive programming championship organized by Topcoder and hosted at different venues around the US. Each year, the most successful participants of each competition track included in the TCO are selected and invited on a free one-week trip to the on-site finals, where they compete for prizes and socialize with each other, helping to build community spirit among the most active members. In the first two years, 2001 and 2002, the tournament was titled the TopCoder Invitational.
In addition to the main championship, from 2001 to 2007 Topcoder organized an annual TopCoder Collegiate Challenge tournament, for college students only. Also from 2007 to 2010, a TopCoder High School competition was held.
Since 2015, Topcoder Regional events have been held throughout the year in different countries.
Notable clients and projects
ConsenSys
In 2017, Topcoder entered into a partnership with ConsenSys, an incubator of Ethereum projects, to promote the Topcoder Blockchain Community, and provide ConsenSys with design and development support for their blockchain projects.
Eli Lilly and Company
It was reported in 2008 that Eli Lilly and Co. would use Topcoder platform to crowdsource development of IT applications for its global drug discovery operations.
Harvard Medical School
In 2013, it was reported that researchers from Harvard Medical School, Harvard Business School, and London Business School successfully used the Topcoder community to solve complex biological problems. The researchers said that Topcoder competitors approached the biology-related big-data challenge and managed to create a more accurate and 1,000 times faster alternative to the BLAST algorithm.
IARPA
The Intelligence Advanced Research Projects Activity (IARPA) collaborates with Topcoder to create innovative algorithms for intelligence applications. From July 2017 to February 2018 it ran the Functional Map of the World challenge to develop deep learning algorithms capable of scanning satellite imagery and identifying different classes of objects, such as airports, schools, oil wells, shipyards, or ports. In the ongoing Mercury challenge it aims to create AI methods for automated prediction of critical events involving military action, non-violent civil unrest, and infectious diseases in the Middle East.
IBM
Since 2016 IBM has been collaborating with Topcoder to promote their cloud platform, IBM Cloud, and IBM Watson services, in particular. Within this partnership, Topcoder has created a dedicated Cognitive sub-community and run numerous educational and customer-oriented challenges.
NASA
In 2010, NASA asked the Topcoder community to optimize the contents of medical kits for future human space exploration missions.
In 2013, NASA Tournament Lab cooperated with Topcoder to run data-science challenges targeting to improve computer vision algorithms for their Robonaut 2 humanoid robot; in another challenge, Topcoder members were asked to develop algorithms for optimization of ISS solar arrays usage. Also in 2013 Topcoder helped NASA to develop a software solution for tracking food consumption by astronauts.
In another challenge, Topcoder community helped NASA and National Geographic's explorer Albert Lin to develop an algorithm to identify human-built structures in Genghis Khan's homeland.
In 2014, Asteroid Data Hunter, Asteroid Tracker, and many other challenges were carried on to develop better algorithms for asteroids detection in space images.
In 2015, the Topcoder Data Science community was challenged by NASA, Quakefinder, Harvard Crowd Innovation Lab, and Amazon Web Services, to come up with an algorithm that finds correlations between ultra-low frequency electromagnetic signals emanating from the earth, and subsequent moderate and large earthquakes.
In 2017, NASA, HeroX, and Topcoder announced a challenge to optimize NASA's computationally intensive fluid dynamics software, FUN3D. The challenge was later cancelled due to the high number of applicants (more than 1,800) during registration, coupled with concerns about control over public distribution of the software being optimized.
In 2018, a data science challenge was run to develop better algorithms for tracking RFID-tagged items within the International Space Station.
Topcoder Veterans Community
At the end of 2017 Topcoder, together with Operation Code non-profit charity, announced the launch of Topcoder Veterans Community, that will focus on helping US military veterans to make their way into tech careers in software development via education programs and paid crowdsourcing challenges.
See also
ACM International Collegiate Programming Contest
CodeSignal
Codeforces
Facebook Hacker Cup
Google Code Jam
HackerRank
ICFP Programming Contest
Internet Problem Solving Contest
Kaggle
Online judge
SPOJ
UVa Online Judge
Notes
References
External links
Wipro
Companies established in 2001
Programming contests |
331070 | https://en.wikipedia.org/wiki/EuroLinux | EuroLinux |
EuroLinux is a campaigning organisation that promotes open-source and free software in Europe and opposes the European Union's proposals to introduce laws on software patents. It is also known as the EuroLinux Alliance.
It is not the umbrella organisation for Linux User Groups in Europe.
It describes itself as: "The EuroLinux Alliance for a Free Information Infrastructure is an open coalition of commercial companies and non-profit associations united to promote and protect a vigorous European Software Culture based on copyright, open standards, open competition and open source software such as Linux. Corporate members or sponsors of EuroLinux develop or sell software under free, semi-free and non-free licenses for operating systems such as Linux, Mac OS or Microsoft Windows."
EuroLinux organised a public EU petition against software patents that was signed by more than 300,000 people. Members include FFII, APRIL, AFUL, AEL, and European Linux user group (LUG) umbrella associations.
See also
European Information, Communications and Consumer Electronics Technology Industry Associations (EICTA)
External links
Web site
Old petition for a software patent-free Europe (archive.org copy)
Linux organizations |
65171074 | https://en.wikipedia.org/wiki/Boxy%20SVG | Boxy SVG | Boxy SVG is a vector graphics editor for creating illustrations, as well as logos, icons, and other elements of graphic design. It is primarily focused on editing drawings in the SVG file format. The program is available as both a web app and a desktop application for Windows, macOS, Chrome OS, and Linux-based operating systems.
History
Boxy SVG was originally designed for macOS and written in both Objective-C and CoffeeScript. The first version was published on 2013-03-15 on the Mac App Store.
The second version, released on 2014-08-01, was a complete rewrite in JavaScript and Electron to make the application work as both a web app in a browser and a regular desktop application.
The third major release (2017-06-06) introduced a new user interface based on Xel, an HTML5 widget toolkit.
Afterwards, the developers switched to a shorter release cycle, with new versions rolled out every 1 or 2 months.
Platforms and requirements
Boxy SVG is available on multiple platforms.
Devices support
Boxy SVG is compatible with Apple desktop computers and laptops, touchscreen-based devices such as Google Pixelbook and Microsoft Surface.
The program is partially compatible with mobile devices running Android: the device must be able to run the latest stable version of Google Chrome, and files can only be saved to cloud storage. It is not compatible with Apple mobile devices such as the iPad because of the dependency on the Chromium engine.
The program also has basic support for graphics tablets such as those manufactured by Wacom.
Compatibility
The program uses SVG and SVGZ (the zlib-compressed version) as its native file formats. Some elements are placed in the program's own namespace to either extend the feature set beyond what is available in the W3C SVG specification or provide a convenience layer for low-level details. Boxy SVG can also open SVG files authored with Inkscape and Adobe Illustrator, although all software-specific elements and attributes are dropped.
The application is based on the Electron framework and thus supports the same subset of the SVG format as Chromium-based web browsers such as Google Chrome, Microsoft Edge, and Opera. A major exception is the lack of support for animation.
Boxy SVG reads and writes PNG, JPEG, WebP, GIF, and PDF files, and reads Adobe Illustrator documents saved with the PDF compatibility mode on. Additionally, it can export HTML files.
Features
Markup inspection: The XML code of the SVG document can be viewed and edited directly in the Elements panel.
Objects manipulation: General transformations such as moving, rotating, scaling, and skewing can be performed right on the canvas. Gradient and pattern fills can be customized using on-canvas handles.
Shapes: In addition to tools for drawing basic geometric shapes, such as rectangles and ellipses, the program features tools for drawing procedural shapes like cogwheels and crosses. This is done by using a custom namespace to extend the SVG specification. All shapes can be edited directly on the canvas. Additionally, numeric control over size, position, and other aspects of objects is available in the Geometry panel.
Path drawing tools: The program has dedicated tools for drawing quadratic (2nd order) and cubic (3rd order) splines, as well as an Arc tool to draw consecutive arcs in a single Bézier curve.
Reusable items: Boxy SVG can save colors, gradients, and patterns in the <defs> section of the SVG document so that multiple objects would be able to use the same fill definition and automatically update their look once that definition changes. The same principle applies to more elements like filters, markers, and fonts.
Filters: The program has full support for SVG filter effects. It ships with a number of predefined filters such as Drop Shadow or Hue Rotation. New filters can be created with a graph-based filter designer.
Bitmap tracing: Boxy SVG provides a Vectorize generator to trace bitmaps into Bézier curves with color fills depending on user-defined color quantization settings.
Asset libraries: The program allows using fonts from Google Fonts, clip art and photos from Pixabay, and color swatches from the online service called Color Hunt.
Licensing
Boxy SVG is proprietary software. The web version is available under the subscription model with an option for team licensing. Desktop apps for Windows, macOS, and Chrome OS are distributed under a perpetual license. The version for Linux is free and has all features of its macOS and Windows counterparts.
See also
Comparison of vector graphics editors
Scalable Vector Graphics
References
External links
Official website
Official video tutorials
MacOS graphics software
Windows graphics-related software
Vector graphics editors
Vector graphics editors for Linux |
4441677 | https://en.wikipedia.org/wiki/Apple%20II%20system%20clocks | Apple II system clocks | Apple II system clocks, also known as real-time clocks, were devices in the early years of microcomputing. A clock/calendar did not become standard in the Apple II line of computers until 1986 with the introduction of the Apple IIGS. Although many productivity programs as well as the ProDOS operating system implemented time and date functions, users would have to manually enter this information every time they turned the computer on. Power users often had their Apple II's peripheral slots completely filled with expansion cards, so third party vendors came up with alternative approaches with products like the Serial Pro and No-Slot Clock.
No-Slot Clock (Dallas Semiconductor)
The No-Slot Clock, also known as the Dallas Smartwatch (DS1216E), was a 28-pin chip-like device that could be used directly in any Apple II or Apple II compatible with a 28-pin ROM. Dallas Semiconductor produced the device as an easy implementation for a real-time clock for a variety of applications. The clock was powered by an embedded lithium battery, electrically disconnected until power was first applied to retain freshness. The non-replaceable battery had a life expectancy of 10 years.
In an Apple II, the No-Slot Clock resided under any 28-pin ROM chip, including one on a peripheral card. A user had to remove the ROM from its socket, insert the No-Slot Clock, and then reinsert the ROM chip into the top of the No-Slot Clock. The No-Slot Clock was both ProDOS and DOS 3.3 compatible; however, a software driver had to be patched into ProDOS or integrated into the applicable DOS 3.3 program. Once the driver was installed, it emulated the Thunderclock. The No-Slot Clock was usually installed in the following locations on the motherboard in the following computers:
Apple IIe: under the CD ROM (or CF ROM in later models)
Apple IIc: under the Monitor ROM
Apple IIc+: under the Monitor ROM
Laser 128: under the ROM behind the metal cover on the bottom
Serial Pro (Applied Engineering)
The Serial Pro was a multifunction serial interface and clock/calendar card from Applied Engineering. By combining the functions of two cards into one, the Serial Pro freed up an extra slot for those with highly populated machines. This card was unique in the sense that it did not use "Phantom Slots" to achieve this functionality. Previous multifunction cards required that a secondary function be "mapped" to a different slot in the computer's memory, rendering that slot unusable. The card was capable of a 12‑ and 24‑hour clock format, was both ProDOS and DOS 3.3 compatible, and had on-screen time and date setting built into its ROM, eliminating the need to run a program in order to set the time. The battery was a GE DataSentry rechargeable Ni-cad battery which had a lifespan rating of 20 years. The card retailed for $139 during the late 1980s.
For more on the Serial Pro's communication capabilities, see its entry in Apple II serial cards.
Thunderclock Plus (Thunderware Incorporated)
When the Thunderware Thunderclock Plus was released in 1980, it quickly became the de facto standard for an Apple II system clock. When Apple Computer released its new ProDOS operating system in 1984, a Thunderclock software driver came built-in. From that point on, all new Apple II system clocks strived to emulate the Thunderclock. The card itself was more compact than the earlier "The Clock" from Mountain Computers and contained two battery holders for off the shelf alkaline batteries which were easily replaceable.
Time Master H.O. (Applied Engineering)
The Time Master H.O. clock card from Applied Engineering was possibly the most advanced system clock ever designed for any Apple II. The card utilized an onboard VIA 6522 and was capable of emulating all other system clocks which preceded it. The Timemaster H.O. was powered by a GE Datasentry rechargeable Ni-cad battery which had a lifespan rating of 20 years. It was capable of 24‑hour format or 12‑hour with AM/PM format, millisecond timekeeping with an accuracy of 0.00005%, and an onboard timer which could time down any interval up to 48 days. It also maintained an internal calendar, separate of the 7‑year cycle which ProDOS mapped. The Timemaster H.O. was 100% ProDOS and DOS 3.3 compatible.
The "H.O." in Timemaster H.O. stood for "High Output". This referred to the 8-pin Digital I/O port on the card for advanced applications. Through this port, one could hook up Applied Engineering's BSR X-10 interface and "command console" to remotely control lights and electrical appliances. The BSR system could send signals over existing 120‑volt wiring, eliminating the need for additional wires. The system could also be used for low‑voltage implementations. The Timemaster H.O. retailed for $99 during the late 1980s while the BSR option cost an additional $29. The command console cost $39.
Other system clocks
AppleClock (Mountain Computer)
California Computer Systems Clock (California Computer Systems)
CPS Multifunction Card (Mountain Computer)
The Clock (Mountain Computer)
Timemaster II H.O. (Applied Engineering)
Hayes Stack Chronograph (Hayes Microcomputer Products)
Time II (Applied Engineering)
VersaCard (Prometheus Products)
The Cricket! (Street Electronics)
Clockworks (Micro Systems Research)
References
See also
Apple II peripheral cards
Clocks
Computer real-time clocks |
26792590 | https://en.wikipedia.org/wiki/1946%20Rose%20Bowl | 1946 Rose Bowl | The 1946 Rose Bowl was the 32nd edition of the college football bowl game, played at the Rose Bowl in Pasadena, California, on Tuesday, January 1.
The game matched the undefeated Crimson Tide of the University of Alabama of the Southeastern Conference (SEC) and the #7 Trojans of the University of Southern California of the Pacific Coast Conference (PCC). The Tide defeated the underdog Trojans 34–14. It was Alabama’s sixth and final trip to the Rose Bowl until their College Football Playoff semifinal appearance in 2021 and Frank Thomas' final bowl trip as head coach.
Game summary
Alabama, known as the "wooden horse", led 20–0 at the half, while the Trojans had a net loss of 24 yards. USC, which had won eight straight Rose Bowl games since 1923, did not make a first down until the third quarter, when the score was 27–0.
Alabama outgained USC 351 to 41 yards. Quarterback Harry Gilmer threw only eleven times in the game for one touchdown and ran for 116 yards on 16 carries. Hal Self scored twice, on a one-yard run and on a 24-yard Gilmer pass. Gilmer went over from the one, Lowell Tew hit left guard from the two for points, and Norwood Hodges scored up the middle on a one-yard plunge. Hugh Morrow kicked four extra points in the game.
Scoring
First quarter
Ala – Hal Self, 1-yard run (Hugh Morrow kick good)
Second quarter
Ala – Self, 1-yard run (Morrow kick good)
Ala – Lowell Tew, 5-yard run (Morrow kick missed)
Third quarter
Ala – Norwood Hodges, 1-yard run (Morrow kick good)
Fourth quarter
Ala – Harry Gilmer, 20-yard pass from Self (Morrow kick good)
USC – Harry Adelman, 26-yard pass from Verl Lillywhite (Joe Bowman kick good)
USC – Chick Clark returned a blocked kick (Lillywhite kick good)
Aftermath
Following this game, the PCC and Big Nine (now the Pac-12 and Big Ten) entered into an exclusive five-year agreement for their champions to meet in the Rose Bowl. It has been extended numerous times and, outside of rotations in the playoffs, it continues. Both conferences openly admitted to restricting the Rose Bowl because they were tired of getting beaten by teams playing "hillbilly ball," the same reason they had cited for not inviting such teams before 1920.
This was the last Rose Bowl appearance by an SEC team for 72 years, until Georgia defeated Oklahoma in a national semifinal in early 2018. The first break in the Pac-12/Big Ten arrangement came in 2002, when the Rose Bowl served as the BCS Championship Game between Nebraska and Miami.
See also
Dissatisfaction with distribution of tickets
References
1945–46 NCAA football bowl games
1946
1946
1946
1946 in sports in California
January 1946 sports events |
26246088 | https://en.wikipedia.org/wiki/IEEE%201394 | IEEE 1394 | IEEE 1394 is an interface standard for a serial bus for high-speed communications and isochronous real-time data transfer. It was developed in the late 1980s and early 1990s by Apple in cooperation with a number of companies, primarily Sony and Panasonic. Apple called the interface FireWire. It is also known by the brand names i.LINK (Sony), and Lynx (Texas Instruments).
The copper cable used in its most common implementation can be up to 4.5 metres (15 ft) long. Power and data are carried over this cable, allowing devices with moderate power requirements to operate without a separate power supply. FireWire is also available in Cat 5 and optical fiber versions.
The 1394 interface is comparable to USB. USB was developed subsequently and gained much greater market share. USB requires a master controller whereas IEEE 1394 is cooperatively managed by the connected devices.
History and development
FireWire is Apple's name for the IEEE 1394 High Speed Serial Bus. Its development was initiated by Apple in 1986, and developed by the IEEE P1394 Working Group, largely driven by contributions from Sony (102 patents), Apple (58 patents), and Panasonic (46 patents), in addition to contributions made by engineers from Philips, LG Electronics, Toshiba, Hitachi, Canon, INMOS/SGS Thomson (now STMicroelectronics), and Texas Instruments.
IEEE 1394 is a serial bus architecture for high-speed data transfer. FireWire is a serial bus, meaning that information is transferred one bit at a time. Parallel buses utilize a number of different physical connections, and as such are usually more costly and typically heavier. IEEE 1394 fully supports both isochronous and asynchronous applications.
Apple intended FireWire to be a serial replacement for the parallel SCSI bus, while providing connectivity for digital audio and video equipment. Apple's development began in the late 1980s, later presented to the IEEE, and was completed in January 1995. In 2007, IEEE 1394 was a composite of four documents: the original IEEE Std. 1394–1995, the IEEE Std. 1394a-2000 amendment, the IEEE Std. 1394b-2002 amendment, and the IEEE Std. 1394c-2006 amendment. On June 12, 2008, all these amendments as well as errata and some technical updates were incorporated into a superseding standard, IEEE Std. 1394–2008.
Apple first included onboard FireWire in some of its 1999 Macintosh models (though it had been a build-to-order option on some models since 1997), and most Apple Macintosh computers manufactured in the years 2000 through 2011 included FireWire ports. However, in February 2011 Apple introduced the first commercially available computer with Thunderbolt. Apple released its last computers with FireWire in 2012. By 2014, Thunderbolt had become a standard feature across Apple's entire line of computers (later with the exception of the 12-inch MacBook introduced in 2015, which featured only a sole USB-C port) effectively becoming the spiritual successor to FireWire in the Apple ecosystem. Apple's last products with FireWire, the Thunderbolt Display and 2012 13-inch MacBook Pro, were discontinued in 2016. Apple still sells a Thunderbolt to FireWire Adapter, which provides one FireWire 800 port. A separate adapter is required to use it with Thunderbolt 3.
Sony's implementation of the system, i.LINK, used a smaller connector with only four signal conductors, omitting the two conductors that provide power for devices in favor of a separate power connector. This style was later added into the 1394a amendment. This port is sometimes labeled S100 or S400 to indicate speed in Mbit/s.
The system was commonly used to connect data storage devices and DV (digital video) cameras, but was also popular in industrial systems for machine vision and professional audio systems. Many users preferred it over the more common USB 2.0 for its then greater effective speed and power distribution capabilities. Benchmarks show that the sustained data transfer rates are higher for FireWire than for USB 2.0, but lower than for USB 3.0. The difference is most marked on Apple Mac OS X, with more varied results on Microsoft Windows.
Patent considerations
Implementation of IEEE 1394 is said to require use of 261 issued international patents held by 10 corporations. Use of these patents requires licensing; use without license generally constitutes patent infringement. Companies holding IEEE 1394 IP formed a patent pool with MPEG LA, LLC as the license administrator, to whom they licensed patents. MPEG LA sublicenses these patents to providers of equipment implementing IEEE 1394. Under the typical patent pool license, a royalty of US$0.25 per unit is payable by the manufacturer upon the manufacture of each 1394 finished product; no royalties are payable by users.
The last of the patents, MY 120654 by Sony, expired on November 30, 2020. The following are the patent holders of the IEEE 1394 standard, as listed in the patent pool managed by MPEG LA.
A person or company may review the actual 1394 Patent Portfolio License upon request to MPEG LA. MPEG LA does not provide assurance of protection to licensees beyond its own patents. At least one formerly licensed patent is known to have been removed from the pool, and other hardware patents exist that reference IEEE 1394.
The 1394 High Performance Serial Bus Trade Association (the "1394 TA") was formed to aid the marketing of IEEE 1394. Its bylaws prohibit dealing with intellectual property issues. The 1394 Trade Association operates on an individual no cost membership basis to further enhancements to 1394 standards. The Trade Association also is the library source for all 1394 documentation and standards available.
Technical specifications
FireWire can connect up to 63 peripherals in a tree or daisy-chain topology (as opposed to Parallel SCSI's electrical bus topology). It allows peer-to-peer device communication — such as communication between a scanner and a printer — to take place without using system memory or the CPU. FireWire also supports multiple hosts per bus. It is designed to support plug and play and hot swapping. The copper cable it uses in its most common implementation can be up to 4.5 metres (15 ft) long and is more flexible than most parallel SCSI cables. In its six-conductor or nine-conductor variations, it can supply up to 45 watts of power per port at up to 30 volts, allowing moderate-consumption devices to operate without a separate power supply.
FireWire devices implement the ISO/IEC 13213 "configuration ROM" model for device configuration and identification, to provide plug-and-play capability. All FireWire devices are identified by an IEEE EUI-64 unique identifier in addition to well-known codes indicating the type of device and the protocols it supports.
FireWire devices are organized at the bus in a tree topology. Each device has a unique self-ID. One of the nodes is elected root node and always has the highest ID. The self-IDs are assigned during the self-ID process, which happens after each bus resets. The order in which the self-IDs are assigned is equivalent to traversing the tree depth-first, post-order.
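As an illustration of the ordering described above, the following Python sketch (not taken from the standard; the node names and tree shape are hypothetical) assigns self-IDs by a depth-first, post-order walk of the bus tree, so leaf devices receive the lowest IDs and the root node always ends up with the highest:

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.self_id = None

def assign_self_ids(root):
    """Post-order, depth-first numbering: children first, root last."""
    counter = 0
    def visit(node):
        nonlocal counter
        for child in node.children:
            visit(child)
        node.self_id = counter      # leaves get the lowest IDs
        counter += 1
    visit(root)

# Hypothetical bus: a camera and a disk hang off a hub, which hangs off the root.
bus = Node("root (PC)", [Node("hub", [Node("camera"), Node("disk")])])
assign_self_ids(bus)
# camera -> 0, disk -> 1, hub -> 2, root -> 3 (the root always has the highest ID)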
FireWire is capable of safely operating critical systems due to the way multiple devices interact with the bus and how the bus allocates bandwidth to the devices. FireWire is capable of both asynchronous and isochronous transfer methods at once. Isochronous data transfers are transfers for devices that require continuous, guaranteed bandwidth. In an aircraft, for instance, isochronous devices include control of the rudder, mouse operations and data from pressure sensors outside the aircraft. All these elements require constant, uninterrupted bandwidth. To support both elements, FireWire dedicates a certain percentage to isochronous data and the rest to asynchronous data. In IEEE 1394, 80% of the bus is reserved for isochronous cycles, leaving asynchronous data with a minimum of 20% of the bus.
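A back-of-the-envelope calculation shows what the 80/20 split means within one 125-microsecond cycle; the figures below are illustrative arithmetic based on the rates quoted in this article, not a description of real controller behaviour:

CYCLE_S = 125e-6                  # one bus cycle: 125 microseconds
ISO_SHARE = 0.80                  # portion reserved for isochronous traffic
S400_BIT_RATE = 393.216e6         # actual S400 rate in bits per second

iso_time = CYCLE_S * ISO_SHARE               # 100 microseconds per cycle
async_time = CYCLE_S - iso_time              # at least 25 microseconds per cycle
iso_bits_per_cycle = S400_BIT_RATE * iso_time

print(f"isochronous window: {iso_time*1e6:.0f} us, "
      f"~{iso_bits_per_cycle/8:.0f} bytes per cycle at S400")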
Encoding scheme
FireWire uses Data/Strobe encoding (D/S encoding). In D/S encoding, two non-return-to-zero (NRZ) signals are used to transmit the data with high reliability. The NRZ signal sent is fed with the clock signal through an XOR gate, creating a strobe signal. This strobe is then put through another XOR gate along with the data signal to reconstruct the clock. This in turn acts as the bus's phase-locked loop for synchronization purposes.
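The XOR relationship described above can be sketched in a few lines of Python; this toy example uses an alternating bit pattern as the clock (real PHYs work with signal transitions rather than bit lists, but the algebra is the same: strobe = data XOR clock, and clock = data XOR strobe):

data  = [1, 0, 0, 1, 1, 1, 0, 1]           # NRZ data bits to send
clock = [i % 2 for i in range(len(data))]   # toy alternating clock

strobe = [d ^ c for d, c in zip(data, clock)]            # generated at the sender
recovered_clock = [d ^ s for d, s in zip(data, strobe)]  # rebuilt at the receiver

assert recovered_clock == clock  # the XORs cancel, so the clock is recovered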
Arbitration
The process of the bus deciding which node gets to transmit data at what time is known as arbitration. Each arbitration round lasts about 125 microseconds. During the round, the root node (device nearest the processor) sends a cycle start packet. All nodes requiring data transfer respond, with the closest node winning. After the node is finished, the remaining nodes take turns in order. This repeats until all the devices have used their portion of the 125 microseconds, with isochronous transfers having priority.
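The following simplified Python sketch illustrates the turn-taking described above; the node list, hop counts, and per-node transfer times are invented for illustration and are not taken from the standard:

CYCLE_US = 125.0

def run_cycle(requests):
    # requests: (name, is_isochronous, hops_from_root, time_needed_us)
    # Isochronous traffic is served first; within each class the node
    # closest to the root wins, and the rest take turns in order.
    remaining = CYCLE_US
    order = sorted(requests, key=lambda r: (not r[1], r[2]))
    for name, _, _, needed in order:
        if needed <= remaining:
            remaining -= needed
            print(f"{name}: transfers for {needed} us ({remaining:.0f} us left)")
        else:
            print(f"{name}: waits for the next cycle")

run_cycle([
    ("disk (async)",    False, 2, 30.0),
    ("camera (iso)",    True,  1, 80.0),
    ("scanner (async)", False, 3, 30.0),
])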
Standards and versions
The previous standard and its three published amendments are now incorporated into a superseding standard, IEEE 1394-2008. The features added by each give a good history of the development path.
FireWire 400 (IEEE 1394-1995)
The original release of IEEE 1394-1995 specified what is now known as FireWire 400. It can transfer data between devices at 100, 200, or 400 Mbit/s half-duplex data rates (the actual transfer rates are 98.304, 196.608, and 393.216 Mbit/s, i.e., 12.288, 24.576 and 49.152 MB/s respectively). These different transfer modes are commonly referred to as S100, S200, and S400.
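The relationship between the nominal mode names, the actual bit rates, and the corresponding byte rates quoted above can be checked with a small calculation:

actual_mbit = {"S100": 98.304, "S200": 196.608, "S400": 393.216}

for mode, mbit in actual_mbit.items():
    print(f"{mode}: {mbit} Mbit/s = {mbit / 8} MB/s")
# S100: 98.304 Mbit/s = 12.288 MB/s
# S200: 196.608 Mbit/s = 24.576 MB/s
# S400: 393.216 Mbit/s = 49.152 MB/s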
Cable length is limited to 4.5 metres (15 ft), although up to 16 cables can be daisy-chained using active repeaters; external or internal hubs are often present in FireWire equipment. The S400 standard limits any configuration's maximum cable length to 72 metres (236 ft). The 6-conductor connector is commonly found on desktop computers and can supply the connected device with power.
The 6-conductor powered connector, now referred to as an alpha connector, adds power output to support external devices. Typically a device can pull about 7 to 8 watts from the port; however, the voltage varies significantly from different devices. Voltage is specified as unregulated and should nominally be about 25 volts (range 24 to 30). Apple's implementation on laptops is typically related to battery power and can be as low as 9 V.
Improvements (IEEE 1394a-2000)
An amendment, IEEE 1394a, was released in 2000, which clarified and improved the original specification. It added support for asynchronous streaming, quicker bus reconfiguration, packet concatenation, and a power-saving suspend mode.
IEEE 1394a offers a couple of advantages over the original IEEE 1394–1995. 1394a is capable of arbitration accelerations, allowing the bus to accelerate arbitration cycles to improve efficiency. It also allows for arbitrated short bus reset, in which a node can be added or dropped without causing a big drop in isochronous transmission.
1394a also standardized the 4-conductor alpha connector developed by Sony and trademarked as "i.LINK", already widely in use on consumer devices such as camcorders, most PC laptops, a number of PC desktops, and other small FireWire devices. The 4-conductor connector is fully data-compatible with 6-conductor alpha interfaces but lacks power connectors.
FireWire 800 (IEEE 1394b-2002)
IEEE 1394b-2002 introduced FireWire 800 (Apple's name for the 9-conductor "S800 bilingual" version of the IEEE 1394b standard). This specification and corresponding products allow a transfer rate of 786.432 Mbit/s full-duplex via a new encoding scheme termed beta mode. It is backwards compatible with the slower rates and 6-conductor alpha connectors of FireWire 400. However, while the IEEE 1394a and IEEE 1394b standards are compatible, FireWire 800's connector, referred to as a beta connector, is different from FireWire 400's alpha connectors, making legacy cables incompatible. A bilingual cable allows the connection of older devices to the newer port. In 2003, Apple was the first to introduce commercial products with the new connector.
The full IEEE 1394b specification supports data rates up to 3200 Mbit/s (i.e., 400 MB/s) over beta-mode or optical connections up to 100 metres in length. Standard Category 5e unshielded twisted pair supports 100 metres at S100. The original 1394 and 1394a standards used data/strobe (D/S) encoding (renamed to alpha mode) on the cables, while 1394b added a data encoding scheme called 8b/10b, referred to as beta mode.
Beta mode is based on 8b/10b encoding (from Gigabit Ethernet, also used for many other protocols). 8b/10b encoding expands an 8-bit data word into 10 bits by splitting it into a 5-bit and a 3-bit sub-block, which are fed through a 5b/6b encoder and a 3b/4b encoder respectively. A running-disparity calculator chooses between the alternative code groups defined for each sub-block so that the number of 1s transmitted stays equal to the number of 0s, keeping the signal DC-balanced. The resulting code groups also guarantee enough signal transitions for the PLL at the receiving end to stay synchronized to the correct bit boundaries, allowing reliable transfer. An additional function of the coding scheme is to support arbitration for bus access and general bus control. This is possible thanks to the "surplus" symbols afforded by the 8b/10b expansion. (While 8-bit symbols can encode a maximum of 256 values, 10-bit symbols permit the encoding of up to 1,024.) Symbols invalid for the current state of the receiving PHY indicate data errors.
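The disparity bookkeeping described above can be sketched briefly in Python. This is only a skeleton of the principle, under stated assumptions: the pair of alternative code groups passed to choose_code_group is a placeholder table entry rather than a value taken from the real IEEE 8b/10b tables, and control symbols, comma detection, and error handling are omitted.

```python
def split_byte(value):
    """Split an 8-bit data word into the 5-bit and 3-bit sub-blocks that
    feed the 5b/6b and 3b/4b encoders."""
    return value & 0x1F, (value >> 5) & 0x07

def disparity(code_group):
    """Disparity of a code group: (number of 1 bits) minus (number of 0 bits)."""
    ones = code_group.count("1")
    return ones - (len(code_group) - ones)

def choose_code_group(alternatives, running_disparity):
    """Pick the variant of a code group that keeps the line DC-balanced.

    `alternatives` is the (use-after-negative-RD, use-after-positive-RD) pair
    defined for one sub-block value; the chosen variant either leaves the
    running disparity unchanged (balanced group) or flips its sign.
    """
    negative_variant, positive_variant = alternatives
    chosen = negative_variant if running_disparity < 0 else positive_variant
    return chosen, running_disparity + disparity(chosen)

# Example: encode the 5-bit sub-block of one byte, starting with RD = -1.
# ("100111", "011000") is used here as an assumed table entry for that value.
low5, high3 = split_byte(0x00)
code, rd = choose_code_group(("100111", "011000"), running_disparity=-1)
print(low5, high3, code, rd)   # -> 0 0 100111 1
```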
FireWire S800T (IEEE 1394c-2006)
IEEE 1394c-2006 was published on June 8, 2007. It provided a major technical improvement: a new port specification that provides 800 Mbit/s over the same 8P8C (Ethernet) connectors with Category 5e cable specified in IEEE 802.3 clause 40 (Gigabit Ethernet over copper twisted pair), along with a corresponding automatic negotiation that allows the same port to connect to either IEEE 1394 or IEEE 802.3 (Ethernet) devices.
Though the potential for a combined Ethernet and FireWire 8P8C port is intriguing, no products or chipsets are known to have included this capability.
FireWire S1600 and S3200
In December 2007, the 1394 Trade Association announced that products would be available before the end of 2008 using the S1600 and S3200 modes that, for the most part, had already been defined in 1394b and were further clarified in IEEE Std. 1394–2008. The 1.572864 Gbit/s and 3.145728 Gbit/s devices use the same 9-conductor beta connectors as the existing FireWire 800 and are fully compatible with existing S400 and S800 devices. It competes with USB 3.0.
S1600 (Symwave) and S3200 (DapTechnology) development units were produced; however, because of the FPGA technology involved, DapTechnology targeted S1600 implementations first, and S3200 did not become commercially available until 2012.
Steve Jobs declared FireWire dead in 2008. Few S1600 devices were released, with a Sony camera being the only notable user.
Future enhancements (including P1394d)
A project named IEEE P1394d was formed by the IEEE on March 9, 2009 to add single mode fiber as an additional transport medium to FireWire. The project was withdrawn in 2013.
Other future iterations of FireWire were expected to increase speed to 6.4 Gbit/s and to add connectors such as the small multimedia interface.
Operating system support
Full support for IEEE 1394a and 1394b is available for Microsoft Windows, FreeBSD, Linux, Apple Mac OS 8.6 and later (including Mac OS X), NetBSD, and Haiku.
In Windows XP, a degradation in performance of 1394 devices may have occurred with installation of Service Pack 2. This was resolved in Hotfix 885222 and in SP3. Some FireWire hardware manufacturers also provide custom device drivers that replace the Microsoft OHCI host adapter driver stack, enabling S800-capable devices to run at full 800 Mbit/s transfer rates on older versions of Windows (XP SP2 w/o Hotfix 885222) and Windows Vista. At the time of its release, Microsoft Windows Vista supported only 1394a, with assurances that 1394b support would come in the next service pack. Service Pack 1 for Microsoft Windows Vista has since been released, however the addition of 1394b support is not mentioned anywhere in the release documentation. The 1394 bus driver was rewritten for Windows 7 to provide support for higher speeds and alternative media.
In Linux, support was originally provided by libraw1394, which allowed direct communication between user space and the IEEE 1394 bus. Subsequently, a new kernel driver stack, nicknamed JuJu, was implemented.
Cable TV system support
Under FCC Code 47 CFR 76.640 section 4, subsections 1 and 2, cable TV providers (in the US, with digital systems) were required, upon request of a customer, to provide a high-definition capable cable box with a functional FireWire interface. This applied only to customers leasing high-definition capable cable boxes from their cable provider after April 1, 2004.
The interface can be used to display or record Cable TV, including HDTV programming. In June 2010, the FCC issued an order that permitted set-top boxes to include IP-based interfaces in place of FireWire.
Comparison with USB
While both technologies provide similar end results, there are fundamental differences between USB and FireWire. USB requires the presence of a bus master, typically a PC, which connects point to point with the USB slave. This allows for simpler (and lower-cost) peripherals, at the cost of lowered functionality of the bus. Intelligent hubs are required to connect multiple USB devices to a single USB bus master. By contrast, FireWire is essentially a peer-to-peer network (where any device may serve as the host or client), allowing multiple devices to be connected on one bus.
The FireWire host interface supports DMA and memory-mapped devices, allowing data transfers to happen without loading the host CPU with interrupts and buffer-copy operations. Additionally, FireWire features two data buses for each segment of the bus network, whereas, until USB 3.0, USB featured only one. This means that FireWire can have communication in both directions at the same time (full-duplex), whereas USB communication prior to 3.0 can only occur in one direction at any one time (half-duplex).
While USB 2.0 expanded into the fully backwards-compatible USB 3.0 and 3.1 (using the same main connector type), FireWire used a different connector between 400 and 800 implementations.
Common applications
Consumer automobiles
IDB-1394 Customer Convenience Port (CCP) was the automotive version of the 1394 standard.
Consumer audio and video
IEEE 1394 was the High-Definition Audio-Video Network Alliance (HANA) standard connection interface for A/V (audio/visual) component communication and control. HANA was dissolved in September 2009 and the 1394 Trade Association assumed control of all HANA-generated intellectual property.
Military and aerospace vehicles
SAE Aerospace standard AS5643, originally released in 2004 and reaffirmed in 2013, establishes IEEE 1394 as a military and aerospace databus network in those vehicles. AS5643 is utilized by several large programs, including the F-35 Lightning II, the X-47B UCAV aircraft, the AGM-154 weapon, and the JPSS-1 polar satellite for NOAA. AS5643 combines existing 1394-2008 features such as looped topology with additional features such as transformer isolation and time synchronization to create deterministic, double- and triple-fault-tolerant data bus networks.
General networking
FireWire can be used for ad hoc (terminals only, no routers except where a FireWire hub is used) computer networks. Specifically, RFC 2734 specifies how to run IPv4 over the FireWire interface, and RFC 3146 specifies how to run IPv6.
Mac OS X, Linux, and FreeBSD include support for networking over FireWire. Windows 95, Windows 98, Windows Me, Windows XP and Windows Server 2003 include native support for IEEE 1394 networking. Windows 2000 does not have native support but may work with third party drivers. A network can be set up between two computers using a single standard FireWire cable, or by multiple computers through use of a hub. This is similar to Ethernet networks with the major differences being transfer speed, conductor length, and the fact that standard FireWire cables can be used for point-to-point communication.
On December 4, 2004, Microsoft announced that it would discontinue support for IP networking over the FireWire interface in all future versions of Microsoft Windows. Consequently, support for this feature is absent from Windows Vista and later Windows releases.
Microsoft rewrote their 1394 driver in Windows 7 but networking support for FireWire is not present. Unibrain offers free FireWire networking drivers for Windows called ubCore, which support Windows Vista and later versions.
Some models of the PlayStation 2 console had an i.LINK-branded 1394 connector. This was used for networking until the release of an Ethernet adapter late in the console's lifespan, but very few software titles supported the feature.
IIDC
IIDC (Instrumentation & Industrial Digital Camera) is the FireWire data format standard for live video, and is used by Apple's iSight A/V camera. The system was designed for machine vision systems but is also used for other computer vision applications and for some webcams. Although they are easily confused since they both run over FireWire, IIDC is different from, and incompatible with, the ubiquitous AV/C (Audio Video Control) used to control camcorders and other consumer video devices.
DV
Digital Video (DV) is a standard protocol used by some digital camcorders. All DV cameras that recorded to tape media had a FireWire interface (usually a 4-conductor connector). DV ports on camcorders operate only at the slower 100 Mbit/s (S100) speed of FireWire. This presents operational issues if the camcorder is daisy-chained from a faster S400 device or through a common hub, because a single segment of a FireWire network cannot support communication at multiple speeds.
Labeling of the port varied by manufacturer, with Sony using either its i.LINK trademark or the letters 'DV'. Many digital video recorders have a "DV-input" FireWire connector (usually an alpha connector) that can be used to record video directly from a DV camcorder ("computer-free"). The protocol also accommodates remote control (play, rewind, etc.) of connected devices, and can stream time code from a camera.
USB is unsuitable for transferring video data from tape because tape, by its very nature, does not support variable data rates. USB relies heavily on the host processor, which was not guaranteed to service the USB port in time. The later move away from tape towards solid-state memory or disc media (e.g., SD cards, optical discs or hard drives) has made USB transfer practical, because file-based data can be moved in segments as required.
Frame grabbers
The IEEE 1394 interface is commonly found in frame grabbers, devices that capture and digitize an analog video signal; however, IEEE 1394 faces competition from the Gigabit Ethernet interface, which is often cited for its speed and availability advantages.
iPod and iPhone synchronization and charging
iPods released prior to the iPod with Dock Connector used IEEE 1394a ports for syncing music and charging, but in 2003 the FireWire port on iPods was succeeded by Apple's 30-pin dock connector, and IEEE 1394-to-30-pin cables were produced. Apple Inc. dropped support for FireWire cables starting with the iPod nano (4th generation), iPod touch (2nd generation), and iPhone in favor of USB cables.
Security issues
Devices on a FireWire bus can communicate by direct memory access (DMA), where a device can use hardware to map internal memory to FireWire's "Physical Memory Space". The SBP-2 (Serial Bus Protocol 2) used by FireWire disk drives uses this capability to minimize interrupts and buffer copies. In SBP-2, the initiator (controlling device) sends a request by remotely writing a command into a specified area of the target's FireWire address space. This command usually includes buffer addresses in the initiator's FireWire Physical Address Space, which the target is supposed to use for moving I/O data to and from the initiator.
On many implementations, particularly those like PCs and Macs using the popular OHCI, the mapping between the FireWire "Physical Memory Space" and device physical memory is done in hardware, without operating system intervention. While this enables high-speed and low-latency communication between data sources and sinks without unnecessary copying (such as between a video camera and a software video recording application, or between a disk drive and the application buffers), this can also be a security or media rights-restriction risk if untrustworthy devices are attached to the bus and initiate a DMA attack. One of the applications known to exploit this to gain unauthorized access to running Windows, Mac OS and Linux computers is the spyware FinFireWire. For this reason, high-security installations typically either use newer machines that map a virtual memory space to the FireWire "Physical Memory Space" (such as a Power Mac G5, or any Sun workstation), disable relevant drivers at operating system level, disable the OHCI hardware mapping between FireWire and device memory, physically disable the entire FireWire interface, or opt to not use FireWire or other hardware like PCMCIA, PC Card, ExpressCard or Thunderbolt, which expose DMA to external components.
An unsecured FireWire interface can be used to debug a machine whose operating system has crashed, and in some systems for remote-console operations. Windows natively supports this scenario of kernel debugging, although newer Windows Insider Preview builds no longer include the ability out of the box. On FreeBSD, the dcons driver provides both, using gdb as debugger. Under Linux, firescope and fireproxy exist.
See also
DMA attack
HAVi
Linux IEEE 1394 target
List of interface bit rates
Pin control attack
References
Further reading
External links
1394 Trade Association
1394 Standards Orientation, Introduction.
IEEE 1394 connectors pinout
Computer buses
Computer connectors
Computer storage buses
IEEE standards
Macintosh internals
Personal area networks
Serial buses
Television terminology
Video signal |
1758913 | https://en.wikipedia.org/wiki/ATM%20Adaptation%20Layer%205 | ATM Adaptation Layer 5 | ATM Adaptation Layer 5 (AAL5) is an ATM adaptation layer used to send variable-length packets up to 65,535 octets in size across an Asynchronous Transfer Mode (ATM) network.
Unlike most network frames, which place control information in the header, AAL5 places control information in an 8-octet trailer at the end of the packet. The AAL5 trailer contains a 16-bit length field, a 32-bit cyclic redundancy check (CRC) and two 8-bit fields labeled UU and CPI that are currently unused.
Each AAL5 packet is divided into an integral number of ATM cells and reassembled into a packet before delivery to the receiving host. This process is known as Segmentation and Reassembly (see below). The last cell contains padding to ensure that the entire packet is a multiple of 48 octets long. The final cell contains up to 40 octets of data, followed by padding bytes and the 8-octet trailer. In other words, AAL5 places the trailer in the last 8 octets of the final cell where it can be found without knowing the length of the packet; the final cell is identified by a bit in the ATM header (see below), and the trailer is always in the last 8 octets of that cell.
Convergence, segmentation, and reassembly
When an application sends data over an ATM connection using AAL5, the host delivers a block of data to the AAL5 interface. AAL5 generates a trailer, divides the information into 48-octet pieces, and transfers each piece across the ATM network in a single cell. On the receiving end of the connection, AAL5 reassembles incoming cells into a packet, checks the CRC to ensure that all pieces arrived correctly, and passes the resulting block of data to the host software. The process of dividing a block of data into cells and regrouping them is known as ATM segmentation and reassembly (SAR).
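A minimal Python sketch of the sending side is shown below. It pads the data, appends the 8-octet trailer (UU, CPI, 16-bit length, 32-bit CRC), and slices the result into 48-octet cell payloads. Assumptions: the UU and CPI octets are simply set to zero, binascii.crc32 stands in for the AAL5 CRC-32 (whose exact bit conventions are glossed over here), and ATM cell headers are not modelled.

```python
import binascii
import struct

CELL_PAYLOAD = 48  # octets of payload carried by each ATM cell

def aal5_segment(data: bytes) -> list:
    """Build an AAL5 CPCS-PDU around `data` and split it into cell payloads.

    PDU layout: data | padding | UU | CPI | 16-bit length | 32-bit CRC,
    padded so that the whole PDU is a multiple of 48 octets long.
    """
    pad_len = -(len(data) + 8) % CELL_PAYLOAD            # leave room for the trailer
    padded = data + b"\x00" * pad_len
    trailer_head = struct.pack("!BBH", 0, 0, len(data))  # UU = 0, CPI = 0, length
    crc = binascii.crc32(padded + trailer_head) & 0xFFFFFFFF  # stand-in CRC-32
    pdu = padded + trailer_head + struct.pack("!I", crc)
    return [pdu[i:i + CELL_PAYLOAD] for i in range(0, len(pdu), CELL_PAYLOAD)]

print(len(aal5_segment(b"x" * 100)))   # 3 cells: 100 data + 36 pad + 8 trailer = 144
```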
By separating the functions of segmentation and reassembly from cell transport, AAL5 follows the layering principle. The ATM cell transfer layer is classified as "machine-to-machine" because the layering principle applies from one machine to the next (e.g., between a host and a switch, or between two switches). The AAL5 layer is classified as "end-to-end" because the layering principle applies from the source to the destination: AAL5 presents the receiving software with data in exactly the same size blocks as the application passed to AAL5 on the sending end.
The AAL5 on the receiving side knows how many cells comprise a packet because the sending AAL5 uses the low-order bit of the PAYLOAD TYPE field of the ATM cell header to mark the final cell in a packet. This bit can be thought of as an "end-of-packet bit". Thus, the receiving AAL5 collects incoming cells until it finds one with the end-of-packet bit set. ATM standards use the term "convergence" to describe mechanisms that recognize the end of a packet. Although AAL5 uses a single bit in the cell header for convergence, other ATM adaptation layer protocols are free to use other convergence mechanisms.
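The receiving side's convergence logic can be sketched the same way, reusing aal5_segment and the imports from the sketch above. Each incoming cell is modelled here as a (payload, last_cell) pair, where last_cell stands for the low-order bit of the PAYLOAD TYPE field; again, binascii.crc32 is only a stand-in for the real AAL5 CRC-32.

```python
def aal5_reassemble(cells) -> bytes:
    """Collect cell payloads until the end-of-packet bit is seen, then check
    the CRC stand-in and strip the padding and trailer."""
    buf = bytearray()
    for payload, last_cell in cells:
        buf += payload
        if last_cell:
            body, stored_crc = bytes(buf[:-4]), struct.unpack("!I", buf[-4:])[0]
            if binascii.crc32(body) & 0xFFFFFFFF != stored_crc:
                raise ValueError("CRC mismatch")
            _uu, _cpi, length = struct.unpack("!BBH", body[-4:])
            return body[:length]          # drop padding, UU, CPI, length and CRC
    raise ValueError("cell stream ended before the final cell arrived")

cells = aal5_segment(b"hello, ATM")
marked = [(cell, i == len(cells) - 1) for i, cell in enumerate(cells)]
print(aal5_reassemble(marked))            # b'hello, ATM'
```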
Packet type and multiplexing
The AAL5 trailer does not include a type field. Thus, an AAL5 frame does not identify its content. This means that either the two hosts at the ends of a virtual circuit must agree a priori that the circuit will be used for one specific protocol (e.g., the circuit will only be used to send IP datagrams), or the two hosts at the ends of a virtual circuit must agree a priori that some octets of the data area will be reserved for use as a type field to distinguish packets containing one protocol's data from packets containing another protocol's data.
RFC 2684, Multiprotocol Encapsulation over ATM, describes two encapsulation mechanisms for network traffic, one of which implements the former scheme and one of which implements the latter scheme.
The former scheme, in which the hosts agree on the high-level protocol for a given circuit, is referred to in RFC 2684 as "VC Multiplexing". It has the advantage of not requiring additional information in a packet, which minimises the overhead. For example, if the hosts agree to transfer IP, a sender can pass each datagram directly to AAL5 for transfer; nothing needs to be sent besides the datagram and the AAL5 trailer. The chief disadvantage of such a scheme lies in the duplication of virtual circuits: a host must create a separate virtual circuit for each high-level protocol if more than one protocol is used. Because most carriers charge for each virtual circuit, customers try to avoid using multiple circuits because of the unnecessary added cost.
The latter scheme, in which the hosts use a single virtual circuit for multiple protocols, is referred to in RFC 2684 as "LLC Encapsulation". The standards suggest that hosts should use a standard IEEE 802.2 Logical Link Control (LLC) header, followed by a Subnetwork Access Protocol (SNAP) header if necessary. This scheme has the advantage of allowing all traffic over the same circuit, but the disadvantage of requiring each packet to contain octets that identify the protocol type, which adds overhead. The scheme also has the disadvantage that packets from all protocols travel with the same delay and priority.
RFC 2684 specifies that hosts can choose between the two methods of using AAL5. Both the sender and receiver must agree on how the circuit will be used. The agreement may involve manual configuration.
Datagram encapsulation and IP MTU size
Internet Protocol (IP) can use AAL5, combined with one of the encapsulation schemes described in RFC 2684, to transfer datagrams across an ATM network, as specified in RFC 2225. Before data can be sent, a virtual circuit (PVC or SVC) must be in place to the destination host and both ends must agree to use AAL5 on the circuit. To transfer a datagram, the sender passes it to AAL5 along with the VPI/VCI identifying the circuit. AAL5 generates a trailer, divides the datagram into cells, and transfers the cells across the network. At the receiving end, AAL5 reassembles the cells, checks the CRC to verify that no bits were lost or corrupted, extracts the datagram, and passes it to the IP layer.
AAL5 uses a 16-bit length field, making it possible to send up to 65,535 (2¹⁶ − 1) octets in a single packet. However, RFC 2225 ("Classical IP and ARP over ATM") specifies a default MTU of 9180 octets per datagram, so, unless the hosts on both ends of the virtual circuit negotiate a larger MTU, IP datagrams larger than 9180 octets will be fragmented.
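As a worked example, assuming VC multiplexing (no LLC/SNAP header in front of the datagram), a full 9180-octet datagram plus its 8-octet AAL5 trailer occupies 192 cells:

```python
import math

mtu = 9180                          # default MTU from RFC 2225
cells = math.ceil((mtu + 8) / 48)   # datagram + 8-octet trailer, 48 octets per cell
print(cells)                        # -> 192
```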
References
Network protocols |
65324781 | https://en.wikipedia.org/wiki/2021%20NASCAR%20Camping%20World%20Truck%20Series | 2021 NASCAR Camping World Truck Series | The 2021 NASCAR Camping World Truck Series was the 27th season of the NASCAR Camping World Truck Series, a stock car racing series sanctioned by NASCAR in the United States. The season began at Daytona International Speedway with the NextEra Energy 250 on February 12. The regular season ended with the race at Watkins Glen International on August 7. The NASCAR playoffs ended with the Lucas Oil 150 at Phoenix Raceway on November 5. This season marked the 13th for Camping World Holdings as the series' title sponsor. After two years of advertising their Gander Outdoors retail chain in the title sponsorship, company CEO Marcus Lemonis announced on September 15, 2020 that the sponsorship would switch back to the Camping World brand beginning in 2021, which was the name the series used from 2009 to 2018.
Following the Corn Belt 150 at Knoxville Raceway, John Hunter Nemechek of Kyle Busch Motorsports clinched the Regular Season Championship one race early. Toyota claimed its 12th Manufacturer Championship following the United Rentals 200 at Martinsville Speedway. At the season finale, Ben Rhodes of ThorSport Racing became the 2021 Truck Series champion.
Teams and drivers
Complete schedule
Limited schedule
Notes
Changes
Teams
On June 10, 2020, Ray Ciccarelli announced that he would be closing down his CMI Motorsports team after the 2020 season as a result of his disagreement with NASCAR's decision to ban the Confederate flag (which happened after the murder of George Floyd and the subsequent Black Lives Matter protests), meaning his No. 49 and No. 83 trucks would not be returning in 2021. However, CMI hinted in a tweet on October 30 that they would return for the 2021 season in some capacity. The team would later announce that Tim Viens would run a full season in the No. 49 while the No. 83 would continue to run part-time with a rotation of drivers, including Ciccarelli himself.
On July 15, 2020, ARCA Menards Series team owner/driver Justin Carroll announced that his TC Motorsports team would run part-time in the Truck Series in 2021 after they purchased a race truck in the summer of 2020. The team's first race was going to come at the Daytona Road Course, but they later postponed their debut to Richmond (which they did not enter) with additional starts planned at Bristol in September (which they also did not enter) and Martinsville.
On July 31, 2020, eventual 2020 ARCA Menards Series champion Bret Holmes told reporter Chris Knight that he was looking at debuting in the Truck Series for select races in 2021. On January 14, 2021, Holmes announced that he would be driving for his own team as it expands into the Truck Series. He and Sam Mayer will run partial schedules in the No. 32 truck. On March 6, 2021, it was revealed that the team had purchased the owner points of the No. 28 FDNY Racing truck, which attempted the season-opener at Daytona, in order to be more likely to qualify for races without qualifying if an entry list had over 40 trucks.
On October 8, 2020, Trey Hutchens revealed that he and his team planned on expanding their part-time schedule to between 8 and 12 races in 2021, after previously attempting 5 and 3 races in 2020 and 2019, respectively.
On November 13, 2020, it was announced that Lira Motorsports would be returning to the series for the first time since 2016, fielding a part-time entry for late model and NASCAR Roots driver Logan Misuraca. She and the team also announced that they would run part-time in the ARCA Menards Series in 2021. Misuraca revealed in an interview in April that her deal with Lira had fallen through. Soon after, she joined On Point Motorsports to potentially drive the No. 30 with Camping World sponsorship during Marcus Lemonis' efforts to get all trucks in each race sponsored. She filmed a video to try to get Camping World and Lemonis to sponsor her, but as of August 2021, no deal has been put together.
On December 1, 2020, McAnally-Hilgemann Racing announced that they would field a truck at the season-finale at Phoenix that will be driven by one of the drivers who participated in their new Driver Academy Series. All of the drivers who win a race in that series will be put into a drawing, and the winner of that drawing will get to drive this truck in this one race. With Derek Kraus returning to the team's No. 19 in 2021, this entry for the team at Phoenix will be a second part-time McAnally truck.
On December 18, 2020, driver Willie Allen and Rackley Roofing owner Curtis Sutton announced the formation of the team Rackley WAR, fielding the No. 25 full-time for Timothy Peters. On June 1, 2021, the team announced that Peters would be released and replaced with JR Motorsports Xfinity Series driver Josh Berry for the next three races. On June 3, Rackley WAR announced that Hendrick Motorsports Cup Series driver William Byron would make his first Truck Series start since his seven-win full season in the series in 2016, driving a part-time second truck, the No. 27, for the team.
On January 13, 2021, it was announced that Hattori Racing Enterprises would be adding a part-time second truck in 2021 for Max McLaughlin. The races that it will be entered in and its number have yet to be announced. McLaughlin, the son of retired NASCAR driver Mike McLaughlin, ran full-time for the team in their ARCA Menards Series East No. 1 car in 2019 and 2020, but will not return to that ride in 2021 in order to concentrate on his dirt racing efforts. Max has one prior Truck start, which came in the 2018 Eldora Dirt Derby for Niece Motorsports.
On January 14, 2021, it was announced that Win-Tron Racing would merge into AM Racing, a Truck Series team that they have had an alliance with for multiple years and that they have shared a race shop with beginning in 2020.
On January 22, 2021, Spencer Davis announced that he and his team would be running the full season in 2021, after previously running part-time in 2020. After failing to qualify for the season-opener at Daytona, they did not attempt the next two races (the Daytona Road Course and Las Vegas). The team returned in the next race at Atlanta, as it was announced on March 20 that Davis had acquired the owner points of the No. 8 NEMCO Motorsports truck, which would allow his team to run the entire season since the No. 8 had attempted the first two races of the season. On March 18, the team announced that full-time Cup Series driver and 2014 Eldora Dirt Derby winner Bubba Wallace would return to the Truck Series to drive their No. 11 in the Bristol dirt race, which would be fielded in a collaboration with Hattori Racing Enterprises for this race.
On January 29, 2021, it was announced that Young's Motorsports would expand back up to three full-time trucks in 2021 with the re-addition of the No. 12, to be driven by Tate Fogleman, who had driven the team's No. 02 in 2020; the No. 02 will now be driven by Kris Wright.
On March 6, 2021, it was revealed that Henderson Motorsports had purchased the owner points of Clay Greenfield's No. 68 truck, which ran nearly the full season in 2020, to use for their No. 75 truck for the rest of the season.
On March 17, 2021, Ryan Newman revealed that he would be entering the Bristol dirt race with a new team, DCC Racing, owned by Brad Means. The team used the No. 39, Newman's number when he drove for Stewart-Haas Racing in the Cup Series, and was a Ford, the manufacturer Newman drives in the Cup Series with Roush Fenway Racing. Brad Means is the son of Jimmy Means, the team owner of Xfinity Series team Jimmy Means Racing. On August 27, 2021, it was announced that DCC would be partnering with Reaume Brothers Racing to jointly field Dylan Lupton in the No. 34 RBR truck for the last four races of the season.
Drivers
On August 28, 2020, it was announced that Chris Hacker would be joining Cram Racing Enterprises, starting with one ARCA start in 2020 (which ended up being the West Series season-finale at Phoenix), which would be followed by the 2021 ARCA season-opener at Daytona, and then either a part-time or full-time schedule in the Truck Series, depending on sponsorship. On June 1, 2021, Hacker tweeted that he would be driving Cram's No. 41 truck at Nashville, although the team refuted that statement hours later. On June 2, CRE and Hacker parted ways, supposedly because of the team's frustration with Hacker "announcing" his start prematurely. On August 16, 2021, it was announced that Hacker would make his Truck Series debut at Gateway in the No. 34 for Reaume Brothers Racing in a partnership with On Point Motorsports.
On September 24, 2020, it was announced that Carson Hocevar, who drove part-time in the Nos. 40 and 42 for Niece Motorsports in 2020, would return to the team in 2021 to run full-time and for Rookie of the Year in the No. 42 truck.
On September 27, 2020, it was announced that West Series driver Keith McGee would drive for Reaume Brothers Racing at some point during the 2021 season. He was scheduled to run the race at Talladega in the team's No. 33 truck in 2020, but was not approved to make his debut due to his lack of superspeedway experience coupled with how there was no practice and qualifying due to COVID-19. As a result, the team announced his first start with them would be pushed back a year. On January 18, 2021, Reaume announced that McGee's first race would be at Richmond in the No. 33. In addition, the team stated that he could run more races in 2021 if sponsorship could be found.
On October 17, 2020, Ford Performance announced that Hailie Deegan would run full-time and for Rookie of the Year in a DGR-Crosley truck in 2021. She drove for the team full-time in the ARCA Menards Series in 2020 along with making one Truck Series start.
On October 29, 2020, Frontstretch reporter Kevin Rutherford revealed that Howie DiSavino III had told him that he would make his Truck Series debut in 2021. Before COVID-19 hit, DiSavino III had planned on doing so in 2020 in the race at Richmond in a No. 32 truck for Win-Tron Racing. On April 5, it was announced that DiSavino's first race would still come at Richmond, but it would be in the No. 3 for Jordan Anderson Racing.
On November 10, 2020, NASCAR issued an indefinite suspension to Josh Reaume for an allegedly antisemitic post on social media that violated Sections 12.1 and 12.8.1.e in the NASCAR Rule Book. Reaume was reinstated by NASCAR on March 31, 2021.
On November 12, 2020, it was announced that Brett Moffitt would move up to the Xfinity Series full-time in 2021, driving the Our Motorsports No. 02 car which he drove in many races in 2020 and leaving his full-time Truck Series ride in the No. 23 for GMS Racing. The following day, GMS announced that Chase Purdy would be a full-time driver for the team, replacing Moffitt, with the truck crew chief to be determined. Purdy drove part-time in the team's No. 24 in 2020. On August 5, 2021, Purdy tested positive for COVID-19, forcing him to miss Watkins Glen while A. J. Allmendinger substituted for him.
On November 19, 2020, it was revealed that pending the signing of contracts, Tim Viens would drive full-time in the No. 49 for CMI Motorsports in 2021, after having entered a majority of the races in 2020 for the team in either the No. 49 or No. 83 truck. Viens later ended up driving the No. 83 instead of the No. 49.
On November 23, 2020, it was announced that John Hunter Nemechek would return to the Truck Series full-time in 2021 to drive the Kyle Busch Motorsports No. 4, replacing Raphaël Lessard. Nemechek drove the Front Row Motorsports No. 38 in the Cup Series in 2020 as well as a part-time schedule in the No. 8 for his family team, NEMCO Motorsports.
On November 24, 2020, Ryan Truex, who ran part-time in the No. 40 for Niece Motorsports in 2020, announced that he would return for a full-time season in the same truck in 2021.
On November 25, 2020, it was announced that Raphaël Lessard would drive in the No. 24 for 12 races for GMS Racing, with the possibility of a full season if sponsorship could be found. Lessard ran full-time in the No. 4 for Kyle Busch Motorsports in 2020. On January 7, 2021, GMS announced that Lessard would be able to run the full season after the team found additional sponsorship. However, on April 3, it was announced that sponsorship had dried up again and Lessard would be taken out of the truck.
On December 2, 2020, Moffitt surprisingly announced that he would return to the Truck Series full-time, driving the Niece Motorsports No. 45, meaning that he will run full-time in both the Xfinity and Truck Series in 2021.
On December 7, 2020, Kyle Busch Motorsports announced that Chandler Smith, who drove the Nos. 46 and 51 part-time for them for the previous two seasons, would replace Christian Eckes in the No. 18 in 2021 in his first full season in the Truck Series. Smith also competed part-time in the ARCA Menards Series for the previous three seasons with Venturini Motorsports, winning a total of nine races.
On December 31, 2020, it was announced that Kris Wright would drive full-time for Young's Motorsports after driving part-time for GMS Racing in the 2020 ARCA Menards Series and a one-off truck race at the Daytona road course. He also drove part-time for JP Racing in the 2020 ARCA Menards Series West. Wright would later miss the Atlanta race and the Bristol dirt race after testing positive for COVID-19. JR Motorsports Xfinity Series driver Josh Berry substituted for Wright at Atlanta, and Trackhouse Cup Series driver Daniel Suárez substituted for Wright in the Bristol dirt race.
On January 17, 2021, it was announced that Sam Mayer would drive the No. 75 for Henderson Motorsports in seven races, beginning at the Daytona Road Course, in addition to his part-time schedule in the No. 32 for Bret Holmes Racing. The rest of his schedule with Henderson has yet to be announced.
On February 4, 2021, ThorSport Racing announced that Christian Eckes would drive 10 races in the No. 98 truck, sharing the ride with Grant Enfinger throughout the season.
On February 15, 2021, Kyle Busch Motorsports announced that road course ringer Parker Chase would be in their No. 51 for the Daytona Road Course and Circuit of the Americas. Chase was Kyle Busch's teammate in the 2020 Rolex 24 and he made his stock car debut in the ARCA Menards Series race at the DRC in 2020.
On February 26, 2021, Bill Lester announced on NASCAR Race Hub that he would be coming out of retirement to compete in the Truck Series race at his home track of Atlanta. He previously competed full-time in the series from 2002 to midway through 2007. On March 12, it was announced that Lester would drive the No. 17 for David Gilliland Racing.
On March 4, 2021, Roper Racing announced that full-time Cup Series driver and accomplished dirt racer Chase Briscoe would return to the Truck Series to drive the No. 04 for the team in the Bristol dirt race, replacing team owner/driver Cory Roper in the truck. It was then announced on April 26 that Briscoe would also make a start on pavement for the team at Kansas.
On March 15, 2021, Hill Motorsports announced that 2018 World of Outlaws Late Model Series champion Mike Marlar would drive their No. 56 in the Bristol dirt race. This will be his second start in the series after his debut at Eldora in 2019, where he finished fourth in the No. 33 for Reaume Brothers Racing.
On March 17, 2021, Niece Motorsports announced that full-time Cup Series driver and accomplished dirt racer Kyle Larson would return to the Truck Series to drive the No. 44 for the team in the Bristol dirt race. This will also be his first start in the series since returning from his suspension in 2020.
On April 8, 2021, Taylor Gray was forced to miss his first race at Richmond after suffering a fractured L4 vertebra, left foot, and ankle in a single-car accident in Statesville, NC. On July 6, David Gilliland Racing announced that he had recovered and would make his Truck Series debut at Watkins Glen.
On April 14, 2021, CMI Motorsports announced that Ryan Reed would drive their No. 49 in the race at Richmond, which they were able to qualify for. This was Reed's first start in the Truck Series and in NASCAR since 2019 when he drove the DGR No. 17 at Las Vegas and only his second start since losing his full-time Xfinity Series ride with Roush Fenway Racing. He has worked as a driver coach during his time without a ride. Reed would then return to the truck in the next race at Kansas. On April 3, it was announced that Reed would replace Raphaël Lessard in the No. 24 for GMS Racing at Darlington due to the lack of funding for Lessard to remain in the truck full-time.
On April 19, 2021, Kyle Busch revealed in an interview that Corey Heim would make his Truck Series debut for his team at Darlington in the No. 51. Heim, a Toyota development driver, competes full-time in the ARCA Menards Series for Venturini Motorsports. He will also be in the truck at Martinsville.
On April 21, 2021, C. J. McLaughlin revealed on an appearance on a podcast that he would drive for Reaume Brothers Racing in five races, which would include Kansas and Charlotte in the No. 34. McLaughlin previously drove for the team in the race at Iowa in 2019.
On July 28, 2021, it was announced that ARCA Menards Series driver Toni Breidinger, who was scheduled to make her Truck Series debut at some point in 2021 in a part-time fourth Young's Motorsports truck, the No. 82, would be leaving the team to drive for Venturini Motorsports, the team she drove for part-time in ARCA in 2018.
On September 10, 2021, Jordan Anderson Racing announced that Sage Karam would make his Truck Series debut in their No. 3 truck. The IndyCar driver made his NASCAR debut in the team's No. 31 car in the Xfinity Series race at the Indianapolis Motor Speedway road course.
On October 18, 2021, it was announced that ARCA Menards Series West driver Dean Thompson would make his Truck Series debut in the season-finale at Phoenix. He also drove for the team in the main ARCA Menards Series season-finale at Kansas.
Crew chiefs
On October 26, 2020, it was announced that Kyle Busch Motorsports crew chief Rudy Fugle, who won 28 races, two drivers' championships and five owners' championships in the Truck Series, would become the crew chief for William Byron (who he worked with in 2016) in the Cup Series for Hendrick Motorsports in 2021 after the retirement of Chad Knaus.
On November 25, 2020, it was announced that Mike Hillman Jr. would join DGR-Crosley to be Hailie Deegan's crew chief in 2021. He previously worked for Kyle Busch Motorsports, serving as Raphaël Lessard's crew chief on the No. 4 truck in 2020.
On December 8, 2020, Kyle Busch Motorsports announced changes to their crew chief lineup in 2021.
Eric Phillips, who was a crew chief for KBM from 2010 to 2014, would return to the team to be the crew chief of the No. 4, now driven by John Hunter Nemechek, replacing Mike Hillman Jr, who left to crew chief Hailie Deegan at DGR-Crosley. Phillips was previously the car chief for Denny Hamlin's No. 11 Joe Gibbs Racing Cup Series team and prior to that, the crew chief of JGR's No. 18 Xfinity Series team.
Danny Stockman Jr. will move from the No. 51 to the No. 18, now driven by Chandler Smith (who he previously worked with part-time in the No. 51), replacing Rudy Fugle, who left to crew chief the No. 24 of William Byron for Hendrick Motorsports in the Cup Series.
Mardy Lindley, a four-time East Series championship-winning crew chief (including the last two years with Sam Mayer at GMS Racing), replaces Stockman as the crew chief of the No. 51.
On December 10, 2020, it was announced that Kevin Bellicourt, who crew chiefed the No. 19 of Derek Kraus for McAnally-Hilgemann Racing in 2020, would be joining Spire Motorsports to crew chief their No. 77 car in the NASCAR Cup Series. He is the 2015 East Series championship-winning crew chief.
On December 22, 2020, Matt Noyce, who was the crew chief for the No. 99 of Ben Rhodes at ThorSport Racing for the last two years, and prior to that was the crew chief of Jesse Little's No. 97 team, was announced to be moving to McAnally-Hilgemann to replace Bellicourt as the crew chief for the No. 19 of Kraus. On April 1, 2021, McAnally-Hilgemann announced that Noyce had left the team and would be replaced by former MBM Motorsports Xfinity Series crew chief Mark Hillman. Noyce would move to Niece Motorsports, where he replaced Tim Mooney as crew chief of their No. 44 truck.
On February 4, 2021, ThorSport Racing announced that Jeriod Prince would be the new crew chief for the No. 98 truck of Enfinger and Eckes. He has crew chiefed for the team in the past on the No. 13 truck in 2013 and 2014 and on the team's former ARCA car, the No. 44 of Frank Kimmel, in 2012 and 2013. They also announced that the crew chief for the No. 99 truck of Rhodes would be Rich Lushes, who was the crew chief of ThorSport's No. 13 in 2018 when it was driven by Myatt Snider. Both Prince and Lushes had been truck chiefs after the years they were crew chiefs for ThorSport.
On April 30, 2021, it was announced that Shane Wilson had been released as crew chief of the No. 15 truck of Tanner Gray for David Gilliland Racing. Seth Smith, who is one of the crew chiefs of DGR's part-time No. 17 truck, became the interim crew chief for the No. 15 starting at Kansas.
On May 4, 2021, it was revealed that Niece Motorsports would be making changes to their crew chief lineup beginning at Darlington. Phil Gould, who was the crew chief for the No. 45, moved to the No. 42 of Carson Hocevar. The previous crew chief of the No. 42, Cody Efaw, moved to the No. 44, replacing Matt Noyce, who replaced Gould as the crew chief of the No. 45.
On May 28, 2021, it was revealed that Jon Leonard would be replacing Tripp Bruce as Stewart Friesen's crew chief. Leonard was an engineer for the team and previously a crew chief for Todd Gilliland and Front Row Motorsports in the Truck Series and an engineer and interim crew chief with Leavine Family Racing in the Cup Series. Bruce is also the team's competition director and moved to that role full-time after Leonard's promotion to crew chief.
Interim crew chiefs
Danny Stockman Jr. was the crew chief of the No. 51 Kyle Busch Motorsports truck in 2020 and was suspended for three races after the truck had a loose wheel during one of their pit stops in the SpeedyCash.com 400 at Texas on October 25, 2020. Because there were only two races left in the season, the third race of his suspension came at the 2021 season-opener at Daytona. Wes Ward was the interim crew chief at Martinsville and Phoenix in 2020. Stockman moved from the No. 51 to the No. 18 truck for KBM in 2021. 1992 Cup Series champion crew chief Paul Andrews, who joined KBM as their new shop foreman for 2021 after previously being a crew chief for the closed Chad Bryant Racing team in the ARCA Menards Series, filled in for Stockman at Daytona.
On March 15, 2021, it was announced that Patrick Magee, a crew chief for MBM Motorsports in the Xfinity Series, would substitute for Greg Ely as the crew chief for the No. 56 Hill Motorsports truck at the Bristol dirt race because Ely had an illness. Magee was available with the Xfinity Series off that weekend. The driver of that truck for that race was Mike Marlar, who also was driving MBM's No. 66 in the Cup Series Bristol dirt race, a car usually driven by Hill Motorsports driver/co-owner Timmy Hill.
On June 18, 2021, David Gilliland Racing's No. 15 truck, driven by Tanner Gray, and the DGR-aligned No. 38 Front Row Motorsports truck, driven by Todd Gilliland, both failed pre-qualifying inspection before the race at Nashville. Crew chiefs Seth Smith and Chris Lawson were ejected from the track and replaced by interim crew chiefs Jacob Hampton (the engineer for Gray's No. 15) and David Gilliland (DGR owner, part-time driver, and father of Todd Gilliland).
On October 6, 2021, NASCAR indefinitely suspended Young's Motorsports No. 2 crew chief Eddie Troconis for a violation of Section 12.8.1.c (Behavioral) of the NASCAR Rule Book.
Manufacturers
On January 8, 2021, Ray Ciccarelli revealed that he would be upgrading the equipment for CMI Motorsports ahead of the 2021 season. Over the offseason, he bought a Ford from ThorSport Racing (which switched manufacturers for 2021) as well as a Toyota from another team which was not specified. In turn, he then sold some of his old Chevy trucks. CMI now has trucks from all three manufacturers in their fleet, which Ciccarelli stated is in order to attract part-time drivers to the team that have a contract with one of the manufacturers.
On January 18, 2021, ThorSport Racing announced that Ford would not be their manufacturer in 2021. On February 4, the team announced that they would be switching back to Toyota, which was their manufacturer from 2012 to 2017.
Sponsorship
On November 19, 2020, it was revealed that Tim Viens' primary sponsors in 2021 on the CMI Motorsports No. 49 truck (it ended up instead being the No. 83) would be Subsafe (who sponsored him once in 2020 in the Pocono race) and Boat Gadget, who replace Patriots PAC of America, which was a PAC for President Donald Trump's 2020 re-election campaign.
On January 14, 2021, it was announced that Credit MRI would join Young's Motorsports and sponsor the No. 20 of Spencer Boyd in multiple races in 2021.
On March 1, 2021, Marcus Lemonis, the CEO of series title sponsor Camping World, tweeted that he would offer $15,000 to any team that ran the race at Las Vegas in March with a Camping World paint scheme. All the teams (10 in total) that were set to be unsponsored received sponsorship from Camping World, and those teams were GMS Racing's No. 2 of Sheldon Creed and No. 24 of Raphaël Lessard, Jordan Anderson in his team's No. 3, Norm Benning in his team's No. 6, Grant Enfinger in CR7 Motorsports' No. 9, both of the Reaume Brothers Racing trucks (the No. 33 of Jesse Iwuji and the No. 34 of B. J. McLeod), Dawson Cram in his team's No. 41, Tyler Hill in his team's No. 56, and the Henderson Motorsports No. 75 of Parker Kligerman.
Lemonis would do the same thing for the next race at Atlanta, but this time with his Overton's company. Creed's No. 2, Anderson's No. 3, Benning's No. 6, the Reaume No. 34 (driven by Ryan Ellis in that race), Cram's No. 41, and Kligerman's No. 75 would again receive backing from Lemonis and had Overton's sponsorship. The GMS No. 21 of Zane Smith and Niece Motorsports Nos. 40, 42 and 45 of Ryan Truex, Carson Hocevar, and Brett Moffitt, respectively, also carried sponsorship from Lemonis through the Overton's brand for the first time. In addition to those ten Overton's trucks, Camping World was one of the sponsors on Bill Lester's No. 17 truck for DGR that weekend.
On August 31, 2021, Young's Motorsports announced that DogeCola, a beverage brand owned by Dogecoin, would sponsor the No. 12 at Darlington.
Rule changes
NASCAR has increased the field count of the Truck Series to 36 trucks per race, which was the same field count of the series until the 2014 season. In the races without practice or qualifying, the field count will be expanded to a maximum of 40 vehicles, similar to 2020 due to the COVID-19 pandemic.
Schedule
Daytona, Phoenix, Texas, Circuit of the Americas, and Gateway revealed their race dates ahead of the release of the entire schedule, which NASCAR announced on November 19, 2020. The schedule has since been adjusted twice, with the most recent changes announced on May 25, 2021.
Note: The Triple Truck Challenge races (the first race at Darlington and races at Circuit of the Americas and Charlotte) are listed in bold.
Broadcasting
Fox will air the entirety of the schedule in 2021, as their contract TV deal goes through 2024. It has yet to be announced whether any races will be aired on the main Fox network. The vast majority of the races are broadcast on FS1, but typically changes are made during the season.
Fox made a change to their broadcasting lineup for their Truck Series coverage for the 2021 season, as pit reporter Alan Cavanna was released by the network ahead of the season.
Schedule changes
There is one fewer race on the schedule, which now contains 22 races instead of 23. This is also the first time since 2000 that the Truck Series has had more than one road course race (when Watkins Glen and Portland International Raceway were on the schedule). The original schedule had three road course races, the most since 1999 (when it had those two races plus Topeka). The current schedule features three such events (the Daytona road course, Austin, and Watkins Glen).
Eldora Speedway, the series' original dirt race which had been on the schedule since 2013 (except for 2020 when it was removed due to COVID-19), was replaced by a race at Knoxville Raceway in Iowa, home of the Knoxville Nationals, which will be run on Friday, July 9.
The series has two dirt races for the first time with the addition of a spring race at Bristol, which will see dirt temporarily put onto the track's surface. The Cup Series will also be running with the Truck Series on that weekend. This race replaces the race at Kentucky Speedway, which will not host any NASCAR races in 2021.
Circuit of the Americas replaces the INDYCAR weekend race at Texas (which was run at the playoff weekend for Cup last year because of the pandemic). This gives the series a second road course race.
After a 21-year absence, Watkins Glen returns to the schedule for the first time since 2000, giving the series a third road course race. The race will be in August on the same weekend as the Cup and Xfinity races there. This race replaces the race at Michigan, which will not host a Truck race for the first time since 2001 because of logistics.
Nashville Superspeedway is added to the schedule, replacing Dover, as is also the case with the Cup and Xfinity Series. This is the first race for the series at Nashville since 2011. The series will not race at Dover at all this season. It is the first time since 1999 where the track has not been on the schedule.
Iowa Speedway is permanently taken off the schedule after its race on the 2020 schedule was removed as part of the COVID-19 schedule changes.
Darlington Raceway, which was not originally on the 2020 schedule but added on as part of the COVID-19 schedule changes, becomes a permanent event for the first time since 2011. The race moves from Labor Day weekend to May as part of the new second Cup and Xfinity Series races at the track on Mother's Day weekend in May, as the annual throwback weekend moves from Darlington's Cup Playoff weekend to the spring weekend for 2021.
Canadian Tire Motorsport Park was scheduled to return to the schedule after being taken off of the 2020 schedule during the season as part of the COVID-19 schedule changes. CTMP was to be the Truck race run on Labor Day weekend, its usual spot on the schedule, with Darlington, (which was on that weekend in 2020) moving to May.
Schedule changes due to the COVID-19 pandemic
Due to state COVID regulations in California, NASCAR cancelled the Cup and Xfinity races at Auto Club Speedway, scheduled for February 27 and 28, and those races were moved to the Daytona infield road course (the Truck Series did not have a race scheduled at Auto Club this season). With the season-opening races for all three series at Daytona being two weeks before, NASCAR made the Daytona Road Course the second race of the season for all three series, bumping the Homestead races back by one week to where Auto Club had been, along with cancelling the Truck Series race at Homestead and moving it to the Daytona Road Course for financial and logistics reasons (NASCAR could keep timing and scoring equipment at the circuit an additional week instead of setting it up and taking it down for two trips to Daytona within three weeks).
Due to COVID-19 restrictions in terms of international travel, the race at Canadian Tire Motorsport Park was removed again and replaced by a second race at Darlington Raceway. The second Darlington date will be held on Sunday afternoon before the Cup Series Cook Out Southern 500 that evening.
Results and standings
Race results
Drivers' championship
(key) Bold – Pole position awarded by time. Italics – Pole position set by final practice results or owner's points. * – Most laps led. 1 – Stage 1 winner. 2 – Stage 2 winner. 1-10 – Regular season top 10 finishers.
. – Eliminated after Round of 10
. – Eliminated after Round of 8
Owners' championship (Top 15)
(key) Bold – Pole position awarded by time. Italics – Pole position set by final practice results or owner's points. * – Most laps led. 1 – Stage 1 winner. 2 – Stage 2 winner. 1-10 – Regular season top 10 finishers.
. – Eliminated after Round of 10
. – Eliminated after Round of 8
Manufacturers' championship
See also
2021 NASCAR Cup Series
2021 NASCAR Xfinity Series
2021 ARCA Menards Series
2021 ARCA Menards Series East
2021 ARCA Menards Series West
2021 NASCAR Whelen Modified Tour
2021 NASCAR Pinty's Series
2021 NASCAR PEAK Mexico Series
2021 NASCAR Whelen Euro Series
2021 eNASCAR iRacing Pro Invitational Series
2021 SRX Series
References
Camping World Truck Series |
23810743 | https://en.wikipedia.org/wiki/List%20of%20computer%20science%20conference%20acronyms | List of computer science conference acronyms | This is a list of academic conferences in computer science, ordered by their acronyms or abbreviations.
A
AAAI – AAAI Conference on Artificial Intelligence
AAMAS – International Conference on Autonomous Agents and Multiagent Systems
ABZ – International Conference on Abstract State Machines, Alloy, B and Z
ACL – Annual Meeting of the Association for Computational Linguistics
AE – Artificial Evolution Conference
ALGO – ALGO Conference
AMCIS – Americas Conference on Information Systems
ANTS – Algorithmic Number Theory Symposium
ARES – International Conference on Availability, Reliability and Security
ASIACRYPT – International Conference on the Theory and Application of Cryptology and Information Security
ASP-DAC – Asia and South Pacific Design Automation Conference
ASE – IEEE/ACM International Conference on Automated Software Engineering
ASWEC – Australian Software Engineering Conference
ATMOS – Workshop on Algorithmic Approaches for Transportation Modeling, Optimization, and Systems
C
CADE – Conference on Automated Deduction
CAV – Computer Aided Verification
CC – International Conference on Compiler Construction
CCSC – Consortium for Computing Sciences in Colleges
CHES – Workshop on Cryptographic Hardware and Embedded Systems
CHI – ACM Conference on Human Factors in Computing Systems
CIAA – International Conference on Implementation and Application of Automata
CIBB – International Meeting on Computational Intelligence Methods for Bioinformatics and Biostatistics
CICLing – International Conference on Intelligent Text Processing and Computational Linguistics
CIDR – Conference on Innovative Data Systems Research
CIKM – Conference on Information and Knowledge Management
CONCUR – International Conference on Concurrency Theory
CRYPTO – International Cryptology Conference
CVPR – Conference on Computer Vision and Pattern Recognition
D
DAC – Design Automation Conference
DATE – Design, Automation, and Test in Europe
DCFS – International Workshop on Descriptional Complexity of Formal Systems
DISC – International Symposium on Distributed Computing
DLT – International Conference on Developments in Language Theory
DSN – International Conference on Dependable Systems and Networks
E
ECAI – European Conference on Artificial Intelligence
ECCO – Conference of the European Chapter on Combinatorial Optimization
ECIS – European Conference on Information Systems
ECML PKDD – European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases
ECOOP – European Conference on Object-Oriented Programming
ECSS – European Computer Science Summit
ER – International Conference on Conceptual Modeling
ESA – European Symposium on Algorithms
ESOP – European Symposium on Programming
ESWC – Extended (formerly European) Semantic Web Conference
ETAPS – European Joint Conferences on Theory and Practice of Software
EUROCRYPT – International Conference on the Theory and Applications of Cryptographic Techniques
Eurographics – Annual Conference of the European Association for Computer Graphics
EWSN – European Conference on Wireless Sensor Networks
F
FASE – International Conference on Fundamental Approaches to Software Engineering
FAST – USENIX Conference on File and Storage Technologies
FCRC – Federated Computing Research Conference
FLoC – Federated Logic Conference
FOCS – IEEE Symposium on Foundations of Computer Science
FORTE – IFIP International Conference on Formal Techniques for Networked and Distributed Systems
FoSSaCS – International Conference on Foundations of Software Science and Computation Structures
FSE – Fast Software Encryption Workshop
FTP – International Workshop on First-Order Theorem Proving
G
GD – International Symposium on Graph Drawing
GlobeCom – IEEE Global Communications Conference
GraphiCon – International Conference on Computer Graphics and Vision
H
HICSS – Hawaii International Conference on System Sciences
HiPC – International Conference on High Performance Computing
HOPL – History of Programming Languages Conference
Hot Interconnects – IEEE Symposium on High Performance Interconnects
I
ICALP – International Colloquium on Automata, Languages and Programming
ICASSP – International Conference on Acoustics, Speech, and Signal Processing
ICCAD – International Conference on Computer-Aided Design
ICC – IEEE International Conference on Communications
ICCIT – International Conference on Computer and Information Technology
ICCV – International Conference on Computer Vision
ICDCS – International Conference on Distributed Computing Systems
ICFP – International Conference on Functional Programming
ICIS – International Conference on Information Systems
ICL – International Conference on Interactive Computer Aided Learning
ICLP – International Conference on Logic Programming
ICML – International Conference on Machine Learning
ICPADS – International Conference on Parallel and Distributed Systems
ICSE – International Conference on Software Engineering
ICSOC – International Conference on Service Oriented Computing
ICSR – International Conference on Software Reuse
ICTer – International Conference on Advances in ICT for Emerging Regions
ICWS – International Conference on Web Services
IJCAI – International Joint Conference on Artificial Intelligence
IJCAR – International Joint Conference on Automated Reasoning
IndoCrypt – International Conference on Cryptology in India
IPDPS – IEEE International Parallel and Distributed Processing Symposium
IPSN – ACM/IEEE International Conference on Information Processing in Sensor Networks
ISAAC – International Symposium on Algorithms and Computation
ISCA – International Symposium on Computer Architecture
ISCAS – IEEE International Symposium on Circuits and Systems
ISMAR – IEEE International Symposium on Mixed and Augmented Reality
ISWC – International Semantic Web Conference
ISPD – International Symposium on Physical Design
ISSCC – International Solid-State Circuits Conference
ISWC – International Symposium on Wearable Computers
ITNG – International Conference on Information Technology: New Generations
K
KDD – ACM SIGKDD Conference on Knowledge Discovery and Data Mining
L
LICS – IEEE Symposium on Logic in Computer Science
LREC – International Conference on Language Resources and Evaluation
M
MM – ACM International Conference on Multimedia
MECO – Mediterranean Conference on Embedded Computing
MobiCom – ACM International Conference on Mobile Computing and Networking
MobiHoc – ACM International Symposium on Mobile Ad Hoc Networking and Computing
MobileHCI – Conference on Human-Computer Interaction with Mobile Devices and Services
N
NAACL – Annual Conference of the North American Chapter of the Association for Computational Linguistics
NIPS – Conference on Neural Information Processing Systems
NeurIPS – Conference on Neural Information Processing Systems
NIME – New Interfaces for Musical Expression
O
OOPSLA – Conference on Object-Oriented Programming, Systems, Languages, and Applications
P
PACIS – Pacific Asia Conference on Information Systems
PIMRC – International Symposium on Personal, Indoor and Mobile Radio Communications
PKC – International Workshop on Practice and Theory in Public Key Cryptography
PKDD – European Conference on Principles and Practice of Knowledge Discovery in Databases
PLDI – ACM SIGPLAN Conference on Programming Language Design and Implementation
PLoP – Pattern Languages of Programs
PODC – ACM Symposium on Principles of Distributed Computing
PODS – ACM Symposium on Principles of Database Systems
POPL – Symposium on Principles of Programming Languages
POST – Conference on Principles of Security and Trust
PPoPP – ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming
PSB – Pacific Symposium on Biocomputing
R
RECOMB – Research in Computational Molecular Biology
REV – International Conference on Remote Engineering and Virtual Instrumentation
RSA – RSA Conference
RTA – International Conference on Rewriting Techniques and Applications
S
SAC – ACM SIGAPP Symposium on Applied Computing
SAC – Selected Areas in Cryptography
SEAMS – Software Engineering for Adaptive and Self-Managing Systems
SEFM – International Conference on Software Engineering and Formal Methods
SenSys – ACM Conference on Embedded Networked Sensor Systems
SIGCOMM – ACM SIGCOMM Conference
SIGCSE – ACM Technical Symposium on Computer Science Education
SIGDOC – ACM International Conference on Design of Communication
SIGGRAPH – International Conference on Computer Graphics and Interactive Techniques
SIGIR – Annual International ACM SIGIR Conference
SIGMOD – ACM SIGMOD Conference
SPAA – ACM Symposium on Parallelism in Algorithms and Architectures
SRDS – IEEE International Symposium on Reliable Distributed Systems
STACS – Symposium on Theoretical Aspects of Computer Science
STOC – ACM Symposium on Theory of Computing
SWAT – Scandinavian Symposium and Workshops on Algorithm Theory
T
TABLEAUX – International Conference on Automated Reasoning with Analytic Tableaux and Related Methods
TACAS – International Conference on Tools and Algorithms for the Construction and Analysis of Systems
TAMC – International Conference on Theory and Applications of Models of Computation
TCC – Theory of Cryptography Conference
TPHOLs – Theorem Proving in Higher-Order Logics
TSD – Text, Speech and Dialogue
U
USENIX ATC – USENIX Annual Technical Conference
V
VIS – IEEE Visualization
VLDB – International Conference on Very Large Data Bases
W
WABI – Workshop on Algorithms in Bioinformatics
WADS – Algorithms and Data Structures Symposium
WAE – Workshop on Algorithm Engineering
WAOA – Workshop on Approximation and Online Algorithms
WDAG – Workshop on Distributed Algorithms on Graphs
WikiSym – International Symposium on Wikis and Open Collaboration
WINE – Conference on Web and Internet Economics
WMSCI – World Multiconference on Systemics, Cybernetics and Informatics
WWW – World Wide Web Conference
Z
ZUM – Z User Meeting
See also
List of computer science conferences for more conferences organised by field.
Conference acronym index for conferences and workshops published in LNCS, LNAI and LNBI proceedings series by Springer.
References
Computer science conference abbreviations
computer science conferences |
33520674 | https://en.wikipedia.org/wiki/Software-defined%20networking | Software-defined networking | Software-defined networking (SDN) technology is an approach to network management that enables dynamic, programmatically efficient network configuration in order to improve network performance and monitoring, making it more like cloud computing than traditional network management. SDN is meant to address the static architecture of traditional networks. SDN attempts to centralize network intelligence in one network component by disassociating the forwarding process of network packets (data plane) from the routing process (control plane). The control plane consists of one or more controllers, which are considered the brains of the SDN network, where the whole intelligence is incorporated. However, centralization has its own drawbacks in terms of security, scalability, and elasticity, and these remain the main open issues of SDN.
SDN was commonly associated with the OpenFlow protocol (for remote communication with network plane elements for the purpose of determining the path of network packets across network switches) since the latter's emergence in 2011. However, since 2012, proprietary systems also used the term. These include Cisco Systems' Open Network Environment and Nicira's network virtualization platform.
SD-WAN applies similar technology to a wide area network (WAN).
History
The history of SDN principles can be traced back to the separation of the control and data plane first used in the public switched telephone network as a way to simplify provisioning and management well before this architecture began to be used in data networks.
The Internet Engineering Task Force (IETF) began considering various ways to decouple the control and forwarding functions in a proposed interface standard published in 2004 appropriately named "Forwarding and Control Element Separation" (ForCES). The ForCES Working Group also proposed a companion SoftRouter Architecture. Additional early standards from the IETF that pursued separating control from data include the Linux Netlink as an IP Services Protocol and A Path Computation Element (PCE)-Based Architecture.
These early attempts failed to gain traction for two reasons. One is that many in the Internet community viewed separating control from data to be risky, especially owing to the potential for a failure in the control plane. The second is that vendors were concerned that creating standard application programming interfaces (APIs) between the control and data planes would result in increased competition.
The use of open-source software in split control/data plane architectures traces its roots to the Ethane project at Stanford's computer science department. Ethane's simple switch design led to the creation of OpenFlow. An API for OpenFlow was first created in 2008. That same year witnessed the creation of NOX, an operating system for networks.
Several patent applications were filed by independent researchers in 2007 describing practical applications for SDN, an operating system for networks, network infrastructure compute units as a multi-core CPU, and a method for virtual network segmentation based on functionality. These applications became public in 2009, and since the patents were abandoned, all information in them is free for public use and cannot be patented by anyone.
Research of SDN included emulators such as vSDNEmul, EstiNet, and Mininet.
Work on OpenFlow continued at Stanford, including the creation of testbeds to evaluate the use of the protocol in a single campus network, as well as across the WAN as a backbone for connecting multiple campuses. In academic settings there were a few research and production networks based on OpenFlow switches from NEC and Hewlett-Packard, as well as on Quanta Computer whiteboxes, starting from about 2009.
Beyond academia, the first deployments were by Nicira in 2010 to control OVS from Onix, co-developed with NTT and Google. A notable deployment was Google's B4 in 2012. Google later acknowledged that its first OpenFlow-with-Onix deployments in its data centers were running at the same time. Another known large deployment is at China Mobile.
The Open Networking Foundation was founded in 2011 to promote SDN and OpenFlow.
At the 2014 Interop and Tech Field Day, software-defined networking was demonstrated by Avaya using shortest path bridging (IEEE 802.1aq) and OpenStack as an automated campus, extending automation from the data center to the end device, removing manual provisioning from service delivery.
Concept
SDN architectures decouple network control and forwarding functions, enabling the network control to become directly programmable and the underlying infrastructure to be abstracted from applications and network services.
The OpenFlow protocol can be used in SDN technologies. The SDN architecture is:
Directly programmable: Network control is directly programmable because it is decoupled from forwarding functions.
Agile: Abstracting control from forwarding lets administrators dynamically adjust network-wide traffic flow to meet changing needs.
Centrally managed: Network intelligence is (logically) centralized in software-based SDN controllers that maintain a global view of the network, which appears to applications and policy engines as a single, logical switch.
Programmatically configured: SDN lets network managers configure, manage, secure, and optimize network resources very quickly via dynamic, automated SDN programs, which they can write themselves because the programs do not depend on proprietary software.
Open standards-based and vendor-neutral: When implemented through open standards, SDN simplifies network design and operation because instructions are provided by SDN controllers instead of multiple, vendor-specific devices and protocols.
The need for a new network architecture
The explosion of mobile devices and content, server virtualization, and the advent of cloud services are among the trends driving the networking industry to re-examine traditional network architectures. Many conventional networks are hierarchical, built with tiers of Ethernet switches arranged in a tree structure. This design made sense when client-server computing was dominant, but such a static architecture is ill-suited to the dynamic computing and storage needs of today's enterprise data centers, campuses, and carrier environments. Some of the key computing trends driving the need for a new network paradigm include:
Changing traffic patterns
Within the enterprise data center, traffic patterns have changed significantly. In contrast to client-server applications where the bulk of the communication occurs between one client and one server, today's applications access different databases and servers, creating a flurry of "east-west" machine-to-machine traffic before returning data to the end user device in the classic "north-south" traffic pattern. At the same time, users are changing network traffic patterns as they push for access to corporate content and applications from any type of device (including their own), connecting from anywhere, at any time. Finally, many enterprise data center managers are contemplating a utility computing model, which might include a private cloud, public cloud, or some mix of both, resulting in additional traffic across the wide area network.
The "consumerization of IT"
Users are increasingly employing mobile personal devices such as smartphones, tablets, and notebooks to access the corporate network. IT is under pressure to accommodate these personal devices in a fine-grained manner while protecting corporate data and intellectual property and meeting compliance mandates.
The rise of cloud services
Enterprises have enthusiastically embraced both public and private cloud services, resulting in unprecedented growth of these services. Enterprise business units now want the agility to access applications, infrastructure, and other IT resources on demand and à la carte. To add to the complexity, IT's planning for cloud services must be done in an environment of increased security, compliance, and auditing requirements, along with business reorganizations, consolidations, and mergers that can change assumptions overnight. Providing self-service provisioning, whether in a private or public cloud, requires elastic scaling of computing, storage, and network resources, ideally from a common viewpoint and with a common suite of tools.
"Big data" means more bandwidth
Handling today's "big data" or mega datasets requires massive parallel processing on thousands of servers, all of which need direct connections to each other. The rise of mega datasets is fueling a constant demand for additional network capacity in the data center. Operators of hyperscale data center networks face the daunting task of scaling the network to previously unimaginable size, maintaining any-to-any connectivity without going broke.
Energy use on large datacenters
As the Internet of Things, cloud computing, and SaaS emerged, the need for larger data centers increased the energy consumption of those facilities. Many researchers have improved SDN's energy efficiency by applying existing routing techniques to dynamically adjust the network data plane and save energy. Techniques to improve control-plane energy efficiency are also being researched.
Architectural components
The following list defines and explains the architectural components:
SDN Application
SDN Applications are programs that explicitly, directly, and programmatically communicate their network requirements and desired network behavior to the SDN Controller via a northbound interface (NBI). In addition, they may consume an abstracted view of the network for their internal decision-making purposes. An SDN Application consists of one SDN Application Logic and one or more NBI Drivers. SDN Applications may themselves expose another layer of abstracted network control, thus offering one or more higher-level NBIs through respective NBI agents.
SDN Controller
The SDN Controller is a logically centralized entity in charge of (i) translating the requirements from the SDN Application layer down to the SDN Datapaths and (ii) providing the SDN Applications with an abstract view of the network (which may include statistics and events). An SDN Controller consists of one or more NBI Agents, the SDN Control Logic, and the Control to Data-Plane Interface (CDPI) driver. Definition as a logically centralized entity neither prescribes nor precludes implementation details such as the federation of multiple controllers, the hierarchical connection of controllers, communication interfaces between controllers, nor virtualization or slicing of network resources.
SDN Datapath
The SDN Datapath is a logical network device that exposes visibility and uncontested control over its advertised forwarding and data processing capabilities. The logical representation may encompass all or a subset of the physical substrate resources. An SDN Datapath comprises a CDPI agent and a set of one or more traffic forwarding engines and zero or more traffic processing functions. These engines and functions may include simple forwarding between the datapath's external interfaces or internal traffic processing or termination functions. One or more SDN Datapaths may be contained in a single (physical) network element—an integrated physical combination of communications resources, managed as a unit. An SDN Datapath may also be defined across multiple physical network elements. This logical definition neither prescribes nor precludes implementation details such as the logical to physical mapping, management of shared physical resources, virtualization or slicing of the SDN Datapath, interoperability with non-SDN networking, nor the data processing functionality, which can include OSI layer 4-7 functions.
SDN Control to Data-Plane Interface (CDPI)
The SDN CDPI is the interface defined between an SDN Controller and an SDN Datapath, which provides at least (i) programmatic control of all forwarding operations, (ii) capabilities advertisement, (iii) statistics reporting, and (iv) event notification. One value of SDN lies in the expectation that the CDPI is implemented in an open, vendor-neutral and interoperable way.
SDN Northbound Interfaces (NBI)
SDN NBIs are interfaces between SDN Applications and SDN Controllers and typically provide abstract network views and enable direct expression of network behavior and requirements. This may occur at any level of abstraction (latitude) and across different sets of functionality (longitude). One value of SDN lies in the expectation that these interfaces are implemented in an open, vendor-neutral and interoperable way.
SDN Control Plane
Centralized - Hierarchical - Distributed
The implementation of the SDN control plane can follow a centralized, hierarchical, or decentralized design. Initial SDN control plane proposals focused on a centralized solution, where a single control entity has a global view of the network. While this simplifies the implementation of the control logic, it has scalability limitations as the size and dynamics of the network increase. To overcome these limitations, several approaches have been proposed in the literature that fall into two categories, hierarchical and fully distributed approaches. In hierarchical solutions, distributed controllers operate on a partitioned network view, while decisions that require network-wide knowledge are taken by a logically centralized root controller. In distributed approaches, controllers operate on their local view or they may exchange synchronization messages to enhance their knowledge. Distributed solutions are more suitable for supporting adaptive SDN applications.
Controller Placement
A key issue when designing a distributed SDN control plane is to decide on the number and placement of control entities. An important parameter to consider while doing so is the propagation delay between the controllers and the network devices, especially in the context of large networks. Other objectives that have been considered involve control path reliability, fault tolerance, and application requirements.
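A minimal sketch of this trade-off, under the assumption of a known site-to-site propagation delay matrix (the topology, delay values, and helper names below are hypothetical): a greedy heuristic adds one controller site at a time, each time picking the site that most reduces the average switch-to-controller delay. Real placement studies additionally weigh objectives such as control path reliability and fault tolerance.

```python
def average_delay(delays, sites, placement):
    # Each switch talks to its closest chosen controller site.
    return sum(min(delays[s][c] for c in placement) for s in sites) / len(sites)

def greedy_placement(delays, sites, k):
    """Pick k controller sites greedily, each step adding the site that
    most reduces the average switch-to-controller propagation delay."""
    placement = []
    candidates = set(sites)
    for _ in range(k):
        best = min(candidates - set(placement),
                   key=lambda c: average_delay(delays, sites, placement + [c]))
        placement.append(best)
    return placement

# Toy example: symmetric propagation delays (ms) between 4 candidate sites.
delays = {
    "A": {"A": 0, "B": 5, "C": 9, "D": 12},
    "B": {"A": 5, "B": 0, "C": 4, "D": 8},
    "C": {"A": 9, "B": 4, "C": 0, "D": 6},
    "D": {"A": 12, "B": 8, "C": 6, "D": 0},
}
sites = list(delays)
print(greedy_placement(delays, sites, k=2))  # ['B', 'D'] for this toy data
```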
SDN flow forwarding
Proactive vs Reactive vs Hybrid
OpenFlow uses TCAM tables to route packet sequences (flows). When a packet arrives at a switch, a flow table lookup is performed. Depending on the implementation, the lookup is done in a software flow table (if a vSwitch is used) or in an ASIC (if implemented in hardware). If no matching flow is found, a request is sent to the controller for further instructions. This is handled in one of three modes. In reactive mode, the controller acts on such requests and creates and installs a rule in the flow table for the corresponding packet if necessary. In proactive mode, the controller populates flow table entries in advance for all possible traffic matches for the switch. This mode can be compared with today's typical routing table entries, where all static entries are installed ahead of time; no request is sent to the controller, since every incoming flow finds a matching entry. A major advantage of proactive mode is that all packets are forwarded at line rate (assuming all flow table entries fit in TCAM) and no delay is added. The third, hybrid mode combines the flexibility of reactive mode for some traffic with the low-latency forwarding of proactive mode for the rest.
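The toy simulation below is one way to make the two main modes concrete; it is a self-contained sketch, not a real OpenFlow controller or switch API, and all class, address, and rule names are illustrative. A table miss triggers the reactive path (a packet-in style request to the controller), while calling `install_all` beforehand corresponds to the proactive path.

```python
class Switch:
    def __init__(self, controller):
        self.flow_table = {}          # match (dst IP) -> action (output port)
        self.controller = controller

    def install_rule(self, match, action):
        self.flow_table[match] = action

    def handle_packet(self, dst):
        action = self.flow_table.get(dst)
        if action is None:                                   # table miss
            action = self.controller.packet_in(self, dst)    # reactive path
        return action

class Controller:
    def __init__(self, routing):
        self.routing = routing        # global view: dst -> output port

    def install_all(self, switch):
        """Proactive mode: populate the flow table ahead of time."""
        for dst, port in self.routing.items():
            switch.install_rule(dst, port)

    def packet_in(self, switch, dst):
        """Reactive mode: compute and install a rule on demand."""
        port = self.routing[dst]
        switch.install_rule(dst, port)   # later packets hit the table directly
        return port

ctrl = Controller({"10.0.0.1": 1, "10.0.0.2": 2})
sw = Switch(ctrl)
print(sw.handle_packet("10.0.0.2"))    # miss -> controller installs rule -> port 2
print(sw.handle_packet("10.0.0.2"))    # hit  -> forwarded without a controller round trip

proactive_sw = Switch(ctrl)
ctrl.install_all(proactive_sw)         # proactive mode: rules pre-installed
print(proactive_sw.handle_packet("10.0.0.1"))   # never misses -> port 1
```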
Applications
SDMN
Software-defined mobile networking (SDMN) is an approach to the design of mobile networks where all protocol-specific features are implemented in software, maximizing the use of generic and commodity hardware and software in both the core network and the radio access network. It is proposed as an extension of the SDN paradigm to incorporate mobile-network-specific functionality. Since 3GPP Release 14, control and user plane separation has been introduced in mobile core network architectures with the PFCP protocol.
SD-WAN
An SD-WAN is a WAN managed using the principles of software-defined networking. The main driver of SD-WAN is to lower WAN costs using more affordable and commercially available leased lines, as an alternative or partial replacement of more expensive MPLS lines. Control and management are administered separately from the hardware, with central controllers allowing for easier configuration and administration.
SD-LAN
An SD-LAN is a local area network (LAN) built around the principles of software-defined networking, though there are key differences in topology, network security, application visibility and control, management, and quality of service. SD-LAN decouples the control, management, and data planes to enable a policy-driven architecture for wired and wireless LANs. SD-LANs are characterized by their use of a cloud management system and wireless connectivity without the presence of a physical controller.
Security using the SDN paradigm
SDN architecture may enable, facilitate or enhance network-related security applications due to the controller's central view of the network, and its capacity to reprogram the data plane at any time. While security of SDN architecture itself remains an open question that has already been studied a couple of times in the research community, the following paragraphs only focus on the security applications made possible or revisited using SDN.
Several research works on SDN have already investigated security applications built upon the SDN controller, with different aims in mind. Distributed denial of service (DDoS) detection and mitigation, as well as detection of botnet and worm propagation, are some concrete use cases of such applications: the idea consists of periodically collecting network statistics from the forwarding plane of the network in a standardized manner (e.g. using OpenFlow), and then applying classification algorithms to those statistics in order to detect any network anomalies. If an anomaly is detected, the application instructs the controller how to reprogram the data plane in order to mitigate it.
Another kind of security application leverages the SDN controller by implementing moving target defense (MTD) algorithms. MTD algorithms are typically used to make any attack on a given system or network more difficult than usual by periodically hiding or changing key properties of that system or network. In traditional networks, implementing MTD algorithms is not a trivial task, since it is difficult to build a central authority capable of determining, for each part of the system to be protected, which key properties are hidden or changed. In an SDN network, such tasks become more straightforward thanks to the centrality of the controller. One application can, for example, periodically assign virtual IPs to hosts within the network, with the mapping between virtual and real IPs then performed by the controller. Another application can simulate fake open/closed/filtered ports on random hosts in the network in order to add significant noise during the reconnaissance phase (e.g. scanning) performed by an attacker.
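A hedged sketch of the virtual-IP flavour of moving target defense described above (the host addresses, address pool, and function names are hypothetical): the controller periodically recomputes a virtual-to-real address mapping; in a real deployment it would also push the matching address-rewrite rules into the data plane, which is omitted here.

```python
import random

def rotate_virtual_ips(real_hosts, virtual_pool, rng=random):
    """Assign each real host a fresh virtual IP drawn from the pool.
    In an SDN deployment the controller would additionally install the
    corresponding rewrite rules on the switches; here we only compute
    the mapping."""
    chosen = rng.sample(virtual_pool, len(real_hosts))
    return dict(zip(real_hosts, chosen))

real_hosts = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
virtual_pool = [f"172.16.0.{i}" for i in range(1, 255)]

# Called on a timer (e.g. every few minutes) so an external observer
# never sees a stable address for any protected host.
mapping = rotate_virtual_ips(real_hosts, virtual_pool)
print(mapping)
```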
Additional value regarding security in SDN-enabled networks can also be gained using FlowVisor and FlowChecker. The former allows a single hardware forwarding plane to be shared among multiple separated logical networks. With this approach the same hardware resources can be used for production and development purposes, as well as for separating monitoring, configuration, and Internet traffic; each scenario can have its own logical topology, called a slice. In conjunction with this approach, FlowChecker validates new OpenFlow rules that users deploy in their own slices.
SDN controller applications are mostly deployed in large-scale scenarios, which requires comprehensive checks for possible programming errors. A system to do this, called NICE, was described in 2012. Introducing an overarching security architecture requires a comprehensive and protracted approach to SDN. Since SDN was introduced, designers have been looking at possible ways to secure it that do not compromise scalability. One such architecture is SN-SECA (SDN+NFV Security Architecture).
Group Data Delivery Using SDN
Distributed applications that run across datacenters usually replicate data for the purpose of synchronization, fault resiliency, load balancing and getting data closer to users (which reduces latency to users and increases their perceived throughput). Also, many applications, such as Hadoop, replicate data within a datacenter across multiple racks to increase fault tolerance and make data recovery easier. All of these operations require data delivery from one machine or datacenter to multiple machines or datacenters. The process of reliably delivering data from one machine to multiple machines is referred to as Reliable Group Data Delivery (RGDD).
SDN switches can be used for RGDD via installation of rules that allow forwarding to multiple outgoing ports. For example, OpenFlow has provided support for group tables since version 1.1, which makes this possible. Using SDN, a central controller can carefully and intelligently set up forwarding trees for RGDD. Such trees can be built while paying attention to network congestion/load status to improve performance. For example, MCTCP is a scheme for delivery to many nodes inside datacenters that relies on regular and structured topologies of datacenter networks, while DCCast and QuickCast are approaches for fast and efficient data and content replication across datacenters over private WANs.
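As an illustration of how a controller with a global view can set up such a tree (the topology and names are hypothetical, and congestion awareness is omitted), the sketch below builds a shortest-path forwarding tree from the source toward a set of receivers with a breadth-first search, yielding, per switch, the set of next hops that a group-table rule would replicate packets to.

```python
from collections import deque

def forwarding_tree(adj, source, receivers):
    """Compute BFS parent pointers from the source, then keep only the
    branches that reach the receivers; returns switch -> set of next hops."""
    parent = {source: None}
    q = deque([source])
    while q:
        node = q.popleft()
        for nxt in adj[node]:
            if nxt not in parent:
                parent[nxt] = node
                q.append(nxt)
    tree = {}
    for r in receivers:
        node = r
        while parent[node] is not None:          # walk back toward the source
            tree.setdefault(parent[node], set()).add(node)
            node = parent[node]
    return tree

adj = {"s1": ["s2", "s3"], "s2": ["s1", "s4"], "s3": ["s1", "s4"], "s4": ["s2", "s3"]}
print(forwarding_tree(adj, source="s1", receivers=["s4", "s3"]))
# e.g. {'s1': {'s2', 's3'}, 's2': {'s4'}}
```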
Relationship to NFV
Network functions virtualization (NFV) is a concept that complements SDN; NFV is not dependent on SDN or SDN concepts. NFV decouples software from hardware to enable flexible network deployment and dynamic operation. NFV deployments typically use commodity servers to run network service software versions that previously were hardware-based. These software-based services that run in an NFV environment are called virtual network functions (VNFs). Hybrid SDN-NFV programs have been proposed to provide highly efficient, elastic, and scalable capabilities, with NFV aimed at accelerating service innovation and provisioning using standard IT virtualization technologies. SDN provides the agility of controlling generic forwarding devices, such as routers and switches, by using SDN controllers; NFV, in turn, provides agility for network applications by using virtualized servers. It is entirely possible to implement a virtualized network function (VNF) as a standalone entity using existing networking and orchestration paradigms. However, there are inherent benefits in leveraging SDN concepts to implement and manage an NFV infrastructure, particularly when looking at the management and orchestration of VNFs, which is why multivendor platforms are being defined that incorporate SDN and NFV in concerted ecosystems.
Relationship to DPI
Deep packet inspection (DPI) provides the network with application awareness, while SDN provides applications with network awareness. Although SDN radically changes generic network architectures, it must cope with traditional network architectures to offer high interoperability. A new SDN-based network architecture should take into account all the capabilities currently provided by separate devices or software beyond the main forwarding devices (routers and switches), such as DPI and security appliances.
Quality of Experience (QoE) estimation using SDN
When using an SDN-based model for transmitting multimedia traffic, an important aspect to take into account is QoE estimation. To estimate QoE, the traffic must first be classified; it is then recommended that the system analyze the traffic and resolve critical problems on its own.
See also
Active networking
Frenetic (programming language)
IEEE 802.1aq
Intel Data Plane Development Kit (DPDK)
List of SDN controller software
Network functions virtualization
ONOS
OpenDaylight Project
SD-WAN
Software-defined data center
Software-defined mobile network
Software-defined protection
References
Configuration management
Emerging technologies
Network architecture |
19132559 | https://en.wikipedia.org/wiki/Version%20targeting | Version targeting | In computing, version targeting is a technique that allows a group of (presumably knowledgeable) users (including software developers) to use some advanced software features that were introduced in a particular software version while allowing users accustomed to the prior versions to still use the same software as if the new features were never added to the software. It is a way to ensure backward compatibility when new software features would otherwise break it.
In Mozilla Firefox
Version targeting has been used in Mozilla Firefox when it introduced JavaScript 1.6 in Firefox 1.5 and JavaScript 1.7 in Firefox 2.0: developers willing to use the new scripting engine had to explicitly opt in.
Use in Internet Explorer
Version targeting was proposed by Microsoft for use in its Internet Explorer 8 product-in-development, but the idea was later discarded.
The proposal came after the release of Internet Explorer 7 which improved its CSS 2.1 support at the cost of causing some websites that were developed for Internet Explorer 6 to be rendered incorrectly when viewed with the new browser version.
Microsoft contacted the Web Standards Project and experts on Web standards and asked for assistance in devising a new DOCTYPE-like technique that could work across browsers and let Web developers specify exact browser versions under which their Web sites are known to work correctly, and browsers implementing this form of version targeting would use the correct rendering engine versions to display the site correctly. Members of the WaSP Microsoft Task Force were involved in the proposal, albeit not every member backed it.
Some commentators suggested that it would be possible to use Internet Explorer 8's support for new DOCTYPEs in order to avoid using its version targeting meta tag.
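As a rough illustration, not a description of Microsoft's reference implementation: Internet Explorer 8 accepts the version target either as a `<meta http-equiv="X-UA-Compatible" ...>` tag or as an equivalent HTTP response header, so a legacy site could pin itself to the IE7 engine at the server. The minimal WSGI middleware below adds that header; the application and function names are made up for the example.

```python
def add_version_targeting(app, mode="IE=EmulateIE7"):
    """Wrap a WSGI app so every response carries an X-UA-Compatible header.
    IE8 honours the header and renders with the requested engine version;
    other browsers simply ignore it."""
    def wrapped(environ, start_response):
        def patched_start(status, headers, exc_info=None):
            headers = list(headers) + [("X-UA-Compatible", mode)]
            return start_response(status, headers, exc_info)
        return app(environ, patched_start)
    return wrapped

def legacy_app(environ, start_response):
    # A stand-in for a site authored and tested against IE6/IE7.
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<html><body>Legacy markup</body></html>"]

# The equivalent in-page form would be a meta tag placed before other
# <meta> elements:
#   <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7">
application = add_version_targeting(legacy_app)
```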
Criticism
The concept of version targeting, especially as proposed by Microsoft, has been criticised for being a new form of browser sniffing and for violating the principle of forward-compatible development where progressive enhancement is preferred.
Version targeting has been criticised for not giving incentives to developers to plan ahead for forward compatibility.
Positive reception
Version targeting has been welcomed by some people as a means to enable browsers to adopt Web standards without breaking compatibility with Web sites that depended on old rendering engines for their functionality.
References
Bibliography
WaSP IE8 round table discussion
Web browsers |
5060899 | https://en.wikipedia.org/wiki/List%20of%20MeSH%20codes%20%28L01%29 | List of MeSH codes (L01) | The following is a list of "L" codes for Medical Subject Headings (MeSH), as defined by the United States National Library of Medicine (NLM).
This list continues the information at List of MeSH codes (K01). Codes following these are found at List of MeSH codes (M01). For other MeSH codes, see List of MeSH codes.
The source for this content is the set of 2006 MeSH Trees from the NLM.
– information science
– book collecting
– chronology
– classification
– phylogeny
– communication
– advertising
– answering services
– communication barriers
– computer literacy
– cybernetics
– feedback
– diffusion of innovation
– technology transfer
– hotlines
– information dissemination
– interdisciplinary communication
– language
– language arts
– lipreading
– multilingualism
– reading
– speech
– translating
– writing
– authorship
– correspondence
– electronic mail
– handwriting
– paleography
– shorthand
– linguistics
– terminology
– names
– abbreviations
– anonyms and pseudonyms
– eponyms
– phonetics
– psycholinguistics
– neurolinguistic programming
– semantics
– vocabulary
– negotiating
– nonverbal communication
– manual communication
– sign language
– persuasive communication
– propaganda
– reminder systems
– communications media
– erotica
– library materials
– mass media
– motion pictures
– radio
– television
– videodisc recording
– compact disks
– CD-i
– CD-ROM
– videotape recording
– publications
– bibliography
– national bibliography
– bibliography of medicine
– bibliometrics
– biobibliography
– book reviews
– books
– book imprints
– printers' marks
– book ornamentation
– bookplates
– illustrated books
– incunabula
– manuals
– sex manuals
– rare books
– reference books
– almanacs
– atlases
– dictionaries
– chemical dictionaries
– classical dictionaries
– dental dictionaries
– medical dictionaries
– pharmaceutic dictionaries
– polyglot dictionaries
– directories
– dispensatories
– encyclopedias
– formularies
– dental formularies
– homeopathic formularies
– hospital formularies
– pharmacopoeias
– homeopathic pharmacopoeias
– medical reference books
– medical dictionaries
– textbooks
– broadsides
– catalogs
– commercial catalogs
– booksellers' catalogs
– publishers' catalogs
– drug catalogs
– library catalogs
– union catalogs
– academic dissertations
– government publications
– manuscripts
– medical manuscripts
– pamphlets
– review literature
– consensus development conferences
– nih consensus development conferences
– serial publications
– newspapers
– periodicals
– translations
– teaching materials
– audiovisual aids
– exhibits
– maps
– medical illustration
– structural models
– anatomic models
– manikins
– visible human project
– motion pictures
– multimedia
– optical storage devices
– videodisc recording
– compact disks
– CD-i
– CD-ROM
– radio
– tape recording
– videotape recording
– television
– video microscopy
– videodisc recording
– compact disks
– CD-i
– CD-ROM
– videotape recording
– manuals
– sex manuals
– textbooks
– telecommunications
– electronic mail
– radar
– radio
– satellite communications
– telefacsimile
– telemedicine
– remote consultation
– telepathology
– teleradiology
– telephone
– answering services
– cellular phone
– modems
– television
– video microscopy
– videoconferencing
– computer security
– computing methodologies
– algorithms
– artificial intelligence
– expert systems
– fuzzy logic
– knowledge bases
– natural language processing
– neural networks (computer)
– robotics
– automatic data processing
– punched-card systems
– computer graphics
– computer-aided design
– computer simulation
– computer systems
– computer communication networks
– internet
– local area networks
– computers
– computer peripherals
– computer storage devices
– optical storage devices
– compact disks
– CD-i
– CD-ROM
– computer terminals
– modems
– analog computers
– hybrid computers
– analog-to-digital conversion
– mainframe computers
– molecular computers
– microcomputers
– handheld computers
– minicomputers
– molecular computers
– computer-assisted image processing
– data compression
– image enhancement
– radiographic image enhancement
– dual-energy scanned projection radiography
– three-dimensional imaging
– mathematical computing
– decision support techniques
– statistical data interpretation
– decision theory
– decision trees
– neural networks (computer)
– nomograms
– computer-assisted numerical analysis
– computer-assisted signal processing
– data compression
– software
– database management systems
– grateful med
– hypermedia
– programming languages
– software design
– software validation
– speech recognition software
– user-computer interface
– video games
– word processing
– copying processes
– microfilming
– tape recording
– videotape recording
– telefacsimile
– video recording
– videodisc recording
– compact disks
– CD-i
– CD-ROM
– videotape recording
– data collection
– geriatric assessment
– interviews
– focus groups
– narration
– questionnaires
– delphi technique
– records
– birth certificates
– death certificates
– dental records
– hospital records
– medical records
– medical record linkage
– problem-oriented medical records
– computerized medical records systems
– trauma severity indices
– abbreviated injury scale
– glasgow coma scale
– glasgow outcome scale
– injury severity score
– nursing records
– registries
– seer program
– vital statistics
– life expectancy
– life tables
– quality-adjusted life years
– morbidity
– basic reproduction number
– incidence
– prevalence
– mortality
– cause of death
– child mortality
– fatal outcome
– fetal mortality
– hospital mortality
– infant mortality
– maternal mortality
– survival rate
– pregnancy rate
– birth rate
– data display
– computer graphics
– computer-aided design
– informatics
– dental informatics
– medical informatics
– nursing informatics
– public health informatics
– information centers
– archives
– libraries
– dental libraries
– digital libraries
– hospital libraries
– medical libraries
– national library of medicine (u.s.)
– nursing libraries
– information management
– information services
– bibliography
– descriptive bibliography
– bibliography of medicine
– bibliometrics
– biobibliography
– bibliographic databases
– book selection
– documentation
– abstracting and indexing
– cataloging
– book classification
– classification
– filing
– molecular sequence data
– amino acid sequence
– base sequence
– carbohydrate sequence
– controlled vocabulary
– Current Procedural Terminology
– diagnostic and statistical manual of mental disorders
– healthcare common procedure coding system
– International Classification of Disease
– Logical Observation Identifiers Names and Codes
– subject headings
– medical subject headings
– Systematized Nomenclature of Medicine
– Unified Medical Language System
– drug information services
– adverse drug reaction reporting systems
– clinical pharmacy information systems
– human genome project
– library services
– interlibrary loans
– library technical services
– cataloging
– book classification
– information storage and retrieval
– data compression
– databases
– bibliographic databases
– PubMed
– MEDLINE
– factual databases
– genetic databases
– nucleic acid databases
– protein databases
– geographic information systems
– national practitioner data bank
– visible human project
– information theory
– library science
– library administration
– library associations
– library automation
– library collection development
– library schools
– library services
– interlibrary loans
– library surveys
– library technical services
– medical informatics
– medical informatics applications
– computer-assisted decision making
– computer-assisted diagnosis
– computer-assisted image interpretation
– computer-assisted radiographic image interpretation
– computer-assisted therapy
– computer-assisted drug therapy
– computer-assisted radiotherapy
– conformal radiotherapy
– intensity-modulated radiotherapy
– computer-assisted radiotherapy planning
– computer-assisted surgery
– information storage and retrieval
– grateful med
– MEDLARS
– MEDLINE
– MedlinePlus
– PubMed
– MEDLINE
– information systems
– clinical laboratory information systems
– community networks
– clinical decision support systems
– databases
– bibliographic databases
– PubMed
– MEDLINE
– factual databases
– genetic databases
– nucleic acid databases
– protein databases
– national practitioner data bank
– visible human project
– geographic information systems
– hospital information systems
– medical order entry systems
– integrated advanced information management systems
– knowledge bases
– management information systems
– ambulatory care information systems
– clinical laboratory information systems
– clinical pharmacy information systems
– database management systems
– management decision support systems
– healthcare common procedure coding system
– hospital information systems
– medical order entry systems
– operating room information systems
– personnel staffing and scheduling information systems
– radiology information systems
– computerized medical records systems
– medical order entry systems
– MEDLARS
– MEDLINE
– online systems
– digital libraries
– PubMed
– MEDLINE
– radiology information systems
– reminder systems
– medical informatics computing
– pattern recognition, automated
– neural networks (computer)
– publishing
– book industry
– bookbinding
– bookselling
– book prices
– copyright
– duplicate publication
– editorial policies
– journalism
– dental journalism
– medical journalism
– research peer review
– plagiarism
– printing
– publication bias
– retraction of publication
– systems analysis
– operations research
– monte carlo method
– probability theory
– linear programming
– systems integration
The list continues at List of MeSH codes (M01).
L01 |
1189498 | https://en.wikipedia.org/wiki/Marriott%20International | Marriott International | Marriott International, Inc. is an American multinational company that operates, franchises, and licenses lodging including hotel, residential, and timeshare properties. It is headquartered in Bethesda, Maryland. The company was founded by J. Willard Marriott and his wife Alice Marriott.
Profile
Marriott is the largest hotel chain in the world by the number of available rooms. It has 30 brands with 7,642 properties containing 1,423,044 rooms in 131 countries and territories. Of these 7,642 properties, 2,149 are operated by Marriott, and 5,493 are operated by others pursuant to franchise agreements. The company also operates 20 hotel reservation centers.
Marriott International, Inc. was formed in 1993 when Marriott Corporation split into two companies: Marriott International, Inc., which franchises and manages properties, and Host Marriott Corporation (now Host Hotels & Resorts), which owns properties.
Since the founders were Mormon missionaries, copies of the Book of Mormon are provided in hotel rooms in addition to the Bible.
History
Founding and early years
Marriott Corporation was founded by John Willard Marriott in 1927 when he and his wife, Alice Marriott, opened a root beer stand in Washington, D.C. As Mormon missionaries in the humid summers in Washington, D.C., the Marriotts were convinced that what residents of the city needed was a place to get a cool drink. The Marriotts later expanded their enterprise into a chain of Hot Shoppes restaurants. In 1953, Hot Shoppes, Inc. became a public company via an initial public offering.
The company opened its first hotel, the Twin Bridges Motor Hotel, in Arlington, Virginia, on January 16, 1957. It cost $9 per night, plus an extra $1 for every person that was in the car. Its second hotel, the Key Bridge Marriott in Rosslyn, Arlington, Virginia, was opened in 1959 and is Marriott International's longest continuously operating hotel.
Hot Shoppes, Inc. was renamed the Marriott Corporation in 1967.
In 1976, the company opened two theme parks: California's Great America and Six Flags Great America.
Marriott International
Marriott International, Inc. was formed in 1993 when Marriott Corporation split into two companies: Marriott International, Inc., which franchises and manages properties, and Host Marriott Corporation (now Host Hotels & Resorts), which owns properties.
In 1995, Marriott was the first hotel company to offer online reservations.
In April 1995, Marriott acquired a 49% interest in The Ritz-Carlton Hotel Company. Marriott believed that it could increase sales and profit margins for The Ritz-Carlton, a troubled chain with many properties either losing money or barely breaking even. The cost to Marriott was estimated to have been about $200million in cash and assumed debt. The next year, Marriott spent $331million to acquire The Ritz-Carlton, Atlanta, and buy a majority interest in two properties owned by William Johnson, a real estate developer who had purchased The Ritz-Carlton, Boston in 1983 and expanded his Ritz-Carlton holdings over the next twenty years. Ritz-Carlton expanded into the timeshare market. Ritz Carlton benefited from Marriott's reservation system and buying power. In 1998, Marriott acquired majority ownership of The Ritz-Carlton.
In 1997, the company acquired the Renaissance Hotels and Ramada brands from Chow Tai Fook Group and its associate company, New World Development. Marriott International also signed an agreement to manage hotels owned by New World Development.
In 2001, the Marriott World Trade Center was destroyed during the September 11 attacks.
In 2003, the company completed the corporate spin-off of its senior living properties (now part of Sunrise Senior Living) and Marriott Distribution Services.
In 2004, the company sold its rights to the Ramada brand, which it had acquired in 1997, to Cendant.
In 2005, Marriott International and Marriott Vacation Club International were two of the 53 entities that contributed the maximum of $250,000 to the Second inauguration of George W. Bush.
On July 19, 2006, Marriott implemented a smoking ban in all buildings it operated in the United States and Canada effective September 2006.
In 2007, Marriott became the first hotel chain to serve food that is completely free of trans fats at all of its North American properties.
Hotels franchised or operated by the company were affected by the 2003 Marriott Hotel bombing, the Islamabad Marriott Hotel bombing in 2008, and the 2009 Jakarta bombings.
On November 11, 2010, Marriott announced plans to add over 600 hotel properties by 2015, primarily in emerging markets: India, where it planned to have 100 hotel properties, China, and Southeast Asia.
On January 21, 2011, Marriott said that adult movies would not be included in the entertainment offered at new hotels, which would use an Internet-based video on demand system.
Effective March 31, 2012, Bill Marriott assumed the role of executive chairman of the company and relinquished the role of chief executive officer to Arne Sorenson.
In 2011, Mitt Romney received $260,390 in director's fees from Marriott International, despite the fact that he had already stepped down from the board of directors to run for President of the United States. His released 2010 tax returns showed earnings in 2010 of $113,881 in director's fees from Marriott. In February 2012, Bloomberg News reported on Romney's years overseeing tax matters for Marriott, which had included several "scams" (quoting John McCain) and legal actions brought against Marriott, which Marriott lost in court, over its manipulations of the U.S. Tax Code.
In December 2012, Guinness World Records recognized the JW Marriott Marquis Dubai, a five star hotel, as the tallest hotel in the world.
On October 3, 2014, the Federal Communications Commission (FCC) fined Marriott $600,000 for unlawful use of a "containment" feature of a Wi-Fi monitoring system to deliberately interfere with client-owned networks in the convention space of its Gaylord Opryland Resort & Convention Center in Nashville. The scheme disrupted operation of clients' mobile phone hotspots via Wi-Fi deauthentication attacks. Marriott International, Inc., the American Hotel and Lodging Association and Ryman Hospitality Properties responded by unsuccessfully petitioning the FCC to change the rules to allow them to continue jamming client-owned networks, a position which they were forced to abandon in early 2015 in response to backlash from clients, mainstream media, major technology companies, and mobile carriers. The incident drew unfavorable publicity to Marriott's practice of charging exorbitant fees for Wi-Fi.
On April 1, 2015, Marriott acquired Canadian hotel chain Delta Hotels, which operated 38 hotels at that time.
On November 16, 2015, Marriott announced the acquisition of Starwood for $13billion. A higher offer for Starwood at $14billion from a consortium led by China's Anbang Insurance Group was announced March 3, 2016. After Marriott raised its bid to $13.6billion on March 21, Starwood terminated the Anbang agreement and proceeded with the merger with Marriott. Following receipt of regulatory approvals, Marriott closed the merger with Starwood on September 23, 2016, creating the world's largest hotel company with over 5700 properties, 1.1million rooms, and a portfolio of 30 brands. The Starwood acquisition gave Marriott a larger non-US presence; approximately 75% of Starwood's revenues were from non-US markets.
On November 30, 2018, Marriott disclosed that the former Starwood brands had been subject to a data breach. After the disclosure, Attorney General of New York Barbara Underwood announced an investigation into the data breach. The cyberattack was found to be part of a Chinese intelligence-gathering effort that also hacked health insurers and the security clearance files of millions more Americans. The hackers are suspected of working on behalf of the Ministry of State Security, the country's Communist-controlled civilian spy agency. Initially, Marriott said that 500 million customers' personal information had been exposed. In January 2019, the company updated the number of guests affected to "less than 383 million" customers, and claimed many of the customers' payment cards had expired.
In December 2019, the company acquired Elegant Hotels, operator of 7 hotels in Barbados.
In February 2020, the company discovered a data breach that included the theft of contact information for 5.2 million customers.
In April 2020, during the COVID-19 pandemic, the company instituted additional cleanliness standards, including requiring the use of electrostatic sprayers with disinfectant, adding disinfecting wipes in all hotel rooms, and removing or re-arranging furniture in public areas to allow more space for social distancing. During the pandemic, global occupancy fell as low as 31%.
President and CEO Arne Sorenson died on February 15, 2021, from pancreatic cancer. On February 23, 2021, Anthony Capuano was appointed to fill Sorensen's vacancy as CEO and Director, having previously served as Marriott's group president of global development, design and operations.
In November 2021, the company was criticized for refusing to host the World Uyghur Congress at one of its properties in Prague, citing reasons of "political neutrality."
Senior leadership
Executive Chairman: Bill Marriott (since 1985)
Chief Executive: Anthony Capuano (since 2021)
List of former chairmen
J. Willard Marriott (1927–1985)
List of former chief executives
J. Willard Marriott (1927–1972)
Bill Marriott (1972–2012)
Arne Sorenson (2012–2021)
Awards
In November 2020, Marriott International was named as one of the "Top 75 Companies for Executive Women" by Working Mother.
Finances
Carbon footprint
Marriott International reported total CO2e emissions (direct + indirect) for the twelve months ending 31 December 2020 at 5,166 Kt, a decrease of 1,643 Kt (24.1%) year-on-year, and aims to reach net zero emissions by 2050.
The Luxury Collection
The Luxury Collection is a hotel brand of Marriott International with several notable hotels including Hotel Alfonso XIII, Gritti Palace Hotel, IVY Hotel + Residences, Hotel Imperial, ITC Grand Chola, Marqués de Riscal Hotel, The Nines, Palace Hotel, San Francisco, The Park Tower Knightsbridge Hotel, Phoenician Resort, Hotel President Wilson, The St. Anthony Hotel, and Royal Hawaiian Hotel. As of December 31, 2020, there were 118 hotels comprising 23,243 rooms operating under the brand. The Luxury Collection is notable as the first "soft brand" hotel chain.
Most hotels of the brand are located in converted historic buildings, including palaces or older hotels. The brand also enlists notable designers to craft luxury travel accessories that are available exclusively on the brand's website.
The Royal Penthouse Suite at Hotel President Wilson in Geneva, part of The Luxury Collection, billed at per night, is listed at the top of the World's 15 Most Expensive Hotel Suites list compiled by CNN in 2012.
History
The Luxury Collection brand began on January 13, 1992, when ITT Sheraton designated 28 of its most expensive hotels and 33 of the Sheraton Towers, as the ITT Sheraton Luxury Collection.
In February 1994, ITT Sheraton acquired a controlling interest in CIGA (Compagnia Italiana Grandi Alberghi, or Italian Grand Hotels Company), an Italian international hotel chain that owned several luxury properties in Europe. The majority of the CIGA hotels were folded into The Luxury Collection. CIGA's original logo, the four horses of St. Mark, was kept for The Luxury Collection brand logo until 2010; each Luxury Collection hotel now uses its own logo.
In 2011, it embarked on an advertising campaign.
In 2012, the brand announced a major expansion in Asia, particularly in China.
In 2014, the brand signed Danish supermodel Helena Christensen as spokesperson.
In 2015, the company launched a $700 million program to renovate properties.
Marriott brands
Marriott operates 30 brands internationally.
Luxury
Classic
JW Marriott Hotels
The Ritz-Carlton
St. Regis Hotels & Resorts
Distinctive
Edition Hotels
Bulgari Hotels & Resorts
The Luxury Collection
W Hotels
Premium
Classic
Delta Hotels
Marriott Hotels & Resorts
Marriott Vacation Club
Sheraton Hotels and Resorts
Distinctive
Le Méridien
Renaissance Hotels
Westin Hotels
Gaylord Hotels
Select
Classic
Courtyard by Marriott
Fairfield by Marriott
Four Points by Sheraton
Protea Hotels by Marriott
SpringHill Suites
Distinctive
AC Hotels by Marriott
Aloft Hotels
Moxy Hotels
Long Stay
Classic
Marriott Executive Apartments
Residence Inn by Marriott
TownePlace Suites
Distinctive
Element Hotels
Homes & Villas by Marriott International
Collections
Autograph Collection
Design Hotels
Tribute Portfolio
Great America Parks
Marriott developed three theme parks, of which two opened: California's Great America and Six Flags Great America, which Marriott operated from 1976 until 1984. The parks were located in Santa Clara, California, and Gurnee, Illinois, with a third proposed but never-built location in the Washington, D.C., area, and were themed to celebrate American history. The American-themed areas under Marriott's ownership included "Carousel Plaza" (the first section beyond the main gates); small-town-themed "Hometown Square"; "The Great Midwest Livestock Exposition At County Fair", with a turn-of-the-century rural-fair theme; "Yankee Harbor", inspired by a 19th-century New England port; "Yukon Territory", resembling a Canadian/Alaskan logging camp; and the French Quarter-modeled "Orleans Place". At opening, the parks had nearly identical layouts.
In 1984, Marriott disposed of its theme park division; both parks were sold and today are associated with national theme park chains. The Gurnee location was sold to Six Flags Theme Parks where it operates today as Six Flags Great America. The Santa Clara location was sold to the City of Santa Clara, who retained the underlying property and sold the park to Kings Entertainment Company, renamed Paramount Parks in 1993. From 1993 to 2006, the Santa Clara location was known as Paramount's Great America. In 2006, Paramount Parks was acquired by Cedar Fair Entertainment Company; the Santa Clara park operates today as California's Great America. In the years after their sale, the layouts of both of the parks have diverged substantially.
Loyalty program
Marriott Bonvoy is Marriott's loyalty program and was formed in the February 2019 merger of its three former rewards programs: Marriott Rewards, Ritz-Carlton Rewards, and Starwood Preferred Guest.
Marriott Rewards was founded in 1983.
Former loyalty programs
Starwood Preferred Guest (also known as SPG) was founded in 1999 as the first in the industry to enforce a policy of no blackout dates, no capacity controls, and online redemption. In 2012, Starwood Preferred Guest began offering lifetime status and a dedicated Starwood ambassador for loyal members.
Ritz-Carlton Rewards was founded in 2010. Members were able to receive air miles instead of reward points and able to earn ten points (or two miles) for every dollar spent on any Ritz-Carlton room rates. Despite the restriction of membership to only one of the two programs, members of Ritz-Carlton Rewards were able to earn points in other Marriott hotels, while Marriott Rewards members were able to earn points at a Ritz-Carlton.
See also
2018 Marriott Hotels strike
List of chained-brand hotels
List of hotels
References
Further reading
Marriott, John Willard, Jr., and Kathi Ann Brown. The Spirit to Serve: Marriott's Way. First ed. New York: Harper Business, 1997.
External links
1927 establishments in Washington, D.C.
American companies established in 1927
Companies based in Bethesda, Maryland
Companies listed on the Nasdaq
Companies formerly listed on the New York Stock Exchange
Hospitality companies established in 1927
Hospitality companies of the United States
Family-owned companies of the United States |
21812232 | https://en.wikipedia.org/wiki/LuraTech | LuraTech | LuraTech is a software company, owned since 2015 by Foxit Software, with offices in Remscheid, Berlin, London, and in the United States, which makes products for handling and conversion of digital documents. Its customers are primarily organizations involved in long-term document archiving and scan service providers. It is a member of the PDF Association.
LuraTech was founded as a part of a joint project with the Technical University of Berlin intended to bring wavelet compression techniques to digital still images. LuraTech developed a segmentation technology to deal with scanned documents containing mixed raster content (MRC), resulting in the creation of LuraDocument LDF, a proprietary document format for the compression of scanned documents. Since then, LuraTech has developed several software development kits (SDKs) and computer applications for creating and handling PDF documents. LuraTech has also taken part in the development of the JPEG 2000 standard.
In 2002 LuraTech created the PDF Compressor, applying the concepts of MRC layered document compression to PDF standards, especially PDF/A. This was the first workflow solution that LuraTech built on top of its software development kits.
In 2010 LuraTech launched DocYard, a software platform to create and centrally manage general document and data conversion processes.
At the 2013 CeBIT conference, LuraTech announced the release of its ZUGFeRD Extraction SDK, a toolkit for ERP and online banking software developers, which facilitates processing of PDF invoices standardized under the Central User Guidelines for Electronic Billing in Germany (ZUGFeRD).
In 2015 LuraTech released LuraTech PDF Scanner iOS, a freely available iPhone/iPad app for conveniently creating and editing highly compressed PDF documents.
In October 2015 LuraTech was acquired by Foxit Software.
References
External links
Interview with the CEO of LuraTech in December 2004
LuraTech White Paper JPEG2000 - The Emerging Standard for the Millennium Overview : Next Generation Image Compression
Software
Software companies of Germany |
35177195 | https://en.wikipedia.org/wiki/Technological%20self-efficacy | Technological self-efficacy | Technological self-efficacy (TSE) is "the belief in one's ability to successfully perform a technologically sophisticated new task". TSE does not highlight specific technological tasks; instead it is purposely vague. This is a specific application of the broader and more general construct of self-efficacy, which is defined as the belief in one's ability to engage in specific actions that result in desired outcomes. Self-efficacy does not focus on the skills one has, but rather on the judgments of what one can do with his or her skills. Traditionally, a distinguishing feature of self-efficacy is its domain-specificity. In other words, judgments are limited to certain types of performances as compared to an overall evaluation of his or her potential. Typically, these constructs refer to specific types of technology; for example, computer self-efficacy, internet self-efficacy, and information technology self-efficacy. In order to organize this literature, technology-specific self-efficacies (e.g., computer and internet) can be considered sub-dimensions under the larger construct of technological self-efficacy.
Origins
This construct was intended to describe general feelings toward the ability to adopt new technology and is therefore generalizable across a number of specific technologies. Furthermore, this construct can account for and be applied to technologies that have yet to be invented. Although these features have allowed TSE to remain relevant through the times, this definitional breadth has also created confusion and a proliferation of related constructs.
Importance
21st-century society is completely embedded within a technological context, which makes the understanding and evaluation of technological self-efficacy critical. Indeed, nearly half of Americans own smartphones, and this trend towards technology use is not limited to the United States; instead, cell phone, computer, and internet use is becoming increasingly common around the world. Technology is particularly prevalent in the workplace and learning environments. At work, 62% of employed Americans use the internet and email, and workplace internet users either use the internet every day (60%) or not at all (28%). Internet and email use is obviously influenced by work duties, but 96% of employed Americans use some sort of new communication technology on the job. Successful investment in technology is associated with enhanced productivity; however, full realization of technological potential commonly eludes organizations. In learning environments, college courses are more frequently being offered online. This is commonly referred to as distance education, and implementation ranges from courses supported by the web (teaching occurs predominantly through face-to-face instructor interactions, with supplemental materials offered on the web) to blended learning (significantly less face-to-face instructor interaction and more online instruction) to fully online (all instruction is conducted virtually, with no face-to-face instructor interactions). A number of advantages are associated with distance learning, such as increased flexibility and convenience, which allows individuals the opportunity to enroll in classes that would otherwise be off-limits due to geographical or personal reasons. Another commonly cited advantage is that instruction is self-paced, which allows for personalized tailoring based on individual needs. However, these advantages are not likely to be realized if the individual is anxious about the method of instructional delivery and/or his or her expectation of success is low due to its technological component. Taken together, these two critical arenas discussed above (workplace and learning) reinforce the extent to which technology has impacted modern activities and consequently the importance of perceived beliefs in one's ability to master new technology. Success in everyday life often hinges on the utilization of technology and, by definition, new technology will always be new. Therefore, this construct warrants review.
Furthermore, studies have shown that technological self-efficacy is a crucial factor for teaching computer programming to school students, as students with higher levels of technological self-efficacy achieve higher learning outcomes. In this case, the effect of technical self-efficacy is even stronger than the effect of gender.
Differentiation from other forms of self-efficacy
Since TSE stems from the same theory as general self-efficacy and other task-specific self-efficacy, the differentiation of this construct from these other forms of self-efficacy is crucial. Unfortunately, previous studies focusing on TSE have not shown the uniqueness of TSE measures. Despite the dearth of differentiating research on TSE, the uniqueness of this construct can be shown by considering closely related, technology-specific self-efficacies (i.e. computer self-efficacy), which have been established as unique constructs. When compared to general self-efficacy, computer self-efficacy has been shown to be unique based on two measures of general self-efficacy. In this same study, the authors showed computer self-efficacy was not related to many types of specific self-efficacy, including art, persuasion, and science self-efficacy. One of the most related types of specific self-efficacy was mechanical. This makes sense given that both types of specific self-efficacies are related to using tools, albeit one being technological and the other more physical in nature. Computer self-efficacy as a domain has also been shown to be related to, but distinct from, self-efficacy about computer programs.
Measurement
Following the definition set forth by Bandura, self-efficacy is an individual's belief and confidence in him or herself. This property has important implications for the measurement of any type of self-efficacy. Specifically, measures of self-efficacy must be self-report because the only person who can accurately portray beliefs in one's ability is the target of investigation. In other words, self-report measures of self-efficacy have definitional truth. While a number of problems exist with self-report inventories, in the case of self-efficacy (and other constructs that are defined as internal beliefs and cognitions) this measurement approach is unavoidable.
While the type of measurement approach is defined by the construct, the process of developing and validating these scales has varied considerably throughout the TSE literature. One major difference between measures concerns the scoring of the items. Previously, research has noted differences in results can be partially attributed to different scoring approaches. Specifically, there are two main ways of scoring self-efficacy items. The first type is called self-efficacy magnitude. Items are worded so participants would respond whether or not they felt they could accomplish a certain task (yes or no). The second type is self-efficacy strength. This scoring approach asks participants to rate how confident they are in completing the task(s) on a numerical scale and then averages across all items. All other scoring types are simply composites of these first two approaches.
Another difference between TSE measures concerns the issue of generality. This consideration is similar to the previous differentiation between TSE as a broader concept and technology-specific self-efficacy. Measurement attempts of the broader concept of technological self-efficacy will be considered first. McDonald and Siegall developed a five-item Likert scale of technological self-efficacy based on the consideration of previous theoretical studies. This scale was scored using the strength approach to self-efficacy scales. Items in this scale did not refer to specific technologies, but instead focused on technology as a general concept. Using a development process, Holcomb, King and Brown also proposed a scale to measure TSE. Factor analysis revealed three distinct factors containing 19 Likert-type items, which were also scored according to the strength scoring system. In contrast to the McDonald and Siegall scale, the items in this scale referenced certain technologies (specifically computers and software packages). The two studies mentioned above represent attempts to measure TSE as a broader concept.
In addition to the attempts to measure TSE more broadly, a number of studies have developed measures of technology specific self-efficacy. One of the most cited measures of computer self-efficacy comes from Compeau and Higgins. These authors reviewed previous attempts to measure computer self-efficacy and theoretically derived a 10-item scale. Unlike previously mentioned scales, this study employed a "composite" scoring approach. For each item, participants were first asked whether they could complete a specific task related to computers using a dichotomous yes/no scale. Following this answer, participants were then asked to rate their confidence about completing the task from 1 (not at all confident) to 10 (totally confident). The final score was calculated by counting the number of "yes" answers (reflecting self-efficacy magnitude) and the average of the confidence ratings (representing self-efficacy strength). The authors then validated this measure in a nomological network of related constructs. A second example of technology specific self-efficacy is internet self-efficacy. Similar to previous measurement approaches, internet self-efficacy was developed using a theoretical approach that considered previous measures of related topics and developed novel items to address the missing construct space. This scale showed a high level of reliability and validity.
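As a rough illustration of the magnitude, strength, and composite scoring arithmetic described above, the calculation can be sketched as follows; the five item responses are invented for the example and are not taken from any published scale.

#include <stdio.h>

int main(void) {
    /* Hypothetical responses from one participant to a five-item scale.
       can_do[i]: magnitude item, 1 = "yes, I could complete this task", 0 = "no".
       confidence[i]: strength item, confidence from 1 (not at all) to 10 (totally),
       recorded only for tasks answered "yes". */
    int can_do[5]     = {1, 1, 0, 1, 0};
    int confidence[5] = {8, 6, 0, 9, 0};

    int magnitude = 0;        /* number of "yes" answers */
    double strength_sum = 0.0;

    for (int i = 0; i < 5; i++) {
        if (can_do[i]) {
            magnitude++;
            strength_sum += confidence[i];
        }
    }

    double strength = magnitude ? strength_sum / magnitude : 0.0;
    printf("magnitude = %d of 5, strength = %.2f\n", magnitude, strength);
    return 0;
}

A composite score in the style described by Compeau and Higgins would then report both numbers together (here, 3 "yes" answers with an average confidence of 7.67).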
Antecedents
Bandura proposes four primary sources for self-efficacy beliefs: (1) prior experience, (2) modeling, (3) social persuasions, and (4) physiological factors. Research supports that many of these sources for TSE are the same; however, there are additional antecedents as well. Although more complex theoretical development and empirical examination of how these antecedents operate and relate to one another is still lacking, the most immediate predictors of TSE are likely to be Bandura's primary sources (proximal predictors). The remaining antecedents that have also been associated with TSE (e.g., adequate resources, gender, and age) are likely to be more distal predictors. In other words, these distal variables influence more proximal variables (e.g., prior experience, modeling, and social persuasions), which then result in high or low TSE.
Prior experience
Prior experience with technology is repeatedly found to be influential on technology-related self-efficacy beliefs. If an individual has had the opportunity to interact with new technologies and, more importantly, has had success in mastering new technologies, then he or she is more likely to hold positive beliefs about future performance.
Modeling or participation in technological training
Modeling or participation in technological training is also found to be a significant predictor of technological self-efficacy. Although different types of training interventions have been associated with different gains, in general, research supports that seeing other individuals successfully perform the task at hand (for example, the instructor) and then providing the learner with some opportunity for reinforcement and demonstration (for example, trying to successfully utilize the technology without aid) increases technology-related self-efficacy beliefs.
Social persuasions
Social persuasions such as encouragement by others and organizational support are also important contributors to technology related self efficacy beliefs. The actions and statements of others can significantly alter perceptions of their likelihood for success. Organizational support typically includes management's encouragement and assistance. If management does not appear to enthusiastically support employees' attempts to utilize technology then employees are unlikely to accept technology.
Resources
Resources are commonly cited as one of the largest barriers to adoption of technology. This includes, but is not limited to, sufficient computers, sufficient software licenses, out-of-date hardware/software, and slow or intermittent Internet connections. The success of proper technology use is first and foremost limited by the capabilities of the technology in question.
Gender
Gender is significantly related, such that men tend to have higher levels of technology related self efficacy beliefs than women. It is still unknown why these gender differences exist.
Age
Age is also significantly related, such that younger individuals tend to have higher levels of technology related self efficacy beliefs than older individuals. This finding is not surprising given the widespread stereotype of older adults' inability to learn new material, especially when the material is technology related. However, older adults' low technological self efficacy beliefs suggest that older adults may internalize the 'old dogs can't learn new tricks' stereotype, which consequently affects expectations about future performance in technology related domains.
Consequences
Technology-related self-efficacy beliefs have been linked with a number of consequences. Although TSE does predict the outcomes reviewed below, note that some of the antecedents to TSE are better predictors of these outcomes than TSE itself. For example, prior experience is typically a better predictor of task performance than TSE. A recent meta-analysis about self-efficacy (more generally) supports this conclusion as well. Taken together, TSE is important, but its importance should not be overstated. Furthermore, it is possible that the effect of TSE on outcomes (e.g., performance) operates through other variables (e.g., behavioral intentions or anxiety).
Task performance
Task performance is negatively affected, such that lower technology-related self-efficacy beliefs are related to poorer performance. This is extremely important, because these findings suggest that positive perceptions of individuals' technological capabilities may need to be present before successful performance can be achieved.
Perceived ease of use and usage
Perceived ease of use and usage is found to be positively related with technology-related self-efficacy beliefs. According to the Technology Acceptance Model, perceived ease of use and perceived usefulness influence behavioral intentions and ultimately technology-related behaviors. Other scholars have proposed behavioral intentions as a mediator between TSE and other outcome variables (e.g., performance). These predictions are similar to those of the well-supported Theory of Planned Behavior.
Anxiety
Anxiety is negatively related, such that lower technology related self-efficacy beliefs are associated with higher level of anxiety.
See also
Industrial and organizational psychology
Organizational psychology
Self-efficacy
Social sciences
Technology
Training
References
Positive psychology |
46931377 | https://en.wikipedia.org/wiki/Ciklum | Ciklum | Ciklum is an international software development and IT outsourcing company founded in Kyiv, Ukraine in 2002. It is headquartered in London, United Kingdom.
The company has software development centers and branch offices in the United Kingdom, United States, United Arab Emirates, Spain, Switzerland, Denmark, Israel, Poland, Ukraine, Belarus and Pakistan.
Social responsibility and educational initiatives
In 2011, Ciklum partnered with other companies to create BIONIC Hill Innovation Park, a Ukrainian innovation park modeled on Silicon Valley.
In September 2012, Ciklum co-launched BIONIC University, the first Ukrainian intercorporate IT university, working on the premises of the National University of Kyiv-Mohyla Academy. The university prepares a new generation of IT specialists who are globally competitive yet aim at professional fulfillment in Ukraine.
In April 2014, Ciklum, together with other IT companies operating in Ukraine, initiated the launch of Brain Basket Foundation to fund free trainings for those who wish to study programming. This initiative is aimed at developing Ukraine's $2 billion IT Industry towards a goal of generating $10 billion in annual revenue and creating 100,000 jobs by 2020. Ciklum has pledged $100,000 to the program.
In January 2018, Ciklum supported first-ever Ukraine House in Davos during World Economic Forum 2018.
Clients and services
Ciklum provides services including custom development, quality engineering, data & analytics, Robotic Process Automation, Product development and Consulting.
Ciklum provides teams, project-based services, and peak resources on a short-term basis. Ciklum's key clients include Jabra, Just Eat, Metro Markets, and Mercedes Pay.
History
Ciklum was founded in Kyiv, Ukraine, in 2002 by Danish native Torben Majgaard, who chaired the board until 2019. Since then, the company has grown to over 3,500 employees.
In 2009, Ciklum bought the main business activities from Mondo's bankruptcy.
In 2011, Ciklum acquired 50% of SCR Gruppen (Denmark).
In 2013, Ciklum acquired Danish IT outsourcing provider Kuadriga.
In 2015, George Soros's Ukrainian Redevelopment Fund acquired a significant stake in Ciklum.
In 2017, Michael Boustridge was appointed CEO.
In 2019, Ciklum raised a new investment led by Dragon Capital with AVentures Capital co-investment.
Since February 2020, Ciklum's CEO has been Kulraj Smagh.
See also
Softserve
EPAM
Eleks
Infopulse Ukraine
DataArt
References
Software companies of Ukraine
Development software companies
Software companies established in 2002
Outsourcing companies
Software companies of Denmark
Software companies of the United Kingdom
2002 establishments in the United Kingdom |
4435592 | https://en.wikipedia.org/wiki/E-mu%20Proteus%20X | E-mu Proteus X | E-MU Proteus X is a virtual sound module produced by E-MU Systems. It is a software-based, sample-based synthesis product that includes the complete sound library of the popular legacy Proteus 2000 MIDI module, as well as additional sounds and samples.
Proteus X LE, Proteus VX, Proteus X, Proteus X2, Emulator X, Emulator X2 and Emulator X3 only work on IBM-compatible PCs. While they work for most people on Windows 2000, Windows XP, and 32-bit Windows Vista, all but Emulator X3 are only tested and currently supported for Windows XP. Only Emulator X3 is tested and officially supported for both XP and Vista, and it is the only version that works as a VSTi in x64 DAW software.
All versions of the Proteus X Software Sound Module can operate as a stand-alone program with 64-MIDI channels or as a VST instrument with 16-MIDI channels.
Proteus X LE, Proteus X, Proteus X2, Emulator X and Emulator X2 are all copy-protected software. Users must also have qualifying E-MU hardware, such as an E-MU digital audio interface, E-MU Xmidi 2x2, or E-MU Xboard keyboard controller, connected, powered on, and installed correctly, as the E-MU hardware also acts as a software copy-protection dongle for the protected software. Not having all of these things in order often results in a failed launch of the program or a streaming engine error message. The E-MU Xmidi 1x1 and E-MU Tracker Pre do not function as a copy-protection dongle.
References
Electronic musical instruments |
2563375 | https://en.wikipedia.org/wiki/Kodak%20EasyShare | Kodak EasyShare | Kodak EasyShare is a sub brand of Eastman Kodak Company products identifying a consumer photography system of digital cameras, snapshot thermal printers, snapshot thermal printer docks, all-in-one inkjet printers, accessories, camera docks, software, and online print services. The brand was introduced in 2001. The brand is no longer applied to all-in-one inkjet printers (now branded "ESP") or online printing services (now simply "Kodak Gallery"). Thermal snapshot printers and printer docks product lines have been discontinued. In 2012, Kodak stopped manufacturing and selling all digital cameras and photo frames.
EasyShare Digital Cameras
There are presently three EasyShare camera lines, "series", that separate the cameras into different classes: EasyShare Point and Shoot (C series), EasyShare Performance (Z series), and EasyShare Sleek & Stylish (M-Series). The original products to use the EasyShare brand were the DX3600 and DX3500 digital camera along with the EasyShare Camera Dock.
Kodak EasyShare DX-Series
The DX series cameras were the first EasyShare models released. The DX series was originally a very basic point-and-shoot camera line, compatible with the original EasyShare Camera Dock. The CX series eventually replaced the lower-end DX models, and the newer DX-Series models had more advanced features, higher megapixel resolutions, and longer zooms. The DX series is now discontinued; the higher-end DX-Series models eventually became the Z-Series. Models in the DX series were the last Kodak consumer digital cameras to use CompactFlash external memory cards. Models include the DX3215, DX3500 (2.2 MP, 38 mm zoom lens), DX3600 (2.2 MP, 35–70 mm zoom lens), DX3900 (3.3 MP, 35–70 mm zoom lens), DX4330 (3.1 MP, 38–114 mm zoom lens), Kodak EasyShare DX4530 (5.2 MP, 38–114 mm zoom lens), DX4900 (4.1 MP, 35–70 mm zoom lens), Kodak EasyShare DX6440 (4.23 MP, 33–132 mm zoom lens), Kodak EasyShare DX6490 (4.23 MP, 38–380 mm zoom lens), DX7440 (4.0 MP, 33–132 mm zoom lens), Kodak Zoom Digital Camera DX7590 (5.0 MP, 38–380 mm zoom lens), and the DX7630 (6.2 MP, 39–117 mm zoom lens).
Kodak EasyShare CX-Series
The CX series is now discontinued, replaced by the C series. The CX series grew out of the DX series. At the time, it was the range of the lowest-priced, most basic point and shoot cameras, typically with no more than a 3× optical zoom.
Kodak EasyShare C-Series
The C-series is Kodak's current line of lower-priced, point and shoot, entry-level digital cameras.
Kodak EasyShare Z-Series
The Z-series is the current high-zoom and performance-oriented range of consumer digital cameras, replacing the original DX series. Typically, Z-Series cameras have higher optical zooms than any other series. The highest optical zoom camera offered by Kodak is the Z990 with a 30X Optical Zoom.
Kodak EasyShare V-Series
The V-Series was another style-oriented range of consumer digital cameras, replacing the original LS series. The V-Series had a number of innovations, such as dual-lens technology, first introduced with the V570. The V-Series line has now been discontinued, superseded by higher-end M-Series cameras.
Kodak EasyShare P-Series
The P-Series was Kodak's "Performance" series intended to bring DSLR-like features to a consumer model. The series is now discontinued, superseded by higher-end Z-Series models. These were the only consumer models to leverage an external flash, with the exception of the Z980.
Kodak EasyShare-One Series
The EasyShare-One series comprised the first consumer Wi-Fi digital cameras, which used an SDIO card to automatically upload pictures at Wi-Fi hotspots. The EasyShare-One series is now discontinued.
Kodak EasyShare M-Series
The EasyShare M-Series was originally a blend between thinner point-and-shoot cameras (C series) and stylish cameras (V series), and is now positioned as "Sleek and Stylish" following the discontinuation of the V-Series. M-Series cameras are usually available in a variety of colors and generally have features not available in the C-Series line.
To promote the M-Series, which features the exclusive share button for social network media sharing, Kodak announced their "So Kodak" marketing campaign. To appeal to young and socially connected consumers, the campaign features urban artists Drake, Pitbull, and Trey Songz.
EasyShare Digital Frames
Kodak EasyShare SV-Series
The original line of digital frames that played pictures and videos (replaced by M-Series).
Kodak EasyShare EX-Series
The original line of digital frames that included the features of SV-Series frames but included wireless (Wi-Fi) capabilities (replaced by W-Series).
Kodak EasyShare M-Series
The M-series line of "multimedia" digital frames play pictures and videos.
Kodak EasyShare W-Series
The W-series line of "wireless" digital frames features Wi-Fi connectivity to a home computer or the internet.
Kodak EasyShare D-Series
The D-series line of "decor" digital frames allow mounting with any off-the-shelf standard 8x10 frame.
Kodak EasyShare S-Series
The S-series currently designates digital frames that are "cordless" in that they have a rechargeable battery allowing viewing without a power cable for several hours. There was a much older frame, the S510, that was not cordless and predated the P-Series.
Kodak EasyShare P-Series
The P-series line of digital frames stands for "Photo"; these frames can only be used for pictures and not multimedia.
Other products
The EasyShare brand also was incorporated with the original 5000-series all-in-one inkjet printers (superseded by the ESP line), thermal photo printers and printer docks (now discontinued), and camera docks.
EasyShare Software
EasyShare Software
Kodak EasyShare software is used to transfer and catalog images from EasyShare camera models and can also be used with existing images (in .gif, .png, .jpg, or .tiff format) and non-Kodak digital cameras. The most recent version of Kodak EasyShare software is version 8.3, which includes support for Windows 7. Included in the latest versions is the ability to upload pictures and videos to Facebook, YouTube and Kodak Gallery. Other features include the ability to rate, tag, and caption pictures (using industry tagging standards on the files themselves), online print ordering facilities, photo enhancement and alteration capabilities, and home printing page layout control.
As of September 5, 2012, the software is no longer available for download from Kodak's support website.
The updater component of the software is powered by BackWeb, which, though usually suspect for its spyware associations, is said to be benign in this case.
In version 6 of EasyShare software, the Bonjour software component from Apple is installed for remote service discovery, but serves no useful purpose for older models; it can be removed using "Add & remove programs" without impeding functionality.
EasyShare Custom Creations
This software, powered by RocketLife, is now discontinued. It was a desktop application that allowed the user to create a personalized gift (ex. Photo Book) and burn the applicable files to a CD. Ordering and fulfillment was handled by dropping the CD off at a retailer.
References
External links
Official Kodak page for Easyshare cameras
Kodak.com
Photography articles needing expert attention
Products introduced in 2001 |
28180929 | https://en.wikipedia.org/wiki/Opower | Opower | Opower was an American company, founded in 2007, that provided a software-as-a-service customer engagement platform for utilities. It existed as an independent corporation until its acquisition by Oracle Corporation in 2016. The Opower product line is now under the Oracle Utilities global business unit.
History
Founders
Opower was founded in 2007 in Arlington, Virginia by two Harvard University graduates, Dan Yates and Alex Laskey. The two partners met as first-year students at Lowell House at Harvard and later reconnected while living in San Francisco. Prior to Opower, Laskey had worked on a political campaign involving energy issues, and around that time he started reading the book Influence: The Psychology of Persuasion (1983) by Robert B. Cialdini, which outlines what influenced Southern Californians to save energy. In the book there was conclusive evidence that "social proof" (part of nudge theory) worked, a concept grounded in the principle of normative social behavior: that people can be behaviorally influenced by what others do. Yates and Laskey realized that, at the time, the energy market was focused on new technologies, and that by applying information services, the market could be disrupted.
The initial service they developed was the energy-efficiency campaigns, detailed home-energy reports which incorporated behavioral science techniques. The reports include targeted tips that seek to motivate customers to lower their energy consumption to the "normal" neighborhood rate. The reports also feature smiley-face emoticons for the most energy-efficient homes, a feature that Opower added after research showed that some consumers who used less energy than average started using more once they knew the norm. The reports also compare energy usage among neighbors with similarly sized houses.
President Obama's visit (2010)
President Barack Obama visited Opower headquarters in Arlington on March 5, 2010. He touted the company as an economic "success story" amid a troubled economy and as a "great emblem" for clean energy jobs. During the visit, Obama said the company's growth is "a model of what we want to be seeing all across the country."
He made the visit two months after announcing a "$2.3 billion program" of tax credits for "green jobs." "The work you do here...is making homes more energy efficient, it's saving people money, it's generating jobs, and it's putting America on the path to a clean energy future", Obama said at Opower. The White House released a video of Obama's appearance.
Later work
In July 2010, Opower opened a second office in San Francisco and had Fred Butler, a former president of the National Association of Regulatory Utility Commissioners (NARUC) and commissioner of the New Jersey Board of Public Utilities, join its advisory board.
On September 1, 2010, the World Economic Forum announced the company as a Technology Pioneer for 2011.
In November 2010, the company announced its third round of venture capital funding, a $50 million investment led by Accel Partners and Kleiner Perkins Caufield & Byers, to accelerate its expansion.
In 2012, Opower launched a popular data blog, called the Outlier, which presented statistical analysis of the company's energy data storehouse, spanning more than 50 million households worldwide. Several analyses attracted mainstream news coverage, including the impact of the Super Bowl on America's energy use, the electricity usage patterns of electric car owners, and variation in the compass orientation of rooftop solar panels.
In 2013, Opower added former utility CEOs John Rowe and Dick Kelly, and former White House Director, Carol Browner, to its advisory board. Tom Brady, former chairman of BGE, was named chairman of the advisory board.
Opower held its initial public offering on April 4, 2014.
On May 2, 2016, Opower announced that it was being acquired by Oracle for $532 million.
Science and technology
Opower's software uses statistical algorithms to perform pattern recognition analysis from data in order to derive information for utility customers. Without any devices installed in the home, the platform can perform usage-disaggregation analysis, presenting end users information such as heating or cooling usage apart from overall usage, and thus allowing them to spot additional opportunities to save money. Since the launch of the Opower platform and service, the average customer using Opower has cut energy usage by 2% to 5%, reducing 13 billion lbs of CO2 emissions.
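One common form of usage disaggregation, sketched below purely for illustration, regresses daily consumption on heating degree-days to separate weather-driven heating load from baseload; the data are invented and this is not a description of Opower's actual algorithms.

#include <stdio.h>

/* Illustrative degree-day regression: fit daily usage = base + slope * HDD
   by ordinary least squares, then report the heating share of total usage. */
int main(void) {
    double hdd[7]   = {0, 2, 5, 9, 12, 7, 3};        /* heating degree-days per day */
    double usage[7] = {21, 24, 29, 36, 41, 32, 26};  /* kWh per day */
    int n = 7;

    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx += hdd[i]; sy += usage[i];
        sxx += hdd[i] * hdd[i]; sxy += hdd[i] * usage[i];
    }
    double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);  /* kWh per degree-day */
    double base  = (sy - slope * sx) / n;                      /* non-heating baseload */

    double heating = slope * sx;   /* estimated weather-driven kWh over the week */
    printf("baseload %.1f kWh/day, heating %.1f kWh of %.1f kWh total (%.0f%%)\n",
           base, heating, sy, 100.0 * heating / sy);
    return 0;
}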
Awards and honors
In May 2013, Opower was named to the Inaugural CNBC Disruptors 50 List.
In November 2013, Opower was named the #1 fastest-growing tech company in the DC region, and #20 in the US, by Deloitte.
See also
Energy management software
Smart grid
Clean technology
References
External links
Video: Alex Laskey: How behavioral science can lower your energy bill (2013) from TED on YouTube
Video: Energy saving: Deena Rosen at TEDxUtrecht (2014) from YouTube, former head of design at Opower
Podcast: Dan Yates, Co-Founder & former CEO of Opower (2019) on Apple Podcasts
Article: Opower CEO Dan Yates, on Saving Power for the People (2014) from The Mercury News
Companies based in Arlington County, Virginia
American companies established in 2007
Software companies based in Virginia
Environmental technology
Energy conservation in the United States
Companies formerly listed on the New York Stock Exchange
2014 initial public offerings
Oracle acquisitions
2016 mergers and acquisitions
Software companies of the United States
2007 establishments in Virginia
Software companies established in 2007
Software companies disestablished in 2016
2016 disestablishments in Virginia |
1866564 | https://en.wikipedia.org/wiki/File%20Alteration%20Monitor | File Alteration Monitor | In computing, the File Alteration Monitor, also known as FAM and sgi_fam, provides a subsystem developed by Silicon Graphics for Unix-like operating systems. The FAM subsystem allows applications to watch certain files and be notified when they are modified. This greatly aids the applications, because before FAM existed, such applications would have to read the disk repeatedly to detect any changes, which resulted in high disk and CPU usage.
For example, a file manager application can detect if some file has changed and can then update a displayed icon and/or filename.
The FAM system consists of two parts:
famd — the FAM Daemon, which provides notifications and listens for requests. Administrators can configure it by editing the file /etc/fam.conf
libfam — the interface to the client
Although FAM may seem unnecessary now that many newer kernels include built-in notification support (inotify in Linux, for example), using FAM provides two benefits:
Consistently using FAM enables applications to work on a greater variety of platforms, agnostic of the kernel.
FAM is network-aware, and if a monitor is started on an NFS share, it will attempt to contact a FAM server on the NFS server and have it monitor the file locally, which is more efficient.
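A minimal client sketch of the libfam interface is shown below; it assumes the conventional libfam calls (FAMOpen, FAMMonitorFile, FAMNextEvent, FAMClose) and the FAMChanged event code, and is meant only to illustrate the request and notification flow rather than any particular application.

#include <stdio.h>
#include <fam.h>                /* libfam client header */

int main(void) {
    FAMConnection fc;
    FAMRequest fr;
    FAMEvent ev;

    if (FAMOpen(&fc) < 0) {     /* connect to the famd daemon */
        fprintf(stderr, "cannot contact famd\n");
        return 1;
    }

    /* Ask famd to watch one file; the last argument is per-request user data. */
    FAMMonitorFile(&fc, "/etc/fam.conf", &fr, NULL);

    /* FAMNextEvent blocks until famd reports an event for a monitored file. */
    while (FAMNextEvent(&fc, &ev) > 0) {
        if (ev.code == FAMChanged)
            printf("%s was modified\n", ev.filename);
    }

    FAMClose(&fc);
    return 0;
}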
The main problem with FAM is that during the creation of a large number of files (for example during the first login in a desktop environment) it slows down the entire system, using many CPU cycles.
See also
kqueue (FreeBSD)
inotify (Linux)
dnotify (Linux; predecessor of inotify)
Gamin (Linux, FreeBSD)
FSEvents (Mac OS)
portmap (SunOS)
TCP Wrapper/libwrap
References
External links
The FAM homepage
The Watchful Eye of FAM – Linuxdevcenter article
Unix file system technology |
33843132 | https://en.wikipedia.org/wiki/Access%20%28company%29 | Access (company) | ACCESS, founded in April 1979 and incorporated in February 1984 in Tokyo, Japan, by Arakawa Toru and Kamada Tomihisa, is a company providing a variety of software for connected and mobile devices, such as mobile phones, PDAs, video game consoles and set-top boxes.
The company makes the NetFront software series, which has been deployed in over 1 billion devices, representing over 2,000 models, as of the end of January 2011, and which has been used as a principal element of the widely successful i-mode data service of NTT DoCoMo in Japan. NetFront is also used by many consumer electronic devices beyond mobile phones, such as the Sony PSP and the Amazon Kindle, both of which have their web browsers powered by NetFront. In addition, the NetFront Browser and related products are used on a wide variety of mobile phones, including those from Nokia, Samsung, LG Corp., Motorola, Sony Ericsson and others.
In September 2005, ACCESS acquired PalmSource, the owner of the Palm OS and BeOS. The company has used these assets and expertise to create the Access Linux Platform, an open-source Linux-based platform for smartphones and other mobile devices, with some proprietary parts including the user interface and some middleware. The Access Linux Platform 3.0 was released to the market in October 2008. Two of the world's largest operators, NTT DoCoMo and Orange, have announced support for Access Linux Platform-based handsets.
In March 2006, ACCESS acquired IP Infusion, Inc., a provider of intelligent networking software, providing Layer 2 and Layer 3 carrier-class switching and routing as well as a comprehensive forwarding plane implementation supporting L2, L3 (IPv4 & v6), multicast and MPLS/Traffic Engineering.
ACCESS is active in open source-related efforts, including memberships in the Linux Foundation and the Linux Phone Standards Forum. In 2007, ACCESS employees presented at GUADEC (which the company also sponsored) and the Ottawa Linux Symposium.
ACCESS employs approximately 657 people globally, with headquarters in Tokyo, Japan and facilities in the USA (Sunnyvale), Germany (Oberhausen), Korea (Seoul), the PRC (Beijing) and Taiwan (Taipei).
The company reports consolidated revenues of ¥9.4 billion (for the fiscal year ending January 2020).
See also
Qtopia
Symbian OS
Windows Mobile
References
External links
IP Infusion
Companies based in Tokyo
Companies listed on the Tokyo Stock Exchange
Software companies established in 1979
Software companies of Japan
Japanese companies established in 1979
Japanese brands |
17897132 | https://en.wikipedia.org/wiki/Shotgun%20surgery | Shotgun surgery | Shotgun surgery is an antipattern in software development and occurs where a developer adds features to an application codebase which span a multiplicity of implementors or implementations in a single change. This is common practice in many programming scenarios, as a great amount of programming effort is usually expended on adding new features to increase the value of programming assets. As a consequence, these new features may require adding code in several places simultaneously where the code itself looks very similar and may only have slight variations. Owing to the fast-paced nature of commercial software development, there may not be sufficient time to remodel (or refactor) a system to support the new features trivially. As a consequence, the practice of cut and paste coding is prevalent; the code is written in a single place then simply copied to all other places where that implementation is required (with any required changes applied in-place).
This practice is generally frowned on by the refactoring community as a direct violation of the Once and Only Once principle – ultimately any change to the new functionality may require widespread changes. Further, any potential software bug in this new feature will be replicated many-fold and can make bug fixing particularly difficult and tedious. Even in the absence of copied code, the implementations are guaranteed to be very similar and just as prone to requirements change or bug fixing. This form of software development tends to favour short-term improvement (in the form of additional features) at the cost of long-term maintainability and stability.
Example
The canonical example of this practice is logging which generally adds prologue code to many functions simultaneously, for example:
void MyFunc() {
...
}
void MyFunc2() {
...
}
...
void MyFuncN() {
...
}
Could be transformed to:
void MyFunc() {
printf("Entering MyFunc\n");
...
}
void MyFunc2() {
printf("Entering MyFunc2\n");
...
}
...
void MyFuncN() {
printf("Entering MyFuncN\n");
...
}
Here a single requirement has added similar code to several functions simultaneously. As such any change in requirements here (namely adding line numbers to the log) would now require a considerable effort. Shotgun surgery is not synonymous with cut and paste coding, as highlighted by this trivial example. The practice of copying code can be viewed as a "means to an end", where shotgun surgery is merely an "end" (i.e. there are many ways to reach the same conclusion).
Consequences of shotgun surgery
The concerns with this style are by-and-large the same as those for any duplication in a software system; that is, duplicating the same logic in many places can vastly increase the costs of making changes to the same logic later. Some of the aforementioned costs are measurable, others are not (at least not trivially). There is also some evidence that this antipattern is correlated with higher defect rates.
Typically some combination of the following is to be expected:
Increased developer effort and reduced throughput
Associated monetary cost of the above (as in commercial development)
Psychological effects and potential neglect of code
Of these the most insidious are the psychological effects (e.g. see Broken Windows Theory) which can exponentially lead to software rot. When uncontrolled this can cause entire codebases to become unmaintainable. Generally the only solution to this problem is to completely rewrite the code (at substantial cost).
Mitigation
Aspect oriented programming (AOP) aims at reducing these forms of invasive modifications in favour of adopting an "aspect" or "concern". The solutions take the form of boilerplate code which can be applied over a domain of functions simultaneously (through the process of weaving) which vastly reduces the amount of duplicated code. The use of Domain Specific Languages is also becoming more widespread where light-weight compilers are written to generate most of the duplicated code on the behalf of the programmer. Both methods fall into the broader categories of code generation and automation.
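As a minimal sketch of the code-generation idea, the duplicated logging prologue from the example above can be defined in a single place and expanded at each call site, for instance with a C macro; the macro name is invented here, and this is only one lightweight alternative to the AOP weaving described above.

#include <stdio.h>

/* The logging concern is defined in exactly one place; a later requirement
   change (such as adding line numbers) now touches only this macro. */
#define LOG_ENTRY() printf("Entering %s (line %d)\n", __func__, __LINE__)

void MyFunc(void) {
    LOG_ENTRY();
    /* ... */
}

void MyFunc2(void) {
    LOG_ENTRY();
    /* ... */
}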
See also
Shotgun debugging
Technical debt
Viscosity, a measurement of resistance to change for the design of notations.
References
Anti-patterns |
50349598 | https://en.wikipedia.org/wiki/Information-Technology%20Engineers%20Examination | Information-Technology Engineers Examination | The Information-Technology Engineers Examination (ITEE) is a group of information technology examinations administered by the Information Technology Promotion Agency, Japan (IPA). The ITEE was introduced in 1969 by Japan's Ministry of International Trade and Industry (MITI), and it has since changed hands twice, first to the Japan Information Processing Development Corporation (JIPDEC) in 1984, and then to the IPA in 2004. At first there were two examination categories, one for lower-level programmers and one for upper-level programmers, and over the years the number of categories increased to twelve as of 2016.
The examinations are carried out during the course of one day; candidates sit a morning test and an afternoon test. The morning test assesses the breadth of the candidate's subject-matter knowledge, and the afternoon test assesses the candidate's ability to apply that knowledge. The examinations have a low pass rate: between 1969 and 2010 15.4 million people took them, but only 1.7 million were successful (an average success rate of 11 percent).
The questions are developed by a committee of experts, and are continually updated to reflect changes in the computer industry. The examination categories are also subject to change based upon industry trends. The ITEE examinations are recognized as qualifications in several Asian countries, including India, Singapore, South Korea, China, the Philippines, Thailand, Vietnam, Myanmar, Taiwan, and Bangladesh.
History
The Information Technology Engineers Examination was founded in 1969 as a national examination by Japan's Ministry of International Trade and Industry (MITI). At first, two categories of examination were offered: Class I Information Technology Engineer, aimed at upper-level programmers, and Class II Information Technology Engineer, aimed at lower level programmers. These two categories were followed in 1971 by the Special Information Technology Engineer Examination.
In 1984, MITI (later reorganized as the Ministry of Economy, Trade and Industry, or METI) handed over the administration of the examinations to the Japan Information Processing Development Corporation (JIPDEC). JIPDEC received most of its funding from the ministry, and while the two organizations were technically independent, they shared close ties with each other. JIPDEC founded the Japan Information Technology Engineers Examination Center (JITEC) to oversee the actual running of the examinations.
The 1980s saw the introduction of two new examination categories: the Information Technology Systems Audit Engineer Examination in 1986, and the Online Information Technology Engineer Examination in 1988. The former was aimed at systems auditors, and the latter at network engineers.
The examination categories underwent a major upheaval in 1994. The Special Information Technology Engineer Examination was expanded into four separate examinations: the Applications Engineer Examination, the Systems Analyst Examination, the Project Manager Examination, and the Systems Administration Engineer Examination. The Online Information Technology Engineer Examination became the Network Specialist Examination, the Information Technology Systems Auditor Examination became the Systems Auditor Examination, and three new categories of examination were introduced: the Production Engineer Examination, the Database Specialist Examination, and the Basic Systems Administrator Examination. These were followed by a further two new categories in 1996: the Advanced Systems Administrator Examination and the Applied Microcontroller Systems Engineer Examination.
There was another major change to the categories in 2001. The Class I Information Technology Engineer Examination became the Fundamental Information Technology Engineer Examination, and the Class II Information Technology Engineer became the Applied Information Technology Engineer Examination. The Production Engineer Examination was discontinued, and the Information Security Administrator Examination was introduced.
In 2004, the administration of the examinations changed hands from JIPDEC to the Information-Technology Promotion Agency (IPA). This was followed in 2006 by the introduction of a new examination category, the Technical Engineer (Information Security) Examination.
2009 saw the introduction of a new test, the IT Passport Examination, while other examination categories were consolidated. The Systems Analyst Examination and the Advanced Systems Administrator Examination were merged to form the IT Strategist Examination, and the Technical Engineer (Information Security) Examination and the Information Security Administrator Examination were merged to form the Information Security Specialist Examination.
Format
The examinations are all carried out in one day, with a morning test and an afternoon test. The morning test is multiple choice and aims to assess the candidate's breadth of knowledge of the material being examined. The afternoon test assesses the candidate's ability to apply that knowledge through a series of case studies and essay questions, and also aims to test the candidate's past experience.
After the examinations are over, candidates are allowed to take their question papers home with them, and the answers to some of the questions are made available online. Candidates who pass the examinations receive certificates from METI. These certificates show the date that they were awarded, but they have no expiration date.
Between 1969 and 2010, 15.4 million people took one of the ITEE examinations, and only 1.7 million people passed, giving an average success rate of 11 percent.
Categories
There are 13 examination categories, divided into four levels.
Administration
JITEC bases the scope and the difficulty of the exams on the advice of a committee of experts from the computer industry and from academia. This committee investigates the skills currently used by engineers in the relevant examination category, and uses that to base their recommendations on. In this way, the questions in the exams are kept up to date with new and evolving technologies. The knowledge areas that are tested are taken from software engineering, information systems and computer science.
The examination categories are also constantly reviewed to ensure that they are both relevant to current trends in information technology and to keep them consistent with previous exams.
The examination questions themselves are also developed by a committee consisting of around 400 experts. Subcommittees are put in charge of question development, checking, and question selection, and are given independent authority to create questions. New questions are made for each exam, but some questions intended to test the breadth of candidates' knowledge may be altered and reused.
In addition to its national examination status in Japan, the ITEE is also recognized as a professional credential in several Asian countries, including India, Singapore, South Korea, China, the Philippines, Thailand, Vietnam, Myanmar, Taiwan, and Bangladesh.
References
External links
Information technology in Japan
Information technology qualifications
Testing and exams in Japan |
4831527 | https://en.wikipedia.org/wiki/Antic%20Software | Antic Software | Antic Software was a software company associated with Antic, a magazine for the Atari 8-bit family of computers. Bound into issues of the magazine, the Antic Software catalog initially sold Atari 8-bit games, applications, and utilities from the recently defunct Atari Program Exchange. Original submissions were later added, as well as public domain collections, with all software provided on self-documented disk. When the Atari ST was released, it became a mixture of Atari 8-bit and Atari ST software and sold some major Atari ST titles such as CAD-3D. The magazine insert changed names several times, eventually being branded as The Catalog.
Antic assistant editor Gigi Bisson wrote in the May 1986 issue that, "[Antic Software] kept the magazine afloat during the lean year," referring to the period following Atari, Inc.'s financial collapse.
History
When the Atari Program Exchange (APX) was shut down by Atari CEO James J. Morgan in 1984, Gary Yost convinced Antic magazine's publisher, James Capparell, to create Antic Software. Yost contacted many of the programmers from APX to re-publish their works with Antic. The APX software was rebranded in mid-1984 as APX Classics from Antic. In 1985 the magazine insert was called Antic Arcade (despite including more than games). By 1986 it was branded The Catalog with the emphasis on Atari ST applications.
Software
Atari 8-bit family
Colourspace, light synthesizer from Jeff Minter.
Dandy Dungeon, renamed version of Atari Program Exchange game Dandy.
HomeCard, filing system written by Preppie! programmer Russ Wetmore and Sparky Starks.
Mars Mission II, sequel to Caverns of Mars. Another sequel, Phobos, was previously sold through the Atari Program Exchange, then later by Antic Software.
RAMbrandt, image editor.
Atari ST
CAD-3D, 3D modeling system and related add-ons. CAD-3D is a precursor to 3D Studio MAX.
Spectrum 512, Atari ST paint program allowing 512 colors per image instead of the standard 16.
Legacy
Gary Yost went on to form The Yost Group which created and licensed products to Autodesk: Autodesk Animator, Autodesk Animator Pro, Autodesk 3D Studio, and Autodesk 3DS MAX. 3D Studio is a direct successor of CAD-3D.
References
Defunct software companies
Atari 8-bit family |
1472206 | https://en.wikipedia.org/wiki/Economy%20of%20India | Economy of India | The economy of India is a middle-income developing mixed economy. It is the world's sixth-largest economy by nominal GDP and the third-largest by purchasing power parity (PPP). According to the International Monetary Fund (IMF), on a per capita income basis, India ranked 145th by GDP (nominal) and 122nd by GDP (PPP). From independence in 1947 until 1991, successive governments promoted protectionist economic policies, with extensive state intervention and economic regulation. This is characterised as dirigisme, in the form of the License Raj. The end of the Cold War and an acute balance of payments crisis in 1991 led to the adoption of a broad economic liberalisation in India. Since the start of the 21st century, annual average GDP growth has been 6% to 7%, and from 2013 to 2018, India was the world's fastest-growing major economy, surpassing China. Historically, India was the largest economy in the world for most of the two millennia from the 1st until the 19th century.
The long-term growth perspective of the Indian economy remains positive due to its young population and corresponding low dependency ratio, healthy savings and investment rates, increasing globalisation in India, and integration into the global economy. The economy slowed in 2017, due to the shocks of "demonetisation" in 2016 and the introduction of the Goods and Services Tax in 2017. Nearly 60% of India's GDP is driven by domestic private consumption. The country remains the world's sixth-largest consumer market. Apart from private consumption, India's GDP is also fuelled by government spending, investment, and exports. In 2019, India was the world's ninth-largest importer and the twelfth-largest exporter. India has been a member of the World Trade Organization since 1 January 1995. It ranks 63rd on the Ease of Doing Business Index and 68th on the Global Competitiveness Report. With 500 million workers, the Indian labour force was the world's second-largest as of 2019. India has one of the world's highest numbers of billionaires and extreme income inequality. Since India has a vast informal economy, barely 2% of Indians pay income taxes.
During the 2008 global financial crisis, the economy faced a mild slowdown. India undertook stimulus measures (both fiscal and monetary) to boost growth and generate demand. In subsequent years, economic growth revived. According to a 2017 PricewaterhouseCoopers (PwC) report, India's GDP at purchasing power parity could overtake that of the United States by 2050. According to the World Bank, to achieve sustainable economic development, India must focus on public sector reform, infrastructure, agricultural and rural development, removal of land and labour regulations, financial inclusion, spurring private investment and exports, education, and public health.
In 2020, India's ten largest trading partners were the United States, China, the United Arab Emirates (UAE), Saudi Arabia, Switzerland, Germany, Hong Kong, Indonesia, South Korea, and Malaysia. In 2019–20, the foreign direct investment (FDI) in India was $74.4 billion. The leading sectors for FDI inflows were the service sector, the computer industry, and the telecom industry. India has free trade agreements with several nations and blocs, including ASEAN, SAFTA, Mercosur, South Korea, and Japan, which are either in effect or at the negotiating stage.
The service sector makes up 50% of GDP and remains the fastest-growing sector, while the industrial and agricultural sectors employ a majority of the labour force. The Bombay Stock Exchange and the National Stock Exchange are among the world's largest stock exchanges by market capitalisation. India is the world's sixth-largest manufacturer, representing 3% of global manufacturing output, and employs over 57 million people. Nearly 66% of India's population is rural, and contributes about 50% of India's GDP. India has the world's fourth-largest foreign-exchange reserves, worth $632.952 billion. It has a high public debt of 86% of GDP, while its fiscal deficit stood at 9.5% of GDP. India's government-owned banks faced mounting bad debt, resulting in low credit growth. Simultaneously, the NBFC sector has been engulfed in a liquidity crisis. India faces moderate unemployment, rising income inequality, and a drop in aggregate demand. India's gross domestic savings rate stood at 30.1% of GDP in FY 2019. In recent years, independent economists and financial institutions have accused the government of fudging various economic data, especially GDP growth. As of 2021, India's GDP in Q1 FY22 (Rs 32.38 lakh crore) was nearly nine per cent below the Q1 FY20 level (Rs 35.67 lakh crore).
India is the world's largest manufacturer of generic drugs, and its pharmaceutical sector fulfils over 50% of the global demand for vaccines. The Indian IT industry is a major exporter of IT services, with $191 billion in revenue, and employs over four million people. India's chemical industry is extremely diversified and estimated at $178 billion. The tourism industry contributes about 9.2% of India's GDP and employs over 42 million people. India ranks second globally in food and agricultural production, while agricultural exports were $35.09 billion. The construction and real estate sector ranks third among the 14 major sectors in terms of direct, indirect, and induced effects in all sectors of the economy. The Indian textiles industry is estimated at $100 billion and contributes 13% of industrial output and 2.3% of India's GDP, while employing over 45 million people directly. India's telecommunication industry is the world's second-largest by the number of mobile phone, smartphone, and internet users. It is the world's 23rd-largest oil producer and the third-largest oil consumer. The Indian automobile industry is the world's fifth-largest by production. India has a retail market worth $1.17 trillion, which contributes over 10% of India's GDP. It also has one of the world's fastest growing e-commerce markets. India has the world's fourth-largest reserves of natural resources, with the mining sector contributing 11% of the country's industrial GDP and 2.5% of total GDP. It is also the world's second-largest coal producer, the second-largest cement producer, the second-largest steel producer, and the third-largest electricity producer.
History
For nearly 1,700 years from the year 1 AD, India was the world's largest economy, constituting 35 to 40% of world GDP. A combination of protectionist, import-substitution, Fabian socialist, and social democratic-inspired policies governed India for some time after the end of British rule. The economy was then characterised by dirigisme, with extensive regulation, protectionism, public ownership of large monopolies, pervasive corruption and slow growth. Since 1991, continuing economic liberalisation has moved the country towards a market-based economy. By 2008, India had established itself as one of the world's fastest-growing economies.
Ancient and medieval eras
Indus Valley Civilisation
The citizens of the Indus Valley Civilisation, a permanent settlement that flourished between 2800 BC and 1800 BC, practised agriculture, domesticated animals, used uniform weights and measures, made tools and weapons, and traded with other cities. Evidence of well-planned streets, a drainage system, and water supply reveals their knowledge of urban planning, which included the first-known urban sanitation systems and the existence of a form of municipal government.
West Coast
Maritime trade was carried out extensively between South India and Southeast and West Asia from early times until around the fourteenth century AD. Both the Malabar and Coromandel Coasts were the sites of important trading centres from as early as the first century BC, used for import and export as well as transit points between the Mediterranean region and southeast Asia. Over time, traders organised themselves into associations which received state patronage. Historians Tapan Raychaudhuri and Irfan Habib claim this state patronage for overseas trade came to an end by the thirteenth century AD, when it was largely taken over by the local Parsi, Jewish, Syrian Christian, and Muslim communities, initially on the Malabar and subsequently on the Coromandel coast.
Silk Route
Other scholars suggest trading from India to West Asia and Eastern Europe was active between the 14th and 18th centuries. During this period, Indian traders settled in Surakhani, a suburb of greater Baku, Azerbaijan. These traders built a Hindu temple, which suggests commerce was active and prosperous for Indians by the 17th century.
Further north, the Saurashtra and Bengal coasts played an important role in maritime trade, and the Gangetic plains and the Indus valley housed several centres of river-borne commerce. Most overland trade was carried out via the Khyber Pass connecting the Punjab region with Afghanistan and onward to the Middle East and Central Asia. Although many kingdoms and rulers issued coins, barter was prevalent. Villages paid a portion of their agricultural produce as revenue to the rulers, while their craftsmen received a part of the crops at harvest time for their services.
Mughal era / Rajput era / Maratha era (1526–1820)
The Indian economy was large and prosperous under the Mughal Empire, up until the 18th century. Sean Harkin estimates that China and India may have accounted for 60 to 70 percent of world GDP in the 17th century. The Mughal economy functioned on an elaborate system of coined currency, land revenue and trade. Gold, silver and copper coins were issued by the royal mints, which functioned on the basis of free coinage. The political stability and uniform revenue policy resulting from a centralised administration under the Mughals, coupled with a well-developed internal trade network, ensured that India, before the arrival of the British, was to a large extent economically unified, despite having a traditional agrarian economy characterised by a predominance of subsistence agriculture. Agricultural production increased under Mughal agrarian reforms, with Indian agriculture being advanced compared to Europe at the time, as seen in the widespread use of the seed drill among Indian peasants before its adoption in European agriculture, and possibly higher per-capita agricultural output and standards of consumption than in 17th-century Europe.
The Mughal Empire had a thriving industrial manufacturing economy, with India producing about 25% of the world's industrial output up until 1750, making it the most important manufacturing center in international trade. Manufactured goods and cash crops from the Mughal Empire were sold throughout the world. Key industries included textiles, shipbuilding, and steel, and processed exports included cotton textiles, yarns, thread, silk, jute products, metalware, and foods such as sugar, oils and butter. Cities and towns boomed under the Mughal Empire, which had a relatively high degree of urbanization for its time, with 15% of its population living in urban centres, higher than the percentage of the urban population in contemporary Europe at the time and higher than that of British India in the 19th century.
In early modern Europe, there was significant demand for products from Mughal India, particularly cotton textiles, as well as goods such as spices, peppers, indigo, silks, and saltpeter (for use in munitions). European fashion, for example, became increasingly dependent on Mughal Indian textiles and silks. From the late 17th century to the early 18th century, Mughal India accounted for 95% of British imports from Asia, and the Bengal Subah province alone accounted for 40% of Dutch imports from Asia. In contrast, there was very little demand for European goods in Mughal India, which was largely self-sufficient. Indian goods, especially those from Bengal, were also exported in large quantities to other Asian markets, such as Indonesia and Japan. At the time, Mughal Bengal was the most important center of cotton textile production.
In the early 18th century, the Mughal Empire declined, as it lost western, central and parts of south and north India to the Maratha Empire, which integrated and continued to administer those regions. The decline of the Mughal Empire led to decreased agricultural productivity, which in turn negatively affected the textile industry. The subcontinent's dominant economic power in the post-Mughal era was the Bengal Subah in the east, which continued to maintain thriving textile industries and relatively high real wages. However, Bengal was devastated by the Maratha invasions and then by British colonisation in the mid-18th century. After the loss at the Third Battle of Panipat, the Maratha Empire disintegrated into several confederate states, and the resulting political instability and armed conflict severely affected economic life in several parts of the country, although this was mitigated by localised prosperity in the new provincial kingdoms. By the late eighteenth century, the British East India Company had entered the Indian political theatre and established its dominance over other European powers. This marked a determinative shift in India's trade, though it had a weaker impact on the rest of the economy.
British era (1793–1947)
From the beginning of the 19th century, the British East India Company's gradual expansion and consolidation of power brought a major change in taxation and agricultural policies, which tended to promote commercialisation of agriculture with a focus on trade, resulting in decreased production of food crops, mass impoverishment and destitution of farmers, and in the short term, led to numerous famines. The economic policies of the British Raj caused a severe decline in the handicrafts and handloom sectors, due to reduced demand and dipping employment. After the removal of international restrictions by the Charter of 1813, Indian trade expanded substantially with steady growth. The result was a significant transfer of capital from India to England, which, due to the colonial policies of the British, led to a massive drain of revenue rather than any systematic effort at modernisation of the domestic economy.
Under British rule, India's share of the world economy declined from 24.4% in 1700 down to 4.2% in 1950. India's GDP (PPP) per capita was stagnant during the Mughal Empire and began to decline prior to the onset of British rule. India's share of global industrial output declined from 25% in 1750 down to 2% in 1900. At the same time, the United Kingdom's share of the world economy rose from 2.9% in 1700 up to 9% in 1870. Following its conquest of Bengal in 1757, the British East India Company forced open the large Indian market to British goods, which could be sold in India without tariffs or duties, while local Indian producers were heavily taxed. In Britain, protectionist policies such as bans and high tariffs were implemented to restrict Indian textiles from being sold there, whereas raw cotton was imported from India without tariffs to British factories, which manufactured textiles from Indian cotton and sold them back to the Indian market. British economic policies gave Britain a monopoly over India's large market and cotton resources. India served as both a significant supplier of raw goods to British manufacturers and a large captive market for British manufactured goods.
British territorial expansion in India throughout the 19th century created an institutional environment that, on paper, guaranteed property rights among the colonisers, encouraged free trade, and created a single currency with fixed exchange rates, standardised weights and measures and capital markets within the company-held territories. It also established a system of railways and telegraphs, a civil service that aimed to be free from political interference, and a common-law, adversarial legal system. This coincided with major changes in the world economy: industrialisation and significant growth in production and trade. However, at the end of colonial rule, India inherited an economy that was one of the poorest in the developing world, with industrial development stalled, agriculture unable to feed a rapidly growing population, a largely illiterate and unskilled labour force, and extremely inadequate infrastructure.
The 1872 census revealed that 91.3% of the population of the region constituting present-day India resided in villages. This represented a decline in urbanisation from the earlier Mughal era, when 85% of the population resided in villages and 15% in urban centres under Akbar's reign in 1600. Urbanisation generally remained sluggish in British India until the 1920s, due to the lack of industrialisation and absence of adequate transportation. Subsequently, the policy of discriminating protection (where certain important industries were given financial protection by the state), coupled with the Second World War, saw the development and dispersal of industries, encouraging rural-urban migration, and in particular, the large port cities of Bombay, Calcutta and Madras grew rapidly. Despite this, only one-sixth of India's population lived in cities by 1951.
The impact of British rule on India's economy is a controversial topic. Leaders of the Indian independence movement and economic historians have blamed colonial rule for the dismal state of India's economy in its aftermath and argued that financial strength required for industrial development in Britain was derived from the wealth taken from India. At the same time, right-wing historians have countered that India's low economic performance was due to various sectors being in a state of growth and decline due to changes brought in by colonialism and a world that was moving towards industrialisation and economic integration.
Several economic historians have argued that real wage decline occurred in the early 19th century, or possibly beginning in the very late 18th century, largely as a result of British imperialism. According to Prasannan Parthasarathi and Sashi Sivramkrishna, the grain wages of Indian weavers were likely comparable to those of their British counterparts, and their average income was around five times the subsistence level, which was comparable to advanced parts of Europe. However, they concluded that due to the scarcity of data, it was hard to draw definitive conclusions and that more research was required. It has also been argued that India went through a period of deindustrialisation in the latter half of the 18th century as an indirect outcome of the collapse of the Mughal Empire.
Pre-liberalisation period (1947–1991)
Indian economic policy after independence was influenced by the colonial experience, which was seen as exploitative by Indian leaders exposed to British social democracy and the planned economy of the Soviet Union. Domestic policy tended towards protectionism, with a strong emphasis on import substitution industrialisation, economic interventionism, a large government-run public sector, business regulation, and central planning, while trade and foreign investment policies were relatively liberal. Five-Year Plans of India resembled central planning in the Soviet Union. Steel, mining, machine tools, telecommunications, insurance, and power plants, among other industries, were effectively nationalised in the mid-1950s. The Indian economy of this period is characterised as Dirigism.
Jawaharlal Nehru, the first prime minister of India, along with the statistician Prasanta Chandra Mahalanobis, formulated and oversaw economic policy during the initial years of the country's independence. They expected favourable outcomes from their strategy, involving the rapid development of heavy industry by both public and private sectors, and based on direct and indirect state intervention, rather than the more extreme Soviet-style central command system. The policy of concentrating simultaneously on capital- and technology-intensive heavy industry and subsidising manual, low-skill cottage industries was criticised by economist Milton Friedman, who thought it would waste capital and labour, and retard the development of small manufacturers.
Since 1965, the use of high-yielding varieties of seeds, increased fertilisers and improved irrigation facilities collectively contributed to the Green Revolution in India, which improved the condition of agriculture by increasing crop productivity, improving crop patterns and strengthening forward and backward linkages between agriculture and industry. However, it has also been criticised as an unsustainable effort, resulting in the growth of capitalistic farming, ignoring institutional reforms and widening income disparities.
In 1984, Rajiv Gandhi promised economic liberalisation and made V. P. Singh the finance minister. Singh attempted to curb tax evasion, and tax receipts rose as a result of this crackdown even though tax rates were lowered. The process lost momentum during the later part of Gandhi's tenure, as his government was marred by scandals.
Post-liberalisation period (since 1991)
The collapse of the Soviet Union, which was India's major trading partner, and the Gulf War, which caused a spike in oil prices, resulted in a major balance-of-payments crisis for India, which found itself facing the prospect of defaulting on its loans. India asked for a $1.8 billion bailout loan from the International Monetary Fund (IMF), which in return demanded de-regulation.
In response, the Narasimha Rao government, including Finance Minister Manmohan Singh, initiated economic reforms in 1991. The reforms did away with the Licence Raj, reduced tariffs and interest rates and ended many public monopolies, allowing automatic approval of foreign direct investment in many sectors. Since then, the overall thrust of liberalisation has remained the same, although no government has tried to take on powerful lobbies such as trade unions and farmers, on contentious issues such as reforming labour laws and reducing agricultural subsidies. By the turn of the 21st century, India had progressed towards a free-market economy, with a substantial reduction in state control of the economy and increased financial liberalisation. This has been accompanied by increases in life expectancy, literacy rates, and food security, although urban residents have benefited more than rural residents.
While India's credit rating was hit by its nuclear weapons tests in 1998, it was raised to investment grade in 2003 by Standard & Poor's (S&P) and Moody's. India experienced high growth rates, averaging 9% from 2003 to 2007. Growth then moderated in 2008 due to the global financial crisis. In 2003, Goldman Sachs predicted that India's GDP in current prices would overtake France and Italy by 2020, Germany, the UK and Russia by 2025, and Japan by 2035, making it the third-largest economy in the world, behind the US and China. India is seen by many economists as a rising economic superpower that will play a major role in the 21st-century global economy.
Starting in 2012, India entered a period of reduced growth, which slowed to 5.6%. Other economic problems also became apparent: a plunging Indian rupee, a persistent high current account deficit and slow industrial growth.
India began to recover in 2013–14, when the GDP growth rate accelerated to 6.4% from the previous year's 5.5%. The acceleration continued through 2014–15 and 2015–16, with growth rates of 7.5% and 8.0% respectively. For the first time since 1990, India grew faster than China, which registered 6.9% growth in 2015. However, the growth rate subsequently decelerated, to 7.1% and 6.6% in 2016–17 and 2017–18 respectively, partly because of the disruptive effects of the 2016 Indian banknote demonetisation and the introduction of the Goods and Services Tax.
India is ranked 63rd out of 190 countries in the World Bank's 2020 ease of doing business index, up 14 places from the previous year's 77 and up 37 places in just two years. In terms of dealing with construction permits and enforcing contracts, it is ranked among the 10 worst in the world, while it has a relatively favourable ranking when it comes to protecting minority investors or getting credit. The strong efforts taken by the Department of Industrial Policy and Promotion (DIPP) to boost ease of doing business rankings at the state level are said to have improved India's overall ranking.
India's GDP growth slowed rapidly, from a high of 8.3% in 2016 to just 4.2% in 2019. Some experts have pointed to the 2016 Indian banknote demonetisation as the trigger that set India's growth on a downward path.
Impact of COVID-19 pandemic (2020)
During the COVID-19 pandemic, numerous rating agencies downgraded India's GDP predictions for FY21 to negative figures, signalling a recession in India, the most severe since 1979. According to a Dun & Bradstreet report, the country was likely to suffer a recession in the third quarter of FY2020 as a result of the more than two-month-long nationwide lockdown imposed to curb the spread of COVID-19.
Data
The following table shows the main economic indicators in 1980–2018.
Sectors
Historically, India has classified and tracked its economy and GDP in three sectors: agriculture, industry, and services. Agriculture includes crops, horticulture, milk and animal husbandry, aquaculture, fishing, sericulture, aviculture, forestry, and related activities. Industry includes various manufacturing sub-sectors. India's definition of services sector includes its construction, retail, software, IT, communications, hospitality, infrastructure operations, education, healthcare, banking and insurance, and many other economic activities.
Agriculture
Agriculture and allied sectors like forestry, logging and fishing accounted for 17% of GDP and employed 49% of the total workforce in 2014. Agriculture accounted for 23% of GDP, and employed 59% of the country's total workforce in 2016. As the Indian economy has diversified and grown, agriculture's contribution to GDP has steadily declined from 1951 to 2011, yet it is still the country's largest employment source and a significant piece of its overall socio-economic development. Crop yield per unit area of all crops has grown since 1950, due to the special emphasis placed on agriculture in the five-year plans and steady improvements in irrigation, technology, application of modern agricultural practices and provision of agricultural credit and subsidies since the Green Revolution in India. However, international comparisons reveal the average yield in India is generally 30% to 50% of the highest average yield in the world. The states of Uttar Pradesh, Punjab, Haryana, Madhya Pradesh, Andhra Pradesh, Telangana, Bihar, West Bengal, Gujarat and Maharashtra are key contributors to Indian agriculture.
India receives an average annual rainfall of and a total annual precipitation of 4000 billion cubic metres, with the total utilisable water resources, including surface and groundwater, amounting to 1123 billion cubic metres. of the land area, or about 39% of the total cultivated area, is irrigated. India's inland water resources and marine resources provide employment to nearly six million people in the fisheries sector. In 2010, India had the world's sixth-largest fishing industry.
India is the largest producer of milk, jute and pulses, and has the world's second-largest cattle population with 170 million animals in 2011. It is the second-largest producer of rice, wheat, sugarcane, cotton and groundnuts, as well as the second-largest fruit and vegetable producer, accounting for 10.9% and 8.6% of the world fruit and vegetable production, respectively. India is also the second-largest producer and the largest consumer of silk, producing 77,000 tons in 2005. India is the largest exporter of cashew kernels and cashew nut shell liquid (CNSL). Foreign exchange earned by the country through the export of cashew kernels during 2011–12 reached based on statistics from the Cashew Export Promotion Council of India (CEPCI). 131,000 tonnes of kernels were exported during 2011–12. There are about 600 cashew processing units in Kollam, Kerala.
India's foodgrain production remained stagnant at approximately 252 million tonnes (MT) during both the 2015–16 and 2014–15 crop years (July–June).
India exports several agriculture products, such as Basmati rice, wheat, cereals, spices, fresh fruits, dry fruits, buffalo beef meat, cotton, tea, coffee and other cash crops particularly to the Middle East, Southeast and East Asian countries. About 10 percent of its export earnings come from this trade.
At around , India has the second-largest amount of arable land, after the US, with 52% of total land under cultivation. Although the total land area of the country is only slightly more than one-third of China or the US, India's arable land is marginally smaller than that of the US, and marginally larger than that of China. However, agricultural output lags far behind its potential. The low productivity in India is a result of several factors. According to the World Bank, India's large agricultural subsidies are distorting what farmers grow and hampering productivity-enhancing investment. Over-regulation of agriculture has increased costs, price risks and uncertainty, and governmental intervention in labour, land, and credit are hurting the market. Infrastructure such as rural roads, electricity, ports, food storage, retail markets and services remain inadequate. The average size of land holdings is very small, with 70% of holdings being less than one hectare (2.5 acres) in size. Irrigation facilities are inadequate, as revealed by the fact that only 46% of the total cultivable land was irrigated resulting in farmers still being dependent on rainfall, specifically the monsoon season, which is often inconsistent and unevenly distributed across the country. In an effort to bring an additional of land under irrigation, various schemes have been attempted, including the Accelerated Irrigation Benefit Programme (AIBP) which was provided in the union budget. Farming incomes are also hampered by lack of food storage and distribution infrastructure; a third of India's agricultural production is lost from spoilage.
Manufacturing and industry
Industry accounts for 26% of GDP and employs 22% of the total workforce. According to the World Bank, India's industrial manufacturing GDP output in 2015 was 6th largest in the world on current US dollar basis ($559 billion), and 9th largest on inflation-adjusted constant 2005 US dollar basis ($197.1 billion). The industrial sector underwent significant changes due to the 1991 economic reforms, which removed import restrictions, brought in foreign competition, led to the privatisation of certain government-owned public-sector industries, liberalised the foreign direct investment (FDI) regime, improved infrastructure and led to an expansion in the production of fast-moving consumer goods. Post-liberalisation, the Indian private sector was faced with increasing domestic and foreign competition, including the threat of cheaper Chinese imports. It has since handled the change by squeezing costs, revamping management, and relying on cheap labour and new technology. However, this has also reduced employment generation, even among smaller manufacturers who previously relied on labour-intensive processes.
Defence
With a strength of over 1.3 million active personnel, India has the third-largest military force and the largest volunteer army. The total budget sanctioned for the Indian military for the financial year 2019–20 was . Defence spending is expected to rise to US$62 billion by 2022.
Electricity sector
Primary energy consumption of India is the third-largest after China and the US with 5.3% global share in the year 2015. Coal and crude oil together account for 85% of the primary energy consumption of India. India's oil reserves meet 25% of the country's domestic oil demand. India's total proven crude oil reserves are 763.476 million metric tons, while gas reserves stood at . Oil and natural gas fields are located offshore at Bombay High, Krishna Godavari Basin and the Cauvery Delta, and onshore mainly in the states of Assam, Gujarat and Rajasthan. India is the fourth-largest consumer of oil and net oil imports were nearly in 2014–15, which had an adverse effect on the country's current account deficit. The petroleum industry in India mostly consists of public sector companies such as Oil and Natural Gas Corporation (ONGC), Hindustan Petroleum Corporation Limited (HPCL), Bharat Petroleum Corporation Limited (BPCL) and Indian Oil Corporation Limited (IOCL). There are some major private Indian companies in the oil sector such as Reliance Industries Limited (RIL) which operates the world's largest oil refining complex.
India became the world's third-largest producer of electricity in 2013 with a 4.8% global share in electricity generation, surpassing Japan and Russia. By the end of calendar year 2015, India had an electricity surplus with many power stations idling for want of demand. The utility electricity sector had an installed capacity of 303 GW of which thermal power contributed 69.8%, hydroelectricity 15.2%, other sources of renewable energy 13.0%, and nuclear power 2.1%. India meets most of its domestic electricity demand through its 106 billion tonnes of proven coal reserves. India is also rich in certain alternative sources of energy with significant future potential such as solar, wind and biofuels (jatropha, sugarcane). India's dwindling uranium reserves caused the growth of nuclear energy in the country to stagnate for many years. Recent discoveries in the Tummalapalle belt may be among the top 20 natural uranium reserves worldwide, and an estimated reserve of of thorium – about 25% of the world's reserves – is expected to fuel the country's ambitious nuclear energy programme in the long run. The Indo-US nuclear deal has also paved the way for India to import uranium from other countries.
Engineering
Engineering is the largest sub-sector of India's industrial sector, by GDP, and the third-largest by exports. It includes transport equipment, machine tools, capital goods, transformers, switchgear, furnaces, and cast and forged parts for turbines, automobiles, and railways. The industry employs about four million workers. On a value-added basis, India's engineering subsector exported $67 billion worth of engineering goods in the 2013–14 fiscal year, and served part of the domestic demand for engineering goods.
The engineering industry of India includes its growing car, motorcycle and scooters industry, and productivity machinery such as tractors. India manufactured and assembled about 18 million passenger and utility vehicles in 2011, of which 2.3 million were exported. India is the largest producer and the largest market for tractors, accounting for 29% of global tractor production in 2013. India is the 12th-largest producer and 7th-largest consumer of machine tools.
The automotive manufacturing industry contributed $79 billion (4% of GDP) and employed 6.76 million people (2% of the workforce) in 2016.
Gems and jewellery
India is one of the largest centres for polishing diamonds and gems and manufacturing jewellery; it is also one of the two largest consumers of gold. After crude oil and petroleum products, the export and import of gold, precious metals, precious stones, gems and jewellery accounts for the largest portion of India's global trade. The industry contributes about 7% of India's GDP, employs millions, and is a major source of its foreign-exchange earnings. The gems and jewellery industry created $60 billion in economic output on value-added basis in 2017, and is projected to grow to $110 billion by 2022.
The gems and jewellery industry has been economically active in India for several thousand years. Until the 18th century, India was the only major reliable source of diamonds. Now, South Africa and Australia are the major sources of diamonds and precious metals, but along with Antwerp, New York City, and Ramat Gan, Indian cities such as Surat and Mumbai are the hubs of world's jewellery polishing, cutting, precision finishing, supply and trade. Unlike other centres, the gems and jewellery industry in India is primarily artisan-driven; the sector is manual, highly fragmented, and almost entirely served by family-owned operations.
The particular strength of this sub-sector is in precision cutting, polishing and processing small diamonds (below one carat). India is also a hub for processing of larger diamonds, pearls, and other precious stones. Statistically, 11 out of 12 diamonds set in any jewellery in the world are cut and polished in India.
Infrastructure
India's infrastructure and transport sector contributes about 5% of its GDP. India has a road network of over , the second-largest road network in the world, behind only the United States. At 1.66 km of roads per square kilometre of land (2.68 miles per square mile), the quantitative density of India's road network is higher than that of Japan (0.91) and the United States (0.67), and far higher than that of China (0.46), Brazil (0.18) or Russia (0.08). Qualitatively, India's roads are a mix of modern highways and narrow, unpaved roads, and are being improved; 87.05% of Indian roads were paved. India has the lowest kilometre-lane road density per 100,000 people among G-27 countries, leading to traffic congestion, and is upgrading its infrastructure. India had completed over of 4- or 6-lane highways, connecting most of its major manufacturing, commercial and cultural centres. India's road infrastructure carries 60% of freight and 87% of passenger traffic.
The Indian railway network is the fourth-largest rail network in the world, with a track length of and 7,172 stations. This government-owned-and-operated railway network carried an average of 23 million passengers a day, and over a billion tonnes of freight in 2013. India has a coastline of with 13 major ports and 60 operational non-major ports, which together handle 95% of the country's external trade by volume and 70% by value (most of the remainder handled by air). Nhava Sheva, Mumbai is the largest public port, while Mundra is the largest private sea port. The airport infrastructure of India includes 125 airports, of which 66 airports are licensed to handle both passengers and cargo.
Petroleum products and chemicals
Petroleum products and chemicals are a major contributor to India's industrial GDP, and together they contribute over 34% of its export earnings. India hosts many oil refinery and petrochemical operations, including the world's largest refinery complex in Jamnagar that processes 1.24 million barrels of crude per day. By volume, the Indian chemical industry was the third-largest producer in Asia, and contributed 5% of the country's GDP. India is one of the five-largest producers of agrochemicals, polymers and plastics, dyes and various organic and inorganic chemicals. Despite being a large producer and exporter, India is a net importer of chemicals due to domestic demands.
The chemical industry contributed $163 billion to the economy in FY18 and is expected to reach $300–400 billion by 2025. The industry employed 17.33 million people (4% of the workforce) in 2016.
Pharmaceuticals
The Indian pharmaceutical industry has grown in recent years to become a major manufacturer of health care products for the world. India holds a 20% market share in the global supply of generics by volume. The Indian pharmaceutical sector also supplies over 62% of the global demand for various vaccines. India's pharmaceutical exports stood at $17.27 billion in 2017–18 and are expected to reach $20 billion by 2020. The industry grew from $6 billion in 2005 to $36.7 billion in 2016, a compound annual growth rate (CAGR) of 17.46%. It is expected to grow at a CAGR of 15.92% to reach $55 billion in 2020. India is expected to become the sixth-largest pharmaceutical market in the world by 2020. It is one of the fastest-growing industrial sub-sectors and a significant contributor to India's export earnings. The state of Gujarat has become a hub for the manufacture and export of pharmaceuticals and active pharmaceutical ingredients (APIs).
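Several figures in this and the following subsections are expressed as compound annual growth rates. For reference, a minimal statement of the standard definition (the symbols V_start, V_end and n are generic placeholders introduced here for illustration, not values taken from any cited source):

\mathrm{CAGR} = \left( \frac{V_{\text{end}}}{V_{\text{start}}} \right)^{1/n} - 1

where V_start and V_end are the values at the beginning and end of the period and n is the number of years. For example, a market that doubles in size over five years corresponds to a CAGR of 2^{1/5} − 1, or roughly 14.9%.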
Textile
The textile and apparel market in India was estimated to be $108.5 billion in 2015. It is expected to reach a size of $226 billion by 2023. The industry employs over 35 million people. By value, the textile industry accounts for 7% of India's industrial output, 2% of GDP and 15% of the country's export earnings. India exported $39.2 billion worth of textiles in the 2017–18 fiscal year.
India's textile industry has transformed in recent years from a declining sector to a rapidly developing one. After freeing the industry in 2004–2005 from a number of limitations, primarily financial, the government permitted massive investment inflows, both domestic and foreign. From 2004 to 2008, total investment into the textile sector increased by 27 billion dollars. Ludhiana produces 90% of woollens in India and is known as the Manchester of India. Tirupur has gained universal recognition as the leading source of hosiery, knitted garments, casual wear, and sportswear. Expanding textile centres such as Ichalkaranji enjoy one of the highest per-capita incomes in the country. India's cotton farms, fibre and textile industry provides employment to 45 million people in India, including some child labour (1%). The sector is estimated to employ around 400,000 children under the age of 18.
Pulp and paper
The pulp and paper industry in India is one of the major producers of paper in the world and has adopted new manufacturing technology. The paper market in India was estimated to be worth in 2017–18, recording a CAGR of 6–7%. Domestic demand for paper almost doubled from around 9 million tonnes in the 2007–08 fiscal year to over 17 million tonnes in 2017–18. The per capita consumption of paper in India is around 13–14 kg annually, lower than the global average of 57 kg.
Services
The services sector has the largest share of India's GDP, accounting for 57% in 2012, up from 15% in 1950. It is the seventh-largest services sector by nominal GDP, and third-largest when purchasing power is taken into account. The services sector provides employment to 27% of the workforce. Information technology and business process outsourcing are among the fastest-growing sectors, with revenue growing at a cumulative rate of 33.6% between fiscal years 1997–98 and 2002–03 and contributing 25% of the country's total exports in .
Aviation
India is the fourth-largest civil aviation market in the world recording an air traffic of 158 million passengers in 2017. The market is estimated to have 800 aircraft by 2020, which would account for 4.3% of global volumes, and is expected to record annual passenger traffic of 520 million by 2037. IATA estimated that aviation contributed $30 billion to India's GDP in 2017, and supported 7.5 million jobs – 390,000 directly, 570,000 in the value chain, and 6.2 million through tourism.
Civil aviation in India traces its beginnings to 18 February 1911, when Henri Pequet, a French aviator, carried 6,500 pieces of mail on a Humber biplane from Allahabad to Naini. Later on 15 October 1932, J.R.D. Tata flew a consignment of mail from Karachi to Juhu Airport. His airline later became Air India and was the first Asian airline to cross the Atlantic Ocean as well as first Asian airline to fly jets.
Nationalisation
In March 1953, the Indian Parliament passed the Air Corporations Act to streamline and nationalise the eight then-existing privately owned domestic airlines into Indian Airlines for domestic services and the Tata group-owned Air India for international services. The International Airports Authority of India (IAAI) was constituted in 1972, while the National Airports Authority was constituted in 1986. The Bureau of Civil Aviation Security was established in 1987 following the crash of Air India Flight 182.
De-regulation
The government deregulated the civil aviation sector in 1991, allowing private airlines to operate charter and non-scheduled services under the 'Air Taxi' Scheme. When the Air Corporations Act was repealed in 1994, private airlines became able to operate scheduled services. Private airlines including Jet Airways, Air Sahara, Modiluft, Damania Airways and NEPC Airlines commenced domestic operations during this period.
The aviation industry experienced a rapid transformation following deregulation. Several low-cost carriers entered the Indian market in 2004–05. Major new entrants included Air Deccan, Air Sahara, Kingfisher Airlines, SpiceJet, GoAir, Paramount Airways and IndiGo. On 15 June 2005, Kingfisher Airlines became the first Indian air carrier to order Airbus A380 aircraft, worth 3 billion. However, Indian aviation would struggle due to an economic slowdown and rising fuel and operation costs. This led to consolidation, buyouts and discontinuations. In 2007, Air Sahara and Air Deccan were acquired by Jet Airways and Kingfisher Airlines respectively. Paramount Airways ceased operations in 2010 and Kingfisher shut down in 2012. Etihad Airways agreed to acquire a 24% stake in Jet Airways in 2013. AirAsia India, a low-cost carrier operating as a joint venture between AirAsia and Tata Sons, launched in 2014. In 2013–14, only IndiGo and GoAir were generating profits. The average domestic passenger air fare dropped by 70% between 2005 and 2017, after adjusting for inflation.
Banking and financial services
The financial services industry contributed $809 billion (37% of GDP) and employed 14.17 million people (3% of the workforce) in 2016, and the banking sector contributed $407 billion (19% of GDP) and employed 5.5 million people (1% of the workforce) in 2016. The Indian money market is classified into the organised sector, comprising private, public and foreign-owned commercial banks and cooperative banks, together known as 'scheduled banks'; and the unorganised sector, which includes individual or family-owned indigenous bankers or money lenders and non-banking financial companies. The unorganised sector and microcredit are preferred over traditional banks in rural and sub-urban areas, especially for non-productive purposes such as short-term loans for ceremonies.
Prime Minister Indira Gandhi nationalised 14 banks in 1969, followed by six others in 1980, and made it mandatory for banks to provide 40% of their net credit to priority sectors including agriculture, small-scale industry, retail trade and small business, to ensure that the banks fulfilled their social and developmental goals. Since then, the number of bank branches has increased from 8,260 in 1969 to 72,170 in 2007 and the population covered by a branch decreased from 63,800 to 15,000 during the same period. The total bank deposits increased from in 1970–71 to in 2008–09. Despite an increase of rural branches – from 1,860 or 22% of the total in 1969 to 30,590 or 42% in 2007 – only 32,270 of 500,000 villages are served by a scheduled bank.
India's gross domestic savings in 2006–07 as a percentage of GDP stood at a high 32.8%. More than half of personal savings are invested in physical assets such as land, houses, cattle, and gold. The government-owned public-sector banks hold over 75% of total assets of the banking industry, with the private and foreign banks holding 18.2% and 6.5% respectively. Since liberalisation, the government has approved significant banking reforms. While some of these relate to nationalised banks – such as reforms encouraging mergers, reducing government interference and increasing profitability and competitiveness – other reforms have opened the banking and insurance sectors to private and foreign companies.
Financial technology
According to the report of The National Association of Software and Services Companies (NASSCOM), India has a presence of around 400 companies in the fintech space, with an investment of about $420 million in 2015. The NASSCOM report also estimated the fintech software and services market to grow 1.7 times by 2020, making it worth $8 billion. The Indian fintech landscape is segmented as follows – 34% in payment processing, followed by 32% in banking and 12% in the trading, public and private markets.
Information technology
The information technology (IT) industry in India consists of two major components: IT services and business process outsourcing (BPO). The sector has increased its contribution to India's GDP from 1.2% in 1998 to 7.5% in 2012. According to NASSCOM, the sector aggregated revenues of 147 billion in 2015, with export revenue at 99 billion and domestic revenue at 48 billion, growing by over 13%.
The growth in the IT sector is attributed to increased specialisation, and an availability of a large pool of low-cost, highly skilled, fluent English-speaking workers – matched by increased demand from foreign consumers interested in India's service exports, or looking to outsource their operations. The share of the Indian IT industry in the country's GDP increased from 4.8% in 2005–06 to 7% in 2008. In 2009, seven Indian firms were listed among the top 15 technology outsourcing companies in the world.
Business process outsourcing services in India cater mainly to Western operations of multinational corporations. Around 2.8 million people work in the outsourcing sector. Annual revenues are around $11 billion, around 1% of GDP. Around 2.5 million people graduate in India every year. Wages are rising by 10–15 percent as a result of skill shortages.
Insurance
India became the tenth-largest insurance market in the world in 2013, rising from 15th in 2011. At a total market size of US$66.4 billion in 2013, it remains small compared to the world's major economies, and the Indian insurance market accounted for just 2% of the world's insurance business in 2017. India's life and non-life insurance industry collected in total gross insurance premiums in 2018. Life insurance accounts for 75.41% of the insurance market and the rest is general insurance. Of the 52 insurance companies in India, 24 are active in the life-insurance business.
Specialised insurers Export Credit Guarantee Corporation and Agriculture Insurance Company (AIC) offer credit guarantee and crop insurance. AIC has introduced several innovative products such as weather insurance and insurance related to specific crops. The premium underwritten by the non-life insurers during 2010–11 was against in 2009–10. The growth was satisfactory, particularly given across-the-board cuts in the tariff rates. The private insurers underwrote premiums of against in 2009–10.
The Indian insurance business had been under-developed with low levels of insurance penetration.
Retail
The retail industry, excluding wholesale, contributed $482 billion (22% of GDP) and employed 249.94 million people (57% of the workforce) in 2016. The industry is the second largest employer in India, after agriculture. The Indian retail market is estimated to be US$600 billion and one of the top-five retail markets in the world by economic value. India has one of the fastest-growing retail markets in the world, and is projected to reach $1.3 trillion by 2020. The e-commerce retail market in India was valued at $32.7 billion in 2018, and is expected to reach $71.9 billion by 2022.
India's retail industry mostly consists of local mom-and-pop stores, owner-manned shops and street vendors. Retail supermarkets are expanding, with a market share of 4% in 2008. In 2012, the government permitted 51% FDI in multi-brand retail and 100% FDI in single-brand retail. However, a lack of back-end warehouse infrastructure, state-level permits and red tape continue to limit the growth of organised retail. Compliance with over thirty regulations, such as "signboard licences" and "anti-hoarding measures", is required before a store can open for business. There are taxes for moving goods from state to state, and even within states. According to The Wall Street Journal, the lack of infrastructure and efficient retail networks causes a third of India's agriculture produce to be lost from spoilage.
Tourism
The World Travel & Tourism Council calculated that tourism generated or 9.4% of the nation's GDP in 2017 and supported 41.622 million jobs, 8% of its total employment. The sector is predicted to grow at an annual rate of 6.9% to by 2028 (9.9% of GDP). Over 10 million foreign tourists arrived in India in 2017 compared to 8.89 million in 2016, recording a growth of 15.6%. India earned $21.07 billion in foreign exchange from tourism receipts in 2015. International tourism to India has seen a steady growth from 2.37 million arrivals in 1997 to 8.03 million arrivals in 2015. The United States is the largest source of international tourists to India, while European Union nations and Japan are other major sources of international tourists. Less than 10% of international tourists visit the Taj Mahal, with the majority visiting other cultural, thematic and holiday circuits. Over 12 million Indian citizens take international trips each year for tourism, while domestic tourism within India adds about 740 million Indian travellers.
India has a fast-growing medical tourism sector of its health care economy, offering low-cost health services and long-term care. In October 2015, the medical tourism sector was estimated to be worth US$3 billion. It is projected to grow to $7–8 billion by 2020. In 2014, 184,298 foreign patients traveled to India to seek medical treatment.
Media and entertainment industry
An ASSOCHAM-PwC joint study projected that the Indian media and entertainment industry would grow from a size of $30.364 billion in 2017 to $52.683 billion by 2022, recording a CAGR of 11.7%. The study also predicted that television, cinema and over-the-top services would account for nearly half of the overall industry growth during the period.
Healthcare
India's healthcare sector is expected to grow at a CAGR of 29% between 2015 and 2020, to reach US$280 billion, buoyed by rising incomes, greater health awareness, increased precedence of lifestyle diseases, and improved access to health insurance.
The ayurveda industry in India recorded a market size of $4.4 billion in 2018. The Confederation of Indian Industry estimates that the industry will grow at a CAGR of 16% until 2025. Nearly 75% of the market comprises over-the-counter personal care and beauty products, while ayurvedic well-being and ayurvedic tourism services accounted for 15% of the market.
Logistics
The logistics industry in India was worth over $160 billion in 2016, and grew at a CAGR of 7.8% in the previous five-year period. The industry employs about 22 million people. It is expected to reach a size of $215 billion by 2020. India was ranked 35th out of 160 countries in the World Bank's 2016 Logistics Performance Index.
Telecommunications
The telecommunication sector generated in revenue in 2014–15, accounting for 1.94% of total GDP. India is the second-largest market in the world by number of telephone users (both fixed and mobile phones), with 1.053 billion subscribers. It has one of the lowest call tariffs in the world, due to fierce competition among telecom operators. India has the world's third-largest Internet user base; there were 342.65 million Internet subscribers in the country.
Industry estimates indicate that there are over 554 million TV consumers in India. India is the largest direct-to-home (DTH) television market in the world by number of subscribers; there were 84.80 million DTH subscribers in the country.
Mining and construction
Mining
Mining contributed $63 billion (3% of GDP) and employed 20.14 million people (5% of the workforce) in 2016. India's mining industry was the fourth-largest producer of minerals in the world by volume, and eighth-largest producer by value in 2009. In 2013, it mined and processed 89 minerals, of which four were fuel, three were atomic energy minerals, and 80 non-fuel. The government-owned public sector accounted for 68% of mineral production by volume in 2011–12.
Nearly 50% of India's mining industry, by output value, is concentrated in eight states: Odisha, Rajasthan, Chhattisgarh, Andhra Pradesh, Telangana, Jharkhand, Madhya Pradesh and Karnataka. Another 25% of the output by value comes from offshore oil and gas resources. India operated about 3,000 mines in 2010, half of which were coal, limestone and iron ore. On output-value basis, India was one of the five largest producers of mica, chromite, coal, lignite, iron ore, bauxite, barite, zinc and manganese; while being one of the ten largest global producers of many other minerals. India was the fourth-largest producer of steel in 2013, and the seventh-largest producer of aluminium.
India's mineral resources are vast. However, its mining industry has declined – contributing 2.3% of its GDP in 2010 compared to 3% in 2000, and employed 2.9 million people – a decreasing percentage of its total labour. India is a net importer of many minerals including coal. India's mining sector decline is because of complex permit, regulatory and administrative procedures, inadequate infrastructure, shortage of capital resources, and slow adoption of environmentally sustainable technologies.
Iron and steel
In fiscal year 2014–15, India was the third-largest producer of raw steel and the largest producer of sponge iron. The industry produced 91.46 million tons of finished steel and 9.7 million tons of pig iron. Most iron and steel in India is produced from iron ore.
Construction
The construction industry contributed $288 billion (13% of GDP) and employed 60.42 million people (14% of the workforce) in 2016.
Foreign trade and investment
Foreign trade
Until the liberalisation of 1991, India was largely and intentionally isolated from world markets, to protect its economy and to achieve self-reliance. Foreign trade was subject to import tariffs, export taxes and quantitative restrictions, while foreign direct investment (FDI) was restricted by upper-limit equity participation, restrictions on technology transfer, export obligations and government approvals; these approvals were needed for nearly 60% of new FDI in the industrial sector. The restrictions ensured that FDI averaged only around $200 million annually between 1985 and 1991; a large percentage of the capital flows consisted of foreign aid, commercial borrowing and deposits of non-resident Indians. India's exports were stagnant for the first 15 years after independence, due to general neglect of trade policy by the government of that period; imports in the same period, with early industrialisation, consisted predominantly of machinery, raw materials and consumer goods.
Since liberalisation, the value of India's international trade has increased sharply, with the contribution of total trade in goods and services to the GDP rising from 16% in 1990–91 to 47% in 2009–10. Foreign trade accounted for 48.8% of India's GDP in 2015. Globally, India accounts for 1.44% of exports and 2.12% of imports for merchandise trade and 3.34% of exports and 3.31% of imports for commercial services trade. India's major trading partners are the European Union, China, the United States and the United Arab Emirates. In 2006–07, major export commodities included engineering goods, petroleum products, chemicals and pharmaceuticals, gems and jewellery, textiles and garments, agricultural products, iron ore and other minerals. Major import commodities included crude oil and related products, machinery, electronic goods, gold and silver. In November 2010, exports increased 22.3% year-on-year while imports were up 7.5%, and the trade deficit for the month was lower than a year earlier.
India is a founding-member of General Agreement on Tariffs and Trade (GATT) and its successor, the WTO. While participating actively in its general council meetings, India has been crucial in voicing the concerns of the developing world. For instance, India has continued its opposition to the inclusion of labour, environmental issues and other non-tariff barriers to trade in WTO policies.
India secured 43rd place in the competitiveness index.
Balance of payments
Since independence, India's balance of payments on its current account has been negative. Since economic liberalisation in the 1990s, precipitated by a balance-of-payment crisis, India's exports rose consistently, covering 80.3% of its imports in 2002–03, up from 66.2% in 1990–91. However, the global economic slump followed by a general deceleration in world trade saw the exports as a percentage of imports drop to 61.4% in 2008–09. India's growing oil import bill is seen as the main driver behind the large current account deficit, which rose to $118.7 billion, or 11.11% of GDP, in 2008–09. Between January and October 2010, India imported $82.1 billion worth of crude oil. The Indian economy has run a trade deficit every year from 2002 to 2012, with a merchandise trade deficit of US$189 billion in 2011–12. Its trade with China has the largest deficit, about $31 billion in 2013.
India's reliance on external assistance and concessional debt has decreased since liberalisation of the economy, and the debt service ratio decreased from 35.3% in 1990–91 to 4.4% in 2008–09. In India, external commercial borrowings (ECBs), or commercial loans from non-resident lenders, are being permitted by the government for providing an additional source of funds to Indian corporates. The Ministry of Finance monitors and regulates them through ECB policy guidelines issued by the Reserve Bank of India (RBI) under the Foreign Exchange Management Act of 1999. India's foreign exchange reserves have steadily risen from $5.8 billion in March 1991 to ₹38,832.21 billion (US$540 billion) in July 2020. In 2012, the United Kingdom announced an end to all financial aid to India, citing the growth and robustness of Indian economy.
India's current account deficit reached an all-time high in 2013. India has historically funded its current account deficit through borrowings by companies in the overseas markets or remittances by non-resident Indians and portfolio inflows. From April 2016 to January 2017, RBI data showed that, for the first time since 1991, India was funding its deficit through foreign direct investment inflows. The Economic Times noted that the development was "a sign of rising confidence among long-term investors in Prime Minister Narendra Modi's ability to strengthen the country's economic foundation for sustained growth".
Foreign direct investment
As the third-largest economy in the world in PPP terms, India has attracted foreign direct investment (FDI). During the year 2011, FDI inflow into India stood at $36.5 billion, 51.1% higher than the 2010 figure of $24.15 billion. India has strengths in telecommunication, information technology and other significant areas such as auto components, chemicals, apparel, pharmaceuticals, and jewellery. Despite a surge in foreign investments, rigid FDI policies were a significant hindrance. Over time, India has adopted a number of FDI reforms. India has a large pool of skilled managerial and technical expertise. The size of the middle-class population stands at 300 million and represents a growing consumer market.
India liberalised its FDI policy in 2005, allowing up to a 100% FDI stake in ventures. Industrial policy reforms have substantially reduced industrial licensing requirements, removed restrictions on expansion and facilitated easy access to foreign technology and investment. The upward growth curve of the real-estate sector owes some credit to a booming economy and liberalised FDI regime. In March 2005, the government amended the rules to allow 100% FDI in the construction sector, including built-up infrastructure and construction development projects comprising housing, commercial premises, hospitals, educational institutions, recreational facilities, and city- and regional-level infrastructure. Between 2012 and 2014, India extended these reforms to defence, telecom, oil, retail, aviation, and other sectors.
From 2000 to 2010, the country attracted $178 billion as FDI. The inordinately high investment from Mauritius is due to routing of international funds through the country given significant tax advantages – double taxation is avoided due to a tax treaty between India and Mauritius, and Mauritius is a capital gains tax haven, effectively creating a zero-taxation FDI channel. FDI accounted for 2.1% of India's GDP in 2015.
As the government eased 87 foreign direct investment rules across 21 sectors in the preceding three years, FDI inflows into India reached $60.1 billion in 2016–17.
Outflows
Since 2000, Indian companies have expanded overseas, investing FDI and creating jobs outside India. From 2006 to 2010, FDI by Indian companies outside India amounted to 1.34 per cent of its GDP. Indian companies have deployed FDI and started operations in the United States, Europe and Africa. The Indian company Tata is the United Kingdom's largest manufacturer and private-sector employer.
Remittances
In 2015, a total of US$68.91 billion was made in remittances to India from other countries, and a total of US$8.476 billion was made in remittances by foreign workers in India to their home countries. The UAE, the US, and Saudi Arabia were the top sources of remittances to India, while Bangladesh, Pakistan, and Nepal were the top recipients of remittances from India. Remittances to India accounted for 3.32% of the country's GDP in 2015.
Mergers and acquisitions
Between 1985 and 2018, 20,846 deals were announced in, into (inbound) and out of (outbound) India, with a cumulative value of US$618 billion. By value, 2010 was the most active year, with deals worth almost US$60 billion; by number, 2007 was the most active year, with 1,510 deals.
Currency
The Indian rupee (₹) is the only legal tender in India, and is also accepted as legal tender in neighbouring Nepal and Bhutan, both of which peg their currency to that of the Indian rupee. The rupee is divided into 100 paise. The highest-denomination banknote is the ₹2,000 note; the lowest-denomination coin in circulation is the 50 paise coin. Since 30 June 2011, all denominations below 50 paise have ceased to be legal currency. India's monetary system is managed by the Reserve Bank of India (RBI), the country's central bank. Established on 1 April 1935 and nationalised in 1949, the RBI serves as the nation's monetary authority, regulator and supervisor of the monetary system, banker to the government, custodian of foreign exchange reserves, and as an issuer of currency. It is governed by a central board of directors, headed by a governor who is appointed by the Government of India. The benchmark interest rates are set by the Monetary Policy Committee.
The rupee was linked to the British pound from 1927 to 1946, and then to the US dollar until 1975 through a fixed exchange rate. It was devalued in September 1975 and the system of fixed par rate was replaced with a basket of four major international currencies: the British pound, the US dollar, the Japanese yen and the Deutsche Mark. In 1991, after the collapse of its largest trading partner, the Soviet Union, India faced a major foreign exchange crisis and the rupee was devalued by around 19% in two stages on 1 and 2 July. In 1992, a Liberalized Exchange Rate Mechanism (LERMS) was introduced. Under LERMS, exporters had to surrender 40 percent of their foreign exchange earnings to the RBI at the RBI-determined exchange rate; the remaining 60% could be converted at the market-determined exchange rate. In 1994, the rupee was convertible on the current account, with some capital controls.
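The effect of LERMS on an exporter can be shown with a worked example; the exchange rates below are hypothetical and chosen only to illustrate the 40/60 split described above. The exporter's effective realisation per dollar was a weighted average of the two rates:

\[ r_{\text{effective}} = 0.40\, r_{\text{official}} + 0.60\, r_{\text{market}} \]

For instance, with a hypothetical official rate of ₹26 per US dollar and a market rate of ₹31 per US dollar, the blended realisation would be 0.40 × 26 + 0.60 × 31 = ₹29 per US dollar.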
After the sharp devaluation in 1991 and transition to current account convertibility in 1994, the value of the rupee has been largely determined by market forces. The rupee was fairly stable during the decade 2000–2010. In October 2018, the rupee touched an all-time low of 74.90 against the US dollar.
Income and consumption
India's gross national income per capita experienced high growth rates after 2002. It tripled from ₹19,040 in 2002–03 to ₹53,331 in 2010–11, averaging 13.7% growth in each of these eight years, with peak growth of 15.6% in 2010–11. However, growth in the inflation-adjusted per-capita income of the nation slowed to 5.6% in 2010–11, down from 6.4% in the previous year. These figures are on a per-person basis; the average family income in India was $6,671 per household in 2011.
According to 2011 census data, India has about 330 million houses and 247 million households. The household size in India has dropped in recent years, with the 2011 census reporting that 50% of households have four or fewer members, with an average of 4.8 members per household including surviving grandparents. These households produced a GDP of about $1.7 trillion. Consumption patterns note: approximately 67% of households use firewood, crop residue, or cow-dung cakes for cooking purposes; 53% do not have sanitation or drainage facilities on premises; 83% have a water supply within their premises or within a short distance of their house; 67% of households have access to electricity; 63% have landline or mobile telephone service; 43% have a television; and 26% have either a two- or four-wheel motor vehicle. Compared to 2001, these income and consumption trends represent moderate to significant improvements. One report in 2010 claimed that high-income households outnumber low-income households.
New World Wealth publishes reports tracking the total wealth of countries, which is measured as the private wealth held by all residents of a country. According to New World Wealth, India's total wealth increased from $3,165 billion in 2007 to $8,230 billion in 2017, a growth rate of 160%. India's total wealth decreased by 1% from $8.23 trillion in 2017 to $8.148 trillion in 2018, making it the sixth wealthiest nation in the world. There are 20,730 multimillionaires (7th largest in the world) and 118 billionaires in India (3rd largest in the world). With 327,100 high net-worth individuals (HNWIs), India is home to the 9th highest number of HNWIs in the world. Mumbai is the wealthiest Indian city and the 12th wealthiest in the world, with a total net worth of $941 billion in 2018. Twenty-eight billionaires reside in the city, ranked ninth worldwide. The next wealthiest cities in India were Delhi ($450 billion), Bangalore ($320 billion), Hyderabad ($310 billion), Kolkata ($290 billion), Chennai ($150 billion), and Gurgaon ($110 billion).
The Global Wealth Migration Review 2019 report, published by New World Wealth, found that 5,000 HNWIs emigrated from India in 2018, or about 2% of all HNWIs in the country. Australia, Canada, and the United States were among the top destination countries. The report also projected that private wealth in India would grow by around 180% to reach $22,814 billion by 2028.
Poverty
In May 2014, the World Bank reviewed and proposed revisions to its poverty calculation methodology of 2005 and purchasing-power-parity basis for measuring poverty. According to the revised methodology, the world had 872.3 million people below the new poverty line, of which 179.6 million lived in India. With 17.5% of the total world's population, India had a 20.6% share of the world's poorest in 2013. According to a 2005–2006 survey, India had about 61 million children under the age of 5 who were chronically malnourished. A 2011 UNICEF report stated that between 1990 and 2010, India achieved a 45 percent reduction in mortality rates under the age of 5, and now ranks 46th of 188 countries on this metric.
Since the early 1960s, successive governments have implemented various schemes to alleviate poverty, under central planning, that have met with partial success. In 2005, the government enacted the Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA), guaranteeing 100 days of minimum wage employment to every rural household in all the districts of India. In 2011, it was widely criticised and beset with controversy for corrupt officials, deficit financing as the source of funds, poor quality of infrastructure built under the programme, and unintended destructive effects. Other studies suggest that the programme has helped reduce rural poverty in some cases. Yet other studies report that India's economic growth has been the driver of sustainable employment and poverty reduction, though a sizeable population remains in poverty. India lifted 271 million people out of poverty between 2006 and 2016, recording the fastest reductions in the multidimensional poverty index values during the period with strong improvements in areas such as assets, cooking fuel, sanitation, and nutrition.
On the 2019 Global Hunger Index, India ranked 102nd out of 117 countries, with its level of hunger categorized as 'serious'.
Employment
Agricultural and allied sectors accounted for about 52.1% of the total workforce in 2009–10. While agricultural employment has fallen over time as a percentage of labour employed, services, which include construction and infrastructure, have seen steady growth, accounting for 20.3% of employment in 2012–13. Of the total workforce, 7% is in the organised sector, two-thirds of which are in the government-controlled public sector. About 51.2% of the workforce in India is self-employed. According to a 2005–06 survey, there is a gender gap in employment and salaries. In rural areas, both men and women are primarily self-employed, mostly in agriculture. In urban areas, salaried work was the largest source of employment for both men and women in 2006.
Unemployment in India is characterised by chronic (disguised) unemployment. Government schemes that target eradication of both poverty and unemployment – which in recent decades has sent millions of poor and unskilled people into urban areas in search of livelihoods – attempt to solve the problem by providing financial assistance for starting businesses, honing skills, setting up public sector enterprises, reservations in governments, etc. The decline in organised employment, due to the decreased role of the public sector after liberalisation, has further underlined the need for focusing on better education and created political pressure for further reforms. India's labour regulations are heavy, even by developing country standards, and analysts have urged the government to abolish or modify them to make the environment more conducive for employment generation. The 11th five-year plan has also identified the need for a congenial environment to be created for employment generation, by reducing the number of permissions and other bureaucratic clearances required. Inequalities and inadequacies in the education system have been identified as an obstacle, which prevents the benefits of increased employment opportunities from reaching all sectors of society.
Child labour in India is a complex problem that is rooted in poverty. Since the 1990s, the government has implemented a variety of programs to eliminate child labour. These have included setting up schools, launching free school lunch programs, creating special investigation cells, etc. Author Sonalde Desai stated that recent studies on child labour in India have found some pockets of industries in which children are employed, but overall, relatively few Indian children are employed. Child labour below the age of 10 is now rare. In the 10–14 age group, the latest surveys find only 2% of children working for wage, while another 9% work within their home or rural farms assisting their parents in times of high work demand such as sowing and harvesting of crops.
India has the largest diaspora around the world, an estimated 16 million people, many of whom work overseas and remit funds back to their families. The Middle East region is the largest source of employment for expat Indians. The crude oil production and infrastructure industry of Saudi Arabia employs over 2 million expat Indians. Cities such as Dubai and Abu Dhabi in the United Arab Emirates have employed another 2 million Indians during the construction boom in recent decades. In 2009–10, remittances from Indian migrants overseas were the highest in the world, but their share in FDI remained low at around 1%.
Economic issues
Corruption
Corruption has been a pervasive problem in India. A 2005 study by Transparency International (TI) found that more than half of those surveyed had first-hand experience of paying a bribe or peddling influence to get a job done in a public office in the previous year. A follow-up study in 2008 found this rate to be 40 percent. In 2011, TI ranked India at 95th place amongst 183 countries in perceived levels of public sector corruption. By 2016, India saw a reduction in corruption, and its ranking improved to 79th place.
In 1996, red tape, bureaucracy, and the Licence Raj were suggested as a cause for the institutionalised corruption and inefficiency. More recent reports suggest the causes of corruption include excessive regulations and approval requirements, mandated spending programs, monopoly of certain goods and service providers by government-controlled institutions, bureaucracy with discretionary powers, and lack of transparent laws and processes.
Computerisation of services, various central and state vigilance commissions, and the 2005 Right to Information Act – which requires government officials to furnish information requested by citizens or face punitive action – have considerably reduced corruption and opened avenues to redress grievances.
In 2011, the Indian government concluded that most spending fails to reach its intended recipients, as the large and inefficient bureaucracy consumes budgets. India's absence rates are among the worst in the world; one study found that 25% of public sector teachers and 40% of government-owned public-sector medical workers could not be found at the workplace. Similarly, many issues are facing Indian scientists, with demands for transparency, a meritocratic system, and an overhaul of the bureaucratic agencies that oversee science and technology.
India has an underground economy, with a 2006 report alleging that India topped the worldwide list for black money with almost $1,456 billion stashed in Swiss banks. This would amount to 13 times the country's total external debt. These allegations have been denied by the Swiss Banking Association. James Nason, the Head of International Communications for the Swiss Banking Association, suggested "The (black money) figures were rapidly picked up in the Indian media and in Indian opposition circles, and circulated as gospel truth. However, this story was a complete fabrication. The Swiss Bankers Association never published such a report. Anyone claiming to have such figures (for India) should be forced to identify their source and explain the methodology used to produce them."
On 8 November 2016, Prime Minister Modi's government demonetized all ₹500 and ₹1,000 banknotes (replacing them with new ₹500 and ₹2,000 notes) in an effort to bring unaccounted black money back into the formal economy.
Education
India has made progress in increasing the primary education attendance rate and expanding literacy to approximately three-fourths of the population. India's literacy rate had grown from 52.2% in 1991 to 74.04% in 2011. The right to education at the elementary level has been made one of the fundamental rights under the eighty-sixth Amendment of 2002, and legislation has been enacted to further the objective of providing free education to all children. However, the literacy rate of 74% is lower than the worldwide average, and the country suffers from a high drop-out rate. Literacy rates and educational opportunities vary by region, gender, urban and rural areas, and among different social groups.
Economic disparities
A critical problem facing India's economy is the sharp and growing regional variations among India's different states and territories in terms of poverty, availability of infrastructure, and socio-economic development. Six low-income states – Assam, Chhattisgarh, Nagaland, Madhya Pradesh, Odisha, and Uttar Pradesh – are home to more than one-third of India's population. Severe disparities exist among states in terms of income, literacy rates, life expectancy, and living conditions.
The five-year plans, especially in the pre-liberalisation era, attempted to reduce regional disparities by encouraging industrial development in the interior regions and distributing industries across states. The results have been discouraging as these measures increased inefficiency and hampered effective industrial growth. The more advanced states have been better placed to benefit from liberalisation, with well-developed infrastructure and an educated and skilled workforce, which attract the manufacturing and service sectors. Governments of less-advanced states have tried to reduce disparities by offering tax holidays and cheap land and focused on sectors like tourism, which can develop faster than other sectors. India's income Gini coefficient is 33.9, according to the United Nations Development Program (UNDP), indicating overall income distribution to be more uniform than East Asia, Latin America, and Africa. The Global Wealth Migration Review 2019 report, published by New World Wealth, estimated that 48% of India's total wealth was held by high-net-worth individuals.
There is a continuing debate on whether India's economic expansion has been pro-poor or anti-poor. Studies suggest that economic growth has been pro-poor and has reduced poverty in India.
Security markets
The development of Indian security markets began with the launch of the Bombay Stock Exchange (BSE) in July 1875 and the Ahmedabad Stock Exchange in 1894. Since then, 22 other exchanges have traded in Indian cities. In 2014, India's stock exchange market became the 10th largest in the world by market capitalisation, just above those of South Korea and Australia. India's two major stock exchanges, BSE and the National Stock Exchange of India, had a market capitalisation of US$1.71 trillion and US$1.68 trillion according to the World Federation of Exchanges, which grew to $3.36 trillion and $3.31 trillion respectively by September 2021.
The initial public offering (IPO) market in India has been small compared to NYSE and NASDAQ, raising US$300 million in 2013 and US$1.4 billion in 2012. Ernst & Young stated that the low IPO activity reflects market conditions, slow government approval processes, and complex regulations. Before 2013, Indian companies were not allowed to list their securities internationally without first completing an IPO in India. In 2013, these security laws were reformed and Indian companies can now choose where they want to list first: overseas, domestically, or both concurrently. Further, security laws have been revised to ease overseas listings of already-listed companies, to increase liquidity for private equity and international investors in Indian companies.
See also
Economic Advisory Council
Economic development in India
List of megaprojects in India
Make in India – a government program to encourage manufacturing in India
NITI Aayog
Startup India
Taxation in medieval India
Events:
Great Recession
World oil market chronology from 2003
Demonetization
Economic impact of the COVID-19 pandemic in India
Lists:
List of companies of India
List of largest companies in India
List of the largest trading partners of India
Trade unions in India
Natural resources of India
Notes
References
Further reading
Books
Malone, David M., C. Raja Mohan, and Srinath Raghavan, eds. The Oxford handbook of Indian foreign policy (2015) excerpt pp 609–649.
Papers and reports
Bahl, R., Heredia-Ortiz, E., Martinez-Vazquez, J., & Rider, M. (2005). India: Fiscal Condition of the States, International Experience, and Options for Reform: Volume 1 (No. paper05141). International Center for Public Policy, Andrew Young School of Policy Studies, Georgia State University.
Articles
News
External links
Ministry of Finance
Ministry of Commerce and Industry
Ministry of Statistics and Programme Implementation
India profile at the CIA World Factbook
India profile at The World Bank
India – OECD
India
Economy of Asia-related lists |
18477717 | https://en.wikipedia.org/wiki/Lutris | Lutris | Lutris is a free and open source game manager for Linux-based operating systems developed and maintained by Mathieu Comandon and the community, released under the GNU General Public License.
For games that require using Wine, community installer scripts are available that automatically configure the Wine environment. Lutris also offers integration for software purchased from GOG, Humble Bundle, Steam, and Epic Games Store; those can be launched directly through the Lutris application. Additionally, Lutris supports over 20 emulators including DOSBox, ScummVM, MAME, Snes9x, Dolphin, PCSX2 and PPSSPP.
Features
Lutris can pull games from various services and vendors, native Linux games, browser games, and emulated games and launch them all from one place. These different services or sources are called runners and are used to aggregate the games in one place. All the games installed through these sources are listed in the user's Lutris window, and can be launched, uninstalled, and tweaked from there.
Lutris allows users to install games directly from a large online library of user-contributed scripts. These scripts make installing games on Linux straightforward, requiring only a few clicks, especially for games that use Wine and need tweaks, registry entries, or specific tools to run correctly. Games can be installed either through the Lutris website or through the application itself. During installation, Lutris handles the setup: installing the appropriate Wine version, applying the required tweaks, creating the folder structure, enabling DXVK, and configuring any other tools the game needs to run. This makes Lutris particularly useful for installing Windows games, because there is no need to run the installer through Wine manually, create a new prefix, use winetricks, or run command-line tools; all of that is done by Lutris. Some games offer multiple installation options, either to install the game from different sources (such as Steam or GOG) or to offer several install scripts that may work better or worse on a given user's hardware. Lutris displays ratings for these scripts, along with any additional instructions that Lutris cannot handle itself, such as installing dependencies. Lutris also allows the user to tweak settings for each game and runner. While games installed with a Lutris install script will not usually need these adjustments, since the script applies them automatically, games installed manually or through Steam may benefit from a few tweaks. Users can enable or disable DXVK or VKD3D for DirectX games, enable Feral GameMode or Esync, or run a game at a specific resolution or in a virtual desktop. Users can also choose which Wine version to use and download new ones from the Wine runner options.
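The runner-based aggregation described above can be illustrated with a minimal conceptual sketch. This is not Lutris source code and does not use the Lutris API; the class and method names are invented purely to show the idea of several sources feeding one game library.

# Conceptual sketch of the "runner" idea: each runner knows how to list
# and launch games from one source (native Linux, Wine, an emulator, ...).
# Illustrative only; not the Lutris code base or API.
from dataclasses import dataclass
import subprocess

@dataclass
class Game:
    name: str
    command: list          # the command line used to start the game

class Runner:
    """One source of games, e.g. native Linux binaries or Wine."""
    def list_games(self) -> list:
        raise NotImplementedError
    def launch(self, game: Game) -> None:
        subprocess.run(game.command, check=False)

class NativeRunner(Runner):
    def list_games(self):
        return [Game("SuperTuxKart", ["supertuxkart"])]

class WineRunner(Runner):
    def __init__(self, wine_binary: str = "wine"):
        self.wine_binary = wine_binary   # a per-game Wine build could be set here
    def list_games(self):
        return [Game("Example Windows Game",
                     [self.wine_binary, "C:/Games/Example/game.exe"])]

# A front end such as Lutris aggregates every runner into one library view.
runners = [NativeRunner(), WineRunner()]
library = [game for runner in runners for game in runner.list_games()]
for game in library:
    print(game.name)

Running the sketch simply prints the aggregated library; launching a title would execute its command through its runner, which is where per-runner configuration (Wine version, DXVK, resolution, and so on) would apply in a real front end.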
In 2013, when Steam support was first added to Lutris, OMG! Ubuntu! noted that the database of Lutris games had thus far been limited. They also noted that while it was possible to submit installers for the Lutris database, each addition needed to be manually approved by the Lutris development team.
History
Lutris began as a piece of software called Oblivion Launcher, which was created in 2009 by Mathieu Comandon. He wanted an easier way to manage his games running on Linux, especially the ones that ran using Wine. Lutris began development on Launchpad, with the repository created on May 5, 2009. The first public release, 0.1, was made on November 29, 2009. In January 2010, development moved to GitHub, with the first commit pushed to the new Lutris repository that month.
References
See also
Wine
PlayOnLinux
2009 software
Free software programmed in Python
Linux emulation software
Multi-emulators
Software derived from or incorporating Wine |
55935604 | https://en.wikipedia.org/wiki/Games%20as%20a%20service | Games as a service | In the video game industry, games as a service (GaaS) represents providing video games or game content on a continuing revenue model, similar to software as a service. Games as a service are ways to monetize video games either after their initial sale, or to support a free-to-play model. Games released under the GaaS model typically receive a long or indefinite stream of monetized new content over time to encourage players to continue paying to support the game. This often leads to games that work under a GaaS model to be called "living games", "live games", or "live service games" since they continually change with these updates.
History and forms
The idea of games as a service began with the introduction of massively multiplayer online games (MMOs) like World of Warcraft, where the game's subscription model assured continued revenues to the developer and publisher to fund new content. Over time, new forms of offering continued GaaS revenues have emerged. A significant driver of GaaS was the expansion of mobile gaming, which often includes a social element, such as playing or competing with friends, with players buying into GaaS to continue playing with friends. The Chinese publisher Tencent was one of the first companies to adopt this approach, around 2007 and 2008, establishing several different ways to monetize its products as a service for Chinese players, who typically play on phones or at PC cafés rather than on consoles or computers; it has since become the world's largest video game publisher by revenue. Another influential game in establishing games as a service was Team Fortress 2. To fight a shrinking player base, Valve released the first of several free updates in 2008, the "Gold Rush Update", which featured new weapons and cosmetic skins that could be unlocked through in-game achievements. Further updates added similar items and began to include monetization options, such as purchasable virtual keys to open in-game loot boxes. Valve eventually earned enough from these revenues to transition Team Fortress 2 to a free-to-play title, and carried the principle over to Counter-Strike: Global Offensive and Dota 2, the latter of which competed with League of Legends by Riot Games. League of Legends, which already had a microtransaction model in place, competed by establishing a constant push of new content on a frequent basis (in this case, the release of a new hero each week for several years straight), helping to establish the concept of lifestyle games such as Destiny and Tom Clancy's The Division.
Some examples include:
Game subscriptions
Many massively multiplayer online games (MMOs) use monthly subscription models. Revenue from these subscriptions pays for the computer servers used to run the game, the people who manage and oversee the game on a daily basis, and the introduction of new content into the game. Several MMOs offer an initial trial period that allows players to try the game for a limited amount of time, or until their character reaches an experience level cap, after which they are required to pay to continue to play.
Game subscription services
Subscription services like EA Play and Xbox Game Pass grant subscribers complete access to a large library of games offered digitally with no limitations. Users need to download these games to their local computer or console to play. However, users must remain subscribed to play these games; the games are protected by digital rights management that requires an active account to play. New games are typically added to the service, and in some cases, games may leave the service, after which subscribers will be unable to play that title. Such services may offer the ability to purchase these titles outright, allowing subscribers to play them outside of the subscription service.
Cloud gaming / gaming on demand
Services like PlayStation Now, Stadia or GameFly allow players to play games that are run on remote servers on local devices, eliminating the need for specialized console hardware or powerful personal computers, outside of the necessary bandwidth for Internet connectivity. These otherwise operate similarly to game subscription services, in that the library of available titles may be added to or removed from over time, depending on the service.
Microtransactions
Microtransactions represent low-cost purchases, compared to the cost of a full game or a large expansion pack, that provide some form of additional content to the purchaser. The type of content can vary from additional downloadable content; new maps and levels for multiplayer games; new items, weapons, vehicles, clothing, or other gear for the player's character; power-ups and temporary buffs; in-game currency; and elements like loot boxes that provide a random assortment of items and rewards. Players do not necessarily need to purchase these items with real-world funds to acquire them. However, a game designed to provide an ongoing service is structured so that a small fraction of players will purchase this content immediately rather than grinding through the game for a long time to obtain it, with these select "whales" providing sufficient revenue to support further development of new content. This approach is generally how free-to-play games like Puzzle & Dragons, Candy Crush Saga, and League of Legends support their ongoing development, as well as being used atop full-priced games like Grand Theft Auto Online.
Season passes
Games with season passes provide one or more large content updates over the course of about a year, or a "season" in these terms. Players must buy into a season pass to access this new content; the game remains playable for players who do not purchase the season pass, and they still benefit from core improvements to the game, but they are unable to access new maps, weapons, quests, game modes, or other gameplay elements without this content. Games like Destiny and its sequel and Anno 1800 use this season pass approach. A related concept is the battle pass, which provides new customization options that a player can earn by completing challenges in a game, but only if they have bought into the battle pass; content on a battle pass is typically only obtainable during a limited time. Battle passes can be seen in games such as Dota 2, Rocket League, Tom Clancy's Rainbow Six Siege and Fortnite Battle Royale.
Blockchain game
Games which use technology based on blockchain strategies, which can include cryptocurrency and non-fungible tokens (NFTs). In contrast to microtransactions, in which players can buy items but are not usually able to trade them with other players, blockchain games encourage players to create value with blockchain items and to trade and sell them, with the developer or publisher taking a small fee on each transaction. As blockchain items assure a singular owner, this can lead to high-selling items due to speculative buyers. As of 2021, blockchain games have yet to catch on due to the stigmas associated with cryptocurrency and NFTs, but major publishers have expressed interest in exploring blockchain features. Some notable blockchain games include CryptoKitties and Axie Infinity.
Games may combine one or more of these forms. A common example is the lifestyle game, which provides rotating daily content, frequently rewards the player with in-game currency to buy new equipment (otherwise purchasable with real-world funds), and is extended by updates to the overall game. Examples of such lifestyle games include Destiny, Destiny 2, and many MMORPGs like World of Warcraft.
Rationale
The principal reason that many developers and publishers have adopted GaaS is financial, giving them the ability to capture more revenue from the market than with a single release title (otherwise known as "games as a product"). While not all players will be willing to spend additional money to gain new content, there can be enough demand from a smaller population of players to support the service model. For example, for World of Warcraft, it was estimated, on the basis of average revenue per user (ARPU), that only 5% of the game's population paid 20 times more than the baseline ARPU, which was sufficient to continue ongoing development of the game. GaaS further represents a means by which games can improve their reputation with critics and players through continued improvements over time, using revenues earned from GaaS monetization to support the continued development and to draw in new sales for the product. Titles like Diablo III and Tom Clancy's Rainbow Six Siege are examples of games offering GaaS which initially launched to a lukewarm reception but have since improved through continued service updates.
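A back-of-the-envelope calculation illustrates the World of Warcraft estimate above. Assuming, purely for illustration, that the remaining 95% of players spend only the baseline ARPU (the cited estimate does not state what the other players spend), the 5% who spend 20 times the baseline account for roughly half of total revenue:

\[ \frac{0.05 \times 20}{0.05 \times 20 + 0.95 \times 1} = \frac{1.00}{1.95} \approx 51\% \]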
Games as a service also impacts the development process for games. When developing a game as a product, there is generally a linear flow of tasks to assure that the product shipped is free of software bugs and other problems that may exist, which can be both time consuming and costly to test for. If critical bugs are found post-release, it can also be costly to develop, test, and distribute software updates to rectify them. In developing games as a service, where consumer expectation is already set to expect continual updates to the game, the rigor on software testing in the early stages of release may be forgone so as to get the title out to players faster, accepting that software bugs may be present but will be fixed when the next update is released. Further, games developed as a service are more commonly driven by player feedback, so initial iterations of a game's release may be lightweight, serving as a foundation to build upon based on the game's community. This can further shorten the initial development cycle of a game. However, games as a service also increases overall development effort, as there are usually two or more concurrent tracks to support a game: one working to support the currently available release, and others working on the future content that will be added to the game.
While the games as a service model aims to extend revenues, it also aims to eliminate legal issues related to software licenses, specifically the concept of software ownership versus license. Case law for video games remains unclear on whether retail and physical game products qualify as goods or services. If treated as goods, the purchaser gains several rights, in particular those related to the first-sale doctrine, which allows them to resell or trade these games, which can subsequently affect sales revenue to publishers. The industry has generally considered that physical games are a service, enforced through end-user license agreements (EULAs) to try to limit post-sale activities, but these have generally not been enforceable since they affect consumers' rights, leading to confusion in the area. Instead, by transitioning to games as a service, where there is a clear service being offered, publishers and developers can clearly establish their works as services rather than goods. This further gives publishers more control, through an enforceable EULA, over the use of the software and what actions users can take, such as preventing class action lawsuits.
GaaS can reduce unauthorized copying. Also, certain games can be hosted on a cloud server, eliminating the need for installation on players' computers and consoles.
Impact
Industry analysis firm Digital River estimated that by 2016, 25% of the revenue of games on personal computers resulted from one form or another of GaaS. The firm argued that this reflects consumers who want more out of games that are otherwise offered at full price, or who will look for discounts, making the market ripe for post-release monetization. Several major publishers, including Square Enix, Ubisoft, and Electronic Arts, have identified GaaS as a significant focus of their product lines in 2017, while others like Activision Blizzard and Take-Two Interactive recognize the importance of post-release support of a game to their financial bottom lines. GaaS is also seen as a developing avenue for indie video games, which frequently have a wider potential install base (across computers, consoles, and mobile devices) from which they can draw service revenues.
A study by DFC Intelligence in 2018 found that Electronic Arts' value had risen to $33 billion since 2012, while Activision Blizzard saw its value rise from $20 billion to $60 billion in the same period, with both increases attributed in part to the use of the GaaS model in their games catalogs. Electronic Arts had earned $2 billion from GaaS transactions in 2018.
References
Video game terminology
As a service |
20715685 | https://en.wikipedia.org/wiki/Deployable%20Joint%20Command%20and%20Control | Deployable Joint Command and Control | The Deployable Joint Command and Control system, commonly known as DJC2, is an integrated command and control headquarters system which enables a commander to set up a self-contained, self-powered, computer network-enabled temporary headquarters facility anywhere in the world within 6 – 24 hours of arrival at a location.
DJC2 is produced and fielded by the U.S. military to support Joint warfare. The DJC2 Joint Program Office developed the system, and it is integrated and produced by a U.S. Government integrator, the Naval Surface Warfare Center Panama City Division.
The base DJC2 system consists of a linked group of self-powered and climate-controlled tents which house computer network servers, computer workstations with furniture, satellite communications equipment, voice and data encryption equipment, a video teleconferencing system, video display screens, printers, fax machines, etc. Utilizing a fielded DJC2 system, the commander and his staff can securely communicate across the world, send and receive information across five different computer networks (including secure networks and the Internet), participate in video teleconferences with remote locations, and use a fully integrated command and control/collaboration software tool suite to plan and execute missions.
In addition to the base system, DJC2 includes some additional specialized configurations designed to support a commander's need for command and control capabilities in specialized circumstances. These configurations include: a "suitcase" communications suite which can be hand-carried and used on short notice by a first responder/small control team; and a small, air-certified headquarters suite which can operate aboard a military aircraft while in flight. The DJC2 system also includes an experimental concept demonstration suite with DJC2 workstations installed in shipboard containers for operation aboard a ship while underway.
Currently, the Department of Defense has produced and fielded six fully deployable DJC2 systems to commands in the United States and Europe. A DJC2 system was used in a Joint Task Force effort supporting the relief efforts in the immediate aftermath of Hurricane Katrina in New Orleans, Louisiana, as well as more recently in the Joint Task Force providing humanitarian assistance and disaster relief to Cyclone Nargis victims in Myanmar (Burma). The DJC2 systems have also been used in military exercises around the world, including the United States, Europe, Africa, Central America, and Asia.
Design
Subsystems
The DJC2 design provides the capabilities of its four configurations by integrating components from three different subsystems:
Infrastructure Support (IS) – Includes such components as climate-controlled tents; generators for power; environmental control units for heating/cooling; furniture for workstations; framework for mounting tools such as large video display screens; lighting, etc.
Information Technology (IT) – Includes such components as laptop workstations with network connectivity and desktop peripherals; five different networks (including a combination of secure and non-secure) with supporting computer servers; command and control software applications, including the Department of Defense's standardized command and control suite, Global Command and Control System – Joint (GCCS-J); collaboration software applications; an online portal which provides access to software applications and data; voice communications; video display technology with large screen displays; video teleconferencing; cryptographic components and other information assurance (security) tools, etc.
Communications – Includes such components as satellite dishes, radios, communications interfaces, etc.
Architecture
The DJC2 command and control architecture is an open architecture based on Service Oriented Architecture principles. The architecture utilizes several technologies – including Internet Protocol Convergence (IPC) and virtualization – to reconcile the DJC2 system's robust IT requirements (i.e., five different networks, C2 and collaboration software applications, and communications) with its stringent deployability and rapid set-up requirements.
The DJC2 system's IPC Suite uses IPC technology to provide IP-based services and flexible bandwidth utilization. The system uses virtualization technology in both hardware (computer servers) and software. Virtualization of the DJC2 hardware allows one DJC2 computer server to do the job of multiple servers by creating "virtual machines" (VMs) that can run their own operating systems and applications just like a "real" server, and that can be hosted with many other VMs on one physical server to more effectively utilize computer server capacity. Virtualization of the DJC2 software allows for better portability of applications to differing hardware sets. Through virtualization, the DJC2 program reduced the size of its system (e.g., reducing the number of servers per network from an original nine to three plus one spare), which decreases both the amount of space required for the system to be set up (footprint) and the transportation assets needed to transport it (lift), as well as reducing cost and set-up time.
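The consolidation gain from virtualization can be illustrated with a toy calculation; the resource figures below are invented and do not describe actual DJC2 server loads.

# Toy illustration of server consolidation through virtualization.
# Each virtual machine (VM) declares the fraction of one physical host's
# capacity it needs; a simple first-fit pass packs the VMs onto hosts.
# All numbers are invented for illustration only.
vm_demands = [0.30, 0.25, 0.20, 0.35, 0.15, 0.30, 0.25, 0.10, 0.20]  # nine former servers as VMs

hosts = []                      # used capacity fraction of each physical host
for demand in vm_demands:
    for i, used in enumerate(hosts):
        if used + demand <= 1.0:
            hosts[i] += demand  # fits on an existing host
            break
    else:
        hosts.append(demand)    # otherwise open a new physical host

print(f"{len(vm_demands)} VMs packed onto {len(hosts)} physical hosts")
# With these invented demands, nine former standalone servers fit on three
# hosts, leaving room to add a fourth as a spare for redundancy.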
Refresh
The DJC2 program is currently in its Technology Refresh and Technology Insertion phase, which will include inserting such technologies as Secure Wireless Networking into the existing DJC2 systems.
Configurations
The DJC2 system includes four distinct DJC2 configurations, each designed to meet different mission needs ranging from the mission of a two-person first responder team up to the mission of a major Joint Task Force.
Core
The baseline configuration of the DJC2 system is the Core configuration, which enables a commander to rapidly deploy a fully capable temporary command and control headquarters. The Core is a 60-operator suite housed in tents which can be set up and fully operational with communications and network connectivity in less than 24 hours after arrival on-site. Each operator workstation includes two laptops to provide an operator with simultaneous access to two networks, and telephone and intercom capability. The Core is flexible and scalable, and can be tailored to an individual mission (i.e., a commander can take only what he needs for a particular mission and leave the rest behind).
The DJC2 Core can support a small Joint Task Force, or can be combined with other Cores to support a larger Joint Task Force. Though the Core provides physical space and workstations for 60 operator workstations, its computer servers can support more than 750 simultaneous users, so additional non-DJC2 computers can be connected to the DJC2 system when needed. Though the Core has organic communications capabilities, establishing a full Core requires supplemental communications support at the site, such as that provided by a military communications unit. The Core has its own generators for power and environmental control units for heating/cooling. However, it can be connected to external power when available. It also can be set up inside an available building instead of the tents.
Early Entry
Embedded within the Core is the Early Entry configuration, which enables a commander to deploy an early (first 72–96 hours) command and control presence and develop situational awareness at a location prior to setting up a full temporary headquarters (when needed). The Early Entry configuration is a 20/40-operator suite, housed in just two of the Core's rectangular tents, which can be set up and providing limited communications and network connectivity within 4 – 6 hours of arrival at a location. When the rest of the Core arrives, it can be quickly connected to the Early Entry components to provide full Core functionality within 24 hours. Like the Core, the Early Entry configuration has its own generators for power and environmental control units for heating/cooling, but can be connected to external power when available. It also can be set up inside an available building instead of the tents.
En Route
The En Route configuration is a stand-alone DJC2 suite which enables the commander to establish and sustain effective command and control and situational awareness while traveling by air from the garrison headquarters to a deployed location. It provides 6 – 12 workstations, mounted to a special aircraft pallet, which allow operators to communicate, connect to two networks (one secure and one non-secure), and perform command and control functions while in flight on a C-130 or C-17 aircraft. Specially designed "roller carts" house the primary networking and communications equipment and at least one system administrator position to manage the network and communication interfaces. The En Route configuration can be marshaled from short-term storage and ready for aircraft installation within 3 hours of notification. The workstations are also operable on the ground, and onboard while the aircraft is on the flight line. The En Route configuration requires an external power source, such as power from the aircraft.
Rapid Response Kit
The Rapid Response Kit is a stand-alone DJC2 suite which enables a commander to deploy a lightweight communications package anywhere in the world at a moment's notice by a very small team carrying it on a military or commercial aircraft. The Rapid Response Kit supports 2 – 15 operators. It has no computer servers; instead, it "reaches back" electronically to established U.S. Department of Defense networks via satellite connectivity. It provides two networks simultaneously, chosen from four network options, including both secure and non-secure networks. It provides both voice and data communications, as well as a video teleconferencing capability. The Rapid Response Kit requires an external power source, such as commercial power from a building. It can also connect to other networks, such as a network in a commercial hotel.
Maritime Demonstrator
In partnership with the U.S. Navy Second Fleet, the DJC2 program has also produced and demonstrated a prototype configuration of a Joint Task Force headquarters afloat command and control capability, called the DJC2 Maritime Demonstrator. The demonstrator is a totally self-contained Joint Task Force Headquarters suite which can be installed aboard a ship. It requires nothing from the ship other than physical space, electrical power, and hotel services for the command staff. The demonstrator (which is a repackaging of the DJC2 architecture) consists of a group of climate-controlled ISO containers of two types: Staff Modules, which house 10 operator workstations per container; and Tech Control Modules, which house the networking, communications, and video distribution hardware that supports the operators.
The ISO containers are secured for sea using normal ship's tie-down points in any appropriate open space. The design also includes satellite communication antennas for connectivity to DISA's Global Information Grid. This connectivity may also be provided via the ship's organic satellite communications system. The demonstrator, which has been successfully tested, can easily be scaled larger when the mission requires it by adding more ISO containers of each type. In addition, the C2 architecture is sufficiently robust to support a large number of additional external users in other ship spaces (e.g., embarked unit spaces). The demonstrator can be installed on a designated ship of opportunity – including Navy, Military Sealift Command, Maritime Pre-Positioning Force Units, Coalition Force ships – or ashore. Since it is self-contained and easy to install/uninstall, it can be moved as needed among ships and installed wherever space and power are available.
Users
The DJC2 system is fielded to U.S. Combatant Commands and/or their component commands for their use in standing up Joint Task Forces in response to military and humanitarian crises. Currently, the commands that own a DJC2 system include: U.S. Southern Command; U.S. Pacific Command; U.S. European Command; Southern European Task Force/U.S. Army Africa; U.S. Army South, and III Marine Expeditionary Force.
The DJC2 system's command and control capabilities also have utility for non-military applications such as supporting Homeland Security efforts responding to natural disasters. As noted, DJC2 was used in a Joint Task Force supporting the rescue and relief effort in the aftermath of Hurricane Katrina on the U.S. Gulf Coast and of Cyclone Nargis in Myanmar (Burma).
Certifications
The DJC2 system is a fully tested, fully certified U.S. military system. Its certifications include:
Transportability (air, sea, road, and rail)
Information Assurance
Joint Interoperability
Authority to Operate
References
External links
DJC2 System
DJC2 Rapid Response Kit
DJC2 En Route Configuration
DJC2 Maritime Variant Demonstrator
DJC2 Deployable Joint Command and Control, official U.S. Navy web site (GILS Number: 001883)
Command and control in the United States Department of Defense
United States Army equipment
Georgia Tech Research Institute
Equipment of the United States Navy
Command and control systems of the United States military |
19980 | https://en.wikipedia.org/wiki/Machine%20translation | Machine translation | Machine translation, sometimes referred to by the abbreviation MT (not to be confused with computer-aided translation, machine-aided human translation or interactive translation), is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one language to another.
On a basic level, MT performs mechanical substitution of words in one language for words in another, but that alone rarely produces a good translation because recognition of whole phrases and their closest counterparts in the target language is needed. Not all words in one language have equivalent words in another language, and many words have more than one meaning.
Solving this problem with corpus-based statistical and neural techniques is a rapidly growing field that is leading to better translations, handling differences in linguistic typology, translation of idioms, and the isolation of anomalies.
Current machine translation software often allows for customization by domain or profession (such as weather reports), improving output by limiting the scope of allowable substitutions. This technique is particularly effective in domains where formal or formulaic language is used. It follows that machine translation of government and legal documents more readily produces usable output than conversation or less standardised text.
Improved output quality can also be achieved by human intervention: for example, some systems are able to translate more accurately if the user has unambiguously identified which words in the text are proper names. With the assistance of these techniques, MT has proven useful as a tool to assist human translators and, in a very limited number of cases, can even produce output that can be used as is (e.g., weather reports).
The progress and potential of machine translation have been much debated through its history. Since the 1950s, a number of scholars, first and most notably Yehoshua Bar-Hillel, have questioned the possibility of achieving fully automatic machine translation of high quality.
History
Origins
The origins of machine translation can be traced back to the work of Al-Kindi, a ninth-century Arabic cryptographer who developed techniques for systemic language translation, including cryptanalysis, frequency analysis, and probability and statistics, which are used in modern machine translation. The idea of machine translation later appeared in the 17th century. In 1629, René Descartes proposed a universal language, with equivalent ideas in different tongues sharing one symbol.
The idea of using digital computers for translation of natural languages was proposed as early as 1946 by England's A. D. Booth and, at about the same time, by Warren Weaver at the Rockefeller Foundation. "The memorandum written by Warren Weaver in 1949 is perhaps the single most influential publication in the earliest days of machine translation." Others followed. A demonstration was made in 1954 on the APEXC machine at Birkbeck College (University of London) of a rudimentary translation of English into French. Several papers on the topic were published at the time, and even articles in popular journals (for example an article by Cleave and Zacharov in the September 1955 issue of Wireless World). A similar application, also pioneered at Birkbeck College at the time, was reading and composing Braille texts by computer.
1950s
The first researcher in the field, Yehoshua Bar-Hillel, began his research at MIT (1951). A Georgetown University MT research team, led by Professor Michael Zarechnak, followed (1951) with a public demonstration of its Georgetown-IBM experiment system in 1954. MT research programs popped up in Japan and Russia (1955), and the first MT conference was held in London (1956).
David G. Hays "wrote about computer-assisted language processing as early as 1957" and "was project leader on computational linguistics at Rand from 1955 to 1968."
1960–1975
Researchers continued to join the field as the Association for Machine Translation and Computational Linguistics was formed in the U.S. (1962) and the National Academy of Sciences formed the Automatic Language Processing Advisory Committee (ALPAC) to study MT (1964). Real progress was much slower, however, and after the ALPAC report (1966), which found that the ten-year-long research had failed to fulfill expectations, funding was greatly reduced. According to a 1972 report by the Director of Defense Research and Engineering (DDR&E), the feasibility of large-scale MT was reestablished by the success of the Logos MT system in translating military manuals into Vietnamese during that conflict.
The French Textile Institute also used MT to translate abstracts from and into French, English, German and Spanish (1970); Brigham Young University started a project to translate Mormon texts by automated translation (1971).
1975 and beyond
SYSTRAN, which "pioneered the field under contracts from the U.S. government" in the 1960s, was used by Xerox to translate technical manuals (1978). Beginning in the late 1980s, as computational power increased and became less expensive, more interest was shown in statistical models for machine translation. MT became more popular after the advent of computers. SYSTRAN's first online implementation was deployed in 1988 on Minitel, the online service of the French Postal Service. Various computer-based translation companies were also launched, including Trados (1984), which was the first to develop and market translation memory technology (1989), though this is not the same as MT. The first commercial MT system for Russian / English / German-Ukrainian was developed at Kharkov State University (1991).
By 1998, "for as little as $29.95" one could "buy a program for translating in one direction between English and a major European language of your choice" to run on a PC.
MT on the web started with SYSTRAN offering free translation of small texts (1996) and then providing this via AltaVista Babelfish, which racked up 500,000 requests a day (1997). The second free translation service on the web was Lernout & Hauspie's GlobaLink. Atlantic Magazine wrote in 1998 that "Systran's Babelfish and GlobaLink's Comprende" handled "Don't bank on it" with a "competent performance."
Franz Josef Och (the future head of Translation Development at Google) won DARPA's speed MT competition (2003). More innovations during this time included MOSES, the open-source statistical MT engine (2007), a text/SMS translation service for mobiles in Japan (2008), and a mobile phone with built-in speech-to-speech translation functionality for English, Japanese and Chinese (2009). In 2012, Google announced that Google Translate translates roughly enough text to fill 1 million books in one day.
Translation process
The human translation process may be described as:
Decoding the meaning of the source text; and
Re-encoding this meaning in the target language.
Behind this ostensibly simple procedure lies a complex cognitive operation. To decode the meaning of the source text in its entirety, the translator must interpret and analyse all the features of the text, a process that requires in-depth knowledge of the grammar, semantics, syntax, idioms, etc., of the source language, as well as the culture of its speakers. The translator needs the same in-depth knowledge to re-encode the meaning in the target language.
Therein lies the challenge in machine translation: how to program a computer that will "understand" a text as a person does, and that will "create" a new text in the target language that sounds as if it has been written by a person. Unless aided by a 'knowledge base' MT provides only a general, though imperfect, approximation of the original text, getting the "gist" of it (a process called "gisting"). This is sufficient for many purposes, including making best use of the finite and expensive time of a human translator, reserved for those cases in which total accuracy is indispensable.
Approaches
Machine translation can use a method based on linguistic rules, which means that words will be translated in a linguistic way – the most suitable words of the target language will replace the ones in the source language.
It is often argued that the success of machine translation requires the problem of natural language understanding to be solved first.
Generally, rule-based methods parse a text, usually creating an intermediary, symbolic representation, from which the text in the target language is generated. According to the nature of the intermediary representation, an approach is described as interlingual machine translation or transfer-based machine translation. These methods require extensive lexicons with morphological, syntactic, and semantic information, and large sets of rules.
Given enough data, machine translation programs often work well enough for a native speaker of one language to get the approximate meaning of what is written by the other native speaker. The difficulty is getting enough data of the right kind to support the particular method. For example, the large multilingual corpus of data needed for statistical methods to work is not necessary for the grammar-based methods. But then, the grammar methods need a skilled linguist to carefully design the grammar that they use.
To translate between closely related languages, the technique referred to as rule-based machine translation may be used.
Rule-based
The rule-based machine translation paradigm includes transfer-based machine translation, interlingual machine translation and dictionary-based machine translation paradigms. This type of translation is used mostly in the creation of dictionaries and grammar programs. Unlike other methods, RBMT involves more information about the linguistics of the source and target languages, using the morphological and syntactic rules and semantic analysis of both languages. The basic approach involves linking the structure of the input sentence with the structure of the output sentence using a parser and an analyzer for the source language, a generator for the target language, and a transfer lexicon for the actual translation. RBMT's biggest downfall is that everything must be made explicit: orthographical variation and erroneous input must be made part of the source language analyser in order to cope with it, and lexical selection rules must be written for all instances of ambiguity. Adapting to new domains in itself is not that hard, as the core grammar is the same across domains, and the domain-specific adjustment is limited to lexical selection adjustment.
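A minimal sketch may help illustrate the analysis–transfer–generation flow described above. The tiny lexicon, part-of-speech tags, and single reordering rule below are invented for illustration only; a real transfer-based system uses a full parser and large hand-built rule sets:

```python
# Toy transfer-style translation: analyze (tag words), transfer (reorder
# English "ADJ NOUN" to "NOUN ADJ"), and generate (lexical selection).
# All entries are invented for illustration.
lexicon = {"blue": "bleue", "house": "maison", "a": "une"}
pos = {"blue": "ADJ", "house": "NOUN", "a": "DET"}

def translate(sentence: str) -> str:
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Transfer rule: swap an adjective-noun pair into noun-adjective order.
        if i + 1 < len(words) and pos.get(words[i]) == "ADJ" and pos.get(words[i + 1]) == "NOUN":
            out += [words[i + 1], words[i]]
            i += 2
        else:
            out.append(words[i])
            i += 1
    # Generation: look each word up in the bilingual lexicon.
    return " ".join(lexicon.get(w, w) for w in out)

print(translate("A blue house"))   # -> "une maison bleue"
```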
Transfer-based machine translation
Transfer-based machine translation is similar to interlingual machine translation in that it creates a translation from an intermediate representation that simulates the meaning of the original sentence. Unlike interlingual MT, it depends partially on the language pair involved in the translation.
Interlingual
Interlingual machine translation is one instance of rule-based machine-translation approaches. In this approach, the source language, i.e. the text to be translated, is transformed into an interlingual language, i.e. a "language neutral" representation that is independent of any language. The target language is then generated out of the interlingua. One of the major advantages of this system is that the interlingua becomes more valuable as the number of target languages it can be turned into increases. However, the only interlingual machine translation system that has been made operational at the commercial level is the KANT system (Nyberg and Mitamura, 1992), which is designed to translate Caterpillar Technical English (CTE) into other languages.
Dictionary-based
Machine translation can use a method based on dictionary entries, which means that the words will be translated as they are by a dictionary.
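A word-for-word substitution of this kind can be sketched in a few lines; the toy dictionary below is invented for illustration, and the example also shows why raw substitution ignores agreement, idioms, and one-to-many mappings:

```python
# Toy dictionary-based translation: substitute each word independently.
# The dictionary entries are invented for illustration only.
lexicon = {
    "the": "le", "cat": "chat", "sat": "s'assit",
    "on": "sur", "mat": "tapis",
}

def translate_word_for_word(sentence: str) -> str:
    # Unknown words are passed through unchanged, a common fallback.
    return " ".join(lexicon.get(word, word) for word in sentence.lower().split())

print(translate_word_for_word("The cat sat on the mat"))
# -> "le chat s'assit sur le tapis" (word order happens to survive here,
#    but gender agreement, ambiguity and idioms are not handled at all)
```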
Statistical
Statistical machine translation tries to generate translations using statistical methods based on bilingual text corpora, such as the Canadian Hansard corpus, the English-French record of the Canadian parliament and EUROPARL, the record of the European Parliament. Where such corpora are available, good results can be achieved translating similar texts, but such corpora are still rare for many language pairs. The first statistical machine translation software was CANDIDE from IBM. Google used SYSTRAN for several years, but switched to a statistical translation method in October 2007. In 2005, Google improved its internal translation capabilities by using approximately 200 billion words from United Nations materials to train their system; translation accuracy improved. Google Translate and similar statistical translation programs work by detecting patterns in hundreds of millions of documents that have previously been translated by humans and making intelligent guesses based on the findings. Generally, the more human-translated documents available in a given language, the more likely it is that the translation will be of good quality. Newer approaches to statistical machine translation such as METIS II and PRESEMT use minimal corpus size and instead focus on derivation of syntactic structure through pattern recognition. With further development, this may allow statistical machine translation to operate off of a monolingual text corpus. SMT's biggest downfalls include its dependence upon huge amounts of parallel texts, its problems with morphology-rich languages (especially with translating into such languages), and its inability to correct singleton errors.
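A common formulation is the noisy-channel model: the system picks the target sentence e that maximizes P(e)·P(f|e), combining a translation model with a target-language model. The sketch below illustrates only the scoring step; the probabilities, sentences, and one-to-one alignment are invented for illustration, whereas real systems estimate these values from millions of aligned sentence pairs and search over a huge candidate space:

```python
import math

# Invented toy probabilities; a real system estimates these from corpora.
translation_model = {            # P(source_word | target_word)
    ("il", "he"): 0.6, ("il", "it"): 0.4,
    ("dort", "sleeps"): 0.9, ("dort", "sleep"): 0.1,
}
language_model = {               # P(word | previous_word) on the target side
    ("<s>", "he"): 0.02, ("he", "sleeps"): 0.05,
    ("<s>", "it"): 0.03, ("it", "sleeps"): 0.02,
    ("he", "sleep"): 0.001,
}

def score(source_words, candidate_words):
    # log P(f|e) + log P(e), assuming a monotone one-to-one alignment.
    logp = 0.0
    for f, e in zip(source_words, candidate_words):
        logp += math.log(translation_model.get((f, e), 1e-9))
    prev = "<s>"
    for e in candidate_words:
        logp += math.log(language_model.get((prev, e), 1e-9))
        prev = e
    return logp

source = ["il", "dort"]
candidates = [["he", "sleeps"], ["it", "sleeps"], ["he", "sleep"]]
print(max(candidates, key=lambda c: score(source, c)))   # -> ['he', 'sleeps']
```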
Example-based
The example-based machine translation (EBMT) approach was proposed by Makoto Nagao in 1984. Example-based machine translation is based on the idea of analogy. In this approach, the corpus that is used is one that contains texts that have already been translated. Given a sentence that is to be translated, sentences from this corpus are selected that contain similar sub-sentential components. The similar sentences are then used to translate the sub-sentential components of the original sentence into the target language, and these phrases are put together to form a complete translation.
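The retrieval step can be sketched as follows. The sentence pairs and similarity measure are invented for illustration; a real EBMT system would go on to splice in the translation of the differing fragment rather than reuse the retrieved example verbatim:

```python
# Toy example-based retrieval: find the stored translation whose source side
# is most similar to the input. The pairs below are illustrative only.
examples = [
    ("how much is that red umbrella", "ano akai kasa wa ikura desu ka"),
    ("how much is that small camera", "ano chiisai kamera wa ikura desu ka"),
]

def similarity(a: str, b: str) -> float:
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)          # Jaccard overlap of word sets

def closest_example(sentence: str):
    return max(examples, key=lambda ex: similarity(sentence, ex[0]))

src, tgt = closest_example("how much is that red bag")
print(src, "->", tgt)
```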
Hybrid MT
Hybrid machine translation (HMT) leverages the strengths of statistical and rule-based translation methodologies. Several MT organizations claim a hybrid approach that uses both rules and statistics. The approaches differ in a number of ways:
Rules post-processed by statistics: Translations are performed using a rules based engine. Statistics are then used in an attempt to adjust/correct the output from the rules engine.
Statistics guided by rules: Rules are used to pre-process data in an attempt to better guide the statistical engine. Rules are also used to post-process the statistical output to perform functions such as normalization. This approach has a lot more power, flexibility and control when translating. It also provides extensive control over the way in which the content is processed during both pre-translation (e.g. markup of content and non-translatable terms) and post-translation (e.g. post translation corrections and adjustments).
More recently, with the advent of Neural MT, a new version of hybrid machine translation is emerging that combines the benefits of rules, statistical and neural machine translation. The approach allows benefitting from pre- and post-processing in a rule guided workflow as well as benefitting from NMT and SMT. The downside is the inherent complexity which makes the approach suitable only for specific use cases.
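The "statistics guided by rules" variant described above can be sketched as a three-stage pipeline: rule-based pre-processing protects terms that must not be translated, a statistical (or neural) engine translates, and rule-based post-processing restores the terms and normalizes the output. The term list, placeholder scheme, and stubbed engine below are invented for illustration:

```python
import re

def preprocess(text, do_not_translate):
    # Rule step: protect terms that must pass through untranslated.
    for i, term in enumerate(do_not_translate):
        text = text.replace(term, f"__TERM{i}__")
    return text

def statistical_engine(text):
    # Stand-in for a statistical or neural engine; it never sees the raw term.
    fake_output = {"send the report to __TERM0__": "envoyez le rapport à __TERM0__"}
    return fake_output.get(text, text)

def postprocess(text, do_not_translate):
    # Rule step: restore protected terms and normalize whitespace.
    for i, term in enumerate(do_not_translate):
        text = text.replace(f"__TERM{i}__", term)
    return re.sub(r"\s+", " ", text).strip()

terms = ["ACME GmbH"]
masked = preprocess("send the report to ACME GmbH", terms)
print(postprocess(statistical_engine(masked), terms))
# -> "envoyez le rapport à ACME GmbH"
```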
Neural MT
A deep learning-based approach to MT, neural machine translation, has made rapid progress in recent years, and Google has announced its translation services are now using this technology in preference over its previous statistical methods. A Microsoft team claimed to have reached human parity on WMT-2017 ("EMNLP 2017 Second Conference On Machine Translation") in 2018, marking a historical milestone.
However, many researchers have criticized this claim, rerunning and discussing their experiments; the current consensus is that the so-called human parity achieved is not real, being based wholly on limited domains, language pairs, and certain test suites – i.e., it lacks statistical significance. There is still a long journey before NMT reaches real human parity performance.
To address idiomatic phrase translation, multi-word expressions, and low-frequency words (also called OOV, or out-of-vocabulary, words), language-focused linguistic features have been explored in state-of-the-art neural machine translation (NMT) models. For instance, decomposing Chinese characters into radicals and strokes has proven to be helpful for translating multi-word expressions in NMT.
Major issues
Disambiguation
Word-sense disambiguation concerns finding a suitable translation when a word can have more than one meaning. The problem was first raised in the 1950s by Yehoshua Bar-Hillel. He pointed out that without a "universal encyclopedia", a machine would never be able to distinguish between the two meanings of a word. Today there are numerous approaches designed to overcome this problem. They can be approximately divided into "shallow" approaches and "deep" approaches.
Shallow approaches assume no knowledge of the text. They simply apply statistical methods to the words surrounding the ambiguous word. Deep approaches presume a comprehensive knowledge of the word. So far, shallow approaches have been more successful.
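The shallow idea can be sketched as a simple context-overlap test: choose the sense whose typical collocates overlap most with the words surrounding the ambiguous word. The sense inventory and signature words below are invented for illustration; real systems learn such associations statistically from corpora:

```python
# Toy "shallow" disambiguation: pick the sense whose signature words overlap
# most with the local context. Signatures here are invented for illustration.
sense_signatures = {
    "bank/finance": {"money", "account", "loan", "deposit"},
    "bank/river":   {"river", "water", "shore", "fishing"},
}

def disambiguate(sentence: str, target: str = "bank") -> str:
    context = set(sentence.lower().split()) - {target}
    return max(sense_signatures,
               key=lambda sense: len(sense_signatures[sense] & context))

print(disambiguate("he sat on the bank of the river fishing"))     # -> bank/river
print(disambiguate("she opened a bank account to deposit money"))  # -> bank/finance
```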
Claude Piron, a long-time translator for the United Nations and the World Health Organization, wrote that machine translation, at its best, automates the easier part of a translator's job; the harder and more time-consuming part usually involves doing extensive research to resolve ambiguities in the source text, which the grammatical and lexical exigencies of the target language require to be resolved:
The ideal deep approach would require the translation software to do all the research necessary for this kind of disambiguation on its own; but this would require a higher degree of AI than has yet been attained. A shallow approach which simply guessed at the sense of the ambiguous English phrase that Piron mentions (based, perhaps, on which kind of prisoner-of-war camp is more often mentioned in a given corpus) would have a reasonable chance of guessing wrong fairly often. A shallow approach that involves "ask the user about each ambiguity" would, by Piron's estimate, only automate about 25% of a professional translator's job, leaving the harder 75% still to be done by a human.
Non-standard speech
One of the major pitfalls of MT is its inability to translate non-standard language with the same accuracy as standard language. Heuristic or statistical based MT takes input from various sources in standard form of a language. Rule-based translation, by nature, does not include common non-standard usages. This causes errors in translation from a vernacular source or into colloquial language. Limitations on translation from casual speech present issues in the use of machine translation in mobile devices.
Named entities
In information extraction, named entities, in a narrow sense, refer to concrete or abstract entities in the real world such as people, organizations, companies, and places that have a proper name: George Washington, Chicago, Microsoft. It also refers to expressions of time, space and quantity such as 1 July 2011, $500.
In the sentence "Smith is the president of Fabrionix" both Smith and Fabrionix are named entities, and can be further qualified via first name or other information; "president" is not, since Smith could have earlier held another position at Fabrionix, e.g. Vice President.
The term rigid designator is what defines these usages for analysis in statistical machine translation.
Named entities must first be identified in the text; if not, they may be erroneously translated as common nouns, which would most likely not affect the BLEU rating of the translation but would change the text's human readability. They may be omitted from the output translation, which would also have implications for the text's readability and message.
Transliteration includes finding the letters in the target language that most closely correspond to the name in the source language. This, however, has been cited as sometimes worsening the quality of translation. For "Southern California" the first word should be translated directly, while the second word should be transliterated. Machines often transliterate both because they treat them as one entity. Words like these are hard for machine translators, even those with a transliteration component, to process.
Use of a "do-not-translate" list, which has the same end goal (transliteration as opposed to translation), still relies on correct identification of named entities.
A third approach is a class-based model. Named entities are replaced with a token to represent their "class"; "Ted" and "Erica" would both be replaced with "person" class token. Then the statistical distribution and use of person names, in general, can be analyzed instead of looking at the distributions of "Ted" and "Erica" individually, so that the probability of a given name in a specific language will not affect the assigned probability of a translation. A study by Stanford on improving this area of translation gives the examples that different probabilities will be assigned to "David is going for a walk" and "Ankit is going for a walk" for English as a target language due to the different number of occurrences for each name in the training data. A frustrating outcome of the same study by Stanford (and other attempts to improve named recognition translation) is that many times, a decrease in the BLEU scores for translation will result from the inclusion of methods for named entity translation.
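The class-based idea can be sketched as masking recognized names before translation and restoring them afterwards, so the statistical model only ever scores the class token. The name list and the stubbed "translation" table below are invented for illustration:

```python
# Toy class-based handling of named entities: the engine sees "<PERSON>"
# regardless of which name appears, and the name is restored afterwards.
known_person_names = {"Ted", "Erica", "Ankit", "David"}

def mask_names(words):
    entities = [w for w in words if w in known_person_names]
    masked = ["<PERSON>" if w in known_person_names else w for w in words]
    return masked, entities

def unmask_names(words, entities):
    entities = iter(entities)
    return [next(entities) if w == "<PERSON>" else w for w in words]

def stub_translate(words):
    # Stand-in for the statistical engine; the table is invented for illustration.
    table = {"<PERSON>": "<PERSON>", "is": "est", "walking": "en promenade"}
    return [table.get(w, w) for w in words]

masked, ents = mask_names("Ankit is walking".split())
print(" ".join(unmask_names(stub_translate(masked), ents)))
# -> "Ankit est en promenade"
```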
Somewhat related are the phrases "drinking tea with milk" vs. "drinking tea with Molly."
Translation from multiparallel sources
Some work has been done in the utilization of multiparallel corpora, that is, a body of text that has been translated into three or more languages. Using these methods, a text that has been translated into two or more languages may be utilized in combination to provide a more accurate translation into a third language compared with if just one of those source languages were used alone.
Ontologies in MT
An ontology is a formal representation of knowledge that includes the concepts (such as objects, processes etc.) in a domain and some relations between them. If the stored information is of linguistic nature, one can speak of a lexicon.
In NLP, ontologies can be used as a source of knowledge for machine translation systems. With access to a large knowledge base, systems can be enabled to resolve many (especially lexical) ambiguities on their own.
In the following classic examples, as humans, we are able to interpret the prepositional phrase according to the context because we use our world knowledge, stored in our lexicons:
I saw a man/star/molecule with a microscope/telescope/binoculars.
A machine translation system initially would not be able to differentiate between the meanings because syntax does not change. With a large enough ontology as a source of knowledge however, the possible interpretations of ambiguous words in a specific context can be reduced.
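A minimal sketch of this idea, using an invented miniature ontology, checks which attachment of the prepositional phrase is consistent with stored world knowledge about what can be viewed with which instrument:

```python
# Invented miniature ontology: which instruments can be used to view which
# kinds of object. A real knowledge base would be far larger and richer.
can_view_with = {
    "microscope": {"molecule", "cell"},
    "telescope": {"star", "planet"},
    "binoculars": {"man", "star"},
}

def attach_pp(obj: str, instrument: str) -> str:
    # Decide whether "with <instrument>" modifies the seeing or the object seen.
    if obj in can_view_with.get(instrument, set()):
        return f"'{instrument}' modifies the seeing (instrument reading)"
    return f"'{instrument}' modifies '{obj}' (the {obj} has the {instrument})"

print(attach_pp("molecule", "microscope"))  # instrument reading
print(attach_pp("man", "telescope"))        # the man has the telescope
```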
Other areas of usage for ontologies within NLP include information retrieval, information extraction and text summarization.
Building ontologies
The ontology generated for the PANGLOSS knowledge-based machine translation system in 1993 may serve as an example of how an ontology for NLP purposes can be compiled:
A large-scale ontology is necessary to help parsing in the active modules of the machine translation system.
In the PANGLOSS example, about 50,000 nodes were intended to be subsumed under the smaller, manually-built upper (abstract) region of the ontology. Because of its size, it had to be created automatically.
The goal was to merge the two resources LDOCE online and WordNet to combine the benefits of both: concise definitions from Longman, and semantic relations allowing for semi-automatic taxonomization to the ontology from WordNet.
A definition match algorithm was created to automatically merge the correct meanings of ambiguous words between the two online resources, based on the words that the definitions of those meanings have in common in LDOCE and WordNet. Using a similarity matrix, the algorithm delivered matches between meanings including a confidence factor. This algorithm alone, however, did not match all meanings correctly on its own.
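The flavour of such a definition match can be sketched with word overlap between sense definitions; the dictionary entries below are invented stand-ins for LDOCE and WordNet glosses, and the "confidence" here is only a crude normalized score:

```python
# Invented sense definitions standing in for LDOCE and WordNet entries.
ldoce = {
    "seal_1": "a large sea animal that eats fish and lives around coasts",
    "seal_2": "an official mark put on a document to show it is genuine",
}
wordnet = {
    "seal_a": "aquatic carnivorous mammal that eats fish",
    "seal_b": "a device incised to make an impression on a document",
}

def overlap(defn_a: str, defn_b: str) -> int:
    return len(set(defn_a.split()) & set(defn_b.split()))

# Similarity matrix: one score per pair of senses.
matrix = {(a, b): overlap(da, db) for a, da in ldoce.items()
                                  for b, db in wordnet.items()}

for a in ldoce:
    best = max(wordnet, key=lambda b: matrix[(a, b)])
    total = sum(matrix[(a, b)] for b in wordnet) or 1
    print(a, "->", best, f"(confidence {matrix[(a, best)] / total:.2f})")
```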
A second hierarchy match algorithm was therefore created which uses the taxonomic hierarchies found in WordNet (deep hierarchies) and partially in LDOCE (flat hierarchies). This works by first matching unambiguous meanings, then limiting the search space to only the respective ancestors and descendants of those matched meanings. Thus, the algorithm matched locally unambiguous meanings (for instance, while the word seal as such is ambiguous, there is only one meaning of seal in the animal subhierarchy).
Both algorithms complemented each other and helped construct a large-scale ontology for the machine translation system. The WordNet hierarchies, coupled with the matching definitions of LDOCE, were subordinated to the ontology's upper region. As a result, the PANGLOSS MT system was able to make use of this knowledge base, mainly in its generation element.
Applications
While no system provides the holy grail of fully automatic high-quality machine translation of unrestricted text, many fully automated systems produce reasonable output. The quality of machine translation is substantially improved if the domain is restricted and controlled.
Despite their inherent limitations, MT programs are used around the world. Probably the largest institutional user is the European Commission. One EU-funded project, coordinated by the University of Gothenburg, for example, received more than 2.375 million euros in project support from the EU to create a reliable translation tool that covers a majority of the EU languages. The further development of MT systems comes at a time when budget cuts in human translation may increase the EU's dependency on reliable MT programs. The European Commission contributed 3.072 million euros (via its ISA programme) for the creation of MT@EC, a statistical machine translation program tailored to the administrative needs of the EU, to replace a previous rule-based machine translation system.
In 2005, Google claimed that promising results were obtained using a proprietary statistical machine translation engine. The statistical translation engine used in the Google language tools for Arabic <-> English and Chinese <-> English had an overall score of 0.4281 over the runner-up IBM's BLEU-4 score of 0.3954 (Summer 2006) in tests conducted by the National Institute of Standards and Technology.
With the recent focus on terrorism, military sources in the United States have been investing significant amounts of money in natural language engineering. In-Q-Tel (a venture capital fund, largely funded by the US Intelligence Community, to stimulate new technologies through private sector entrepreneurs) backed companies such as Language Weaver. Currently the military community is interested in translation and processing of languages like Arabic, Pashto, and Dari. Within these languages, the focus is on key phrases and quick communication between military members and civilians through the use of mobile phone apps. The Information Processing Technology Office in DARPA hosts programs like TIDES and the Babylon translator. The US Air Force has awarded a $1 million contract to develop a language translation technology.
The notable rise of social networking on the web in recent years has created yet another niche for the application of machine translation software – in utilities such as Facebook, or instant messaging clients such as Skype, GoogleTalk, MSN Messenger, etc. – allowing users speaking different languages to communicate with each other. Machine translation applications have also been released for most mobile devices, including mobile telephones, pocket PCs, PDAs, etc. Due to their portability, such instruments have come to be designated as mobile translation tools enabling mobile business networking between partners speaking different languages, or facilitating both foreign language learning and unaccompanied traveling to foreign countries without the need of the intermediation of a human translator.
Despite being labelled as an unworthy competitor to human translation in 1966 by the Automated Language Processing Advisory Committee put together by the United States government, the quality of machine translation has now been improved to such levels that its applications in online collaboration and in the medical field are being investigated. The application of this technology in medical settings where human translators are absent is another topic of research, but difficulties arise due to the importance of accurate translations in medical diagnoses.
Evaluation
There are many factors that affect how machine translation systems are evaluated. These factors include the intended use of the translation, the nature of the machine translation software, and the nature of the translation process.
Different programs may work well for different purposes. For example, statistical machine translation (SMT) typically outperforms example-based machine translation (EBMT), but researchers found that when evaluating English-to-French translation, EBMT performed better. The same concept applies to technical documents, which can be more easily translated by SMT because of their formal language.
In certain applications, however, e.g., product descriptions written in a controlled language, a dictionary-based machine-translation system has produced satisfactory translations that require no human intervention save for quality inspection.
There are various means for evaluating the output quality of machine translation systems. The oldest is the use of human judges to assess a translation's quality. Even though human evaluation is time-consuming, it is still the most reliable method to compare different systems such as rule-based and statistical systems. Automated means of evaluation include BLEU, NIST, METEOR, and LEPOR.
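To give a sense of how such automated metrics work, the sketch below computes a simplified, single-sentence variant of BLEU: modified n-gram precision (candidate n-gram counts clipped by the reference counts) combined with a brevity penalty. Real implementations aggregate over a whole corpus, use multiple references, and apply smoothing; the sentences here are invented for illustration:

```python
import math
from collections import Counter

def ngrams(words, n):
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def sentence_bleu(reference, candidate, max_n=4):
    ref, cand = reference.split(), candidate.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # "Modified" precision: clip candidate counts by reference counts.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    brevity_penalty = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return brevity_penalty * math.exp(sum(log_precisions) / max_n)

print(sentence_bleu("the cat is on the mat", "the cat is on the mat"))   # identical -> 1.0
print(sentence_bleu("the cat is on the mat", "the cat sat on the mat"))
# near zero here because no 4-gram matches; this is why real scorers smooth
# the n-gram precisions and average over a corpus rather than one sentence.
```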
Relying exclusively on unedited machine translation ignores the fact that communication in human language is context-embedded and that it takes a person to comprehend the context of the original text with a reasonable degree of probability. It is certainly true that even purely human-generated translations are prone to error. Therefore, to ensure that a machine-generated translation will be useful to a human being and that publishable-quality translation is achieved, such translations must be reviewed and edited by a human. The late Claude Piron wrote that machine translation, at its best, automates the easier part of a translator's job; the harder and more time-consuming part usually involves doing extensive research to resolve ambiguities in the source text, which the grammatical and lexical exigencies of the target language require to be resolved. Such research is a necessary prelude to the pre-editing necessary in order to provide input for machine-translation software such that the output will not be meaningless.
In addition to disambiguation problems, decreased accuracy can occur due to varying levels of training data for machine translating programs. Both example-based and statistical machine translation rely on a vast array of real example sentences as a base for translation, and when too many or too few sentences are analyzed accuracy is jeopardized. Researchers found that when a program is trained on 203,529 sentence pairings, accuracy actually decreases. The optimal level of training data seems to be just over 100,000 sentences, possibly because as training data increases, the number of possible sentences increases, making it harder to find an exact translation match.
Using machine translation as a teaching tool
Although there have been concerns about machine translation's accuracy, Dr. Ana Nino of the University of Manchester has researched some of the advantages of utilizing machine translation in the classroom. One such pedagogical method is to use "MT as a Bad Model." MT as a Bad Model forces the language learner to identify inconsistencies or incorrect aspects of a translation; in turn, the individual will (hopefully) develop a better grasp of the language. Dr. Nino cites that this teaching tool was implemented in the late 1980s. At the end of various semesters, Dr. Nino was able to obtain survey results from students who had used MT as a Bad Model (as well as other models). Overwhelmingly, students felt that they had observed improved comprehension, lexical retrieval, and increased confidence in their target language.
Machine translation and signed languages
In the early 2000s, options for machine translation between spoken and signed languages were severely limited. It was a common belief that deaf individuals could use traditional translators. However, stress, intonation, pitch, and timing are conveyed much differently in spoken languages compared to signed languages. Therefore, a deaf individual may misinterpret or become confused about the meaning of written text that is based on a spoken language.
Researchers Zhao, et al. (2000), developed a prototype called TEAM (translation from English to ASL by machine) that completed English to American Sign Language (ASL) translations. The program would first analyze the syntactic, grammatical, and morphological aspects of the English text. Following this step, the program accessed a sign synthesizer, which acted as a dictionary for ASL. This synthesizer housed the process one must follow to complete ASL signs, as well as the meanings of these signs. Once the entire text was analyzed and the signs necessary to complete the translation were located in the synthesizer, a computer-generated human appeared and used ASL to sign the English text to the user.
Copyright
Only works that are original are subject to copyright protection, so some scholars claim that machine translation results are not entitled to copyright protection because MT does not involve creativity. The copyright at issue is for a derivative work; the author of the original work in the original language does not lose his rights when a work is translated: a translator must have permission to publish a translation.
See also
AI-complete
Cache language model
Comparison of machine translation applications
Comparison of different machine translation approaches
Computational linguistics
Computer-assisted translation and Translation memory
Controlled language in machine translation
Controlled natural language
Foreign language writing aid
Fuzzy matching
History of machine translation
Human language technology
Humour in translation ("howlers")
Language and Communication Technologies
Language barrier
List of emerging technologies
List of research laboratories for machine translation
Mobile translation
Neural machine translation
OpenLogos
Phraselator
Postediting
Pseudo-translation
Round-trip translation
Statistical machine translation
Translation memory
ULTRA (machine translation system)
Universal Networking Language
Universal translator
Notes
Further reading
Lewis-Kraus, Gideon, "Tower of Babble", New York Times Magazine, 7 June 2015, pp. 48–52.
Weber, Steven and Nikita Mehandru. 2021. "The 2020s political economy of machine translation." Business and Politics.
External links
The Advantages and Disadvantages of Machine Translation
International Association for Machine Translation (IAMT)
Machine Translation Archive by John Hutchins. An electronic repository (and bibliography) of articles, books and papers in the field of machine translation and computer-based translation technology
Machine translation (computer-based translation) – Publications by John Hutchins (includes PDFs of several books on machine translation)
Machine Translation and Minority Languages
John Hutchins 1999
Artificial intelligence applications
Computational linguistics
Computer-assisted translation
Tasks of natural language processing |
625728 | https://en.wikipedia.org/wiki/JavaOS | JavaOS | JavaOS is an operating system based on a Java virtual machine and predominantly used on SIM cards to run applications on behalf of operators and security services. It was originally developed by Sun Microsystems. Unlike Windows, macOS, Unix, or Unix-like systems which are primarily written in the C programming language, JavaOS is primarily written in Java. It is now considered a legacy system.
History
The Java programming language was introduced by Sun in May 1995. Jim Mitchell and Peter Madany at JavaSoft designed a new operating system, codenamed Kona, written completely in Java. In March 1996, Tom Saulpaugh joined the now seven-person Kona team to design an I/O architecture, having come from Apple, where he had been a Mac OS engineer since June 1985 and a co-architect of Copland.
JavaOS was first evangelized in a Byte article. In 1996, JavaSoft's official product announcement described the compact OS designed to run "in anything from net computers to pagers". In early 1997, JavaSoft transferred JavaOS to SunSoft. In late 1997, Bob Rodriguez led the team to collaborate with IBM who then marketed the platform, accelerated development, and made significant key architectural contributions to the next release of JavaOS, eventually renamed JavaOS for Business. IBM indicated its focus was more on network computer thin clients, specifically to replace traditional "green screen" and UNIX terminals, and to implement single application clients.
The Chorus distributed real-time operating system was used for its microkernel technology. This began with Chorus Systèmes SA, a French company, licensing JavaOS from Sun and replacing the earlier JavaOS hardware abstraction layer with the Chorus microkernel, thereby creating the Chorus/Jazz product, which was intended to allow Java applications to run in a distributed, real-time embedded system environment. Then in September 1997, it was announced that Sun Microsystems was acquiring Chorus Systèmes SA.
JavaSoft granted licenses to more than 25 manufacturers, including Oracle Corp, Acer Inc., Xerox, Toshiba Corp, and Nokia. IBM and Sun announced their cooperation on JavaOS for Business at the end of March 1998.
In 1999, Sun and IBM announced the discontinuation of the JavaOS product. As early as 2003, Sun materials referred to JavaOS as a "legacy technology", recommending migration to Java ME, leaving the choice of specific OS and Java environment to the implementer.
Overview
JavaOS is based on a microkernel native to the hardware architecture, running on platforms including ARM, PowerPC, SPARC, StrongARM, and IA-32 (x86). The Java virtual machine runs on top of the microkernel. All device drivers are written in Java and executed by the virtual machine. A graphics and windowing system implementing the AWT API is also written in Java.
JavaOS was designed to run on embedded systems and has applications in devices such as set-top boxes, networking infrastructure, and ATMs. It was shipped with the JavaStation.
See also
JX (operating system)
JNode (operating system)
SavaJe
Android
Vino (operating system)
Java Desktop System
ChorusOS
Inferno (operating system)
References
External links
ARM operating systems
Embedded operating systems
Java platform
Microkernels
Microkernel-based operating systems
Object-oriented operating systems
Sun Microsystems software |
626229 | https://en.wikipedia.org/wiki/DOS/360%20and%20successors | DOS/360 and successors | Disk Operating System/360, also DOS/360, or simply DOS, is the discontinued first member of a sequence of operating systems for IBM System/360, System/370 and later mainframes. It was announced by IBM on the last day of 1964, and it was first delivered in June 1966. In its time, DOS/360 was the most widely used operating system in the world.
DOS versions
BOS/360
The Basic Operating System (BOS) was an early version of DOS and TOS which could provide usable functionality on a system with as little as 8 KB of main storage and one 2311 disk drive.
TOS/360
TOS/360 (Tape Operating System/360, not a DOS as such and not so called) was an IBM operating system for the System/360, used in the early days around 1965 to support the System/360 Model 30 and similar platforms.
TOS, as per the "Tape" in the name, required a tape drive. It shared most of the code base and some manuals with IBM's DOS/360.
TOS went through 14 releases, and was discontinued when disks such as the IBM 2311 and IBM 2314 became more affordable at the time of System/360, whereas they had been an expensive luxury on the IBM 7090.
DOS/360
DOS/360 was the primary operating system for most small to midsize S/360 installations.
DOS/VS
DOS/VS was released in 1972. The first DOS/VS release was numbered "Release 28" to signify an incremental upgrade from DOS/360. It added virtual memory in support of the new System/370 series hardware. It used a fixed page table which mapped a single address space of up to 16 megabytes for all partitions combined.
DOS/VS increased the number of partitions (separate simultaneous programs) from three (named Background, Foreground 1 and Foreground 2) to five (BG and F1 through F4) and allowed a system wide total of fifteen subtasks.
DOS/VS was succeeded by DOS/VSE through z/VSE.
DOS/VSE
DOS/VSE was introduced in 1979 as an "extended" version of DOS/VS to support the new 4300 processors.
The 4300 systems included a feature called ECPS:VSE that provided a single-level storage for both the processor and the I/O channels. DOS/VSE provided support for ECPS:VSE, but could also run on a System/370 without that feature. VSE was the last free version of DOS.
VSE/AF
VSE/Advanced Functions (VSE/AF) is a product that adds new device support and functionality to DOS/VSE. Many installations installed VSE/AF using products such as VSE System Installation Productivity Option/Extended (VSE System IPO/E), which combines DOS/VSE, VSE/AF and various other products.
SSX/VSE
SSX/VSE ("Small System Executive") was an attempt by IBM to simplify purchase and installation of VSE by providing a pre-generated system containing the OS and the most popular products. SSX was released in 1982, and later replaced by VSE/SP. SSX was sold by IBM as a bundle of 14 component products (Advanced Functions/VSE, VSE/POWER, ACF/VTAME, VSE/VSAM, CICS/DOS/VS, DOS/VS, Sort/Merge, VSE/ICCF, VSE/OCCF, VSE/IPCS, DOS/COBOL, Back Up/Restore, Space Management, VSE/DITTO), and originally would only agree to offer the individual products separately via RPQ, although IBM later agreed to add those products individually to its price list under pressure from ISVs who claimed that the bundling violated antitrust laws.
VSE/SP
In 1986 IBM released VSE/SP ("System Product") in conjunction with the announcement of the 9370 processors. VSE/SP replaced SSX/VSE and bundled VSE with the most popular VSE program products such as VSE/AF, ACF/VTAM, CICS, and POWER/VS. VSE/SP supported only 24-bit addresses, despite customer requests to provide an XA (31 bit) version.
VSE/ESA
VSE/ESA was a 31-bit DOS/VSE version, which was released in 1990 with support for up to 384 MB of real storage. It provided up to twelve static partitions and allowed VSE/POWER and ACF/VTAM to be run in private address spaces. It introduced a new feature called dynamic partitions which could allow up to 150 concurrent jobs, each in its own address space. Version 1 could run in either ESA or 370 mode, with the ESA mode also supporting XA hardware with limitations. Version 2 only supported ESA mode with ESA hardware.
z/VSE
IBM released z/VSE 3.1 in 2005. This change in naming reflected the new "System z" branding for IBM's mainframe product line, but did not represent a fundamental change in architecture from VSE/ESA 2.7 which preceded it. In particular, it did not support the new 64-bit z/Architecture, running only in 31-bit mode even on 64-bit capable machines. z/VSE 4.1 released in 2007 introduced support for 64-bit real addressing, with up to 8 GB of memory. However, while parts of the supervisor run in 64-bit mode, it only provides 31-bit virtual address spaces to problem state applications. As of 2011 one estimate placed the number of sites using z/VSE at around 4,000.
History
When developing a new hardware generation of unified System/360 (or S/360) computers, IBM had originally committed to delivering a single operating system, OS/360, also compatible with low-end machines; but hardware was already available and the OS/360 project fell further and further behind schedule, as described at length by Fred Brooks in The Mythical Man-Month. IBM was forced to quickly develop four additional systems:
BPS/360 for machines with at least 8 KB of core memory and a punched card reader,
BOS/360 for machines with at least 8 KB memory and a disk drive,
DOS/360 for machines with at least 16 KB memory and a disk drive,
TOS/360 for machines with at least 16 KB memory and a tape drive.
When OS/360 was finally released, a year late, it required at least 64 KB of memory. DOS was designed to use little memory, and could run on 16 KB machines, a configuration available on the low-end S/360 model 30. Unlike OS/360, DOS/360 was initially a single-job system which did not support multitasking. A version with multitasking, supporting up to three memory partitions, requiring 32 KB of memory was later released. Despite its limitations, DOS/360 became the most widely used operating system for processors with less than 256 KB of memory because: System/360 hardware sold very well; DOS/360 ran well on System/360 processors which medium-sized organizations could afford; and it was better than the "operating systems" these customers had before.
DOS/360 was the operating system which filled the time gap between the announcement of the System/360 and the availability of the intended operating system, OS/360. As a result of the delay, a number of customers implemented DOS systems and committed significant investments to run them. IBM expected that DOS/360 users would soon upgrade to OS/360, but as a result of those investments, they were reluctant to commit to such conversion. IBM then needed to continue to offer DOS/360 as an additional operating system. The Hacker's Jargon File incorrectly states that GECOS (also known as GCOS) was copied from DOS/360, which was not the case, however the Xerox Data Systems Xerox Operating System (XOS) was intentionally similar to DOS to simplify program porting.
Hardware requirements
DOS/360 required a System/360 CPU (model 25 and above) with the standard instruction set (decimal and floating-point instruction sets optional). The minimum memory requirement was 16 KB; storage protection was required only if multiprogramming was used. A 1052 Model 7 printer-keyboard, either a selector or multiplexor channel, and at least one disk drive was required — initially a 2311 holding 7.25 MB. A card reader, card punch and line printer were usually included, but magnetic tape drives could be substituted.
A typical configuration might consist of a S/360 model 30 with 32KB memory and the decimal instruction set, an IBM 2540 card reader/card punch, an IBM 1403 printer, two or three IBM 2311 disks, two IBM 2415 magnetic tape drives, and the 1052-7 console.
Technical details
The following description applies to DOS/360 except as otherwise noted. Later versions offer additional functionality.
Because DOS/360 was designed to run on low-end models of System/360 memory usage was a concern. It was possible to generate a DOS supervisor, the resident portion of the operating system, as small as 5902 bytes. Detailed charts listed memory requirements for each sysgen option, often as little as 100 bytes. A minimum system would leave just over 10 KB of storage available for a single batch partition which was enough to run utilities and all compilers except COBOL, PL/I, and full FORTRAN IV. To keep memory usage as small as possible, DOS was coded entirely in assembly language.
Transients
The concept of a transient area is part of The Mythical Man-Month's discussion of design and the use of main memory. To further reduce memory usage, the supervisor employed overlays called transients that were read into one of two reserved transient areas as required.
Physical transients were loaded into the 556 byte A-Transient area to handle hardware errors (ERPs), record error-specific data (OBR/MDR) on IJSYSRC, and issue error messages. All A-Transient module names began with $$A.
Logical transients were loaded into the 1200 byte B-Transient area to provide common program services like OPEN and CLOSE for LIOCS. All B-Transient module names began with $$B.
The use of $$A and $$B prefixes ensured rapid loading of transients because their names were stored first in the directory.
DOS/VS added Machine Check and Channel Check Handlers, which were another set of transients all starting with $$RAST and executing in the Recovery Transient area. This was done as part of the reliability, availability, and serviceability (RAS) enhancements for the System/370. Before this addition, machine checks caused termination of the program running and channel checks caused termination of the program accessing the device, at the time of the error.
Multiprogramming
Like OS/360, initial releases of DOS could run only one program at a time. Later versions of "real" DOS were able to run up to three programs concurrently, in separate memory partitions, supported by the same hardware memory protection features of the more scalable OS/360 operating system. These were identified as BG (background), F1 (foreground 1) and F2 (foreground 2). Multiprogramming was an optional feature of DOS/360, selectable at system generation. A later SYSGEN option allowed batch operation to run in either FG partition. Otherwise foreground programs had to be manually started by the computer operator.
DOS/VS allowed up to seven concurrent programs, although five or six was a more common number due to the smaller scale of the hardware usually hosting DOS systems. Both DOS and DOS/VS allow the number of partitions to be set at IPL (Initial Program Load), the IBM term for Boot load.
Program libraries
Executable programs were stored in a Core Image Library. While running, DOS could not reclaim space as programs were deleted or replaced with newer versions. When the Core Image Library became full, it had to be compressed by a utility program, and this could halt development work until it was complete. Many shops simply froze changes for a day, compressed the CIL "off-line", and IPLed with the new Core Image Library at the beginning of a business day. A relocatable library for linkable object programs and a source statement library for assembler macros and include text were also supported. Installations could define additional private relocatable and source statement libraries on other disk volumes.
Utilities
DOS/360 had a set of utility programs, an Assembler, and compilers for FORTRAN, COBOL and eventually PL/I, and it supported a range of file organizations with access methods to help in using them:
Sequential data sets were only read or written, one record block at a time from beginning to end.
In indexed (ISAM) files a specified section of each record was defined as a key which could be used to look up specific records.
In direct access (BDAM) files, the application program had to specify the physical location on the disk of the data it wanted to access. BDAM programming was not easy and most customers never used it themselves; but it was the fastest way to access data on disks and many software companies used it in their products, especially database management systems such as ADABAS, IDMS and IBM's DBOMP and DL/I.
Sequential and ISAM files could store either fixed-length or variable-length records, and all types could occupy more than one disk volume.
Telecommunications
DOS/360 offered Basic Telecommunications Access Method (BTAM) and Queued Telecommunications Access Method (QTAM). BTAM was primitive and hard to use by later standards, but it allowed communication with almost any type of terminal, which was a big advantage at a time when there was little standardization of communications protocols. The simplicity of its API also allowed the relatively easy interface of external communications processors, which facilitated DOS/360 machines becoming nodes in the multi-tier networks of large organizations. Conversely, QTAM users didn't need as much knowledge about individual devices because QTAM operated at the logical level using the OPEN/CLOSE/GET/PUT macros.
Job control
All DOS job control statements began with "//" in card columns one and two, except end-of-job, which was "/&", end-of-data, "/*", and comments, "* ".
The JOB statement indicates "the beginning of control information for a job." The format is // JOB <jobname> <comments>. <jobname> must be one to eight alphanumeric characters to identify the job. <comments> are ignored.
The EXEC statement identifies a program to be executed as a job step. "All control statements necessary for execution must be processed" before the statement is read. The format is // EXEC <program>
The PAUSE statement "can be used to allow for operator action between job steps." The format is // PAUSE <comment>. The comment is used to provide a message to the operator.
The * (comment) statement may be used to display a message to the operator. The format is * <comment>.
The end of data statement marks the end of data in the input stream. The format is /*. Any data on the statement following the blank is ignored.
The end of job statement marks the end of a job, and may indicate the end of data to be flushed if the job terminates abnormally. The format is /&. Any data on the statement following the blank is ignored.
The OPTION statement specifies values of system options that apply to this job. The format is // OPTION <option1>[,<option2>...].
The ASSGN statement "is used to assign a logical I/O unit to a physical device." The format is // ASSGN SYSxxx,<device>[,<tape option>]. SYSxxx indicates a logical unit such as SYS001 or SYSIPT. <device> is either "X'cuu'" to indicate a physical device (channel and unit), "IGN" for ignore, or "UA" for unassigned. <tape option> specifies either tape mode settings such as density, parity, etc., or "ALT" to indicate an alternate device.
The RESET statement resets specified I/O unit assignments to their permanent values. The format is // RESET <option>. <option> may be "SYS" to reset all system logical unit assignments, "PROG" to reset all programmer assignments, "ALL" to reset all assignments, or "SYSxxx" to reset the assignment for the logical unit "SYSxxx", for example SYS002.
The LISTIO statement instructs the system to print a listing of all specified I/O assignments currently in effect. The format is // LISTIO <option>. <option> is "SYS" to list all system assignments, "PROG", "F1", or "F2" to list all assignments for the background or specified foreground partition, "ALL", "SYSxxx", "X'cuu'", "UNITS" to list all assigned units, "UA" to list all unassigned units, or "DOWN" to list all units marked as inoperative.
The MTC statement issues a command to a magnetic tape unit. The format is // MTC <opcode>,SYSxxx[,<nn>]. <opcode> is a function such as "FSF" to forward space one file or "REW" to rewind the tape. <nn> is a number that can specify the number of times the operation is to be performed, such as forward space two files.
The VOL statement provides disk or tape volume label information for standard label checking. The format is // VOL SYSxxx,<volume>.
DOS originally provided the TPLAB statement for tape label information and the DLAB and XTENT statements for disk label and extent information. At least as early as 1968 the TPLAB statement had been replaced by TLBL and the DLAB statement by DLBL. These statements used numerous positional parameters and had fairly high information densities.
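Putting several of these statements together, a DOS job stream for a single job step might look like the following sketch. The job name, program name, and device address are hypothetical and chosen only for illustration; real decks depended on the installation's device configuration:

```
// JOB DAILYRPT
// ASSGN SYS004,X'181'
// EXEC RPTPROG
...data cards read by the program...
/*
/&
```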
Differences from OS/360
Job control language
DOS JCL was designed for parsing speed and simplicity; the resulting positional syntax was significantly more cryptic than OS/360's keyword-driven job control.
Spooling
Early DOS included no spooling sub-system to improve the efficiency of punched card and line printer I/O. By the late 1960s both IBM and aftermarket vendors began filling this void. IBM's spooler was an option called POWER, and Software Design, Inc., an independent software company, sold a spooler called GRASP.
Program loading
DOS/360 had no relocating loader, so programmers had to link edit a separate executable version of each program for each partition, or address space, in which the program was likely to be run. Alternatively assembler-language programs could be written as self-relocating, but that imposed additional complexity and a size penalty, albeit a small one. Large DOS shops with multiple machines and multiple partition layouts often wrote their own relocating loader to circumvent this issue.
Application programming interface
The DOS/360 application programming interface was incompatible with OS/360. High level language programs written for DOS needed to be compiled and linked before they could be used with OS/360. Minor differences between compilers of DOS as opposed to OS sometimes required modifications to programs. The port in the other direction however was more challenging. Since OS/360 had significantly more features supported in its API, any use of those features would have to be removed from programs being ported to DOS. This was less of a problem for programmers working in high level languages such as COBOL. Assembler programs, on the other hand, tended to utilize those very features more often and usually needed greater modification to run on DOS.
See also
Timeline of operating systems
Notes
References
External links
DOS manuals at Bitsavers.org
DOS/VS section at VintageBigBlue.org
Disk operating systems
IBM mainframe operating systems
Assembly language software |
64534250 | https://en.wikipedia.org/wiki/Privacy%20concerns%20with%20Facebook | Privacy concerns with Facebook | Facebook has faced a number of privacy concerns. These stem partly from the company’s revenue model that involves selling information about its users, and the loss of privacy this could entail. In addition, employers and other organizations and individuals have been known to use Facebook data for their own purposes. As a result, individuals' identities have sometimes been compromised without their permission. In response, pressure groups and governments have increasingly asserted the users’ right to privacy and to be able to control their personal data.
Widening exposure of member information 2011–2012
In 2010, the Electronic Frontier Foundation identified two personal information aggregation techniques called "connections" and "instant personalization". They demonstrated that anyone could get access to information saved to a Facebook profile, even if the information was not intended to be made public. A "connection" is created when a user clicks a "Like" button for a product or service, either on Facebook itself or an external site. Facebook treats such relationships as public information, and the user's identity may be displayed on the Facebook page of the product or service.
Instant Personalization was a pilot program that shared Facebook account information with affiliated sites, such as sharing a user's list of "liked" bands with a music website, so that when the user visits the site, their preferred music plays automatically. The EFF noted that "For users that have not opted out, Instant Personalization is instant data leakage. As soon as you visit the sites in the pilot program (Yelp, Pandora, and Microsoft Docs) the sites can access your name, your picture, your gender, your current location, your list of friends, all the Pages you have Liked—everything Facebook classifies as public information. Even if you opt-out of Instant Personalization, there's still data leakage if your friends use Instant Personalization websites—their activities can give away information about you, unless you block those applications individually."
On December 27, 2012, CBS News reported that Randi Zuckerberg, sister of Facebook founder Mark Zuckerberg, criticized a friend for being "way uncool" in sharing a private Facebook photo of her on Twitter, only to be told that the image had appeared on a friend-of-a-friend's Facebook news feed. Commenting on this misunderstanding of Facebook's privacy settings, Eva Galperin of the EFF said "Even Randi Zuckerberg can get it wrong. That's an illustration of how confusing they can be."
Issues during 2007
In August 2007, the code used to generate Facebook's home and search page as visitors browse the site was accidentally made public. A configuration problem on a Facebook server caused the PHP code to be displayed instead of the web page the code should have created, raising concerns about how secure private data on the site was. A visitor to the site copied, published and later removed the code from his web forum, claiming he had been served and threatened with legal notice by Facebook. Facebook's response was quoted by the site that broke the story:
In November, Facebook launched Beacon, a system (discontinued in September 2009) where third-party websites could include a script by Facebook on their sites, and use it to send information about the actions of Facebook users on their site to Facebook, prompting serious privacy concerns. Information such as purchases made and games played were published in the user's news feed. An informative notice about this action appeared on the third party site and allowed the user to cancel it. The user could also cancel it on Facebook. Originally if no action was taken, the information was automatically published. On November 29 this was changed to require confirmation from the user before publishing each story gathered by Beacon.
On December 1, Facebook's credibility in regard to the Beacon program was further tested when it was reported that The New York Times "essentially accuses" Mark Zuckerberg of lying to the paper and leaving Coca-Cola, which is reversing course on the program, with a similar impression. A security engineer at CA, Inc. also claimed in a November 29, 2007, blog post that Facebook collected data from affiliate sites even when the consumer opted out and even when not logged into the Facebook site. On November 30, 2007, the CA security blog posted a Facebook clarification statement addressing the use of data collected in the Beacon program:
The Beacon service ended in September 2009 along with the settlement of a class-action lawsuit against Facebook resulting from the service.
News Feed and Mini-Feed
On September 5, 2006, Facebook introduced two new features called "News Feed" and "Mini-Feed". The first of the new features, News Feed, appears on every Facebook member's home page, displaying recent Facebook activities of the member's friends. The second feature, Mini-Feed, keeps a log of similar events on each member's profile page. Members can manually delete items from their Mini-Feeds if they wish to do so, and through privacy settings can control what is actually published in their respective Mini-Feeds.
Some Facebook members still feel that the ability to opt out of the entire News Feed and Mini-Feed system is necessary, as evidenced by a statement from the Students Against Facebook News Feed group, which peaked at over 740,000 members in 2006. Reacting to users' concerns, Facebook developed new privacy features to give users some control over information about them that was broadcast by the News Feed. According to subsequent news articles, members have widely regarded the additional privacy options as an acceptable compromise.
In May 2010, Facebook added privacy controls and streamlined its privacy settings, giving users more ways to manage status updates and other information broadcast to the public News Feed. Among the new privacy settings is the ability to control who sees each new status update a user posts: Everyone, Friends of Friends, or Friends Only. Users can now hide each status update from specific people as well. However, a user who presses "like" or comments on the photo or status update of a friend cannot prevent that action from appearing in the news feeds of all the user's friends, even non-mutual ones. The "View As" option, used to show a user how privacy controls filter out what a specific given friend can see, only displays the user's timeline and gives no indication that items missing from the timeline may still be showing up in the friend's own news feed.
Cooperation with government requests
Government and local authorities rely on Facebook and other social networks to investigate crimes and obtain evidence to help establish a crime, provide location information, establish motives, prove and disprove alibis, and reveal communications. Federal, state, and local investigations have not been restricted to profiles that are publicly available or willingly provided to the government; Facebook has willingly provided information in response to government subpoenas or requests, except with regard to private, unopened inbox messages less than 181 days old, which would require a warrant and a finding of probable cause under federal law under Electronic Communications Privacy Act (ECPA). One 2011 article noted that "even when the government lacks reasonable suspicion of criminal activity and the user opts for the strictest privacy controls, Facebook users still cannot expect federal law to stop their 'private' content and communications from being used against them".
Facebook's privacy policy states that "We may also share information when we have a good faith belief it is necessary to prevent fraud or other illegal activity, to prevent imminent bodily harm, or to protect ourselves and you from people violating our Statement of Rights and Responsibilities. This may include sharing information with other companies, lawyers, courts or other government entities". Since the U.S. Congress has failed to meaningfully amend the ECPA to protect most communications on social-networking sites such as Facebook, and since the U.S. Supreme Court has largely refused to recognize a Fourth Amendment privacy right to information shared with a third party, no federal statutory or constitutional right prevents the government from issuing requests that amount to fishing expeditions and there is no Facebook privacy policy that forbids the company from handing over private user information that suggests any illegal activity.
The 2013 mass surveillance disclosures identified Facebook as a participant in the U.S. National Security Agency's PRISM program. Facebook now reports the number of requests it receives for user information from governments around the world.
Complaint from CIPPIC
On May 31, 2008, the Canadian Internet Policy and Public Interest Clinic (CIPPIC), per Director Phillipa Lawson, filed a 35-page complaint with the Office of the Privacy Commissioner against Facebook based on 22 breaches of the Canadian Personal Information Protection and Electronic Documents Act (PIPEDA). University of Ottawa law students Lisa Feinberg, Harley Finkelstein, and Jordan Eric Plener, initiated the "minefield of privacy invasion" suit. Facebook's Chris Kelly contradicted the claims, saying that: "We've reviewed the complaint and found it has serious factual errors—most notably its neglect of the fact that almost all Facebook data is willingly shared by users." Assistant Privacy Commissioner Elizabeth Denham released a report of her findings on July 16, 2009. In it, she found that several of CIPPIC's complaints were well-founded. Facebook agreed to comply with some, but not all, of her recommendations. The Assistant Commissioner found that Facebook did not do enough to ensure users granted meaningful consent for the disclosure of personal information to third parties and did not place adequate safeguards to prevent unauthorized access by third party developers to personal information.
Data mining
There have been some concerns expressed regarding the use of Facebook as a means of surveillance and data mining.
Two Massachusetts Institute of Technology (MIT) students used an automated script to download the publicly posted information of over 70,000 Facebook profiles from four schools (MIT, NYU, the University of Oklahoma, and Harvard University) as part of a research project on Facebook privacy published on December 14, 2005. Since then, Facebook has bolstered security protection for users, responding: "We've built numerous defenses to combat phishing and malware, including complex automated systems that work behind the scenes to detect and flag Facebook accounts that are likely to be compromised (based on anomalous activity like lots of messages sent in a short period of time, or messages with links that are known to be bad)."
A second clause that brought criticism from some users allowed Facebook the right to sell users' data to private companies, stating "We may share your information with third parties, including responsible companies with which we have a relationship." This concern was addressed by spokesman Chris Hughes, who said, "Simply put, we have never provided our users' information to third party companies, nor do we intend to." Facebook eventually removed this clause from its privacy policy.
In the United Kingdom, the Trades Union Congress (TUC) has encouraged employers to allow their staff to access Facebook and other social-networking sites from work, provided they proceed with caution.
In September 2007, Facebook drew criticism after it began allowing search engines to index profile pages, though Facebook's privacy settings allow users to turn this off.
Concerns were also raised on the BBC's Watchdog program in October 2007 when Facebook was shown to be an easy way to collect an individual's personal information to facilitate identity theft. However, little personal information is presented to non-friends: if users leave the privacy controls on their default settings, the only personal information visible to a non-friend is the user's name, gender, profile picture and networks.
An article in The New York Times in February 2008 pointed out that Facebook does not actually provide a mechanism for users to close their accounts, and raised the concern that private user data would remain indefinitely on Facebook's servers. Facebook has since given users the options to deactivate or delete their accounts.
Deactivating an account allows it to be restored later, while deleting it will remove the account "permanently", although some data submitted by that account ("like posting to a group or sending someone a message") will remain.
Onavo and Facebook Research
In 2013, Facebook acquired Onavo, a developer of mobile utility apps such as Onavo Protect VPN, which is used as part of an "Insights" platform to gauge the use and market share of apps. This data has since been used to influence acquisitions and other business decisions regarding Facebook products. Criticism of this practice emerged in 2018, when Facebook began to advertise the Onavo Protect VPN within its main app on iOS devices in the United States. Media outlets considered the app to effectively be spyware due to its behavior, adding that the app's listings did not readily disclaim Facebook's ownership of the app and its data collection practices. Facebook subsequently pulled the iOS version of the app, citing new iOS App Store policies forbidding apps from performing analytics on the usage of other apps on a user's device.
Since 2016, Facebook has also run "Project Atlas"—publicly known as "Facebook Research"—a market research program inviting teenagers and young adults between the ages of 13 and 35 to have data such as their app usage, web browsing history, web search history, location history, personal messages, photos, videos, emails, and Amazon order history analyzed by Facebook. Participants would receive up to $20 per month for participating in the program. Facebook Research is administered by third-party beta testing services, including Applause, and requires users to install a Facebook root certificate on their phone. After a January 2019 report by TechCrunch on Project Atlas, which alleged that Facebook bypassed the App Store by using an Apple enterprise program for apps used internally by a company's employees, Facebook refuted the article but later announced its discontinuation of the program on iOS.
On January 30, 2019, Apple temporarily revoked Facebook's Enterprise Developer Program certificates for one day, which caused all of the company's internal iOS apps to become inoperable. Apple stated that "Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple", and that the certificates were revoked "to protect our users and their data". US Senators Mark Warner, Richard Blumenthal, and Ed Markey separately criticized Facebook Research's targeting of teenagers, and promised to sponsor legislation to regulate market research programs.
Inability to voluntarily terminate accounts
Facebook had allowed users to deactivate their accounts but not actually remove account content from its servers. A Facebook representative explained to a student from the University of British Columbia that users had to clear their own accounts by manually deleting all of the content including wall posts, friends, and groups. The New York Times noted the issue and raised a concern that emails and other private user data remain indefinitely on Facebook's servers. Facebook subsequently began allowing users to permanently delete their accounts in 2010. Facebook's Privacy Policy now states, "When you delete an account, it is permanently deleted from Facebook."
Memorials
A notable ancillary effect of social-networking websites is the ability for participants to mourn publicly for a deceased individual. On Facebook, friends often leave messages of sadness, grief, or hope on the individual's page, transforming it into a public book of condolences. This particular phenomenon has been documented at a number of schools. Facebook originally held a policy that profiles of people known to be deceased would be removed after 30 days due to privacy concerns. Due to user response, Facebook changed its policy to place deceased members' profiles in a "memorialization state". Facebook's Privacy Policy regarding memorialization says, "If we are notified that a user is deceased, we may memorialize the user's account. In such cases we restrict profile access to confirmed friends and allow friends and family to write on the user's Wall in remembrance. We may close an account if we receive a formal request from the user's next of kin or other proper legal request to do so."
Some of these memorial groups have also caused legal issues. Notably, on January 1, 2008, one such memorial group posted the identity of murdered Toronto teenager Stefanie Rengel, whose family had not yet given the Toronto Police Service their consent to release her name to the media, and the identities of her accused killers, in defiance of Canada's Youth Criminal Justice Act, which prohibits publishing the names of the under-age accused. While police and Facebook staff attempted to comply with the privacy regulations by deleting such posts, they noted difficulty in effectively policing the individual users who repeatedly republished the deleted information.
Customization and security
In July 2007, Adrienne Felt, an undergraduate student at the University of Virginia, discovered a cross-site scripting (XSS) hole in the Facebook Platform that could inject JavaScript into profiles. She used the hole to import custom CSS and demonstrate how the platform could be used to violate privacy rules or create a worm.
Inadequate privacy controls
Facebook offers privacy controls in order to allow users to choose who can view their posts: only friends, friends and friends of friends, everyone, custom (specific choice of which friends can see posts). While these options exist, there are still methods by which otherwise unauthorized third parties can view a post. For example, posting a picture and marking it as only viewable by friends, but tagging someone else as appearing in that picture, causes the post to be viewable by friends of the tagged person(s).
Photos taken of people by others can be posted on Facebook without the knowledge or consent of the people appearing in the image; a person may appear in multiple photos on Facebook without being aware of it. A study has suggested that an unflattering photo of a person posted online can have a more harmful effect than a lost password.
When commenting on a private post, the commenting user is not informed if the post they commented on is later made public – which would make their comment on said post also publicly viewable.
Quit Facebook Day
Quit Facebook Day was an online event which took place on May 31, 2010 (coinciding with Memorial Day), in which Facebook users stated that they would quit the social network due to privacy concerns.
It was estimated that 2% of Facebook users coming from the United States would delete their accounts.
However, only about 33,000 users (roughly 0.0066% of its roughly 500 million members at the time) quit the site. The number one reason for users to quit Facebook was privacy concerns (48%), followed by a general dissatisfaction with Facebook (14%), negative aspects regarding Facebook friends (13%), and the feeling of getting addicted to Facebook (6%). Facebook quitters were found to be more concerned about privacy, more addicted to the Internet, and more conscientious.
Photo recognition and face tagging
Facebook enabled an automatic facial recognition feature in June 2011, called "Tag Suggestions", a product of a research project named "DeepFace". The feature compares newly uploaded photographs to those of the uploader's Facebook friends, to suggest photo tags.
National Journal Daily claims "Facebook is facing new scrutiny over its decision to automatically turn on a new facial recognition feature aimed at helping users identify their friends in photos". Facebook has defended the feature, saying users can disable it. Facebook introduced the feature on an opt-out basis. European Union data-protection regulators said they would investigate the feature to see if it violated privacy rules.
Naomi Lachance stated in a web blog for NPR, All Tech Considered, that Facebook's facial recognition is right 98% of the time compared to the FBI's 85% out of 50 people. However, the accuracy of Facebook searches is due to its larger, more diverse photo selection compared to the FBI's closed database.
Mark Zuckerberg showed no worries when speaking about Facebook's AIs, saying, "Unsupervised learning is a long-term focus of our AI research team at Facebook, and it remains an important challenge for the whole AI research community" and "It will save lives by diagnosing diseases and driving us around more safely. It will enable breakthroughs by helping us find new planets and understand Earth's climate. It will help in areas we haven't even thought of today".
Investigation by the Irish Data Protection Commissioner, 2011–2012
In August 2011, the Irish Data Protection Commissioner (DPC) started an investigation after receiving 22 complaints by europe-v-facebook.org, which was founded by a group of Austrian students. The DPC stated in first reactions that the Irish DPC is legally responsible for privacy on Facebook for all users within the European Union and that he will "investigate the complaints using his full legal powers if necessary". The complaints were filed in Ireland because all users who are not residents of the United States or Canada have a contract with "Facebook Ireland Ltd", located in Dublin, Ireland. Under European law Facebook Ireland is the "data controller" for facebook.com, and therefore, facebook.com is governed by European data protection laws. Facebook Ireland Ltd. was established by Facebook Inc. to avoid US taxes (see Double Irish arrangement).
The group 'europe-v-facebook.org' made access requests at Facebook Ireland and received up to 1,222 pages of data per person in 57 data categories that Facebook was holding about them, including data that was previously removed by the users. The group claimed that Facebook failed to provide some of the requested data, including "likes", facial recognition data, data about third party websites that use "social plugins" visited by users, and information about uploaded videos. Currently the group claims that Facebook holds at least 84 data categories about every user.
The first 16 complaints target different problems, from undeleted old "pokes" all the way to the question of whether sharing and new functions on Facebook should be opt-in or opt-out. The second wave of 6 more complaints targeted further issues, including one against the "Like" button. The most severe may be a complaint claiming that the privacy policy, and the consent to the privacy policy, is void under European laws.
In an interview with the Irish Independent, a spokesperson said that the DPC will "go and audit Facebook, go into the premises and go through in great detail every aspect of security". He continued by saying: "It's a very significant, detailed and intense undertaking that will stretch over four or five days." In December 2011 the DPC published its first report on Facebook. This report was not legally binding but suggested changes that Facebook should undertake by July 2012. The DPC planned to review Facebook's progress in July 2012.
Changes
In spring 2012, Facebook had to undertake many changes (e.g., an extended download tool intended to allow users to exercise the European right to access all stored information, and an update of the worldwide privacy policy). Europe-v-facebook.org saw these changes as insufficient to comply with European law; the download tool does not, for example, allow access to all data. The group launched our-policy.org to suggest improvements to the new policy, which they saw as a setback for privacy on Facebook. Since the group managed to get more than 7,000 comments on Facebook's pages, Facebook had to hold a worldwide vote on the proposed changes. Such a vote would only have been binding if 30% of all users had taken part. Facebook did not promote the vote, resulting in only 0.038% participation, with about 87% voting against Facebook's new policy. The new privacy policy took effect on the same day.
Tracking of non-members of Facebook
An article published by USA Today in November 2011 claimed that Facebook creates logs of pages visited both by its members and by non-members, relying on tracking cookies to keep track of pages visited.
In early November 2015, Facebook was ordered by the Belgian Privacy Commissioner to cease tracking non-users, citing European laws, or risk fines of up to £250,000 per day. As a result, instead of removing tracking cookies, Facebook banned non-users in Belgium from seeing any material on Facebook, including publicly posted content, unless they sign in. Facebook criticized the ruling, saying that the cookies provided better security.
Stalking
According to statistics, 63% of Facebook profiles are automatically set "visible to the public", meaning anyone can access the profiles that users have updated. Facebook also has its own built-in messaging system through which people can send messages to any other user, unless the recipient has restricted messages to "friends only". Stalking is not limited to SNS stalking, but can lead to further "in-person" stalking, because nearly 25% of real-life stalking victims reported that it started with online instant messaging (e.g., Facebook chat).
Performative surveillance
Performative surveillance is the notion that people are very much aware that they are being surveilled on websites like Facebook, and use the surveillance as an opportunity to portray themselves in a way that connotes a certain lifestyle, one which may or may not match how they are perceived in reality.
2010 application privacy breach
In 2010, the Wall Street Journal found that many of Facebook's top-rated apps—including apps from Zynga and Lolapps—were transmitting identifying information to "dozens of advertising and Internet tracking companies" like RapLeaf. The apps used an HTTP referer that exposed the user's identity and sometimes their friends' identities. Facebook said that "While knowledge of user ID does not permit access to anyone’s private information on Facebook, we plan to introduce new technical systems that will dramatically limit the sharing of User ID’s". A blog post by a member of Facebook's team further stated that "press reports have exaggerated the implications of sharing a user ID", though still acknowledging that some of the apps were passing the ID in a manner that violated Facebook's policies.
2010 user list
In 2010, Canadian security consultant Ron Bowes of Skull Security created a BitTorrent download consisting of the names of about 100 million Facebook users. Facebook likened the information to what is listed in a phone book. It included some who had opted not to be found by search engines, and some who did not realize their information was public. Bowes created the list to get statistical information about user names, which can be used in both penetration testing and computer break-ins.
AT&T routing glitch
In 2009 and 2010, because Facebook did not require connections to use HTTPS other than at login, a routing glitch at AT&T caused cookies to end up on the wrong users' phones. This resulted in some Facebook users having continuous access to another person's account instead of their own.
Facebook and Cambridge Analytica data scandal
In 2018, Facebook admitted that an app made by Global Science Research and Alexandr Kogan, related to Cambridge Analytica, was able in 2014 to harvest personal data of up to 87 million Facebook users without their consent, by exploiting their friendship connection to the users who sold their data via the app. Following the revelations of the breach, several public figures, including industrialist Elon Musk and WhatsApp cofounder Brian Acton, announced that they were deleting their Facebook accounts, using the hashtag "#deletefacebook".
Facebook was also criticized for allowing the 2012 Barack Obama presidential campaign to analyze and target select users by providing the campaign with friendship connections of users who signed up for an application. However, users signing up for the application were aware that their data, but not the data of their friends, was going to a political party.
Employer-employee privacy issues
In an effort to surveil the personal lives of current or prospective employees, some employers have asked employees to disclose their Facebook login information. This has resulted in the passing of a bill in New Jersey making it illegal for employers to ask potential or current employees for access to their Facebook accounts. Although the U.S. government has yet to pass a national law protecting prospective employees and their social networking sites from employers, the Fourth Amendment of the U.S. Constitution can protect prospective employees in specific situations. Many companies examine the Facebook profiles of job candidates looking for reasons not to hire them. Because of this, many employees feel that their online social media rights and privacy are being violated. In addition, employees have begun to make performative profiles in which they purposefully portray themselves as professional and as having desired personality traits. According to a survey of hiring managers by CareerBuilder.com, the most common deal breakers they found on Facebook profiles include references to drinking, poor communication skills, inappropriate photos, and lying about skills and/or qualifications.
Facebook requires employees and contractors working for them to give permission for Facebook to access their personal profiles, including friend requests and personal messages.
Users violating minimum age requirements
A 2011 study in the online journal First Monday examines how parents consistently enable children as young as 10 years old to sign up for accounts, directly violating Facebook's policy banning young visitors. This policy is in compliance with a United States law, the 1998 Children's Online Privacy Protection Act, which requires minors aged under 13 to gain explicit parental consent to access commercial websites. In jurisdictions where a similar law sets a lower minimum age, Facebook enforces the lower age. Of the 1,007 households surveyed for the study, 76% of parents reported that their child joined Facebook at an age younger than 13, the minimum age in the site's terms of service. The study also reported that Facebook removes roughly 20,000 users each day for violating its minimum age policy. The study's authors also note, "Indeed, Facebook takes various measures both to restrict access to children and delete their accounts if they join." The findings of the study raise questions primarily about the shortcomings of United States federal law, but also implicitly continue to raise questions about whether or not Facebook does enough to publicize its terms of service with respect to minors. Only 53% of parents said they were aware that Facebook has a minimum signup age; 35% of these parents believe that the minimum age is merely a recommendation or thought the signup age was 16 or 18, not 13.
Student-related issues
Student privacy concerns
Students who post illegal or otherwise inappropriate material have faced disciplinary action from their universities, colleges, and schools, including expulsion. Others posting libelous content relating to faculty have also faced disciplinary action. The Journal of Education for Business states that "a recent study of 200 Facebook profiles found that 42% had comments regarding alcohol, 53% had photos involving alcohol use, 20% had comments regarding sexual activities, 25% had seminude or sexually provocative photos, and 50% included the use of profanity." It is inferred that negative or incriminating Facebook posts can affect how alumni and potential employers perceive students. This perception can greatly impact students' relationships, their ability to gain employment, and their ability to maintain school enrollment. The desire for social acceptance leads individuals to share the most intimate details of their personal lives, along with illicit drug use and binge drinking. Too often, these portrayals of their daily lives are exaggerated and/or embellished to attract other like-minded people.
Effect on class engagement
Students generally show higher engagement when Facebook groups are used in class, as students can comment on each other's short writings or videos. However, it tends to keep student writing shorter, since checking spelling and typing on a phone keyboard is relatively time-consuming.
Effect on higher education
On January 23, 2006, The Chronicle of Higher Education continued an ongoing national debate on social networks with an opinion piece written by Michael Bugeja, director of the Journalism school at Iowa State University, entitled "Facing the Facebook". Bugeja, author of the Oxford University Press text Interpersonal Divide (2005), quoted representatives of the American Association of University Professors and colleagues in higher education to document the distraction of students using Facebook and other social networks during class and at other venues in the wireless campus. Bugeja followed up on January 26, 2007 in The Chronicle with an article titled "Distractions in the Wireless Classroom", quoting several educators across the country who were banning laptops in the classroom. Similarly, organizations such as the National Association for Campus Activities, the Association for Education in Journalism and Mass Communication, and others have hosted seminars and presentations to discuss ramifications of students' use of Facebook and other social-networking sites.
The EDUCAUSE Learning Initiative has also released a brief pamphlet entitled "7 Things You Should Know About Facebook" aimed at higher education professionals that "describes what [Facebook] is, where it is going, and why it matters to teaching and learning".
Some research on Facebook in higher education suggests that there may be some small educational benefits associated with student Facebook use, including improving engagement which is related to student retention. 2012 research has found that time spent on Facebook is related to involvement in campus activities. This same study found that certain Facebook activities like commenting and creating or RSVPing to events were positively related to student engagement while playing games and checking up on friends was negatively related. Furthermore, using technologies such as Facebook to connect with others can help college students be less depressed and cope with feelings of loneliness and homesickness.
Effect on college student grades
As of February 2012, only four published peer-reviewed studies have examined the relationship between Facebook use and grades. The findings vary considerably. Pasek et al. (2009) found no relationship between Facebook use and grades. Kolek and Saunders (2008) found no differences in overall grade point average (GPA) between users and non-users of Facebook. Kirschner and Karpinski (2010) found that Facebook users reported a lower mean GPA than non-users. Junco's (2012) study clarifies the discrepancies in these findings. While Junco (2012) found a negative relationship between time spent on Facebook and student GPA in his large sample of college students, the real-world impact of the relationship was negligible. Furthermore, Junco (2012) found that sharing links and checking up on friends were positively related to GPA while posting status updates was negatively related. In addition to noting the differences in how Facebook use was measured among the four studies, Junco (2012) concludes that the ways in which students use Facebook are more important in predicting academic outcomes.
Phishing
Phishing refers to a scam used by criminals to trick people into revealing passwords, credit card information, and other sensitive information. On Facebook, phishing attempts occur through message or wall posts from a friend's account that was breached. If the user takes the bait, the phishers gain access to the user's Facebook account and send phishing messages to the user's other friends. The point of the post is to get the users to visit a website with viruses and malware.
Unpublished photo disclosure bug
In September 2018, a software bug meant that photos that had been uploaded to Facebook accounts, but that had not been "published" (and which therefore should have remained private between the user and Facebook), were exposed to app developers. Approximately 6.8 million users and 1500 third-party apps were affected.
Sharing private messages and contacts' details without consent
In December 2018, it emerged that Facebook had, during the period 2010–2018, granted access to users' private messages, address book contents, and private posts, without the users' consent, to more than 150 third parties including Microsoft, Amazon, Yahoo, Netflix, and Spotify. This had been occurring despite public statements from Facebook that it had stopped such sharing years earlier.
Denial of location privacy, regardless of user settings
In December 2018, it emerged that Facebook's mobile app reveals the user's location to Facebook, even if the user does not use the "check in" feature and has configured all relevant settings within the app so as to maximize location privacy.
E-commerce and drop shipping scams
In April 2016, Buzzfeed published an article exposing drop shippers who were using Facebook and Instagram to swindle unsuspecting customers. Located mostly in China, these drop shippers and e-commerce sites would steal copyrighted images from larger retailers and influencers to gain credibility. After luring a customer with a low price for the item, they would then deliver a product that is nothing like what was advertised or deliver no product at all.
Health data from apps sent to Facebook without user consent
In February 2019, it emerged that a number of Facebook apps, including Flo, had been sending users' health data such as blood pressure and ovulation status to Facebook without users' informed consent. New York governor Andrew Cuomo called the practice an "outrageous abuse of privacy", ordered New York's department of state and department of financial services to investigate, and encouraged federal regulators to step in.
International lobbying against privacy protections
In early 2019, it was reported that Facebook had spent years lobbying extensively against privacy protection laws around the world, such as the General Data Protection Regulation (GDPR).
The lobbying included efforts by Sandberg to "bond" with female European officials including Enda Kenny (then Prime Minister of Ireland, where Facebook's European operations are based), to influence them in Facebook's favor. Other politicians reportedly lobbied by Facebook in relation to privacy protection laws included George Osborne (then Chancellor of the Exchequer), Pranab Mukherjee (then President of India), and Michel Barnier.
In 2021 Facebook attempted to use "a legal trick" to bypass GDPR regulations in the European Union by including the personal data processing agreement in what it considered to be a "contract" (Article 6(1)(b) GDPR) rather than a "consent" (Article 6(1)(a) GDPR), which would lead to the user effectively granting Facebook a very broad permission to process their personal data with most of the GDPR controls void. The Irish Data Protection Commission (DPC) expressed its preliminary approval for this bypass and sent its draft decision to other data protection authorities in the European Union, at which point the document was leaked to the media and published on noyb.eu. The DPC sent a takedown notice to noyb.eu, which the portal, refusing to self-censor, also published.
Unencrypted password storage
In March 2019, Facebook admitted that it had mistakenly stored "hundreds of millions" of passwords of Facebook and Instagram users in plaintext (as opposed to being hashed and salted) on multiple internal systems accessible only to Facebook engineers, dating as far back as 2012. Facebook stated that affected users would be notified, but that there was no evidence that this data had been abused or leaked.
In April 2019, Facebook admitted that its subsidiary Instagram also stored millions of unencrypted passwords.
Promotion of service as "free"
In December 2019, the Hungarian Competition Authority fined Facebook around US$4 million for false advertising, ruling that Facebook cannot market itself as a "free" (no cost) service because the use of detailed personal information to deliver targeted advertising constituted a compensation that must be provided to Facebook to use the service.
Targeted advertising based on eavesdropping
It has been highly publicised that users feel the app listens to private conversations without their consent, in order to serve up customized advertisements. Many users report being shown advertisements for products which they have only spoken about but have not searched for, "liked" on Facebook, or even ever purchased, leading to the belief that Facebook listens to conversations.
Facebook has denied for years that it listens to conversations and serves ads based on them; however, Facebook has been shown to have lied about its policies in the past. In 2016, Facebook stated, "Facebook does not use your phone's microphone to inform ads or to change what you see in News Feed." A spokeswoman said, "Some recent articles have suggested that we must be listening to people's conversations in order to show them relevant ads. This is not true. We show ads based on people's interests and other profile information, not what you're talking out loud about."
Oculus antitrust investigation
Oculus is a trademark of Facebook Technologies, LLC (formerly identified as Oculus VR, LLC) which produces virtual reality headsets. In March 2014, Facebook Inc. purchased Oculus for 2.3 billion US dollars.
On October 13, 2020, Oculus released its new model, the Oculus Quest 2, which requires linking to a Facebook account. This is a problem not only at the individual level but also at the governmental level. In Germany, the Federal Cartel Office has launched an investigation into whether the requirement breaches competition law, being concerned about how Facebook may be abusing its market position. Given Facebook's powerful position in social media and its huge influence on virtual reality devices, requiring a link to a personal social media account may be anti-competitive behavior. As the outcome of the investigation is not yet known, Oculus and Facebook are awaiting a hearing in the Düsseldorf Higher Regional Court on March 26, 2021. In the summer of 2020, it was announced that the latest Oculus model must be linked to a specific person's Facebook account. By January 1, 2023, users of the older Oculus model must also link it to a Facebook account, or the device will no longer provide a full-fledged experience: new games will no longer be available and existing games may no longer work. A personal Facebook account is required for full functionality, and Facebook accounts that do not use the user's real name and correct date of birth may result in a ban on access to the Oculus headset. All this leads to a situation where a person who does not have a Facebook account has to create one, and a fake social media account does not count. People thus need to be willing to give Facebook their data for the sake of their gaming habits. In addition to the information available on social media, this allows Facebook to analyze even more sensitive data, such as a person's biometric responses to VR games and the entertainment being viewed.
Scraping of contact information
Personal information of 533 million Facebook users, including names, phone numbers, email addresses, and other user profile data, was posted to a hacking forum in April 2021. This information had been previously leaked through a feature allowing users to find each other by phone number, which Facebook fixed to prevent this abuse in September 2019. The company decided not to notify users of the data breach.
The Irish Data Protection Commission, which has jurisdiction over Facebook due to the location of its EU headquarters, then opened an investigation into the breach as a possible violation of the EU's General Data Protection Regulation.
References
External links
Internet privacy
Facebook
Privacy controversies and disputes |
886363 | https://en.wikipedia.org/wiki/Taylor%20University | Taylor University | Taylor University is a private, interdenominational, evangelical Christian university in Upland, Indiana. Founded in 1846, it is one of the oldest evangelical Christian universities in the country.
The university is named after Bishop William Taylor (1821–1902). The university's campus is on the south side of Upland. It also preserves an arboretum and additional undeveloped land northeast of campus containing further arboretum space.
Taylor University has 1,910 undergraduate students, 108 graduate students, and 524 distance learning students. The student body hails from 46 states and 43 foreign countries, with 35 percent from Indiana. Taylor is a member of NAIA with 15 men's and women's sports teams. The university is accredited by the Higher Learning Commission and is a member of the Council for Christian Colleges and Universities and the Christian College Consortium.
In August 2021, Dr. Michael Lindsay became the university's president.
History
Founding
In 1846, Taylor University was originally established as Fort Wayne Female College in Fort Wayne, Indiana. In the first full year of the school, about 100 women were enrolled, paying $22.50 per year. During this time, it was common for women to obtain an M.E.L. degree, the Mistress of English Literature. Fort Wayne Female College was founded by the Methodist Church as an all-female school.
In 1850, Fort Wayne Female College started admitting men coeducationally and changed its name to Fort Wayne College.
In 1890, Fort Wayne College acquired the former facilities of nearby Fort Wayne Medical College that were vacated after Fort Wayne Medical College's merger with Indiana Asbury College, another Methodist-affiliated college. Upon completing this acquisition, Fort Wayne College changed its name to Taylor University in honor of Bishop William Taylor. The original Taylor University campus was on College Street in Fort Wayne.
Move to Upland
In 1882, a guest-preaching engagement in the Upland Methodist Church afforded Fort Wayne College (soon to be renamed Taylor University) president Thaddeus Reade the chance to meet the minister of Upland Methodist Church, Rev. John C. White. Because the school was having financial difficulties at its location in Fort Wayne, White and Upland citizen J.W. Pittinger worked to bring the school to Upland.
In the spring of 1893, White negotiated an agreement between the Taylor trustees and the Upland Land Company, whereby the university agreed to move to Upland, Indiana, and the company agreed to provide Taylor with $10,000 in cash and a parcel of land.
In the summer of 1893, Taylor University relocated to Upland. White was able to find the resources to support Taylor University because of the recent discovery of large deposits of natural gas in the area.
In 1915, Taylor paid seven thousand dollars to purchase more land from Charles H. and Bertha Snyder.
In the early 1920s, the university added more land to its present location when the Lewis Jones farm was purchased. After 1922, Taylor University was no longer formally affiliated with Methodism.
Summit Christian College and Fort Wayne
In 1904–1905, the Bible Training School was founded; it later became Summit Christian College of Fort Wayne, Indiana.
In 1992, ninety-nine years after moving to Upland, Taylor University acquired Summit Christian College. Summit Christian College was previously named Fort Wayne Bible College (from 1950 to 1989) and Fort Wayne Bible Institute (from its establishment in 1904 as Bible Training School, to 1989). Prior to acquisition by Taylor University, Summit Christian College was affiliated with the Missionary Church.
With the urban setting of Fort Wayne, Indiana, this campus's academic programs tended to be more vocational and its student body more non-traditional. Reflecting this, of TUFW's 1,040-member student body, approximately 224 students lived on campus, with the rest commuting or taking courses online. Popular majors included Professional Writing, Biblical Studies, Christian Ministries, Education, English, and Business.
The Taylor University Fort Wayne Falcons participated in the United States Collegiate Athletic Association. The school offered basketball for men and women, soccer for men and women (2008–2009 was the first year for the women's program), and women's volleyball.
On October 13, 2008, the university announced plans to discontinue traditional undergraduate programs on the Fort Wayne Campus. Programs that remained after the closure or were transitioned to the Upland campus include the MBA program, the online program, and the radio station, WBCL.
2006 Van accident
On April 26, 2006, Taylor received national attention when a university van was involved in a fatal accident outside Marion, Indiana, while traveling between the Fort Wayne and Upland campuses. The accident happened when a northbound semi-trailer truck driver fell asleep at the wheel, crossed the median and struck the southbound passenger van on I-69. Four students and one staff member were killed, and three staff members and one student were injured. The accident occurred two days before former university president Eugene Habecker's inauguration ceremony. The truck driver was convicted of reckless, involuntary manslaughter and received a four-year prison sentence.
The Grant County coroner and Taylor officials failed to positively identify all the victims. The incident made international headlines when there was a case of mistaken identity between two of the victims. Senior Laura Van Ryn, who died on the scene, was mistaken for surviving freshman Whitney Cerak. A funeral was conducted with a closed casket for Whitney Cerak, and the mistake was not discovered until Cerak identified herself after waking up from a coma over a month later.
In May 2009, Cerak graduated from Taylor, and the two families remain close.
On April 26, 2008, the second anniversary of the accident, the university dedicated the $2.4 million Memorial Prayer Chapel as a memorial to the victims: students Laurel Erb, Brad Larson, Betsy Smith and Laura Van Ryn, along with Taylor employee Monica Felver. As a result of this incident, Indiana changed its procedure for identifying victims involved in accidents.
Vision 2016
Upon his inauguration, President Eugene Habecker unveiled his Taylor University Vision 2016 plan for the university. The initiative involved the creation of several centers of excellence on campus. The Center for Teaching and Learning Excellence was established and endowed, and the center for Scripture engagement was partially endowed. Centers for Missions Computing, Ethics, C.S. Lewis and Faith, Film, and Media were in the process of being created. Programs were created in Ireland and Ecuador. The initiative also involved the construction of several buildings around campus:
2008 - The Prayer Chapel, built in memory of those killed in the 2006 van crash, was completed;
2008 - Campbell Hall, an off-campus university apartment complex, was completed;
2012 - The Euler Science Complex, an addition to the Nussbaum science complex, was completed;
2012 - Wolgemuth Hall, an off-campus university apartment complex, was completed;
2013 - Breuinger Hall, a residence hall connected to Gerig Hall, was completed;
2016 - The LaRita Boren Campus Center, a replacement for the old student union, was completed; and
2016 - Upgrades to athletic facilities, landscaping, and other buildings were also undertaken.
Res Publica controversy
In 2018, several professors who believed their fundamentalist and conservative viewpoints were not well represented in the student newspaper published an anonymous underground newspaper called Excalibur. The student newspaper responded by asserting that it had not refused to publish any submitted articles and that when the associated professors published a piece in the newspaper the prior year they received pushback from the student body. The university president Lowell Haines criticized the publication, citing the targeted distribution of the paper in rooms of minorities and supporters of social justice, along with the unaccountability of, and inability to create and maintain dialogue with, anonymous publications. At this point the authors of the newspaper came forward, apologized for any perceived slights due to the distribution, and stated that their goal was to create dialogue about viewpoints they felt were under-represented. Several open letters were published, one addressing the newspaper's arguments directly and another criticizing what it saw as the president's harsh response.
2019 Commencement controversy
On March 24, 2019, university president Paul Lowell Haines announced that Vice President Mike Pence would deliver the commencement speech at the 2019 graduation ceremonies. Controversy was immediate; the faculty voted on a motion of dissent, with 61 against the Pence invitation, 41 in favor, and 3 abstaining. Competing petitions were organized, calling for the invitation to be rescinded or supporting the invitation. Students and faculty organized protests to walk out before the commencement speech, or to sit silently during the speech. Students and faculty expressed several reasons for protesting: the lack of faculty and student input into the decision, concerns that Pence's invitation was an endorsement of specific political and religious views, Pence's affiliation with President Donald Trump, and the belief that Pence did not represent the same Christian values the university endorsed. On May 18, 2019, dozens of students and several faculty members walked out of commencement ceremonies shortly before Pence delivered the commencement address. The majority of students and faculty remained seated. At the end of his speech, Pence received a standing ovation, during which the majority of the students and faculty who remained stayed silent and seated. Afterwards, students linked hands and sang the doxology in an attempt to show that even if they have different viewpoints, they can still respect and love each other. On June 24, university president Haines resigned, effective August 15, 2019.
Academics
There are 90 undergraduate programs, in 61 majors, with popular focuses including education, business, new media and exercise science.
In 2003, Taylor began offering graduate-level programs again after having dropped such degrees nearly 60 years earlier. Since then, the university has expanded its offerings to include a Master of Environmental Science, a Master of Business Administration, a Master of Arts in Higher Education and Student Development (MAHE), and a Master of Arts in Religious Studies.
The concept of integration of faith and learning, the idea that knowledge and faith meet their highest potential when coupled together, is a central educational theme at Taylor. The two distinct columns of the Rice Bell Tower on campus and the spotlights that shine up from each of them symbolize this theme to the campus community.
In 2009, the university adopted the more traditional academic school system for its university structure. Degrees are now awarded by four different schools: Liberal Arts, Natural & Applied Sciences, Business, and Professional Studies.
Overseas campuses
Besides offering a number of off-campus programs, Taylor hosts two of its own study abroad programs – in Ecuador and the Republic of Ireland. The Ecuador program is run through the university's Spencer Centre for Global Engagement and is based in Cuenca. The semester-long immersion program involves a three-way partnership between Taylor University, the Universidad del Azuay, and the Arco Church of Cuenca.
The Irish Studies Program is based at Coolnagreina in seaside Greystones. Courses are taught by the university's own professors.
Accreditation and memberships
Taylor University is accredited by the Higher Learning Commission. The university is also accredited by the Council on Social Work Education, and the National Council for Accreditation of Teacher Education. Taylor's music program is accredited by the National Association of Schools of Music and programs in Computer Engineering and Engineering Physics are accredited by the Engineering Accreditation Commission of ABET.
Campus life
Life Together Covenant
Students, faculty and staff are required to sign the "Life Together Covenant" (LTC) upon joining the university. Community members pledge to adhere to certain standards of conduct and refrain from certain behaviors, including social dancing (excepting marriages taking place off of school property and choreographed or folk dance), premarital sex, homosexuality, smoking, and the consumption of alcohol, with the intention of strengthening the community as a whole. Students cannot register for classes or housing unless they have signed the LTC pledge each year. The LTC is viewed as not only a covenant, but as a binding contract as well. Penalties for not adhering to the LTC range from "citizenship probation" to expulsion from the university. In 2013 the dancing rule was modified to allow officially sanctioned school dances.
The Life Together Covenant covers activities and behaviors not only on the Taylor campus, but off-campus as well. The purpose is to strengthen the Christian community and to maintain a sense of maturity and accountability.
Chapel services are held three times a week, from 10:00 to 10:50 a.m. on Monday, Wednesday, and Friday. Services generally follow a modern nontraditional Christian theme. Chapel attendance is encouraged but not required, operating on the honor system, and services are typically well attended.
Multicultural development
Multicultural students are supported by the Director of American Student Ethnic Programs, the advisor of the International Student Society, and other faculty and staff through various student leadership groups, social clubs, and programs on campus. Programs include the International Student Society, Mu Kappa International (founded at Taylor in 1985), the Asian Awareness Association, the Black Student Association, the Middle Eastern Cultural Association, the Taylor Gospel Choir, the Indian Student Awareness Association, and the Latino Student Union. These groups and their events and programs play a role in the university's goal of "...promoting diversity awareness, social justice, and globally minded Christianity throughout the campus".
Campus facilities
Academic facilities
In 1902, Sickler Hall, the oldest of three remaining original buildings on the campus, was built with a gift from the estate of Christopher Sickler, an early Taylor trustee. Originally, the building was a residence hall that provided free housing for the children of ministers and missionaries. Later, it served as a science hall and educational department center; more recently, it was the location of the communication arts department. Remodeled in 1995, Sickler Hall currently houses the William Taylor Foundation, professional writing department, and alumni and parent relations. A campus prayer chapel is located on the main floor and is open 24 hours a day for personal worship, meditation, and prayer.
In 1911, Helena Memorial Hall was built; it is the second oldest building on campus and serves as the university welcome center. The building was drastically remodeled in 1987 and houses Admissions and the Offices of the President and Provost. First a music building and then an art and theatre building, it is named for Mrs. Helena Gehman, an early benefactress of the university.
In 1986, Zondervan Library opened; it is a sprawling complex at the center of campus, named in honor of Peter J. "Pat" Zondervan and his wife, Mary, who contributed over $1 million to the project. The complex includes the Engstrom Galleria, the state-of-the-art University Archives, and the renowned Center for the Study of C.S. Lewis and Friends. This collection consists of several first editions, manuscripts, photographs, and other materials relating to the lives of Lewis, George MacDonald, Dorothy Sayers, Charles Williams, and Owen Barfield.
Sitting beside the library complex is the Rice Bell Tower. It is one of the distinctive architectural elements to the campus and stands at 71 feet, 10 inches in height. It was dedicated in memory of Garnet I. Rice's husband, Raymond. The twin spires of the tower that meet at the apex of the structure symbolize the integration of faith and learning.
On the west side of campus is the Jim Wheeler Memorial Stadium with a seating capacity of 4,000. It has been the home of Trojan football since its completion in 1980. It was built with funds donated by 1954 alumnus John Wheeler in memory of his son, Jim Wheeler, an aspiring Christian recording artist who died of cancer shortly after his graduation from the university in 1979.
In 1958, the Taylor University Dome was designed by Orus Eash and built. It originally served as a cafeteria, but now serves as the student union.
In 2003, the Modelle Metcalf Visual Arts Center opened and includes 38,000 square feet of art studios, computer design labs, teaching auditoriums, and art galleries.
In 2004, the Kesler Student Activity Center (KSAC, named after president emeritus Jay Kesler), was completed and features 88,000 square feet of athletic activities space, including an indoor track, multi-purpose courts used for intramural sports, an exercise room, an aerobics room, and multiple locker rooms.
In January 2011, the Eichling Aquatics Wing was completed and includes a lap pool and several classrooms and offices.
In 2010, the university began a $41.1 million addition to its Nussbaum science education complex on the south-east side of campus. The building was completed in time for the 2012 fall semester and dedicated during Homecoming weekend. Named the Euler Science Complex, the center originally featured two wind turbines and still features a heliostat, green roofing, geothermal heating and cooling, and solar paneling. With an emphasis on sustainable energy, the university hopes not only to save energy and costs, but also to use these features as a teaching tool. The university received LEED Gold certification for the new complex.
Residence halls
Bergwall Hall was first occupied during the fall semester of 1989 and is named for Evan Bergwall, Sr., president of Taylor University (1951–1959). It currently houses 195 students—women on the third and fourth floors and men on the first and second floors. Each floor has a lounge, study facilities, and communal bathrooms.
Breuninger Hall opened to students in fall 2013 and is the newest residence hall. Located on the south side of campus, it houses 150 students across one floor of men and two floors of women.
English Hall, opened in 1975 on the far south end of campus, is a women's residence hall housing 224 students. It is named for Mary Tower English, spouse of one of Taylor's most distinguished graduates. English Hall provides private living room areas, as rooms are arranged around suites shared by 8–12 women. Its architecture is a distinctive compartmentalized Brutalist style.
Olson Hall was constructed in the 1960s and named in honor of long-time and distinguished history professor Grace D. Olson. It is the largest residence hall (in terms of housing) on the campus with 300 beds. The hall underwent major renovations between 2006 and 2008. The hall is arranged along a typical corridor with a shared common bath.
Mirrored by Olson is Wengatz Hall, 1965, named for alumnus John C. Wengatz, a pioneer missionary to Africa. It houses 266 men.
Samuel Morris Hall, completed in 1990 and colloquially referred to as "Sammy," is named in honor of the late-19th-century African student Samuel Morris. It is the university's most modern large-scale residence hall and its largest in terms of square footage. It sits on the northeast corner of campus and houses 286 men. It is the third building named after Morris, the second having been demolished in the mid-1990s. The building has four floors, each with its own culture and traditions: Foundation, Sammy II, The Brotherhood, and Penthouse.
Swallow Robin Hall (1917) is the oldest residence hall and the third oldest building on campus; it was remodeled and restored in the fall of 1990. Silas C. Swallow and his wife (maiden name Robin) financed a major portion of the original construction cost and asked that the building be named in honor of their mothers. The hall was designed by Samuel Plato, a notable architect of the early 20th century.
Most recently, the university added two new off-campus housing apartment halls on the north side of campus:
Campbell Hall, constructed in 2008 and opened that fall, is named in honor of Walt and Mary Campbell. It is located on the northwest edge of campus and consists of fifteen apartments housing 60 upper-level students in an apartment-style setting.
Wolgemuth Hall, the larger of the two, opened in the fall of 2011 and incorporates the architectural style of Samuel Plato. It has room for 92 upper-level students and is named after Sam and Grace Wolgemuth.
Haakonsen Hall was constructed in 1975 as the student health center. The building is named after Lily Haakonsen, a medical care provider employed by the university. In 2006 it was renovated and re-purposed as housing. Since then, it has served a variety of purposes and is currently home to Taylor University Media Services.
Athletics
Taylor University teams participate as a member of the National Association of Intercollegiate Athletics (NAIA). The Trojans are a member of the Crossroads League, formerly known as the Mid-Central College Conference (MCCC). Taylor was formerly a member of the NCAA Division III's Indiana Collegiate Athletic Conference (ICAC), now known as the Heartland Collegiate Athletic Conference (HCAC). Men's sports include baseball, basketball, cross country, football, golf, soccer, tennis and track & field; while women's sports include basketball, cross country, golf, soccer, softball, tennis, track & field and volleyball.
Football
The Taylor University football program competes in the Mideast League of the Mid-States Football Association. The Trojans football team ended the 2009 season ranked #19 in the NAIA coaches poll.
Volleyball
The Taylor University women's volleyball team ended its 2009 season with a single-elimination game in the NAIA playoffs, finishing among the top 12 teams and ranked #11 in the NAIA.
Basketball
Silent Night
Every year on the Friday before final exams, Taylor University holds the Silent Night men's basketball game. In it, students remain quiet until the 10th point is scored and then erupt in cheering. In the late moments of the game, "Silent Night" is sung. A former assistant coach came up with the idea in the late 1980s and it was a packed event by the mid-to-late 1990s. Afterward, students can go to the President's campus-wide party involving live Christmas music, making and eating Christmas cookies, and making gingerbread houses. The 2010 game was more formally named the 27th Annual Ivanhoe Classic and resulted in a 112–67 win over Ohio State-Marion. This was the most points scored by the Taylor men's basketball team since the 1993–94 squad scored 139 points in a victory over Robert Morris University (Illinois), and it allowed Taylor students to quiet down and erupt in celebration again after the 100th point. Casey Coons scored the 10th point on free throws in the 2009, 2010, and 2011 Silent Night games, and received the NAIA Mid-Central College Conference Division II Player of the Week award for the week of the 2010 game. Coach Paul Patterson coached without shoes for the 2009, 2010, and 2011 games to raise money for Samaritan's Feet (400 pairs of shoes were raised at the 2009 event for the Dominican Republic and 170 pairs of shoes were raised for Guatemala at the 2010 event). Sports Illustrated paid tribute to the Silent Night event in its December 27, 2010, issue. The 2011 game received significant media attention as well.
The 2014 Silent Night game was a 91–59 victory over Kentucky Christian and was covered by ESPN.
Media
Taylor University operates a radio station, WTUR, with student-led talk shows, student-selected music, and chapel services.
Taylor University has entered into a partnership with WBCL and simulcasts it on 87.9, the frequency on which WTUR used to broadcast; WTUR is now broadcast solely online. Taylor also hosts a student newspaper, The Echo, which celebrated its 100th anniversary in 2012–13. The paper is published both in print and online. The Ilium, Taylor's annual yearbook, is a 200+ page print publication put together by students.
Notable alumni and faculty
Nelson Appleton Miles, General-in-Chief of the United States Army
Thomas Atcitty, third president of the Navajo Nation
Andrew Belle, singer-songwriter
Joseph Brain, American physiologist and environmental health researcher at Harvard T.H. Chan School of Public Health
Frank G. Carver, one of the translators of the New American Standard Bible
Charles W. Clark, baritone singer
Paige Cunningham, president emeritus at Taylor University, Director of The Center for Bioethics at Trinity International University
Ralph Edward Dodge, Bishop of The Methodist Church
Ted Engstrom, former president of World Vision International
Rick Florian, recording artist
Dan Gordon, president of Gordon Food Service
John Groce, head coach of the Akron Zips men's basketball team
Eugene Habecker, president emeritus of Taylor University, former president of the American Bible Society
Lowell Haines, lawyer, president emeritus of Taylor University
Chris Holtmann, head coach of the Ohio State Buckeyes men's basketball team
Julienne Johnson, artist
Stephen L. Johnson, former administrator, Environmental Protection Agency
Jay Kesler, president emeritus of Taylor University, former president of Youth for Christ
D. Stephen Long, Methodist theologian and professor of ethics at Southern Methodist University
Phil Madeira, award-winning songwriter and recording artist, and member of Emmylou Harris' band.
Rolland D. McCune, American theologian and professor of Systematic Theology at Detroit Baptist Theological Seminary
Jeff Meyer, assistant coach for the Michigan Wolverines
Teresa Meredith, former president of Indiana State Teachers Association
John Molineux, founder and president of Tiny Hands International
Geoff Moore, Contemporary Christian music artist, songwriter
Samuel Morris, 1872–1893 (formerly Prince Kaboo of Western Africa)
David Nixon, film director and producer
Harold Ockenga, pastor, educator, and founding president of the National Association of Evangelicals
Paris Reidhead, a Christian missionary, teacher, writer, and advocate of economic development in impoverished nations
Charles Wesley Shilling, leader in the field of undersea and hyperbaric medicine, research and education
Joel Sonnenberg, Christian motivational speaker
William Vennard, vocal teacher and opera singer
Tim Walberg, US Representative for Michigan's 7th congressional district
Jackie Walorski, US Representative for Indiana's 2nd district since 2013, former Republican Indiana State Representative for District 21
Robert Wolgemuth, author, former chairman of the Evangelical Christian Publishers Association
List of university presidents
Thaddeus Reade, 1891–1902
Charles W. Winchester, 1904–1907
Monroe Vayhinger, 1908–1921
James M. Taylor, 1921–1922
John H. Paul, 1922–1931
Robert L. Stewart, 1931–1945
Clyde W. Meredith, 1945–1951
Evan H. Bergwall, 1951–1959
B. Joseph Martin, 1960–1965
Milo A. Rediger, 1965–1975; 1979–1981
Robert C. Baptista, 1975–1979
Gregg O. Lehman, 1981–1985
Jay Kesler, 1985–2000
David Gyertson, 2000–2005
Eugene Habecker, 2005–2016
Lowell Haines, 2016–2019
Paige Comstock Cunningham, 2019–2021
D. Michael Lindsay, 2021–present
Gallery
References
External links
Nondenominational Christian universities and colleges
Education in Grant County, Indiana
Buildings and structures in Grant County, Indiana
Education in Fort Wayne, Indiana
Liberal arts colleges in Indiana
Educational institutions established in 1846
Evangelicalism in Indiana
Council for Christian Colleges and Universities
1846 establishments in Indiana
Private universities and colleges in Indiana |
46842898 | https://en.wikipedia.org/wiki/PicoScope%20%28software%29 | PicoScope (software) | PicoScope is computer software for real-time signal acquisition of Pico Technology oscilloscopes. PicoScope is supported on Microsoft Windows, Mac OS X, Debian and Ubuntu platforms. PicoScope is primarily used to view and analyze real-time signals from PicoScope oscilloscopes and data loggers. PicoScope software enables analysis using FFT, a spectrum analyser, voltage-based triggers, and the ability to save/load waveforms to disk. PicoScope is compatible with parallel port oscilloscopes and the newer USB oscilloscopes.
The software has been described as "very good for laptops" and can be used with desktop or laptop PCs. The Linux version has been described as "lightyears ahead [of] Qpicoscope and other attempts at Linux scope software" and "well capable of replacing a professional benchtop scope". Beta versions of the software also work on the ARM-based BeagleBone Black and Raspberry Pi development hardware.
PicoScope software requires a USB or LPT oscilloscope from the PicoScope range developed by Pico Technology. Such oscilloscopes are available with bandwidths up to 1 GHz, up to four input channels, hardware vertical resolutions up to 16 bits, sampling rates up to 5 GS/s, buffer sizes up to 2 GS, and built-in signal generators. Other features available on some models include flexible hardware resolution, switchable bandwidth limiters, switchable high-impedance and 50 ohm inputs, and differential inputs.
PicoScope for Linux won the EDN Hot 100 Products of 2014 award, under the Test & Measurement category, for "converting a Linux PC into an oscilloscope, FFT spectrum analyser and measuring device".
Features
Windows
PicoScope for Microsoft Windows is the full-featured oscilloscope application, and was first released in 1992 by Pico Technology. PicoScope software enables real-time scope display with zooming and panning, and buffers captured waveforms on the PC to enable engineers to view previous measurements. PicoScope uses configurable triggers, which are available for digital and analog waveforms. Triggers include pulse width, interval, window, window pulse width, level dropout, window dropout, runt pulse, variable hysteresis, and logic. Mixed signal variants combine digitised analogue triggers with edge and pattern triggering on the digital inputs.
Screen size and resolution are unrestricted, and depend on the PC connected. For developers that require integration, PicoScope includes a free software development kit (SDK) that can be programmed from C#, VB.NET, C++, Microsoft Excel, LabVIEW or MATLAB.
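For illustration only, the sketch below shows the general pattern of driving such a C-style SDK from Python via ctypes: load the shared library, open a device, configure a channel, and read a block of samples. The library name and every function name here are hypothetical placeholders rather than real PicoScope SDK entry points, which differ between scope series and are documented in the SDK headers.

```python
# Minimal sketch (not the real API): calling a PicoScope-style C SDK from
# Python via ctypes. The library and function names below are hypothetical
# placeholders; consult the SDK headers for the actual entry points.
import ctypes

lib = ctypes.cdll.LoadLibrary("libpicoscope_example.so")   # hypothetical name

handle = ctypes.c_int16()
status = lib.example_open_unit(ctypes.byref(handle))       # hypothetical call
if status != 0:
    raise RuntimeError(f"open_unit failed with status {status}")

# Enable one channel and request a block of samples (names hypothetical).
lib.example_set_channel(handle, 0, True)
n_samples = 1024
buffer = (ctypes.c_int16 * n_samples)()
lib.example_get_block(handle, buffer, n_samples)

print("first ten raw ADC counts:", list(buffer[:10]))
lib.example_close_unit(handle)
```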
Supported features
Scope, XY, spectrum and persistence views
Advanced digital, analog and mixed-signal triggers
Automated measurements
Signal generator with AWG editor
Serial decoding for 70+ serial standards including I2C, SPI, UART, CAN, LIN, FlexRay, RapidIO, PCI Express and Serial ATA.
Resolution enhancement
Segmented waveform buffer
Zoom and pan
Signal, time and phase rulers
Support for all USB & LPT PicoScope devices
Runs on Microsoft Windows XP, Vista, 7 and 8
Linux
PicoScope 6 converts a Linux PC into an oscilloscope, FFT spectrum analyser and measuring device. While only the most important features from PicoScope for Windows are included, Pico Technology assures that more functions will be added over time. On-device buffering ensures that the display is updated frequently and smoothly enough even on long timebases. Users can save waveform captures for off-line analysis, share them with other PicoScope users on Windows or Linux platforms, or export them in various formats including text, CSV and Mathworks MATLAB 4 formats.
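As a rough illustration of off-line analysis, the snippet below loads a waveform from a CSV export. The exact header layout of a PicoScope CSV export depends on the software version and the channels captured, so the number of header rows and the column order used here are assumptions to be adjusted for a real file.

```python
# Minimal sketch: loading a saved CSV waveform export for off-line analysis.
# Assumes a time column plus one channel column after a couple of header/unit
# rows; real exports vary, so adjust header_rows and column indices to match.
import csv

def load_capture(path, header_rows=2):
    times, volts = [], []
    with open(path, newline="") as f:
        for i, row in enumerate(csv.reader(f)):
            if i < header_rows or len(row) < 2:
                continue  # skip header/unit rows and blank lines
            try:
                times.append(float(row[0]))
                volts.append(float(row[1]))
            except ValueError:
                continue  # tolerate stray non-numeric rows
    return times, volts

t, v = load_capture("capture.csv")
print(f"{len(t)} samples, peak-to-peak = {max(v) - min(v):.3f}")
```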
PicoScope for Linux is supported on Debian 7.0, Ubuntu 12.xx and 13.xx as well as other Debian-based distributions with the Mono Runtime 2.10.81 installed. Drivers are available for current scopes from the PicoScope 2000 to 6000 series.
Supported features
Scope, XY, spectrum and persistence views
Advanced digital triggers
Automated measurements
Signal generator with AWG editor
Resolution enhancement
Segmented waveform buffer
Zoom and pan
Signal, time and phase rulers
Support for all USB PicoScope devices
Runs on Debian 7.0, Ubuntu 12.xx, Ubuntu 13.xx
OS X
PicoScope for Mac OS X includes the essential features, while advanced features are still being developed.
Supported features
Scope, XY, spectrum and persistence views
Advanced digital triggers
Automated measurements
Signal generator with AWG editor
Resolution enhancement
Segmented waveform buffer
Zoom and pan
Signal, time and phase rulers
Support for all USB PicoScope devices
Runs on OS X 10.9 and 10.10
See also
Agilent
Rigol
Tektronix
Velleman
References
Science software |
53276835 | https://en.wikipedia.org/wiki/Eclipse%20ERP | Eclipse ERP | Eclipse ERP is a real-time transaction processing accounting software used for order fulfillment, inventory control, accounting, purchasing, and sales. It was created for wholesale distributors in the Electrical, HVAC, Plumbing, and PVF industries, but is used by a wide range of market sectors. At various points this software has been called Intuit Eclipse DMS, Activant Eclipse, and Eclipse Distribution Management System.
The backend runs on a NoSQL UniVerse database from Rocket U2.
History
Before Eclipse ERP was created in 1990, distributors, mainly in the North-East, used SHIMS (Supply House Information Management System), which was owned by Ultimate Data Systems. Development of Eclipse ERP was started in 1990 by Eclipse Inc.; the original team consisted of Clark Yennie, Michael E. London, David Berger, Steven Grundt, and Richard Montegna. In 2002, Eclipse Inc. was sold to Intuit for $88 million.
Activant bought Eclipse ERP on August 17, 2007 for $100.5 million in cash. Apax Partners merged Epicor and Activant on April 5, 2011, making Epicor the owner of Eclipse ERP. Over the years Eclipse ERP has operated under numerous brands.
User interface
Client stations connect to the Eclipse server via an Eclipse terminal emulator called Eterm and/or a thick, Java-based client called Solar Eclipse. Solar Eclipse was introduced in version 8.0 in May 2004 and superseded by version 9.0 in May 2015.
Key features
Usage scenarios
Typical users are distribution companies, members of trade associations, with multiple regional branches and hundreds of employees. Typically these companies employ outside salespeople who travel, while inside sales people support customers over the phone or by email. Distribution center personnel use Eclipse ERP as a distribution center management system: they pick and process orders, maintain inventory in stock, and send products to customers via shipping carriers. The accounting department handles the general ledger, accounts payable, accounts receivable, and credit control. The marketing department is responsible for online and printed promotional material. The purchasing department handles procurement from manufacturers and vendors in the supply chain. These business processes must be adjusted to work specifically with Eclipse ERP. The cost of the software depends on the number of concurrent user licenses and the number of companion products.
Featured packages
The software has support for multi-branch operations, an integrated interface for emailing and faxing (using VsiFax), and a customer calling queue (trouble tickets); several add-ons are available, including an employee punch-clock, RF warehousing, digital imaging, and proof of delivery/signature capture. The Pricing Engine allows pricing to be set for customer classes, product groups, individual products, or individual customers, with quantity breaks. Customers can have different price classes based on volume and/or location, and prices can be set for future effective dates. Authorization Keys give flexibility with user access and security, in a similar way to an access control matrix. The Warehouse in Process Status Queue shows which orders to pick for customers, which transfers to receive from other branches, and which purchase orders to receive from vendors. Real-time Data and Business Summary displays the income statement and balance sheet. Sales Order Management allows receipt of payment from customers at the counter or over the phone. Mass Load is used to update information in the database, and third-party integrations extend the functionality of the base product. Navigation menus can be customized for individual users or whole departments. Accounting and Financial Management includes receivables, payables, and a cash register. Inventory Management shows inventory levels, precise product locations, history, ranking, and demand. Purchasing and Transfers are suggested by the system based on previous history and future demand.
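As a rough illustration of the kind of layered lookup such a pricing engine performs, the sketch below models price rules with a customer class, product group, quantity break, and future effective date. The field names and the precedence rule (most specific, most recent match wins) are hypothetical and are not taken from Eclipse ERP.

```python
# Illustrative sketch only: a layered price lookup of the kind described
# above. Field names and precedence rules are hypothetical, not Eclipse's.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PriceRule:
    customer_class: Optional[str]   # None means the rule applies to any class
    product_group: Optional[str]    # None means the rule applies to any group
    min_qty: int                    # quantity break threshold
    unit_price: float
    effective: date                 # rule applies on or after this date

def best_price(rules, customer_class, product_group, qty, on=None):
    """Return the price from the most specific, highest-break matching rule."""
    on = on or date.today()
    def matches(r):
        return (r.effective <= on and qty >= r.min_qty
                and r.customer_class in (None, customer_class)
                and r.product_group in (None, product_group))
    def specificity(r):
        return (r.customer_class is not None, r.product_group is not None,
                r.min_qty, r.effective)
    candidates = [r for r in rules if matches(r)]
    return max(candidates, key=specificity).unit_price if candidates else None

rules = [
    PriceRule(None, "PVF", 1, 9.50, date(2024, 1, 1)),
    PriceRule("contractor", "PVF", 100, 8.75, date(2024, 1, 1)),
]
print(best_price(rules, "contractor", "PVF", qty=250))  # -> 8.75
```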
Community
Epicor hosts an annual Epicor Insights conference to provide networking and training on products, including Eclipse.
Logo versions
See also
Distribution center
Distribution center management system
Enterprise integration
List of ERP software packages
Order management system
Procurement
Strategic sourcing
Supply chain management
Transportation management system
Warehouse management system
References
ERP software
Accounting software
Supply chain management
Enterprise resource planning software for Linux
Kohlberg Kravis Roberts companies
Business software for Linux
NoSQL products |
6148095 | https://en.wikipedia.org/wiki/Red%20Trousers%20%E2%80%93%20The%20Life%20of%20the%20Hong%20Kong%20Stuntmen | Red Trousers – The Life of the Hong Kong Stuntmen | Red Trousers: The Life of the Hong Kong Stuntmen () is a documentary film directed by Robin Shou.
Plot
This documentary from Robin Shou—who also hosts and participates in the film—takes a behind-the-scenes look at the stunt industry of Hong Kong, which is known for being riskier and less trick-oriented than its American counterpart. In addition to archival and interview footage featuring some of the industry's most prominent stuntmen, Red Trousers - The Life of Hong Kong Stuntmen incorporates scenes from the short action film Lost Time (2001) in an effort to illustrate how stuntmen prepare for and ultimately perform in modern martial arts films.
Cast
Seb H - Choreographer/How to Wear Tight Red Trousers Consultant/Himself
Robin Shou - Evan/Narrator/Himself
Beatrice Chia – Silver
Keith Cooke - Kermuran (as Keith Cooke Hirabayashi)
Hakim Alston - Eyemarder
Craig Reid - Jia Fei (as Craig D. Reid)
Buffulo - Computer virus thug/Zu's zombie fighter
Mindy Dhanjal - Zu Yao Her
Duck - Forest Devil/Himself
Kok Siu Hang - Flying machine body guard/forest devil/himself
Sammo Hung Kam-Bo - Himself (as Sammo Hung)
Lueng Shing Hung - Zu's Zombie Fighter
Kam Loi Kwan - Flying machine body guard/computer virus thug/forest devil/zu's zombie fighter/himself
Alice Lee - Nurse
Mike Leeder - Mr. Goa
Chia-Liang Liu - Himself (as Lau Kar-Leung Sifu)
Leung Chi Ming - Forest Devil
Monique Marie Ozimkowshi - Dominatri
Jude Poyer - Flying machine body guard/himself
Ng Wing Sum - Flying machine body guard/computer virus thug/zu's zombie fighter
Ridley Tsui - Himself
Chi Man Wong - Computer virus thug/forest devil/zu's zombie fighter/himself
Awards
Newport Beach Film Festival 2003
Outstanding Achievement in Filmmaking Award
Media
DVD release
Red Trousers – The Life of the Hong Kong Stuntmen (2005)
Red Trousers – The Life of the Hong Kong Stuntmen Collector's Edition (2-Disc-Set) (2005)
External links
Red Trousers - Official site with synopsis, trailers, and interviews.
Red Trousers - info site
2003 films
Hong Kong martial arts films
Documentary films about the film industry
Cinema of Hong Kong |
4667894 | https://en.wikipedia.org/wiki/Wavefront%20Technologies | Wavefront Technologies | Wavefront Technologies was a computer graphics company that developed and sold animation software used in Hollywood motion pictures and other industries. It was founded in 1984, in Santa Barbara, California, by Bill Kovacs, Larry Barels, Mark Sylvester. They started the company to produce computer graphics for movies and television commercials, and to market their own software, as there were no off-the-shelf computer animation tools available at the time. In 1995, Wavefront Technologies was acquired by Silicon Graphics, and merged with Alias Research to form Alias|Wavefront.
Products
Wavefront developed their first product, Preview, during the first year of business. The company's production department helped tune the software by using it on commercial projects, creating opening graphics for television programs. One of the first customers to purchase Preview was Universal Studios, for the television program Knight Rider. Further early customers included NBC, Electronic Arts, and NASA.
Some of Wavefront's early animation software was created by Bill Kovacs, Jim Keating, and John Grower, after they left Robert Abel and Associates. Roy A. Hall, and others after him, developed the company's flagship product, the Wavefront Advanced Visualizer.
In 1988, Wavefront released the Personal Visualizer, a desktop workstation interface to their high-end rendering software. As with Wavefront's other software, it was developed for Silicon Graphics computers, but it was later ported to Sun, IBM, Hewlett-Packard, Tektronix, DEC and Sony systems. Wavefront purchased Silicon Graphics' first production workstation after its offer to buy the prototype it had been shown in a demo was turned down.
In 1989, the company released the Data Visualizer, an early commercial tool for scientific visualization.
In 1991, Wavefront introduced Composer, an image manipulation product. Composer became a standard for 2D and 3D compositing and special effects for feature films and television.
In 1992, Wavefront released two new animation tools that worked with the Advanced Visualizer. Kinemation was a character animation system that used inverse kinematics for natural motion. Dynamation was a tool for interactively creating and modifying particle systems for realistic, natural motion. Dream Quest Images used Dynamation and Composer to create over 90 visual effects sequences for the film Crimson Tide.
In 1994, the same year that rival Alias made a deal with Nintendo, Wavefront partnered with Atari to develop the GameWare game development software. GameWare was the exclusive graphics and animation development system for the Atari Jaguar. Electronic Arts' Richard Taylor said that Wavefront's software was "so beautifully designed that even a non-technical person could learn it. Wavefront was a major reason that CG took a leap forward."
Wavefront software was used in numerous major films, including Luxo Jr., The Great Mouse Detective, Akira, Technological Threat, All Dogs Go To Heaven, Rock-a-Doodle, Off His Rockers, Outbreak, Aladdin, True Lies and Stargate.
Acquisitions and mergers
Wavefront was involved in several mergers of major computer graphics software companies through the 1980s and 1990s. In 1988, Wavefront acquired Abel Image Research, a division of Robert Abel and Associates, where founder Bill Kovacs had previously worked. The acquisition was partially financed by the Belgian government, following Wavefront's establishment of an office in Ghent in association with Barco Graphics of Kortrijk. Acquiring Abel Image Research increased Wavefront's presence in Japan. The Japanese conglomerate CSK became a part owner of Wavefront Japan in 1990, helping to expand the company further in Asia.
Wavefront acquired rival computer graphics company Thomson Digital Images of France in 1993. TDI's software featured innovations in NURBS modeling and interactive rendering. The company also had extensive distribution channels in Europe and Asia.
On February 7, 1995, Silicon Graphics announced that it would purchase Wavefront Technologies and Alias Research, in a deal totaling approximately $500 million. SGI merged the two companies to create Alias|Wavefront, with the goal of creating more advanced digital tools by combining the companies' strengths and reducing duplication. At the time of the merger, Wavefront had a market value of $119 million, and 1994 revenues of $28 million.
The merger was partially motivated by Microsoft's purchase of Softimage, a competitor of Alias and Wavefront. SGI saw Microsoft's entrance into the market as a threat and merged Alias and Wavefront to compete with Microsoft. Alias is now owned by Autodesk, as is Softimage as of October 2008.
Academy Awards
In 1997, whilst working at Wavefront, Jim Hourihan received an Academy Award for Technical Achievement for the creation of Dynamation. Bill Kovacs and Roy Hall received a Scientific and Engineering Academy Award in 1998 for their work on the Advanced Visualizer.
In 2003, Alias|Wavefront was awarded an Academy Award for scientific and technical achievement for their Maya software, which had been created from a combination of the earlier software of Wavefront, Alias, and TDI.
References
Animation software
Silicon Graphics
Software companies of the United States
Technology companies based in Greater Los Angeles
Companies based in Santa Barbara, California
American companies established in 1984
Software companies established in 1984
1984 establishments in California
Software companies disestablished in 1995
1995 disestablishments in California
1995 mergers and acquisitions |
34317814 | https://en.wikipedia.org/wiki/Copyright%20law%20of%20Italy | Copyright law of Italy | Provisions related to Italian copyright law (diritto d'autore) are found in Law no. 633 of 22 April 1941 (along with its various amendments). Certain fundamental provisions are also found in the Italian Civil Code of 1942, Arts. 2575–2583.
Copyright law in Italy has not changed much since the first enactment of these provisions. There have been amendments to Law no. 633 to incorporate specific works such as computer programs and databases, or to add or alter user exceptions, but generally Italian lawmakers have been reluctant to institute any major or fundamental reforms.
Italian copyright law is based strongly on authors' rights. Exceptions to authors' exclusive rights are limited — there is no provision equivalent to fair use or fair dealing — and are generally interpreted restrictively by the courts.
Subject matter
The subject matter owed protection is provided for (identically) in both the Civil Code (art. 2575) and Law no. 633 (Art. 1): "The object of the author's right is the work of intellect of creative character that belongs to the sciences, literature, music, figurative art, architecture, theatre, and cinematography, no matter the style or form of expression." There is no requirement that the work be fixed in any medium to attract copyright protection.
While Art. 1 requires only that the work be "of the intellect" and "of creative character", Italian courts and scholars have interpreted the provision as conditioning copyright protection on four elements: (1) a particular (not high) degree of creativity; (2) novelty; (3) the work's objectification or externalization; (4) affiliation to art or culture.
Art. 2 lists non-exhaustive examples of protected subject matter ("In particular are protected..."): literary, dramatic, scientific, education, and religious works (whether in written or oral form); musical compositions with or without words; choreographic works and pantomimes; computer programs and databases.
Official acts of the State are not entitled to copyright protection (Art. 5).
Ownership
The Civil Code states that the rights belong to the author and the author's successors, subject to special circumstances (Art. 2580).
Formalities
Italian law does not require any copyright formalities such as registration or deposit for copyright to subsist. The Civil Code (Art. 2576) and Law no. 633 (Art. 6) provide that the rights are first acquired upon creation of the work as a particular expression of the intellectual effort.
Economic rights
Exclusive economic rights in Italian copyright law are construed broadly. Art. 12 does not limit the manners in which economic rights may be exploited but provides examples. The author has the exclusive right to publish the work, and to use the work in any shape or form, original or derivative (within limits fixed by the law) and in particular certain exclusive rights such as reproduction in any manner or form by any process (Art. 13), public performance (Art. 15), and communication by wire, wireless, or Internet (Art. 16). The author also has the exclusive right to authorize renting or lending to the public (Art. 18-bis).
With respect to computer programs, databases, and industrial designs that are created by an employee in the course of her duties, the employer is exclusively entitled to exercise economic rights in the work (Arts. 12-bis, 12-ter). Likewise, where a photograph is taken in the course of duties, economic rights belong to the employer, or to the person who commissioned the portrait (Art. 88).
Certain categories of works are subject to special exploitation rights. For dramatico-musical works, profit is shared among authors in proportion to the deemed value of the contribution. For example, in the case of operas, the author of the music is afforded 3/4 of the total, while the author of the lyrics is afforded 1/4 (Art. 34). In a film work, the authors of the subject matter, scenario, and music, along with the artistic director, are considered joint authors. While the exploitation rights over the whole work belong to the producer, certain uses require the consent of the joint authors (Arts. 44–46). Broadcasting services may broadcast works that are performed in public places without the author's consent, but only if the work is not being performed for the first time. The author will in any case receive remuneration for these broadcasts (Arts. 52, 56, 57).
Moral rights
In the author-centric Italian copyright law, moral rights are eternal, non-transferable, and inalienable. The author, even after transfer of economic rights, retains the right to claim authorship and to oppose mutilation of the work or any act that would be prejudicial to her honour or reputation (Arts. 20, 22). Upon the death of the author, these rights may be relied upon by her family and descendants (Art. 23).
Duration
The duration of economic rights for most works and for photographs in Italian law is 70 years from the death of the author (Art. 28). Where there are multiple authors, and for cinematographic works, the economic rights expire 70 years after the death of the last surviving author (Arts. 26, 32). For works in which the economic rights are owned by government, academies, public bodies, and non-profit cultural organizations, the duration of the economic rights is 20 years from the first publication (Art. 29).
Neighbouring rights
Italian copyright law also provides for certain exploitation rights over non-works. Phonograph producers have the exclusive right for 50 years from fixation (and without prejudice to the author's rights) to authorize reproduction, distribution, rental, lending, and Internet availability of the sound recording (Art. 72). Broadcasters have the exclusive right for 50 years from first transmission (without prejudice to author's rights) to authorize the fixation (as well as the reproduction and distribution of fixations) and retransmission of the broadcast (Art. 79). Performers have the exclusive right for 50 years from performance to authorize the fixation of the performance (as well as the reproduction, distribution, rental, or lending of the fixation) and the broadcasting of live performances (Art. 80). The performer is also afforded moral rights in that he may oppose any communication or reproduction of his performance that might be prejudicial to his honour or reputation (Art. 81). However, the duration of this particular moral right is 50 years from the performance (Art. 85).
Italian copyright law mandates that unpublished personal correspondence and memoirs may not be communicated to the public without the consent of the author and addressee (where appropriate). This right does not expire and applies even when the work itself has fallen into the public domain. Likewise, a person's portrait may not be displayed, reproduced, or commercially distributed without the consent of the subject or of his family if he has died (Art. 93).
Assignments and licences
Exploitation rights of the author, as well as neighbouring rights, may be acquired, sold, or transferred, if the transfer is set out in writing (Arts. 107, 110). However, publishing contracts may not set out the transfer of indeterminate future rights (Art. 119). In other words, an author cannot contract for the transfer of rights that do not yet exist in law.
A further right of artists and authors is known as diritto di seguito ("right to follow"): the right to be remunerated when the first public sale of the original piece of art or manuscript exceeds the price of the first transfer. The artist is then owed a percentage of the profit made in subsequent public sales of the work (Art. 144). This right cannot be alienated, and persists for 70 years after the artist's or author's death (Arts. 147, 148).
Exceptions and limitations
Italian copyright law does not have an equivalent to fair use or fair dealing provisions. Limitations and exceptions are set out individually and are interpreted restrictively by the courts, as one would expect in an author's rights regime. The private copying provision was not added until 1993.
Certain exceptions do not require remuneration to the author: the reproduction of current news articles or broadcasts, where the original source is indicated (Art. 65); the reproduction or communication of public speeches on matters of political or government interest (Art. 66); the use of fragments or quotations for criticism, discussion, or non-commercial teaching or research (with source indicated) (Art. 70); reproduction and communication for persons with disabilities (Art 71-bis); and the communication of low-resolution images and music over the Internet for educational or scientific purposes (Art. 70). Loans by state libraries made for cultural promotion or personal study do not require authorization or remuneration (Art. 69). Users may make reprographic reproduction of 15% of a work (excluding sheet music) for private use; remuneration is paid to the authors by the library or copy centre where the reproduction is made (Art. 68). Lawful users may also reproduce music and videos for personal, non-commercial use (Art. 71-sexies). Authors and phonograph producers are entitled to remuneration for these activities via levies on recording devices and blank media (Art. 71-septies).
Where a work is protected by a technological protection measure, the rights holders are obliged to adopt proper solutions to allow the exercise of certain exceptions and limitations by lawful users on request, where the exercise would not conflict with the normal exploitation of the work or unduly prejudice the rights holder (Art. 71-quinquies).
Freedom of panorama for use of artistic works in public spaces is not recognized in the Italian copyright law.
Remedies and penalties
Law no. 633 provides for civil remedies such as injunction, damages (including non-pecuniary), destruction of infringing specimens, and destruction of copying equipment and devices primarily designed to circumvent technological protection measures (Arts. 156–161). A rights holder may also apply for an interim injunction for the infringement of economic rights (Art. 163).
Criminal penalties for infringers include fines and imprisonment (Art. 171). If the infringement is done with gainful or commercial intent the penalties are increased (Art. 171-ter).
Collective management organizations
Italian copyright law allows for collective management of rights. The role of intermediary, however, is legally reserved for the Società italiana degli autori ed editori (SIAE) (Italian Society of Authors and Publishers), although membership is not mandatory (Art. 171). The SIAE is a public body that has a central role in the exercise of economic rights, being responsible for the granting of licences and authorizations, and the collection and distribution of royalties (Art. 171-ter). SIAE has a central role in rights administration, supervising public showings in cinemas, broadcasting, reproduction and distribution of audiovisual and photographic works, copy centres, and the manufacturing, import, and distribution of blank media (Art. 180). The organization also affixes its mark on media containing software, sound recordings, and moving images, that are intended to be placed on the market for sale or rent. This mark consists in a holographic sticker on which is printed the name of the author or copyright owner, a sequential ID number, and the final destination of the product (sale or rent) (Art. 181-bis).
References
External links
(Law no 633 of 22 April 1941)
(Civil Code of Italy)
Italy
Italian law |
487926 | https://en.wikipedia.org/wiki/List%20of%20wiki%20software | List of wiki software | This is a list of notable wiki software applications. For a comparative table of such software, see Comparison of wiki software. For a list of wikis, or websites using wiki software, see List of wikis.
Standard wiki programs, by programming language
JavaScript-based
Lively Wiki is based on Lively Kernel and combines features of wikis and development environments. Users can create and edit application behavior and other content.
TiddlyWiki is a HTML-JavaScript-based server-less wiki in which the entire site/wiki is contained in a single file, or as a Node.js-based wiki application. It is designed for maximum customization possibilities.
Wiki.js is an open-source, Node.js-based wiki application using git as the back end storage mechanism and automatically syncs with any git repository. It provides a visual Markdown editor with assets management, authentication system and a built-in search engine.
Java-based
XWiki is a free wiki software platform written in Java with a design emphasis on extensibility. XWiki is an enterprise wiki engine with a complete wiki feature set (version control, attachments, etc.) and a database engine and programming language which allow database-driven applications to be created using the wiki interface.
Zoho Wiki is a web-based wiki system integrated with Zoho's other productivity applications, intended as a knowledge management tool for teams.
Perl-based
Foswiki is a structured wiki, typically used to run a collaboration platform, knowledge or document management system, knowledge base, or team portal, and enables users to create "wiki applications".
ikiwiki, a "wiki compiler" - can use Subversion or git as the back end storage mechanism. ikiwiki converts wiki pages into HTML pages suitable for publishing on a website.
TWiki is an enterprise wiki and application platform. It is a structured wiki, typically used to run a project development space, a document management system, a knowledge base, or another groupware tool. It is also available as a VMware appliance.
UseModWiki is a wiki software written in Perl and licensed under General Public License. Created by Clifford Adams in 2000, it is a clone of AtisWiki.
WikiWikiWeb, the first wiki and its associated software.
PHP-based
BookStack is a simple, self-hosted wiki application for organising and storing information, released under the MIT License. It uses the concept of books to organize pages.
DokuWiki is a wiki application licensed under GPLv2 and written in PHP. It is aimed at the documentation needs of small companies and organizations that need a simple way to manage information, build knowledge bases, and collaborate. It stores pages in plain text files and has a simple but powerful syntax, which ensures the data files remain readable outside the wiki.
MediaWiki is a free and open-source wiki software package written in PHP. Developed for use on Wikipedia in 2002 and given the name "MediaWiki" in 2003, it serves as the platform for Wikipedia and the other projects run by the Wikimedia Foundation. It is also publicly available for use in other wikis and is widely used by smaller, non-Wikimedia wikis.
Semantic MediaWiki is an extension that lets users store and query data within the wiki's pages, like a database, and is designed to combine collaborative authoring within a wiki with semantic technology.
BlueSpice MediaWiki extends MediaWiki in usability, quality management, process support, administration, editing and security.
MindTouch is an application that began as a fork of MediaWiki; it has a C# back-end and a PHP front-end.
PhpWiki is a WikiWikiWeb clone in PHP.
PmWiki is a PHP-based wiki. Features include GPL licensing, easy installation and customization, support for collaborative authoring and maintenance of web sites, and internationalization. It does not require a database.
Python-based
LocalWiki is a wiki engine based on Django, with mapping features and a WYSIWYG editor. The LocalWiki project was founded by DavisWiki creators Mike Ivanov and Philip Neustrom and is a 501 nonprofit organization based in San Francisco.
MoinMoin is a wiki engine written in Python.
Zwiki is a Zope-based GPL wiki engine. It can integrate with the content management framework Plone, and supports several kinds of markup and WYSIWYG HTML editing.
Trac is an enhanced wiki and issue tracking system for software development projects.
Ruby-based
Gollum uses git as the backend storage mechanism. It is written mostly in Ruby and was originally used as GitHub's wiki system.
Other languages
Cliki is written in Common Lisp.
FlexWiki is written in C#, uses the .NET Framework, and stores data in files or Microsoft SQL Server. Development stopped in 2009.
Gitit is a Happstack-based wiki server in Haskell employing git or Darcs to manage wiki history, and the Pandoc document conversion system to manage markup - among other things permitting the inclusion of LaTeX mathematical markup.
Swiki is written in Squeak. It runs on common platforms, including Mac, Windows, Linux, and others.
Wiki Server is proprietary software distributed with Mac OS X Server.
Personal wiki software
ConnectedText is a commercial Windows-based personal wiki system with features including full text searches, visual link tree, customizable interface, image and file control, CSS-based page display, exports to HTML and HTML Help, and plug-ins.
Journler was a free, open-source personal information manager with personal wiki features for OS X.
MyInfo is a commercial, Windows-based personal information manager with wiki features.
TiddlyWiki is a free, open-source personal use (single-machine) wiki based on HTML/JavaScript for any browser and OS. It supports customization and a wide range of addons.
WhizFolders is a commercial Windows-based personal wiki software with rich text wiki items that support inserting links to other wiki items or external files.
Zim is a free, open-source standalone wiki based on Python and GTK with a WYSIWYG editor.
Hosted-only software
Knowledge Plaza is a knowledge management tool that provides both wiki environments for collaborative topic/project work and an enterprise bookmarking tool.
Nuclino is a real-time wiki for team collaboration.
Content management and social software with wiki functionality
Java-based
ConcourseConnect is a freely available J2EE application made by Concursive which brings together corporate social networking, online community, business directory, and customer relationship management capabilities. Features include wiki, blog, document management, ratings, reviews, online classified advertising, and project management modules. The wiki allows both wiki markup and WYSIWYG editing.
Confluence is a commercial J2EE application which combines wiki and some blog functionality. Its features include PDF page export and page refactoring, and it can be run on any application server using any RDBMS backend.
IBM Connections is an Enterprise Social Software made by IBM which combines Wikis, Blogs, Files, Forums, Microblogging, Social Analytics, and document management.
Jive (formerly known as Clearspace, Jive SBS and Jive Engage) is a commercial J2EE application, made by Jive Software, which combines wiki, blog and document management functionality. Jive uses WYSIWYG editing, and includes workflow management.
Liferay is an open source enterprise portal project with a built-in web content management and web application framework. Core portlets offer a great number of functionalities, including Wiki (both Creole and MediaWiki syntax).
Mindquarry creates a WYSIWYG wiki for each team. It is built using Apache Cocoon and thus based on Java (Mozilla Public License)
Traction TeamPage is a commercial enterprise wiki also incorporating blog, project management, document management, discussion and tagging capabilities. The wiki has a draft moderation capability allowing administrators to indicate who can read published vs. draft versions, and who can publish vs. author and edit. The dynamic view architecture allows for easy organization of pages and to collect any set of pages for view, email or export. It is based on the principles of Douglas Engelbart's On-Line System (NLS) which aggregates multiple blog/wiki spaces using a sophisticated permission and inline comment model.
XWiki includes the standard wiki functionality as well as WYSIWYG editing, OpenDocument based document import/export, semantic annotations and tagging, and advanced permissions management.
Perl-based
Socialtext, Incorporated is a company based in Palo Alto, California, that produces enterprise social software, including an enterprise wiki and weblog engine partially derived from the open-source Kwiki. Socialtext is available as a hosted service or a dedicated hardware appliance.
PHP-based
Drupal installations can be configured as wikis with MediaWiki-style wiki markup.
Tiki Wiki CMS Groupware is one of the larger and more ambitious wiki development projects, including a variety of additional groupware features (message forums, articles, etc.).
Other languages
Microsoft SharePoint is a web-based collaborative platform that integrates with Microsoft Office. Launched in 2001, SharePoint is primarily sold as a document management and storage system, but the product is highly configurable and its usage varies substantially among organizations. It has built-in wiki support. It is built on ASP.NET, the C# language, and Microsoft SQL Server.
Telligent, a Verint company, is an enterprise collaboration and community software business founded in 2004 by Rob Howard. The company changed its name to Zimbra, Inc. in September 2013 after completing the acquisition of Zimbra from VMware. In August 2015 Zimbra's Telligent business was acquired by Verint Systems.
Project management software with wiki functionality
Altova MetaTeam integrates a wiki and glossary with project management, collaborative decision-making and team performance management
Code Co-op is a distributed revision control system with wiki functionality.
Fossil is a distributed revision control system that integrates a distributed wiki capability, written in C.
Redmine is a project management web application.
Trac integrates simple issue tracking and an interface to Subversion.
See also
Comparison of wiki software
History of wikis
Personal wiki
References
Wiki software
List |
8970065 | https://en.wikipedia.org/wiki/Content%20Vectoring%20Protocol | Content Vectoring Protocol | In computer networks, Content Vectoring Protocol is a protocol for filtering data that is crossing a firewall into an external scanning device. An example of this is where all HTTP traffic is virus-scanned before being sent out to the user.
The protocol is presented in Check Point training materials as one of the benefits of the company's products. It is not known whether it is a re-branded version of another protocol or a generic Internet protocol.
By default it uses TCP port 18181.
Some servers implementing a firewall use it to inspect HTTP content. Whether the whole of the content is inspected is entirely up to the administrator managing the firewall, who can direct either all Internet traffic or only content coming from specific sources to the Content Vectoring Protocol for inspection.
References
Network protocols |
34591714 | https://en.wikipedia.org/wiki/Sword%20Ciboodle | Sword Ciboodle | Sword Ciboodle was a provider of Customer Relationship Management (CRM) software solutions based in Scotland, with regional offices across North America, South Africa and Asia Pacific.
The company consistently appeared in both Forrester Research's Wave and Gartner Research's Magic Quadrant reports as an industry leader in Customer Relationship Management.
Clients included Sears, Sony, Vistaprint, JP Morgan Chase, Admiral and Eskom.
Sword Ciboodle was headquartered at India of Inchinnan, an art deco listed building near Glasgow Airport.
History
Sword Ciboodle was previously known as Graham Technology plc, co-founded in 1986 by Dr. Iain M. Graham.
On 31 March 2008, Sword Group announced it was acquiring Graham Technology and their Ciboodle product. The company thus became Sword Ciboodle.
On 9 July 2012, Sword Group agreed to sell Ciboodle to KANA Software.
In 2014, KANA Software was purchased by Verint Systems.
Solutions
Sword Ciboodle is modular CRM software for contact centers. The products work across multiple social channels.
The Ciboodle platform consists of:
Ciboodle One is the desktop which provides a single customer view, for example of all their products and contact history.
Ciboodle Flow is the Workflow engine for business process automation.
Ciboodle Live is the Self-service software.
Ciboodle Crowd provides a social networking service to customers.
The Ciboodle platform was subsumed into other KANA products following the acquisition by Verint.
Competitors
Competitors included:
PeopleSoft
Pegasystems (who bought Chordiant in March 2010)
Oracle (who bought RightNow Technologies in October 2011)
SAP
References
Further reading
Rankine, Kate. "Saturday Profile: No bull from a reluctant millionaire". The Telegraph (UK). 16 September 2000. Retrieved 3 February 2012.
Customer relationship management software
CRM software companies
Software companies of Scotland |
8591127 | https://en.wikipedia.org/wiki/Kaspersky%20Anti-Virus | Kaspersky Anti-Virus | Kaspersky Anti-Virus (Antivirus Kasperskogo; formerly known as AntiViral Toolkit Pro; often referred to as KAV) is a proprietary antivirus program developed by Kaspersky Lab. It is designed to protect users from malware and is primarily designed for computers running Microsoft Windows and macOS, although a version for Linux is available for business consumers.
Product
Kaspersky Anti-Virus features include real-time protection, detection and removal of viruses, trojans, worms, spyware, adware, keyloggers, malicious tools and auto-dialers, as well as detection and removal of rootkits.
Microsoft Windows users may download an antivirus rescue disk that scans the host computer during booting inside an isolated Linux environment. In addition, Kaspersky Anti-Virus prevents malware from disabling it without user permission, by prompting for a password when protection elements are disabled or internal settings are changed. It also scans incoming instant messenger traffic and email traffic, automatically disables links to known malware-hosting sites while using Internet Explorer or Firefox, and includes free technical support and free product upgrades within paid-subscription periods.
Limits
Kaspersky Anti-Virus lacks certain features found in Kaspersky Internet Security. These missing features include a personal firewall, HIPS, Secure Keyboard, AntiSpam, AntiBanner and parental control tools.
Also, Kaspersky, like the majority of its competitors, is incompatible with many other anti-virus and anti-spyware software.
Security vulnerabilities
In 2005, two critical flaws were discovered in Kaspersky Anti-Virus. One could let attackers commandeer systems that use it, and one allowed CHM files to insert malicious code. Within days, the software maker offered preliminary protection to customers, and a week later a permanent patch was made available.
Operating systems
Microsoft Windows
Kaspersky Anti-Virus was initially developed for Windows, so that system has been supported with a client application from the very beginning.
Linux
An edition of Kaspersky's anti-virus solution for Linux workstations is available to business consumers. It offers many of the features included in the mainstream version for Windows, including on-access and on-demand scanners.
Specialized editions of Kaspersky Anti-Virus are also available for a variety of Linux servers and offer protection from most forms of malware.
Apple Mac OS X / macOS (since 2016)
The Macintosh edition of Kaspersky Anti-Virus is compatible with Intel-based Macs running Mac OS X Tiger and later, including Mac OS X Snow Leopard, released in August 2009. Kaspersky Lab's internal testing found a CPU performance impact of only 2%, and the product is designed to maintain the user-friendly, Mac-like interface with which Mac users are familiar. Kaspersky Anti-Virus for Mac contains definitions to detect and block malware affecting Windows, Linux and macOS alike. It also scans the shared folders of users running Windows with Virtual PC on capable Apple Macintosh personal computers.
System requirements
A DVD-ROM or CD-ROM drive, Internet Explorer 8 or above and Windows Installer 3.0 or above are also required for the installation of Kaspersky Anti-Virus in Windows. The latest version can either be downloaded from their official website or purchased through retail.
Awards
According to AV-Comparatives, Kaspersky Anti-Virus rates highly amongst virus scanners in terms of detection rates and malware removal, even though the program failed two Virus Bulletin tests in 2007 and another two in 2008. For example, in a malware removal test done by AV-Comparatives, Kaspersky Anti-Virus 2013 was awarded the highest "Advanced+" rating and successfully removed all 14 malware samples used in that test; in the following file detection test it achieved the same "Advanced+" rating with a 99.2% sample detection rate. In addition, PC World awarded Kaspersky Anti-Virus 6 the highest rank in its 2007 anti-virus comparative, and Ars Technica lists Kaspersky as one of the best choices for anti-virus software on the Windows platform.
Kaspersky Anti-Virus was "A-listed" by the UK PC journal PC Pro in late 2007, where it scored very highly for detection and removal of malware. PC Pro attributes this to “a combination of the software’s heuristic scanning and uncompromising approach to database updates. While many packages check for new virus signatures on a daily basis, Kaspersky runs to an hourly schedule, improving your chances of being immunized before an infection reaches it.”
Criticisms and controversies
In March 2015, Bloomberg accused Kaspersky of having close ties to Russian military and intelligence officials.
Kaspersky criticized the article in his blog, calling the coverage "sensationalist" and guilty of "exploiting paranoia" to "increase readership".
In June 2015, leaked documents indicated that agents of the United States National Security Agency and the United Kingdom's Government Communications Headquarters had reverse-engineered Kaspersky antivirus software so that they could spy on its users.
See also
Antivirus software
Comparison of antivirus software
Comparison of firewalls
Comparison of computer viruses
Kaspersky Internet Security
Eugene Kaspersky
Natalya Kaspersky
References
External links
Antivirus software
Proprietary software
2006 software
Shareware
Windows security software
MacOS security software
Linux security software |
63033051 | https://en.wikipedia.org/wiki/International%20Networking%20Working%20Group | International Networking Working Group | The International Networking Working Group (INWG) was a group of prominent computer science researchers in the 1970s who studied and developed standards and protocols for computer networking. Set up in 1972 as an informal group to consider the technical issues involved in connecting different networks, it became a subcommittee of the International Federation for Information Processing later that year. Ideas developed by members of the group contributed to the original "Protocol for Packet Network Intercommunication" proposed by Bob Kahn and Vint Cerf in 1974.
History
The International Networking Working Group formed in October 1972 at the International Conference on Computer Communication held in Washington D.C. Its purpose was to study and develop communication protocols and standards for internetworking. The group was modelled on the ARPANET "Networking Working Group" created by Steve Crocker.
Vint Cerf led the INWG and other active members included Alex McKenzie, Donald Davies, Roger Scantlebury, Louis Pouzin and Hubert Zimmermann. These researchers represented the American ARPANET, the French CYCLADES project, and the British team working on the NPL network and European Informatics Network. Pouzin arranged affiliation with the International Federation for Information Processing (IFIP), and INWG became IFIP working group 1 under Technical Committee 6 (Data Communication) with the title "International Packet Switching for Computer Sharing". This standing, although informal, enabled the group to provide technical input on packet networking to CCITT and ISO.
In September 1973, Bob Kahn (who was not a member of INWG) and Vint Cerf gave a paper at an INWG meeting at the University of Sussex in England. Their ideas were refined further in long discussions with Davies, Scantlebury, Pouzin and Zimmermann. Louis Pouzin introduced the term catenet, the original term for an interconnected network, in October 1973. Zimmermann published a paper "Standard host-host protocol for heterogeneous computer networks" in April 1974, and Pouzin published a May 1974 paper "A Proposal for Interconnecting Packet Switching Networks". Kahn and Cerf also published their proposal in May 1974, "A Protocol for Packet Network Intercommunication", which introduced the term internet as a shorthand for internetwork. The paper acknowledged several members of the INWG.
Over three years, the group shared numerous numbered 'notes'. There were two competing proposals: INWG 37, based on the early Transmission Control Program proposed by Kahn and Cerf (updated in INWG 72), and INWG 61, based on the CYCLADES TS (transport station) protocol proposed by Pouzin and Zimmermann. There were two sticking points (how fragmentation should work, and whether the data flow was an undifferentiated stream or maintained the integrity of the units sent). These were not major differences, and after "hot debate" a synthesis was proposed in INWG 96.
This protocol, agreed by the group in 1975 and titled "Proposal for an international end to end protocol", was written by Vint Cerf, Alex McKenzie, Roger Scantlebury, and Hubert Zimmermann. It was presented to the CCITT in 1976 by Derek Barber, who became INWG chair earlier that year. Although the protocol was adopted by networks in Europe, it was not adopted by the CCITT or by the ARPANET. The CCITT went on to adopt the X.25 standard in 1976, based on virtual circuits, and ARPA ultimately developed the Internet protocol suite, based on the Internet Protocol as the connectionless layer and the Transmission Control Protocol as a reliable connection-oriented service.
Later international work led to the OSI model in 1984, of which many members of the INWG became advocates. During the 'Protocol Wars' of the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite would result in the best and most robust computer networks. ARPA partnerships with the telecommunication and computer industry led to widespread private sector adoption of the Internet protocol suite as a communication protocol.
The INWG continued to work on protocol design and formal specification until the 1990s when it disbanded as the Internet grew rapidly. Nonetheless, issues with the Internet Protocol suite remain and alternatives have been proposed building on INWG ideas such as Recursive Internetwork Architecture.
Members
The group had about 100 members, including the following:
D. Barber
B. Barker
V. Cerf
W. Clipsham
D. Davies
R. Despres
V. Detwiler
F. Heart
A. McKenzie
L. Pouzin
O. Riml
K. Samuelson
K. Sandum
R. Scantlebury
B. Sexton
P. Shanks
C.D. Shepard
J. Tucker
B. Wessler
H. Zimmerman
See also
Coloured Book protocols
History of the Internet
Protocol Wars
Notes
References
Further reading
Communications protocols
Network protocols |
29687795 | https://en.wikipedia.org/wiki/ModelSheet | ModelSheet | ModelSheet Software LLC is a venture-funded software company focused on business analytics and based in Arlington, Massachusetts.
History
ModelSheet was founded by two MIT graduates, Richard Petti and Howard Cannon, who earlier worked together at Symbolics and later in the division spun out as Macsyma. After the Macsyma episode in the 1980s and 1990s the pair took separate career paths (Petti at The MathWorks, Cannon at Groton NeoChem and SciQuest), then rejoined to found ModelSheet in 2007.
Strategy, products and services
ModelSheet Software was founded to put more desktop modeling power in the hands of business experts without requiring them to become programmers. The spreadsheet is the classic example of such an end-user development tool, but its cell-based paradigm has its limitations. ModelSheet technology attempts to address these limitations with two types of products: Custom Spreadsheet Solutions and the ModelSheet Authoring Environment.
By filling in a simple form, users of a Custom Spreadsheet Solution obtain a custom spreadsheet workbook without having to edit spreadsheets or cell formulas. Users can set three aspects of spreadsheet models: time series (time range, time grains, and rollup grains), dimensions (e.g. a list of products organized in product families) and turning model features on or off.
ModelSheet offers Custom Spreadsheet Solutions for many common business tasks in corporate finance (financial plans, cash flow analysis, cap tables, activity-based budgets, etc.), marketing and sales analysis (sales plans, marketing program effectiveness, price elasticity, etc.) and other areas. Users can download the Custom Spreadsheet Solutions as Excel workbooks, or upload them to an account on Google Docs.
The ModelSheet Authoring Environment provides all functionality needed to build spreadsheet models from scratch and edit existing models. It retains the visual flavor of spreadsheets, while adding in model structures such as named variables, symbolic formulas with varying scopes, time series, dimensions, and controls for optional features and automated operations. ModelSheet Authoring is the technical backend of Custom Spreadsheets. For intermediate cases (where more flexibility is needed, but the customer doesn't require frequent use of the Authoring Environment itself) the company offers consulting services in which a ModelSheet engineer uses the Authoring Environment to create a spreadsheet to the customer's specification.
Criticism
Testing a pre-beta version in 2008, journalist Dennis Howlett of ZDNet concluded that ModelSheet was "good in theory" but "needs more work." Howlett criticized several aspects: Windows-only platform support, bugs, poor usability, choice of fonts, and a dull quick start guide. Although "bemused" by the product, Howlett did concede "Modelsheet is at an early stage of development and I'm sure [it] will improve over time."
See also
Macsyma (for information on founders' earlier work together there)
Microsoft Excel
Spreadsheet
End-user development
Business Analytics
Business Intelligence
Business Intelligence 2.0
Notes
References
http://www.manta.com/c/mtbr350/modelsheet-software-llc
http://venturebeatprofiles.com/company/profile/modelsheet-software
External links
Official web site
Software companies based in Massachusetts
American companies established in 2007
Software companies of the United States |
3070397 | https://en.wikipedia.org/wiki/Software%20visualization | Software visualization | Software visualization or software visualisation refers to the visualization of information of and related to software systems—either the architecture of its source code or metrics of their runtime behavior—and their development process by means of static, interactive or animated 2-D or 3-D visual representations of their structure, execution, behavior, and evolution.
Software system information
Software visualization uses a variety of information available about software systems. Key information categories include:
implementation artifacts such as source codes,
software metric data from measurements or from reverse engineering,
traces that record execution behavior,
software testing data (e.g., test coverage)
software repository data that tracks changes.
Objectives
The objectives of software visualization are to support the understanding of software systems (i.e., its structure) and algorithms (e.g., by animating the behavior of sorting algorithms) as well as the analysis and exploration of software systems and their anomalies (e.g., by showing classes with high coupling) and their development and evolution. One of the strengths of software visualization is to combine and relate information of software systems that are not inherently linked, for example by projecting code changes onto software execution traces.
Software visualization can be used as a tool and technique to explore and analyze software system information, e.g., to discover anomalies, similar to the process of visual data mining. For example, software visualization is used for monitoring activities such as code quality or team activity. Visualization is not inherently a method for software quality assurance. Software visualization contributes to software intelligence by helping developers discover and master the inner components of software systems.
Types
Tools for software visualization might be used to visualize source code and quality defects during software development and maintenance activities. There are different approaches to mapping source code to a visual representation, such as software maps. Their objectives include, for example, the automatic discovery and visualization of quality defects in object-oriented software systems and services. Commonly, they visualize the direct relationship of a class and its methods with other classes in the software system and mark potential quality defects. A further benefit is the support for visual navigation through the software system.
More or less specialized graph drawing software is used for software visualization. A small-scale 2003 survey of researchers active in the reverse engineering and software maintenance fields found that a wide variety of visualization tools were used, including general purpose graph drawing packages like GraphViz and GraphEd, UML tools like Rational Rose and Borland Together, and more specialized tools like Visualization of Compiler Graphs (VCG) and Rigi. The range of UML tools that can act as a visualizer by reverse engineering source is by no means short; a 2007 book noted that besides the two aforementioned tools, ESS-Model, BlueJ, and Fujaba also have this capability, and that Fujaba can also identify design patterns.
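As a minimal, hypothetical illustration of how a general-purpose graph drawing package such as GraphViz can be fed from dependency data, the following Python sketch writes a DOT description of made-up class dependencies; the class names and edges are invented for the example, and a real tool would extract them from source code or reverse-engineering output.
# Sketch: emit a Graphviz DOT file for a set of (hypothetical) class dependencies.
dependencies = {
    "OrderService": ["Order", "CustomerRepository"],
    "CustomerRepository": ["Customer"],
    "Order": ["Customer"],
}
lines = ["digraph classes {"]
for source, targets in dependencies.items():
    for target in targets:
        lines.append('    "%s" -> "%s";' % (source, target))
lines.append("}")
with open("classes.dot", "w") as f:
    f.write("\n".join(lines))
# The file can then be rendered with, e.g., "dot -Tpng classes.dot -o classes.png".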
See also
Programs
Imagix 4D
NDepend
Sourcetrail
Sotoarc
Related concepts
Application discovery and understanding
Software maintenance
Software maps
Software diagnosis
Cognitive dimensions of notations
Software archaeology
References
Further reading
External links
SoftVis the ACM Symposium on Software Visualization
VISSOFT 2nd IEEE Working Conference on Software Visualization
EPDV Eclipse Project Dependencies Viewer
Infographics
Software maintenance
Software metrics
Software development
Software quality
Source code
Software
Visualization software |
2230 | https://en.wikipedia.org/wiki/Analysis%20of%20algorithms | Analysis of algorithms | In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst case inputs to the algorithm.
The term "analysis of algorithms" was coined by Donald Knuth. Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms.
In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O notation, Big-omega notation and Big-theta notation are used to this end. For instance, binary search is said to run in a number of steps proportional to the logarithm of the size of the sorted list being searched, or in O(log n), colloquially "in logarithmic time". Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant.
Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually require certain assumptions concerning the particular implementation of the algorithm, called model of computation. A model of computation may be defined in terms of an abstract computer, e.g. Turing machine, and/or by postulating that certain operations are executed in unit time.
For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log2(n) + 1 time units are needed to return an answer.
Cost models
Time efficiency estimates depend on what we define to be a step. For the analysis to correspond usefully to the actual run-time, the time required to perform a step must be guaranteed to be bounded above by a constant. One must be careful here; for instance, some analyses count an addition of two numbers as one step. This assumption may not be warranted in certain contexts. For example, if the numbers involved in a computation may be arbitrarily large, the time required by a single addition can no longer be assumed to be constant.
Two cost models are generally used:
the uniform cost model, also called uniform-cost measurement (and similar variations), assigns a constant cost to every machine operation, regardless of the size of the numbers involved
the logarithmic cost model, also called logarithmic-cost measurement (and similar variations), assigns a cost to every machine operation proportional to the number of bits involved
The latter is more cumbersome to use, so it's only employed when necessary, for example in the analysis of arbitrary-precision arithmetic algorithms, like those used in cryptography.
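The difference between the two models can be sketched in a few lines of Python; the cost units below are arbitrary, and the example only illustrates how the two accounting schemes diverge for large operands.
# Sketch: charge a list summation under the uniform and the logarithmic cost model.
def uniform_cost(numbers):
    # uniform model: every addition counts as one step, whatever the operand size
    return len(numbers) - 1
def logarithmic_cost(numbers):
    # logarithmic model: each addition is charged the bit length of its larger operand
    cost, total = 0, 0
    for x in numbers:
        cost += max(total.bit_length(), x.bit_length(), 1)
        total += x
    return cost
small = [7] * 1000
large = [2**4096] * 1000
print(uniform_cost(small), uniform_cost(large))          # identical step counts
print(logarithmic_cost(small), logarithmic_cost(large))  # the second is far more costly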
A key point which is often overlooked is that published lower bounds for problems are often given for a model of computation that is more restricted than the set of operations that you could use in practice and therefore there are algorithms that are faster than what would naively be thought possible.
Run-time analysis
Run-time analysis is a theoretical classification that estimates and anticipates the increase in running time (or run-time or execution time) of an algorithm as its input size (usually denoted as ) increases. Run-time efficiency is a topic of great interest in computer science: A program can take seconds, hours, or even years to finish executing, depending on which algorithm it implements. While software profiling techniques can be used to measure an algorithm's run-time in practice, they cannot provide timing data for all infinitely many possible inputs; the latter can only be achieved by the theoretical methods of run-time analysis.
Shortcomings of empirical metrics
Since algorithms are platform-independent (i.e. a given algorithm can be implemented in an arbitrary programming language on an arbitrary computer running an arbitrary operating system), there are additional significant drawbacks to using an empirical approach to gauge the comparative performance of a given set of algorithms.
Take as an example a program that looks up a specific entry in a sorted list of size n. Suppose this program were implemented on Computer A, a state-of-the-art machine, using a linear search algorithm, and on Computer B, a much slower machine, using a binary search algorithm. Benchmark testing on the two computers running their respective programs might look something like the following:
Based on these metrics, it would be easy to jump to the conclusion that Computer A is running an algorithm that is far superior in efficiency to that of Computer B. However, if the size of the input-list is increased to a sufficient number, that conclusion is dramatically demonstrated to be in error:
Computer A, running the linear search program, exhibits a linear growth rate. The program's run-time is directly proportional to its input size. Doubling the input size doubles the run-time, quadrupling the input size quadruples the run-time, and so forth. On the other hand, Computer B, running the binary search program, exhibits a logarithmic growth rate. Quadrupling the input size only increases the run-time by a constant amount (in this example, 50,000 ns). Even though Computer A is ostensibly a faster machine, Computer B will inevitably surpass Computer A in run-time because it's running an algorithm with a much slower growth rate.
Orders of growth
Informally, an algorithm can be said to exhibit a growth rate on the order of a mathematical function f(n) if beyond a certain input size n0, the function f(n) times a positive constant provides an upper bound or limit for the run-time of that algorithm. In other words, for a given input size n greater than some n0 and a constant c, the run-time of that algorithm will never be larger than c × f(n). This concept is frequently expressed using Big O notation. For example, since the run-time of insertion sort grows quadratically as its input size increases, insertion sort can be said to be of order O(n^2).
Big O notation is a convenient way to express the worst-case scenario for a given algorithm, although it can also be used to express the average-case — for example, the worst-case scenario for quicksort is O(n^2), but the average-case run-time is O(n log n).
Empirical orders of growth
Assuming the run-time follows the power rule, t ≈ k n^a, the coefficient a can be found by taking empirical measurements of run-time {t1, t2} at some problem-size points {n1, n2}, and calculating t2/t1 = (n2/n1)^a so that a = log(t2/t1) / log(n2/n1). In other words, this measures the slope of the empirical line on the log–log plot of run-time vs. input size, at some size point. If the order of growth indeed follows the power rule (and so the line on the log–log plot is indeed a straight line), the empirical value of a will stay constant at different ranges, and if not, it will change (and the line is a curved line)—but still could serve for comparison of any two given algorithms as to their empirical local orders of growth behaviour. Applied to the above table:
It is clearly seen that the first algorithm exhibits a linear order of growth indeed following the power rule. The empirical values for the second one are diminishing rapidly, suggesting it follows another rule of growth and in any case has much lower local orders of growth (and improving further still), empirically, than the first one.
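A minimal Python sketch of this calculation; the (n, t) measurement pairs below are hypothetical and serve only to show how the local order of growth is read off two data points.
import math
# Sketch: local order of growth between two (problem size, run-time) measurements.
def local_order(n1, t1, n2, t2):
    # slope of the run-time curve on a log-log plot between the two points
    return math.log(t2 / t1) / math.log(n2 / n1)
print(local_order(1000, 2.0, 4000, 32.0))  # 2.0: quadratic growth over this range
print(local_order(1000, 2.0, 4000, 8.0))   # 1.0: linear growth over this range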
Evaluating run-time complexity
The run-time complexity for the worst-case scenario of a given algorithm can sometimes be evaluated by examining the structure of the algorithm and making some simplifying assumptions. Consider the following pseudocode:
1 get a positive integer n from input
2 if n > 10
3 print "This might take a while..."
4 for i = 1 to n
5 for j = 1 to i
6 print i * j
7 print "Done!"
A given computer will take a discrete amount of time to execute each of the instructions involved with carrying out this algorithm. The specific amount of time to carry out a given instruction will vary depending on which instruction is being executed and which computer is executing it, but on a conventional computer, this amount will be deterministic. Say that the actions carried out in step 1 are considered to consume time T1, step 2 uses time T2, and so forth.
In the algorithm above, steps 1, 2 and 7 will only be run once. For a worst-case evaluation, it should be assumed that step 3 will be run as well. Thus the total amount of time to run steps 1-3 and step 7 is: T1 + T2 + T3 + T7.
The loops in steps 4, 5 and 6 are trickier to evaluate. The outer loop test in step 4 will execute (n + 1) times (note that an extra step is required to terminate the for loop, hence n + 1 and not n executions), which will consume T4(n + 1) time. The inner loop, on the other hand, is governed by the value of j, which iterates from 1 to i. On the first pass through the outer loop, j iterates from 1 to 1: the inner loop makes one pass, so running the inner loop body (step 6) consumes T6 time, and the inner loop test (step 5) consumes 2T5 time. During the next pass through the outer loop, j iterates from 1 to 2: the inner loop makes two passes, so running the inner loop body (step 6) consumes 2T6 time, and the inner loop test (step 5) consumes 3T5 time.
Altogether, the total time required to run the inner loop body can be expressed as an arithmetic progression: T6 + 2T6 + 3T6 + ... + (n - 1)T6 + nT6
which can be factored as T6[1 + 2 + 3 + ... + (n - 1) + n] = T6[n(n + 1)/2].
The total time required to run the outer loop test can be evaluated similarly: 2T5 + 3T5 + 4T5 + ... + nT5 + (n + 1)T5
which can be factored as T5[2 + 3 + 4 + ... + n + (n + 1)] = T5[(n^2 + 3n)/2].
Therefore, the total run-time for this algorithm is: f(n) = T1 + T2 + T3 + T7 + (n + 1)T4 + [n(n + 1)/2]T6 + [(n^2 + 3n)/2]T5
which reduces to f(n) = [n^2/2](T5 + T6) + n[T4 + (3/2)T5 + (1/2)T6] + (T1 + T2 + T3 + T4 + T7).
As a rule-of-thumb, one can assume that the highest-order term in any given function dominates its rate of growth and thus defines its run-time order. In this example, n^2 is the highest-order term, so one can conclude that f(n) = O(n^2); formally, this can be proven by exhibiting constants c and n0 such that f(n) ≤ c × n^2 for all n > n0.
A more elegant approach to analyzing this algorithm would be to declare that [T1..T7] are all equal to one unit of time, in a system of units chosen so that one unit is greater than or equal to the actual times for these steps. Counting every step as one such unit then yields the same O(n^2) bound with far less bookkeeping.
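The step counts derived above can also be checked empirically. The following Python sketch counts how often the inner loop body of the pseudocode executes (the print in step 6 is replaced by a counter), and the counts grow as roughly n^2/2, consistent with the quadratic bound.
# Sketch: count executions of the inner loop body of the pseudocode above.
def inner_loop_executions(n):
    count = 0
    for i in range(1, n + 1):       # step 4
        for j in range(1, i + 1):   # step 5
            count += 1              # step 6, with the print replaced by a counter
    return count
for n in (10, 100, 1000):
    print(n, inner_loop_executions(n))  # prints 55, 5050 and 500500, approximately n*n/2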
Growth rate analysis of other resources
The methodology of run-time analysis can also be utilized for predicting other growth rates, such as consumption of memory space. As an example, consider the following pseudocode which manages and reallocates memory usage by a program based on the size of a file which that program manages:
while file is still open:
let n = size of file
for every 100,000 kilobytes of increase in file size
double the amount of memory reserved
In this instance, as the file size n increases, memory will be consumed at an exponential growth rate, which is order O(2^n). This is an extremely rapid and most likely unmanageable growth rate for consumption of memory resources.
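A short Python sketch of the doubling policy described by the pseudocode; the initial reservation of 100,000 kilobytes is an assumption made for illustration, as the pseudocode does not specify it.
# Sketch: memory reserved as a function of file size under the doubling policy.
def reserved_memory_kb(file_size_kb, initial_kb=100000):  # initial_kb is assumed
    doublings = file_size_kb // 100000   # one doubling per 100,000 kilobytes of growth
    return initial_kb * (2 ** doublings)
for size_kb in (100000, 500000, 1000000):
    print(size_kb, reserved_memory_kb(size_kb))  # the reservation grows exponentially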
Relevance
Algorithm analysis is important in practice because the accidental or unintentional use of an inefficient algorithm can significantly impact system performance. In time-sensitive applications, an algorithm taking too long to run can render its results outdated or useless. An inefficient algorithm can also end up requiring an uneconomical amount of computing power or storage in order to run, again rendering it practically useless.
Constant factors
Analysis of algorithms typically focuses on the asymptotic performance, particularly at the elementary level, but in practical applications constant factors are important, and real-world data is in practice always limited in size. The limit is typically the size of addressable memory, so on 32-bit machines 2^32 = 4 GiB (greater if segmented memory is used) and on 64-bit machines 2^64 = 16 EiB. Thus given a limited size, an order of growth (time or space) can be replaced by a constant factor, and in this sense all practical algorithms are O(1) for a large enough constant, or for small enough data.
This interpretation is primarily useful for functions that grow extremely slowly: (binary) iterated logarithm (log*) is less than 5 for all practical data (2^65536 bits); (binary) log-log (log log n) is less than 6 for virtually all practical data (2^64 bits); and binary log (log n) is less than 64 for virtually all practical data (2^64 bits). An algorithm with non-constant complexity may nonetheless be more efficient than an algorithm with constant complexity on practical data if the overhead of the constant-time algorithm results in a larger constant factor, e.g., one may have K > k log log n so long as K/k > 6 and n < 2^(2^6) = 2^64.
For large data linear or quadratic factors cannot be ignored, but for small data an asymptotically inefficient algorithm may be more efficient. This is particularly used in hybrid algorithms, like Timsort, which use an asymptotically efficient algorithm (here merge sort, with time complexity O(n log n)), but switch to an asymptotically inefficient algorithm (here insertion sort, with time complexity O(n^2)) for small data, as the simpler algorithm is faster on small data.
See also
Amortized analysis
Analysis of parallel algorithms
Asymptotic computational complexity
Best, worst and average case
Big O notation
Computational complexity theory
Master theorem (analysis of algorithms)
NP-Complete
Numerical analysis
Polynomial time
Program optimization
Profiling (computer programming)
Scalability
Smoothed analysis
Termination analysis — the subproblem of checking whether a program will terminate at all
Time complexity — includes table of orders of growth for common algorithms
Information-based complexity
Notes
References
External links
Computational complexity theory |
548115 | https://en.wikipedia.org/wiki/Macintosh%20128K | Macintosh 128K | The Macintosh 128K, originally released as the Apple Macintosh, is the original Apple Macintosh personal computer. Its beige case contained a CRT monitor, and it came with a keyboard and mouse. It played a pivotal role in establishing desktop publishing as a general office function. A handle built into the top of the case made it easier for the computer to be lifted and carried. It had an initial selling price of US$2,495. The Macintosh was introduced by the now-famous US$370,000 television commercial directed by Ridley Scott, "1984", which aired on CBS during the third quarter of Super Bowl XVIII on January 22, 1984. Sales of the Macintosh were strong from its initial release on January 24, 1984, and reached 70,000 units on May 3, 1984. Upon the release of its successor, the Macintosh 512K, it was rebranded as the Macintosh 128K. The computer's model number was M0001.
Processor and memory
The heart of the computer was a Motorola 68000 microprocessor running at 7.8336 MHz, connected to 128 KB RAM shared by the processor and the display controller. The boot procedure and some operating system routines were contained in an additional 64 KB ROM chip. Apple did not offer RAM upgrades. Unlike the Apple II, no source code listings of the Macintosh system ROMs were offered.
The RAM in the Macintosh consisted of sixteen 64k×1 DRAMs. The 68000 and video controller took turns accessing DRAM every four CPU cycles during display of the frame buffer, while the 68000 had unrestricted access to DRAM during vertical and horizontal blanking intervals. Such an arrangement reduced the overall performance of the CPU by as much as 35% for most code, as the display logic often blocked the CPU's access to RAM. Despite the nominally high clock rate, this caused the computer to run slower than several of its competitors and resulted in an effective clock rate of 6 MHz.
Peripherals
The built-in display was a one-bit, black-and-white, 9 in/23 cm CRT with a fixed resolution of 512 × 342 pixels, using the Apple standard of 72 ppi (pixels per inch), a standard that was quickly abandoned once higher resolution screens became available. Expansion and networking were achieved using two non-standard RS-422 DE-9 serial ports named "printer" and "modem", which did not support hardware handshaking. An external floppy disk drive could be added using a proprietary connector (19-pin D-sub). The keyboard and mouse used simple proprietary protocols, allowing some third-party upgrades. The original keyboard had no arrow keys, numeric keypad or function keys. This was an intentional decision by Apple, as these keys were common on older platforms and it was thought that the addition of these keys would encourage software developers to simply port their existing applications to the Mac, rather than design new ones around the GUI paradigm. Later, Apple made a numeric keypad available for the Macintosh 128K. The keyboard sold with the newer Macintosh Plus model included the numeric keypad and arrow keys, but still no function keys. As with the Apple Lisa before it, the mouse had a single button. Standard headphones could also be connected to a monaural jack. Apple also offered their 300 and 1200 baud modems originally released for the Apple II line. Initially, the only printer available was the Apple ImageWriter, a dot matrix printer which was designed to produce 144 dpi WYSIWYG output from the Mac's 72 dpi screen. Eventually, the LaserWriter and other printers were capable of being connected using AppleTalk, Apple's built-in networking system.
Storage
The Macintosh contained a single 400 KB, single-sided 3½-inch floppy disk drive, dedicating no space to other internal mechanical storage. The Mac OS was disk-based from the beginning, as RAM had to be conserved, but this "Startup Disk" could still be temporarily ejected. (Ejecting the root filesystem remained an unusual feature of the classic Mac OS until System 7.) One floppy disk was sufficient to store the System Software, an application and the data files created with the application. The 400 KB drive capacity was larger than the PC XT's 360 KB 5.25-inch drive; however, more sophisticated work environments of the time required separate disks for documents and the system installation. Due to the memory constraints (128 KB) of the original Macintosh, and the fact that the floppies could hold only 400 KB, users had to frequently swap disks in and out of the floppy drive, which caused external floppy drives to be utilized more frequently. The Macintosh External Disk Drive (mechanically identical to the internal one, piggybacking on the same controller) was a popular add-on that cost US$495. Third-party hard drives were considerably more expensive and usually connected to the slower serial port (as specified by Apple), although a few manufacturers chose to utilize the faster non-standard floppy port. The 128K can only use the original Macintosh File System released in 1984 for storage.
Cooling
The unit did not include a fan, relying instead on convective heat transfer, which made it quiet while in operation. Steve Jobs insisted that the Macintosh ship without a fan, which persisted until the introduction of the Macintosh SE in 1987. Jobs believed that computers equipped with fans tend to distract the user from completing work. Unfortunately, this was allegedly a source of many common, costly component failures in the first four Macintosh models. This was enough of a problem to prompt the introduction of a third-party, external cooling fan. This fan unit fitted inside the Macintosh's carrying-handle slot and produced a forced draft through the computer's existing ventilation holes.
Software
The Macintosh shipped with the very first System and Finder application, known to the public as "System 1.0" (formally known as System 0.97 and Finder 1.0). The original Macintosh saw three upgrades to both before it was discontinued. Apple recommends System 2.0 and Finder 4.2, with System 3.2 and Finder 5.3 as the maximum. System 4.0 officially dropped support for the Macintosh 128K because it was distributed on 800 KB floppy disks, which could not be used by the 128K.
The applications MacPaint and MacWrite were bundled with the Mac. Other programs available included MacProject, MacTerminal and Microsoft Word. Programming languages available at the time included MacBASIC, MacPascal and the Macintosh 68000 Development System. The Macintosh also came with a manual and a unique guided tour cassette tape which worked together with the guided tour diskette as a tutorial for both the Macintosh itself and the bundled applications, since most new Macintosh users had never used a mouse before, much less manipulated a graphical user interface.
Models
The computer was released in January 1984 as simply the Apple Macintosh. Following the release of the Macintosh 512K in September, which expanded the memory from 128 KB to 512 KB, the original Macintosh was re-branded Macintosh 128K and nicknamed the "thin Mac". The new 512K model was nicknamed the "fat Mac". While functionally the same, as closed systems, the Macintosh and Macintosh 128K were technically two different computers, with the re-badged 128K containing a completely redesigned logic board to easily accommodate both 128 KB and 512 KB RAM configurations during manufacturing. Though the RAM was still permanently soldered to the logic board, the new design allowed for easier (though unsanctioned) third-party upgrades to 512 KB. In addition, most of the newer models contained the 1984 revision B of the ROM to accommodate changes in the 400 KB floppy disk drive. System software contains support for an unreleased Macintosh 256K.
The increased RAM of the 512K was vitally important for the Macintosh as it finally allowed for more powerful software applications, such as the then-popular Microsoft Multiplan. However, Apple continued to market the 128K for over a year as an entry-level computer, the mid-level 512K and high-end Lisa (and claiming that it could be easily expanded should the user ever need more RAM).
Expansion
Jobs stated that because "customization really is mostly software now ... most of the options in other computers are in Mac", unlike the Apple II the Macintosh 128K did not need slots, which he described as costly and requiring larger size and more power. It was not officially upgradable by the user and only Apple service centers were permitted to open the case. There were third parties that did offer RAM upgrades and even memory and CPU upgrades, allowing the original 128 kB Macintosh to be expanded to a 4 MB 32-bit data path, 68020 CPU (16 MHz), 68881 FPU (16 MHz), 68851 MMU (16 MHz) with an external SCSI port (with a ribbon cable out the clock battery door, internal SCSI hard drive (20 MB Rodime) and a piezo-electric fan for cooling. This upgrade was featured on a Macworld magazine cover titled "Faster than a Vax" in August 1986. All accessories were external, such as the MacCharlie that added IBM PC compatibility. There was no provision for adding internal storage, more RAM or any upgrade cards; however, some of the Macintosh engineers objected to Jobs's ideas and secretly developed workarounds for them. As an example, the Macintosh was supposed to have only 17 address lines on the motherboard, enough to support 128 KB of system RAM, but the design team added two address lines without Jobs's knowledge, making it possible to expand the computer to 512 KB, although the actual act of upgrading system RAM was difficult and required piggybacking additional RAM chips atop the onboard 4164 chips. In September 1984, after months of complaints over the Mac's inadequate RAM, Apple released an official 512 KB machine. Although this had always been planned from the beginning, Steve Jobs maintained if the user desired more RAM than the Mac 128 provided, he should simply pay extra money for a Mac 512 rather than upgrade the computer himself. When the Mac 512 was released, Apple rebranded the original model as "Macintosh 128k" and modified the motherboard to allow easier RAM upgrades. Improving on the hard-wired RAM thus required a motherboard replacement (which was priced similarly to a new computer), or a third-party chip replacement upgrade, which was not only expensive but would void Apple's warranty. The difficulty of fitting software into its limited free memory, coupled with the new interface and event driven programming model, discouraged software vendors from supporting it, leaving the 128K with a relatively small software library. Whereas the Macintosh Plus, and to a lesser extent the Macintosh 512K, are compatible with much later software, the 128K is limited to specially crafted programs. A stock Mac 128K with the original 64K ROM is incompatible with either Apple's external 800 KB drive with HFS or Apple's Hard Disk 20. A Mac 128K that has been upgraded with the newer 128 KB ROM (called a Macintosh 128Ke) can use internal and external 800 KB drives with HFS, as well as the HD20. Both can print on an AppleShare network, but neither can do file sharing because of their limited RAM.
OEM upgrades
By early 1985 much Macintosh software required 512K of memory. Apple sold an official memory upgrade for the Macintosh 128K, which included a motherboard replacement effectively making it a Macintosh 512K, for the price of US$995. Additionally, Apple offered an 800 KB floppy disk drive kit, including updated 128K ROMs. Finally, a Mac 128K could be upgraded to a Macintosh Plus by swapping the logic board as well as the case back (to accommodate the slightly different port configuration) and optionally adding the Macintosh Plus extended keyboard. Any of the kits could be purchased alone or together at any time, for a partial or full upgrade for the Macintosh 128K. All upgrades were required to be performed by professional Apple technicians, who reportedly refused to work on any Macintosh upgraded to 512K without Apple's official upgrade, which at US$700 was much more expensive than about US$300 for third-party versions.
Credits
The original Macintosh was unusual in that it included the signatures of the Macintosh Division as of early 1982 molded on the inside of the case. The names were Peggy Alexio, Colette Askeland, Bill Atkinson, Steve Balog, Bob Belleville, Mike Boich, Bill Bull, Matt Carter, Berry Cash, Debi Coleman, George Crow, Donn Denman, Christopher Espinosa, Bill Fernandez, Martin Haeberli, Andy Hertzfeld, Joanna Hoffman, Rod Holt, Bruce Horn, Hap Horn, Brian Howard, Steve Jobs, Larry Kenyon, Patti King, Daniel Kottke, Angeline Lo, Ivan Mach, Jerrold Manock, Mary Ellen McCammon, Vicki Milledge, Mike Murray, Ron Nicholson Jr., Terry Oyama, Benjamin Pang, Jef Raskin, Ed Riddle, Brian Robertson, Dave Roots, Patricia Sharp, Burrell Smith, Bryan Stearns, Lynn Takahashi, Guy "Bud" Tribble, Randy Wigginton, Linda Wilkin, Steve Wozniak, Pamela Wyman and Laszlo Zidek.
The Macintosh 128/512K models also included Easter eggs in the OS ROM. If the user went to the system debugger and typed G 4188A4, a graphic reading "Stolen from Apple Computers" would appear in the upper left corner of the screen. This was designed to prevent unauthorized cloning of the Macintosh after numerous Apple II clones appeared, many of which simply stole Apple's copyrighted system ROMs. Steve Jobs allegedly planned that if a Macintosh clone appeared on the market and a court case happened, he could access this Easter egg on the computer to prove that it was using pirated Macintosh ROMs. The Macintosh SE later augmented this Easter Egg with a slideshow of 4 photos of the Apple design team when G 41D89A was entered.
Reception
Erik Sandberg-Diment of The New York Times in January 1984 stated that Macintosh "presages a revolution in personal computing". Although preferring larger screens and calling the lack of color a "mistake", he praised the "refreshingly crisp and clear" display and lack of fan noise. While unsure whether it would become "a second standard to Big Blue", Ronald Rosenberg of The Boston Globe wrote in February of "a euphoria that Macintosh will change how America computes. Anyone that tries the pint-size machine gets hooked by its features". Gregg Williams of BYTE that month found the hardware and software design (which it predicted would be "imitated but not copied") impressive, but criticized the lack of a standard second disk drive. He predicted that the computer would popularize the 3½ in floppy disk drive standard, that the Macintosh would improve Apple's reputation, and that it "will delay IBM's domination of the personal computer market." Williams concluded that the Macintosh was "the most important development in computers in the last five years. [It] brings us one step closer to the ideal of computer as appliance." In the May 1984 issue Williams added, "Initial reaction to the Macintosh has been strongly, but not overpoweringly, favorable. A few traditional computer users see the mouse, the windows, and the desktop metaphor as silly, useless frills, and others are outraged at the lack of color graphics, but most users are impressed by the machine and its capabilities. Still, some people have expressed concern about the relatively small 128K-byte RAM size, the lack of any computer language sent as part of the basic unit, and the inconvenience of the single disk drive."
Jerry Pournelle, also of BYTE, added that "The Macintosh is a bargain only if you can get it at the heavily discounted price offered to faculty and students of the favored 24 universities in the Macintosh consortium." He noted, however, that the Macintosh attracted people "who previously hated computers... There is, apparently, something about mice and pull-down menus and icons that appeal to people previously intimidated by A> and the like".
See also
Technical information on the Mac 128K
References
External links
Macintosh 128K profile, Low End Mac.
Mac 128K Information page at Mac512.com
The 72PPI Web Resolution Myth
Online attempt at simulating Macintosh System 1
Mac Essentials, Lost 1984 Videos
Apple Macintosh before System 7 Macintosh 128K Hardware
Tips For the 128K Support For 128K Diehard Users
The M0001 Registry Owners of Vintage Macintosh
Inside the Macintosh 128K
The Original Macintosh, anecdotes and the people who made it
128k
128k
Computer-related introductions in 1984
32-bit computers |
2584128 | https://en.wikipedia.org/wiki/TCP%20tuning | TCP tuning | TCP tuning techniques adjust the network congestion avoidance parameters of Transmission Control Protocol (TCP) connections over high-bandwidth, high-latency networks. Well-tuned networks can perform up to 10 times faster in some cases. However, blindly following instructions without understanding their real consequences can hurt performance as well.
Network and system characteristics
Bandwidth-delay product (BDP)
Bandwidth-delay product (BDP) is a term primarily used in conjunction with TCP to refer to the number of bytes necessary to fill a TCP "path", i.e. it is equal to the maximum number of simultaneous bits in transit between the transmitter and the receiver.
High performance networks have very large BDPs. To give a practical example, two nodes communicating over a geostationary satellite link with a round-trip delay time (or round-trip time, RTT) of 0.5 seconds and a bandwidth of 10 Gbit/s can have up to 0.5 × 10^10 bits, i.e., 5 Gbit = 625 MB, of unacknowledged data in flight. Despite having much lower latencies than satellite links, even terrestrial fiber links can have very high BDPs because their link capacity is so large. Operating systems and protocols designed as recently as a few years ago when networks were slower were tuned for BDPs of orders of magnitude smaller, with implications for limited achievable performance.
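The arithmetic behind the satellite example can be written as a short Python calculation (the figures are the ones used above):
# Sketch: bandwidth-delay product for the geostationary satellite example.
bandwidth_bps = 10 * 10**9   # 10 Gbit/s
rtt_s = 0.5                  # round-trip time in seconds
bdp_bits = bandwidth_bps * rtt_s
print(bdp_bits)              # 5e9 bits, i.e. 5 Gbit
print(bdp_bits / 8 / 10**6)  # 625.0 MB of unacknowledged data in flight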
Buffers
The original TCP configurations supported TCP receive window size buffers of up to 65,535 (64 KiB - 1) bytes, which was adequate for slow links or links with small RTTs. Larger buffers are required by the high performance options described below.
Buffering is used throughout high performance network systems to handle delays in the system. In general, buffer size will need to be scaled proportionally to the amount of data "in flight" at any time. For very high performance applications that are not sensitive to network delays, it is possible to interpose large end to end buffering delays by putting in intermediate data storage points in an end to end system, and then to use automated and scheduled non-real-time data transfers to get the data to their final endpoints.
TCP speed limits
Maximum achievable throughput for a single TCP connection is determined by different factors. One trivial limitation is the maximum bandwidth of the slowest link in the path. But there are also other, less obvious limits for TCP throughput. Bit errors can create a limitation for the connection as well as RTT.
Window size
In computer networking, RWIN (TCP Receive Window) is the amount of data that a computer can accept without acknowledging the sender. If the sender has not received acknowledgement for the first packet it sent, it will stop and wait and if this wait exceeds a certain limit, it may even retransmit. This is how TCP achieves reliable data transmission.
Even if there is no packet loss in the network, windowing can limit throughput. Because TCP transmits data up to the window size before waiting for the acknowledgements, the full bandwidth of the network may not always get used. The limitation caused by window size can be calculated as follows: Throughput ≤ RWIN / RTT,
where RWIN is the TCP Receive Window and RTT is the round-trip time for the path.
At any given time, the window advertised by the receive side of TCP corresponds to the amount of free receive memory it has allocated for this connection. Otherwise it would risk dropping received packets due to lack of space.
The sending side should also allocate the same amount of memory as the receive side for good performance. That is because, even after data has been sent on the network, the sending side must hold it in memory until it has been acknowledged as successfully received, just in case it would have to be retransmitted. If the receiver is far away, acknowledgments will take a long time to arrive. If the send memory is small, it can saturate and block emission. A simple computation gives the same optimal send memory size as for the receive memory size given above.
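A small Python sketch of the window-size ceiling; the receive window and round-trip time below are illustrative values, not recommendations.
# Sketch: throughput ceiling imposed by the receive window.
rwin_bytes = 65535   # the classic maximum window without window scaling
rtt_s = 0.1          # an assumed 100 ms round-trip time
max_throughput_bps = rwin_bytes * 8 / rtt_s
print(max_throughput_bps / 10**6)   # about 5.2 Mbit/s, regardless of link capacity
# Window needed to keep a 1 Gbit/s path full at the same RTT:
print(10**9 * rtt_s / 8)            # 12,500,000 bytes, far beyond 64 KiB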
Packet loss
When packet loss occurs in the network, an additional limit is imposed on the connection. In the case of light to moderate packet loss when the TCP rate is limited by the congestion avoidance algorithm, the limit can be calculated according to the formula (Mathis, et al.): Throughput ≤ (MSS / RTT) × (1 / sqrt(Ploss)),
where MSS is the maximum segment size and Ploss is the probability of packet loss. If packet loss is so rare that the TCP window becomes regularly fully extended, this formula doesn't apply.
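A hedged Python sketch of the Mathis et al. estimate, taking the constant factor in the formula as 1 (a common simplification); the MSS, RTT and loss rate below are illustrative values.
import math
# Sketch: congestion-avoidance throughput limit per Mathis et al., constant taken as 1.
def mathis_limit_bps(mss_bytes, rtt_s, loss_probability):
    return (mss_bytes * 8 / rtt_s) / math.sqrt(loss_probability)
# 1460-byte MSS, 100 ms RTT and a 0.01% loss rate (illustrative values):
print(mathis_limit_bps(1460, 0.1, 0.0001) / 10**6)   # roughly 11.7 Mbit/s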
TCP options for high performance
A number of extensions have been made to TCP over the years to increase its performance over fast high-RTT links ("long fat networks" or LFNs).
TCP timestamps (RFC 1323) play a double role: they avoid ambiguities due to the 32-bit sequence number field wrapping around, and they allow more precise RTT estimation in the presence of multiple losses per RTT. With those improvements, it becomes reasonable to increase the TCP window beyond 64 kB, which can be done using the window scaling option (RFC 1323).
The TCP selective acknowledgment option (SACK, RFC 2018) allows a TCP receiver to precisely inform the TCP sender about which segments have been lost. This increases performance on high-RTT links, when multiple losses per window are possible.
Path MTU Discovery avoids the need for in-network fragmentation, increasing the performance in the presence of packet loss.
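As a hedged, Linux-specific illustration (the /proc paths below are Linux kernel interfaces; other systems expose these settings differently), the state of these options on a host can be inspected with a short Python snippet:

# Report whether the high-performance TCP extensions are enabled (Linux only).
for option in ("tcp_window_scaling", "tcp_timestamps", "tcp_sack"):
    with open(f"/proc/sys/net/ipv4/{option}") as f:
        print(option, "=", f.read().strip())   # "1" means the option is on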
Tuning slow connections
The default IP queue length is 1000, which is generally too large. Imagine a Wi-Fi base station with a speed of 20 Mbit/s and an average packet size of 750 bytes. How large should the IP queue be? A voice over IP client should be able to transmit a packet every 20 ms, so the queue should hold at most about 20 ms worth of traffic. The estimated maximum number of packets in transit would then be:
Estimated buffer size = 20000000 * 0.020 / 8 / 750 = 66
It is therefore better to limit the queue length, for example:
ifconfig wlan0 mtu 1492 txqueuelen 100
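The same estimate can be expressed as a small Python function (a sketch restating the calculation above, with the example's figures as the defaults):

# Packets that fit in the queue given a bandwidth, a delay budget and a packet size.
def queue_length(bandwidth_bps=20_000_000, delay_seconds=0.020, packet_size_bytes=750):
    return int(bandwidth_bps * delay_seconds / 8 / packet_size_bytes)

print(queue_length())   # 66 packets, matching the estimate above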
See also
Bufferbloat
Explicit Congestion Notification
References
External links
- TCP Extensions for High Performance
- TCP Selective Acknowledgment Options
- The NewReno Modification to TCP's Fast Recovery Algorithm
- Enhancing TCP Over Satellite Channels using Standard Mechanisms
- An Extension to the Selective Acknowledgment (SACK) Option for TCP
- A Conservative Selective Acknowledgment-based Loss Recovery Algorithm for TCP
- Forward RTO-Recovery (F-RTO): An Algorithm for Detecting Spurious Retransmission Timeouts with TCP and the Stream Control Transmission Protocol (SCTP)
TCP Tuning Guide, ESnet
The Cable Guy: TCP Receive Window Auto-Tuning
The Web100 Data Bandwidth Testing
DrTCP - a utility for Microsoft Windows (prior to Vista) which can quickly alter TCP performance parameters in the registry.
Information on 'Tweaking' your TCP stack, Broadband Reports
TCP/IP Analyzer, speedguide.net
NTTTCP Network Performance Test Tool, Microsoft Windows Server Performance Team Blog
Best Practices for TCP Optimization - ExtraHop
Tuning
Network performance |
51438820 | https://en.wikipedia.org/wiki/Fort%C3%A9%20Software | Forté Software | Forté is a proprietary application server that was developed by Forté Software and used for developing scalable, highly available, enterprise applications.
History
Forté was created as an integrated solution for developing and managing client/server applications. Forté 4GL consists of an application server, tools for deploying and monitoring an application, and an object-oriented proprietary programming language, TOOL (transactional object oriented language). Given that TOOL only runs on the Forté application server, many users simply refer to their "TOOL" applications as "Forté" applications. The product itself was 3.5 million lines of C/C++ software, ported to approximately twelve different operating system environments, spanning the range from IBM mainframes to Microsoft Windows PCs.
The first release of Forté 4GL was published in August 1994. After releasing this initial product, Forté Inc. proceeded to build several extensions including:
Web Enterprise - an HTML-wrapper interface for rich-client applications to publish their screens through web servers.
Forte Express - a rapid database GUI interface kit, released in July 1995.
Conductor - a high-performance work flow engine capable of choreographing activities, released in March 1997.
Forté Fusion - an integration backbone to link external systems using XML messaging and tie in with the Conductor engine.
In 1999, Forté Software came out with a version of Forté based on Java instead of TOOL, named SynerJ, also referred to as "Forté for Java". As with the original TOOL-based products, this consisted of a development IDE, a code repository, and a runtime environment. This new Java product was of interest to Sun Microsystems, which bought the company. The TOOL-based products listed above were bundled together and re-branded as Unified Development Server (UDS) and Integration Server (IS) under the iPlanet division. The server modules were later bundled together as Enterprise Application Integration (EAI).
Sun declared the product's end-of-life, indicating no future plans to continue development of the product. Sun's official support of Forte ceased at the end of April, 2009.
Capabilities
Being an enterprise application development system, Forté supported close linkage to a number of different relational database systems, including Oracle, Sybase, Microsoft SQL Server, Informix, and DB2. These linkages could be via SQL embedded within the TOOL code, or via SQL constructed on the fly.
It also had support for distributed applications: the developer would create an instance of a specific class, which would be placed on a user-specified server. Calls to methods through that instance would be sent across the network transparently; the developer did not need to know the underlying details of how the call was transmitted.
Programming Language TOOL
TOOL is an object-oriented language with the following features (among others):
automatic garbage collection
reference-based, no pointers
single inheritance and interfaces
supports multi-threaded programming
integrated statements for database access
event handling
exception handling
strong integration with GUI
one common base class called Object
TOOL code is case-insensitive. A statement is always terminated by a semicolon. Compound statements are enclosed by the keywords begin and end. Comments are introduced by // or -- (the remainder of the line becomes a comment) or enclosed in /* ... */.
Data Types
The Simple Data Types are:
boolean
float
double
char
string
Integer data types
i1, ui1 (signed / unsigned one byte integer)
i2, ui2 (signed / unsigned two bytes integer)
i4, ui4 (signed / unsigned four bytes integer)
integer (signed four bytes integer, same as i4)
short (signed integer, at least two bytes, same as int)
int (signed integer, at least two bytes)
long (signed integer, at least four bytes)
The corresponding object data types are (some examples):
BooleanData, BooleanNullable
IntegerData, IntegerNullable
DoubleData, DoubleNullable
TextData, TextNullable
Arrays are indicated by the keywords Array of. The first element of an array is indexed by 1.
Variable Declaration
name : string = 'John';
result : integer;
dataArray : Array of IntegerData = new;
Conditional Statements (if-statement, case-statement)
if result = 5100 then
...
elseif result != 0 then
...
else
...
end if;
case result is
when 1 do
....
when 2 do
....
else
...
end case;
Iteration, Loops
for k in 1 to 10 by 2 do
...
end for;
for dataItem in dataArray do
...
end for;
k : integer = 2;
while k < 14 do
...
k = k + 1;
end while;
Events
An event is posted e.g. by the following statement:
post EV_CustomerSet(id = selectedID);
This statement posts an event named EV_CustomerSet. This event has one argument named "id".
Events are handled by event handlers, for example:
event loop
preregister
register GeneralHandler();
...
postregister
waitTimer.IsActive = true;
...
when EV_CustomerSet( id ) do
...
when waitTimer.Tick() do
exit;
when task.Shutdown do
exit;
end event;
Exception handling
begin
...
raise UsageException();
...
exception
when e : UsageException do
task.ErrMgr.Clear();
...
else
...
raise;
end;
Multithreading
A new thread is launched by a statement like start task report.Print();
See also
TeamWare
References
External links
Sun's documentation for "Sun ONE Unified Development Server (UDS) 5.2"
Sun's documentation for "Forté 4GL 3.5 (UDS)"
Forte Software to Develop SynerJ Module For InLine Software's Assembly Line Product
Sun's Forte buy gives server software a boost
Forte tools create a collaborative platform for developers
Fourth-generation programming languages |
48717031 | https://en.wikipedia.org/wiki/Valery%20Shmukler | Valery Shmukler | Valery Samuilovich Shmukler (, ; born 26 June 1946) is a Ukrainian engineer, an expert in the field of construction, reconstruction, the theory of structural systems, information technology, calculation and design of structures, methods of optimization and rationalization of the scientific theory of rationalization building constructions. He is an academician of the Academy of Construction of Ukraine (1999), doctor of technical sciences (1997), professor (2001), winner of the State Prize of Ukraine in Architecture (1995), Laureate of the State Prize of Ukraine in the field of science and technology (2020), Honored Scientist of Ukraine (2015), emeritus professor of O. M. Beketov National University of Urban Economy in Kharkiv (2014), chief of the department of building construction of O. M. Beketov National University of Urban Economy in Kharkiv, a member of the International Association for Shell and Spatial Structures IASS (1980), a member of the American Concrete Institute (1997).
Biography
Shmukler was born on June 26, 1946, in the city of Krasnoyarsk, USSR. His family moved to the city of Kharkiv in 1949. His father, Samuel I. Shmukler, was a mechanical engineer who worked all his life as chief mechanic of construction and mounting trust No. 86. His mother, Asya Abramovna Shklovskaya, was a chemist and head of a laboratory at the pharmaceutical institute. In 1964 he entered the Kharkiv Institute of Civil Engineering (now the Kharkiv National University of Building and Architecture) in the faculty of Industrial and Civil Engineering, from which he graduated in 1969. After graduating in civil engineering, he worked from 1969 to 1986 at Kharkovproekt, successively occupying positions from engineer to head of department, and from 1986 to 1990 at Ukrgorstroyproekt, where he rose from head of the technical department to deputy chief engineer of the institute. In 1977 he defended his PhD thesis, and in 1997 his thesis for the degree of Doctor of Technical Sciences. Since 1990, Shmukler has worked at the O. M. Beketov National University of Urban Economy (formerly the Kharkiv National Academy of Municipal Economy), first as professor of building structures and, since 2012, as head of the building structures department.
Work in state design institutes
After graduating from the Kharkiv Institute of Civil Engineering, Shmukler worked from 1969 to 1990 in the research and design institutes Kharkovproekt (1969–1986) and Ukrgorstroyproekt (1986–1990), where he rose from head of the technical department to deputy chief engineer of the institute. During his time at Kharkovproekt, he created one of the first computation centers for the computer-aided design of industrial and civil structures in Ukraine. Under his leadership and with his direct participation, a number of CAD software systems were created. The results of these developments formed the basis of his candidate of technical sciences dissertation, devoted to research on flat concrete shells. The work proposed and developed new methods of accounting for physical and geometrical nonlinearities in solving problems of the static stability of membrane coverings. In those years, Shmukler also co-authored the CJC (Complex Kharkiv Series) designs for the construction of large sixteen-storey residential buildings and was one of the organizers of a network of data centers for the State Construction Committee of Ukraine. During his years at Ukrgorstroyproekt, Shmukler was one of the pioneers in the creation and development of youth and housing construction in Ukraine. After the earthquake in Armenia he took an active part in the reconstruction of housing and of welfare and cultural facilities. He was one of the first in the country to put reinforced concrete and metal spatial structures into industrial use in buildings and structures.
Scientific activity
Shmukler's scientific interests are associated with the theory of structural systems, information technology for the calculation and design of structures, and methods of their optimization and rationalization. His main works in this area include: an integrated gradient method for finding the global extremum of functionals of many variables; a method for solving multiobjective optimization problems; an overdetermined contour collocation method for solving boundary problems in the theory of plates and shells; and compilation methods for solving nonlinear problems in the theory of structures. A cycle of his works devoted to the creation of direct design methods, founded on new energy principles, forms the basis for constructions with simple external and complex internal geometry. Based on them, the following were created and implemented: the architectural construction systems RAMPA, IKAR and DOBOL, intended for housing and civil engineering; a family of gantry and bridge cranes with capacities up to 800 tons; and others.
In 1999, he was elected a member of the Ukrainian Academy of Construction. Shmukler's scientific achievements have been marked with state awards: for the organization, implementation and development of youth construction in Ukraine, the 40 Years of Youth Residential Complex Movement Medal (2011); and for his outstanding contribution to the development of science, technology and engineering in Ukraine, the Silver Medal of the Academy of Construction of Ukraine (2013) and the A. N. Podgorny Gold Medal of the Engineering Academy of Ukraine (2014). In 2015 he was awarded the honorary title «Honored Worker of Science and Technology of Ukraine». References to Shmukler's works can be found in a number of books, articles, dissertations and research reports by other authors. The constructions he developed are used in a number of designed and built facilities.
Educational activity
Since 1990, Shmukler has combined active scientific work with teaching at the O. M. Beketov Kharkiv National University of Urban Economy, first as professor of the Building Designs Department and, since 2012, as its head. He lectures on the modern theory of building designs, including the latest advances in construction and applied mechanics, computer science, materials and structural systems. His lectures are attended by students, graduate students, doctoral students and university professors, as well as specialists in the construction field. Shmukler also works with graduate students and doctoral candidates of the Kharkiv National Automobile and Highway University and the Kharkiv State Technical University of Construction and Architecture, and pays much attention to the integration of science and education. Over the years, Shmukler has made a significant contribution to the development of the scientific theory of rationalization of building designs. He is the author of many important initiatives, including the establishment of the school "Structures and materials for residential and civil buildings" (1992). Within the framework of the school he heads, multidisciplinary theoretical and experimental research of constructions is conducted, both for Ukraine and for other countries. The school's work underpins the master's, postgraduate and doctoral programmes of the university's Building Designs Department. Paying great attention to the training of highly qualified personnel, Shmukler has prepared 12 candidates and 1 doctor of technical sciences; his students work successfully in universities and factories in Ukraine and many other countries. Shmukler is a member of the Specialized Academic Council (D 64.056.4) at the Kharkiv State Technical University of Construction and Architecture for doctoral and master's theses in the construction specialization 05.23.01 "Building constructions, buildings and structures." For his dedicated work, Shmukler has been awarded numerous certificates of honor from the USSR State Construction Committee, the Kharkiv City Council, the administrations of design institutes, and the Rector of the O. M. Beketov Kharkiv National University of Urban Economy, among others, and also received the diploma "Excellence in Higher Education of Ukraine" (2002). In 2014, for his scientific and pedagogical contribution, his professional and human qualities, and the high quality of the work entrusted to him, Shmukler was awarded the title of "Honoured Professor of O. M. Beketov Kharkiv National University of Urban Economy."
Publications
Shmukler is an author and co-author of over 180 scientific papers and 60 inventions. His works have been published in leading scientific journals in Ukraine, Russia, Great Britain, Italy, China, Japan, US, and Iraq. Shmukler is a member of the editorial boards of scientific journals, namely the Concrete and Reinforced Concrete in Ukraine (Poltava), the Scientific Construction Bulletin (Kharkiv), and the Urban Economy of Cities (Kharkiv). He is a co-author of seven books and teaching aids:
– Complex of programs for the calculation of the shallow shells supported along the contour considering physical and geometric nonlinearity. – M., 1975;
– Program system of drawing schemes and designs. – M., 1986;
– Information technology for calculating and structural design. – Kiev; Kharkiv, 2003;
– Frame systems of facilitated type. – Kharkiv, 2008;
– Practical calculation of elements for concrete structures under DBN V. 2.6-98:2009 as compared to the calculations under SniP 2.03.01-84* and EN 1992-1-1 (Eurocode 2). – Kharkiv, 2015;
– Numerical and experimental methods of rational design and construction of constructive systems - Kyiv, 2017;
- Rational Design of Structural Building Systems : monograph / V. Babayev, I. Ievzerov, S. Evel, A. Lantoukh-Liashchenko, V. Shevetovsky, O. Shimanovskyi, V. Shmukler, M. Sukhonos. – Berlin : DOM publishers, 2020. – 384 p. – (Construction and Engineering Manual).
And national regulatory documents:
– DBN V.2.6-98-2009 "Concrete and reinforced concrete structures";
– Project DSTU-Н Б EN 1996-1-Eurocode 6 "Design of masonry structures";
– DSTU B V.2.6-2010 "Construction of buildings and structures. Monolithic, reinforced concrete structures of buildings"; recommendations for the use of reinforcing bars under DSTU 3760–98 in the design and manufacture of reinforced concrete structures without pretension;
– DSTU-N B.2.6-205: 2015. The name of the design of monolithic concrete and reinforced concrete constructions of buildings and structures, etc.
For the list of Shmukler's published works and inventions, see References section.
References
Shmukler V. S. on the official website of O. M. Beketov Kharkiv National University of Urban Economy (Building Designs Department)
Shmukler V. S. on the website of Scientific activity of O. M. Beketov Kharkiv National University of Urban Economy
official website of O. M. Beketov Kharkiv National University of Urban Economy
Patents of the author Shmukler V. S. on the website of Patents Base of Ukraine
Patents of the author Shmukler V. S. on the website of search for patents
Shmukler V. S. on the Wikipedia page about O. M. Beketov Kharkiv National University of Urban Economy
Favorite professors of O. M. Beketov Kharkiv National University of Urban Economy
List of published works and inventions of Professor Shmukler V. S. on the website of the library of O. M. Beketov Kharkiv
1946 births
Living people
Ukrainian engineers |
10235 | https://en.wikipedia.org/wiki/ELIZA | ELIZA | ELIZA is an early natural language processing computer program created from 1964 to 1966 at the MIT Artificial Intelligence Laboratory by Joseph Weizenbaum. Created to demonstrate the superficiality of communication between humans and machines, Eliza simulated conversation by using a "pattern matching" and substitution methodology that gave users an illusion of understanding on the part of the program, but had no built in framework for contextualizing events. Directives on how to interact were provided by "scripts", written originally in MAD-Slip, which allowed ELIZA to process user inputs and engage in discourse following the rules and directions of the script. The most famous script, DOCTOR, simulated a Rogerian psychotherapist (in particular, Carl Rogers, who was well-known for simply parroting back at patients what they had just said), and used rules, dictated in the script, to respond with non-directional questions to user inputs. As such, ELIZA was one of the first chatterbots and one of the first programs capable of attempting the Turing test.
ELIZA's creator, Weizenbaum, regarded the program as a method to show the superficiality of communication between man and machine, but was surprised by the number of individuals who attributed human-like feelings to the computer program, including Weizenbaum’s secretary. Many academics believed that the program would be able to positively influence the lives of many people, particularly those suffering from psychological issues, and that it could aid doctors working on such patients' treatment. While ELIZA was capable of engaging in discourse, ELIZA could not converse with true understanding. However, many early users were convinced of ELIZA’s intelligence and understanding, despite Weizenbaum’s insistence to the contrary.
Overview
Joseph Weizenbaum’s ELIZA, running the DOCTOR script, was created to provide a parody of "the responses of a non-directional psychotherapist in an initial psychiatric interview" and to "demonstrate that the communication between man and machine was superficial". While ELIZA is best known for acting in the manner of a psychotherapist, the speech patterns are due to the data and instructions supplied by the DOCTOR script. ELIZA itself examined the text for keywords, applied values to said keywords, and transformed the input into an output; the script that ELIZA ran determined the keywords, set the values of keywords, and set the rules of transformation for the output. Weizenbaum chose to make the DOCTOR script in the context of psychotherapy to "sidestep the problem of giving the program a data base of real-world knowledge", as in a Rogerian therapeutic situation, the program had only to reflect back the patient's statements. The algorithms of DOCTOR allowed for a deceptively intelligent response, which deceived many individuals when first using the program.
Weizenbaum named his program ELIZA after Eliza Doolittle, a working-class character in George Bernard Shaw's Pygmalion. According to Weizenbaum, ELIZA's ability to be "incrementally improved" by various users made it similar to Eliza Doolittle, since Eliza Doolittle was taught to speak with an upper-class accent in Shaw's play. However, unlike in Shaw's play, ELIZA is incapable of learning new patterns of speech or new words through interaction alone. Edits must be made directly to ELIZA’s active script in order to change the manner by which the program operates.
Weizenbaum first implemented ELIZA in his own SLIP list-processing language, where, depending upon the initial entries by the user, the illusion of human intelligence could appear, or be dispelled through several interchanges. Some of ELIZA's responses were so convincing that Weizenbaum and several others have anecdotes of users becoming emotionally attached to the program, occasionally forgetting that they were conversing with a computer. Weizenbaum's own secretary reportedly asked Weizenbaum to leave the room so that she and ELIZA could have a real conversation. Weizenbaum was surprised by this, later writing: "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."
In 1966, interactive computing (via a teletype) was new. It was 15 years before the personal computer became familiar to the general public, and three decades before most people encountered attempts at natural language processing in Internet services like Ask.com or PC help systems such as Microsoft Office Clippit. Although those programs included years of research and work, ELIZA remains a milestone simply because it was the first time a programmer had attempted such a human-machine interaction with the goal of creating the illusion (however brief) of human–human interaction.
At the ICCC 1972 ELIZA was brought together with another early artificial-intelligence program named PARRY for a computer-only conversation. While ELIZA was built to speak as a doctor, PARRY was intended to simulate a patient with schizophrenia.
Design
Weizenbaum originally wrote ELIZA in MAD-Slip for CTSS on an IBM 7094, as a program to make natural-language conversation possible with a computer. To accomplish this, Weizenbaum identified five "fundamental technical problems" for ELIZA to overcome: the identification of critical words, the discovery of a minimal context, the choice of appropriate transformations, the generation of responses appropriate to the transformation or in the absence of critical words and the provision of an ending capacity for ELIZA scripts. Weizenbaum solved these problems and made ELIZA such that it had no built-in contextual framework or universe of discourse. However, this required ELIZA to have a script of instructions on how to respond to inputs from users.
ELIZA starts its process of responding to an input by a user by first examining the text input for a "keyword". A "keyword" is a word designated as important by the acting ELIZA script, which assigns to each keyword a precedence number, or a RANK, designed by the programmer. If such words are found, they are put into a "keystack", with the keyword of the highest RANK at the top. The input sentence is then manipulated and transformed as the rule associated with the keyword of the highest RANK directs. For example, when the DOCTOR script encounters words such as "alike" or "same", it would output a message pertaining to similarity, in this case “In what way?”, as these words had a high precedence number. This also demonstrates how certain words, as dictated by the script, can be manipulated regardless of contextual considerations, such as switching first-person pronouns and second-person pronouns and vice versa, as these too had high precedence numbers. Such words with high precedence numbers are deemed superior to conversational patterns and are treated independently of contextual patterns.
Following the first examination, the next step of the process is to apply an appropriate transformation rule, which includes two parts: the "decomposition rule" and the "reassembly rule". First, the input is reviewed for syntactical patterns in order to establish the minimal context necessary to respond. Using the keywords and other nearby words from the input, different disassembly rules are tested until an appropriate pattern is found. Using the script's rules, the sentence is then "dismantled" and arranged into sections of the component parts as the "decomposition rule for the highest-ranking keyword" dictates. The example that Weizenbaum gives is the input "I are very helpful" (remembering that "I" is "You" transformed), which is broken into (1) empty (2) "I" (3) "are" (4) "very helpful". The decomposition rule has broken the phrase into four small segments that contain both the keywords and the information in the sentence.
The decomposition rule then designates a particular reassembly rule, or set of reassembly rules, to follow when reconstructing the sentence. The reassembly rule takes the fragments of the input that the decomposition rule had created, rearranges them, and adds in programmed words to create a response. Using Weizenbaum's example previously stated, such a reassembly rule would take the fragments and apply them to the phrase "What makes you think I am (4)", which would result in "What makes you think I am very helpful?". This example is rather simple, since depending upon the disassembly rule, the output could be significantly more complex and use more of the input from the user. However, from this reassembly, ELIZA then sends the constructed sentence to the user in the form of text on the screen.
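The following is a minimal, hypothetical sketch in Python (not Weizenbaum's MAD-Slip code; the single rule, the function names and the pronoun handling are invented for illustration) of the keyword ranking, decomposition and reassembly steps described above, using the "I are very helpful" example:

import re

# One invented rule: a keyword with a RANK, a decomposition pattern and a reassembly template.
RULES = {
    "are": {
        "rank": 3,
        "decompose": re.compile(r"(.*)\bI are\b(.*)"),
        "reassemble": "What makes you think I am{1}?",
    },
}

def respond(user_input):
    # Reflect second-person pronouns first ("you" -> "I"), as the script's substitutions direct.
    reflected = re.sub(r"\byou\b", "I", user_input, flags=re.IGNORECASE)
    # Build the "keystack": keywords present in the input, taking the highest RANK first.
    keys = [k for k in RULES if k in reflected.lower()]
    if not keys:
        return "Please go on."            # content-free reply when no keyword is found
    rule = RULES[max(keys, key=lambda k: RULES[k]["rank"])]
    match = rule["decompose"].match(reflected)
    if not match:
        return "Please go on."
    # Reassemble the response from the decomposed fragments.
    return rule["reassemble"].format(*match.groups())

print(respond("you are very helpful"))    # -> What makes you think I am very helpful?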
These steps represent the bulk of the procedures that ELIZA follows in order to create a response from a typical input, though there are several specialized situations that ELIZA/DOCTOR can respond to. One Weizenbaum specifically wrote about was when there is no keyword. One solution was to have ELIZA respond with a remark that lacked content, such as "I see" or "Please go on". The second method was to use a "MEMORY" structure, which recorded prior recent inputs, and would use these inputs to create a response referencing a part of the earlier conversation when encountered with no keywords. This was possible due to Slip’s ability to tag words for other usage, which simultaneously allowed ELIZA to examine, store and repurpose words for usage in outputs.
While these functions were all framed in ELIZA's programming, the exact manner by which the program dismantled, examined, and reassembled inputs is determined by the operating script. The script is not static and can be edited, or a new one created, as is necessary for the operation in the context needed. This would allow the program to be applied in multiple situations, including the well-known DOCTOR script, which simulates a Rogerian psychotherapist.
A Lisp version of ELIZA, based on Weizenbaum's CACM paper, was written shortly after that paper's publication, by Bernie Cosell. A BASIC version appeared in Creative Computing in 1977 (although it was written in 1973 by Jeff Shrager). This version, which was ported to many of the earliest personal computers, appears to have been subsequently translated into many other versions in many other languages. Shrager claims not to have seen either Weizenbaum's or Cosell's versions.
In 2021 Jeff Shrager searched MIT's Weizenbaum archives, along with MIT archivist Myles Crowley, and found files labeled Computer Conversations. These included the complete source code listing of ELIZA in MAD-SLIP, with the DOCTOR script attached. The Weizenbaum estate has given permission to open-source this code under a Creative Commons CC0 public domain license. The code and other information can be seen on the ELIZAGEN site.
Another version of Eliza popular among software engineers is the version that comes with the default release of GNU Emacs, and which can be accessed by typing M-x doctor from most modern Emacs implementations.
In popular culture
In 1969, George Lucas and Walter Murch incorporated an Eliza-like dialogue interface in their screenplay for the feature film THX-1138. Inhabitants of the underground future world of THX, when stressed, would retreat to "confession booths" and initiate a one-sided Eliza-formula conversation with a Jesus-faced computer who claimed to be "OMM".
ELIZA influenced a number of early computer games by demonstrating additional kinds of interface designs. Don Daglow wrote an enhanced version of the program called Ecala on a DEC PDP-10 minicomputer at Pomona College in 1973 before writing the computer role-playing game Dungeon (1975).
ELIZA is given credit as additional vocals on track 10 of the eponymous Information Society album.
In the 2008 anime RD Sennou Chousashitsu, also known as Real Drive, a character named Eliza Weizenbaum appears, an obvious tribute to ELIZA and Joseph Weizenbaum. Her behavior in the story often mimics the responses of the ELIZA program.
The 2011 video game Deus Ex: Human Revolution and the 2016 sequel Deus Ex: Mankind Divided features an artificial-intelligence Picus TV Network newsreader named Eliza Cassan.
In Adam Curtis's 2016 documentary, HyperNormalisation, ELIZA was referenced in relationship to post-truth.
The twelfth episode of the American sitcom Young Sheldon, aired in January 2018, featured the protagonist "conversing" with ELIZA in hopes of resolving a domestic issue.
On August 12, 2019, independent game developer Zachtronics published a visual novel called Eliza, about an AI-based counseling service inspired by ELIZA.
Response and legacy
Lay responses to ELIZA were disturbing to Weizenbaum and motivated him to write his book Computer Power and Human Reason: From Judgment to Calculation, in which he explains the limits of computers and makes clear his opinion that anthropomorphic views of computers are a reduction of the human being and, for that matter, of any life form. In the independent documentary film Plug & Pray (2010) Weizenbaum said that only people who misunderstood ELIZA called it a sensation.
The Israeli poet David Avidan, who was fascinated with future technologies and their relation to art, desired to explore the use of computers for writing literature. He conducted several conversations with an APL implementation of ELIZA and published them – in English, and in his own translation to Hebrew – under the title My Electronic Psychiatrist – Eight Authentic Talks with a Computer. In the foreword he presented it as a form of constrained writing.
There are many programs based on ELIZA in different programming languages. In 1980 a company called "Don't Ask Software" created a version called "Abuse" for the Apple II, Atari, and Commodore 64 computers, which verbally abused the user based on the user's input. Other versions adapted ELIZA around a religious theme, such as ones featuring Jesus (both serious and comedic), and another Apple II variant called I Am Buddha. The 1980 game The Prisoner incorporated ELIZA-style interaction within its gameplay. In 1988 the British artist and friend of Weizenbaum Brian Reffin Smith created two art-oriented ELIZA-style programs written in BASIC, one called "Critic" and the other "Artist", running on two separate Amiga 1000 computers and showed them at the exhibition "Salamandre" in the Musée du Berry, Bourges, France. The visitor was supposed to help them converse by typing in to "Artist" what "Critic" said, and vice versa. The secret was that the two programs were identical. GNU Emacs formerly had a psychoanalyze-pinhead command that simulates a session between ELIZA and Zippy the Pinhead. The Zippyisms were removed due to copyright issues, but the DOCTOR program remains.
ELIZA has been referenced in popular culture and continues to be a source of inspiration for programmers and developers focused on artificial intelligence. It was also featured in a 2012 exhibit at Harvard University titled "Go Ask A.L.I.C.E", as part of a celebration of mathematician Alan Turing's 100th birthday. The exhibit explores Turing's lifelong fascination with the interaction between humans and computers, pointing to ELIZA as one of the earliest realizations of Turing's ideas.
See also
ELIZA effect
References
Bibliography
Norvig, Peter. Paradigms of Artificial Intelligence Programming. (San Francisco: Morgan Kaufmann Publishers, 1992), 151–154, 159, 163–169, 175, 181.
Wardrip-Fruin, Noah. Expressive Processing: Digital Fictions, Computer Games, and Software Studies. (Cumberland: MIT Press, 2014), 24–36.
External links
Collection of several source code versions at GitHub
, a collection of dialogues between ELIZA and various conversants, such as a company vice president and PARRY (a simulation of a paranoid schizophrenic)
Weizenbaum. Rebel at work – Peter Haas, Silvia Holzinger, Documentary film with Joseph Weizenbaum and ELIZA.
History of artificial intelligence
Chatbots
Health software
Psychotherapy
Public-domain software with source code |
54945134 | https://en.wikipedia.org/wiki/Traitors%20Gate%20%28video%20game%29 | Traitors Gate (video game) | Traitors Gate is a 1999 graphic adventure game developed by Daydream Software. Set in a reproduction of the Tower of London, it follows the story of Raven, an American special agent trying to steal and replace the Crown Jewels of England to safeguard them from a rogue operative. The player assumes the role of Raven and solves puzzles within the Tower while evading the guards. Progression through the game is nonlinear and under a time limit: the player may solve certain challenges in multiple ways, but must win before 12 hours elapse.
Traitors Gate was conceived in 1996 by Daydream Software designer Nigel Papworth, who saw the Tower of London as a natural setting for a game. The team sought to replicate the structure with near-perfect accuracy and began by capturing over 5,000 reference photographs on location. Pre-rendering the game's panoramic environments challenged the team, which averaged seven members. The game took roughly three years to complete. Self-funded by Daydream after a successful initial public offering, Traitors Gate ultimately cost between $1- and $2-million, and by 2000 it was distributed in 10 languages and 27 countries by companies such as DreamCatcher Interactive and FX Interactive.
The game was a commercial success and became Daydream Software's highest-selling title by 2003, with sales between 300,000 and 400,000 units worldwide. Many of these sales derived from North America and Spain; it failed commercially in Germany, Italy and the United Kingdom. Critical reception of Traitors Gate was "mixed or average", according to review aggregation site Metacritic. Its puzzles and recreation of the Tower of London were lauded by many critics—the latter was praised by the British Academy of Film and Television Arts—but the title's bugs, pacing, large interface and use of mazes drew mixed reactions. In 2003, Traitors Gate was followed by the sequel Traitors Gate 2: Cypher, directed by Papworth and developed by 258 Productions.
Gameplay and plot
Traitors Gate is a graphic adventure game that takes place from a first-person perspective in a pre-rendered visual environment. Using a point-and-click interface, the player explores a reproduction of the Tower of London and evades guards while solving puzzles, such as determining the combination to a safe by examining a coat of arms. In a manner that has been compared to Myst, player movement is restricted to jumps between panoramic static screens; the camera view can rotate freely on each screen. Traitors Gate features nonlinear progression: multiple solutions allow for roughly 1,200 unique pathways through the game. If the player does not win within the 12-hour limit, or is found by the guards, a game over results.
In Traitors Gate, the player takes the role of a special agent named Raven, employed by an American agency called ORPHIA. At the start of the game, Raven's superiors believe that another ORPHIA operative plans to steal the British Crown Jewels from the Tower of London. Raven is subsequently tasked with infiltrating the Tower, secretly swapping the real jewels with forgeries and escaping without leaving behind evidence. These forgeries are equipped with hidden tracking units. The game begins in a broom closet within the White Tower, where Raven hides during a building tour to await night. Controlling Raven, the player then exits the closet and infiltrates the Tower.
The player character is equipped with gear such as a crossbow, grappling hooks, lock picks, a motorized zip-line, explosives and tools to hijack security cameras. A PDA interface provides details about these gadgets, the Tower and other aspects of the mission. The player may also take photographs of the environment and message them to Raven's superiors, who send back facts about the objects in view. Traitors Gate features multiple endings based on the player's actions throughout the game. At the game's conclusion, after the Crown Jewels have been replaced, the President of the United States is informed that Raven's mission was a success.
Development
Origins
Traitors Gate was conceived in November 1996, when designer Nigel Papworth of Sweden's Daydream Software began to explore possibilities for the company's next title after Safecracker. Coming across an article about the Tower of London in an issue of National Geographic, Papworth realized that the structure contained "everything a good game needed", including a cleanly circumscribed playing area and an obvious gameplay objective. He and the team subsequently developed a game concept that would take place inside an accurate reproduction of the Tower. Their plan was to capture "the feel of the weathered stonework and mixed architectural styles with as near to photographic quality as time and technology would allow", and they hoped to make a game that entertained players while informing them about the real-world Tower. The decision to make the protagonist an American agent came early, as the country's alliance with Britain precluded violent gameplay approaches by the player and shifted the focus to stealth and puzzle-solving. Inspiration for the plot derived from the film series Mission: Impossible and James Bond.
Daydream went public on the Stockholm Stock Exchange's Stockholm Börsinformation (SBI) list in January 1997, with the plan for Traitors Gate prepared. The company's goal was to increase its economic and decision-making freedom, and to secure the ability to select a publisher at the end of each game's development rather than at the beginning. President Jan Phersson-Broburg argued that self-funding Daydream's future games with money from Swedish investors—instead of opting for publisher financing "linked to specific projects"—would offer the developer more stability, flexibility and room for growth. According to Papworth, Traitors Gate was part of Daydream's roadmap for the future after going public. In its November 1996 prospectus, the company had told investors that a game with the working title "Project II" was under development, with an estimated 18-month production cycle and 7-10 million kr budget. There were four competing "Project II" designs at that time. For Daydream's public launch, roughly 20 million kr worth of shares, or 45.5% of the company, were offered to investors in Sweden. The initial public offering was a success. As a result of this influx of capital, all subsequent development of Traitors Gate was self-funded.
Daydream began Traitors Gate with around one year of research, starting with a trip of eight team members to the Tower of London to photograph the entire structure over two days. More than 5,000 images were captured during this trip. According to Daydream, employee Erik Phersson organized the results within "a series of indexed ring binders", on which the artists could base their work. Papworth later noted that, because the Tower of London held the copyright to photographs taken on its grounds, digitizing the team's images for use as texture maps was not an option. Instead, they used the pictures only for photo-referencing. Before full production of the game began, Papworth plotted its narrative in storyboard form, and Daydream planned the majority of Traitors Gates design on "a huge magnetic white board with a logic flow chart". According to lead programmer Peter Lundholm, HyperCard was used during pre-production to generate rough prototypes. In June 1997, Daydream reported that "Project II" was still in the prototyping stage and, pending a review of the finished prototype in August, would move into full production. This transition had been completed by September, and Daydream slated the result for 1998. By that time, the company had spent 1.85 million kr on the game.
Production
In January 1998, Daydream Software announced "Project II" as Traitors Gate and scheduled its public unveiling for the following month, at the Milia festival in Cannes. Employment agency Octagon Entertainment, with which Daydream had signed after buying back Safecrackers rights from GT Interactive in November 1997, was set to begin locating distribution partners for Traitors Gate in February. Finished contracts were not forecast until May. As with Safecracker, the strategy was to work with a different local distributor in each territory. Jan Phersson-Broberg reported interest from distributors after Milia, but reiterated to investors that no deals would be signed before May, when a playable game demo was planned to appear at the Electronic Entertainment Expo (E3). While the game was initially set for a late-1998 launch, by June of that year the release had been pushed back to early 1999. Distribution agreements remained pending after the game's E3 presentation.
According to Nigel Papworth, development of Traitors Gate was "much more difficult" than originally expected, and a significantly greater challenge than Safecracker had been. The focus on reproducing the Tower of London left the team unable to alter the scope of the project, and forced Daydream to opt for realistic puzzles tied to the world, in comparison to the simpler task of designing Safecrackers more abstract puzzles. A nonlinear design approach was planned from the start, as Papworth noted that Daydream was "allergic" to linear gameplay, but this further increased the difficulty of Traitors Gates creation for the designers and programmers. The average team size was seven members throughout production, although other employees cycled in and out of the project as needed. Despite the focus on realism, the team modified sections of the Jewel House at the request of the Tower of London's management, who were concerned that the game's accuracy could pose a security risk for the building. The sewer system was also partly fictionalized, although influenced by real data and the actual positions of manhole covers.
Daydream elected to use pre-rendered graphics instead of real-time 3D visuals on Traitors Gate because the team preferred an "above-standard graphics environment to the advantage of free 3D motion", according to Papworth. The slow pace of the game was a contributing factor to this decision, as Papworth noted that the "loss of a little mobility" did not hinder the design. Animated cutscene transitions between screens were added to increase player immersion, in hopes of making up for the lack of free movement. For the character animation, instead of using motion capture, first-time character animators Fredrik Johnson and Leif Holm worked manually. Ultimately, the decision to use pre-rendering brought the game's final size before data compression to 10 terabytes. Papworth later said that the production methods used in Traitors Gate were seen at Daydream as "too costly and time consuming to be a viable long term solution" for the company, which led it, including members of the Traitors Gate team, to make the concurrently-developed Clusterball.
Five members of Daydream handled the creation of Traitors Gates models and textures. Modeling was initially delegated to Papworth, Holm, Johnson and Ulf Larsson. At roughly the 12-month mark, Papworth transitioned more fully into game design and writing, and his place on the modeling team was taken by Michael Nahal. The team began by drawing building plans on transparent vellum, based on information obtained from second-hand books, the Internet, the Royal Archives and Daydream's personal reference photographs. Working with Autodesk Maya and PowerAnimator 6.5, the team then created wire frames based on these plans, using four SGI O2 and SGI Indy computers. Textures were created in Painter, Adobe Photoshop and Adobe Illustrator, while the finished graphics were rendered on a farm of SGI Challenges and systems running Windows NT. Hardware upgrades were frequent during production. According to Papworth, the models and textures took two years of "60 hour weeks" to complete. The White Tower structure, delegated to Johnson, required nearly six months of labor by itself.
Traitors Gate was built with Live Picture Inc.'s RealVR tool, software that displays virtual-reality photographic panoramas. Papworth explained that the techniques Daydream developed while working on Safecracker, a game created with QuickTime VR, had since been incorporated into the newer RealVR software suite. The decision to move to RealVR was heavily inspired by the software's ability to display spherical 360° panoramas, a necessity given the tall structures modeled in Traitors Gate. To construct the game's panoramic environments, Erik Phersson implemented image stitching with BBEdit and Live Picture's PhotoVista editing software, combining multiple rendered stills from the graphics team into larger images. The panoramas were made playable and interactive by Peter Lundholm, with the tools DeBabelizer and Macromedia Director. Phersson also handled Traitors Gates sound effects, added to the game environments with MacPaint and SoundEdit.
The soundtrack was created by local musicians Toontrack, which comprised Mattias Eklund and Henrik Kjellberg. Their opportunity to join Traitors Gate came because Eklund had played in a band with Phersson and Leif Holm during the development of Safecracker. As that title's production wrapped, Toontrack spent a week on an eight-minute demo to pitch Daydream for its next game. Afterward, Phersson hired the pair to score an early build of Traitors Gate, and their involvement grew from there. Eklund and Kjellberg used the programs WaveLab and Cubase VST to compose Traitors Gates music, which was made with a combination of sampled, synthesized and live instrumentation. The pair relied on Oberheim Matrix 1000 and Yamaha A3000 synthesizers, a Yamaha AES1500 and a Roland GR-1 for the synthetic and live elements. The soundtrack was purposely limited in-game to build atmosphere, and, according to Toontrack, was written in the style of "sixties and seventies" spy films to match the theme of Traitors Gate.
At the 1998 European Computer Trade Show (ECTS), Daydream landed distributors for Traitors Gate in eight countries. By the end of September, completion of the game was scheduled for fall; development costs had climbed to 8.06 million kr by late November. In January 1999, Daydream announced a release date of April, and confirmed K.E. Media as the game's distributor in Sweden, Denmark and Norway. Deals with companies in France, Italy, Britain and the Netherlands were secured by February. However, as the year progressed, Traitors Gate encountered a four-month delay to fall 1999. This event was blamed by Daydream on numerous software bugs, caused by the game's "size and complexity". It proceeded to appear at E3 1999, and development costs rose to 10.6 million kr by the end of May, with 2.03 million of this figure depreciated through capitalization. Papworth estimated the final budget for Traitors Gate as between $1 and $2 million in United States currency. Following the public launch of Traitors Gates demo in June 1999, Daydream brought the game to gold status late that July.
Release
Traitors Gate was first released in early September 1999, after roughly three years of development. It was shipped on four CD-ROMs. By the end of its debut month, the game had launched in seven countries: Sweden, Ireland, Belgium, Denmark, Norway, England and the Netherlands. Daydream Software told investors on September 30 that Traitors Gate would appear in another 14 countries by March 2000. Further launches occurred in New Zealand and Australia on October 31, through Hilad Corporation; in France on November 29, through Éditions Profil; and in Spain on December 16, through FX Interactive. It was the latter company's first published game. Traitors Gate had received translations into 10 languages by December 10, at which time Daydream reported its lifetime "guaranteed sales volume" as above 100,000 units.
Before Traitors Gates release, Nigel Papworth noted his "hope that this title gets the crack it deserves at the American Market", following Safecrackers failure to penetrate the region. According to Papworth, North America's buying power was equivalent to the rest of the world's combined, and it was "seen as the paramount market to crack" for international developers at the time. In early November 1999, Daydream signed with publisher DreamCatcher Interactive to distribute the game in North America. As was typical, the deal was set up by Daydream's agency, Octagon Entertainment, based in North Carolina. Thanks to DreamCatcher's partnership network, Traitors Gate was initially set to appear in 15 North American retail chains, including Best Buy, Virgin Megastores and CompUSA. The game's availability was planned in nearly 2,000 individual stores across North America. It launched in the region on May 15, 2000.
Traitors Gate had seen releases in 27 countries by April 28, 2000. At the time, Daydream announced new deals with distributors to release the game in Korea, Poland, Taiwan and Greece. The company reported sales forecasts of "approximately 15,000 games" for each of these regions. In June 2000, the game launched in the Tower of London gift shop, which made it the only video game available there at the time.
Reception
Sales and distribution
Traitors Gate was commercially successful and became Daydream Software's top-selling game by 2003. It sold 48,000 units worldwide by the end of May 2000, and was a hit in Spain, which accounted for 22,000 sales by April 10. The game spent over three months in the top 10 of Spain's sales charts. However, it was a commercial failure in Germany, where it sold 4,000 copies; in the United Kingdom, which bought 3,000 units; and in Italy. The European Foundation for the Improvement of Living and Working Conditions (Eurofound) traced Traitors Gates country-by-country success rate to the quality of Daydream's distribution partners in each region. By September 30, worldwide sales of the game had risen to roughly 120,000 copies. Daydream told investors that the jump from May came due to "increased marketing efforts by us and our distributors."
Writing for Adventure Gamers, Marek Bronstring noted that "slow" initial sales for Traitors Gate gave way to significant success, beginning around one year into its shelf life. September 2000 became the first-ever month that Daydream turned a net profit, in part thanks to the sales increase of Traitors Gate. The game surpassed 200,000 units sold globally by the end of March 2001 and reached close to 250,000 sales by June 30. That July, it topped 50,000 lifetime sales in Spain. According to the Eurofound, Traitors Gate was also successful in the United States, which the group wrote was "primarily because a huge supermarket chain" in the region had opted to stock it. It accounted for 14% of DreamCatcher Interactive's sales in 2000. This made it one of the publisher's top titles that year, behind Atlantis II and The Crystal Key. The following year, PC Data calculated the game's retail sales in North America at 52,573 units for 2001 alone.
Globally, Traitors Gate sold more than 300,000 copies by March 2002. During the first six months of that year, PC Data tracked another 15,429 sales in North American retailers. By 2003, Traitors Gate had sold between 300,000 and 400,000 copies worldwide, of which Spain accounted for 75,000 units. According to DreamCatcher, the game totaled 245,000 lifetime sales in North America alone by early 2003.
Critical reviews
According to the review aggregation site Metacritic, critical reception of Traitors Gate was "mixed or average". The game was nominated for "The Design Award" at the 1999 British Academy Film Awards (BAFTAs), but lost the prize to Wipeout 3. The BAFTAs' judges called Traitors Gate a "very well researched and well considered game with intuitive game play", and praised its controls and its depiction of the Tower of London. It was also a finalist for GameSpots 2000 "Best Adventure Game" award, which ultimately went to The Longest Journey.
Comparing the game to Spycraft: The Great Game, GameSpots Ron Dulin praised the puzzles in Traitors Gate, which he felt were sensible and realistic, alongside its detailed reproduction of the Tower of London. Despite encountering numerous technical problems with the software, he considered the overall product "very good", but short of outstanding. Cal Jones of PC Gaming World similarly praised the puzzles and visual detail, but noted a slight blurriness to the graphics and found the game brief despite its "nice covert feel". She ultimately gave it a moderate rating.
By contrast, Tim Cant wrote in PC Gamer UK that Traitors Gate is a boring title, which contributes "nothing to an already-tedious genre". He condemned its puzzles, pace and control system, and summarized that playing it is "as much fun as being crapped on by crows, then beheaded." Cant's complaints were echoed in PC Zone, whose writer Mark Hill called the game dull and dismissed its puzzles as "childish". Hill and Cant both strongly critiqued Traitors Gates graphics, which they considered bland and unengaging. Writing that the game's real "adventure is trying to keep your interest alive for more than five minutes", Hill gave Traitors Gate a "below average" rating. However, while David Ryan Hunt agreed in Computer Games Magazine that "too many flaws ... detract from the fun" in Traitors Gate, he sided with Dulin and Jones on the quality of the graphics and puzzles, which he felt were largely solid.
Audrey Wells of Computer Gaming World continued Hunt's, Dulin's and Jones' praise for the puzzles, and again cited the visual representation of the Tower as a high point. Despite noting that the sewer maze portions of Traitors Gate make it "more suitable for experienced gamers", Wells enjoyed the game and did not criticize its use of mazes. Just Adventure's Ray Ivey took an even stronger line of support for the game. In contrast to Hill's and Tim Cant's dismissals of Traitors Gate as boring, he labeled the proceedings "breathlessly exciting" and "indecently fun", and praised the mazes' execution outright. On this latter point, Hunt diverged sharply from both Ivey and Wells: he regarded these sections as the chief problem with Traitors Gate, as a detriment to the title's immersion and as "a potential game-killer" that limited Traitors Gate almost exclusively to hardcore genre fans. As a result, he summarized the game as a middling "experience [that] isn't likely to become something you'll cherish".
Ivey and the writer for MacHome Journal both advanced Computer Gaming World's, GameSpot's and PC Gaming World's praise for the recreation of the Tower of London in Traitors Gate, which MacHome's reviewer considered to be arguably its greatest strength. The writer noted that it had "been quite a while since an adventure of this nature has genuinely held my interest" like Traitors Gate. Similarly, David Wildgoose of PC PowerPlay dubbed Traitors Gate "a must for quality-starved adventure gamers". Like Hunt and Wells, both of whom had called the title "a refreshing change of pace" for adventure games, Wildgoose singled out its atypical theme for praise. He cited it as an unusually tense entry in its genre and highlighted its "genuinely clever and suspenseful design". At the same time, he found its pacing slow and awkward, and criticized the game for overwhelming the player with "too much information far too quickly" in its introduction of the utility gear. Like Mark Hill, he also disapproved of the large HUD. Following Wildgoose's criticism of the restrictions on player movement in Traitors Gate, the critic for MacHome noted that the game lacks "a sense of freedom", but nevertheless praised its nonlinear structure. Its bugs and ending sequence were the reviewer's primary dislikes.
Sequel
Based on its success with Traitors Gate in North America, publisher DreamCatcher Interactive commissioned a sequel, Traitors Gate 2: Cypher. Daydream Software told investors that a deal with "an internationally recognized publisher" to develop the game was reached in April 2002, and that it was funded ahead of time by this outside party. The project was scheduled for a 14-month development cycle. Traitors Gate 2 was ultimately developed by the company 258 Productions. Nigel Papworth, who conceived and designed the game at 258, said that he had been resistant to developing another title with pre-rendered visuals. Instead, he told DreamCatcher that he would work on the sequel "if you'll let me do it in real-time 3D." He felt that graphics technology had advanced enough to make this leap, and that the switch offered him "a huge amount of freedom for the gameplay." As a result, the team licensed the Gamebryo engine to create Traitors Gate 2. By June 2002, the game was set for a September 2003 release.
Inspired by his "reading an article on cryptography technique at the same time as a book on Babylonian history", Papworth combined these two ideas to create the game concept for Traitors Gate 2. First announced as Cypher: The Sequel to Traitors Gate in April 2003, Traitors Gate 2 casts the player again as Raven, who now seeks to thwart the plot of Middle Eastern terrorists and a treacherous American agent. It was released in November 2003 to "generally negative reviews", according to Metacritic.
See also
Dracula: Resurrection
Faust
Riddle of the Sphinx: An Egyptian Adventure
References
External links
Official site (archived)
1999 video games
The Adventure Company games
Embracer Group franchises
Classic Mac OS games
Windows games
First-person adventure games
Single-player video games
Puzzle video games
Point-and-click adventure games
Spy video games
Video games developed in Sweden
Video games set in London |
34540157 | https://en.wikipedia.org/wiki/Jeffrey%20R.%20Riemer | Jeffrey R. Riemer | Major General Jeffrey R. Riemer is a retired major general in the United States Air Force. He served as the program executive officer for the F-22 Program for the United States Air Force from January 2007 to October 1, 2008. During this time he was responsible for all acquisition activities including the awarding of a $5 billion contract extension for the procurement of an additional 60 aircraft. He previously served as commander of the Air Armament Center from December 2005 to January 2007.
General Riemer joined the Air Force in 1974 after graduating from the University of Florida ROTC program. He served as an F-4 pilot in Japan before being transferred to the Air Training Command, where he became a T-37 instructor pilot and was named Instructor Pilot of the Year.
The general also worked as an F-16 test pilot at General Dynamics and the F-16 Combined Test Force and served as an instructor at the Air Force Test Pilot School. Other assignments have included serving as a military staff assistant for the testing of aircraft and air-to-air missiles in the Office of the Secretary of Defense, program manager for the MC-130H Combat Talon, program director of special programs for the Air-to-Air Joint System Program Office, and program executive officer for command and control, and combat support systems.
The general has commanded the 4953d Test Squadron, Air Force Security Assistance Center and Air Armament Center. He has over 3,000 flying hours in more than 40 different types of aircraft. He retired from the Air Force on October 1, 2008.
Education
1974 Bachelor of Science degree in aerospace engineering, University of Florida
1980 Squadron Officer School, Maxwell AFB, Ala.
1984 Master of Science degree in aeronautical engineering, Air Force Institute of Technology
1986 Air Command and Staff College, by correspondence
1990 Program Management Course, Fort Belvoir, Va.
1994 Air War College, Maxwell AFB, Ala.
1997 Executive Program Management Course, Fort Belvoir, Va.
2001 National Security Decision Making II Seminar, Johns Hopkins University, Baltimore, Md.
2002 Driving Government Performance: Leadership Strategies that Produce Results, John F. Kennedy School of Government, Harvard University, Cambridge, Mass.
2003 Program for Senior Executives in National and International Security, John F. Kennedy School of Government, Harvard University
2003 Chairman of the Joint Chiefs of Staff Level IV Antiterrorism Executive Seminar, Washington, D.C.
2004 Architecture Based Systems Engineering for Senior Leaders, Armed Forces Communications and Electronics Association Educational Foundation, Fairfax, Va.
2005 Program for Senior Managers in Government, John F. Kennedy School of Government, Harvard University, Cambridge, Mass.
2008 Certificate of Process Expertise and Process Mastery, Hammer and Company, Cambridge, Mass.
2008 Certificate of Achievement in Lean Six Sigma Black Belt, Villanova University, Pa.
Assignments
October 1974 – December 1975, student, undergraduate pilot training, Webb AFB, Texas
December 1975 – February 1976, student, lead-in fighter training, Holloman AFB, N.M.
February 1976 – December 1976, student, F-4 Replacement Training Unit, Luke AFB, Ariz.
December 1976 – August 1978, F-4C Wild Weasel pilot, 67th Tactical Fighter Squadron, Kadena Air Base, Japan
August 1978 – December 1978, student, T-37 pilot instructor training, Randolph AFB, Texas
December 1978 – March 1981, T-37 instructor pilot, Columbus Air Force Base, Miss.
March 1981 – June 1982, F-16 acceptance test pilot, Air Force Plant Representative Office, General Dynamics, Fort Worth, Texas
June 1982 – June 1983, graduate student, School of Aeronautical Engineering, Air Force Institute of Technology, Wright-Patterson AFB, Ohio
June 1983 – June 1984, distinguished graduate, USAF Test Pilot School, Edwards AFB, Calif.
June 1984 – June 1986, test pilot instructor, USAF Test Pilot School, Edwards AFB, Calif.
June 1986 – November 1988, experimental test pilot branch chief and operations officer, F-16 Combined Test Force, 6510th Test Wing, Edwards AFB, Calif.
November 1988 – August 1991, military staff assistant, Office of the Deputy Director for Defense Research and Engineering, and Test and Evaluation, Office of the Secretary of Defense, Washington, D.C.
August 1991 – July 1993, commander, 4953rd Test Squadron, Wright-Patterson AFB, Ohio
July 1993 – June 1994, student, Air War College, Maxwell AFB, Ala.
June 1994 – July 1995, MC-130H Combat Talon program manager, Special Operations Developmental Systems Office, Wright-Patterson AFB, Ohio
July 1995 – September 1996, chief, F-16 Programs Division, F-16 System Program Office, Wright-Patterson AFB, Ohio
September 1996 – February 1998, program director of special programs, Air-to-Air Joint SPO, Eglin AFB, Fla.
February 1998 – June 2000, system program director, F-16 SPO, Wright-Patterson AFB, Ohio
June 2000 – December 2000, program executive officer, Command and Control Programs, Headquarters U.S. Air Force, Washington, D.C.
January 2001 – February 2002, program executive officer for command and control, and combat support systems, Headquarters U.S. Air Force, Washington, D.C.
February 2002 – July 2004, commander, Air Force Security Assistance Center, Headquarters Air Force Materiel Command, Wright-Patterson AFB, Ohio
July 2004 – December 2005, director of operations, Headquarters AFMC, Wright-Patterson AFB, Ohio
December 2005 – January 2007, commander, Air Armament Center, and Air Force Program Executive Officer for Weapons, AFMC, Eglin AFB, Fla.
January 2007 – September 2008, Air Force program executive officer for the F-22 Program, Office of the Assistant Secretary of the Air Force for Acquisition, Headquarters U.S. Air Force, Washington, D.C.
Flight information
Rating: Command pilot and test pilot
Flight hours: 3,000
Aircraft flown: F-4, F-16, A-37, T-37, A-10, T-38, U-6, NT-39, N/C-141, and X-29
Major awards and decorations
Promotion dates
References
External links
Living people
Recipients of the Air Force Distinguished Service Medal
Recipients of the Legion of Merit
Recipients of the Air Medal
1952 births |
65028337 | https://en.wikipedia.org/wiki/Drovorub | Drovorub | Drovorub (, "woodcutter") is a software toolkit for developing malware for the Linux operating system. It was created by the 85th Main Special Service Center, a unit of the Russian GRU often referred to as APT28.
Drovorub has a sophisticated modular architecture, containing an implant coupled with a kernel module rootkit, a file transfer and port forwarding tool, and a command and control server. Drovorub has been described as a "Swiss-army knife for hacking Linux".
The U.S. government report that first identified Drovorub recommends the use of UEFI Secure Boot and Linux's native kernel module signing facility to resist Drovorub attacks.
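As an illustration of those two mitigations, the minimal Python sketch below reads the standard Linux interfaces that expose the Secure Boot state and the kernel's module signature-enforcement flag. The efivarfs path and sysfs parameter used here follow mainline kernel conventions, but their availability varies by distribution, firmware and kernel configuration, so this is a hedged illustration of the recommended checks rather than a detection tool for Drovorub itself.

#!/usr/bin/env python3
"""Illustrative check of two mitigations recommended against Drovorub:
UEFI Secure Boot and enforced kernel module signing. Paths follow
mainline Linux conventions and may not exist on every system."""

from pathlib import Path

# EFI variable exposing the Secure Boot state (payload's last byte: 1 = on).
SECUREBOOT_VAR = Path(
    "/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c")
# Kernel flag: "Y" when unsigned modules are rejected (module.sig_enforce).
SIG_ENFORCE = Path("/sys/module/module/parameters/sig_enforce")

def secure_boot_enabled() -> bool:
    try:
        data = SECUREBOOT_VAR.read_bytes()
    except OSError:
        return False  # not an EFI boot, or efivarfs not mounted
    # efivarfs prepends a 4-byte attribute header to the variable's value
    return bool(data) and data[-1] == 1

def module_signing_enforced() -> bool:
    try:
        return SIG_ENFORCE.read_text().strip() == "Y"
    except OSError:
        return False  # kernel built without module signature support

if __name__ == "__main__":
    print("UEFI Secure Boot enabled:       ", secure_boot_enabled())
    print("Kernel module signing enforced: ", module_signing_enforced())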
References
Malware toolkits
Hacking in the 2020s |
2568246 | https://en.wikipedia.org/wiki/Coding%20Accuracy%20Support%20System | Coding Accuracy Support System | The Coding Accuracy Support System (CASS) enables the United States Postal Service (USPS) to evaluate the accuracy of software that corrects and matches street addresses. CASS certification is offered to all mailers, service bureaus, and software vendors that would like the USPS to evaluate the quality of their address-matching software and improve the accuracy of their ZIP+4, carrier route, and five-digit coding.
For software vendors and service bureaus, CASS Certification must be renewed annually with the USPS to meet current CASS Certification cycle requirements.
CASS Certified products are listed in USPS literature and on its web site. CASS software will correct and standardize addresses. It will also add missing address information, such as ZIP codes, cities, and states, to ensure the address is complete. Starting with 2007 Cycle L, CASS software will also perform delivery point validation to verify whether an address is deliverable and check against the USPS Locatable Address Conversion System to update addresses that have been renamed or renumbered.
A correct address saves the Postal Service time, money and manpower by reducing the volume of 1) non-deliverable mail; 2) unsorted mail; 3) mail that is deliverable, but requires extra effort to determine the proper location to which it should be delivered. Mailers who use CASS software to check the addresses of their mailing may be able to qualify for discounted postage rates from the USPS.
An example of what CASS software will correct in an address:
The input of:
1 MICROWSOFT
REDMUND WA
Produces the output of:
1 MICROSOFT WAY
REDMOND WA 98052-8300
Here the street and city name misspellings have been corrected; street suffix, ZIP code and ZIP+4 add-on have been added; and, in this case, the address was determined to be the location of a business.
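A rough sketch of what this kind of correction might look like programmatically is shown below. It is purely illustrative: the reference table, the fuzzy-matching logic and the function names are hypothetical stand-ins and are not part of any CASS-certified product or USPS interface, which match against the full USPS address database with far more sophisticated parsing.

import difflib
from dataclasses import dataclass

# Hypothetical, tiny reference table; real CASS software matches against
# the full USPS address database rather than a hard-coded dictionary.
REFERENCE = {("1", "MICROSOFT WAY", "REDMOND", "WA"): "98052-8300"}
KNOWN_STREETS = ["MICROSOFT WAY"]
KNOWN_CITIES = ["REDMOND"]

@dataclass
class StandardizedAddress:
    line1: str
    city: str
    state: str
    zip_plus4: str

def closest(value, candidates):
    """Naive fuzzy match standing in for real address-matching logic."""
    hits = difflib.get_close_matches(value.upper(), candidates, n=1, cutoff=0.6)
    return hits[0] if hits else value.upper()

def standardize(number, street, city, state):
    street_std = closest(street, KNOWN_STREETS)
    city_std = closest(city, KNOWN_CITIES)
    zip_plus4 = REFERENCE.get((number, street_std, city_std, state.upper()), "")
    return StandardizedAddress(f"{number} {street_std}", city_std,
                               state.upper(), zip_plus4)

print(standardize("1", "MICROWSOFT", "REDMUND", "WA"))
# -> StandardizedAddress(line1='1 MICROSOFT WAY', city='REDMOND',
#                        state='WA', zip_plus4='98052-8300')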
In addition to an updated address, CASS software can also return descriptive information about the address. The information falls into two categories:
Whether the address was successfully processed and, if not, why it failed;
Information on how to deliver the mailing.
References
CASS Certification Requirements, A Mailer's Guide
External links
USPS CASS web site
List of CASS Certified Software Vendors
United States Postal Service |
46879706 | https://en.wikipedia.org/wiki/Ohio%20Central%20Region%20defunct%20athletic%20conferences | Ohio Central Region defunct athletic conferences | This is a list of former high school athletic conferences in the Central Region of Ohio, as designated by the OHSAA. If a conference had members that span multiple regions, the conference is placed in the article of the region most of its former members hail from. Because the names of localities and their corresponding high schools do not always match and because there is often a possibility of ambiguity with respect to either the name of a locality or the name of a high school, the following table gives both in every case, with the locality name first, in plain type, and the high school name second in boldface type. The school's team nickname is given last.
Buckeye Athletic Conference
Bexley Lions (1991-2003, to Mid-State)
Sunbury Big Walnut Golden Eagles (1991–97, to Ohio Capital)
Grandview Heights Bobcats (1991-2003, to Mid-State)
Plain City Jonathan Alder Pioneers (1991-2003, to Mid-Ohio in 2013)
Johnstown-Monroe Johnnies (1991–94, to Mid-Buckeye)
Hebron Lakewood Lancers (1991-2003, to Mid-State)
Newark Licking Valley Panthers (1991-2003, to Mid-State)
London Red Raiders (1991-2003, to South Central Ohio)
London Madison-Plains Golden Eagles (1991–93, 1997-2003, to South Central Ohio)
Lewis Center Olentangy Braves (1991–97, to Ohio Capital)
Utica Redskins (1991–99, to Mid-Buckeye)
West Jefferson Roughriders (1991-2003 to Mid-State in 2006)
Washington Court House Washington Blue Lions (1993-2003, to South Central Ohio)
Greenfield McClain Tigers (1999-2001, to Southern Buckeye)
Gahanna Columbus Academy Vikings (2001–03, to Mid-State)
Milford Center Fairbanks Panthers (2001–03, to Northwest Central)
Washington Court House Miami Trace Panthers (2001–03, to South Central Ohio)
Whitehall-Yearling Rams (2001–03, to Mid-State)
Buckeye Central Conference
Findlay Trojans (1987–1995, to Great Lakes League)
Fremont Ross Little Giants (1987–1991, to Great Lakes League)
Lancaster Golden Gales (1987–1995, to Ohio Capital Conference 1997)
Newark Wildcats (1987–1995, to Ohio Capital Conference)
Zanesville Blue Devils (1987–1995, to Ohio Valley Athletic Conference)
The league was created after the Buckeye Conference folded. It was a less-than-ideal arrangement for member schools, which had to bear extensive travel. Fremont Ross was able to gain membership in the now-defunct Great Lakes League in 1991, leaving the four remaining schools to struggle on. Findlay eventually gained membership in the GLL as Zanesville left for the closer OVAC confederation, effectively ending league competition. Lancaster and Newark went from leagues struggling to retain members to the Ohio Capital Conference, the largest proper conference in the state.
Central Buckeye League
There were two versions of the CBL. The first ran from 1929 to 1966. The second ran from 1976 to 1991, when, after joining with schools from the Licking County League, it was rebranded as the Buckeye Athletic Conference (BAC).
First Version (1929-1966)
Bexley Lions (1929–66, to Franklin County League)
Grandview Heights Bobcats (1929–66, to Franklin County League)
Granville Blue Aces (1929–30)
Westerville Wildcats (1929–50, to Mid-6 League)
Marysville Monarchs (1930–45, to Mid-State League (1945–50))
Circleville Tigers (1931–39, to South Central Ohio League)
Delaware Hayes Pacers (1932–66, Delaware Willis Bobcats before 1963)
Upper Arlington Golden Bears (1939–66, to Franklin County League)
Worthington Cardinals (1939–45, to Franklin County League, 1958–66, to Franklin County League)
Mount Vernon Yellow Jackets (1947–66)
Urbana Hillclimbers (1950–66)
Whitehall-Yearling Rams (1958–66, to Metropolitan League)
Second Version (1976-1991)
During this period, the league would play as one division from 1977 through 1980. The league would split its teams beginning in 1981. The two 6 team divisions were roughly divided by enrollment with the larger schools making up the Buckeye Division and the smaller schools forming the Central Division.
After New Albany left the league in 1984, Olentangy moved to the Central Division, with new member London replacing Olentangy in the Buckeye. The divisions dropped to five members each for the 1990-91 school year as Buckeye Valley and North Union left.
Plain City Jonathan Alder Pioneers1 (1976–91, to BAC)
Bexley Lions (1976–91, to BAC)
Sunbury Big Walnut Eagles2 (1976–91, to BAC)
Gahanna Columbus Academy Vikings3 (Boys only, 1976–91)
Columbus School for Girls Unicorns (Girls only, 1976–91)
Marysville Monarchs4 (1976–91, to Ohio Capital Conference)
New Albany Golden Eagles (1976–84, to Mid-Buckeye League)
Lewis Center Olentangy Braves4 (1976–91, to BAC)
Radnor Buckeye Valley Barons5 (1977–90, to Mid-Ohio Athletic Conference)
Dublin Shamrocks (1977–91, to Ohio Capital Conference)
Grandview Heights Bobcats (1977–91, to BAC)
Richwood North Union Wildcats (1977–90, to Mid-Ohio Athletic Conference)
West Jefferson Roughriders (1977–91, to BAC)
London Red Raiders (1984–91, to BAC)
Concurrent with CBL and Darby Valley League 1976-77.
Concurrent with CBL and Mid-Ohio League 1976-77.
Concurrent with CBL and Mid-Buckeye League 1976-77.
Concurrent with CBL and Metropolitan League 1976-77.
Concurrent with CBL and Mid-Ohio League 1977-78.
Central Ohio League
One of the first large-school conferences in Central and East Ohio, its widespread geography led to membership instability through its lifespan.
Cambridge Bobcats (1926–58, to Ohio Valley Athletic Conference 1960)
Coshocton Redskins (1926–61, to Cardinal Conference)
Lancaster Golden Gales (1926–85, to Buckeye Central Conference 1987)
Mount Vernon Yellow Jackets (1926–35, 1945–47, to Central Buckeye League)
Newark Wildcats (1926–85, to Buckeye Central Conference 1987)
Westerville Wildcats (1926–29, to Central Buckeye League)
Zanesville Blue Devils (1926–29, 1931–85, Lash until 1954, to Buckeye Central Conference 1987)
Marietta Tigers (1936–85, to Southeast Ohio Athletic League)
Chillicothe Cavaliers (1941–44, to South Central Ohio League, 1948–76, to Ohio Capital Conference)
Dover Tornadoes (1941–57, to Cardinal Conference 1960)
Ironton Tigers (1963–68, to Southeast Ohio Athletic League)
Upper Arlington Golden Bears (1968–81, to Ohio Capital Conference)
Grove City Greyhounds (1976–81, to Ohio Capital Conference)
Delaware County League
Ashley Elm Valley Aces2,5 (Ashley before 1952, 192?-63, consolidated into Buckeye Valley)
Bellepoint Bears (192?-52, consolidated into Scioto Valley)
Galena Golden Eagles (192?-50, consolidated into Big Walnut)
Harlem Hawks (192?-50, consolidated into Big Walnut)
Hyatts Hornets (192?-53, consolidated into Olentangy)
Kilbourne Brown Bears(192?-52, consolidated into Elm Valley)
Lewis Center Olentangy Braves6 (Lewis Center before 1953, 192?-63, to Mid-Ohio)
Ostrander Scioto Valley Rockets7 (Ostrander before 1953, 192?-63, consolidated into Buckeye Valley)
Powell Pirates (192?-53, consolidated into Olentangy)
Radnor Trojans (192?-63, consolidated into Buckeye Valley)
Sunbury Wildcats1,3 (192?-50, consolidated into Big Walnut)
West Berlin Warriors (192?-53, consolidated into Olentangy)
Sunbury Big Walnut Golden Eagles4,6 (1950-63, to Mid-Ohio)
Concurrent with Mid-State League 1946-49.
Concurrent with Mid-Buckeye League 1948-53.
Concurrent with Mid-Buckeye League 1948-50.
Concurrent with Mid-Buckeye League 1950-54
Concurrent with Mid-Ohio Conference 1953-63.
Concurrent with Mid-Ohio Conference 1954-63.
Concurrent with Mid-Ohio Conference 1956-63.
Franklin County League
Organized with the beginning of the state basketball tournament in 1922, the league had a fairly fluid membership, as schools left for other regional and power leagues and often returned again. The league was directly hindered by the creation of the Ohio Capital Conference: two of its five remaining members left for that league in 1968, and the other schools left to fill spots in other conferences.
Bexley Lions (1922-29, to Central Buckeye League, 1966–68, to Mid-8 League)
Gahanna Columbus Academy Vikings (1922-49, to Mid-State, 1957-66, to Mid-Buckeye)
Canal Winchester Indians (1922-57, to Mid-State League, 1964–66, to Mid-State League)
Dublin Shamrocks (1922-68, to Metropolitan League)
Columbus Franklin Heights Falcons (1922-58, to South Central Ohio League)
Grandview Heights Bobcats (1922-29, to Central Buckeye League, 1966–68, to Mid-8 League)
Grove City Greyhounds (1922-50, to Mid-6 League)
Groveport-Madison Cruisers (1922-58, to Mid-8 League)
Columbus Hamilton Township Rangers (1922-64, to South Suburban League)
Hilliards Wildcats (1922-50, to Mid-6 League)
Gahanna Lincoln Golden Lions (1922-58, to Mid-8 League)
Columbus Mifflin Cowpunchers (1922-58, to Mid-8 League)
New Albany Eagles (1922-65, to Mid-Buckeye League)
Reynoldsburg Raiders (1922-66, to Metropolitan League)
Upper Arlington Golden Bears (1922-39, to Central Buckeye League, 1966–68, to Ohio Capital Conference)
Westerville Wildcats (1922-26, to Central Ohio League)
Whitehall-Yearling Rams (1922-58, to Central Buckeye League)
Worthington Cardinals (1922-39, to Central Buckeye League, 1945–50, to Mid-6 League, 1966-68, to Ohio Capital Conference)
West Jefferson Roughriders (1963–68, to Metropolitan League)
Knox County League
Amity Aces2 (192?-58, consolidated into Mount Vernon)
Bladensburg Blades3 (192?-61, to Knox-Morrow)
Centerburg Trojans (192?-48, to Mid-Buckeye League)
Danville1 Blue Devils (192?-55, to Mid-Buckeye League)
Fredericktown Freddies (192?-61, to Johnny Appleseed)
Gambier Pirates2 (192?-58, consolidated into Mount Vernon)
Howard Bulldogs3 (192?-61, to Knox-Morrow)
Concurrent with Mid-Buckeye League 1954-55.
Concurrent with Knox-Morrow League 1955-58.
Concurrent with Knox-Morrow League 1955-61.
Knox-Morrow League
Formed in 1955, this league was created by smaller schools in the two counties to solve scheduling issues with their dwindling county leagues, in which they all remained. The league folded after the 1961-62 school year, as consolidation left only two schools.
Amity Aces1 (1955-58, consolidated into Mount Vernon)
Bladensburg Blades2 (1955-62, consolidated into Howard)
Chesterville Eagles3 (1955-62, consolidated into Highland)
Gambier Pioneers1 (1955-58, consolidated into Mount Vernon)
Howard Bulldogs2 (1955-62, became East Knox and joined Mid-Buckeye in 1963)
Johnsville Johnnies3 (1955-62, rejoined Morrow County League)
Marengo Wildcats3 (1955-62, consolidated into Highland)
Sparta Spartans3 (1955-60, consolidated into Marengo)
Concurrent with KCL throughout membership.
Concurrent with KCL until 1961.
Concurrent with MCL throughout membership.
Marion County League
Caledonia Scots (pre-1931-62, consolidated into River Valley)
Claridon Hornets (pre-1931-62, consolidated into River Valley)
Green Camp Panthers (pre-1931-62, consolidated into Elgin)
Kirkpatrick Cougars (pre-1931-48, consolidated into Morral)
Larue Indians (pre-1931-62, consolidated into Elgin)
Martel Eagles (pre-1931-62, consolidated into River Valley)
Meeker Trojans (pre-1931-57, consolidated into Ridgedale)
Morral Wildcats (pre-1931-57, consolidated into Ridgedale)
New Bloomington Rams (pre-1931-62, consolidated into Elgin)
Marion Pleasant Pandas (before 1958)/Spartans (pre-1931-69, to North Central)1
Prospect Bulldogs (pre-1931-62, consolidated into Elgin)
Waldo Cardinals (pre-1931-62, consolidated into River Valley)
Morral Ridgedale Rockets (1957–69, to North Central)1
Green Camp Elgin Comets (1962–69, to North Central)1
Caledonia River Valley Vikings (1962–69, to North Central)1
Teams played concurrently in the MCL and NCC from 1962 to 1969.
Metropolitan League (Columbus Area)
This conference started as the South Suburban League in 1964, then changed names as it expanded two years later. Weakened by the beginning of the Ohio Capital Conference in 1968, the conference finally folded in 1977, with the beginning of the second Central Buckeye League.
Columbus Franklin Heights Falcons (1964–77)
Columbus Hamilton Township Rangers (1964–77)
Grove City Pleasant View Panthers (1964–68, to Ohio Capital Conference)
Ashville Teays Valley Vikings (1964–77, to South Central Ohio League)
Reynoldsburg Raiders (1966–68, to Ohio Capital Conference)
Whitehall-Yearling Rams (1966–68, to Ohio Capital Conference)
Dublin Shamrocks (1968–77, to Central Buckeye League)
West Jefferson Roughriders (1968–77, to Central Buckeye League)
Lewis Center Olentangy Braves1 (1970–77, to Central Buckeye League)
Grandview Heights Bobcats (1972–77, to Central Buckeye League)
Marysville Monarchs1 (1972–77, to Central Buckeye League)
Concurrent with both ML and CBL during 1976-77.
Mid-8 League
The league operated as the Mid-6 League from 1950 to 1958. Like the Franklin County League and the Metropolitan League, it was weakened by the Ohio Capital Conference's creation, and it finally folded six years later.
Grove City Greyhounds (1950–74, to Central Ohio League 1976)
Hilliard Wildcats (1950–74, to Ohio Capital Conference)
London Red Raiders (1950–74, to Central Buckeye Conference)
Marysville Monarchs (1950–72, to Metropolitan League)
Westerville Wildcats (1950–68, to Ohio Capital Conference)
Worthington Cardinals (1950–58, to Franklin County League)
Groveport-Madison Cruisers (1958–74, to Ohio Capital Conference)
Gahanna Lincoln Golden Lions (1958–68, to Ohio Capital Conference)
Columbus Mifflin Cowpunchers (1958–73, to Columbus City League)
Bexley Lions (1968–74, to Central Buckeye League 1976)
Grandview Heights Bobcats (1968–72, to Metropolitan League)
Mid-Ohio Conference
The Mid-Ohio Conference was founded in 1953 and remained a fairly stable league until 1977, when three teams left to join the Central Buckeye League. The league continued as an eight-team league for much of the rest of its existence, until 1990, when the four Morrow County schools left to join the newly formed Mid-Ohio Athletic Conference. Three other teams joined the North Central Conference, while the remaining school, Marion Catholic, stayed independent for a number of years.
Cardington-Lincoln Pirates2 (1953–1990, to Mid-Ohio Athletic Conference)
Ashley Elm Valley Aces3 (1953–1963, consolidated into Buckeye Valley)
Marion Catholic Fighting Irish (1953–1990, St. Mary until 1957, to Northwest Central Conference 2001)
Mount Gilead Indians1 (1953–1990, to Mid-Ohio Athletic Conference)
Richwood Tigers1,4 (1953–1965, consolidated into North Union)
Delaware Olentangy Braves3 (1954–1970, to Metropolitan League)
Sunbury Big Walnut Golden Eagles3,6 (1954–1977, to Central Buckeye League)
Ostrander Scioto Valley Rockets3 (1956–1963, consolidated into Buckeye Valley)
Delaware Buckeye Valley Barons (1963–1977, to Central Buckeye League)
Sparta Highland Scots (1963–1990, to Mid-Ohio Athletic Conference)
Richwood North Union Wildcats (1965–1977, to Central Buckeye League)
Galion Northmor Golden Knights (1970–1990, to Mid-Ohio Athletic Conference)
Centerburg Trojans (1977–1981, to Mid-Buckeye Conference)
Crestline Bulldogs (1977–1990, to North Central Conference)
Fredericktown Freddies (1977–1990, to North Central Conference)
Loudonville Redbirds (1981–1984, to Mohican Area 1989)
Ontario Warriors (1983–1990, to North Central Conference)
Concurrent with Mid-Buckeye League 1953-54.
Concurrent with Morrow County League until 1963.
Concurrent with Delaware County League until 1963.
Concurrent with Union County League until 1965.
Concurrent with Central Buckeye League 1976-77.
Morrow County League
Another of the county-wide small school conferences, the MCL ended in 1963 as two of the three remaining schools went to the Mid-Ohio Conference, where the third would land a few years later.
Cardington-Lincoln Pirates1,2 (192?-63, to Mid-Ohio Conference
Chesterville Eagles (192?-62, consolidated into Highland)
Edison Tigers(192?-60, consolidated into Mount Gilead)
Iberia Presidents (192?-63, consolidated into Northmor)
Johnsville Johnnies (192?-63, consolidated into Northmor)
Marengo Wildcats (192?-62, consolidated into Highland)
Mount Gilead Indians (192?-45, to Mid-State League (1945-50))
Sparta Spartans (192?-60, consolidated into Marengo)
Sparta Highland Scots (1962-63, to Mid-Ohio Conference)
Concurrent with MCL and Mid-Buckeye League 1948-54.
Concurrent with MCL and Mid-Ohio Conference 1953-63.
Union County League
Broadway Bears (192?-50, consolidated into Northwestern)
Byhalia-York Falcons (192?-65, Byhalia before 1950, consolidated into North Union)
Unionville Center Darby Tigers (192?-61, consolidated into Fairbanks)
New California Jerome Township Knights (192?-50, consolidated into Dublin)
Magnetic Springs Resorters (192?-63, consolidated into Richwood)
Raymond Rangers (192?-50, consolidated into Northwestern)
Richwood Tigers1,2,3 (192?-65, consolidated into North Union)
Milford Center Union Wolves (192?-61, consolidated into Fairbanks)
Watkins Warriors (192?-50, consolidated into Marysville)
Somersville York Blue Bombers (192?-50, consolidated into Byhalia-York)
Raymond Northwestern Bears (1950-63, consolidated into Marysville)
Milford Center Fairbanks Panthers4 (1961-65, to Logan County League)
Concurrent with Mid-State League (1945-50) for duration of that league's existence
Concurrent with Mid-Buckeye League 1950-54.
Concurrent with Mid-Ohio Conference 1953-65.
Concurrent with Logan County League 1961-65.
See also
Ohio High School Athletic Association
Ohio High School Athletic Conferences
OHSAA Central Region athletic conferences
Notes and references |
3887690 | https://en.wikipedia.org/wiki/Electronic%20waste | Electronic waste | Electronic waste or e-waste describes discarded electrical or electronic devices. Used electronics which are destined for refurbishment, reuse, resale, salvage recycling through material recovery, or disposal are also considered e-waste. Informal processing of e-waste in developing countries can lead to adverse human health effects and environmental pollution.
Electronic scrap components, such as CPUs, contain potentially harmful materials such as lead, cadmium, beryllium, or brominated flame retardants. Recycling and disposal of e-waste may involve significant risk to health of workers and their communities.
Definition
E-waste or electronic waste is created when an electronic product is discarded at the end of its useful life. The rapid expansion of technology and a consumption-driven society result in the creation of a very large amount of e-waste.
In the US, the United States Environmental Protection Agency (EPA) classifies waste into ten categories:
Large household appliances, including cooling and freezing appliances
Small household appliances
IT equipment, including monitors
Consumer electronics, including televisions
Lamps and luminaires
Toys
Tools
Medical devices
Monitoring and control instruments and
Automatic dispensers
These include used electronics which are destined for reuse, resale, salvage, recycling, or disposal as well as re-usables (working and repairable electronics) and secondary raw materials (copper, steel, plastic, or similar). The term "waste" is reserved for residue or material which is dumped by the buyer rather than recycled, including residue from reuse and recycling operations, because loads of surplus electronics are frequently commingled (good, recyclable, and non-recyclable). Several public policy advocates apply the terms "e-waste" and "e-scrap" broadly to all surplus electronics. Cathode ray tubes (CRTs) are considered one of the hardest types to recycle.
Using a different set of categories, the Partnership on Measuring ICT for Development defines e-waste in six categories:
Temperature exchange equipment (such as air conditioners, freezers)
Screens, monitors (TVs, laptops)
Lamps (LED lamps, for example)
Large equipment (washing machines, electric stoves)
Small equipment (microwaves, electric shavers) and
Small IT and telecommunication equipment (such as mobile phones, printers)
Products in each category vary in longevity profile, impact, and collection methods, among other differences.
CRTs have a relatively high concentration of lead and phosphors (not to be confused with phosphorus), both of which are necessary for the display. The United States Environmental Protection Agency (EPA) includes discarded CRT monitors in its category of "hazardous household waste" but considers CRTs that have been set aside for testing to be commodities if they are not discarded, speculatively accumulated, or left unprotected from weather and other damage. CRT devices are often confused with DLP rear-projection TVs, although the two have different recycling processes due to the materials of which they are composed.
The EU and its member states operate a system via the European Waste Catalogue (EWC) - a European Council Directive, which is interpreted into "member state law". In the UK, this is in the form of the List of Wastes Directive. However, the list (and EWC) gives a broad definition (EWC Code 16 02 13*) of what is hazardous electronic waste, requiring "waste operators" to employ the Hazardous Waste Regulations (Annex 1A, Annex 1B) for refined definition. Constituent materials in the waste also require assessment via the combination of Annex II and Annex III, again allowing operators to further determine whether a waste is hazardous.
Debate continues over the distinction between "commodity" and "waste" electronics definitions. Some exporters are accused of deliberately leaving difficult-to-recycle, obsolete, or non-repairable equipment mixed in loads of working equipment (though this may also come through ignorance, or to avoid more costly treatment processes). Protectionists may broaden the definition of "waste" electronics in order to protect domestic markets from working secondary equipment.
The high value of the computer recycling subset of electronic waste (working and reusable laptops, desktops, and components like RAM) can help pay the cost of transportation for a larger number of worthless pieces than what can be achieved with display devices, which have less (or negative) scrap value. A 2011 report, "Ghana E-waste Country Assessment", found that of 215,000 tons of electronics imported to Ghana, 30% were brand new and 70% were used. Of the used product, the study concluded that 15% was not reused and was scrapped or discarded. This contrasts with published but uncredited claims that 80% of the imports into Ghana were being burned in primitive conditions.
Quantity
E-waste is considered the "fastest-growing waste stream in the world", with 44.7 million tonnes generated in 2016, equivalent in weight to about 4,500 Eiffel Towers. In 2018, an estimated 50 million tonnes of e-waste was reported, leading the UN to describe the problem as a "tsunami of e-waste". Its value is at least $62.5 billion annually.
Rapid changes in technology, changes in media (tapes, software, MP3), falling prices, and planned obsolescence have resulted in a fast-growing surplus of electronic waste around the globe. Technical solutions are available, but in most cases a legal framework, a collection system, logistics, and other services need to be implemented before a technical solution can be applied.
Display units (CRT, LCD, LED monitors), processors (CPU, GPU, or APU chips), memory (DRAM or SRAM), and audio components have different useful lives. Processors most frequently become outdated (as software is no longer optimized for them) and are more likely to become "e-waste", while display units are most often replaced while still working, without repair attempts, due to changes in wealthy nations' appetite for new display technology. This problem could potentially be addressed by modular smartphones (such as the Phonebloks concept). Such phones are more durable and allow individual parts to be replaced, making them more environmentally friendly; being able to replace only the broken part of a phone reduces e-waste.
An estimated 50 million tons of e-waste are produced each year. The USA discards 30 million computers each year, and 100 million phones are disposed of in Europe each year. The Environmental Protection Agency estimates that only 15–20% of e-waste is recycled; the rest of these electronics go directly into landfills and incinerators.
In 2006, the United Nations estimated the amount of worldwide electronic waste discarded each year to be 50 million metric tons. According to a report by UNEP titled, "Recycling – from e-waste to Resources," the amount of e-waste being produced – including mobile phones and computers – could rise by as much as 500 percent over the next decade in some countries, such as India. The United States is the world leader in producing electronic waste, tossing away about 3 million tons each year. China already produces about 2.3 million tons (2010 estimate) domestically, second only to the United States. And, despite having banned e-waste imports, China remains a major e-waste dumping ground for developed countries.
Society today revolves around technology, and the constant demand for the newest and most high-tech products contributes to a mass amount of e-waste. Since the invention of the iPhone, cell phones have become the top source of e-waste products. Electrical waste contains hazardous but also valuable and scarce materials; up to 60 elements can be found in complex electronics. As of 2013, Apple had sold over 796 million iDevices (iPod, iPhone, iPad). Cell phone companies make phones that are not designed to last, so that consumers will purchase new ones; these products are given short lifespans because companies know that consumers will want, and buy, new products. In the United States, an estimated 70% of heavy metals in landfills comes from discarded electronics.
While there is agreement that the number of discarded electronic devices is increasing, there is considerable disagreement about the relative risk (compared to automobile scrap, for example), and strong disagreement whether curtailing trade in used electronics will improve conditions, or make them worse. According to an article in Motherboard, attempts to restrict the trade have driven reputable companies out of the supply chain, with unintended consequences.
E-waste data 2016
In 2016, Asia generated the largest volume of e-waste (18.2 Mt), followed by Europe (12.3 Mt), the Americas (11.3 Mt), Africa (2.2 Mt), and Oceania (0.7 Mt). Although it produced the smallest total amount, Oceania was the largest generator of e-waste per capita (17.3 kg/inhabitant), with only about 6% of its e-waste reported as collected and recycled. Europe was the second-largest generator per capita, with an average of 16.6 kg/inhabitant, but achieved the highest collection rate (35%). The Americas generated 11.6 kg/inhabitant and collected only 17% of the e-waste produced in the region, comparable to the collection rate in Asia (15%); Asia, however, generated less e-waste per capita (4.2 kg/inhabitant). Africa generated only 1.9 kg/inhabitant, and limited information is available on its collection rate. The report provides regional breakdowns for Africa, the Americas, Asia, Europe, and Oceania. Only 41 countries compile official e-waste statistics, which partly explains the uncertainty about the overall volume of e-waste generated; for 16 other countries, e-waste volumes were estimated from research. The fate of a considerable bulk of the e-waste (34.1 Mt) is unknown. In countries where no national e-waste legislation is in place, e-waste is likely treated as other or general waste and is landfilled or recycled along with metal or plastic scrap. There is a high risk that the pollutants it contains are not dealt with properly, or that it is handled by the informal sector and recycled without adequately protecting workers or preventing the release of contaminants. Although the volume of e-waste is rising, a growing number of countries are adopting e-waste regulation; national e-waste legislation now covers 66% of the world's population, up from 44% in 2014.
E-waste data 2019
In 2019, an enormous volume of e-waste (53.6 Mt, an average of 7.3 kg per capita) was generated globally. This is projected to increase to 74 Mt by 2030. Asia remained the largest contributor at 24.9 Mt, followed by the Americas (13.1 Mt), Europe (12 Mt), and Africa and Oceania at 2.9 Mt and 0.7 Mt, respectively. In per-capita generation, Europe came first with 16.2 kg, Oceania was the second-largest generator at 16.1 kg, followed by the Americas; Africa generated the least e-waste per capita, at 2.5 kg. In terms of collection and recycling of this waste, Europe ranked first (42.5%) and Asia came second (11.7%); the Americas and Oceania followed (9.4% and 8.8% respectively), and Africa trailed at 0.9%. Of the 53.6 Mt of e-waste generated globally, 9.3 Mt was formally documented as collected and recycled, and the fate of the remaining 44.3 Mt is uncertain, with its whereabouts and environmental impact varying across different regions of the world. However, the number of countries with national e-waste legislation, regulation or policy has increased since 2014, from 61 to 78. A large proportion of undocumented commercial and domestic e-waste is mixed with other waste streams such as plastic and metal waste, meaning that fractions which are easily recyclable may be recycled under inferior conditions, without depollution and without recovery of all valuable materials.
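The 2019 figures quoted above can be cross-checked with a few lines of arithmetic. The minimal Python sketch below simply re-uses the numbers stated in this section (regional totals, the documented collection figure, and the undocumented remainder, all in Mt) and verifies that they are internally consistent; no external data are assumed.

# All values in Mt (million metric tonnes), taken directly from the text above.
regional_generation = {
    "Asia": 24.9,
    "Americas": 13.1,
    "Europe": 12.0,
    "Africa": 2.9,
    "Oceania": 0.7,
}
global_total = 53.6
documented = 9.3   # formally documented as collected and recycled
unknown = 44.3     # fate undocumented

# The regional figures sum to the global total, as do the documented and
# undocumented shares.
assert round(sum(regional_generation.values()), 1) == global_total
assert round(documented + unknown, 1) == global_total

# Share of 2019 e-waste formally collected and recycled.
print(f"{documented / global_total:.1%}")  # 17.4%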
E-waste data 2021
In 2021, an estimated 57.4 Mt of e-waste was generated globally. According to estimates in Europe, where the problem is best studied, 11 of 72 electronic items in an average household are no longer in use or broken. In Europe, an additional 4 to 5 kg of unused electrical and electronic products per person are hoarded each year before being discarded.
E-waste legislative frameworks
The European Union (EU) has addressed the issue of electronic waste by introducing two pieces of legislation. The first, the Waste Electrical and Electronic Equipment Directive (WEEE Directive), came into force in 2003. The main aim of this directive was to regulate and encourage electronic waste recycling and re-use in the member states at that time. It was revised in 2008, coming into force in 2014. The EU has also implemented the Directive on the restriction of the use of certain hazardous substances in electrical and electronic equipment, dating from 2003; this directive was revised in 2012. Among the Western Balkan countries, North Macedonia adopted a Law on Batteries and Accumulators in 2010, followed by a Law on Management of Electrical and Electronic Equipment in 2012. Serbia regulates the management of special waste streams, including electronic waste, through its National Waste Management Strategy (2010-2019). Montenegro adopted a Concessionary Act concerning electronic waste, with the aim of collecting 4 kg of this waste per person annually by 2020. The Albanian legal framework is based on a 2011 draft act on waste from electrical and electronic equipment, which focuses on the design of electrical and electronic equipment. By contrast, Bosnia and Herzegovina still lacks a law regulating electronic waste.
As of October 2019, 78 countries globally have established either a policy, legislation or specific regulation to govern e-waste. However, there is no clear indication that countries are enforcing these regulations. In regions such as Asia and Africa, many policies are not legally binding and are only programmatic. E-waste management policies therefore remain unevenly and incompletely developed around the world.
Solving the e-waste Problem (StEP) initiative
Solving the E-waste Problem (StEP) is a membership organization that is part of the United Nations University and was created to develop solutions to the issues associated with electronic waste. Its members include some of the most prominent players in the production, reuse and recycling of electrical and electronic equipment (EEE), as well as government agencies, NGOs and UN organisations. StEP encourages the collaboration of all stakeholders connected with e-waste, emphasizing a holistic, scientific yet applicable approach to the problem.
Waste electrical and electronic equipment
The European Commission (EC) classifies waste electrical and electronic equipment (WEEE) as waste generated from electrical devices and household appliances such as refrigerators, televisions, and mobile phones. The EU reported a total of 9 million tonnes of such waste in 2005 and estimated 12 million tonnes for 2020. If not managed properly, this electronic waste and the hazardous materials it contains can seriously damage the environment and cause severe health problems. Disposing of these materials requires considerable manpower and properly managed facilities; manufacturing such equipment likewise requires large facilities and natural resources (aluminium, gold, copper, silicon, etc.), contributing to environmental damage and pollution. Considering the impact of WEEE on the environment, the EU has adopted two pieces of legislation: the WEEE Directive, and the RoHS Directive on the use and restriction of hazardous materials in the production of electrical and electronic equipment.
WEEE Directive: This Directive was implemented in February 2003 and focuses on the recycling of electronic waste. It introduced electronic waste collection schemes free of charge to consumers (Directive 2002/96/EC). The EC revised the Directive in December 2008, as e-waste had become the fastest growing waste stream. A recast WEEE Directive was adopted in August 2012 to better control electronic waste and took effect on 14 February 2014 (Directive 2012/19/EU). On 18 April 2017, the EC adopted a common approach for carrying out research and implementing a new regulation to monitor the amount of WEEE, which requires each member state to monitor and report its national market data.
- Annex III to the WEEE Directive (Directive 2012/19/EU): re-examination of the timelines for waste collection and the setting of individual targets.
WEEE Legislation:
- On 4 July 2012, the EC passed legislation on WEEE (Directive 2012/19/EU).
- On 15 February 2014, the EC's revised Directive replaced the old Directive 2002/96/EC.
RoHS Directive: In 2003, the EC implemented legislation not only on waste collection but also on restricting the hazardous materials (cadmium, mercury, hexavalent chromium, polybrominated biphenyls, lead and polybrominated diphenyl ethers) used in the production of electronic and electrical equipment (RoHS Directive 2002/95/EC). This Directive was revised in December 2008 and again in January 2013 (RoHS recast Directive 2011/65/EU). In 2017, the EC made adjustments to the existing Directive following an impact assessment and adopted a new legislative proposal (the RoHS 2 scope review). On 21 November 2017, the European Parliament and the Council published the legislation amending the RoHS 2 Directive in the Official Journal.
European Commission legislation on batteries and accumulators (Batteries Directive)
Each year, nearly 800,000 tonnes of automotive batteries, around 190,000 tonnes of industrial batteries, and roughly 160,000 tonnes of consumer batteries enter the European Union. Batteries and accumulators are among the most commonly used components in household appliances and other battery-powered products in daily life. The key issue is ensuring that this battery waste is collected and recycled properly; if it is not, hazardous materials can be released into the environment and into water resources. Generally, many parts of these batteries and accumulators/capacitors can be recycled without releasing hazardous materials into the environment or contaminating natural resources. The EC has adopted a Directive to control waste from batteries and accumulators, known as the "Batteries Directive", which aims to improve the collection and recycling of battery waste and to limit its impact on the environment. The Directive also supervises and administers the internal market by implementing the required measures.
The Directive restricts the production and marketing of batteries and accumulators that contain hazardous materials, are harmful to the environment, or are difficult to collect and recycle. The Batteries Directive sets targets for the collection, recycling and other recovery activities for batteries and accumulators, and also provides for the labelling of batteries that are environmentally neutral. On 10 December 2020, the EC proposed a new regulation (the Batteries Regulation) on battery waste, which aims to ensure that batteries entering the European market are recyclable, sustainable and non-hazardous.
Legislation:
In 2006, the EC adopted the Batteries Directive; it was revised in 2013.
- On 6 September 2006, the European Parliament and the European Council issued the Directive on waste batteries and accumulators (Directive 2006/66/EC).
- Overview of Batteries and accumulators Legislation
Evaluation of Directive 2006/66/EC (Batteries Directive):
Revision of the Directive was informed by an evaluation process, prompted by the growing use of batteries in communication technologies, household appliances and other small battery-powered products. The increasing demand for renewable energy and for product recycling also led to the European Battery Alliance (EBA) initiative, which aims to oversee the complete value chain for producing improved batteries and accumulators within Europe under the new policy. Although the evaluation process was broadly accepted, some concerns were raised, particularly about managing and monitoring the use of hazardous materials in battery production, the collection of battery waste, and the recycling of battery waste within the Directive. The evaluation produced positive results in areas such as limiting environmental damage, raising awareness of recycling and reusable batteries, and improving the efficiency of the internal market.
However, there are some limitations in the implementation of the Batteries Directive, particularly in collecting battery waste and recovering usable materials from it. The evaluation process highlighted gaps in implementation; the technical complexity of the process and new usage patterns make it difficult for the Directive to keep pace with technological advances. The EC's regulations and guidelines have made the evaluation process more effective. A number of stakeholders were invited to take part in the evaluation and asked to provide their views and ideas to improve the evaluation and information-gathering process. On 14 March 2018, stakeholders and members of the association participated and provided information on their findings in support of the Evaluation Roadmap.
European Union regulations on e-waste
The European Union (EU) has addressed the e-waste issue by adopting several directives. In 2011, an amendment was made to the 2003 Directive 2002/95/EC on the restriction of the use of hazardous materials in the design and manufacture of EEE. The 2011 Directive, 2011/65/EU, gives as its motivation for more specific restrictions on the use of hazardous materials in the design and manufacture of electronic and electrical devices the disparity between EU Member State laws and the need to set forth rules protecting human health and providing for the environmentally sound recovery and disposal of WEEE (2011/65/EU, (2)). The Directive lists several substances subject to restriction. The maximum concentration values tolerated by weight in homogeneous materials for the restricted substances are: lead (0.1%), mercury (0.1%), cadmium (0.01%), hexavalent chromium (0.1%), polybrominated biphenyls (PBB) (0.1%) and polybrominated diphenyl ethers (PBDE) (0.1%). If substitution is technologically feasible and a substitute is available, use of the substitute is required.
There are, however, exemptions for cases in which substitution is not possible from a scientific and technical point of view. The granting and duration of such exemptions should take into account the availability of substitutes and the socioeconomic impact of substitution. (2011/65/EU, (18))
EU Directive 2012/19/EU regulates WEEE and lays down measures to protect the environment and human health by preventing or reducing the adverse impacts of the generation and management of WEEE (2012/19/EU, (1)). The Directive takes a specific approach to the product design of EEE. Article 4 requires Member States to encourage design and production processes, as well as cooperation between producers and recyclers, that facilitate the re-use, dismantling and recovery of WEEE, its components and materials (2012/19/EU, (4)). Member States should take measures to ensure that producers of EEE apply eco-design, meaning production processes that do not prevent the later re-use of WEEE. The Directive also obliges Member States to ensure the separate collection and transport of different kinds of WEEE. Article 8 sets out the requirements for the proper treatment of WEEE; at a minimum, all fluids must be removed from every item of WEEE. The covered categories and recovery targets are listed below.
By Annex I of Directive 2012/19/EU, the categories of EEE covered are as follows:
Large household appliances
Small household appliances
IT and telecommunications equipment
Consumer equipment and photovoltaic panels
Lighting equipment
Electrical and electronic tools (with the exception of large-scale stationary industrial tools)
Toys, leisure and sports equipment
Medical devices (with the exception of all implanted and infected products)
Monitoring and control instruments
Automatic dispensers
Minimum recovery targets referred to in Directive 2012/19/EU, applying from 15 August 2018, are as follows (a short illustrative compliance check follows the list):
WEEE falling within category 1 or 10 of Annex I
- 85% shall be recovered, and 80% shall be prepared for re-use and recycled;
WEEE falling within category 3 or 4 of Annex I
- 80% shall be recovered, and 70% shall be prepared for re-use and recycled;
WEEE falling within category 2, 5, 6, 7, 8 or 9 of Annex I
- 75% shall be recovered, and 55% shall be prepared for re-use and recycled;
For gas discharge lamps, 80% shall be recycled.
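As a rough illustration of how the category-specific minimums above might be applied, the sketch below checks reported recovery and re-use/recycling rates against the targets listed. The category grouping follows the list above, while the sample plant figures are hypothetical; this is not an official calculation method.

```python
# Minimum recovery / re-use-and-recycling targets from 15 August 2018, keyed by
# Annex I category number, as listed above. Sample figures below are hypothetical.

TARGETS = {}
for category in (1, 10):
    TARGETS[category] = {"recovered": 0.85, "reuse_recycled": 0.80}
for category in (3, 4):
    TARGETS[category] = {"recovered": 0.80, "reuse_recycled": 0.70}
for category in (2, 5, 6, 7, 8, 9):
    TARGETS[category] = {"recovered": 0.75, "reuse_recycled": 0.55}

def meets_targets(category: int, recovered: float, reuse_recycled: float) -> bool:
    """Check reported rates (fractions of collected WEEE) against the minimums."""
    target = TARGETS[category]
    return recovered >= target["recovered"] and reuse_recycled >= target["reuse_recycled"]

# Hypothetical figures for category 3 (IT and telecommunications equipment):
# 82% recovered meets the 80% target, but 68% re-use/recycling misses the 70% target.
print(meets_targets(3, recovered=0.82, reuse_recycled=0.68))  # False
```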
In 2021, the European Commission proposed standardizing phone chargers on USB-C after commissioning two impact assessment studies and a technology analysis study. Regulations like this may reduce electronic waste by small but significant amounts and, in this case, increase device interoperability, convergence and convenience for consumers while decreasing resource needs and redundancy.
International agreements
A report by the United Nations Environment Management Group lists key processes and agreements made by various organizations globally in an effort to manage and control e-waste. Details about these policies can be found via the links below.
International Convention for the Prevention of Pollution from Ships (MARPOL) (73/78/97)
Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and their Disposal (1989)
Montreal Protocol on Ozone Depleting Substances (1989)
International Labour Organization (ILO) Convention on Chemicals, concerning safety in the use of chemicals at work (1990)
Organisation for Economic Cooperation and Development (OECD), Council Decision Waste Agreement (1992)
United Nations Framework Convention on Climate Change (UNFCCC) (1994)
International Conference on Chemicals Management (ICCM) (1995)
Rotterdam Convention on the Prior Informed Consent Procedure for Certain Hazardous Chemicals and Pesticides in International Trade (1998)
Stockholm Convention on Persistent Organic Pollutants (2001)
World Health Organisation (WHO), World Health Assembly Resolutions (2006 – 2016)
Hong Kong International Convention for the Safe and Environmentally Sound Recycling of Ships (2009)
Minamata Convention on Mercury (2013)
Paris Climate Agreement (2015) under the United Nations Framework Convention on Climate Change
Connect 2020 Agenda for Global Telecommunication/ICT Development (2014)
Global trade issues
One theory is that increased regulation of electronic waste and concern over the environmental harm it causes in mature economies create an economic disincentive to remove residues prior to export. Critics of trade in used electronics maintain that it is still too easy for brokers calling themselves recyclers to export unscreened electronic waste to developing countries, such as China, India and parts of Africa, thus avoiding the expense of removing items like bad cathode ray tubes (the processing of which is expensive and difficult). Developing countries have become toxic dump yards for e-waste. Developing countries receiving foreign e-waste often go on to repair and recycle the forsaken equipment, yet 90% of e-waste still ended up in landfills in developing countries in 2003. Proponents of international trade point to the success of fair trade programs in other industries, where cooperation has led to the creation of sustainable jobs and can bring affordable technology to countries where repair and reuse rates are higher.
Defenders of the trade in used electronics say that extraction of metals from virgin mining has been shifted to developing countries. Recycling of copper, silver, gold, and other materials from discarded electronic devices is considered better for the environment than mining. They also state that repair and reuse of computers and televisions has become a "lost art" in wealthier nations and that refurbishing has traditionally been a path to development.
South Korea, Taiwan, and southern China all excelled in finding "retained value" in used goods, and in some cases have set up billion-dollar industries in refurbishing used ink cartridges, single-use cameras, and working CRTs. Refurbishing has traditionally been a threat to established manufacturing, and simple protectionism explains some criticism of the trade. Works like "The Waste Makers" by Vance Packard explain some of the criticism of exports of working product, for example, the ban on import of tested working Pentium 4 laptops to China, or the bans on export of used surplus working electronics by Japan.
Opponents of surplus electronics exports argue that lower environmental and labor standards, cheap labor, and the relatively high value of recovered raw materials lead to a transfer of pollution-generating activities, such as smelting of copper wire. Electronic waste is often sent to various African and Asian countries such as China, Malaysia, India, and Kenya for processing, sometimes illegally. Many surplus laptops are routed to developing nations as "dumping grounds for e-waste".
Because the United States has not ratified the Basel Convention or its Ban Amendment, and has few domestic federal laws forbidding the export of toxic waste, the Basel Action Network estimates that about 80% of the electronic waste directed to recycling in the U.S. does not get recycled there at all, but is put on container ships and sent to countries such as China. This figure is disputed as an exaggeration by the EPA, the Institute of Scrap Recycling Industries, and the World Reuse, Repair and Recycling Association.
Independent research by Arizona State University showed that 87–88% of imported used computers did not have a higher value than the best value of the constituent materials they contained, and that "the official trade in end-of-life computers is thus driven by reuse as opposed to recycling".
Trade
Proponents of the trade say growth of internet access is a stronger correlation to trade than poverty. Haiti is poor and closer to the port of New York than southeast Asia, but far more electronic waste is exported from New York to Asia than to Haiti. Thousands of men, women, and children are employed in reuse, refurbishing, repair, and re-manufacturing, unsustainable industries in decline in developed countries. Denying developing nations access to used electronics may deny them sustainable employment, affordable products, and internet access, or force them to deal with even less scrupulous suppliers. In a series of seven articles for The Atlantic, Shanghai-based reporter Adam Minter describes many of these computer repair and scrap separation activities as objectively sustainable.
Opponents of the trade argue that developing countries utilize methods that are more harmful and more wasteful. An expedient and prevalent method is simply to toss equipment onto an open fire, in order to melt plastics and to burn away non-valuable metals. This releases carcinogens and neurotoxins into the air, contributing to an acrid, lingering smog. These noxious fumes include dioxins and furans. Bonfire refuse can be disposed of quickly into drainage ditches or waterways feeding the ocean or local water supplies.
In June 2008, a container of electronic waste en route from the Port of Oakland in the U.S. to Sanshui District in mainland China was intercepted in Hong Kong by Greenpeace. Concern over exports of electronic waste was raised in press reports in India, Ghana, Côte d'Ivoire, and Nigeria.
Research undertaken by the Countering WEEE Illegal Trade (CWIT) project, funded by the European Commission, found that in Europe only 35% (3.3 million tons) of all the e-waste discarded in 2012 ended up in officially reported collection and recycling systems.
The other 65% (6.15 million tons) was either:
Exported (1.5 million tons),
Recycled under non-compliant conditions in Europe (3.15 million tons),
Scavenged for valuable parts (750,000 tons), or
Simply thrown in waste bins (750,000 tons).
Guiyu
Guiyu in the Guangdong region of China is a massive electronic waste processing community. It is often referred to as the "e-waste capital of the world." Traditionally, Guiyu was an agricultural community; however, in the mid-1990s it transformed into an e-waste recycling center involving over 75% of the local households and an additional 100,000 migrant workers. Thousands of individual workshops employ laborers to snip cables, pry chips from circuit boards, grind plastic computer cases into particles, and dip circuit boards in acid baths to dissolve the precious metals. Others work to strip insulation from all wiring in an attempt to salvage tiny amounts of copper wire. Uncontrolled burning, disassembly, and disposal has led to a number of environmental problems such as groundwater contamination, atmospheric pollution, and water pollution either by immediate discharge or from surface runoff (especially near coastal areas), as well as health problems including occupational safety and health effects among those directly and indirectly involved, due to the methods of processing the waste.
Six of the many villages in Guiyu specialize in circuit-board disassembly, seven in plastics and metals reprocessing, and two in wire and cable disassembly. Greenpeace, an environmental group, sampled dust, soil, river sediment, and groundwater in Guiyu. They found very high levels of toxic heavy metals and organic contaminants in both places. Lai Yun, a campaigner for the group found "over 10 poisonous metals, such as lead, mercury, and cadmium."
Guiyu is only one example of digital dumps but similar places can be found across the world in Nigeria, Ghana, and India.
Other informal e-waste recycling sites
Guiyu is likely one of the oldest and largest informal e-waste recycling sites in the world; however, there are many such sites worldwide, including in India, Ghana (Agbogbloshie), Nigeria, and the Philippines. A handful of studies describe exposure levels in e-waste workers, the community, and the environment. For example, locals and migrant workers in Delhi, a northern union territory of India, scavenge discarded computer equipment and extract base metals using toxic, unsafe methods. Bangalore, located in southern India, is often referred to as the "Silicon Valley of India" and has a growing informal e-waste recycling sector. A study found that e-waste workers in the slum community had higher levels of V, Cr, Mn, Mo, Sn, Tl, and Pb than workers at an e-waste recycling facility.
Cryptocurrency e-waste
Bitcoin mining has also contributed to growing amounts of electronic waste, as the currency has become increasingly popular in global trade. According to Alex de Vries and Christian Stoll, the average bitcoin transaction yields 272 grams of electronic waste; across the roughly 112.5 million transactions processed in 2020 alone, this adds up to tens of kilotonnes of waste. Other estimates indicate that the Bitcoin network discards as much e-waste as the "small IT and telecommunication equipment waste" produced by a country like the Netherlands, totalling 30.7 metric kilotons every year. Furthermore, the rate at which Bitcoin disposes of its waste far exceeds that of major financial organizations such as VISA, which produces 40 grams of waste for every 100,000 transactions.
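A back-of-the-envelope calculation shows how the per-transaction figure relates to the annual total. The transaction count below is an assumption based on the roughly 112.5 million transactions the network processed in 2020, and the VISA comparison reuses the 40 g per 100,000 transactions figure quoted above.

```python
# Rough arithmetic relating per-transaction e-waste to the annual total (illustrative only).

GRAMS_PER_BTC_TRANSACTION = 272          # de Vries & Stoll estimate
BTC_TRANSACTIONS_2020 = 112_500_000      # assumed annual transaction count

annual_waste_tonnes = GRAMS_PER_BTC_TRANSACTION * BTC_TRANSACTIONS_2020 / 1_000_000
print(f"Bitcoin, 2020: ~{annual_waste_tonnes / 1000:.1f} kilotonnes")  # ~30.6 kt, close to the 30.7 kt estimate

# Comparison with VISA at 40 g of e-waste per 100,000 transactions.
visa_grams_per_transaction = 40 / 100_000
print(f"Per transaction: Bitcoin {GRAMS_PER_BTC_TRANSACTION} g vs VISA {visa_grams_per_transaction:.4f} g")
```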
A major point of concern is the rapid turnover of hardware in the Bitcoin industry, which results in such high levels of e-waste. This can be attributed to the proof-of-work principle Bitcoin employs, in which miners receive currency as a reward for being the first to find a valid hash for the next block in the blockchain. Miners are therefore encouraged to compete with one another to find that hash first. Computing these hashes requires massive computing power, which in effect drives miners to obtain rigs with the highest processing power possible. To achieve this, miners increase the processing power of their rigs by purchasing more advanced computer chips.
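To illustrate why miners compete on raw hashing throughput, the toy sketch below searches for a nonce whose SHA-256 hash falls below a difficulty target. It is only a simplified model of proof-of-work; Bitcoin's real protocol uses double SHA-256 over a structured block header and a dynamically adjusted target.

```python
import hashlib

def mine(block_data: str, difficulty_bits: int) -> int:
    """Toy proof-of-work: find a nonce so that SHA-256(block_data + nonce)
    has at least `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Each extra difficulty bit doubles the expected number of hashes needed, which is
# why specialised high-throughput hardware (ASICs) dominates real mining.
print(mine("example block", difficulty_bits=16))
```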
According to Koomey's law, the efficiency of computer chips doubles roughly every 1.5 years, meaning that miners are incentivized to purchase new chips to keep up with competing miners even though the older chips are still functional. In some cases, miners discard their chips even earlier than this for the sake of profitability. This leads to a significant build-up of waste, as outdated application-specific integrated circuits (ASIC chips) cannot be reused or repurposed. Most chips miners currently use are ASICs whose sole function is to mine bitcoin, rendering them useless for other cryptocurrencies or for operation in any other piece of technology. Outdated ASIC chips can therefore only be disposed of, since they cannot be repurposed.
Bitcoin's e-waste problem is further exacerbated by the fact that many countries and corporations lack recycling programs for ASIC chips. Developing a recycling infrastructure for bitcoin mining could nevertheless be beneficial, as the aluminum heat sinks and metal casings in ASIC mining hardware can be recycled into new technology. Much of this responsibility falls onto Bitmain, the leading manufacturer of bitcoin mining hardware, which currently lacks the infrastructure to recycle waste from bitcoin mining. Without such programs, much bitcoin-related waste ends up in landfill along with 83.6% of the global total of e-waste.
Many argue for relinquishing the proof-of-work model altogether in favour of the proof-of-stake one. This model selects one miner to validate the transactions in the blockchain, rather than have all miners competing for it. With no competition, the processing speed of miners' rigs would not matter. Any device could be used for validating the blockchain, so there would be no incentive to use single-use ASIC chips or continually purchase new and dispose of old ones.
Environmental impact
A study of rising electronic pollution in the United States found that the average computer screen contains five to eight pounds or more of lead, and that such screens account for an estimated 40 percent of all the lead in US landfills. These toxins are persistent, bioaccumulative toxins (PBTs) that create environmental and health risks when computers are incinerated, put in landfills or melted down. The emission of fumes, gases, and particulate matter into the air, the discharge of liquid waste into water and drainage systems, and the disposal of hazardous wastes contribute to environmental degradation. The processes of dismantling and disposing of electronic waste in developing countries have led to a number of environmental impacts, as illustrated in the graphic. Liquid and atmospheric releases end up in bodies of water, groundwater, soil, and air, and therefore in land and sea animals, both domesticated and wild, in crops eaten by animals and humans, and in drinking water.
One study of environmental effects in Guiyu, China found the following:
Airborne dioxins – one type found at 100 times levels previously measured
Levels of carcinogens in duck ponds and rice paddies exceeded international standards for agricultural areas and cadmium, copper, nickel, and lead levels in rice paddies were above international standards
Heavy metals found in road dust – lead over 300 times that of a control village's road dust and copper over 100 times
The Agbogbloshie area of Ghana, where about 40,000 people live, provides an example of how e-waste contamination can pervade the daily lives of nearly all residents. Into this area—one of the largest informal e-waste dumping and processing sites in Africa—about 215,000 tons of secondhand consumer electronics, primarily from Western Europe, are imported annually. Because this region has considerable overlap among industrial, commercial, and residential zones, Pure Earth (formerly Blacksmith Institute) has ranked Agbogbloshie as one of the world's 10 worst toxic threats (Blacksmith Institute 2013).
A separate study at the Agbogbloshie e-waste dump in Ghana found lead levels as high as 18,125 ppm in the soil; the US EPA standard for lead in soil is 400 ppm for play areas and 1,200 ppm for non-play areas. Scrap workers at the Agbogbloshie e-waste dump regularly burn electronic components and auto harness wires for copper recovery, releasing toxic chemicals such as lead, dioxins and furans into the environment.
Researchers such as Brett Robinson, a professor of soil and physical sciences at Lincoln University in New Zealand, warn that wind patterns in southeast China disperse toxic particles released by open-air burning across the Pearl River Delta region, home to 45 million people. In this way, toxic chemicals from e-waste enter the "soil-crop-food pathway", one of the most significant routes for human exposure to heavy metals. These chemicals are not biodegradable; they persist in the environment for long periods of time, increasing the risk of exposure.
In the agricultural district of Chachoengsao, east of Bangkok, local villagers lost their main water source as a result of e-waste dumping. The cassava fields were transformed in late 2017, when a nearby Chinese-run factory started bringing in foreign e-waste items such as crushed computers, circuit boards and cables for recycling, mining the electronics for valuable metal components like copper, silver and gold. But the items also contain lead, cadmium and mercury, which are highly toxic if mishandled during processing. Apart from feeling faint from noxious fumes emitted during processing, a local resident claimed the factory had also contaminated her water: "When it was raining, the water went through the pile of waste and passed our house and went into the soil and water system." Water tests conducted in the province by the environmental group Earth and the local government both found toxic levels of iron, manganese, lead, nickel and, in some cases, arsenic and cadmium. "The communities observed when they used water from the shallow well, there was some development of skin disease or there are foul smells," founder of Earth, Penchom Saetang, said. "This is proof, that it is true, as the communities suspected, there are problems happening to their water sources."
The chemical composition of e-waste varies depending on the age and type of the discarded item. Most e-waste is composed of a mixture of metals such as Cu, Al and Fe, which may be attached to, covered with or mixed with various types of plastics and ceramics. Because e-waste can severely harm the environment, it is important to dispose of it through an R2-certified recycling facility.
Research
In May 2020, a scientific study conducted in China investigated the occurrence and distribution of traditional and novel classes of contaminants, including chlorinated, brominated, and mixed halogenated dibenzo-p-dioxins/dibenzofurans (PCDD/Fs, PBDD/Fs, PXDD/Fs), polybrominated diphenyl ethers (PBDEs), polychlorinated biphenyls (PCBs) and polyhalogenated carbazoles (PHCZs) in soil from an e-waste disposal site in Hangzhou (in operation since 2009, with a treatment capacity of 19.6 Wt/a). While the study area has only one formal emission source, the broader industrial zone contains a number of metal recovery and reprocessing plants as well as heavy traffic on adjacent motorways. The maximum concentrations of the target halogenated organic compounds (HOCs) were found 0.1–1.5 km away from the main source, and overall detected levels of HOCs were generally lower than those reported globally. The study confirmed what researchers have warned: on highways with heavy traffic, especially those serving diesel-powered vehicles, exhaust emissions are larger sources of dioxins than stationary sources. When assessing the environmental and health impacts of chemical compounds, especially PBDD/Fs and PXDD/Fs, the compositional complexity of soil and long-term weather conditions such as rain and prevailing winds have to be taken into account. Further investigations are necessary to build a common understanding and shared methods for assessing e-waste impacts.
Information security
Discarded data processing equipment may still contain readable data that may be considered sensitive to the previous users of the device. A recycling plan for such equipment can support information security by ensuring proper steps are followed to erase the sensitive information. This may include such steps as re-formatting of storage media and overwriting with random data to make data unrecoverable, or even physical destruction of media by shredding and incineration to ensure all data is obliterated. For example, on many operating systems deleting a file may still leave the physical data file intact on the media, allowing data retrieval by routine methods.
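As a conceptual illustration of the overwriting step described above, the sketch below overwrites a file with random bytes before deleting it. It is an assumption-laden example rather than a guaranteed sanitisation method: on SSDs and journaling file systems, wear-levelling and copies held elsewhere mean whole-device erasure or physical destruction may still be required.

```python
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random data several times, then remove it.
    Conceptual sketch only: wear-levelling on flash media and file-system
    journaling can leave residual copies of the original data."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

# Example call with a hypothetical path:
# overwrite_and_delete("/tmp/old_customer_records.db")
```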
Recycling
Recycling is an essential element of e-waste management. Properly carried out, it should greatly reduce the leakage of toxic materials into the environment and militate against the exhaustion of natural resources. However, it does need to be encouraged by local authorities and through community education. Less than 20% of e-waste is formally recycled, with 80% either ending up in landfill or being informally recycled – much of it by hand in developing countries, exposing workers to hazardous and carcinogenic substances such as mercury, lead and cadmium.
One of the major challenges is recycling the printed circuit boards from electronic waste. Circuit boards contain precious metals such as gold, silver and platinum, and base metals such as copper, iron and aluminum. One way e-waste is processed is by melting circuit boards, burning cable sheathing to recover copper wire, and open-pit acid leaching to separate metals of value. The conventional method is mechanical shredding and separation, but the recycling efficiency is low. Alternative methods such as cryogenic decomposition have been studied for printed circuit board recycling, and other methods are still under investigation. Properly disposing of or reusing electronics can help prevent health problems, reduce greenhouse-gas emissions, and create jobs.
Consumer awareness efforts
The U.S. Environmental Protection Agency encourages electronic recyclers to become certified by demonstrating to an accredited, independent third party auditor that they meet specific standards to safely recycle and manage electronics. This should work so as to ensure the highest environmental standards are being maintained. Two certifications for electronic recyclers currently exist and are endorsed by the EPA. Customers are encouraged to choose certified electronics recyclers. Responsible electronics recycling reduces environmental and human health impacts, increases the use of reusable and refurbished equipment and reduces energy use while conserving limited resources. The two EPA-endorsed certification programs are Responsible Recyclers Practices (R2) and E-Stewards. Certified companies ensure they are meeting strict environmental standards which maximize reuse and recycling, minimize exposure to human health or the environment, ensure safe management of materials and require destruction of all data used on electronics. Certified electronics recyclers have demonstrated through audits and other means that they continually meet specific high environmental standards and safely manage used electronics. Once certified, the recycler is held to the particular standard by continual oversight by the independent accredited certifying body. A certification board accredits and oversees certifying bodies to ensure that they meet specific responsibilities and are competent to audit and provide certification.
Some U.S. retailers offer opportunities for consumers to recycle discarded electronic devices. In the US, the Consumer Electronics Association (CEA) urges consumers to dispose of end-of-life electronics properly through its recycling locator. This list only includes manufacturer and retailer programs that use the strictest standards and third-party certified recycling locations, to assure consumers that their products will be recycled safely and responsibly. CEA research has found that 58 percent of consumers know where to take their end-of-life electronics, and the electronics industry would like to see that level of awareness increase. Consumer electronics manufacturers and retailers sponsor or operate more than 5,000 recycling locations nationwide and have vowed to recycle one billion pounds annually by 2016, a sharp increase from the 300 million pounds the industry recycled in 2010.
The Sustainable Materials Management (SMM) Electronic Challenge was created by the United States Environmental Protection Agency (EPA) in 2012. Participants in the Challenge are manufacturers and retailers of electronics. These companies collect end-of-life (EOL) electronics at various locations and send them to a certified, third-party recycler. Program participants are then able to publicly promote and report 100% responsible recycling for their companies. The Electronics TakeBack Coalition (ETBC) is a campaign aimed at protecting human health and limiting environmental effects where electronics are produced, used, and discarded. The ETBC aims to place responsibility for disposal of technology products on electronics manufacturers and brand owners, primarily through community promotion and legal enforcement initiatives. It provides recommendations for consumer recycling and a list of recyclers judged environmentally responsible. While the rise in recycling and waste collection created by producers and consumers has brought major benefits, such as valuable materials being recovered and kept away from landfill and incineration, there are still many problems with the EPR system, including "how to ensure proper enforcement of recycling standards, what to do about waste with positive net value, and the role of competition" (Kunz et al.). Many stakeholders agree that a higher standard of accountability and efficiency is needed to improve recycling systems everywhere, and that the growing amount of waste is an opportunity rather than a downfall, since it provides more chances to create an efficient system. To make recycling more cost-effective, producers agreed that greater competition is needed, because it gives them a wider range of producer responsibility organizations to choose from for e-waste recycling.
The Certified Electronics Recycler program for electronic recyclers is a comprehensive, integrated management system standard that incorporates key operational and continual improvement elements for quality, environmental and health and safety performance. The grassroots Silicon Valley Toxics Coalition promotes human health and addresses environmental justice problems resulting from toxins in technologies. The World Reuse, Repair, and Recycling Association (wr3a.org) is an organization dedicated to improving the quality of exported electronics, encouraging better recycling standards in importing countries, and improving practices through "Fair Trade" principles. Take Back My TV is a project of The Electronics TakeBack Coalition and grades television manufacturers to find out which are responsible, in the coalition's view, and which are not.
There have also been efforts to raise awareness of the potentially hazardous conditions of the dismantling of e-waste in American prisons. The Silicon Valley Toxics Coalition, prisoner-rights activists, and environmental groups released a Toxic Sweatshops report that details how prison labor is being used to handle e-waste, resulting in health consequences among the workers. These groups allege that, since prisons do not have adequate safety standards, inmates are dismantling the products under unhealthy and unsafe conditions.
Processing techniques
In many developed countries, electronic waste processing usually first involves dismantling the equipment into various parts (metal frames, power supplies, circuit boards, plastics), often by hand, but increasingly by automated shredding equipment. A typical example is the NADIN electronic waste processing plant in Novi Iskar, Bulgaria—the largest facility of its kind in Eastern Europe. The advantages of this process are the human worker's ability to recognize and save working and repairable parts, including chips, transistors, RAM, etc. The disadvantage is that the labor is cheapest in countries with the lowest health and safety standards.
In an alternative bulk system, a hopper conveys material for shredding into an unsophisticated mechanical separator, with screening and granulating machines to separate constituent metal and plastic fractions, which are sold to smelters or plastics recyclers. Such recycling machinery is enclosed and employs a dust collection system. Some of the emissions are caught by scrubbers and screens. Magnets, eddy currents, and Trommel screens are employed to separate glass, plastic, and ferrous and nonferrous metals, which can then be further separated at a smelter.
Leaded glass from CRTs is reused in car batteries, ammunition, and lead wheel weights, or sold to foundries as a fluxing agent in processing raw lead ore. Copper, gold, palladium, silver and tin are valuable metals sold to smelters for recycling. Hazardous smoke and gases are captured, contained and treated to mitigate environmental threat. These methods allow for safe reclamation of all valuable computer construction materials. Hewlett-Packard product recycling solutions manager Renee St. Denis describes its process as: "We move them through giant shredders about 30 feet tall and it shreds everything into pieces about the size of a quarter. Once your disk drive is shredded into pieces about this big, it's hard to get the data off". An ideal electronic waste recycling plant combines dismantling for component recovery with increased cost-effective processing of bulk electronic waste. Reuse is an alternative option to recycling because it extends the lifespan of a device. Devices still need eventual recycling, but by allowing others to purchase used electronics, recycling can be postponed and value gained from device use.
In early November 2021, the U.S. state of Georgia announced a joint effort with Igneo Technologies to build an $85 million electronics recycling plant in the Port of Savannah. The project will focus on lower-value, plastics-heavy devices in the waste stream, using multiple shredders and furnaces based on pyrolysis technology.
Benefits of recycling
Recycling raw materials from end-of-life electronics is the most effective solution to the growing e-waste problem. Most electronic devices contain a variety of materials, including metals that can be recovered for future use. By dismantling devices and providing reuse possibilities, intact natural resources are conserved and the air and water pollution caused by hazardous disposal is avoided. Additionally, recycling reduces the greenhouse-gas emissions caused by the manufacturing of new products. A further benefit is that many of the materials in e-waste can be recovered and re-used, including "ferrous (iron-based) and non-ferrous metals, glass, and various types of plastic." "Non-ferrous metals, mainly aluminum and copper can all be re-smelted and re-manufactured. Ferrous metals such as steel and iron also can be re-used." Due to the recent surge in popularity of 3D printing, certain 3D printers (of the FDM variety) have been designed to produce waste that can be easily recycled, which decreases the amount of harmful pollutants in the atmosphere. The excess plastic that these printers produce as a byproduct can also be reused in new 3D-printed creations.
Benefits of recycling are extended when responsible recycling methods are used. In the U.S., responsible recycling aims to minimize the dangers to human health and the environment that disposed and dismantled electronics can create. Responsible recycling ensures best management practices of the electronics being recycled, worker health and safety, and consideration for the environment locally and abroad. In Europe, metals that are recycled are returned to companies of origin at a reduced cost. Through a committed recycling system, manufacturers in Japan have been pushed to make their products more sustainable. Since many companies were responsible for the recycling of their own products, this imposed responsibility on manufacturers requiring many to redesign their infrastructure. As a result, manufacturers in Japan have the added option to sell the recycled metals.
Improper management of e-waste is resulting in a significant loss of scarce and valuable raw materials, such as gold, platinum, cobalt and rare earth elements. As much as 7% of the world's gold may currently be contained in e-waste, with 100 times more gold in a tonne of e-waste than in a tonne of gold ore.
Repair as waste reduction method
There are several ways to curb the environmental hazards arising from the recycling of electronic waste. One factor that exacerbates the e-waste problem is the diminishing lifetime of many electrical and electronic goods. There are two particular drivers of this trend. On the one hand, consumer demand for low-cost products militates against product quality and results in short product lifetimes. On the other, manufacturers in some sectors encourage a regular upgrade cycle, and may even enforce it through restricted availability of spare parts, service manuals and software updates, or through planned obsolescence.
Consumer dissatisfaction with this state of affairs has led to a growing repair movement. Often, this is at a community level such as through repair cafés or the "restart parties" promoted by the Restart Project.
The Right to Repair is spearheaded in the US by farmers dissatisfied with non-availability of service information, specialised tools and spare parts for their high-tech farm machinery. But the movement extends far beyond farm machinery with, for example, the restricted repair options offered by Apple coming in for criticism. Manufacturers often counter with safety concerns resulting from unauthorised repairs and modifications.
An easy method of reducing electronic waste footprint is to sell or donate electronic gadgets, rather than dispose of them.
Improperly disposed e-waste is becoming more and more hazardous, especially as the sheer volume of e-waste increases. For this reason, large brands like Apple, Samsung, and others have started giving options to customers to recycle old electronics. Recycling allows the expensive electronic parts inside to be reused. This may save significant energy and reduce the need for mining of additional raw resources, or manufacture of new components. Electronic recycling programs may be found locally in many areas with a simple online search; for example, by searching "recycle electronics" along with the city or area name.
Cloud services have proven to be useful in storing data, which is then accessible from anywhere in the world without the need to carry storage devices. Cloud storage also allows for large storage, at low cost. This offers convenience, while reducing the need for manufacture of new storage devices, thus curbing the amount of e-waste generated.
Electronic waste classification
The market contains many different types of electrical and electronic products. To manage them, it is necessary to group them into sensible and practical categories; classification may even help determine the process to be used for disposing of a product. In general, classification helps to describe e-waste. A classification does not need to define every detail, for example cases in which products do not pose a threat to the environment; on the other hand, categories should not be too aggregated, because countries differ in their interpretation. The UNU-KEYs system closely follows the Harmonized System (HS) coding, an international nomenclature that provides an integrated, common basis for classifying goods for customs purposes.
Electronic waste substances
Some computer components can be reused in assembling new computer products, while others are reduced to metals that can be reused in applications as varied as construction, flatware, and jewellery. Substances found in large quantities include epoxy resins, fiberglass, PCBs, PVC (polyvinyl chlorides), thermosetting plastics, lead, tin, copper, silicon, beryllium, carbon, iron, and aluminum. Elements found in small amounts include cadmium, mercury, and thallium. Elements found in trace amounts include americium, antimony, arsenic, barium, bismuth, boron, cobalt, europium, gallium, germanium, gold, indium, lithium, manganese, nickel, niobium, palladium, platinum, rhodium, ruthenium, selenium, silver, tantalum, terbium, thorium, titanium, vanadium, and yttrium. Almost all electronics contain lead and tin (as solder) and copper (as wire and printed circuit board tracks), though the use of lead-free solder is now spreading rapidly. The following are ordinary applications:
Hazardous
Generally non-hazardous
Human health and safety
Residents living near recycling sites
Residents living around e-waste recycling sites, even if they are not involved in e-waste recycling activities, can also face environmental exposure through the food, water, and environmental contamination caused by e-waste, because they can easily come into contact with e-waste-contaminated air, water, soil, dust, and food sources. In general, there are three main exposure pathways: inhalation, ingestion, and dermal contact.
Studies show that people living around e-waste recycling sites have a higher daily intake of heavy metals and a more serious body burden. Potential health risks include mental health problems, impaired cognitive function, and general physical health damage. (See also Electronic waste#Hazardous) DNA damage was also found to be more prevalent in all the e-waste-exposed populations (i.e. adults, children, and neonates) than in populations in the control area. DNA breaks can increase the likelihood of erroneous replication and thus mutation, and can lead to cancer if the damage occurs in a tumor suppressor gene.
Prenatal exposure and neonates' health
Prenatal exposure to e-waste has been found to adversely affect the neonates' body burden of pollutants. In Guiyu, one of the best-known e-waste recycling sites in China, increased cord blood lead concentrations in neonates were associated with the parents' participation in e-waste recycling processes, as well as with how long the mothers had lived in Guiyu and in e-waste recycling factories or workshops during pregnancy. In addition, higher placental metallothionein (a small protein marking exposure to toxic metals) was found among neonates from Guiyu as a result of Cd exposure, and the higher Cd levels in Guiyu's neonates were related to their parents' involvement in e-waste recycling. High PFOA exposure of mothers in Guiyu is related to adverse effects on the growth of their new-borns and on prepotency in this area.
Prenatal exposure to informal e-waste recycling can also lead to several adverse birth outcomes (stillbirth, low birth weight, low Apgar scores, etc.) and to long-term effects such as behavioral and learning problems for the neonates later in life.
Children
Children are especially sensitive to e-waste exposure for several reasons, such as their smaller size, higher metabolic rate, larger surface area in relation to their weight, and multiple exposure pathways (for example, dermal, hand-to-mouth, and take-home exposure). They have been estimated to face roughly eight times the potential health risk of adult e-waste recycling workers. Studies have found significantly higher blood lead levels (BLL) and blood cadmium levels (BCL) in children living in e-waste recycling areas compared to those living in control areas. For example, one study found that the average BLL in Guiyu was nearly 1.5 times that at the control site (15.3 µg/dL compared to 9.9 µg/dL), while the CDC of the United States has set a reference level for blood lead of 5 µg/dL. The highest concentrations of lead were found in the children of parents whose workshops dealt with circuit boards, and the lowest among those who recycled plastic.
Exposure to e-waste can cause serious health problems in children. Children's exposure to developmental neurotoxins contained in e-waste, such as lead, mercury, cadmium, chromium, arsenic, nickel and PBDEs, can lead to a higher risk of lower IQ, impaired cognitive function, exposure to known human carcinogens, and other adverse effects. In certain age groups, decreased lung function has been found in children at e-waste recycling sites. Some studies have also found associations between children's e-waste exposure and impaired coagulation, hearing loss, and decreased vaccine antibody titers in e-waste recycling areas. For instance, nickel exposure in boys aged 8–9 years at an e-waste site led to lower forced vital capacity, decreased catalase activity, and significant increases in superoxide dismutase activity and malondialdehyde levels.
E-waste recycling workers
The complex composition and improper handling of e-waste adversely affect human health. A growing body of epidemiological and clinical evidence has led to increased concern about the potential threat of e-waste to human health, especially in developing countries such as India and China. For instance, in terms of health hazards, open burning of printed wiring boards increases the concentration of dioxins in the surrounding areas. These toxins cause an increased risk of cancer if inhaled by workers and local residents. Toxic metals and poison can also enter the bloodstream during the manual extraction and collection of tiny quantities of precious metals, and workers are continuously exposed to poisonous chemicals and fumes of highly concentrated acids. Recovering resalable copper by burning insulated wires causes neurological disorders, and acute exposure to cadmium, found in semiconductors and chip resistors, can damage the kidneys and liver and cause bone loss. Long-term exposure to lead on printed circuit boards and computer and television screens can damage the central and peripheral nervous system and kidneys, and children are more susceptible to these harmful effects.
The Occupational Safety & Health Administration (OSHA) has summarized several potential safety hazards of recycling workers in general, such as crushing hazards, hazardous energy released, and toxic metals.
OSHA has also specified some chemical components of electronics that can potentially do harm to e-recycling workers' health, such as lead, mercury, PCBs, asbestos, refractory ceramic fibers (RCFs), and radioactive substances. Besides, in the United States, most of these chemical hazards have specific Occupational exposure limits (OELs) set by OSHA, National Institute for Occupational Safety and Health (NIOSH), and American Conference of Governmental Industrial Hygienists (ACGIH).
For the details of health consequences of these chemical hazards, see also Electronic waste#Electronic waste substances.
Informal and formal industries
Informal e-recycling industry refers to small e-waste recycling workshops with few (if any) automatic procedures and personal protective equipment (PPE). On the other hand, formal e-recycling industry refers to regular e-recycling facilities sorting materials from e-waste with automatic machinery and manual labor, where pollution control and PPE are common. Sometimes formal e-recycling facilities dismantle the e-waste to sort materials, then distribute it to other downstream recycling department to further recover materials such as plastic and metals.
The health impacts on e-waste recycling workers in the informal and formal industries are expected to differ in extent. Studies at three recycling sites in China suggest that the health risks for workers at formal e-recycling facilities in Jiangsu and Shanghai were lower than for those working at informal e-recycling sites in Guiyu. The primitive methods used by unregulated backyard operators (the informal sector) to reclaim, reprocess, and recycle e-waste materials expose workers to a number of toxic substances. Processes such as dismantling components, wet chemical processing, and incineration are used and result in direct exposure to and inhalation of harmful chemicals. Safety equipment such as gloves, face masks, and ventilation fans is virtually unknown, and workers often have little idea of what they are handling. In another study of e-waste recycling in India, hair samples were collected from workers at an e-waste recycling facility and from an e-waste recycling slum community (informal industry) in Bangalore. Levels of V, Cr, Mn, Mo, Sn, Tl, and Pb were significantly higher in the workers at the e-waste recycling facility compared to the e-waste workers in the slum community. However, Co, Ag, Cd, and Hg levels were significantly higher in the slum community workers compared to the facility workers.
Even in formal e-recycling industry, workers can be exposed to excessive pollutants. Studies in the formal e-recycling facilities in France and Sweden found workers' overexposure (compared to recommended occupational guidelines) to lead, cadmium, mercury and some other metals, as well as BFRs, PCBs, dioxin and furans. Workers in formal industry are also exposed to more brominated flame-retardants than reference groups.
Hazard controls
For occupational health and safety of e-waste recycling workers, both employers and workers should take actions. Suggestions for the e-waste facility employers and workers given by California Department of Public Health are illustrated in the graphic.
See also
2000s commodities boom
Computer Recycling
Digger gold
eDay
Electronic waste in Japan
Green computing
Mobile phone recycling
Material safety data sheet
Polychlorinated biphenyls
Retrocomputing
Radio Row
Policy and conventions:
Basel Action Network (BAN)
Basel Convention
China RoHS
e-Stewards
Restriction of Hazardous Substances Directive (RoHS)
Soesterberg Principles
Sustainable Electronics Initiative (SEI)
Waste Electrical and Electronic Equipment Directive
Organizations:
Asset Disposal and Information Security Alliance (ADISA)
Empa
IFixit
International Network for Environmental Compliance and Enforcement
Institute of Scrap Recycling Industries (ISRI)
Solving the E-waste Problem
World Reuse, Repair and Recycling Association
Security:
Data erasure
General:
Retail hazardous waste
Waste
Waste management
References
Further reading
United Nations University: The Global E-waste Monitor 2014 – Quantities, flows and resources, 2015 (13 MB PDF)
External links
Sustainable Management of Electronics
MOOC: Massive Online Open Course "Waste Management and Critical Raw Materials" on (amongst others) recycling and reuse of electronics.
Occupational safety and health
46129 | https://en.wikipedia.org/wiki/ITV%20Digital | ITV Digital | ITV Digital was a British digital terrestrial television broadcaster which launched a pay-TV service on the world's first digital terrestrial television network. Its main shareholders were Carlton Communications plc and Granada plc, owners of two franchises of the ITV network. Starting as ONdigital in 1998, the service was re-branded as ITV Digital in July 2001.
Low audience figures, piracy issues and an ultimately unaffordable multi-million-pound deal with the Football League led to the broadcaster suffering massive losses, forcing it to enter administration in March 2002. Pay television services ceased permanently on 1 May that year, and broadcasts of the remaining free-to-air channels such as BBC One and Channel 4 ceased when the company was liquidated in October. The terrestrial multiplexes were subsequently taken over by Crown Castle and the BBC to create Freeview later that month.
History
On 31 January 1997, Carlton Television, Granada Television and satellite company British Sky Broadcasting (BSkyB), together created British Digital Broadcasting (BDB) as a joint venture and applied to operate three digital terrestrial television (DTT) licences. They faced competition from a rival, Digital Television Network (DTN), a company created by cable operator CableTel (later known as NTL). On 25 June 1997, BDB won the auction and the Independent Television Commission (ITC) awarded the sole broadcast licence for DTT to the consortium. Then on 20 December 1997, the ITC awarded three pay-TV digital multiplex licences to BDB.
That same year, however, the ITC forced BSkyB out of the consortium on competition grounds; this effectively placed Sky in direct competition with the new service as Sky would also launch its digital satellite service in 1998, although Sky was still required to provide key channels such as Sky Movies and Sky Sports to ONdigital. With Sky part of the consortium, ONdigital would have paid discounted rates to carry Sky's television channels. Instead, with its positioning as a competitor, Sky charged the full market rates for the channels, at an extra cost of around £60 million a year to ONdigital. On 28 July 1998, BDB announced the service would be called ONdigital, and claimed it would be the biggest television brand launch in history. The company would be based in Marco Polo House, now demolished, in Battersea, south London, which was previously the home of BSkyB's earlier rival, British Satellite Broadcasting (BSB).
Six multiplexes were set up, with three of them allocated to the existing analogue broadcasters. The other three multiplexes were auctioned off. ONdigital was given one year from the award of the licence to launch the first DTT service. In addition to launching audio and video services, it also led the specification of an industry-wide advanced interactive engine, based on MHEG-5. This was an open standard that was used by all broadcasters on DTT.
The launch
ONdigital was officially launched on 15 November 1998 amid a large public ceremony featuring celebrity Ulrika Jonsson and fireworks around the Crystal Palace transmitting station. Its competitor Sky Digital had already debuted on 1 October. The service launched with 12 primary channels, which included the new BBC Choice and ITV2 channels; a subscription package featuring channels such as Sky One, Cartoon Network, E4, UKTV channels and many developed in-house by Carlton and Granada such as Carlton World; premium channels including Sky Sports 1, 2, 3, Sky Premier and Sky MovieMax; and the newly launched FilmFour.
From the beginning, however, the service was quickly losing money. Supply problems with set-top boxes meant that the company missed Christmas sales. Meanwhile, aggressive marketing by BSkyB for Sky Digital made the ONdigital offer look unattractive. The new digital satellite service provided a dish, digibox, installation and around 200 channels for £159, a lower price than ONdigital at £199. ONdigital's subscription pricing had been set to compare with the older Sky analogue service of 20 channels. In 1999, digital cable services were launched by NTL, Telewest and Cable & Wireless.
In February 1999, ITV secured the rights for UEFA Champions League football matches for four years, which would partly be broadcast through ONdigital and two new sports channels on the platform, Champions ON 28 and Champions ON 99 (later renamed ONsport 1 and ONsport 2 when it secured the rights to ATP tennis games), the latter of which timeshared with Carlton Cinema. Throughout 1999, channels including MTV and British Eurosport launched on the platform. The exclusive Carlton Kids and Carlton World channels closed in 2000 to make way for two Discovery channels.
ONdigital reported in April 1999 that it had 110,000 subscribers. Sky Digital, however, had over 350,000 by this time. By March 2000, there were 673,000 ONdigital customers.
The first interactive digital service was launched in mid-1999, called ONgames. On 7 March 2000, ONmail was launched which provided an interactive e-mail service. A deal with multiplex operator SDN led to the launch of pay-per-view service ONrequest on 1 May 2000. In June 2000, ONoffer was launched. On 18 September 2000, the internet TV service ONnet was launched.
On 17 June 2000, ONdigital agreed to a major £315 million three-year deal with the Football League to broadcast 88 live Nationwide League and Worthington Cup matches from the 2001–02 season.
Problems
ONdigital's growth slowed throughout 2000, and by the start of 2001 the number of subscribers stopped increasing; meanwhile, its competitor Sky Digital was still growing. The ONdigital management team responded with a series of free set top box promotions, initially at retailers such as Currys and Dixons, when ONdigital receiving equipment was purchased at the same time as a television set or similarly priced piece of equipment. These offers eventually became permanent, with the set-top box loaned to the customer at no charge for as long as they continued to subscribe to ONdigital, an offer that was matched by Sky. ONdigital's churn rate, a measure of the number of subscribers leaving the service, reached 28% during 2001.
Additional problems for ONdigital were the choice of the 64QAM broadcast mode, which, coupled with far weaker than expected broadcast power, meant that the signal was weak in many areas; a complex pricing structure with many options; a poor-quality subscriber management system (adapted from Canal+); a paper magazine TV guide, whereas BSkyB had an electronic programme guide (EPG); insufficient technical customer service; and widespread signal piracy. While a limited return path was provided via an in-built 2400 baud modem, there was no requirement, as there was with BSkyB, to connect the set-top box's modem to a phone line.
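The capacity-versus-robustness trade-off behind the 64QAM choice can be seen in the standard DVB-T net bitrate calculation for an 8 MHz channel in 2K mode. The figures below are generic DVB-T parameters used for illustration and are not a statement of ONdigital's exact code rate or guard interval.

```python
# Net DVB-T bitrate for an 8 MHz channel, 2K mode (1512 data carriers, 224 µs useful
# symbol duration). Illustrative of the capacity-versus-robustness trade-off only.

def dvbt_net_bitrate_mbps(bits_per_carrier: int, code_rate: float, guard_fraction: float) -> float:
    data_carriers = 1512
    useful_symbol_s = 224e-6
    symbol_duration_s = useful_symbol_s * (1 + guard_fraction)
    rs_overhead = 188 / 204          # Reed-Solomon outer code
    bits_per_symbol = data_carriers * bits_per_carrier * code_rate * rs_overhead
    return bits_per_symbol / symbol_duration_s / 1e6

print(dvbt_net_bitrate_mbps(6, 2/3, 1/32))  # 64QAM, rate 2/3 -> ~24.1 Mbit/s, higher capacity but less rugged
print(dvbt_net_bitrate_mbps(4, 3/4, 1/32))  # 16QAM, rate 3/4 -> ~18.1 Mbit/s, more robust reception
```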
Loaned equipment
Later problems occurred when ONdigital began to sell prepaid set-top boxes (under the name ONprepaid) from November 1999. This bundle sold in high street stores and supermarkets at a price that included – in theory – the set-top box on loan and the first year's subscription package. These prepaid boxes amounted to 50% of sales in December 1999. Thousands of these packages were also sold at well below retail price on auction sites such as the then-popular QXL. As the call to activate the viewing card did not require any bank details, many ONdigital boxes which were technically on loan were at unverifiable addresses. This was later changed so a customer could not walk away with a box without ONdigital verifying their address. Many customers did not activate the viewing card at all, although where the viewer's address was known, ONdigital would write informing them that they must activate before a certain deadline.
Piracy
The ONdigital pay-per-view channels were encrypted using a conditional-access system, SECA MediaGuard, which was subsequently cracked. ONdigital did not update the system, so it was possible to produce and sell counterfeit subscription cards giving access to all the channels. About 100,000 pirate cards were in circulation by 2002, and these played a role in the demise of the broadcaster that year.
Rebranding
In April 2001 it was said that ONdigital would be 'relaunched' to bring it closer to the ITV network and to better compete with Sky. On 11 July 2001 Carlton and Granada rebranded ONdigital as ITV Digital.
Other services were also rebranded, such as ONnet to ITV Active. A re-branding campaign was launched, with customers being sent ITV Digital stickers to place over the ONdigital logos on their remote controls and set top boxes. The software running on the receivers was not changed, however, and continued to display 'ON' on nearly every screen.
The rebrand was not without controversy, as SMG plc (owner of Scottish Television and Grampian Television), UTV and Channel Television all pointed out that the ITV brand did not belong solely to Carlton and Granada. SMG and UTV initially refused to carry the advertising campaign for ITV Digital and did not allow the ITV Sport Channel space on their multiplex, thus it was not available at launch in most of Scotland and Northern Ireland. The case was resolved in Scotland and the Channel Islands and later still in Northern Ireland, allowing ITV Sport to launch in the non-Carlton and Granada regions, although it was never made available in the Channel Islands, where there was no DTT or cable, and it never appeared on Sky Digital.
Later in 2001, ITV Sport Channel was announced. This would be a premium sport channel, and would broadcast English football games as per the company's deal with the Football League in 2000, as well as ATP tennis games and Champions League games previously covered by ONsport 1 and ONsport 2. The channel launched on 11 August of that year.
Downfall
The service reached 1 million subscribers by January 2001, whereas Sky Digital had 5.7 million. Granada reported £69 million in losses in the first six months of 2001, leading some investors to urge it to close or sell ONdigital/ITV Digital. ITV Digital was unable to make a deal to put the ITV Sport Channel on Sky, which could have given the channel access to millions of Sky customers and generated income; the channel was only licensed to cable company NTL. Subscriptions for ONnet/ITV Active, its internet service, peaked at around 100,000 customers. ITV Digital had a 12% share of digital subscribers as of December 2001. ITV Digital and Granada cut jobs that month. By 2002, the company was thought to be losing up to £1 million per day.
In February 2002, Carlton and Granada said that ITV Digital needed an urgent "fundamental restructuring". The biggest cost the company faced was its three-year deal with the Football League, which was already deemed too expensive by critics when agreed, as it was inferior to the top-flight Premiership coverage from Sky Sports. It was reported on 21 March 2002 that ITV Digital had proposed paying only £50 million for its remaining two years in the Football League deal, a reduction of £129m. Chiefs from the League said that any reduction in the payment could threaten the existence of many football clubs, which had budgeted for large incomes from the television contract.
Administration
On 27 March 2002, ITV Digital was placed in administration as it was unable to pay the full amount due to the Football League. Later, as chances of its survival remained bleak, the Football League sued Carlton and Granada, claiming that the firms had breached their contract in failing to deliver the guaranteed income. However, on 1 August the league lost the case, with the judge ruling that it had "failed to extract sufficient written guarantees". The league then filed a negligence claim against its own lawyers for failing to press for a written guarantee at the time of the deal with ITV Digital. In June 2006 it was awarded a paltry £4 in damages out of the £150m it was seeking. The collapse put in doubt the government's ambition to switch off analogue terrestrial TV signals by 2010.
Despite several interested parties, the administrators were unable to find a buyer for the company and effectively put it into liquidation on 26 April 2002. Most subscription channels stopped broadcasting on ITV Digital on 1 May 2002 at 7 am, with only free-to-air services continuing. The next day, ITV chief executive Stuart Prebble quit. In all, 1,500 jobs were lost by ITV Digital's collapse. ITV Digital was eventually placed into liquidation on 18 October, with debts of £1.25 billion.
Post-collapse
By 30 April 2002, the Independent Television Commission (ITC) had revoked ITV Digital's broadcasting licence and started looking for a buyer. A consortium made up of the BBC and Crown Castle submitted an application on 13 June, later joined by BSkyB, and were awarded the licence on 4 July. They launched the Freeview service on 30 October 2002, offering 30 free-to-air TV channels and 20 free-to-air radio channels including several interactive channels such as BBC Red Button and Teletext, but no subscription or premium services. Those followed on 31 March 2004 when Top Up TV began broadcasting 11 pay TV channels in timeshared broadcast slots.
From 10 December 2002, ITV Digital's liquidators started to ask customers to return their set top boxes or pay a £39.99 fee. Had this been successful, it could have undermined the fledgling Freeview service, since at the time most digital terrestrial receivers in households were ONdigital and ITV Digital legacy hardware. In January 2003, Carlton and Granada stepped in and paid £2.8m to the liquidators to allow the boxes to stay with their customers, because at the time the ITV companies received a discount on their broadcasting licence payments based on the number of homes they had converted to digital television. It was also likely done to avoid further negativity towards the two companies.
During the time under administration, Carlton and Granada were in talks regarding a merger, which was eventually cleared in 2004.
Effect on football clubs
ITV Digital's collapse had a large effect on many football clubs. Bradford City F.C. was one of those affected; its debts forced it into administration in May 2002.
Barnsley F.C. also entered administration in October 2002, despite the club making a profit for the twelve years prior to the collapse of ITV Digital. Barnsley had budgeted on the basis that the money from the ITV Digital deal would be received, leaving a £2.5 million shortfall in their accounts when the broadcaster collapsed.
Clubs were forced to cut staff, and some had to sell players whose wages they could no longer pay. Some clubs increased ticket prices to offset the losses.
The rights to show Football League matches were resold to Sky Sports for £95 million for the next four years compared to £315 million over three years from ITV Digital, leading to a reduction from £2 million per season to £700,000 in broadcasting revenue for First Division clubs.
In total, fourteen Football League clubs were placed in administration within four years of the collapse of ITV Digital, compared to four in the four years before.
News Corporation hacking allegations
On 31 March 2002, French cable company Canal+ accused Rupert Murdoch's News Corporation in the United States of extracting the UserROM code from its MediaGuard encryption cards and leaking it onto the internet. Canal+ brought a lawsuit against News Corporation alleging that it, through its subsidiary NDS (which provides encryption technology for Sky and other TV services from Murdoch), had been working on breaking the MediaGuard smartcards used by Canal+, ITV Digital and other non-Murdoch-owned TV companies throughout Europe. The action was later partially dropped after News Corporation agreed to buy Canal+'s struggling Italian operation Telepiu, a direct rival to a Murdoch-owned company in that country.
Other legal action by EchoStar/NagraStar was being pursued as late as August 2005, accusing NDS of the same wrongdoing. In 2008, NDS was found to have broken piracy laws by hacking EchoStar Communications' smart card system; however, only $1,500 in statutory damages was awarded.
On 26 March 2012, an investigation from BBC's Panorama found evidence that one of News Corporation's subsidiaries sabotaged ITV Digital. It found that NDS hacked ONdigital/ITV Digital smartcard data and leaked them through a pirate website under Murdoch's control – actions which enabled pirated cards to flood the market. The accusations arose from emails obtained by the BBC, and an interview with Lee Gibling, the operator of a hacking website, who claimed he was paid up to £60,000 per year by Ray Adams, NDS's head of security. This would mean that Murdoch used computer hacking to directly undermine rival ITV Digital. Lawyers for News Corporation claimed that these accusations of illegal activities against a rival business are "false and libellous". In June 2013 the Metropolitan Police decided to look into these allegations following a request by Labour MP Tom Watson.
Marketing
ITV Digital ran an advertising campaign involving the comedian Johnny Vegas as Al and a knitted monkey simply called Monkey, voiced by Ben Miller. A knitted replica of Monkey could be obtained by signing up to ITV Digital. Because the monkey could not be obtained without signing up to the service, a market for second-hand monkeys developed. At one time, original ITV Digital Monkeys were fetching several hundred pounds on eBay, and knitting patterns delivered by email were sold for several pounds. The campaign was created by the advertising agency Mother. In August 2002, following ITV Digital's collapse, Vegas claimed that he was owed money for the advertisements. In early 2007, Monkey and Al reappeared in an advert for PG Tips tea, which at first included a reference to ITV Digital's downfall.
Set top boxes
This is a list of ex-ITV and ONdigital set-top boxes. All boxes used similar software, with the design of the user interface common to all models. Top Up TV provided a small software update in 2004 that made minor changes to the encryption services.
Nokia Mediamaster 9850T
Pace Micro Technology DTR-730, DTR-735
Philips DTX 6370, DTX 6371, DTX 6372
Pioneer DBR-T200, DBR-T210
Sony VTX-D500U
Toshiba DTB2000
All these set top boxes (and some ONdigital-branded integrated TVs) became obsolete after the digital switchover, completed in 2012, as post-switchover broadcasts utilised a newer 8k modulation scheme with which this earlier equipment was not compatible.
iDTVs
ONdigital and ITV Digital could also be received with an Integrated Digital Television (iDTV) receiver. They used a conditional-access module (CAM) with a smart card, plugged into a DVB Common Interface slot in the back of the set.
Purchasers of iDTVs were given a substantially discounted price on using the ONdigital service, as there was no cost for a set-top box.
Some of the original iDTVs needed firmware upgrades to work with the CAM. For example, Sony sent technicians out to homes to make the necessary updates free of charge.
Carlton/Granada digital television channels
Carlton and Granada (later ITV Digital Channels Ltd) created a selection of channels which formed part of the core content available via the service:
See also
Sky UK
Top Up TV
Freeview
NDS Group
Notes
External links
ONdigital in liquidation Information for subscribers
ONdigital history site
ITV Digital / PG Tips Monkey Mavis, 8 March 2009 – Knitting kit for Monkey
ITV Digital goes broke BBC News, 27 March 2002
Set-top box offers low-cost digital BBC News, 29 March 2002
British companies established in 1998
British companies disestablished in 2002
Mass media companies established in 1998
Mass media companies disestablished in 2002
Companies that have entered administration in the United Kingdom
Digital television in the United Kingdom
ITV (TV network)
Pay television
History of ITV |
3434199 | https://en.wikipedia.org/wiki/The%20Hacker%20Crackdown | The Hacker Crackdown | The Hacker Crackdown: Law and Disorder on the Electronic Frontier is a work of nonfiction by Bruce Sterling first published in 1992.
The book discusses watershed events in the hacker subculture in the early 1990s. The most notable topic covered is Operation Sundevil and the events surrounding the 1987–1990 war on the Legion of Doom network: the raid on Steve Jackson Games, the trial of "Knight Lightning" (one of the original journalists of Phrack), and the subsequent formation of the Electronic Frontier Foundation. The book also profiles the likes of "Emmanuel Goldstein" (publisher of 2600: The Hacker Quarterly), the former assistant attorney general of Arizona Gail Thackeray, FLETC instructor Carlton Fitzpatrick, Mitch Kapor, and John Perry Barlow.
In 1994, Sterling released the book for the Internet with a new afterword.
Historical perspective
Though published in 1992, and released as a freeware electronic book in 1994, the book offers a unique and colorful portrait of the nature of "cyberspace" in the early 1990s, and the nature of "computer crime" at that time. The events that Sterling discusses occur on the cusp of the mass popularity of the Internet, which arguably achieved critical mass in late 1994. It also encapsulates a moment in the information age revolution when "cyberspace" morphed from the realm of telephone modems and BBSs into the Internet and the World Wide Web.
Critical reception
Cory Doctorow, who voiced an unabridged podcast of the book, said it "inspired me politically, artistically and socially".
Quotations
References
External links
Editions of the book in English
Plain-text version from Project Gutenberg
Rich-text version in HTML, EPUB, and Markdown formats
Feedbooks.com version with a table of contents
HTML-formatted version hosted at MIT
eBooks@Adelaide (University of Adelaide)
Translations and other formats
Czech translation of The Hacker Crackdown
Audiobook of The Hacker Crackdown
Computer security books
Phreaking
Non-fiction Cyberpunk media
1992 non-fiction books
Books about computer hacking
Works about computer hacking |
10294 | https://en.wikipedia.org/wiki/Encryption | Encryption | In cryptography, encryption is the process of encoding information. This process converts the original representation of the information, known as plaintext, into an alternative form known as ciphertext. Ideally, only authorized parties can decipher a ciphertext back to plaintext and access the original information. Encryption does not itself prevent interference but denies the intelligible content to a would-be interceptor.
For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is possible to decrypt the message without possessing the key but, for a well-designed encryption scheme, considerable computational resources and skills are required. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients but not to unauthorized users.
Historically, various forms of encryption have been used to aid in cryptography. Early encryption techniques were often used in military messaging. Since then, new techniques have emerged and become commonplace in all areas of modern computing. Modern encryption schemes use the concepts of public-key and symmetric-key. Modern encryption techniques ensure security because, for well-designed schemes, cracking the encryption is computationally infeasible for modern computers.
History
Ancient
One of the earliest forms of encryption is symbol replacement, which was first found in the tomb of Khnumhotep II, who lived in 1900 BC Egypt. Symbol replacement encryption is “non-standard,” which means that the symbols require a cipher or key to understand. This type of early encryption was used throughout Ancient Greece and Rome for military purposes. One of the most famous military encryption developments was the Caesar Cipher, a system in which each letter of the plaintext is shifted a fixed number of positions down the alphabet to produce the encoded letter. A message encoded with this type of encryption could be decoded by anyone who knew the fixed shift.
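By way of illustration (not part of the historical account), the shift can be sketched in a few lines of Python; the shift of 3 used here is an arbitrary choice:
# Toy Caesar cipher: shift each letter by a fixed amount (illustrative only, not secure).
def caesar(text, shift):
    result = []
    for ch in text.upper():
        if ch.isalpha():
            result.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        else:
            result.append(ch)
    return ''.join(result)

ciphertext = caesar("ATTACK AT DAWN", 3)   # 'DWWDFN DW GDZQ'
plaintext = caesar(ciphertext, -3)         # shifting back by the same amount recovers the message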
Around 800 AD, Arab mathematician Al-Kindi developed the technique of frequency analysis – which was an attempt to systematically crack Caesar ciphers. This technique looked at the frequency of letters in the encrypted message to determine the appropriate shift. This technique was rendered ineffective after the creation of the Polyalphabetic cipher by Leone Alberti in 1465, which incorporated different sets of languages. In order for frequency analysis to be useful, the person trying to decrypt the message would need to know which language the sender chose.
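A rough sketch of the idea in Python, assuming English plaintext and a Caesar-style shift as above, counts letters and guesses the shift from the most frequent one (usually 'E' in English text):
# Toy frequency analysis against a Caesar cipher: assume the most frequent
# ciphertext letter stands for plaintext 'E' (a common English heuristic).
from collections import Counter

def guess_shift(ciphertext):
    counts = Counter(c for c in ciphertext.upper() if c.isalpha())
    most_common_letter = counts.most_common(1)[0][0]
    return (ord(most_common_letter) - ord('E')) % 26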
19th–20th century
Around 1790, Thomas Jefferson theorized a cipher to encode and decode messages in order to provide a more secure way of military correspondence. The cipher, known today as the Wheel Cipher or the Jefferson Disk, although never actually built, was conceived as a spool that could jumble an English message of up to 36 characters. The message could be decrypted by plugging in the jumbled message to a receiver with an identical cipher.
A similar device to the Jefferson Disk, the M-94, was developed in 1917 independently by US Army Major Joseph Mauborgne. This device was used in U.S. military communications until 1942.
In World War II, the Axis powers used a more advanced version of the M-94 called the Enigma Machine. The Enigma Machine was more complex because, unlike the Jefferson Wheel and the M-94, each day the jumble of letters switched to a completely new combination. Each day's combination was only known by the Axis, so many thought the only way to break the code would be to try over 17,000 combinations within 24 hours. The Allies used computing power to severely limit the number of reasonable combinations they needed to check every day, leading to the breaking of the Enigma Machine.
Modern
Today, encryption is used in the transfer of communication over the Internet for security and commerce. As computing power continues to increase, computer encryption is constantly evolving to prevent attacks.
Encryption in cryptography
In the context of cryptography, encryption serves as a mechanism to ensure confidentiality. Since data may be visible on the Internet, sensitive information such as passwords and personal communication may be exposed to potential interceptors. The process of encrypting and decrypting messages involves keys. The two main types of keys in cryptographic systems are symmetric-key and public-key (also known as asymmetric-key).
Many complex cryptographic algorithms often use simple modular arithmetic in their implementations.
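For example, modular exponentiation (raising a number to a power modulo n) is one such operation and is exposed directly by Python's built-in pow(); the numbers below are arbitrary:
# Fast modular exponentiation, a building block of RSA and Diffie-Hellman (toy numbers).
base, exponent, modulus = 5, 117, 19
print(pow(base, exponent, modulus))   # same result as (5 ** 117) % 19, computed efficiently; prints 1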
Types
Symmetric key
In symmetric-key schemes, the encryption and decryption keys are the same. Communicating parties must have the same key in order to achieve secure communication. The German Enigma Machine utilized a new symmetric-key each day for encoding and decoding messages.
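A minimal sketch of the symmetric idea, using a toy XOR cipher in Python rather than a real algorithm such as AES, shows the same key being used to encrypt and to decrypt:
# Toy symmetric cipher: XOR with a shared key (illustrative only; real systems use AES or similar).
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"secret"
ciphertext = xor_cipher(b"meet at noon", shared_key)
plaintext = xor_cipher(ciphertext, shared_key)   # applying the same key again restores b"meet at noon"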
Public key
In public-key encryption schemes, the encryption key is published for anyone to use and encrypt messages. However, only the receiving party has access to the decryption key that enables messages to be read. Public-key encryption was first described in a secret document in 1973; beforehand, all encryption schemes were symmetric-key (also called private-key). Diffie and Hellman published their work later, in a journal with a large readership, and the value of the methodology was explicitly described; the method became known as the Diffie-Hellman key exchange.
RSA (Rivest–Shamir–Adleman) is another notable public-key cryptosystem. Created in 1978, it is still used today for applications involving digital signatures. Using number theory, the RSA algorithm selects two prime numbers, which help generate both the encryption and decryption keys.
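Key generation and use can be sketched in Python with deliberately tiny primes; real RSA keys use primes hundreds of digits long together with padding schemes:
# Toy RSA with tiny primes (illustrative only; requires Python 3.8+ for pow(e, -1, phi)).
p, q = 61, 53
n = p * q                     # 3233, the public modulus
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, chosen coprime with phi
d = pow(e, -1, phi)           # private exponent (modular inverse of e), 2753

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key d; equals 65 again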
A publicly available public-key encryption application called Pretty Good Privacy (PGP) was written in 1991 by Phil Zimmermann, and distributed free of charge with source code. PGP was purchased by Symantec in 2010 and is regularly updated.
Uses
Encryption has long been used by militaries and governments to facilitate secret communication. It is now commonly used in protecting information within many kinds of civilian systems. For example, the Computer Security Institute reported that in 2007, 71% of companies surveyed utilized encryption for some of their data in transit, and 53% utilized encryption for some of their data in storage. Encryption can be used to protect data "at rest", such as information stored on computers and storage devices (e.g. USB flash drives). In recent years, there have been numerous reports of confidential data, such as customers' personal records, being exposed through loss or theft of laptops or backup drives; encrypting such files at rest helps protect them if physical security measures fail. Digital rights management systems, which prevent unauthorized use or reproduction of copyrighted material and protect software against reverse engineering (see also copy protection), is another somewhat different example of using encryption on data at rest.
Encryption is also used to protect data in transit, for example data being transferred via networks (e.g. the Internet, e-commerce), mobile telephones, wireless microphones, wireless intercom systems, Bluetooth devices and bank automatic teller machines. There have been numerous reports of data in transit being intercepted in recent years. Data should also be encrypted when transmitted across networks in order to protect against eavesdropping of network traffic by unauthorized users.
Data erasure
Conventional methods for permanently deleting data from a storage device involve overwriting the device's whole content with zeros, ones, or other patterns – a process which can take a significant amount of time, depending on the capacity and the type of storage medium. Cryptography offers a way of making the erasure almost instantaneous. This method is called crypto-shredding. An example implementation of this method can be found on iOS devices, where the cryptographic key is kept in a dedicated 'effaceable storage'. Because the key is stored on the same device, this setup on its own does not offer full privacy or security protection if an unauthorized person gains physical access to the device.
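In outline (a sketch only, not Apple's implementation), crypto-shredding amounts to encrypting the bulk data under a key and later destroying only that key; the example below assumes the third-party Python 'cryptography' package is installed:
# Crypto-shredding sketch: destroy the small key rather than overwriting the large data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                          # kept in 'effaceable' storage
stored_ciphertext = Fernet(key).encrypt(b"customer records")

key = None                                           # 'shredding': erase the key
# stored_ciphertext remains on the medium but is now effectively unrecoverable.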
Limitations
Encryption is used in the 21st century to protect digital data and information systems. As computing power increased over the years, encryption technology has only become more advanced and secure. However, this advancement in technology has also exposed a potential limitation of today's encryption methods.
The length of the encryption key is an indicator of the strength of the encryption method. For example, the key of the original Data Encryption Standard (DES) was 56 bits, meaning it had 2^56 possible combinations. With today's computing power, a 56-bit key is no longer secure, being vulnerable to brute force attack. Today the standard for modern encryption keys is up to 2048 bits with the RSA system. Decrypting a 2048-bit encryption key is nearly impossible in light of the number of possible combinations. However, quantum computing is threatening to change this secure nature.
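The arithmetic behind that claim is easy to check; the rate of one billion key guesses per second used below is a hypothetical figure chosen for illustration:
# Back-of-the-envelope key-space arithmetic (the 10**9 keys/second rate is an assumption).
des_keys = 2 ** 56                        # 72,057,594,037,927,936 possible DES keys
seconds = des_keys / 10 ** 9              # about 7.2e7 seconds at a billion guesses per second
print(seconds / 86400 / 365)              # roughly 2.3 years for an exhaustive search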
Quantum computing utilizes properties of quantum mechanics in order to process large amounts of data simultaneously. Quantum computing has been found to achieve computing speeds thousands of times faster than today's supercomputers. This computing power presents a challenge to today's encryption technology. For example, RSA encryption utilizes the multiplication of very large prime numbers to create a semiprime number for its public key. Decoding this key without its private key requires this semiprime number to be factored, which can take a very long time to do with modern computers. It would take a supercomputer anywhere from weeks to months to factor this key. However, quantum computing can use quantum algorithms to factor this semiprime number in the same amount of time it takes normal computers to generate it. This would make all data protected by current public-key encryption vulnerable to quantum computing attacks. Other encryption techniques like elliptic curve cryptography and symmetric key encryption are also vulnerable to quantum computing.
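The classical difficulty can be illustrated with naive trial division in Python, which already slows down for moderately sized semiprimes and is hopeless at real key sizes:
# Naive classical factoring of a semiprime by trial division (illustrative only).
# Real RSA moduli are hundreds of digits long; Shor's algorithm on a sufficiently
# large quantum computer could factor them in polynomial time.
def factor_semiprime(n):
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i, n // i
        i += 1
    return None

print(factor_semiprime(3233))   # (53, 61)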
While quantum computing could be a threat to encryption security in the future, quantum computing as it currently stands is still very limited. Quantum computing is not yet commercially available, cannot handle large amounts of code, and exists only as experimental computational devices. Furthermore, quantum computing advancements will be able to be utilized in favor of encryption as well. The National Security Agency (NSA) is currently preparing post-quantum encryption standards for the future. Quantum encryption promises a level of security that will be able to counter the threat of quantum computing.
Attacks and countermeasures
Encryption is an important tool but is not sufficient alone to ensure the security or privacy of sensitive information throughout its lifetime. Most applications of encryption protect information only at rest or in transit, leaving sensitive data in clear text and potentially vulnerable to improper disclosure during processing, such as by a cloud service for example. Homomorphic encryption and secure multi-party computation are emerging techniques to compute on encrypted data; these techniques are general and Turing complete but incur high computational and/or communication costs.
In response to encryption of data at rest, cyber-adversaries have developed new types of attacks. These more recent threats to encryption of data at rest include cryptographic attacks, stolen ciphertext attacks, attacks on encryption keys, insider attacks, data corruption or integrity attacks, data destruction attacks, and ransomware attacks. Data fragmentation and active defense data protection technologies attempt to counter some of these attacks, by distributing, moving, or mutating ciphertext so it is more difficult to identify, steal, corrupt, or destroy.
Integrity protection of ciphertexts
Encryption, by itself, can protect the confidentiality of messages, but other techniques are still needed to protect the integrity and authenticity of a message; for example, verification of a message authentication code (MAC) or a digital signature. Authenticated encryption algorithms are designed to provide both encryption and integrity protection together. Standards for cryptographic software and hardware to perform encryption are widely available, but successfully using encryption to ensure security may be a challenging problem. A single error in system design or execution can allow successful attacks. Sometimes an adversary can obtain unencrypted information without directly undoing the encryption. See for example traffic analysis, TEMPEST, or Trojan horse.
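For example, a message authentication code can be computed and verified with Python's standard library; the shared key shown is purely illustrative:
# Computing and checking a MAC with the standard library (hmac and hashlib).
import hashlib
import hmac

key = b"shared-mac-key"
message = b"transfer 100 credits"
tag = hmac.new(key, message, hashlib.sha256).digest()

# The receiver recomputes the tag over the received message and compares in constant time.
valid = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())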
Integrity protection mechanisms such as MACs and digital signatures must be applied to the ciphertext when it is first created, typically on the same device used to compose the message, to protect a message end-to-end along its full transmission path; otherwise, any node between the sender and the encryption agent could potentially tamper with it. Encrypting at the time of creation is only secure if the encryption device itself has correct keys and has not been tampered with. If an endpoint device has been configured to trust a root certificate that an attacker controls, for example, then the attacker can both inspect and tamper with encrypted data by performing a man-in-the-middle attack anywhere along the message's path. The common practice of TLS interception by network operators represents a controlled and institutionally sanctioned form of such an attack, but countries have also attempted to employ such attacks as a form of control and censorship.
Ciphertext length and padding
Even when encryption correctly hides a message's content and it cannot be tampered with at rest or in transit, a message's length is a form of metadata that can still leak sensitive information about the message. For example, the well-known CRIME and BREACH attacks against HTTPS were side-channel attacks that relied on information leakage via the length of encrypted content. Traffic analysis is a broad class of techniques that often employs message lengths to infer sensitive information about traffic flows by aggregating information about a large number of messages.
Padding a message's payload before encrypting it can help obscure the cleartext's true length, at the cost of increasing the ciphertext's size and introducing or increasing bandwidth overhead. Messages may be padded randomly or deterministically, with each approach having different tradeoffs. Encrypting and padding messages to form padded uniform random blobs or PURBs is a practice guaranteeing that the cipher text leaks no metadata about its cleartext's content, and leaks asymptotically minimal information via its length.
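A minimal sketch of deterministic padding in Python, rounding every payload up to a multiple of a fixed block size before encryption (the 256-byte block is an arbitrary choice), might look like:
# Deterministic length padding: round every payload up to a multiple of BLOCK bytes,
# so that ciphertext length reveals less about the true message length.
# (Real schemes also encode the original length so padding can be removed.)
BLOCK = 256

def pad(payload: bytes) -> bytes:
    padded_len = -(-len(payload) // BLOCK) * BLOCK   # ceiling division to the next block boundary
    return payload + b"\x00" * (padded_len - len(payload))

print(len(pad(b"short message")))   # 256
print(len(pad(b"x" * 300)))         # 512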
See also
Cryptosystem
Cold boot attack
Cyberspace Electronic Security Act (US)
Dictionary attack
Disk encryption
Encrypted function
Export of cryptography
Geo-blocking
Indistinguishability obfuscation
Key management
Multiple encryption
Physical Layer Encryption
Rainbow table
Rotor machine
Substitution cipher
Television encryption
Tokenization (data security)
References
Further reading
Kahn, David (1967), The Codebreakers - The Story of Secret Writing ()
Preneel, Bart (2000), "Advances in Cryptology - EUROCRYPT 2000", Springer Berlin Heidelberg,
Sinkov, Abraham (1966): Elementary Cryptanalysis: A Mathematical Approach, Mathematical Association of America.
Tenzer, Theo (2021): SUPER SECRETO – The Third Epoch of Cryptography: Multiple, exponential, quantum-secure and above all, simple and practical Encryption for Everyone, Norderstedt, ISBN 9783755761174.
Cryptography
Data protection |
898917 | https://en.wikipedia.org/wiki/One-liner%20program | One-liner program | Originally, a one-liner program was textual input to the command-line of an operating system shell that performs some function in just one line of input. The one-liner can be
an expression written in the language of the shell;
the invocation of an interpreter together with program source for the interpreter to run;
the invocation of a compiler together with source to compile and instructions for executing the compiled program.
Certain dynamic scripting languages such as AWK, sed, and Perl have traditionally been adept at expressing one-liners.
Shell interpreters such as Unix shells or Windows PowerShell allow for the construction of powerful one-liners.
The use of the phrase one-liner has been widened to also include program-source for any language that does something useful in one line.
History
The concept of a one-liner program has been known since the 1960s with the release of the APL programming language. With its terse syntax and powerful mathematical operators, APL allowed useful programs to be represented in a few symbols.
In the 1970s, one-liners became associated with the rise of the home computer and BASIC. Computer magazines published type-in programs in many dialects of BASIC. Some magazines devoted regular columns solely to impressive short and one-line programs.
The word One-liner also has two references in the index of the book The AWK Programming Language (the book is often referred to by the abbreviation TAPL). It explains the programming language AWK, which is part of the Unix operating system. The authors explain the birth of the one-liner paradigm with their daily work on early Unix machines:
Notice that this original definition of a one-liner implies immediate execution of the program without any compilation. So, in a strict sense, only source code for interpreted languages qualifies as a one-liner. But this strict understanding of a one-liner was broadened in 1985 when the IOCCC introduced the category of Best One Liner for C, which is a compiled language.
Examples
One-liners are also used to show off the differential expressive power of programming languages. Frequently, one-liners are used to demonstrate programming ability. Contests are often held to see who can create the most exceptional one-liner.
BASIC
A single line of BASIC can typically hold up to 255 characters, and one liners ranged from simple games to graphical demos. One of the better-known demo one-liners is colloquially known as 10PRINT, written for the Commodore 64:
10 PRINT CHR$(205.5+RND(1)); : GOTO 10
C
The following example is a C program (a winning entry in the "Best one-liner" category of the IOCCC).
main(int c,char**v){return!m(v[1],v[2]);}m(char*s,char*t){return*t-42?*s?63==*t|*s==*t&&m(s+1,t+1):!*t:m(s,t+1)||*s&&m(s+1,t);}
This one-liner program is a glob pattern matcher. It understands the glob characters `*' meaning `zero or more characters' and `?' meaning exactly one character, just like most Unix shells.
Run it with two args, the string and the glob pattern. The exit status is 0 (shell true) when the pattern matches, 1 otherwise. The glob pattern must match the whole string, so you may want to use * at the beginning and end of the pattern if you are looking for something in the middle. Examples:
$ ./a.out foo 'f??'; echo $?
$ ./a.out 'best short program' '??st*o**p?*'; echo $?
AWK
The TAPL book contains 20 examples of one-liners at the end of the book's first chapter.
Here are the very first of them:
Print the total number of input lines: END { print NR }
Print the tenth input line: NR == 10
Print the last field of every input line: { print $NF }
J
Here are examples in J:
A function avg to return the average of a list of numbers: avg=: +/ % #
Quicksort: quicksort=: (($:@(<#[) , (=#[) , $:@(>#[)) ({~ ?@#)) ^: (1<#)
Perl
Here are examples in the Perl programming language:
Look for duplicate words
perl -0777 -ne 'print "$.: doubled $_\n" while /\b(\w+)\b\s+\b\1\b/gi'
Find Palindromes in /usr/dict/words
perl -lne 'print if $_ eq reverse' /usr/dict/words
in-place edit of *.c files changing all foo to bar
perl -p -i.bak -e 's/\bfoo\b/bar/g' *.c
Many one-liners are practical. For example, the following Perl one-liner will reverse all the bytes in a file:
perl -0777e 'print scalar reverse <>' filename
While most Perl one-liners are imperative, Perl's support for anonymous functions, closures, map, filter (grep) and fold (List::Util::reduce) allows the creation of 'functional' one-liners.
This one-liner creates a function that can be used to return a list of primes up to the value of the first parameter:
my $z = sub { grep { $a=$_; !grep { !($a % $_) } (2..$_-1)} (2..$_[0]) }
It can be used on the command line, like this:
perl -e'$,=",";print sub { grep { $a=$_; !grep { !($a % $_) } (2..$_-1)} (2..$_[0]) }->(shift)' number
to print out a comma-separated list of primes in the range 2 - number.
Haskell
The following Haskell program is a one-liner: it sorts its input lines ASCIIbetically.
main = (mapM_ putStrLn . Data.List.sort . lines) =<< getContents -- In ghci a qualified name like Data.List.sort will work, although as a standalone executable you'd need to import Data.List.
An even shorter version:
main = interact (unlines . Data.List.sort . lines) -- Ditto.
Usable on the command line like:
cat filename | ghc -e "interact (unlines . Data.List.sort . lines)"
Racket
The following Racket program is equivalent to the above Haskell example:
#lang racket
(for-each displayln (sort (port->lines) string<?))
and this can be used on the command line as follows:
racket -e '(for-each displayln (sort (port->lines) string<?))'
Python
Performing one-liners directly on the Unix command line can be accomplished by using Python's -c flag, and typically requires the import of one or more modules. Statements are separated using ";" instead of newlines. For example, to print the last field of a unix long listing:
ls -l | python -c "import sys;[sys.stdout.write(' '.join([line.split(' ')[-1]])) for line in sys.stdin]"
Python wrappers
Several open-source scripts have been developed to facilitate the construction of Python one-liners. Scripts such as
pyp or Pyline import commonly used modules and provide more human-readable variables in an attempt to make Python functionality more accessible on the command line. Here is a redo of the above example (printing the last field of a unix long listing):
ls -l | pyp "whitespace[-1]" # "whitespace" represents each line split on white space in pyp
ls -l | pyline "words[-1]" # "words" represents each line split on white space in pyline
Executable libraries
The Python CGIHTTPServer module, for example, is also an executable library that acts as a web server with CGI support. To start the web server, enter:
$ python -m CGIHTTPServer
Serving HTTP on 0.0.0.0 port 8000 …
TCL Tool Control Language
Tcl (Tool Command Language) is a dynamic programming/scripting language based on concepts of Lisp, C, and Unix shells. It can be used interactively, or by running scripts (programs) which can use a package system for structuring. Following are direct quotes from Wiki Books Tcl Programming. The text in the Wiki Books Tcl Programming is available under the Creative Commons Attribution-ShareAlike License.
Many strings are also well-formed lists. Every simple word is a list of length one, and elements of longer lists are separated by whitespace. For instance, a string that corresponds to a list of three elements:
set example {foo bar grill}
Strings with unbalanced quotes or braces, or non-space characters directly following closing braces, cannot be parsed as lists directly. You can explicitly split them to make a list.
The "constructor" for lists is of course called list. It's recommended to use when elements come from variable or command substitution (braces won't do that). As Tcl commands are lists anyway, the following is a full substitute for the list command:
# One liners program
proc list args {set args}
Windows PowerShell
Finding palindromes in file words.txt
Get-Content words.txt | Where { $_ -eq -join $_[($_.length-1)..0] }
Piping semantics in PowerShell help enable complex scenarios with one-liner programs. This one-liner in PowerShell script takes a list of names and counts from a comma-separated value file, and returns the sum of the counts for each name.
ipcsv .\fruit.txt -H F, C|Group F|%{@{"$($_.Name)"=($_.Group|measure C -sum).Sum}}|sort value
See also
Bookmarklet
Tcl
References
External links
Perl Programming links
Wikibooks Free Tcl Programming introduction & download pdf
SourceForge, download website and also Multiple computer languages
Tcl Sources, main Tcl and Tk source code download website
Tcler's Wiki, Tcl/Tk scripts and reference clearing house
TkDocs, Tcl/Tk Official documentation and archives
Computer programming
Articles with example Haskell code
Articles with example C code
Articles with example Perl code
Articles with example Python (programming language) code
Articles with example Racket code |
2189519 | https://en.wikipedia.org/wiki/Mike%20Henry%20%28American%20football%29 | Mike Henry (American football) | Michael Dennis Henry (August 15, 1936 – January 8, 2021) was an American actor and NFL football linebacker. He was best known for his role as Tarzan in the 1960s trilogy and as Junior in the Smokey and the Bandit film series.
Football career
Henry attended Bell High School in Los Angeles, where his play caught the attention of USC Trojans alum John Ferraro, who arranged for him to get a tryout at USC. He attended USC and was co-captain of the 1957 USC Trojans football team.
Acting career
Henry's most prominent role was as Tarzan in three 1960s movies Tarzan and the Valley of Gold (1966), Tarzan and the Great River (1967), and Tarzan and the Jungle Boy (1968) that were all filmed back-to-back in 1965. At the time, critics said the dark-haired, square-jawed, muscular Henry resembled classic illustrations of the apeman more than any other actor who had taken on the role. Henry turned down the lead of the subsequent Tarzan television series, which then went to Ron Ely.
Henry is probably best known to movie audiences for playing Jackie Gleason's character's dim-witted son "Junior" in the highly popular Smokey and the Bandit comedies, starring Burt Reynolds and Sally Field.
Henry portrayed a corrupt prison guard in The Longest Yard (1974). Henry played Sergeant Kowalski in The Green Berets (1968), Luke Santee in More Dead Than Alive (1968), and corrupt Sheriff "Blue Tom" Hendricks in Rio Lobo (1970). He also acted with Charlton Heston in three films: the football movie Number One (1969), Skyjacked (1972), and Soylent Green (1973).
Henry played Lt. Col. Donald Penobscot in an episode of the television series M*A*S*H. In another football-oriented role, he portrayed Tatashore, one of the members of the gang who kidnap Larry Bronco (Larry Csonka) in the "One of Our Running Backs Is Missing" episode of The Six Million Dollar Man.
Personal life
Henry and his wife, Cheryl, were married in 1984. Together they had a daughter, Shannon Noble.
Illness and death
After being diagnosed with Parkinson's disease, he retired from acting in 1988. Henry died on January 8, 2021, at the age of 84 at Providence Saint Joseph Medical Center in Burbank, California, after years of complications from both Parkinson's disease and chronic traumatic encephalopathy.
Filmography
Curfew Breakers (1957) as Reagan
General Hospital (1963, TV Series) as Rudolpho (1988)
Spencer's Mountain (1963) as Spencer Brother (uncredited)
Palm Springs Weekend (1963) as Doorman (uncredited)
Tarzan and the Valley of Gold (1966) as Tarzan
Tarzan and the Great River (1967) as Tarzan
Tarzan and the Jungle Boy (1968) as Tarzan
The Green Berets (1968) as Sergeant Kowalski
More Dead Than Alive (1968) as Luke Santee
Number One (1969) as Walt Chaffee
Rio Lobo (1970) as Sheriff Tom Hendricks
Walt Disney's Wonderful World of Color (1972) as Fargo
Skyjacked (1972) as Sam Allen
Soylent Green (1973) as Kulozik
The Longest Yard (1974) as Rassmeusen
Mean Johnny Barrows (1976) as Carlo Da Vince
Adiós Amigo (1976) as Mary's Husband
No Way Back (1976) as Goon #3
Smokey and the Bandit (1977) as Junior Justice
M*A*S*H (1977, TV Series) as Donald Penobscot
Smokey and the Bandit II (1980) as Junior Justice
Fantasy Island (1981, TV Series) as Mike
Smokey and the Bandit Part 3 (1983) as Junior Justice
Outrageous Fortune (1987) as Russian #1
References
External links
Mike Henry at Brian's Drive-In Theater
1936 births
2021 deaths
Players of American football from Los Angeles
American football linebackers
USC Trojans football players
Pittsburgh Steelers players
Los Angeles Rams players
American male television actors
American male film actors |
25408438 | https://en.wikipedia.org/wiki/Simple%20Protocol%20for%20Independent%20Computing%20Environments | Simple Protocol for Independent Computing Environments | In computing, SPICE (the Simple Protocol for Independent Computing Environments) is a remote-display system built for virtual environments which allows users to view a computing "desktop" environment – not only on its computer-server machine, but also from anywhere on the Internet – using a wide variety of machine architectures.
Qumranet originally developed SPICE using a closed-source codebase in 2007. Red Hat, Inc acquired Qumranet in 2008, and in December 2009 released the code under an open-source license and made the protocol an open standard.
Security
A SPICE client connection to a remote desktop server consists of multiple data channels, each of which is run over a separate TCP or UNIX socket connection. A data channel can be designated to operate in either clear-text or TLS modes, allowing the administrator to trade off the security level against performance. The TLS mode provides strong encryption of all traffic transmitted on the data channel.
In addition to encryption, the SPICE protocol allows for a choice of authentication schemes. The original SPICE protocol defined a ticket based authentication scheme using a shared secret. The server would generate an RSA public/private keypair and send its public key to the client. The client would encrypt the ticket (password) with the public key and send the result back to the server, which would decrypt and verify the ticket. The current SPICE protocol also allows for use of the SASL authentication protocol, thus enabling support for a wide range of admin configurable authentication mechanisms, in particular Kerberos.
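In outline, and as a toy sketch only (not the actual SPICE code, and with deliberately small RSA numbers), the original ticket exchange works as follows:
# Toy sketch of the SPICE ticket scheme (illustrative; real servers use full-size RSA keys
# with a proper padding scheme, and the ticket is a password rather than a small integer).
p, q = 61, 53
n, e = p * q, 17                            # server's public key (n, e), sent to the client
d = pow(e, -1, (p - 1) * (q - 1))           # server's private key, never leaves the server

ticket = 1234                               # the shared secret, encoded as an integer < n
encrypted = pow(ticket, e, n)               # client: encrypt the ticket with the public key
accepted = pow(encrypted, d, n) == ticket   # server: decrypt and verify; True on success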
Implementations
While only one server implementation exists, several programmers have developed new implementations of the SPICE client-side since the open-sourcing of SPICE.
spice-protocol
The spice-protocol module defines the SPICE wire protocol formats. This is made available under the BSD license, and is portable across the Linux and Windows platforms.
spice
The spice module provides the reference implementation for the server side of the SPICE protocol. The server is provided as a dynamic library which can be linked to any application wishing to expose a SPICE server. QEMU uses this library to provide a SPICE interface for virtual machines. The spice codebase is available under the LGPL v2+ license.
A client part of the spice codebase named spicec was removed in December 2014.
spice-gtk
The spice-gtk module implements a SPICE client using the GObject type system and the GTK widget toolkit. This comprises a low-level library, spice-client-glib, which implements the client protocol code, and a high-level set of widgets which provide a graphical client capability using GTK. This is made available under the LGPLv2+ license, and is portable across the Linux, OS X and Windows platforms.
spice-html5
The spice-html5 module implements a SPICE client that uses JavaScript and is intended to run inside a web browser supporting HTML5. While it implements the SPICE protocol, it cannot talk directly to a regular SPICE server. It must connect to the server indirectly via WebSocket proxy. This is made available under a combination of the GPLv3+ and LGPLv3+ licenses.
Applications
The SPICE protocol originated to provide improved remote desktop capabilities in a fork of the KVM codebase.
QEMU/KVM
The QEMU maintainers merged support for providing SPICE remote desktop capabilities for all QEMU virtual machines in March 2010. The QEMU binary links to the spice-server library to provide this capability and implements the QXL paravirtualized framebuffer device to enable the guest OS to take advantage of the performance benefits the SPICE protocol offers. The guest OS may also use a regular VGA card, albeit with degraded performance as compared to QXL.
Xspice
The X.Org Server driver for the QXL framebuffer device includes a wrapper script which makes it possible to launch an Xorg server whose display is exported via the SPICE protocol. This enables use of SPICE in a remote desktop environment, without requiring QEMU/KVM virtualization.
virt-viewer
The virt-viewer program uses the spice-gtk client library to connect to virtual machines using SPICE, as an alternative to its previous support for VNC.
oVirt
SPICE is integrated into oVirt private-cloud management software, allowing users to connect to virtual machines through SPICE.
See also
Red Hat Virtualization
HP Remote Graphics Software
References
External links
SPICE protocol
Application layer protocols
Red Hat software
Remote desktop
Remote desktop protocols
Thin clients
Virtualization-related software for Linux |
694240 | https://en.wikipedia.org/wiki/Preferred%20Executable%20Format | Preferred Executable Format | The Preferred Executable Format is a file format that specifies the format of executable files and other object code. PEF executables are also called Code Fragment Manager files (CFM).
PEF was developed by Apple Computer for use in its classic Mac OS operating system. It was optimised for RISC processors. In macOS, the Mach-O file format is the native executable format. However, PEF was still supported on PowerPC-based Macintoshes running Mac OS X and was used by some Carbon applications ported from earlier versions for classic Mac OS, so that the same binary could be run on classic Mac OS and Mac OS X.
BeOS on PowerPC systems also uses PEF, although x86 systems do not.
See also
Comparison of executable file formats
Fat binary
External links
PEF Structure - documentation at developer.apple.com via web.archive.org
Mac OS Runtime Architectures For System 7 Through Mac OS 9 - PDF from developer.apple.com (see chapter 8, PEF Structure)
dumppef Documentation - description of what is in a PEF file, such as allowed sections and the string table.
Executable file formats
Apple Inc. software |
14997036 | https://en.wikipedia.org/wiki/1974%20NCAA%20Division%20I%20football%20season | 1974 NCAA Division I football season | The 1974 NCAA Division I football season finished with two national champions. The Associated Press (AP) writers' poll ranked the University of Oklahoma, which was on probation and barred by the NCAA from postseason play, No. 1 at season's end. The United Press International (UPI) coaches' poll did not rank teams on probation, by unanimous agreement of the 25 member coaches' board. The UPI trophy went to the University of Southern California (USC).
During the 20th century, the NCAA had no playoff for the major college football teams, later known as "Division I-A". The NCAA Football Guide, however, did note an "unofficial national champion" based on the top ranked teams in the "wire service" (AP and UPI) polls. The "writers' poll" by Associated Press (AP) was the most popular, followed by the "coaches' poll" by United Press International (UPI). Starting in 1974, the UPI joined AP in issuing its final poll after the bowl games were completed. Both polls operated under a point system of 20 points for first place, 19 for second, etc., whereby the overall ranking was determined. The AP poll consisted of the votes of 60 writers, though not all voted in each poll, and the UPI poll was taken of a 25-member board.
Rule changes
Blocking below the waist is prohibited on kickoffs, punts, or free kicks, and anywhere on the field except in a three-yard area around the line of scrimmage.
Shoulder pads are required equipment for all players. Prior to this, kickers and wide receivers frequently played without shoulder pads.
Penalty enforcement on running plays is from the end of the run except for fouls committed by the offense; those are penalized from the spot of the foul.
Players who enter the field are required to remain for one play, and players who leave the field are required to stay on the bench for one play. This ended the practice of sending "messenger" players in to relay plays from the sideline, then leave the field without participating.
Players leaving the bench to participate in touchdown celebrations will result in a five-yard penalty for the scoring team. If a coach joins in the celebration on the field, the penalty is 15 yards.
Successful field goals now must travel between the uprights; previously a field goal was declared good if the ball went over an upright, the standard still used by the National Football League. This became a point of dispute in the 1974 Ohio State-Michigan game, as Michigan's game-winning field goal attempt was declared no good due to the ball going over the left upright. Michigan claimed the ball curled just inside the left upright.
Conference and program changes
September
In the preseason poll released on September 2, 1974, the AP ranked Oklahoma No. 1, followed by No. 2 Ohio State, No. 3 Notre Dame, No. 4 Alabama and No. 5 USC.
September 7: No. 3 Notre Dame, the defending national champion, beat Georgia Tech in Atlanta, 31–7, in a nationally televised game on Monday night, September 9. The few other schools playing that weekend included No. 11 Houston, which lost 30−9 to No. 15 Arizona State, and No. 12 UCLA, which tied No. 16 Tennessee 17−17. Elsewhere, the scheduled Ole Miss-Tulane game in New Orleans was postponed until November 30 due to the threat of Hurricane Carmen. The next poll featured No. 1 Oklahoma, No. 2 Notre Dame, No. 3 Alabama, No. 4 Ohio State, and No. 5 USC.
September 14: No. 1 Oklahoma beat Baylor, 28–11. No. 2 Notre Dame was idle. No. 3 Alabama won at No. 14 Maryland, 21–16. No. 4 Ohio State won at Minnesota, 34–19. No. 5 USC lost to No. 20 Arkansas in Little Rock, 22–7. No. 7 Nebraska, which beat Oregon in its opener, 61–7, moved up in the polls. There was considerable disagreement between AP voters at the top of the next poll, with 19 first-place votes going to Notre Dame, 18 to Oklahoma, and 17 to Ohio State. The top five were No. 1 Notre Dame, No. 2 Ohio State, No. 3 Oklahoma, No. 4 Nebraska, and No. 5 Alabama.
September 21: No. 1 Notre Dame won at Northwestern, 49–3. No. 2 Ohio State beat Oregon State 51–10. No. 3 Oklahoma was idle. No. 4 Nebraska lost at Wisconsin, 21–20. No. 5 Alabama beat Southern Mississippi at home, 52–0. No. 6 Michigan, which beat Colorado 31–0, replaced Nebraska in the top five. In the next poll, Notre Dame had the edge in first-place votes (26 to 23), but Ohio State took the lead based on overall points. No. 1 Ohio State and No. 2 Notre Dame were followed by No. 3 Oklahoma, No. 4 Alabama, and No. 5 Michigan.
September 28: No. 1 Ohio State defeated SMU, 28–9. No. 2 Notre Dame was upset at home by Purdue, 31–20. No. 3 Oklahoma rolled over visiting Utah State, 72–3. No. 4 Alabama beat Vanderbilt 23–10. No. 5 Michigan beat Navy, 52–0. Losses by the sixth- through eighth-ranked teams opened the door for No. 9 Texas A&M, which won at Washington 28–15, to move into the top five. The next poll featured No. 1 Ohio State, No. 2 Oklahoma, No. 3 Alabama, No. 4 Michigan, and No. 5 Texas A&M.
October
October 5: No. 1 Ohio State beat Washington State 42–7 in Seattle. No. 2 Oklahoma shut out Wake Forest 63–0. No. 3 Alabama beat Mississippi at Jackson, 35–21. No. 4 Michigan won at Stanford, 27–16. No. 5 Texas A&M lost at Kansas, 28–10. No. 6 Nebraska, which beat Minnesota 54–0, moved up to No. 5 in the next poll, with the top four remaining the same.
October 12: No. 1 Ohio State beat visiting No. 13 Wisconsin 52–7. No. 2 Oklahoma barely defeated No. 17 Texas in Dallas, 16–13. No. 3 Alabama survived a game against winless (0–4) Florida State, winning 8–7. No. 4 Michigan beat Michigan State, 21–7. No. 5 Nebraska lost to Missouri 21–10 and was replaced in the next poll by No. 10 Auburn, which beat Kentucky 31–13. The poll featured No. 1 Ohio State, No. 2 Oklahoma, No. 3 Michigan, No. 4 Alabama, and No. 5 Auburn.
October 19: No. 1 Ohio State beat Indiana, 49–9. No. 2 Oklahoma won at Colorado, 49–14. No. 3 Michigan won at Wisconsin, 24–20. No. 4 Alabama won at Tennessee, 28–6. No. 5 Auburn beat Georgia Tech 31–22. The top five remained the same.
October 26: This week was defined by blowouts. No. 1 Ohio State won at Northwestern 55–7, No. 2 Oklahoma beat Kansas State 63–0, No. 3 Michigan beat Minnesota 49–0, No. 4 Alabama beat TCU 41–3 at Birmingham, and No. 5 Auburn beat Florida State 38–6. The top five again remained the same.
November
November 2: No. 1 Ohio State defeated Illinois at home, 49–7. With a record of 8–0, the Buckeyes had outscored their opposition 360 to 75. No. 2 Oklahoma won at Iowa State, 28–10. No. 3 Michigan won at Indiana, 21–7. No. 4 Alabama beat No. 17 Mississippi State 35–0, and thereby jumped over Michigan in the next poll. No. 5 Auburn lost at No. 11 Florida, 25–14. No. 8 Texas A&M, which beat Arkansas 20–10, returned to the Top Five: No. 1 Ohio State, No. 2 Oklahoma, No. 3 Alabama, No. 4 Michigan, and No. 5 Texas A&M.
November 9: In East Lansing, Michigan, No. 1 Ohio State was upset by unranked (and 4–3–1) Michigan State, 16–13. No. 2 Oklahoma, which had beaten Missouri 37–0, took the first spot. No. 3 Alabama beat LSU in Birmingham, 30–0. No. 4 Michigan won at Illinois, 14–6. No. 5 Texas A&M lost at SMU, 18–14. No. 8 Notre Dame was idle, but rose to fifth place after losses by No. 6 Florida and No. 7 Penn State. The top five were No. 1 Oklahoma, No. 2 Alabama, No. 3 Michigan, No. 4 Ohio State, and No. 5 Notre Dame.
November 16: No. 1 Oklahoma won at Kansas, 45–14. No. 2 Alabama won in Florida over Miami, 28–7, and No. 3 Michigan beat Purdue. All three teams were undefeated and untied. No. 4 Ohio State won at Iowa, 35–10, and No. 5 Notre Dame beat No. 17 Pittsburgh, 14–10. The top five remained the same.
November 23: No. 1 Oklahoma beat No. 6 Nebraska, 28–14. No. 2 Alabama was idle as it prepared for its season ender with Auburn. The latest battle of "The Ten Year War" took place in Columbus, Ohio, as No. 3 Michigan (10–0) met No. 4 Ohio State (9–1) in their annual clash for the Big Ten title. OSU won, 12–10, to clinch a third consecutive Rose Bowl berth. Over the last three years, Michigan was 30−0 against all opponents other than Ohio State, but the Big Ten's rule that only the conference champion could participate in a bowl game kept the Wolverines out of the postseason each year. No. 5 Notre Dame beat Air Force, 38–0. No. 6 USC topped UCLA 34–9 for the Pac-8 title and the right to face Ohio State in the Rose Bowl. The next poll featured No. 1 Oklahoma, No. 2 Alabama, No. 3 Ohio State, No. 4 Michigan, and No. 5 Notre Dame.
November 29−30: The annual Alabama-Auburn game took place on a Friday night in Birmingham, with No. 2 Alabama winning 17–13 over No. 7 Auburn to close its season at 11–0. The next day, No. 1 Oklahoma won its annual season ender against Oklahoma State, 44–13, to also finish 11–0. With Oklahoma barred from the postseason due to NCAA probation, the Orange Bowl organizers had already arranged for Alabama to meet No. 5 Notre Dame in a rematch of last year's national championship game. However, the Fighting Irish still had one more regular season game left, against No. 6 USC in Los Angeles. After trailing 24–0, the Trojans scored 55 unanswered points and cruised to victory, keeping themselves in national championship contention and effectively eliminating Notre Dame. The final regular season AP Poll featured No. 1 Oklahoma, No. 2 Alabama, No. 3 Ohio State, No. 4 Michigan, and No. 5 USC. Since teams on probation were ineligible to be ranked in the coaches' poll, the UPI named Alabama as No. 1, followed by Ohio State, Michigan, USC, and Auburn.
In other action, Tulane lost its final game at Tulane Stadium 26–10 to Ole Miss. The Green Wave played 38 of their next 39 seasons at the Superdome, except for 2005, when they were forced to play all of their games away from New Orleans in the wake of Hurricane Katrina. Tulane returned to campus in 2014 when Yulman Stadium opened.
Conference standings
Bowl games
Wednesday, January 1, 1975
Major bowls
Nebraska erased a 10-point deficit to defeat Florida in the Sugar Bowl, played on New Year's Eve. The following afternoon, Penn State defeated the surprise SWC champion Baylor in the Cotton Bowl. Third-ranked Ohio State (led by Woody Hayes) and No. 5 USC (coached by John McKay) played in the Rose Bowl before a crowd of 106,721 in Pasadena. Ohio State led 7–3 after three quarters, and 17–10 in the closing minutes. With 2:03 left, Pat Haden fired a 38-yard pass to John McKay Jr. (son of USC's coach) to make the score 17–16. Coach McKay then passed up a chance for a tie over the favored Buckeyes, and ordered the Trojans to go for two. Shelton Diggs dove and caught Haden's low pass in the end zone to give USC an 18–17 lead. Ohio State could only get close enough for a desperation 62-yard field goal attempt that fell about 8 yards short as time expired.
Alabama, coached by Bear Bryant, was ranked No. 1 in the UPI poll, and No. 2 (behind on-probation Oklahoma) in the AP, as it went to the Orange Bowl, where it faced 9th-ranked Notre Dame, playing its final game under Ara Parseghian. The Irish went out to a 13–0 lead early in the game, but Bama battled back with a field goal, a touchdown and a two-point run to close the score to 13–11 with three minutes left. After ruling out an onside kick attempt, the Tide forced a Notre Dame punt and got the ball back with 1:37 left. Quarterback Richard Todd attempted to drive the team to field goal range, but he threw his third interception of the game, and Notre Dame ran out the clock to preserve the upset win.
In the final polls, USC was ranked first by UPI, followed by Alabama, Ohio State, Michigan, and Notre Dame. The Trojans were second in the AP poll, where the Oklahoma Sooners were the first place choice for 51 of the 60 writers. The NCAA recognized both the Sooners and the Trojans as champions in its football guide.
Other bowls
Heisman Trophy
Archie Griffin, RB - Ohio State, 1,920 points
Anthony Davis, RB - USC, 819
Joe Washington, RB - Oklahoma, 661
Tom Clements, QB - Notre Dame, 244
David Humm, QB - Nebraska, 210
Dennis Franklin, QB - Michigan, 100
Rod Shoate, LB - Oklahoma, 97
Gary Sheide, QB - BYU, 90
Randy White, DT - Maryland, 85
Steve Bartkowski, QB - California, 74
Griffin and Washington were juniors.
See also
1974 NCAA Division I football rankings
1974 College Football All-America Team
1974 NCAA Division II football season
1974 NCAA Division III football season
References
Wen Tsing Chow

Wen Tsing Chow (1918–2001) was a Chinese-born American missile guidance scientist and a digital computer pioneer, known for the invention of programmable read-only memory, or PROM.
Biography
Chow was born in Taiyuan, Shanxi in 1918. He received a B.S. in Electrical Engineering from National Chiao Tung University (now Shanghai Jiao Tong University) in 1940 and an M.S. in EE from the Massachusetts Institute of Technology in 1942.
Chow, working for the Arma Division of the American Bosch Arma Corporation, pioneered the use of digital computers in missile, satellite and spacecraft guidance systems, leading the design of the United States Air Force Atlas E/F ICBM (Inter-Continental Ballistic Missile) all-inertial guidance system and guidance computer, the first production airborne digital computer. Mr. Chow personally formulated the design of the first all solid state, high reliability, space-borne digital computer and established the basic systems approach and mechanization of America's ICBM guidance systems.
Chow invented and holds a fundamental patent on what is now commonly known as programmable read-only memory or PROM. PROM, in the late 1950s called a "constants storage matrix," was invented for the Atlas E/F ICBM airborne digital computer.
He continued working throughout the 1960s and early 1970s to advance missile and spacecraft digital computers and guidance systems technology beyond the state of the art, working at the Aerospace Corporation on the Gemini and Minuteman programs and at IBM on the B-1, B-52, Saturn V and Skylab programs, as well as on the development of the AP-101 digital computer used in the Space Shuttle Computer Complex.
Chow, uniquely, worked on the guidance computers and guidance systems for every major United States Air Force ICBM and NASA manned space program from the very beginning with the Atlas, through Titan, Gemini, Saturn, and Skylab, to missiles and spacecraft still in service today, Minuteman and the Space Shuttle.
In 2004, the United States Air Force posthumously awarded Mr. Chow one of their highest awards, the Air Force Space and Missiles Pioneers Award, previously held by only 30 individuals. Chow is one of only a handful of civilians to receive this award and along with John von Neumann, one of only two computer scientists so honored.
References
External links
Air Force Space and Missiles Pioneer Citation for Wen Tsing Chow
U. S. Patent 3,028,659 for key PROM technology
1918 births
2001 deaths
Chinese emigrants to the United States
IBM employees
MIT School of Engineering alumni
Shanghai Jiao Tong University alumni
American electrical engineers
Chinese electrical engineers
Scientists from Shanxi
People from Taiyuan
Missile guidance
20th-century American inventors
20th-century Chinese inventors
YASARA

Yet Another Scientific Artificial Reality Application (YASARA) is a computer program for molecular visualisation, modelling, and dynamics. It has many scientific uses, as reflected in the large number of scientific articles mentioning the software. The free version of YASARA is well suited to bioinformatics education, and a series of freely available bioinformatics courses use the software; see the Center for Molecular and Biomolecular Informatics (CMBI) education pages for examples.
Modelling:
Dynamics:
See also
List of molecular graphics systems
Comparison of software for molecular mechanics modeling
Molecular graphics
Molecular design software
References
External links
Molecular modelling software
Molecular dynamics software
Princess Amelia (1799 packet)

Princess Amelia was launched in 1799 and became a packet for the British Post Office Packet Service, sailing from Falmouth, Cornwall. She sailed to North America, the West Indies, the Mediterranean, and Brazil. In 1800 a French privateer captured her, but she returned to the packet service later the same year. Joshua Barney, in the American privateer Rossie, captured her on 16 September 1812, at the start of the War of 1812. The United States Navy took her into service as USS Georgia, but then renamed her USS Troup. She served as a guardship at Savannah; the Navy sold her in 1815.
Packet
As a packet, Princess Amelia sailed from Falmouth on numerous voyages to Jamaica, to the Mediterranean, and to Brazil.
On 14 May 1800 a French privateer captured Princess Amelia packet, Richard Stevens, master, as she was returning from the Leeward Islands, and took her into Bordeaux. The privateer was Decide, or Grande Decide of Bordeaux. On 12 October 1800 the Danish vessel Two Sisters, Gardrund, master, came into Falmouth from Bordeaux, in ballast. Two Sisters was the former Princess Amelia packet. The Packet Service took Princess Amelia back into service. On 17 December 1800 Princess Amelia, G. Bryant, master, sailed from Falmouth for Jamaica.
On 18 December 1800 Grantham Packet, Bull, master, sailing from Falmouth to Jamaica, was wrecked on the Mendham Shoals off Barbados. The people aboard her were rescued. The Post Office hired Caroline to carry the passengers and mail that Grantham Packet was to carry to England, but Caroline was wrecked at Jamaica before she could leave for England. Princess Amelia Packet, Bryant, master, took the passengers and mail to Falmouth, leaving Jamaica on 8 February and arriving at Falmouth on 22 March.
Princess Amelia twice had to go into quarantine at Falmouth because of deaths due to fever. Princess Amelia, Richard Stevens, master, left Tortola on 19 January 1805 and arrived at Falmouth on 10 February. Because two crew members had died of fever on the passage, she went into quarantine on her arrival. In 1807 Captain Stevens and eight crew members died of yellow fever at Jamaica. Princess Amelia arrived at Falmouth on 11 January 1808.
Princess Amelia Packet arrived in Falmouth on 2 July 1811, having sailed from Jamaica in April. In August 1812 she left Bridgetown, Barbados, for St. Thomas. From there she sailed for England. It was on this voyage back to Falmouth that she encountered Rossie.
Rossie was armed with ten 12-pounder guns and one long 9-pounder on a pivot, and had a crew of 95; Princess Amelia was armed with four 6-pounders and two 9-pounders, and had a crew of 27 or 28. Princess Amelia had to strike after she had lost three men killed, including her captain, Isaac Moorsom, and her sailing master, John Nankevell, and 11 men wounded. (Some of the wounded may have died later as a report on her arrival in Savannah gives her casualties as six dead and six or seven wounded.) American casualties were seven men wounded, one of them, the first lieutenant, severely.
Rossie sent her prize into Savannah, Georgia.
USS Troup
At Savannah, the United States Navy bought Princess Amelia Packet and named her Georgia. The US Navy then changed her name to Troup, naming her after Congressman George Troup of Georgia, who had written to Secretary Hamilton urging her purchase.
The US Navy used Troup as a guard and receiving ship at Savannah for the remainder of the War of 1812, under the command of a Captain Walpole. She was sold at Savannah in 1815.
Notes, citations, and references
Notes
Citations
References
1799 ships
Brigs of the United States Navy
Captured ships
War of 1812 ships of the United States
1812 ships
Packet (sea transport)
Falmouth Packets
Timeline of the 2011 military intervention in Libya

This is a timeline of the 2011 military intervention in Libya. It covers military action taken by the international coalition and NATO to implement United Nations Security Council Resolution 1973, beginning on 19 March 2011.
March
19 March: BBC News reported at 16:00 GMT that the French Air Force had sent 19 fighter planes to cover an area over Benghazi to prevent any attacks on the rebel-controlled city. "Our air force will oppose any aggression by Colonel Gaddafi against the population of Benghazi", said French President Nicolas Sarkozy. BBC News reported at 16:59 GMT that at 16:45 GMT a French plane had fired at and destroyed a Libyan military vehicle – this was confirmed by French defence ministry spokesman Laurent Teisseire.
According to Al Jazeera, French aircraft destroyed four Libyan tanks in air strikes to the south-west of Benghazi. The French military claimed that its aircraft had also flown reconnaissance missions over "all Libyan territory". On the same day, British Prime Minister David Cameron confirmed that Royal Air Force jets were also in action and reports suggested that the US Navy had fired the first cruise missile. CBS News's David Martin reported that three B-2 stealth bombers flew non-stop from the US to drop 40 bombs on a major Libyan airfield. Martin further reported that US fighter jets were searching for Libyan ground forces to attack.
The Pentagon and the British Ministry of Defence confirmed that, jointly, British and U.S. Navy ships and submarines fired more than 110 Tomahawk cruise missiles, supported with air attacks on military installations, both inland and on the coast.
At the start of operations, United States Africa Command, commanded by General Carter Ham, exercised strategic command. Tactical command in the theater of operations was executed from a command ship in the Mediterranean Sea under Admiral Sam Locklear, commander of United States Naval Forces Europe. United States Secretary of Defense Robert Gates indicated that control of the operation would be transferred to French and British authorities, or NATO, within days.
20 March: Several Storm Shadow missiles were launched by British jets. Nineteen US planes conducted strike operations in Libya. The planes included Marine Corps AV-8B Harriers, US Navy EA-18G Growlers, which were diverted from operations over Iraq and jammed Libyan radar and communications, and Air Force F-15 and F-16 fighter jets. A military convoy was destroyed south of Benghazi by air strikes. Seventy military vehicles were known to have been destroyed, and multiple loyalist ground troop casualties were also reported.
Four Danish F-16 fighters left Italy's Sigonella air base for a successful five-hour-long "high-risk mission", and four Italian Tornados ECR, accompanied by four Italian F-16 as fighter escorts took off from the Trapani base. A second immediate cease-fire was declared by the Libyan Army on 20 March, starting at 9 pm.
21 March: SA-2, SA-3 and SA-5 air defence systems in Libya were destroyed by Italian aircraft during a raid near Tripoli. Only SA-6, hand-held SA-7s and SA-8 mobile SAMs remained a possible threat to aircraft. A spokesman for the National Transitional Council said Gaddafi's forces were using human shields in defence of their military assets, bringing civilians to Misrata to surround their vehicles and troops to deter airstrikes. RAF Tornados aborted a planned airstrike after reports that a number of civilians were close to the intended target. Among the buildings hit late on 20 March and early on 21 March were parts of the Bab al-Azizia compound often used by Colonel Gaddafi. Further strikes on Tripoli and, according to Libyan government spokesmen, Sabha and Sirte took place late on 21 March.
22 March: During a mission over Libya, a US F-15E crashed in rebel-held territory. It was reported that the aircraft, based at RAF Lakenheath in England, came down following a mechanical fault. Both crewmen were rescued by a US CSAR unit, but six local villagers were injured by gunfire from the rescuing US forces. There are claims that the pilot called in a bomb strike by Harrier jump jets, possibly injuring the civilians. The US announced that Qatari forces would join the operation by the weekend.
23 March: Coalition aircraft flew at least two bombing missions against loyalist forces near the besieged city of Misrata. Late in the day, it was announced that the remaining pro-Gaddafi forces and their equipment in the city, with the exception of individual snipers, had been forced to retreat or had been destroyed. In the early morning hours, four Canadian Forces CF-18 Hornets conducted two separate bombing runs on multiple targets at a pro-Gaddafi munitions depot near Misrata. NATO announced it would enforce the UN embargo to "cut off the flow of arms and mercenaries" under the name Operation Unified Protector.
24 March: Multiple Tomahawk cruise missiles were launched at targets during the day. French aircraft attacked Al Jufra Air Base inland and destroyed a Libyan Soko G-2 Galeb light attack jet as it landed at Misrata Airport. Eyewitnesses reported that coalition aircraft had bombed Sabha Air Base, south of Tripoli. F-16s from the Royal Norwegian Air Force were assigned to US Africa Command and Operation Odyssey Dawn. A number of Norwegian F-16s took off from Souda Bay Air Base on Crete, Greece, performing several missions over Libya during the day, evening and through the night.
25 March: Three laser-guided bombs were launched from two F-16s of the Royal Norwegian Air Force against Libyan tanks. French Air Force destroyed an artillery battery overnight outside Ajdabiya. RAF Tornado fighter/bombers together with the French Air Force struck and destroyed seven pro-Gaddafi tanks dug in on the outskirts of Ajdabiya with precision guided munitions.
26 March: F-16s from the Royal Norwegian Air Force bombed an airfield in Libya during the night. Two CF-18s from the Canadian Forces detachment conducted one sortie each, on a mission to release precision-guided munitions against electronic warfare sites near Misrata. French Air Force confirms the destruction by its aircraft of at least 5 Libyan Soko G-2 Galeb aircraft and 2 Mi-35 military helicopters. RAF Tornados destroyed three armoured vehicles in Misrata and a further two vehicles in Ajdabiya with Brimstone missiles. Royal Danish Air Force (RDAF) F-16s knocked out Libyan self-propelled rocket launchers and tanks.
27 March: RDAF F-16s knocked out Libyan self-propelled artillery south of Tripoli. Four Canadian Forces CF-18s struck and destroyed regime ammunition bunkers 92 km south of Misrata. Air Force and Navy Rafales attacked a command centre south of Tripoli. French and Qatari Mirage 2000-5s conducted joint patrols and air interdiction missions from Souda Air Base. The number of French Mirage 2000-5s based at Souda was increased to four.
28 March: RAF Tornados destroyed two Libyan tanks and two armoured vehicles near Misrata earlier in the day. The Ministry of Defence said British jets had launched missiles against ammunition bunkers in the morning in the Sabha area of southern Libya. Air operations were planned to focus on the region around Zintan and Misrata. A US Navy P-3 Orion maritime patrol aircraft fired at the 12-meter Libyan Coast Guard vessel Vittoria after multiple explosions were seen near the Libyan port of Misrata on Monday evening, forcing it to be beached. The USAF said an A-10 Thunderbolt also fired on two smaller Libyan vessels traveling with the larger ship, destroying one and forcing the other to be abandoned. Air force Rafales and Mirage 2000Ds and a joint patrol of Navy Rafales and Super Etendards bombed an ammunition dump at Gharyan, south of Tripoli. Mirage F1CRs conducted reconnaissance missions for the first time in the operation.
29 March: The US used AC-130 gunships and A-10 Thunderbolt tankbusters against Muammar Gaddafi's troops in Libya. US aircraft fired on a Libyan coast guard vessel, forcing it to limp to shore, after it launched missiles at merchant ships in the port of Misrata, U.S. military officials said Tuesday. Two patrols of Air Force Rafales and Mirage 2000Ds and a patrol of Navy Rafales and Super Etendards attacked anti-aircraft missile sites south west of Tripoli. Two joint patrols of French and Qatari Mirage 2000-5s conducted air interdiction sorties. Mirage 2000Ds and Super Etendards bombed a military depot south of Tripoli. 2 Canadian Forces CF-18s flew on a mission to help the rebels by attacking targets in Misrata.
30 March: A joint strike force of French Air Force Rafales and Mirage 2000Ds and Navy Rafales and Super Etendards attacked anti-aircraft missile sites south of Sirte. A patrol of two French and four Qatari Mirage 2000-5s conducted air interdiction sorties. RAF Tornados flying from Gioia del Colle engaged near Misrata three Libyan tanks, two armoured fighting vehicles and a surface-to-air missile site with Brimstone missiles and Paveway IV bombs.
31 March: At 0600 GMT, NATO took command of all operations in Libya. Subsequent operations were conducted as part of Operation Unified Protector.
April
1 April: A coalition air strike near Brega killed at least 13 people after a rebel convoy was fired upon. A NATO A-10 Thunderbolt II aircraft was believed to have attacked after an anti-aircraft gun was fired from the convoy. In the same region, up to seven civilians were reported to have been killed and 25 injured after an attack on an ammunition truck triggered an explosion that destroyed several buildings. A French patrol of Mirage 2000Ds and Super Etendards conducted a strike on a car in Khoms, west of Misrata.
2 April: French Navy Rafale fighter jets destroyed five tanks in Sirte.
3 April: French Air Force destroyed several armored vehicles in Ra's Lanuf.
4 April: A US Marine Corps AV-8B Harrier and a US Air Force A-10 Thunderbolt II flew missions near Sirte and Brega, respectively. 4 April also marked the last day of US armed forces taking an active role in military action, as all American forces were placed in reserve that evening, to be used only if requested by NATO.
5 April: Fighter jets from Jordan flew missions from an unidentified European airbase to escort transport aircraft delivering humanitarian aid in eastern Libya. NATO aircraft flew fourteen sorties near Misrata, attacking anti-aircraft installations and ground vehicles.
6 April: RAF Tornados flew missions around rebel-held Misrata and Sirte. The targets were six armoured fighting vehicles and six battle tanks. Two Typhoon aircraft had flown from Gioia del Colle air base, southern Italy, to police the no-fly zone, while two RAF VC10 aircraft provided air-to-air refuelling. The RAF announced four Typhoon jets will join 16 RAF ground-attack aircraft already under NATO command.
8 April: NATO aircraft attacked a column of rebel tanks, killing five rebels.
9 April: NATO warplanes forced a rebel MiG-23 to land. The fighter jet took off from an airfield east of Benghazi and was detected by an airborne early-warning airplane. This was the first no-fly zone violation by any aircraft since NATO took command. Also, an anonymous NATO official claimed that they had destroyed seventeen and damaged nine loyalist tanks in and around Misrata and Brega in the previous two days, of which five were destroyed by British planes. However, there was no independent confirmation of the claims, though footage of three tanks destroyed had surfaced.
10 April: NATO claimed to have hit 11 tanks or armored vehicles in the early part of the day outside Ajdabiya. A Reuters correspondent saw 15 charred corpses of Gaddafi's forces near several destroyed armored vehicles.
11 April: 4 tanks in Zintan and 1 ammunition storage site south of Sirte were hit.
12 April: RAF Typhoon aircraft were used operationally in a ground attack role for the first time. A Typhoon destroyed two main battle tanks near Misrata with Paveway II bombs, whilst a Tornado destroyed a third with a Paveway IV. In total, RAF aircraft destroyed eight main battle tanks on 12 April.
13 April: 13 bunkers, 1 tank, and 1 APC in Tripoli, and 3 multiple rocket launchers in Brega were hit.
14 April: 4 Ammunition sites, 8 bunkers, and 2 APCs in Sirte, an SA-3 radar and launcher at the Tunisian Border, 3 bunkers and a helicopter near Misrata, and 2 Ammunition sites, 1 radar, and 1 tank in Tripoli were hit.
15 April: Three airstrikes in Tripoli were carried out, targeting a missile installation and two unidentified targets.
16 April: 42 sorties were flown, occurring in several cities. Near Tripoli, two ammunition storage facilities and an antiaircraft station were destroyed. Near Misrata, six armored ground vehicles were destroyed. Near Sirte, several pieces of ground-based heavy weaponry and four ammunition facilities were destroyed. Near Zintan, an ammunition facility was damaged.
17 April: NATO flew 145 missions, of which 60 attacked targets. Near Tripoli, seven ammunition facilities were destroyed. Near Misrata, four radar installations were destroyed. Near Sirte, aircraft and one ammunition facility were destroyed. Near Zintan, air defenses and one ammunition facility were destroyed.
18 April: 9 ammunition sites and the headquarters of the 32nd Brigade in Tripoli, 6 SAM launchers, 4 tanks, 3 anti-air missile sites, and a mobile rocket launcher in Misrata, 3 ammunition bunkers in Sirte, 3 tanks, an anti-aircraft weapons system, and an armored vehicle in Zintan, and 1 building in Brega were hit.
19 April: Missions were flown to attack Gaddafi's command center in Tripoli.
21 April: US Predator drones entered the theater. According to General James Cartwright, two patrols of drones would be above Libya at all times, with the first deployment originally scheduled for 21 April, though inclement weather forced a delay.
23 April: The first successful attack using drones was carried out, according to The Pentagon, though no further information was provided.
25 April: Norwegian F-16s attacked the command center and residence of Muammar Gaddafi in Tripoli; government officials claimed it was an assassination attempt, but the US military said it was an attack on a military target. Other strikes took place in Misrata and Sirte, destroying four rocket launchers, eight personnel carriers and one vehicle and three ammunition storage or bunker facilities.
26 April: 133 sorties carried out by NATO aircraft, 56 of which attacked targets. Tanks, missile and rocket launchers, various storage facilities and other vehicles were targeted in Tripoli, Misrata, Sirte and Khoms.
27 April: Rebel forces claimed a NATO airstrike killed around 12 rebels in Misrata in a friendly fire incident. Elsewhere, 119 sorties were carried out by NATO aircraft, 41 of which engaged targets, mostly targeting weapons storage facilities, as well as several rocket or missile launchers.
30 April: The Libyan government claimed a NATO airstrike killed Saif al-Arab Gaddafi and three of Muammar Gaddafi's grandchildren in an apparent assassination attempt on the leader. They took journalists to tour what appeared to be a residential house in a wealthy section of Tripoli that had been hit by at least three missiles, but did not show them the bodies of the purported dead.
May
1 May: NATO air strikes destroyed 45 government vehicles after they were used in attacks that killed five civilians in Jalu and Awjila. After attacks on its Embassy, the British Government expelled the Libyan Ambassador from the United Kingdom. Later, the British Embassy in Tripoli was completely burnt down. British Foreign Secretary William Hague said that the government's actions broke the Vienna Convention, as they are required to "protect diplomatic missions." International staff of the UN also pulled out of Tripoli following attacks.
2 May: NATO aircraft flew 158 sorties, 56 of which were intended as strike sorties. Targets included 13 ammunition stores, one truck-mounted gun, three self-propelled guns, two armored personnel carriers and rocket launchers; these strikes occurred in Misrata, Ra's Lanuf, Brega and Zintan.
3 May: NATO conducted 161 sorties, including 62 strike sorties. Key hits include two ammunition storage areas in Tripoli, two ammunition storage areas and one armored fighting vehicle in Zintan, three ammunition storage areas and three tanks near Misrata, two tanks in Sirte, and two rocket launchers and one tank in Ra's Lanuf.
4 May: NATO aircraft conducted 160 sorties, 49 of which were intended as strike sorties. Targets included two ammunition stores, one bunker, seven military vehicles, and rocket launchers; strikes were carried out in Tripoli, Misrata, Ajdabiya and Sirte.
5 May: NATO flew 154 sorties, of which 57 were strike sorties. Targets include nine ammunition storage depots, three tanks, two armored fighting vehicles, two rocket launchers, two truck-mounted guns, and one resupply facility in Zintan, eight ammunition storage facilities in Sirte, three rocket launchers in Brega, three ammunition storage facilities in Mizda, one tank near Misrata and one communications facility in Ra's Lanuf.
6 May: NATO aircraft flew 149 sorties, 56 of which were planned to fire on targets. Nine military vehicles, seven tanks, 12 ammunition storage facilities, a building housing snipers, rocket and missile launchers and command centers in Sirte and Ra's Lanuf were targeted.
7 May: NATO aircraft flew 153 sorties on 7 May, 58 of which were intended as strike sorties to identify and/or attack targets. Targets attacked included 16 weapons storage facilities near Zintan, 34 vehicles, one anti-aircraft gun, and Scud rocket launchers near Sirte, which were destroyed by British Tornado aircraft.
8 May: 159 sorties were flown by NATO aircraft, of which 64 were strike sorties. Aircraft attacked 32 weapons storage facilities near Zintan, 25 vehicles, three buildings hosting active shooters, as well as government buildings in Tripoli.
9 May: 146 sorties, including 46 strike sorties, were flown. Key hits include three command and control facilities in Tripoli, 15 ammunition depots in Mizda, one tank and one command and control node near Misrata, and two ammunition storage facilities in Sirte.
10 May: 123 sorties were flown, including 42 strike sorties. Targets include six vehicle storage depots, three ammunition storage depots, one self-propelled anti-aircraft gun, and one SAM Launcher in Tripoli, one ammunition storage building in Mizda, and three ammunition storage buildings in Qaryat.
11 May: 141 sorties were conducted, of which 46 were intended to engage targets. Targets hit were four ammunition storage facilities, four command and control centers, and two SAM Launchers in Tripoli, four SAM Launchers in Sorman, and one SAM Launcher near Misrata.
12 May: NATO flew 135 sorties, including 52 strike sorties. Key hits were two SAM Launchers and three buildings in a military camp in Tripoli, one SAM Launcher, two buildings, one truck-mounted gun, and one anti-aircraft gun near Misrata, ten ammo storage buildings near Al Qaryat, five ammo storage buildings and one command and control node in Sirte, and one rocket launcher and one tank in Brega.
13 May: 148 sorties were flown by NATO aircraft, of which 44 were strike sorties. Targets included a command facility in Brega, a strike which killed eleven people with 45 injured, a command facility, 20 storage buildings, four ammunition storage facilities and two SAM launchers in Tripoli, eight military vehicles near Misrata and Zintan, tanks near Brega and weapons storage facilities near al Qaryat.
15 May: NATO flew 147 sorties, including 48 strike sorties. Key targets hit include one command and control center in Zawiya, four SAM launchers in Tripoli, one self-propelled artillery piece near Misrata, two ammunition storage facilities in Hun, and two SAM launchers, one self-propelled artillery piece and one APC in Sirte.
18 May: 159 sorties were flown by NATO aircraft, of which 53 were strike sorties. Major targets included a training facility near Tripoli, command buildings near Zuwara, heavy weaponry near Misrata and an ammunition storage facility near Mizda.
19 May: Eight ships of the Libyan Navy were destroyed in the ports of Tripoli, Khoms and Sirte. Elsewhere, NATO aircraft flew 166 sorties, 60 of which were strike sorties. Targets included, in addition to the naval vessels, command facilities near Tripoli, Sirte and Zuwara, rocket launchers near Khoms and Mizda, two storage buildings near Mizda and Sabha and four military vehicles near Brega and Mizda.
20 May: 157 sorties were flown by NATO aircraft, including 58 strike sorties. Targets included command facilities near Tripoli and Sabha, storage facilities near Tripoli and Sirte, rocket launchers near Tripoli and Zintan, and three SAM missile launchers near Sirte.
23 May: French Defense Minister Gerard Longuet announced that France and Britain planned to send attack helicopters to enter the conflict. According to a French newspaper, twelve Tiger and Gazelle helicopters were deployed on Tonnerre on 17 May. At the time, Britain did not confirm that it intended to send helicopters to Libya.
25 May: The Guardian reported that a formal announcement that Britain planned to send Apache helicopters would occur on 26 May. Government officials said that HMS Ocean, carrying four helicopters, was due to arrive at the Libyan coast in several days. NATO aircraft flew 136 sorties, including 42 strike sorties.
26 May: NATO aircraft flew 140 sorties, including 54 strike sorties, with targets including five storage facilities, one vehicle, one SAM launcher, heavy weaponry and aircraft.
27 May: According to a senior NATO official, French and British helicopters were planned to begin operations as soon as possible.
30 May: NATO aircraft flew 158 sorties, including 58 strike sorties.
June
1 June: NATO announced that it had extended its military operations in Libya for 90 days; Secretary-General Anders Fogh Rasmussen said that NATO "will sustain our efforts to fulfill the United Nations mandate. We will keep up the pressure to see it through." NATO also said that aircraft strikes had hit a vehicle storage facility and an SAM launcher in Tripoli, one ammunition storage facility in Mizda and one ammunition storage facility and one fire control radar in Hun, as well as having boarded and denied passage to one ship as part of the arms embargo.
2 June: 172 sorties were flown, of which 63 engaged targets. One vehicle depot, two ammunition depots, four SAM launchers, six armoured personnel carriers (APCs), one tank, two armored fighting vehicles, one command and control node and one radar were hit.
3 June: Airstrikes hit two ammunition storage facilities, three command and control nodes and a military camp comprising fourteen vehicles, two shelters and twelve tents.
4 June: British and French helicopters engaged targets for the first time on the night of 4 June, targeting heavy weapons, a radar installation and a checkpoint with Hellfire missiles and 30mm cannons.
5 June: NATO strikes attacked three command and control centers, one SAM storage facility, one ground forces compound, one air defense compound, four SAM launchers, one radar, three military vehicles and an armored fighting vehicle.
6 June: 42 strike sorties were conducted, hitting one command and control center, one SAM storage facility, two command and control nodes, one vehicle storage facility and four SAM launchers in Tripoli and one mobile command and control node near Sirte.
7 June: Warplanes struck six command and control facilities, one vehicle storage facility, two self-propelled anti-aircraft guns, one air surveillance radar and two truck mounted guns.
8 June: 113 sorties were carried out, including 47 strike sorties. Targets hit included one vehicle storage facility, two SAM facilities, one command and control center, one tank, four armored fighting vehicles, one electronic warfare vehicle, one military training camp comprising five shelters, nine containers and one air asset.
9 June: NATO conducted 149 sorties and 43 strike sorties. Targets included one vehicle storage facility, two command and control facilities, one early warning radar, two SAM launchers, two anti-air guns and three tanks in Tripoli, two rocket launchers, one truck mounted gun, four tanks, one heavy equipment transport, two command and control nodes, two armored fighting vehicles and two checkpoints near Misrata and one command and control center in Brega.
10 June: NATO planes bombed a military camp, one vehicle and maintenance facility, fourteen tanks and one military vehicle close to Tripoli, one command and control facility, three radar facilities in Ra's Lanuf, three artillery units in Waddan and one tank, two rocket launchers, one technical and two armed vehicles near Misrata.
11 June: NATO aircraft conducted 116 sorties, including 51 strike sorties. Key hits included one vehicle depot, one ammunition depot, one command and control facility, one tank, two anti-aircraft emplacements and one self-propelled artillery piece in Tripoli, one ammunition depot in Waddan, one armored vehicle near Misrata and one tank in Zliten.
12 June: International forces conducted 136 sorties, of which 52 were strike sorties. Targets included three anti-air pieces, one SAM launcher and one grenade launcher in Tripoli, one ammunition storage facility in Waddan, two rocket launchers, two anti-air emplacements and a military truck near Misrata, one ammunition facility near Al-Qaryat and four truck mounted guns and one tank in Brega.
13 June: Airstrikes hit 11 SAM launchers and detection radar, one ammunition storage facility, one command and control center, one towed artillery piece, three truck mounted guns, two military trucks, one shelter, an armored vehicle depot and one anti-air emplacement.
14 June: On the same day that Canada recognised the National Transitional Council as the government of Libya, Canada extended its military involvement by three months, to expire in September 2011. Canada had no plans to take further military action following that deadline; however, Charles Bouchard was to continue in command of the Canadian-led NATO mission past September. NATO aircraft hit one air defense compound and two SAM launchers in Tripoli, one ammunition storage facility in Waddan, three armored fighting vehicles and a truck-mounted gun near Misrata, one truck-mounted gun in Yafran and two armored fighting vehicles in Brega.
15 June: NATO flew 44 strike sorties, hitting a vehicle storage area and two command and control nodes in Tripoli, one anti-aircraft gun in Zuwara, one anti-aircraft gun, one tank, one technical and one lightweight weapon near Zintan, one ammunition storage facility near Waddan and two rocket launchers and an anti-aircraft gun near Misrata.
16 June: Warplanes bombed one vehicle storage and maintenance facility, one SAM launcher and loader vehicle, fourteen truck-mounted guns, three tanks, an ammunition storage facility and a fuel truck.
17 June: 139 sorties and 59 strike sorties were flown, hitting three vehicle depots, three SAM loaders and three self-propelled artillery pieces in and near Tripoli, one rocket launcher, three tanks and a military truck in the vicinity of Misrata and five truck mounted guns and two anti-air guns near Zintan.
18 June: NATO jets hit a column of vehicles belonging to Libyan rebels, killing four and wounding 16 in a town east of Ajdabiya in a friendly fire incident.
19 June: 137 sorties were flown, including 60 strike sorties. In Tripoli, one military vehicle storage facility and two SAM guidance radars were hit. Near Misrata, two rocket launchers, one truck-mounted gun, three tanks, two anti-aircraft artillery pieces and a military logistics truck were hit. In Sabha, one command and control node was struck.
20 June: NATO conducted 52 strike sorties, hitting one command and control node, eight SAM launchers, one SAM transport vehicle, three truck-mounted guns, two self-propelled anti-aircraft guns, one tank, one military equipment storage facility, one military vehicle storage facility and one rocket launcher.
21 June: 140 sorties were conducted, including 48 strike sorties. Targets included three SAM launchers and one self-propelled anti-air gun in Tripoli, five truck-mounted guns, one APC and two anti-aircraft guns in Nalut, one truck-mounted gun and one military camp comprising one truck-mounted gun, six military trucks and twelve shelters in Zliten, and one rocket launcher, two anti-aircraft guns and three SAM loader vehicles in Zintan.
22 June: 128 sorties were flown by international forces, of which 44 were strike sorties. Among the targets were one radar and one command and control node in Tripoli, one ammunition storage facility in Jadu, thirteen armed vehicles, one APC and one rocket launcher in Zliten, one SAM loader near Zintan and one command and control facility comprising two radar towers near Misrata.
23 June: NATO flew 149 sorties, including 47 strike sorties. Targets included one tank, one radar, one military equipment storage facility, nine self propelled artillery pieces and one anti-aircraft transport loader truck in and around Tripoli, one artillery piece in the vicinity of Zliten and two rocket launchers, one anti-aircraft missile launcher and three self-propelled artillery pieces near Zintan.
24 June: 137 sorties were conducted, including 43 strike sorties. Targets included seven command and control facilities, one military storage facility, fourteen truck mounted guns, one tank, two APCs, three logistics trucks and seven military shelters in Brega, one early warning radar and one truck mounted gun near Gharyan, two artillery pieces, one mortar and one truck mounted gun in Zliten and three SAM loader vehicles near Okba.
25 June: International forces conducted 123 sorties, of which 45 were strike sorties. Key hits included two tanks, one logistics vehicle, three military shelters, four military compounds and one antenna in Brega, one armed vehicle near Zintan, one vehicle storage facility, four anti-aircraft guns, two SAM loader vehicles, two SAM transport vehicles, one radar van and one self-propelled anti-aircraft gun in and around Tripoli.
26 June: 138 sorties were conducted, including 56 strike sorties. Among the targets hit were three command and control facilities and one tank near Brega, three technicals in Ra's Lanuf, two artillery pieces in Zintan, one antenna near Zuwara, one logistics facility near Yafran and two towed artillery pieces in Tripoli.
27 June: 142 sorties were conducted, 46 of which were intended as strike sorties. NATO hit a command and control facility and a tank in Brega, one tank in the vicinity of Ra's Lanuf, six APCs and three tanks near Zintan, three fire control radars in Zuwara and one command and control facility and one command and control vehicle in and around Tripoli.
28 June: International aircraft flew 148 sorties, including 58 strike sorties. Targets included three command and control facilities, one armored vehicle, one artillery piece, twelve armed vehicles, five armed pick-up trucks, three trucks and three military hangars in Brega, one multiple rocket launcher, one mortar, one armed vehicle, one command and control center in Zliten, one anti-air missile launcher and two radars in Tripoli and one military compound in Gharyan.
29 June: 149 sorties and 55 strike sorties were flown. Key hits included twelve military vehicles, one military truck, one APC, one ammunition storage facility, one military compound and one military checkpoint in Brega, one multiple rocket launcher, five battle tanks, two pieces of artillery and three military vehicles near Misrata, one self-propelled anti-aircraft gun, one military technical vehicle, two multiple rocket launchers and one military equipment storage facility in Tripoli, four battle tanks, one military technical vehicle and one heavy equipment transport in Gharyan, one battle tank and one military technical vehicle in Sirte, one ammunition storage facility in Waddan and one military technical vehicle in Nalut.
30 June: International forces flew 140 sorties, of which 42 were strike sorties. Targets were two command and control centers in Brega, two technicals near Misrata, one military facility and a radar in Tripoli, one military facility near Gharyan, one military storage facility in Waddan and two multiple rocket launchers near Bir al-Ghanam.
July
1 July: 136 sorties and 42 strike sorties were conducted, hitting one military facility, three radars, two anti-aircraft guns, one SAM launcher, four tanks and a command and control vehicle in Tripoli, two tanks in the vicinity of Gharyan, one military storage facility near Waddan, and two armed vehicles in Bir al-Ghanam.
2 July: 125 sorties were flown, of which 52 were strike sorties. NATO aircraft destroyed two radars and three military vehicles in Tripoli, one armed vehicle, two anti-aircraft guns, and one self-propelled artillery piece near Okba, one tank near Gharyan, three armored vehicles in Zliten, three armed vehicles near Misrata, one military storage facility and a truck in the vicinity of Waddan, and one military vehicle in Sirte. NATO vessels also boarded and denied a vessel as part of the arms embargo.
3 July: NATO aircraft flew 148 sorties, of which 71 were strike sorties. Targets that were hit included one armored fighting vehicle, one anti-aircraft gun and two command and control buildings in Tripoli, thirteen armed vehicles, two command and control nodes, two military storage facilities, one tank and one artillery piece in Brega, one military storage facility in Waddan, two armed vehicles and an anti-aircraft gun in Misrata, one armored fighting vehicle in Abu Qurayn, one tank in Sirte, one armed vehicle in Gharyan, and three armed vehicles in Zuwara.
4 July: 145 sorties and 59 strike sorties were flown, hitting one command and control center in Tripoli, one tank, one artillery piece and one military storage facility in Brega, one military facility in Waddan, one tank near Misrata, one military camp in Nalut, two armed vehicles and an armored fighting vehicle near Zintan, one armed vehicle in Zliten, and two armored fighting vehicles in Sirte.
5 July: NATO conducted 134 sorties, including 56 strike sorties. Key targets included one command and control center in Brega, two armed vehicles and four tanks near Gharyan, one tank, one command and control center and an artillery piece near Misrata, three armored fighting vehicles in Sirte, and one military storage facility in Waddan.
6 July: NATO aircraft conducted 140 sorties and 57 strike sorties, hitting military refueling equipment, eight armed vehicles, two armored fighting vehicles and one truck in Brega, one anti-aircraft gun in Gharyan, three armed vehicles near Misrata, one military storage facility near Waddan, one artillery piece and one armed vehicle in Yafran, eight armed vehicles in Zliten and an armed vehicle in Zintan.
7 July: NATO aircraft flew 134 sorties, of which 46 were strike sorties. Targets destroyed included military refueling equipment in Brega, three armed vehicles in Gharyan, one military facility in Waddan, one tank in Sirte, one artillery piece and one radar in Zliten, and three anti-aircraft guns and a command and control center in Tripoli.
8 July: 140 sorties were flown, of which 49 were strike sorties. Targets included one tank in Brega, one artillery piece and one multiple rocket launcher near Misrata, two military storage facilities, two SAM launchers, one radar and four command and control buildings in Tripoli, one military storage facility near Waddan, four armed vehicles in Yafran, one tank in Zliten and one command and control facility in Zintan. HMS Liverpool fired on three RHIBs near Zliten; two returned to Zliten, and the other beached and was destroyed by Liverpool's 4.5-inch gun.
9 July: NATO aircraft flew 112 sorties, of which 48 were strike sorties. Airstrikes hit one tank and one armed vehicle in Brega, one armored fighting vehicle, four armed vehicles, one missile, four artillery pieces and one multiple rocket launcher near Misrata, one military storage facility and five SAM launchers in Tripoli, one military storage facility and one multiple rocket launcher in Waddan, one multiple rocket launcher and a tank in Yafran and Gharyan, one artillery piece in Ra's Lanuf and one artillery piece in Zintan.
10 July: NATO conducted 139 sorties and 54 strike sorties. Key hits included three armed vehicles in Brega, eight artillery pieces, one tank, eight military vehicles, one military compound and three multiple rocket launchers in Misrata, three radars and three SAM launchers in Tripoli, three multiple rocket launchers in Zliten, one radar and one military storage facility in Okba and one military storage facility in 'Aziziya.
11 July: NATO aircraft conducted 132 sorties and 49 strike sorties. Key targets were two armed vehicles near Misrata, three radars, three SAM launchers and an anti-aircraft gun in Tripoli, one military storage facility near Waddan and three military facilities, seven military vehicles and one armed vehicle in Zuwara.
12 July: NATO aircraft flew 127 sorties, 35 of which were strike sorties, with targets including two missile launchers, two vehicles, a radar installation, five SAM launchers and a storage facility; most strikes were near Misrata and an unidentified location.
13 July: UK defense minister Liam Fox said that the British military had been stretched by the continued conflict, and that he believed other European members of NATO should expand their participation in military operations. NATO aircraft flew 130 sorties, 50 of which were strike sorties, targeting four command centers, seven SAM launchers, artillery, one tank and six armed vehicles, with most targets near Brega, Misrata and Tripoli.
14 July: An official at the UK defense ministry said that UK forces were finding it difficult to obtain further targets to attack, due to government troops using civilian vehicles and infrastructure. NATO aircraft flew 132 sorties, of which 48 were strike sorties that targeted four rocket launchers, three tanks, three vehicles and other heavy weapons, as well as five military buildings; most targets were near Brega, Gharyan and Tripoli.
15 July: NATO aircraft flew 115 sorties, 46 of which were strike sorties targeting seventeen vehicles, one SAM launcher and rocket launcher and three buildings near Brega.
16 July: NATO aircraft flew 110 sorties, 45 of which were strike sorties, with targets including six vehicles, six rocket and missile launchers, seven anti-aircraft guns, three radar installations and two storage facilities; most strikes were near Brega, Misrata and Tripoli.
17 July: NATO aircraft flew 122 sorties, 46 of which were strike sorties, targeting eleven vehicles, three military buildings and a roadblock, with most strikes occurring near Brega.
18 July: NATO aircraft flew 129 sorties, 44 of which were strike sorties, with targets consisting largely of seven artillery pieces, a SAM launcher, two storage facilities, a command building and ten vehicles; strikes took place mostly near Brega, Misrata and Tripoli.
19 July: NATO aircraft flew 113 sorties, 40 of which were strike sorties; targets included ten vehicles, seven SAM launchers, two storage facilities and seven military facilities, with most targets near Zliten, Tripoli and Brega.
20 July: NATO aircraft flew 122 sorties, 53 of which were strike sorties, targeting five vehicles, two SAM launchers, three heavy weapons and 14 buildings, including five storage, two command and operations facilities, with most targets near Zliten and Misrata.
21 July: NATO aircraft flew 124 sorties, 45 of which were strike sorties, targeting six storage facilities, one multiple rocket launcher, one building, eight anti-aircraft defenses and five military vehicles, largely near Tripoli and Zliten. The Los Angeles Times reported that the US government was considering moving additional UAVs and other surveillance aircraft to the Libyan conflict, after NATO commanders said that finding targets to attack was becoming difficult.
22 July: NATO aircraft flew 128 sorties, including 46 strike sorties. Aircraft destroyed one military storage facility in Khoms, one military storage facility and four armed vehicles in Brega, one command and control facility in Tripoli, one military facility in Waddan, three anti-aircraft guns near Zintan, and one military storage facility, two tanks, two anti-aircraft guns and one armed vehicle in Zliten.
23 July: NATO aircraft conducted 125 sorties, 56 of which were strike sorties, targeting storage facilities, anti-aircraft defenses, heavy weaponry and command centers, with most of the strikes taking place near Brega, Tripoli and Zliten.
24 July: 163 sorties were flown, of which 43 were strike sorties. Key hits included one military storage facility in Brega, one SAM launcher, one military storage facility and one tank in Tripoli, one tank near Zintan, two ammunition storage facilities and one command and control facility in Zliten and one tank and one multiple rocket launcher in Gharyan.
25 July: NATO aircraft flew 111 sorties and 54 strike sorties, hitting one military facility, five armored vehicles, two tanks and eleven light military vehicles in Brega, two command and control nodes, two anti-aircraft weapons, three multiple rocket launchers and one military vehicle near Tripoli, one ammunition storage facility near Waddan, three command and control facilities, one armored vehicle storage facility and two armed vehicles in Zliten and two armored fighting vehicles near Gharyan.
26 July: NATO aircraft flew 134 sorties, of which 46 were strike sorties. Strikes destroyed five military vehicles, one tank and one military facility in Brega, one military facility and four anti-aircraft systems in Tripoli, one ammunition storage facility near Waddan, one military ammunition supply facility, two military facilities and four military supply vehicles in Zliten and four military supply vehicles near Misrata.
27 July: NATO aircraft flew 133 sorties, of which 54 were strike sorties. Targets included three armed vehicles, two military facilities and one multiple rocket launcher in Brega, three SAM launchers and three fire control radars in Tripoli, one ammunition storage facility near Waddan, one military facility, one ammunition storage facility and two military supply vehicles in Zliten, one self-propelled artillery piece and one anti-aircraft gun near Zintan and one multiple rocket launcher in Nalut.
28 July: NATO aircraft hit two armed military vehicles near Brega, two fire control radars, four military vehicles, one military facility and one command and control node in Tripoli, one ammunition storage facility in Waddan and four military facilities, one command and control node and one ammunition storage facility in Zliten.
29 July: NATO aircraft flew 124 sorties, hitting two armed military vehicles, one mortar, one multiple rocket launcher, two military logistics vehicles and seven military facilities in Brega, three fire control radars, one command and control node and three state TV satellite dishes in Tripoli, one ammunition storage facility near Waddan, three military facilities, one command and control node, one armored fighting vehicle, five armed vehicles, one fire control radar and an ammunition storage facility in Zliten, one command and control center near Bir al-Ghanam, one military facility in Bani Walid and one anti-aircraft gun in Yafran.
30 July: Aircraft from the Royal Norwegian Air Force flew their final missions over Libya, with all troops, equipment and aircraft due to leave their base on Crete within two weeks. Targets hit included eight military vehicles, one tank and two military facilities in Brega, two anti-aircraft systems in Tripoli, one ammunition storage facility near Waddan and one command and control node, two tanks, one military vehicle, one anti-aircraft system and one military facility in Zliten.
31 July: NATO conducted 126 sorties, of which 49 were intended to strike targets. Key hits included one armed military vehicle and six multiple rocket launchers in Brega, one military facility in Tripoli, one ammunition storage facility near Waddan, three military facilities, two command and control nodes, two ammunition storage facilities, one tank, one anti-aircraft system, one multiple rocket launcher and one armed military vehicle in Zliten, two command and control nodes, one military facility and one ammunition storage facility near Bir al-Ghanam and two military vehicles near Misrata.
August
1 August: NATO conducted 114 sorties, including 48 strike sorties. Two military facilities and two anti-aircraft systems were hit in Bir al-Ghanam, an ammunition facility in Gharyan, one military facility, two SAM systems, one storage facility and a radar in Tripoli, one Ammunition storage facility near Waddan, one command and control node and one military facility near Zliten were targeted.
2 August: Out of 123 NATO sorties, 58 were strike sorties, hitting six military facilities, four command and control nodes, two tanks, three armed vehicles, one radar, two SAM systems, an anti-aircraft system, one logistic vehicle, one rocket launcher, and one ammo storage facility.
3 August: NATO flew 125 sorties, of which 53 were strike sorties. Targets included two armed vehicles, six military facilities, a multiple rocket launcher, one mortar, two anti-air systems, one SAM launcher, two radars, four command and control nodes and an ammo facility.
4 August: NATO airstrikes hit an ammunition depot and a military police facility, killing 33 troops. According to a rebel spokesperson, there were unconfirmed reports that the strikes had killed Khamis Gaddafi, the youngest son of Muammar Gaddafi, though a Libyan government official said that Khamis was still alive. In addition, NATO destroyed one military facility near Bir al-Ghanam, two military facilities in Tripoli, one tank, one multiple rocket launcher system and one military facility in Gharyan, two multiple rocket launchers and a SAM system in Zliten, two artillery pieces near Taworgha, and five military vehicles near Zuwara.
5 August: NATO aircraft hit a caravan of camels transporting weapons from Chad to a pro-government stronghold. NATO also destroyed one armed vehicle near Al Jawf, two military facilities, two tanks, 19 armed vehicles, one multiple rocket launcher, two military supply vehicles, five military trucks, six military buildings and one armored fighting vehicle in Brega, one military firing position and one command and control node in Gharyan, one multiple rocket launcher system staging area and one military checkpoint near Taworgha, seven armed vehicles near Teji, two artillery pieces near Misrata, and one military radar site and one military storage facility in Zliten.
6 August: NATO flew 115 sorties, of which 45 were strike sorties. Three rocket launchers, three command and control nodes, two military facilities, a rocket launcher storage, one tank, one surface to air system, six military supply vehicles, one military vehicle and an ammo depot were among the hit targets.
7 August: 119 sorties were conducted, including 59 strike sorties. Among the targets hit were a facility, a rocket launcher and two tanks in Brega, one anti-air gun, a surface-to-air system and a surface-to-air launcher in Tripoli, an ammunition depot near Waddan, four command and control facilities, a weapons storage facility, a rocket launcher, a military facility and an anti-tank gun in Zliten, one artillery piece in Gharyan, and a military facility near Misrata.
8–9 August: Overnight, RAF jets bombed a Libyan frigate in Tripoli harbour, a vehicle and ammunition depot, a communications center, military barracks, and a staging post.
10 August: The Italian Air Force flew its first mission using a Predator drone, conducting surveillance operations. NATO also said they hit one ammunition storage facility near Waddan, two armed vehicles near Brega, one armed vehicle and one anti-aircraft system in Gharyan, three armed vehicles and one SAM site, one military facility, one bunker, one command and control node and one radar site in Sabha, three command and control nodes and two military storage facilities in Taworgha, one military facility and one command and control node in Zliten and one multiple rocket launcher in Bir al-Ghanam.
11 August: 123 sorties were flown by NATO, including 42 strike sorties. Key hits include an ammunition storehouse near Waddan, an armed vehicle, a rocket launcher and an artillery piece in Brega, two armed vehicles in Bir al-Ghanam, five SAM vehicles in the capital of Tripoli, a radar near Sirte and a command and control facility in Zliten.
12 August: Out of 118 sorties, NATO flew 48 strike sorties. Among the targets were three ammunition storage facilities, seven armed vehicles, a military facility, a vehicle depot, a SAM launcher, a SAM facility, two armored vehicles and two anti-air emplacements.
13 August: 110 sorties and 47 strike sorties were flown, with targets including thirteen military vehicles, five anti-aircraft guns, a rocket launcher, an ammunition storage facility and two tanks.
14 August: NATO hit an anti-aircraft gun in Zawiya, a military facility near Gharyan, eleven SAM transloader vehicles, one SAM radar trailer and three radars in Tripoli, and four military facilities, one command and control facility, one armed vehicle and one artillery piece in Zliten.
15 August: NATO conducted 127 sorties, of which 49 were strike sorties. Key strikes included three tanks and two military vehicles in Zawiya, a military storage facility in Khoms, four rocket launchers in Brega, an ammunition storage facility near Waddan, one rocket launcher near Misrata, one facility and two rocket launchers in Zliten, and two tanks, one SAM vehicle, one SAM launcher and a radar near Tripoli.
16 August: NATO flew 100 sorties, including 50 strike sorties; key hits included two storage facilities, one anti-air gun, five military vehicles, two SAM trailers, one SAM launcher, and one command and control facility. NATO also claimed to have hit 150 targets in Libya over the preceding week.
17 August: A boat carrying Gaddafi forces and two armed vehicles in Zawiya, three rocket launchers and two tanks in Brega, four armed vehicles in Badr, two tanks in Zliten, two ammunition depots near Waddan, and one military facility, one radar, two surface-to-air transloaders, three SAM launchers and two surface-to-surface launchers in Tripoli were destroyed.
18 August: 133 sorties were flown, 48 of which were strike sorties. In Zawiya, one command and control facility, five tanks, two armed vehicles and a transloader were hit. In Tripoli, warplanes bombed one SAM launcher and four military facilities.
19 August: NATO conducted 130 sorties, 26 of which were strike sorties. Key hits included a military vehicle and a tank in Zawiya, nine military facilities, three radars, one radar guided anti-aircraft system and one tank in Tripoli, and one military logistics vehicle and one tank near Zliten.
20 August: In Tripoli, NATO airstrikes hit three military facilities, one military storage facility, seven SAM transloaders, one radar, one SAM launcher, two armed vehicles, two armored fighting vehicles, three command and control nodes, and two multiple rocket launchers. NATO also hit a command and control facility near Sirte, one multiple rocket launcher, one heavy machine gun, and a military firing position in Brega, one armed vehicle and an anti-air emplacement near Gharyan, and a SAM launcher near Zliten. NATO ships also boarded and stopped a vessel headed for Libya as part of the arms embargo.
21 August: NATO warplanes conducted 126 sorties, of which 46 were strike sorties. Three command and control centers, one military facility, three radars, 14 SAM launchers, one tank and two armed vehicles were hit, with the majority of the strikes in Tripoli.
22 August: Two multiple rocket launchers near Brega were destroyed by NATO. NATO also said it conducted 36 strike sorties.
23 August: NATO struck two armored fighting vehicles, two military heavy equipment trucks, three SAM systems and one radar in Tripoli, three armed vehicles and three multiple rocket launchers near Ra's Lanuf, and two tanks, three armed vehicles, two military trucks and one military facility in Zuwara.
24 August: NATO conducted 141 sorties, including 38 strike sorties. Key hits include two military storage facilities, one military heavy equipment truck, two anti-aircraft guns, one SAM support vehicle, one multiple rocket launcher and one radar in Tripoli, one SAM support vehicle in Sirte, one SAM launcher in Okba, and one anti-tank rifle in Bani Walid. NATO also boarded and denied a ship as part of the arms embargo, bringing the total to 11 denials. France began operating EADS Harfang drones in the conflict, operated from Naval Air Station Sigonella in Sicily.
25 August: NATO airstrikes destroyed one command and control node, one SAM transloader vehicle and one SAM launcher in Tripoli, and 29 armed vehicles and one command and control node in Sirte.
26 August: NATO conducted 123 sorties, including 42 strike sorties. Key hits include two military facilities, one military storage facility, and one SAM launcher near Tripoli, one armored fighting vehicle, 11 armed vehicles, three logistic military vehicles, one military observation point, two shelters, and one military engineer asset near Sirte, two multiple rocket launchers near Ra's Lanuf, one tank near El Assa, one SAM transporter and one radar near Okba, and one SAM launcher and two radars near 'Aziziya.
27 August: NATO hit a SAM launcher in Tripoli, one surface-to-surface supply vehicle in Sirte, one military storage facility in Bani Walid and one SAM facility in 'Aziziya.
28 August: NATO aircraft destroyed two armed vehicles, one multiple rocket launcher and one anti-aircraft gun near Waddan, four radars, 20 SAM canisters, three military support vehicles, one antenna and two SAM systems in Sirte, and five multiple rocket launchers, one artillery piece and one armed vehicle near Ra's Lanuf.
29 August: NATO flew 120 sorties, of which 42 were intended as strike sorties. Key hits included three command and control nodes, four radars, one SAM system, 22 armed vehicles, one command post, two military supply vehicles, one anti-aircraft missile system and one military facility near Sirte, two command and control nodes and one military ammo storage facility near Bani Walid, and five anti-aircraft artillery pieces, one multiple rocket launcher, one radar and one anti-aircraft gun near Hun. One of Muammar Gaddafi's sons, Khamis, was reported killed when a British Apache helicopter struck his vehicle. The NTC confirmed Khamis was dead and that he was buried in Bani Walid.
30 August: International forces conducted 109 sorties, including 38 strike sorties. Targets included one command and control center, three tanks, 12 armed vehicles, one military facility, one command post and one radar in Sirte, one military ammo storage facility, one military tank/multiple rocket launcher storage facility, one military facility and three SAM launchers near Bani Walid, and four anti-aircraft weapon systems, one anti-aircraft artillery piece, one radar, two tanks, two multiple rocket launchers and one artillery piece near Hun.
31 August: NATO flew 110 sorties, including 34 strike sorties. Key hits include one command and control node, five SAM transloaders, one armed vehicle, one tank, four SAM launchers and one multiple rocket launcher in Sirte, one ammo storage facility and one command and control node in Bani Walid, and one radar and one military support vehicle in Hun.
September
1 September: NATO aircraft flew 110 sorties, of which 38 were strike sorties. Key hits included one command and control node and ammunition storage facility, seven SAM transloaders, two armed vehicles, one tank, two military trucks and three SAM canisters in Sirte, one ammunition storage facility and one armed vehicle in Bani Walid and two anti-aircraft guns, two anti-aircraft artillery systems and two radars near Waddan.
2 September: NATO conducted 122 sorties, including 40 strike sorties. Targets hit included one ammunition storage facility, eleven SAM canisters, four tanks and a training area in Sirte, one military vehicle storage facility in Bani Walid and one command and control node and one military vehicle in Hun.
3 September: NATO conducted 107 sorties, of which 48 were strike sorties. Targets hit included military barracks, an ammunition storage facility, one military police camp, one command and control node, seven SAM canisters, one SAM system and one self-propelled artillery piece in Sirte, one ammunition storage facility in Bani Walid, one command and control node and four anti-aircraft guns near Hun, and one command and control node, six armed vehicles, two military barracks, three military supply vehicles, two engineer support vehicles and one multiple rocket launcher near Buwayrat.
4 September: NATO conducted 117 sorties, 52 of which were intended to strike targets. Key hits included one military vehicle storage facility, two armed vehicles, four multiple rocket launchers, two heavy machine guns and four SAM canisters in Sirte, one command and control node/warehouse in Sabha, fourteen SAM canisters near Waddan and three anti-aircraft systems and three radars in Hun.
5 September: NATO conducted 116 sorties, including 42 strike sorties. Key hits included one military radar/communications site, one command and control bunker, four armed vehicles, four SAM systems and two general military facilities in Sirte and three radars and four anti-aircraft guns in Hun.
6 September: NATO conducted 118 sorties, of which 40 were strike sorties. Key hits included one SAM canister, one multiple rocket launcher, four armed vehicles, one ammunition storage facility, six tanks, six armored fighting vehicles and one self-propelled artillery piece in Sirte, three radars and three anti-aircraft guns in Hun, one SAM facility in Sabha and eight anti-aircraft guns in Waddan.
7 September: NATO destroyed five armored fighting vehicles and two armed vehicles in Sirte and eighteen SAM systems in Waddan.
8 September: NATO conducted 113 sorties, of which 36 were strike sorties. Targets hit included two armed vehicles and one multiple rocket launcher in Sirte, nine anti-aircraft guns and three radar systems near Waddan, one military vehicle storage facility in Sabha and one SAS storage facility in Bani Walid.
9 September: 110 sorties were conducted, including 40 strike sorties. Targets included one SAM facility, one multiple rocket launcher and an armed vehicle in Sirte, one command and control facility near Hun, one military facility near Jufra, one tank in Sabha and one armed vehicle in Bani Walid.
10 September: NATO conducted 112 sorties, of which 50 were strike sorties. Key hits included one set of SAM canisters, two tanks and two armed vehicles in Sirte, three anti-aircraft guns and five SAM canisters in Waddan, one staging area near Sabha and one tank, two armed vehicles and one multiple rocket launcher in Bani Walid.
11 September: NATO conducted 114 sorties, of which 44 were strike sorties. Key hits included one military logistic facility, one command and control facility, one radar system, seven SAM systems and seven armed vehicles near Sirte, four anti-aircraft guns near Waddan and one command and control facility near Sabha.
12 September: NATO aircraft flew 114 sorties, including 37 strike sorties. Key hits included one radar system, eight SAM systems, five SAM transloaders, one armed vehicle and two air defense command vehicles in Sirte, one anti-aircraft gun near Waddan and six tanks and two armored fighting vehicles in Sabha.
13 September: NATO flew 122 sorties, including 44 strike sorties. Targets included one command and control node, one multiple rocket launcher, two anti-aircraft guns, one armed vehicle and four radar systems in Sirte, seven anti-aircraft guns near Waddan, and one armed vehicle near Zella.
14 September: NATO aircraft flew 123 sorties, including 49 strike sorties. Key hits included one command and control node, one military vehicle storage facility, four radar systems and two SAM systems in Sirte, two anti-aircraft guns, one radar system, two military logistic vehicles and three SAM systems near Waddan, one multiple rocket launcher and two armed vehicles in Zella, two armed vehicles in Bani Walid, and one military storage depot and two military staging areas near Sabha.
15 September: NATO conducted 116 sorties, including 40 strike sorties. Key targets included one military storage facility, one tank, two armed vehicles, four multiple rocket launchers and eight air missile systems in Sirte, one multiple rocket launcher near Waddan, and four armored vehicles, one multiple rocket launcher, one tank and five armed vehicles near Sabha.
16 September: NATO aircraft flew 121 sorties, including 43 strike sorties. Targets included five command and control nodes, three radars, four armed vehicles and eight air missile systems in Sirte, and four anti-aircraft guns near Hun.
17 September: NATO conducted 106 sorties, of which 42 were intended to strike targets. Key hits included two command and control nodes, four multiple rocket launchers, one armed vehicle and four SAM systems in Sirte, nine anti-aircraft guns near Hun, one command and control node and one vehicle storage facility in Jufra, and one armored fighting vehicle, one armed vehicle and one multiple rocket launcher near Sabha.
18 September: NATO conducted 223 sorties, including 43 strike sorties. Key hits included one military facility, one command and control node, one multiple rocket launcher and four air missile systems in Sirte, and one tank, four multiple rocket launchers, two armed vehicles and six anti-aircraft guns in Waddan.
19 September: NATO conducted 91 sorties, including 32 strike sorties. Targets included one armed vehicle and one multiple rocket system in Sirte, six anti-aircraft guns and one command and control node near Waddan/Hun, two air missile systems, two air defense radar facilities and three air missile storage facilities near Sabha, and one command and control node in Bani Walid.
20 September: NATO conducted 102 sorties, including 32 strike sorties. Key hits included two ammunition storage facilities, one command and control node, one military vehicle storage facility, six air missile systems and one tank in Sirte, and one military vehicle storage facility, four anti-aircraft guns and one armed vehicle near Waddan/Hun.
21 September: NATO destroyed one command and control node and five SAM launchers in Sirte, and four anti-aircraft guns and one vehicle storage depot in Hun. NATO announced an agreement to continue military action over Libya for an additional three months. The United Kingdom announced that it planned to withdraw four Eurofighter Typhoon jets and three Apache AH1 helicopters from Libyan operations, leaving sixteen Tornado GR4 jets and two Apaches for Libyan operations.
22 September 2011: NATO hit one ammunition storage depot and military barracks facility in Sirte.
23 September: NATO hit one ammunition storage facility, one anti-aircraft gun, one command and control node and two armed vehicles in Sirte.
24 September: NATO conducted 152 sorties, including 34 strike sorties, hitting two command and control nodes, one military staging position, one division storage bunker and radar facility, three ammunition storage facilities, one weapons firing position, one ammunition and vehicle depot, one vehicle staging point and 29 armed vehicles in Sirte.
25 September: NATO destroyed one command and control node, two ammunition/vehicle storage facilities, one radar facility, one multiple rocket launcher, one military support vehicle, one artillery piece and one ammunition storage facility in Sirte.
26 September: NATO destroyed one command and control node and one ammunition/vehicle storage facility in Sirte, and two bunkers/command and control nodes and one firing point in Bani Walid.
27 September: NATO destroyed one ammunition/vehicle storage facility in Sirte.
28 September: NATO conducted 96 sorties, including 30 strike sorties. Key hits include one ammunition/vehicle storage facility, one staging and firing location, one command and control staging area, two ammunition storage facilities and one tank in Sirte.
29 September: NATO flew 100 sorties, of which 42 were strike sorties. Key hits include one ammunition storage area and one multiple rocket launcher area near Sirte, and one ammunition storage area and one multiple rocket launcher in Bani Walid.
30 September: NATO destroyed 14 armed vehicles in Bani Walid.
October
1 October: NATO flew 101 sorties, including 38 strike sorties. Key hits included one multiple rocket launcher firing point and one ammunition storage facility in Bani Walid, and one command and control node, one infantry and anti-aircraft artillery staging area, two armed vehicles, four armored infantry vehicles and one tank in Sirte.
2 October: NATO destroyed one multiple rocket launcher and one armed vehicle in Sirte.
4 October: NATO destroyed one command and control node in Bani Walid.
5 October: NATO hit one military installation, six command and control nodes and one military staging location in Bani Walid.
6 October: One tank was destroyed in Bani Walid.
7 October: One firing and vehicle staging point was destroyed in Sirte.
9 October: Three armed vehicles were struck in Bani Walid.
10 October: Two ammunition and vehicle storage facilities and one missile storage facility were hit in Bani Walid.
11 October: Six military vehicles were hit in Bani Walid.
12 October: Two military vehicles in Sirte and one military vehicle in Bani Walid were hit.
13 October: Four military vehicles and one multiple rocket launcher were hit in Bani Walid.
17 October: One command and control node and nine military vehicles were hit in Bani Walid.
20 October: NATO said they had attacked a convoy of about 75 vehicles leaving Sirte, destroying one vehicle. A group of 20 vehicles carrying Muammar Gaddafi broke off from the convoy and another air asset hit ten additional vehicles. NATO also claimed they had no knowledge of Gaddafi's presence at the time of the strike. Muammar Gaddafi died after being captured by rebel forces in the area.
References
First Libyan Civil War |
69166 | https://en.wikipedia.org/wiki/Symbian%20Ltd. | Symbian Ltd. | Symbian Ltd. was a software development and licensing consortium company, known for the Symbian operating system (OS), for smartphones and some related devices. Its headquarters were in Southwark, London, England, with other offices opened in Cambridge, Sweden, Silicon Valley, Japan, India, China, South Korea, and Australia.
It was established on 24 June 1998 as a partnership between Psion, Nokia, Ericsson, Motorola, and Sony, to exploit the convergence between personal digital assistants (PDAs) and mobile phones, and as a joint effort to prevent Microsoft from extending its desktop computer monopoly into the mobile devices market. Ten years to the day after it was established, on 24 June 2008, Nokia announced that it intended to acquire the shares that it did not already own, at a cost of €264 million. On the same day the Symbian Foundation was announced, with the aim to "provide royalty-free software and accelerate innovation", and the pledged contribution of the Symbian OS and user interfaces.
The acquisition of Symbian Ltd. by Nokia was completed on 2 December 2008, at which point all Symbian employees became Nokia employees. Transfer of relevant Symbian Software Ltd. leases, trademarks, and domain names from Nokia to the Symbian Foundation was completed in April 2009. On 18 July 2009, Nokia's Symbian professional services department, which was not transferred to the Symbian Foundation, was sold to the Accenture consulting company.
Overview
Symbian Ltd. was the brainchild of Psion's next generation mobile operating system project following the 32-bit version of EPOC. Psion approached the other four companies and decided to work together on a full software suite including kernel, device drivers, and user interface. Much of Symbian's initial intellectual property came from the software arm of Psion.
Symbian Ltd developed and licensed Symbian OS, an operating system for advanced mobile phones and personal digital assistants (PDAs).
Symbian Ltd wanted the system to have different user interface layers, unlike Microsoft's offerings. Psion originally created several interfaces or "reference designs", which would later end up as Pearl (smartphone), Quartz (Palm-like PDA), and Crystal (clamshell design PDA). One early design called Emerald also ended up in the market on the Ericsson R380.
Nokia created the Series 60 (from Pearl), Series 80 and Series 90 platforms (both from Crystal), whilst UIQ Technology, which was a subsidiary of Symbian Ltd. at the time, created UIQ (from Quartz). Another interface was MOAP(S) from NTT Docomo. Despite being partners at Symbian Ltd, the different backers of each interface were effectively competing with each other's software. This became a prominent point in February 2004 when UIQ, which focused on pen-based devices, announced its foray into traditional keyboard devices, competing head-on with Nokia's Series 60 offering while Nokia was in the process of acquiring Psion's remaining stake in Symbian Ltd. to take overall control of the company.
Shareholding
The company's founding shareholders were Psion, Nokia and Ericsson. Motorola joined the Symbian consortium shortly afterwards, gaining the same 23.1% stake as Nokia and Ericsson in October 1998. Matsushita followed in May 1999, paying £22 million for an 8.8% stake. This was followed by Siemens taking 5% in April 2002 and Samsung also taking 5% in February 2003.
Motorola sold its stake in the company to Psion and Nokia in September 2003.
In February 2004, Psion, the originator of Symbian, moved to sell its 31.1% stake in the company to Nokia. This caused unease amongst other shareholders, as Nokia would gain majority control of the company, with Sony Ericsson in particular being a vocal critic. The deal was finalised in July 2004, with the stake shared among Nokia, Matsushita, Siemens and Sony Ericsson, leaving Nokia with a 47.9% share.
Decline
The decline of Symbian Ltd. has been tied to Nokia's fate. By 2007, the company enjoyed a high level of success, with its operating system running on one of every two mobile phones bearing the Nokia logo, and it claimed 65 percent of the mobile market. Symbian OS continued to dominate the market until Nokia acquired the company in its entirety in 2008 and re-established it as an independent non-profit organization, the Symbian Foundation. Nokia donated the assets of Symbian Ltd. as well as Nokia's S60 platform to the new entity, with the goal of developing an open-source and royalty-free mobile platform.
Nokia, however, began to lose market share with the emergence of Apple's iPhone and Google's Android. To address this, Nokia abandoned the Symbian OS in favor of the Windows Phone OS for its mobile devices, shipping its last Symbian handset in 2013. Having lost its biggest supporter and caretaker, Symbian was absorbed by Accenture, which was to maintain it until 2016. The Symbian Foundation has since transitioned into a licensing entity with no permanent staff, stating on its website that it is responsible only for specific licensing and legal frameworks put in place during the open-sourcing of the platform.
Licensees
Licensees of Symbian's operating system were:
Arima, BenQ, Fujitsu, Lenovo, Matsushita, Motorola, Nokia, Samsung, Sharp, Siemens and Sony Mobile.
Key people
Symbian Ltd's CEO at the time of acquisition was Nigel Clifford. Prior CEOs included David Levin, who left in 2005 to head United Business Media, and the founding CEO, Colly Myers, who left the company in 2002 to found IssueBits, the company behind text messaging Short Message Service (SMS) information service Any Question Answered (AQA).
See also
Symbian Foundation
Symbian OS
References
Defunct companies based in London
Ericsson
Motorola
Nokia assets
Panasonic
Samsung subsidiaries
Siemens
Software companies established in 1998
Software companies disestablished in 2008
Software companies of the United Kingdom
Sony Mobile
Symbian OS
1998 establishments in England
2008 establishments in England |
33676655 | https://en.wikipedia.org/wiki/Zram | Zram | zram, formerly called compcache, is a Linux kernel module for creating a compressed block device in RAM, i.e. a RAM disk with on-the-fly compression. The block device created with zram can then be used for swap or as a general-purpose RAM disk. The two most common uses for zram are the storage of temporary files and as a swap device. Initially, zram had only the latter function, hence the original name "compcache" ("compressed cache").
After four years in the Linux kernel's driver staging area, zram was introduced into the mainline Linux kernel in version 3.14, released on March 30, 2014. From Linux kernel version 3.15 onwards (released on June 8, 2014), zram supports multiple compression streams and multiple compression algorithms. Supported compression algorithms include DEFLATE, LZ4 (including LZ4HC, "high compression"), LZO (including LZO-RLE, "run-length encoding"), Zstandard (ZSTD) and 842. From kernel 5.1, the default is LZO-RLE, which offers a balance of speed and compression ratio. Like most other system parameters, the compression algorithm can be selected via sysfs.
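For illustration, a minimal Python sketch of this sysfs interface might look like the following. It is a sketch only, not a complete setup tool: it assumes the zram module is already loaded so that /sys/block/zram0 exists, that the chosen algorithm is available in the running kernel, and that the script runs with root privileges.

# Minimal sketch: select a compression algorithm and size for an existing zram device via sysfs.
# Assumptions: the zram module is loaded, /sys/block/zram0 exists, and the script runs as root.
from pathlib import Path

ZRAM = Path("/sys/block/zram0")

def configure_zram(algorithm="zstd", disksize="4G"):
    # Show the algorithms offered by the running kernel; the active one appears in brackets.
    print((ZRAM / "comp_algorithm").read_text().strip())
    (ZRAM / "comp_algorithm").write_text(algorithm)   # must be set before disksize
    (ZRAM / "disksize").write_text(disksize)          # accepts suffixes such as "4G"

if __name__ == "__main__":
    configure_zram()

The resulting device can then be formatted as swap or with a regular file system, matching the two uses described above.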
When used as a compressed swap space, zram is similar to zswap, which is not a general-purpose RAM disk, but rather an in-kernel compressed cache for swap pages. Until the introduction of CONFIG_ZRAM_WRITEBACK in kernel version 4.14, unlike zswap, zram was unable to use a storage device as a backing store, so it was unable to move less-frequently used pages to disk. However, zswap always requires a backing store, which is not the case for zram.
When used for swap, zram (like zswap) allows Linux to make more efficient use of RAM, since the operating system can then hold more pages of memory in the compressed swap than if the same amount of RAM had been used as application memory or disk cache. This is particularly effective on machines that do not have much memory. In 2012, Ubuntu briefly considered enabling zram by default on computers with small amounts of installed RAM. For this same reason, Fedora enabled zram by default starting with release 33.
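As a rough back-of-the-envelope illustration of this effect (the compression ratio below is an assumed figure, not a measurement; real ratios depend entirely on the workload):

# Illustrative arithmetic only; the 3:1 ratio is an assumption, not a measured value.
ram_for_zram_mib = 512          # RAM set aside to hold compressed pages
assumed_ratio = 3.0             # hypothetical average compression ratio
effective_swap_mib = ram_for_zram_mib * assumed_ratio
print(f"{ram_for_zram_mib} MiB of RAM holds roughly {effective_swap_mib:.0f} MiB of swapped pages")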
Using compressed swap space with zram or zswap also offers advantages for low-end hardware devices such as embedded devices and netbooks. Such devices usually use flash-based storage, which has limited lifespan due to write amplification, and may also use it to provide swap space. Using zram or zswap reduces the swap usage, which effectively reduces the amount of wear placed on flash-based storage and makes it last longer. Using zram also results in significantly reduced I/O for Linux systems that require swapping.
See also
Swap partitions on SSDs
References
External links
zram Linux kernel documentation and zramctl(8) manual page
Compcache, Compressed Caching for Linux
Compcache: in-memory compressed swapping, May 26, 2009, LWN.net, by Nitin Gupta
In-kernel memory compression, April 3, 2013, LWN.net, by Dan Magenheimer
The Compression Cache: Virtual Memory Compression for Handheld Computers, March 16, 2000, by Michael J. Freedman
Memory management
Linux kernel features
Virtual memory |
263472 | https://en.wikipedia.org/wiki/SciPy | SciPy | SciPy (pronounced "sigh pie") is a free and open-source Python library used for scientific computing and technical computing.
SciPy contains modules for optimization, linear algebra, integration, interpolation, special functions, FFT, signal and image processing, ODE solvers and other tasks common in science and engineering.
SciPy is also a family of conferences for users and developers of these tools: SciPy (in the United States), EuroSciPy (in Europe) and SciPy.in (in India). Enthought originated the SciPy conference in the United States and continues to sponsor many of the international conferences as well as host the SciPy website.
The SciPy library is currently distributed under the BSD license, and its development is sponsored and supported by an open community of developers. It is also supported by NumFOCUS, a community foundation for supporting reproducible and accessible science.
Components
The SciPy package is at the core of Python's scientific computing capabilities. Available sub-packages include the following; a brief usage sketch follows the list:
cluster: hierarchical clustering, vector quantization, K-means
constants: physical constants and conversion factors
fft: Discrete Fourier Transform algorithms
fftpack: Legacy interface for Discrete Fourier Transforms
integrate: numerical integration routines
interpolate: interpolation tools
io: data input and output
linalg: linear algebra routines
misc: miscellaneous utilities (e.g. example images)
ndimage: various functions for multi-dimensional image processing
ODR: orthogonal distance regression classes and algorithms
optimize: optimization algorithms including linear programming
signal: signal processing tools
sparse: sparse matrices and related algorithms
spatial: algorithms for spatial structures such as k-d trees, nearest neighbors, Convex hulls, etc.
special: special functions
stats: statistical functions
weave: tool for writing C/C++ code as Python multiline strings (now deprecated in favor of Cython)
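As a brief illustration of the sub-packages listed above, the following sketch uses integrate and optimize (it assumes NumPy and SciPy are installed; the functions shown are standard parts of those sub-packages):

# Minimal sketch using two of the sub-packages listed above.
import numpy as np
from scipy import integrate, optimize

# Numerically integrate sin(x) over [0, pi]; the exact result is 2.
value, abserr = integrate.quad(np.sin, 0, np.pi)

# Find the root of cos(x) - x inside the bracket [0, 2].
root = optimize.brentq(lambda x: np.cos(x) - x, 0, 2)

print(value, root)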
Data structures
The basic data structure used by SciPy is a multidimensional array provided by the NumPy module. NumPy provides some functions for linear algebra, Fourier transforms, and random number generation, but not with the generality of the equivalent functions in SciPy. NumPy can also be used as an efficient multidimensional container of data with arbitrary datatypes. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases. Older versions of SciPy used Numeric as an array type, which is now deprecated in favor of the newer NumPy array code.
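A short sketch of this relationship (assuming NumPy and SciPy are installed) shows a SciPy routine operating directly on a NumPy array:

# Sketch: SciPy routines operate directly on NumPy's ndarray.
import numpy as np
from scipy import linalg

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = linalg.solve(A, b)               # solve the linear system A @ x = b
print(x, np.allclose(A @ x, b))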
History
In the 1990s, Python was extended to include an array type for numerical computing called Numeric (this package was eventually replaced by NumPy, written by Travis Oliphant in 2006 as a blending of Numeric and Numarray, which had been started in 2001). As of 2000, there was a growing number of extension modules and increasing interest in creating a complete environment for scientific and technical computing. In 2001, Travis Oliphant, Eric Jones, and Pearu Peterson merged code they had written and called the resulting package SciPy. The newly created package provided a standard collection of common numerical operations on top of the Numeric array data structure. Shortly thereafter, Fernando Pérez released IPython, an enhanced interactive shell widely used in the technical computing community, and John Hunter released the first version of Matplotlib, the 2D plotting library for technical computing. Since then the SciPy environment has continued to grow with more packages and tools for technical computing.
See also
Comparison of numerical-analysis software
List of numerical-analysis software
Comparison of statistical packages
SageMath
Notes
Further reading
External links
Cross-platform software
Free science software
Numerical analysis software for Linux
Numerical analysis software for MacOS
Numerical analysis software for Windows
Numerical programming languages
Python (programming language) scientific libraries
Software using the BSD license |
188479 | https://en.wikipedia.org/wiki/List%20of%20file%20formats | List of file formats | This is a list of file formats used by computers, organized by type. The filename extension is usually noted in parentheses if it differs from the file format name or abbreviation. Many operating systems do not limit filenames to one extension shorter than 4 characters, as was common with some operating systems that supported the File Allocation Table (FAT) file system. Examples of operating systems that do not impose this limit include Unix-like systems, and Microsoft Windows NT, 95-98, and ME, which have no three-character limit on extensions for 32-bit or 64-bit applications on file systems other than pre-Windows 95 and Windows NT 3.5 versions of the FAT file system. Some filenames are given extensions longer than three characters. While MS-DOS and NT always treat the suffix after the last period in a file's name as its extension, in UNIX-like systems the final period does not necessarily mean that the text after the last period is the file's extension.
Some file formats, such as .txt or .text, may be listed multiple times.
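As an illustration of the "suffix after the last period" convention mentioned above, a short Python sketch (the file names are arbitrary examples):

# Sketch: os.path.splitext treats the text after the last period as the extension.
import os

for name in ("report.tar.gz", "archive.7z", "README", ".bashrc"):
    stem, ext = os.path.splitext(name)
    print(f"{name!r}: stem={stem!r}, extension={ext!r}")

Note that "report.tar.gz" yields only ".gz" as the extension, and dot-files such as ".bashrc" yield no extension at all.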
Archive and compressed
.?mn – a custom file format made by Team Gastereler to make it easy to open Nintendo .arc files on PC. These files are not available anywhere, as they have not been released yet.
.?Q? – files that are compressed, often by the SQ program.
7z – 7-Zip compressed file
A – a static library of object files, used as an external file extension for C/C++
AAPKG – ArchestrA IDE
AAC – Advanced Audio Coding
ace – ACE compressed file
ALZ – ALZip compressed file
APK – Android package: Applications installable on Android; package format of the Alpine Linux distribution
APPX – Microsoft Application Package (.appx)
AT3 – Sony's UMD data compression
.bke – BackupEarth.com data compression
ARC – pre-Zip data compression
ARC - Nintendo U8 Archive (mostly Yaz0 compressed)
ARJ – ARJ compressed file
ASS (also SAS) – a subtitles file created by Aegisub, a video typesetting application (also a Halo game engine file)
B – (B file) Similar to .a, but less compressed.
BA – Scifer Archive (.ba), Scifer External Archive Type
BB – a 3D image file made with the application Artlantis
big – Special file compression format used by Electronic Arts to compress the data for many of EA's games
BIN – compressed archive; can be read and used by CD-ROMs and Java, and extracted by 7-Zip and WinRAR
bjsn – Used to store The Escapists saves on Android.
BKF (.bkf) – Microsoft backup created by NTBackup
Blend – a 3D scene file format used by the animation software Blender
bzip2 (.bz2) – compressed file
BMP – bitmap image; on Windows, one can be created by right-clicking the desktop and choosing New, then Bitmap Image
bld – Skyscraper Simulator Building
cab – A cabinet (.cab) file is a library of compressed files stored as one file. Cabinet files are used to organize installation files that are copied to the user's system.
c4 – JEDMICS image files, a DOD system
cals – JEDMICS image files, a DOD system
xaml – Extensible Application Markup Language, an XML-based markup used in programs such as Visual Studio to define application user interfaces
CLIPFLAIR (.clipflair, .clipflair.zip) – ClipFlair Studio ClipFlair component saved state file (contains component options in XML, extra/attached files and nested components' state in child .clipflair.zip files – activities are also components and can be nested at any depth)
CPT, SEA – Compact Pro (Macintosh)
DAA – Closed-format, Windows-only compressed disk image
deb – Debian install package
DMG – an Apple compressed/encrypted format
DDZ – a file which can only be used by the "daydreamer engine" created by "fever-dreamer", a program similar to RAGS; it is mainly used to make somewhat short games
DN – Adobe Dimension CC file format
DPE – Package of AVE documents made with Aquafadas digital publishing tools.
.egg – Alzip Egg Edition compressed file
EGT (.egt) – EGT Universal Document; also used to create compressed cabinet files, replacing .ecab
ECAB (.ECAB, .ezip) – EGT Compressed Folder used in advanced systems to compress entire system folders, replaced by EGT Universal Document
ESD – Electronic Software Distribution, a compressed and encrypted WIM File
ESS (.ess) – EGT SmartSense File, detects files compressed using the EGT compression system.
EXE (.exe) – Windows application
Flipchart file (.flipchart) – Used in Promethean ActivInspire Flipchart Software.
GBP – two types of file: 1. an archive index file created by Genie Timeline, containing references to the files that the user has chosen to back up (the references can point to an archive file or a batch of files); these files can be opened using Genie-Soft Genie Timeline on Windows. 2. a data output file created by printed circuit board (PCB) CAD software; this type of file can be opened on Windows using Autodesk EAGLE, Altium Designer, Viewplot, PCB Elegance or Gerbv, on Mac using Autodesk EAGLE, Gerbv or gEDA, and on Linux using Autodesk EAGLE, gEDA or Gerbv.
GBS (.gbs, .ggp, .gsc) – OtterUI binary scene file
GHO (.gho, .ghs) – Norton Ghost
GIF (.gif) – Graphics Interchange Format
gzip (.gz) – Compressed file
HTML (.html) – HTML code file
IPG (.ipg) – format in which Apple Inc. packages their iPod games; can be extracted with WinRAR
jar – ZIP file with manifest for use with Java applications.
JPG – Joint Photographic Experts Group image file
JPEG – Joint Photographic Experts Group image file
LBR (.Lawrence) – Lawrence Compiler Type file
LBR – Library file
LQR – LBR Library file compressed by the SQ program.
LHA (.lzh) – Lempel, Ziv, Huffman
lzip (.lz) – Compressed file
lzo
lzma – Lempel–Ziv–Markov chain algorithm compressed file
LZX
MBW (.mbw) – MBRWizard archive
MHTML – Mime HTML (Hyper-Text Markup Language) code file
MPQ Archives (.mpq) – Used by Blizzard Entertainment
BIN (.bin) – MacBinary
NL2PKG – NoLimits 2 Package (.nl2pkg)
NTH (.nth) – Nokia Theme Used by Nokia Series 40 Cellphones
OAR (.oar) – OAR archive
OSK - Compressed osu! skin archive
OSR - Compressed osu! replay archive
OSZ – Compressed osu! beatmap archive
PAK – Enhanced type of .ARC archive
PAR (.par, .par2) – Parchive
PAF (.paf) – Portable Application File
PEA (.pea) – PeaZip archive file
PNG - Portable Network Graphic - Image File
PHP (.php) – PHP code file
PYK (.pyk) – Compressed file
PK3 (.pk3) – Quake 3 archive (See note on Doom³)
PK4 (.pk4) – Doom³ archive (Opens similarly to a zip archive.)
PXZ (.pxz) – a compressed layered image file used by the image editing website pixlr.com
py / pyw – Python code file
RAR (.rar) – RAR archive, for multi-file archives (split volumes run from .rar through .r01–.r99 to .s01 and so on)
RAG, RAGS – game file playable in the RAGS game engine, a free program which allows people both to create and to play games; games created with it have the "RAG game file" format
RaX – Archive file created by RaX
RBXL – Roblox Studio place file
RBXLX – Roblox Studio XML place file
RBXM - Roblox studio script file
RPM – Red Hat package/installer for Fedora, RHEL, and similar systems.
sb – Scratch file
sb2 – Scratch 2.0 file
sb3 - Scratch 3.0 file
SEN – Scifer Archive (.sen) – Scifer Internal Archive Type
SIT (.sitx) – StuffIt (Macintosh)
SIS/SISX – Symbian Application Package
SKB – Google SketchUp backup File
SQ (.sq) – Squish Compressed Archive
SWM – split WIM file, usually found on an OEM recovery partition to store the preinstalled Windows image, and to make recovery backups (to a USB drive) easier (due to FAT32 limitations)
SZS – Nintendo Yaz0 Compressed Archive
TAR – group of files, packaged as one file
TGZ (.tar.gz) – gzipped tar file
TB (.tb) – Tabbery Virtual Desktop Tab file
TIB (.tib) – Acronis True Image backup
UHA – Ultra High Archive Compression
UUE (.uue) – unified utility engine – the generic and default format for all things UUe-related.
VIV – Archive format used to compress data for several video games, including Need For Speed: High Stakes.
VOL – video game data package.
VSA – Altiris Virtual Software Archive
WAX – Wavexpress – A ZIP alternative optimized for packages containing video, allowing multiple packaged files to be all-or-none delivered with near-instantaneous unpacking via NTFS file system manipulation.
WIM – A compressed disk image for installing Windows Vista or higher, Windows Fundamentals for Legacy PC, or restoring a system image made from Backup and Restore (Windows Vista/7)
XAP – Windows Phone Application Package
xz – xz compressed files, based on LZMA/LZMA2 algorithm
Z – Unix compress file
zoo – based on LZW
zip – popular compression format
ZIM – an open file format that stores wiki content for offline usage
Physical recordable media archiving
ISO – The generic format for most optical media, including CD-ROM, DVD-ROM, Blu-ray Disc, HD DVD and UMD.
NRG – The proprietary optical media archive format used by Nero applications.
IMG – For archiving DOS formatted floppy disks, larger optical media, and hard disk drives.
ADF – Amiga Disk Format, for archiving Amiga floppy disks
ADZ – The GZip-compressed version of ADF.
DMS – Disk Masher System, a disk-archiving system native to the Amiga.
DSK – For archiving floppy disks from a number of other platforms, including the ZX Spectrum and Amstrad CPC.
D64 – An archive of a Commodore 64 floppy disk.
SDI – System Deployment Image, used for archiving and providing "virtual disk" functionality.
MDS – DAEMON tools native disc image format used for making images from optical CD-ROM, DVD-ROM, HD DVD or Blu-ray Disc. It comes together with MDF file and can be mounted with DAEMON Tools.
MDX – New DAEMON Tools format that allows getting one MDX disc image file instead of two (MDF and MDS).
DMG – Macintosh disk image files
(MPEG-1 is found in a .DAT file on a video CD.)
CDI – DiscJuggler image file
CUE – CDRWrite CUE image file
CIF – Easy CD Creator .cif format
C2D – Roxio-WinOnCD .c2d format
DAA – PowerISO .daa format
B6T – BlindWrite 6 image file
B5T – BlindWrite 5 image file
BWT – BlindWrite 4 image file
FFPPKG - FreeFire Profile Export Package
Computer-aided design
"Computer-aided" is a prefix for several categories of tools (e.g., design, manufacture, engineering) which assist professionals in their respective fields (e.g., machining, architecture, schematics).
Computer-aided design (CAD)
Computer-aided design (CAD) software assists engineers, architects and other design professionals in project design.
3DXML – Dassault Systemes graphic representation
3MF – Microsoft 3D Manufacturing Format
ACP – VA Software VA – Virtual Architecture CAD file
AMF – Additive Manufacturing File Format
AEC – DataCAD drawing format
AR – Ashlar-Vellum Argon – 3D Modeling
ART – ArtCAM model
ASC – BRL-CAD Geometry File (old ASCII format)
ASM – Solidedge Assembly, Pro/ENGINEER Assembly
BIN, BIM – Data Design System DDS-CAD
BREP – Open CASCADE 3D model (shape)
C3D – C3D Toolkit File Format
C3P - Construct3 Files
CCC – CopyCAD Curves
CCM – CopyCAD Model
CCS – CopyCAD Session
CAD – CadStd
CATDrawing – CATIA V5 Drawing document
CATPart – CATIA V5 Part document
CATProduct – CATIA V5 Assembly document
CATProcess – CATIA V5 Manufacturing document
cgr – CATIA V5 graphic representation file
ckd – KeyCreator CAD Modeling
ckt – KeyCreator CAD Modeling
CO – Ashlar-Vellum Cobalt – parametric drafting and 3D modeling
DRW – early version of a Caddie drawing, prior to Caddie changing to DWG
DFT – Solidedge Draft
DGN – MicroStation design file
DGK – Delcam Geometry
DMT – Delcam Machining Triangles
DXF – ASCII Drawing Interchange file format, AutoCAD
DWB – VariCAD drawing file
DWF – Autodesk's Web Design Format; AutoCAD & Revit can publish to this format; similar in concept to PDF files; Autodesk Design Review is the reader
DWG – Popular file format for Computer Aided Drafting applications, notably AutoCAD, Open Design Alliance applications, and Autodesk Inventor Drawing files
EASM – SolidWorks eDrawings assembly file
EDRW – eDrawings drawing file
EMB – Wilcom ES Designer Embroidery CAD file
EPRT – eDrawings part file
EscPcb – "esCAD pcb" data file by Electro-System (Japan)
EscSch – "esCAD sch" data file by Electro-System (Japan)
ESW – AGTEK format
EXCELLON – Excellon file
EXP – Drawing Express format
F3D – Autodesk Fusion 360 archive file
FCStd – Native file format of FreeCAD CAD/CAM package
FM – FeatureCAM Part File
FMZ – FormZ Project file
G – BRL-CAD Geometry File
GBR – Gerber file
GLM – KernelCAD model
GRB – T-FLEX CAD File
GRI – AppliCad GRIM-In file in readable text form, used for importing roof and wall cladding job data generated by business management and accounting systems into the modelling/estimating program
GRO – AppliCad GRIM-Out file in readable text form, used for exporting roof and wall cladding job data (material and labour costing data and material lists) generated by the modelling/estimating program to business management and accounting systems
IAM – Autodesk Inventor Assembly file
ICD – IronCAD 2D CAD file
IDW – Autodesk Inventor Drawing file
IFC – buildingSMART for sharing AEC and FM data
IGES – Initial Graphics Exchange Specification
Intergraph Standard File Formats – Intergraph
IO – Stud.io 3d model
IPN – Autodesk Inventor Presentation file
IPT – Autodesk Inventor Part file
JT – Jupiter Tesselation
MCD – Monu-CAD (Monument/Headstone Drawing file)
MDG – Model of Digital Geometric Kernel
model – CATIA V4 part document
OCD – Orienteering Computer Aided Design (OCAD) file
PAR – Solidedge Part
PIPE – PIPE-FLO Professional Piping system design file
PLN – ArchiCad project
PRT – NX (formerly known as Unigraphics), Pro/ENGINEER Part, CADKEY Part
PSM – Solidedge Sheet
PSMODEL – PowerSHAPE Model
PWI – PowerINSPECT File
PYT – Pythagoras File
SKP – SketchUp Model
RLF – ArtCAM Relief
RVM – AVEVA PDMS 3D Review model
RVT – Autodesk Revit project files
RFA – Autodesk Revit family files
RXF - AppliCad annotated 3D roof and wall geometry data in readable text form used to exchange 3D model geometry with other systems such as truss design software
S12 – Spirit file, by Softtech
SCAD – OpenSCAD 3D part model
SCDOC – SpaceClaim 3D Part/Assembly
SLDASM – SolidWorks Assembly drawing
SLDDRW – SolidWorks 2D drawing
SLDPRT – SolidWorks 3D part model
dotXSI – For Softimage
STEP – Standard for the Exchange of Product model data
STL – Stereo Lithographic data format used by various CAD systems and stereo lithographic printing machines.
STD – Power Vision Plus – Electricity Meter Data (Circutor)
TCT – TurboCAD drawing template
TCW – TurboCAD for Windows 2D and 3D drawing
UNV – I-DEAS (Integrated Design and Engineering Analysis Software)
VC6 – Ashlar-Vellum Graphite – 2D and 3D drafting
VLM – Ashlar-Vellum Vellum, Vellum 2D, Vellum Draft, Vellum 3D, DrawingBoard
VS – Ashlar-Vellum Vellum Solids
WRL – Similar to STL, but includes color. Used by various CAD systems and 3D printing rapid prototyping machines. Also used for VRML models on the web.
X_B – Parasolids binary format
X_T – Parasolids
XE – Ashlar-Vellum Xenon – for associative 3D modeling
ZOFZPROJ – ZofzPCB 3D PCB model, containing mesh, netlist and BOM
Electronic design automation (EDA)
Electronic design automation (EDA), or electronic computer-aided design (ECAD), is specific to the field of electrical engineering.
BRD – Board file for EAGLE Layout Editor, a commercial PCB design tool
BSDL – Description language for testing through JTAG
CDL – Transistor-level netlist format for IC design
CPF – Power-domain specification in system-on-a-chip (SoC) implementation (see also UPF)
DEF – Gate-level layout
DSPF – Detailed Standard Parasitic Format, Analog-level parasitics of interconnections in IC design
EDIF – Vendor neutral gate-level netlist format
FSDB – Analog waveform format (see also Waveform viewer)
GDSII – Format for PCB and layout of integrated circuits
HEX – ASCII-coded binary format for memory dumps
LEF – Library Exchange Format, physical abstract of cells for IC design
LIB – Library modeling (function, timing) format
MS12 – NI Multisim file
OASIS – Open Artwork System Interchange Standard
OpenAccess – Design database format with APIs
PSF – Cadence proprietary format to store simulation results/waveforms (2GB limit)
PSFXL – Cadence proprietary format to store simulation results/waveforms
SDC – Synopsys Design Constraints, format for synthesis constraints
SDF – Standard for gate-level timings
SPEF – Standard format for parasitics of interconnections in IC design
SPI, CIR – SPICE Netlist, device-level netlist and commands for simulation
SREC, S19 – S-record, ASCII-coded format for memory dumps
SST2 – Cadence proprietary format to store mixed-signal simulation results/waveforms
STIL – Standard Test Interface Language, IEEE1450-1999 standard for Test Patterns for IC
SV – SystemVerilog source file
S*P – Touchstone/EEsof Scattering parameter data file – multi-port blackbox performance, measurement or simulated
TLF – Contains timing and logical information about a collection of cells (circuit elements)
UPF – Standard for Power-domain specification in SoC implementation
V – Verilog source file
VCD – Standard format for digital simulation waveform
VHD, VHDL – VHDL source file
WGL – Waveform Generation Language, format for Test Patterns for IC
Test technology
Files output from Automatic Test Equipment or post-processed from such.
Standard Test Data Format
Database
4DB – 4D database Structure file
4DD – 4D database Data file
4DIndy – 4D database Structure Index file
4DIndx – 4D database Data Index file
4DR – 4D database Data resource file (in old 4D versions)
ACCDB – Microsoft Database (Microsoft Office Access 2007 and later)
ACCDE – Compiled Microsoft Database (Microsoft Office Access 2007 and later)
ADT – Sybase Advantage Database Server (ADS)
APR – Lotus Approach data entry & reports
BOX – Lotus Notes Post Office mail routing database
CHML – Krasbit Technologies Encrypted database file for 1 click integration between contact management software and the chameleon(tm) line of imaging workflow solutions
DAF – Digital Anchor data file
DAT – DOS Basic
DAT – Intersystems Caché database file
DB – Paradox
DB – SQLite
DBF – db/dbase II,III,IV and V, Clipper, Harbour/xHarbour, Fox/FoxPro, Oracle
DTA – Sage Sterling database file
EGT – EGT Universal Document, used to compress sql databases to smaller files, may contain original EGT database style.
ESS – EGT SmartSense is a database of files and its compression style. Specific to EGT SmartSense
EAP – Enterprise Architect Project
FDB – Firebird Databases
FDB – Navision database file
FP, FP3, FP5, and FP7 – FileMaker Pro
FRM – MySQL table definition
GDB – Borland InterBase Databases
GTABLE – Google Drive Fusion Table
KEXI – Kexi database file (SQLite-based)
KEXIC – shortcut to a database connection for a Kexi databases on a server
KEXIS – shortcut to a Kexi database
LDB – Temporary database file, only existing when database is open
LIRS – Layered Integer Storage; stores integers with characters such as semicolons to create lists of data
MDA – Add-in file for Microsoft Access
MDB – Microsoft Access database
ADP – Microsoft Access project (used for accessing databases on a server)
MDE – Compiled Microsoft Database (Access)
MDF – Microsoft SQL Server Database
MYD – MySQL MyISAM table data
MYI – MySQL MyISAM table index
NCF – Lotus Notes configuration file
NSF – Lotus Notes database
NTF – Lotus Notes database design template
NV2 – QW Page NewViews object oriented accounting database
ODB – LibreOffice Base or OpenOffice Base database
ORA – Oracle tablespace files sometimes get this extension (also used for configuration files)
PCONTACT – WinIM Contact file
PDB – Palm OS Database
PDI – Portable Database Image
PDX – Corel Paradox database management
PRC – Palm OS resource database
SQL – bundled SQL queries
REC – GNU recutils database
REL – Sage Retrieve 4GL data file
RIN – Sage Retrieve 4GL index file
SDB – StarOffice's StarBase
SDF – SQL Compact Database file
sqlite – SQLite database file (a brief usage sketch follows this list)
UDL – Universal Data Link
waData – Wakanda (software) database Data file
waIndx – Wakanda (software) database Index file
waModel – Wakanda (software) database Model file
waJournal – Wakanda (software) database Journal file
WDB – Microsoft Works Database
WMDB – Windows Media Database file – The CurrentDatabase_360.wmdb file can contain file name, file properties, music, video, photo and playlist information.
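As referenced at the sqlite entry above, a minimal sketch of creating and querying a SQLite database file with Python's standard-library sqlite3 module (the file, table and column names are arbitrary examples):

# Minimal sketch: create and query a SQLite database file.
import sqlite3

con = sqlite3.connect("example.sqlite")     # the file name is arbitrary
con.execute("CREATE TABLE IF NOT EXISTS formats (ext TEXT, description TEXT)")
con.execute("INSERT INTO formats VALUES (?, ?)", (".sqlite", "SQLite database file"))
con.commit()

for row in con.execute("SELECT ext, description FROM formats"):
    print(row)
con.close()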
Big Data (Distributed)
Avro – data format appropriate for ingestion of record-based attributes; its distinguishing characteristic is that the schema is stored alongside the data, enabling schema evolution
Parquet – columnar data storage format, typically used within the Hadoop ecosystem (a brief usage sketch follows this list)
ORC – similar to Parquet, but with better data compression and schema evolution handling
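As referenced at the Parquet entry above, a hedged sketch of writing and reading a Parquet file with pandas (this assumes pandas plus a Parquet engine such as pyarrow or fastparquet is installed; the file and column names are arbitrary examples):

# Sketch: round-trip a small table through a Parquet file.
import pandas as pd

df = pd.DataFrame({"id": [1, 2, 3], "value": [0.1, 0.2, 0.3]})
df.to_parquet("example.parquet")                 # columnar, compressed storage on disk
round_trip = pd.read_parquet("example.parquet")
print(round_trip.equals(df))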
Desktop publishing
AI – Adobe Illustrator
AVE / ZAVE – Aquafadas
CDR – CorelDRAW
CHP / pub / STY / CAP / CIF / VGR / FRM – Ventura Publisher – Xerox (DOS / GEM)
CPT – Corel Photo-Paint
DTP – Greenstreet Publisher, GST PressWorks
FM – Adobe FrameMaker
GDRAW – Google Drive Drawing
ILDOC – Broadvision Quicksilver document
INDD – Adobe InDesign
MCF – FotoInsight Designer
PDF – Adobe Acrobat or Adobe Reader
PMD – Adobe PageMaker
PPP – Serif PagePlus
PSD – Adobe Photoshop
PUB – Microsoft Publisher
QXD – QuarkXPress
SLA / SCD – Scribus
XCF – File format used by the GIMP, as well as other programs
Document
These files store formatted text and plain text.
0 – Plain Text Document, normally used for licensing
1ST – Plain Text Document, normally preceded by the words "README" (README.1ST)
600 – Plain Text Document, used in UNZIP history log
602 – Text602 document
ABW – AbiWord document
ACL – MS Word AutoCorrect List
AFP – Advanced Function Presentation – IBM
AMI – Lotus Ami Pro
Amigaguide
ANS – American National Standards Institute (ANSI) text
ASC – ASCII text
AWW – Ability Write
CCF – Color Chat 1.0
CSV – ASCII text as comma-separated values, used in spreadsheets and database management systems (a brief reading and writing sketch follows this list)
CWK – ClarisWorks-AppleWorks document
DBK – DocBook XML sub-format
DITA – Darwin Information Typing Architecture document
DOC – Microsoft Word document
DOCM – Microsoft Word macro-enabled document
DOCX – Office Open XML document
DOT – Microsoft Word document template
DOTX – Office Open XML text document template
DWD – DavkaWriter Heb/Eng word processor file
EGT – EGT Universal Document
EPUB – EPUB open standard for e-books
EZW – Reagency Systems easyOFFER document
FDX – Final Draft
FTM – Fielded Text Meta
FTX – Fielded Text (Declared)
GDOC – Google Drive Document
HTML – HyperText Markup Language (.html, .htm)
HWP – Haansoft (Hancom) Hangul Word Processor document
HWPML – Haansoft (Hancom) Hangul Word Processor Markup Language document
LOG – Text log file
LWP – Lotus Word Pro
MBP – metadata for Mobipocket documents
MD – Markdown text document
ME – Plain text document normally preceded by the word "READ" (READ.ME)
MCW – Microsoft Word for Macintosh (versions 4.0–5.1)
Mobi – Mobipocket documents
NB – Mathematica Notebook
nb – Nota Bene Document (Academic Writing Software)
NBP – Mathematica Player Notebook
NEIS – Student Record Writing Program (학교생활기록부 작성 프로그램) document
NT – N-Triples RDF container (.nt)
NQ – N-Quads RDF container (.nq)
ODM – OpenDocument master document
ODOC – Synology Drive Office Document
ODT – OpenDocument text document
OSHEET – Synology Drive Office Spreadsheet
OTT – OpenDocument text document template
OMM – OmmWriter text document
PAGES – Apple Pages document
PAP – Papyrus word processor document
PDAX – Portable Document Archive (PDA) document index file
PDF – Portable Document Format
QUOX – Question Object File Format for Quobject Designer or Quobject Explorer
Radix-64
RTF – Rich Text document
RPT – Crystal Reports
SDW – StarWriter text document, used in earlier versions of StarOffice
SE – Shuttle Document
STW – OpenOffice.org XML (obsolete) text document template
Sxw – OpenOffice.org XML (obsolete) text document
TeX – TeX
INFO – Texinfo
Troff
TXT – ASCII or Unicode plain text file
UOF – Uniform Office Format
UOML – Unique Object Markup Language
VIA – Revoware VIA Document Project File
WPD – WordPerfect document
WPS – Microsoft Works document
WPT – Microsoft Works document template
WRD – WordIt! document
WRF – ThinkFree Write
WRI – Microsoft Write document
XHTML (xhtml, xht) – eXtensible HyperText Markup Language
XML – eXtensible Markup Language
XPS – Open XML Paper Specification
Financial records
MYO – MYOB Limited (Windows) File
MYOB – MYOB Limited (Mac) File
TAX – TurboTax File
YNAB – You Need a Budget (YNAB) File
Financial data transfer formats
Interactive Financial Exchange (IFX) – XML-based specification for various forms of financial transactions
Open Financial Exchange (.ofx) – open standard supported by CheckFree and Microsoft and partly by Intuit; SGML and later XML based
QFX – proprietary pay-only format used only by Intuit
Quicken Interchange Format (.qif) – open standard formerly supported by Intuit
Font file
ABF – Adobe Binary Screen Font
AFM – Adobe Font Metrics
BDF – Bitmap Distribution Format
BMF – ByteMap Font Format
BRFNT - Binary Revolution Font Format
FNT – Bitmapped Font – Graphics Environment Manager (GEM)
FON – Bitmapped Font – Microsoft Windows
MGF – MicroGrafx Font
OTF – OpenType Font
PCF – Portable Compiled Format
PostScript Font – Type 1, Type 2
PFA – Printer Font ASCII
PFB – Printer Font Binary – Adobe
PFM – Printer Font Metrics – Adobe
FOND – Font Description resource – Mac OS
SFD – FontForge spline font database Font
SNF – Server Normal Format
TDF – TheDraw Font
TFM – TeX font metric
TTF (.ttf, .ttc) – TrueType Font
UFO – Unified Font Object is a cross-platform, cross-application, human-readable, future-proof format for storing font data.
WOFF – Web Open Font Format
Geographic information system
ASC – ASCII point of interest (POI) text file
APR – ESRI ArcView 3.3 and earlier project file
DEM – USGS DEM file format
E00 – ARC/INFO interchange file format
GeoJSON – Geographically located data in object notation
GeoTIFF – Geographically located raster data
GML – Geography Markup Language file
GPX – XML-based interchange format
ITN – TomTom Itinerary format
MXD – ESRI ArcGIS project file, 8.0 and higher
NTF – National Transfer Format file
OV2 – TomTom POI overlay file
SHP – ESRI shapefile
TAB – MapInfo Table file format
World TIFF – Geographically located raster data: text file giving corner coordinates, raster cells per unit, and rotation
DTED – Digital Terrain Elevation Data
KML – Keyhole Markup Language, XML-based
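As a small illustration of the object-notation entries above (GeoJSON in particular), the following sketch builds and saves a minimal GeoJSON point feature with Python's standard json module; the coordinates, property values, and file name are made-up examples.

# Minimal GeoJSON sketch; coordinate and property values are illustrative.
import json

feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [12.4924, 41.8902]},  # longitude, latitude
    "properties": {"name": "Example point of interest"},
}
with open("example.geojson", "w") as f:
    json.dump(feature, f, indent=2)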
Graphical information organizers
3DT – 3D Topicscape, the database in which the meta-data of a 3D Topicscape is held; it is a form of 3D concept map (like a 3D mind-map) used to organize ideas, information, and computer files
ATY – 3D Topicscape file, produced when an association type is exported; used to permit round-trip (export Topicscape, change files and folders as desired, re-import to 3D Topicscape)
CAG – Linear Reference System
FES – 3D Topicscape file, produced when a fileless occurrence in 3D Topicscape is exported to Windows. Used to permit round-trip (export Topicscape, change files and folders as desired, re-import them to 3D Topicscape)
MGMF – MindGenius Mind Mapping Software file format
MM – FreeMind mind map file (XML)
MMP – Mind Manager mind map file
TPC – 3D Topicscape file, produced when an inter-Topicscape topic link file is exported to Windows; used to permit round-trip (export Topicscape, change files and folders as desired, re-import to 3D Topicscape)
Graphics
Color palettes
ACT – Adobe Color Table. Contains a raw color palette and consists of 256 24-bit RGB colour values.
ASE – Adobe Swatch Exchange. Used by Adobe Photoshop, Illustrator, and InDesign.
GPL – GIMP palette file. Uses a text representation of color names and RGB values. Various open source graphical editors can read this format, including GIMP, Inkscape, Krita, KolourPaint, Scribus, CinePaint, and MyPaint.
PAL – Microsoft RIFF palette file
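To illustrate the text-based palette formats above, the following sketch writes a small GIMP-style .gpl palette from Python; the exact header lines shown are an assumption based on the usual layout of such files, not a normative specification.

# Minimal GIMP-style palette sketch; header layout is an assumption, values are illustrative.
palette = """GIMP Palette
Name: Example
Columns: 3
# R   G   B  name
255   0   0  Red
  0 255   0  Green
  0   0 255  Blue
"""
with open("example.gpl", "w") as f:
    f.write(palette)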
Color management
ICC/ICM – Color profile conforming the specification of the ICC.
Raster graphics
Raster or bitmap files store images as a group of pixels.
ART – America Online proprietary format
BLP – Blizzard Entertainment proprietary texture format
BMP – Microsoft Windows Bitmap formatted image
BTI – Nintendo proprietary texture format
CD5 – Chasys Draw IES image
CIT – Intergraph monochrome bitmap format
CPT – Corel PHOTO-PAINT image
CR2 – Canon camera raw format; photos have this on some Canon cameras if the quality RAW is selected in camera settings
CLIP – CLIP STUDIO PAINT format
CPL – Windows control panel file
DDS – DirectX texture file
DIB – Device-Independent Bitmap graphic
DjVu – DjVu for scanned documents
EGT – EGT Universal Document, used in EGT SmartSense to compress PNG files to yet a smaller file
Exif – Exchangeable image file format (Exif) is a specification for the image format used by digital cameras
GIF – CompuServe's Graphics Interchange Format
GRF – Zebra Technologies proprietary format
ICNS – format for icons in macOS. Contains bitmap images at multiple resolutions and bitdepths with alpha channel.
ICO – a format used for icons in Microsoft Windows. Contains small bitmap images at multiple resolutions and bitdepths with 1-bit transparency or alpha channel.
IFF (.iff, .ilbm, .lbm) – ILBM
JNG – a single-frame MNG using JPEG compression and possibly an alpha channel
JPEG, JFIF (.jpg or .jpeg) – Joint Photographic Experts Group; a lossy image format widely used to display photographic images
JP2 – JPEG2000
JPS – JPEG Stereo
KRA – Krita image file
LBM – Deluxe Paint image file
MAX – ScanSoft PaperPort document
MIFF – ImageMagick's native file format
MNG – Multiple-image Network Graphics, the animated version of PNG
MSP – a format used by old versions of Microsoft Paint; replaced by BMP in Microsoft Windows 3.0
NITF – A U.S. Government standard commonly used in Intelligence systems
OTB – Over The Air bitmap, a specification designed by Nokia for black and white images for mobile phones
PBM – Portable bitmap
PC1 – Low resolution, compressed Degas picture file
PC2 – Medium resolution, compressed Degas picture file
PC3 – High resolution, compressed Degas picture file
PCF – Pixel Coordination Format
PCX – a lossless format used by ZSoft's PC Paintbrush, popular for a time on DOS systems.
PDN – Paint.NET image file
PGM – Portable graymap
PI1 – Low resolution, uncompressed Degas picture file
PI2 – Medium resolution, uncompressed Degas picture file; also Portrait Innovations encrypted image format
PI3 – High resolution, uncompressed Degas picture file
PICT, PCT – Apple Macintosh PICT image
PNG – Portable Network Graphics (lossless, recommended for display and editing of graphic images)
PNM – Portable anymap graphic bitmap image
PNS – PNG Stereo
PPM – Portable Pixmap (Pixel Map) image
PSB – Adobe Photoshop Big image file (for large files)
PSD, PDD – Adobe Photoshop Drawing
PSP – Paint Shop Pro image
PX – Pixel image editor image file
PXM – Pixelmator image file
PXR – Pixar Image Computer image file
QFX – QuickLink Fax image
RAW – General term for minimally processed image data (acquired by a digital camera)
RLE – a run-length encoding image
SCT – Scitex Continuous Tone image file
SGI, RGB, INT, BW – Silicon Graphics Image
TGA (.tga, .targa, .icb, .vda, .vst, .pix) – Truevision TGA (Targa) image
TIFF (.tif or .tiff) – Tagged Image File Format (usually lossless, but many variants exist, including lossy ones)
TIFF/EP (.tif or .tiff) – Tag Image File Format / Electronic Photography, ISO 12234-2; tends to be used as a basis for other formats rather than in its own right.
VTF – Valve Texture Format
XBM – X Window System Bitmap
XCF – GIMP image (from Gimp's origin at the eXperimental Computing Facility of the University of California)
XPM – X Window System Pixmap
ZIF – Zoomable/Zoomify Image Format (a web-friendly, TIFF-based, zoomable image format)
Vector graphics
Vector graphics use geometric primitives such as points, lines, curves, and polygons to represent images.
3DV – 3-D wireframe graphics by Oscar Garcia
AMF – Additive Manufacturing File Format
AWG – Ability Draw
AI – Adobe Illustrator Document
CGM – Computer Graphics Metafile, an ISO Standard
CDR – CorelDRAW Document
CMX – CorelDRAW vector image
DP – Drawing Program file for PERQ
DRAWIO – Diagrams.net offline diagram
DXF – ASCII Drawing Interchange file Format, used in AutoCAD and other CAD-programs
E2D – 2-dimensional vector graphics used by the editor which is included in JFire
EGT – EGT Universal Document, EGT Vector Draw images are used to draw vector to a website
EPS – Encapsulated PostScript
FS – FlexiPro file
GBR – Gerber file
ODG – OpenDocument Drawing
MOVIE.BYU
RenderMan
SVG – Scalable Vector Graphics, employs XML
Scene description languages (3D vector image formats)
STL – Stereolithography data format (see STL (file format)) used by various CAD systems and stereolithographic printing machines
VRML (.wrl) – Virtual Reality Modeling Language, for the creation of 3D viewable web images
X3D
SXD – OpenOffice.org XML (obsolete) Drawing
TGAX - Texture format used by Zwift
V2D – voucher design used by the voucher management included in JFire
VDOC – Vector format used in AnyCut, CutStorm, DrawCut, DragonCut, FutureDRAW, MasterCut, SignMaster, VinylMaster software by Future Corporation
VSD – Vector format used by Microsoft Visio
VSDX – Vector format used by MS Visio and opened by VSDX Annotator
VND – Vision numeric Drawing file used in TypeEdit, Gravostyle.
WMF – Windows Metafile
EMF – Enhanced (Windows) MetaFile, an extension to WMF
ART – Xara – Drawing (superseded by XAR)
XAR – Xara – Drawing
3D graphics
3D graphics file formats store 3D models, which can be used for real-time or non-real-time (offline) 3D rendering.
3DMF – QuickDraw 3D Metafile (.3dmf)
3DM – OpenNURBS Initiative 3D Model (used by Rhinoceros 3D) (.3dm)
3MF – Microsoft 3D Manufacturing Format (.3mf)
3DS – legacy 3D Studio Model (.3ds)
ABC – Alembic (computer graphics)
AC – AC3D Model (.ac)
AMF – Additive Manufacturing File Format
AN8 – Anim8or Model (.an8)
AOI – Art of Illusion Model (.aoi)
ASM – PTC Creo assembly (.asm)
B3D – Blitz3D Model (.b3d)
BLEND – Blender (.blend)
BLOCK – Blender encrypted blend files (.block)
BMD3 – Nintendo GameCube first-party J3D proprietary model format (.bmd)
BDL4 – Nintendo GameCube and Wii first-party J3D proprietary model format (2002, 2006–2010) (.bdl)
BRRES – Nintendo Wii first-party proprietary model format 2010+ (.brres)
BFRES – Nintendo Wii U and later Switch first-party proprietary model format
C4D – Cinema 4D (.c4d)
Cal3D – Cal3D (.cal3d)
CCP4 – X-ray crystallography voxels (electron density)
CFL – Compressed File Library (.cfl)
COB – Caligari Object (.cob)
CORE3D – Coreona 3D Virtual File (.core3d)
CTM – OpenCTM (.ctm)
DAE – COLLADA (.dae)
DFF – RenderWare binary stream, commonly used by Grand Theft Auto III-era games as well as other RenderWare titles
DPM – deepMesh (.dpm)
DTS – Torque Game Engine (.dts)
EGG – Panda3D Engine
FACT – Electric Image (.fac)
FBX – Autodesk FBX (.fbx)
G – BRL-CAD geometry (.g)
GLB – a binary form of glTF required to be loaded in Facebook 3D Posts. (.glb)
GLM – Ghoul Mesh (.glm)
glTF – the JSON-based standard developed by Khronos Group (.gltf)
IO - Bricklink Stud.io 2.0 Model File (.io)
IOB – Imagine (3D modeling software) (.iob)
JAS – Cheetah 3D file (.jas)
JMESH - Universal mesh data exchange file based on JMesh specification (.jmsh for text/JSON based, .bmsh for binary/UBJSON based)
LDR - LDraw Model File (.ldr)
LWO – Lightwave Object (.lwo)
LWS – Lightwave Scene (.lws)
LXF – LEGO Digital Designer Model file (.lxf)
LXO – Luxology Modo (software) file (.lxo)
M3D – Model3D, universal, engine-neutral format (.m3d)
MA – Autodesk Maya ASCII File (.ma)
MAX – Autodesk 3D Studio Max file (.max)
MB – Autodesk Maya Binary File (.mb)
MPD - LDraw Multi-Part Document Model File (.mpd)
MD2 – Quake 2 model format (.md2)
MD3 – Quake 3 model format (.md3)
MD5 – Doom 3 model format (.md5)
MDX – Blizzard Entertainment's own model format (.mdx)
MESH – New York University (.m)
MESH – Meshwork Model (.mesh)
MM3D – Misfit Model 3d (.mm3d)
MPO – Multi-Picture Object – This JPEG standard is used for 3d images, as with the Nintendo 3DS
MRC – voxels in cryo-electron microscopy
NIF – Gamebryo NetImmerse File (.nif)
OBJ – Wavefront .obj file (.obj)
OFF – OFF Object file format (.off)
OGEX – Open Game Engine Exchange (OpenGEX) format (.ogex)
PLY – Polygon File Format / Stanford Triangle Format (.ply)
PRC – Adobe PRC (embedded in PDF files)
PRT – PTC Creo part (.prt)
POV – POV-Ray document (.pov)
R3D – Realsoft 3D (Real-3D) (.r3d)
RWX – RenderWare Object (.rwx)
SIA – Nevercenter Silo Object (.sia)
SIB – Nevercenter Silo Object (.sib)
SKP – Google Sketchup file (.skp)
SLDASM – SolidWorks Assembly Document (.sldasm)
SLDPRT – SolidWorks Part Document (.sldprt)
SMD – Valve Studiomdl Data format (.smd)
U3D – Universal 3D format (.u3d)
USD – Universal Scene Description (.usd)
USDA – Universal Scene Description, human-readable text format (.usda)
USDC – Universal Scene Description, binary format (.usdc)
USDZ – Universal Scene Description Zip (.usdz)
VIM – Revizto visual information model format (.vimproj)
VRML97 – VRML Virtual reality modeling language (.wrl)
VUE – Vue scene file (.vue)
VWX – Vectorworks (.vwx)
WINGS – Wings3D (.wings)
W3D – Westwood 3D Model (.w3d)
X – DirectX 3D Model (.x)
X3D – Extensible 3D (.x3d)
Z3D – Zmodeler (.z3d)
ZBMX - Mecabricks Blender Add-On (.zbmx)
Links and shortcuts
Alias (Mac OS)
JNLP – Java Network Launching Protocol, an XML file used by Java Web Start for launching Java applications over the Internet
LNK – binary-format file shortcut in Microsoft Windows 95 and later
APPREF-MS – File shortcut format used by ClickOnce
NAL – ZENworks Instant shortcut (opens an .EXE that is not located on the local C: drive)
URL – INI file pointing to a URL bookmarks/Internet shortcut in Microsoft Windows
WEBLOC – Property list file pointing to a URL bookmarks/Internet shortcut in macOS
SYM – Symbolic link
.desktop – Desktop entry on Linux Desktop environments
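As a small illustration of the shortcut formats above, the following sketch writes a Windows-style .url Internet shortcut (an INI-like file) and a minimal Linux .desktop entry from Python; the target URL, application name, and command are illustrative assumptions.

# Minimal shortcut-file sketches; the URL and command shown are illustrative.
url_shortcut = "[InternetShortcut]\nURL=https://example.org/\n"
with open("example.url", "w") as f:
    f.write(url_shortcut)

desktop_entry = (
    "[Desktop Entry]\n"
    "Type=Application\n"
    "Name=Example App\n"
    "Exec=example-app\n"
)
with open("example.desktop", "w") as f:
    f.write(desktop_entry)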
Mathematical
Harwell-Boeing file format – a format designed to store sparse matrices
MML – MathML – Mathematical Markup Language
ODF – OpenDocument Math Formula
SXM – OpenOffice.org XML (obsolete) Math Formula
Object code, executable files, shared and dynamically linked libraries
.8BF files – plugins for some photo editing programs including Adobe Photoshop, Paint Shop Pro, GIMP and Helicon Filter.
.a – static library archive on Unix-like systems (used for C, Objective-C, and other compiled languages)
a.out – (no suffix for executable image, .o for object files, .so for shared object files) classic UNIX object format, now often superseded by ELF
APK – Android Application Package
APP – A folder found on macOS systems containing program code and resources, appearing as one file.
BAC – an executable image for the RSTS/E system, created using the BASIC-PLUS COMPILE command
BPL – a Win32 PE file created with Borland Delphi or C++Builder containing a package.
Bundle – a Macintosh plugin created with Xcode or make which holds executable code, data files, and folders for that code.
.Class – used in Java
COFF (no suffix for executable image, .o for object files) – UNIX Common Object File Format, now often superseded by ELF
COM files – commands used in DOS and CP/M
DCU – Delphi compiled unit
DLL – library used in Windows and OS/2 to store data, resources and code.
DOL – the format used by the GameCube and Wii, short for Dolphin, which was the codename of the GameCube.
.EAR – archives of Java enterprise applications
ELF – (no suffix for executable image, .o for object files, .so for shared object files) used in many modern Unix and Unix-like systems, including Solaris, other System V Release 4 derivatives, Linux, and BSD)
expander (see bundle)
DOS executable (.exe – used in DOS)
.IPA – Apple iOS application package; another form of ZIP file.
JEFF – a file format allowing execution directly from static memory
.JAR – archives of Java class files
.XPI – PKZIP archive that can be run by Mozilla web browsers to install software.
Mach-O – (no suffix for executable image, .o for object files, .dylib and .bundle for shared object files) Mach-based systems, notably native format of macOS, iOS, watchOS, and tvOS
NetWare Loadable Module (.NLM) – the native 32-bit binaries compiled for Novell's NetWare Operating System (versions 3 and newer)
New Executable (.EXE – used in multitasking ("European") MS-DOS 4.0, 16-bit Microsoft Windows, and OS/2)
.o – un-linked object files directly from the compiler
Portable Executable (.EXE, – used in Microsoft Windows and some other systems)
Preferred Executable Format – (classic Mac OS for PowerPC applications; compatible with macOS via a classic (Mac OS X) emulator)
RLL – used in Microsoft operating systems together with a DLL file to store program resources
.s1es – Executable used for S1ES learning system.
.so – shared library, typically ELF
Value Added Process (.VAP) – the native 16-bit binaries compiled for Novell's NetWare Operating System (version 2, NetWare 286, Advanced NetWare, etc.)
.WAR – archives of Java Web applications
XBE – Xbox executable
.XAP – Windows Phone package
XCOFF – (no suffix for executable image, .o for object files, .a for shared object files) extended COFF, used in AIX
XEX – Xbox 360 executable
LIST – variable list
Object extensions
.VBX – Visual Basic extensions
.OCX – Object Control extensions
.TLB – Windows Type Library
Page description language
DVI – Device independent format
EGT – EGT Universal Document; can be used to store CSS-type styles (*.egt)
PLD
PCL – Printer Command Language (Hewlett-Packard)
PDF – Portable Document Format
PostScript (.ps, .ps.gz)
SNP – Microsoft Access Report Snapshot
XPS
XSL-FO (Formatting Objects)
Configurations, Metadata
CSS – Cascading Style Sheets
XSLT, XSL – XML Style Sheet (.xslt, .xsl)
TPL – Web template (.tpl)
Personal information manager
MSG – Microsoft Outlook message/item file
ORG – Lotus Organizer PIM package
ORG – Emacs Org mode file (outlines/mind-mapping, contacts, calendar, email integration)
PST, OST – Microsoft Outlook email communication
SC2 – Microsoft Schedule+ calendar
Presentation
GSLIDES – Google Drive Presentation
KEY, KEYNOTE – Apple Keynote Presentation
NB – Mathematica Slideshow
NBP – Mathematica Player slideshow
ODP – OpenDocument Presentation
OTP – OpenDocument Presentation template
PEZ – Prezi Desktop Presentation
POT – Microsoft PowerPoint template
PPS – Microsoft PowerPoint Show
PPT – Microsoft PowerPoint Presentation
PPTX – Office Open XML Presentation
PRZ – Lotus Freelance Graphics
SDD – StarOffice's StarImpress
SHF – ThinkFree Show
SHOW – Haansoft (Hancom) Presentation software document
SHW – Corel Presentations slide show creation
SLP – Logix-4D Manager Show Control Project
SSPSS – SongShow Plus Slide Show
STI – OpenOffice.org XML (obsolete) Presentation template
SXI – OpenOffice.org XML (obsolete) Presentation
THMX – Microsoft PowerPoint theme template
WATCH – Dataton Watchout Presentation
Project management software
MPP – Microsoft Project
Reference management software
Formats of files used for bibliographic information (citation) management.
bib – BibTeX
enl – EndNote
ris – Research Information Systems RIS (file format)
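All three of the formats above are plain-text citation records; as one example, the following sketch writes a minimal BibTeX (.bib) entry from Python, with a made-up citation key and field values.

# Minimal BibTeX sketch; the entry contents are illustrative.
entry = (
    "@article{doe2021example,\n"
    "  author  = {Doe, Jane},\n"
    "  title   = {An Example Article},\n"
    "  journal = {Journal of Examples},\n"
    "  year    = {2021},\n"
    "}\n"
)
with open("references.bib", "w") as f:
    f.write(entry)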
Scientific data (data exchange)
FITS (Flexible Image Transport System) – standard data format for astronomy (.fits)
Silo – a storage format for visualization developed at Lawrence Livermore National Laboratory
SPC – spectroscopic data
EAS3 – binary format for structured data
EOSSA – Electro-Optic Space Situational Awareness format
OST (Open Spatio-Temporal) – extensible, mainly images with related data, or just pure data; meant as an open alternative for microscope images
CCP4 – X-ray crystallography voxels (electron density)
MRC – voxels in cryo-electron microscopy
HITRAN – spectroscopic data with one optical/infrared transition per line in the ASCII file (.hit)
.root – hierarchical platform-independent compressed binary format used by ROOT
Simple Data Format (SDF) – a platform-independent, precision-preserving binary data I/O format capable of handling large, multi-dimensional arrays.
MYD – Everfine LEDSpec software file for LED measurements
CSDM (Core Scientific Dataset Model) – model for multi-dimensional and correlated datasets from various spectroscopies, diffraction, microscopy, and imaging techniques (.csdf, .csdfe).
Multi-domain
NetCDF – Network common data format
HDR, HDF, h4 or h5 – Hierarchical Data Format
SDXF – (Structured Data Exchange Format)
CDF – Common Data Format
CGNS – CFD General Notation System
FMF – Full-Metadata Format
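As a rough illustration of these hierarchical, multi-domain containers, the following sketch stores and reads a small array in an HDF5 file; it assumes the h5py and numpy packages are available, and the dataset name and values are illustrative.

# Minimal HDF5 sketch (assumption: h5py and numpy are installed).
import h5py
import numpy as np

with h5py.File("example.h5", "w") as f:
    f.create_dataset("temperature", data=np.array([20.5, 21.0, 19.8]))

with h5py.File("example.h5", "r") as f:
    print(f["temperature"][:])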
Meteorology
GRIB – Grid in Binary, WMO format for weather model data
BUFR – WMO format for weather observation data
PP – UK Met Office format for weather model data
NASA-Ames – Simple text format for observation data. First used in aircraft studies of the atmosphere.
Chemistry
CML – Chemical Markup Language (CML) (.cml)
Chemical table file (CTab) (.mol, .sd, .sdf)
Joint Committee on Atomic and Molecular Physical Data (JCAMP) (.dx, .jdx)
Simplified molecular input line entry specification (SMILES) (.smi)
Mathematics
graph6, sparse6 – ASCII encoding of Adjacency matrices (.g6, .s6)
Biology
Molecular biology and bioinformatics:
AB1 – In DNA sequencing, chromatogram files used by instruments from Applied Biosystems
ACE – A sequence assembly format
ASN.1 – Abstract Syntax Notation One, an International Organization for Standardization (ISO) data representation format used to achieve interoperability between platforms. NCBI uses ASN.1 for the storage and retrieval of data such as nucleotide and protein sequences, structures, genomes, and PubMed records.
BAM – Binary Alignment/Map format (compressed SAM format)
BCF – Binary compressed VCF format
BED – The browser extensible display format is used for describing genes and other features of DNA sequences
CAF – Common Assembly Format for sequence assembly
CRAM – compressed file format for storing biological sequences aligned to a reference sequence
DDBJ – The flatfile format used by the DDBJ to represent database records for nucleotide and peptide sequences from DDBJ databases.
EMBL – The flatfile format used by the EMBL to represent database records for nucleotide and peptide sequences from EMBL databases.
FASTA – The FASTA format, for sequence data. Sometimes also given as FNA or FAA (Fasta Nucleic Acid or Fasta Amino Acid).
FASTQ – The FASTQ format, for sequence data with quality. Sometimes also given as QUAL.
GCPROJ – The Genome Compiler project. Advanced format for genetic data to be designed, shared and visualized.
GenBank – The flatfile format used by the NCBI to represent database records for nucleotide and peptide sequences from the GenBank and RefSeq databases
GFF – The General feature format is used to describe genes and other features of DNA, RNA, and protein sequences
GTF – The Gene transfer format is used to hold information about gene structure
MAF – The Multiple Alignment Format stores multiple alignments for whole-genome to whole-genome comparisons
NCBI ASN.1 – Structured ASN.1 format used at National Center for Biotechnology Information for DNA and protein data
NEXUS – The Nexus file encodes mixed information about genetic sequence data in a block structured format
NeXML – XML format for phylogenetic trees
NWK – The Newick tree format is a way of representing graph-theoretical trees with edge lengths using parentheses and commas and useful to hold phylogenetic trees.
PDB – structures of biomolecules deposited in Protein Data Bank, also used to exchange protein and nucleic acid structures
PHD – Phred output, from the base-calling software Phred
PLN – Protein Line Notation used in proteax software specification
SAM – Sequence Alignment Map format, in which the results of the 1000 Genomes Project will be released
SBML – The Systems Biology Markup Language is used to store biochemical network computational models
SCF – Staden chromatogram files used to store data from DNA sequencing
SFF – Standard Flowgram Format
SRA – format used by the National Center for Biotechnology Information Short Read Archive to store high-throughput DNA sequence data
Stockholm – The Stockholm format for representing multiple sequence alignments
Swiss-Prot – The flatfile format used to represent database records for protein sequences from the Swiss-Prot database
VCF – Variant Call Format, a standard created by the 1000 Genomes Project that lists and annotates the entire collection of human variants (with the exception of approximately 1.6 million variants).
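Several of the sequence formats above are simple line-oriented text files; as one example, the following sketch parses a FASTA file (header lines begin with ">", followed by sequence lines) using only the Python standard library. The file name and record contents are illustrative.

# Minimal FASTA parsing sketch; file name and contents are illustrative.
def read_fasta(path):
    records = {}
    name = None
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if not line:
                continue
            if line.startswith(">"):
                name = line[1:].split()[0]   # sequence identifier after '>'
                records[name] = []
            elif name is not None:
                records[name].append(line)
    # Join the per-record sequence lines into single strings.
    return {k: "".join(v) for k, v in records.items()}

# Example usage with a small generated file:
with open("example.fasta", "w") as f:
    f.write(">seq1 demo record\nACGTACGT\nACGT\n>seq2\nTTTT\n")
print(read_fasta("example.fasta"))   # {'seq1': 'ACGTACGTACGT', 'seq2': 'TTTT'}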
Biomedical imaging
Digital Imaging and Communications in Medicine (DICOM) (.dcm)
Neuroimaging Informatics Technology Initiative (NIfTI)
.nii – single-file (combined data and meta-data) style
.nii.gz – gzip-compressed, used transparently by some software, notably the FMRIB Software Library (FSL)
.gii – single-file (combined data and meta-data) style; NIfTI offspring for brain surface data
.img,.hdr – dual-file (separate data and meta-data, respectively) style
AFNI data, meta-data (.BRIK,.HEAD)
Massachusetts General Hospital imaging format, used by the FreeSurfer brain analysis package
.MGH – uncompressed
.MGZ – zip-compressed
Analyze data, meta-data (.img,.hdr)
Medical Imaging NetCDF (MINC) format, previously based on NetCDF; since version 2.0, based on HDF5 (.mnc)
Biomedical signals (time series)
ACQ – AcqKnowledge format for Windows/PC from Biopac Systems Inc., Goleta, CA, USA
ADICHT – LabChart format from ADInstruments Pty Ltd, Bella Vista NSW, Australia
BCI2000 – The BCI2000 project, Albany, NY, USA
BDF – BioSemi data format from BioSemi B.V. Amsterdam, Netherlands
BKR – The EEG data format developed at the University of Technology Graz, Austria
CFWB – Chart Data Format from ADInstruments Pty Ltd, Bella Vista NSW, Australia
DICOM Waveform – an extension of DICOM for storing waveform data
ecgML – A markup language for electrocardiogram data acquisition and analysis
EDF/EDF+ – European Data Format
FEF – File Exchange Format for Vital signs, CEN TS 14271
GDF v1.x – The General Data Format for biomedical signals, version 1.x
GDF v2.x – The General Data Format for biomedical signals, version 2.x
HL7aECG – Health Level 7 v3 annotated ECG
MFER – Medical waveform Format Encoding Rules
OpenXDF – Open Exchange Data Format from Neurotronics, Inc., Gainesville, FL, USA
SCP-ECG – Standard Communication Protocol for Computer assisted electrocardiography EN1064:2007
SIGIF – A digital SIGnal Interchange Format with application in neurophysiology
WFDB – Format of Physiobank
XDF – eXtensible Data Format
Other biomedical formats
Health Level 7 (HL7) – a framework for exchange, integration, sharing, and retrieval of health information electronically
xDT – a family of data exchange formats for medical records
Biometric formats
CBF – Common Biometric Format, based on CBEFF 2.0 (Common Biometric Exchange Formats Framework).
EBF – Extended Biometric Format, based on CBF but with S/MIME encryption support and semantic extensions
CBFX – XML Common Biometric Format, based upon XCBF 1.1 (OASIS XML Common Biometric Format)
EBFX – XML Extended Biometric Format, based on CBFX but with W3C XML Encryption support and semantic extensions
Scripts
ADB – Ada body
ADS – Ada specification
AHK – AutoHotkey script file
APPLESCRIPT – AppleScript (see SCPT)
AS – Adobe Flash ActionScript File
AU3 – AutoIt version 3
BAT – Batch file
BAS – QBasic & QuickBASIC
BTM – Batch file
CLASS – Compiled Java binary
CLJS – ClojureScript
CMD – Batch file
Coffee – CoffeeScript
C – C
CPP – C++
CS - C#
INO – Arduino sketch (program)
EGG – Chicken
EGT – EGT Asterisk Application Source File, EGT Universal Document
ERB – Embedded Ruby, Ruby on Rails Script File
GO – Go
HTA – HTML Application
IBI – Icarus script
ICI – ICI
IJS – J script
.ipynb – IPython Notebook
ITCL – Itcl
JS – JavaScript and JScript
JSFL – Adobe JavaScript language
.kt - Kotlin
LUA – Lua
M – Mathematica package file
MRC – mIRC Script
NCF – NetWare Command File (scripting for Novell's NetWare OS)
NUC – compiled script
NUD – External module written in C++
NUT – Squirrel
O – Compiled and optimized C/C++ binary
pde – Processing (programming language), Processing script
PHP – PHP
PHP? – PHP (? = version number)
PL – Perl
PM – Perl module
PS1 – Windows PowerShell shell script
PS1XML – Windows PowerShell format and type definitions
PSC1 – Windows PowerShell console file
PSD1 – Windows PowerShell data file
PSM1 – Windows PowerShell module file
PY – Python
PYC – Python byte code files
PYO – Python optimized byte code file
R – R scripts
r – REBOL scripts
RB – Ruby
RDP – RDP connection
red – Red scripts
RS – Rust (programming language)
SB2/SB3 – Scratch
SCPT – AppleScript
SCPTD – See SCPT.
SDL – State Description Language
SH – Shell script
SYJS – SyMAT JavaScript
SYPY – SyMAT Python
TCL – Tcl
TNS – TI-Nspire code/file
TS – TypeScript
VBS – Visual Basic Script
XPL – XProc script/pipeline
ebuild – Gentoo Linux's portage package.
Security
Authentication and general encryption formats are listed here.
OpenPGP Message Format – used by Pretty Good Privacy, GNU Privacy Guard, and other OpenPGP software; can contain keys, signed data, or encrypted data; can be binary or text ("ASCII armored")
Certificates and keys
GXK – Galaxkey, an encryption platform for authorized, private and confidential email communication
OpenSSH private key (.ssh) – Secure Shell private key; format generated by ssh-keygen or converted from PPK with PuTTYgen
OpenSSH public key (.pub) – Secure Shell public key; format generated by ssh-keygen or PuTTYgen
PuTTY private key (.ppk) – Secure Shell private key, in the format generated by PuTTYgen instead of the format used by OpenSSH
nSign public key (.nSign) - nSign public key in a custom format
X.509
Distinguished Encoding Rules (.cer, .crt, .der) – stores certificates
PKCS#7 SignedData (.p7b, .p7c) – commonly appears without main data, just certificates or certificate revocation lists (CRLs)
PKCS#12 (.p12, .pfx) – can store public certificates and private keys
PEM – Privacy-enhanced Electronic Mail: full format not widely used, but often used to store Distinguished Encoding Rules in Base64 format
PFX – Microsoft predecessor of PKCS#12
Encrypted files
This section shows file formats for encrypted general data, rather than a specific program's data.
AXX – Encrypted file, created with AxCrypt
EEA – An encrypted CAB, ostensibly for protecting email attachments
TC – Virtual encrypted disk container, created by TrueCrypt
KODE – Encrypted file, created with KodeFile
nSignE - An encrypted private key, created by nSign
Password files
Password files (sometimes called keychain files) contain lists of other passwords, usually encrypted.
BPW – Encrypted password file created by Bitser password manager
KDB – KeePass 1 database
KDBX – KeePass 2 database
Signal data (non-audio)
ACQ – AcqKnowledge format for Windows/PC from Biopac
ADICHT – LabChart format from ADInstruments
BKR – The EEG data format developed at the University of Technology Graz
BDF, CFG – Configuration file for Comtrade data
CFWB – Chart Data format from ADInstruments
DAT – Raw data file for Comtrade data
EDF – European data format
FEF – File Exchange Format for Vital signs
GDF – General data formats for biomedical signals
GMS – Gesture And Motion Signal format
IROCK – intelliRock Sensor Data File Format
MFER – Medical waveform Format Encoding Rules
SAC – Seismic Analysis Code, earthquake seismology data format
SCP-ECG – Standard Communication Protocol for Computer assisted electrocardiography
SEED, MSEED – Standard for the Exchange of Earthquake Data, seismological data and sensor metadata
SEGY – Reflection seismology data format
SIGIF – SIGnal Interchange Format
WIN, WIN32 – NIED/ERI seismic data format (.cnt)
Sound and music
Lossless audio
Uncompressed
8SVX – Commodore-Amiga 8-bit sound (usually in an IFF container)
16SVX – Commodore-Amiga 16-bit sound (usually in an IFF container)
AIFF, AIF, AIFC – Audio Interchange File Format
AU – Simple audio file format introduced by Sun Microsystems
BWF – Broadcast Wave Format, an extension of WAVE
CDDA – Compact Disc Digital Audio
DSF, DFF – Direct Stream Digital audio file, also used in Super Audio CD
RAW – Raw samples without any header or sync
WAV – Microsoft Wave
Compressed
RA, RM – RealAudio format
FLAC – Free lossless codec of the Ogg project
LA – Lossless audio
PAC – LPAC
APE – Monkey's Audio
OFR, OFS, OFF – OptimFROG
RKA – RKAU
SHN – Shorten
TAK – Tom's Lossless Audio Kompressor
THD – Dolby TrueHD
TTA – Free lossless audio codec (True Audio)
WV – WavPack
WMA – Windows Media Audio 9 Lossless
BRSTM – Binary Revolution Stream
DTS, DTSHD, DTSMA – DTS (sound system)
AST – Nintendo Audio Stream
AW – Nintendo Audio Sample used in first-party games
PSF – Portable Sound Format, PlayStation variant (originally PlayStation Sound Format)
Lossy audio
AC3 – Usually used for Dolby Digital tracks
AMR – For GSM and UMTS based mobile phones
MP1 – MPEG Layer 1
MP2 – MPEG Layer 2
MP3 – MPEG Layer 3
SPX – Speex (Ogg project, specialized for voice, low bitrates)
GSM – GSM Full Rate, originally developed for use in mobile phones
WMA – Windows Media Audio
AAC – Advanced Audio Coding (usually in an MPEG-4 container)
MPC – Musepack
VQF – Yamaha TwinVQ
OTS – Audio File (similar to MP3, with more data stored in the file and slightly better compression; designed for use with OtsLabs' OtsAV)
SWA – Adobe Shockwave Audio (Same compression as MP3 with additional header information specific to Adobe Director)
VOX – Dialogic ADPCM Low Sample Rate Digitized Voice
VOC – Creative Labs Sound Blaster Creative Voice 8-bit & 16-bit audio; also the output format of RCA audio recorders
DWD – DiamondWare Digitized
SMP – Turtlebeach SampleVision
OGG – Ogg Vorbis
Tracker modules and related
MOD – Soundtracker and Protracker sample and melody modules
MT2 – MadTracker 2 module
S3M – Scream Tracker 3 module
XM – Fast Tracker module
IT – Impulse Tracker module
NSF – NES Sound Format
MID, MIDI – Standard MIDI file; most often just notes and controls but occasionally also sample dumps (.mid, .rmi)
FTM – FamiTracker Project file
BTM – BambooTracker Project file
Sheet music files
ABC – ABC Notation sheet music file
DARMS – DARMS File Format also known as the Ford-Columbia Format
ETF – Enigma Transportation Format, an abandoned sheet music exchange format
GP* – Guitar Pro sheet music and tablature file
KERN – Kern File Format sheet music file
LY – LilyPond sheet music file
MEI – Music Encoding Initiative file format that attempts to encode all musical notations
MUS, MUSX – Finale sheet music file
MXL, XML – MusicXML standard sheet music exchange format
MSCX, MSCZ – MuseScore sheet music file
SMDL – Standard Music Description Language sheet music file
SIB – Sibelius sheet music file
Other file formats pertaining to audio
NIFF – Notation Interchange File Format
PTB – Power Tab Editor tab
ASF – Advanced Systems Format
CUST – DeliPlayer custom sound format
GYM – Genesis YM2612 log
JAM – Jam music format
MNG – Background music for the Creatures game series, starting from Creatures 2
RMJ – RealJukebox Media used for RealPlayer
SID – Sound Interface Device – Commodore 64 instructions to play SID music and sound effects
SPC – Super NES sound format
TXM – Track ax media
VGM – Stands for "Video Game Music", log for several different chips
YM – Atari ST/Amstrad CPC YM2149 sound chip format
PVD – Portable Voice Document used for Oaisys & Mitel call recordings
Playlist formats
AIMPPL – AIMP Playlist format
ASX – Advanced Stream Redirector
RAM – RealAudio Metafile, for RealAudio files only.
XPL – HDi playlist
XSPF – XML Shareable Playlist Format
ZPL – Xbox Music (Formerly Zune) Playlist format from Microsoft
M3U – Multimedia playlist file
PLS – Multimedia playlist, originally developed for use with the museArc
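Most of the playlist formats above are small text or XML files; as one example, the following sketch writes a minimal extended M3U playlist. The #EXTM3U header and #EXTINF entries follow the common extended-M3U convention, and the track paths, titles, and durations are made up.

# Minimal extended-M3U sketch; track paths and durations are illustrative.
tracks = [
    (215, "Artist - First Song", "music/first.mp3"),
    (187, "Artist - Second Song", "music/second.mp3"),
]
lines = ["#EXTM3U"]
for seconds, title, path in tracks:
    lines.append(f"#EXTINF:{seconds},{title}")   # duration in seconds, display title
    lines.append(path)
with open("example.m3u", "w") as f:
    f.write("\n".join(lines) + "\n")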
Audio editing and music production
ALS – Ableton Live set
ALC – Ableton Live clip
ALP – Ableton Live pack
ATMOS, AUDIO, METADATA – Dolby Atmos Rendering and Mastering related file
AUP – Audacity project file
AUP3 – Audacity 3.0 project file
BAND – GarageBand project file
CEL – Adobe Audition loop file (Cool Edit Loop)
CAU – Caustic project file
CPR – Steinberg Cubase project file
CWP – Cakewalk Sonar project file
DRM – Steinberg Cubase drum file
DMKIT – Image-Line's Drumaxx drum kit file
ENS – Native Instruments Reaktor Ensemble
FLP – Image Line FL Studio project file
GRIR – Native Instruments Komplete Guitar Rig Impulse Response
LOGIC – Logic Pro X project file
MMP – LMMS project file (alternatively MMPZ for compressed formats)
MMR – MAGIX Music Maker project file
MX6HS – Mixcraft 6 Home Studio project file
NPR – Steinberg Nuendo project file
OMF, OMFI – Open Media Framework Interchange; OMFI succeeds OMF (Open Media Framework)
PTX - Pro Tools 10 or later project file
PTF - Pro Tools 7 up to Pro Tools 9 project file
PTS - Legacy Pro Tools project file
RIN – Soundways RIN-M file containing sound recording participant credits and song information
RPP, RPP-BAK – REAPER project file
REAPEAKS – REAPER peak (waveform cache) file
SES – Adobe Audition multitrack session file
SFK – Sound Forge waveform cache file
SFL – Sound Forge sound file
SNG – MIDI sequence file (MidiSoft, Korg, etc.) or n-Track Studio project file
STF – StudioFactory project file. It contains all necessary patches, samples, tracks and settings to play the file
SND – Akai MPC sound file
SYN – SynFactory project file. It contains all necessary patches, samples, tracks and settings to play the file
UST – Utau Editor sequence excluding wave-file
VCLS – VocaListener project file
VPR – Vocaloid 5 Editor sequence excluding wave-file
VSQ – Vocaloid 2 Editor sequence excluding wave-file
VSQX – Vocaloid 3 & 4 Editor sequence excluding wave-file
Recorded television formats
DVR-MS – Windows XP Media Center Edition's Windows Media Center recorded television format
WTV – Windows Vista's and up Windows Media Center recorded television format
Source code for computer programs
ADA, ADB, 2.ADA – Ada (body) source
ADS, 1.ADA – Ada (specification) source
ASM, S – Assembly language source
BAS – BASIC, FreeBASIC, Visual Basic, BASIC-PLUS source, PICAXE basic
BB – Blitz Basic Blitz3D
BMX – Blitz Basic BlitzMax
C – C source
CLJ – Clojure source code
CLS – Visual Basic class
COB, CBL – COBOL source
CPP, CC, CXX, C, CBP – C++ source
CS – C# source
CSPROJ – C# project (Visual Studio .NET)
D – D source
DBA – DarkBASIC source
DBPro123 – DarkBASIC Professional project
E – Eiffel source
EFS – EGT Forever Source File
EGT – EGT Asterisk Source File, could be J, C#, VB.net, EF 2.0 (EGT Forever)
EL – Emacs Lisp source
FOR, FTN, F, F77, F90 – Fortran source
FRM – Visual Basic form
FRX – Visual Basic form stash file (binary form file)
FTH – Forth source
GED – Game Maker Extension Editable file as of version 7.0
GM6 – Game Maker Editable file as of version 6.x
GMD – Game Maker Editable file up to version 5.x
GMK – Game Maker Editable file as of version 7.0
GML – Game Maker Language script file
GO – Go source
H – C/C++ header file
HPP, HXX – C++ header file
HS – Haskell source
I – SWIG interface file
INC – Turbo Pascal included source
JAVA – Java source
L – lex source
LGT – Logtalk source
LISP – Common Lisp source
M – Objective-C source
M – MATLAB
M – Mathematica
M4 – m4 source
ML – Standard ML and OCaml source
MSQR – M² source file, created by Mattia Marziali
N – Nemerle source
NB – Nuclear Basic source
P – Parser source
PAS, PP, P – Pascal source (DPR for projects)
PHP, PHP3, PHP4, PHP5, PHPS, Phtml – PHP source
PIV – Pivot stickfigure animator
PL, PM – Perl
PLI, PL1 – PL/I
PRG – Ashton-Tate; dbII, dbIII and dbIV, db, db7, clipper, Microsoft Fox and FoxPro, harbour, xharbour, and Xbase
PRO – IDL
POL – Apcera Policy Language doclet
PY – Python source
R – R source
RED – Red source
REDS – Red/System source
RB – Ruby source
RESX – Resource file for .NET applications
RC, RC2 – Resource script files to generate resources for .NET applications
RKT, RKTL – Racket source
SCALA – Scala source
SCI, SCE – Scilab
SCM – Scheme source
SD7 – Seed7 source
SKB, SKC – Sage Retrieve 4GL Common Area (Main and Amended backup)
SKD – Sage Retrieve 4GL Database
SKF, SKG – Sage Retrieve 4GL File Layouts (Main and Amended backup)
SKI – Sage Retrieve 4GL Instructions
SKK – Sage Retrieve 4GL Report Generator
SKM – Sage Retrieve 4GL Menu
SKO – Sage Retrieve 4GL Program
SKP, SKQ – Sage Retrieve 4GL Print Layouts (Main and Amended backup)
SKS, SKT – Sage Retrieve 4GL Screen Layouts (Main and Amended backup)
SKZ – Sage Retrieve 4GL Security File
SLN – Visual Studio solution
SPIN – Spin source (for Parallax Propeller microcontrollers)
STK – Stickfigure file for Pivot stickfigure animator
SWG – SWIG source code
TCL – TCL source code
VAP – Visual Studio Analyzer project
VB – Visual Basic.NET source
VBG – Visual Studio compatible project group
VBP, VIP – Visual Basic project
VBPROJ – Visual Basic .NET project
VCPROJ – Visual C++ project
VDPROJ – Visual Studio deployment project
XPL – XProc script/pipeline
XQ – XQuery file
XSL – XSLT stylesheet
Y – yacc source
Spreadsheet
123 – Lotus 1-2-3
AB2 – Abykus worksheet
AB3 – Abykus workbook
AWS – Ability Spreadsheet
BCSV – Nintendo proprietary table format
CLF – ThinkFree Calc
CELL – Haansoft (Hancom) SpreadSheet software document
CSV – Comma-Separated Values
GSHEET – Google Drive Spreadsheet
numbers – An Apple Numbers Spreadsheet file
gnumeric – Gnumeric spreadsheet, a gzipped XML file
LCW – Lucid 3-D
ODS – OpenDocument spreadsheet
OTS – OpenDocument spreadsheet template
QPW – Quattro Pro spreadsheet
SDC – StarOffice StarCalc Spreadsheet
SLK – SYLK (SYmbolic LinK)
STC – OpenOffice.org XML (obsolete) Spreadsheet template
SXC – OpenOffice.org XML (obsolete) Spreadsheet
TAB – tab delimited columns; also TSV (Tab-Separated Values)
TXT – text file
VC – VisiCalc
WK1 – Lotus 1-2-3 up to version 2.01
WK3 – Lotus 1-2-3 version 3.0
WK4 – Lotus 1-2-3 version 4.0
WKS – Lotus 1-2-3
WKS – Microsoft Works
WQ1 – Quattro Pro DOS version
XLK – Microsoft Excel worksheet backup
XLS – Microsoft Excel worksheet (97–2003)
XLSB – Microsoft Excel binary workbook
XLSM – Microsoft Excel Macro-enabled workbook
XLSX – Office Open XML worksheet
XLR – Microsoft Works version 6.0
XLT – Microsoft Excel worksheet template
XLTM – Microsoft Excel Macro-enabled worksheet template
XLW – Microsoft Excel worksheet workspace (version 4.0)
Tabulated data
TSV – Tab-separated values
CSV – Comma-separated values
db – databank format; accessible by many econometric applications
dif – accessible by many spreadsheet applications
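The two delimited formats above differ only in the separator character; the following sketch writes the same small table as CSV and as TSV with Python's standard csv module. File names and values are illustrative.

# Minimal CSV/TSV sketch; file names and rows are illustrative.
import csv

rows = [["year", "value"], ["2020", "1.5"], ["2021", "2.0"]]

with open("example.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)                  # comma-separated

with open("example.tsv", "w", newline="") as f:
    csv.writer(f, delimiter="\t").writerows(rows)  # tab-separated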
Video
AAF – mostly intended to hold edit decisions and rendering information, but can also contain compressed media essence
3GP – the most common video format for cell phones
GIF – Animated GIF (simple animation; formerly often avoided because of patent problems)
ASF – container (enables any form of compression to be used; MPEG-4 is common; video in ASF-containers is also called Windows Media Video (WMV))
AVCHD – Advanced Video Codec High Definition
AVI – container (a shell, which enables any form of compression to be used)
BIK (.bik) – Bink Video file. A video compression system developed by RAD Game Tools
BRAW - a video format used by Blackmagic's Ursa Mini Pro 12K cameras.
CAM – aMSN webcam log file
COLLAB – Blackboard Collaborate session recording
DAT – video data file (created automatically when a video file is burned to a CD)
DSH
DVR-MS – Windows XP Media Center Edition's Windows Media Center recorded television format
FLV – Flash video (encoded to run in a flash animation)
M1V – MPEG-1 video
M2V – MPEG-2 video
NOA – rare movie format used in some Japanese eroge around 2002
FLA – Adobe Flash (for producing)
FLR – (text file which contains scripts extracted from SWF by a free ActionScript decompiler named FLARE)
SOL – Adobe Flash shared object ("Flash cookie")
STR - Sony PlayStation video stream
M4V – video container file format developed by Apple
Matroska (*.mkv) – Matroska is a container format, which enables any video format such as MPEG-4 ASP or AVC to be used along with other content such as subtitles and detailed meta information
WRAP – MediaForge (*.wrap)
MNG – mainly simple animation containing PNG and JPEG objects, often somewhat more complex than animated GIF
QuickTime (.mov) – container which enables any form of compression to be used; Sorenson codec is the most common; QTCH is the filetype for cached video and audio streams
MPEG (.mpeg, .mpg, .mpe)
THP – Nintendo proprietary movie/video format
MPEG-4 Part 14, shortened "MP4" – multimedia container (most often used for Sony's PlayStation Portable and Apple's iPod)
MXF – Material Exchange Format (standardized wrapper format for audio/visual material developed by SMPTE)
ROQ – used by Quake 3
NSV – Nullsoft Streaming Video (media container designed for streaming video content over the Internet)
Ogg – container, multimedia
RM – RealMedia
SVI – Samsung video format for portable players
SMI – SAMI Caption file (HTML like subtitle for movie files)
SMK (.smk) – Smacker video file. A video compression system developed by RAD Game Tools
SWF – Adobe Flash (for viewing)
WMV – Windows Media Video (See ASF)
WTV – Windows Vista's and up Windows Media Center recorded television format
YUV – raw video format; resolution (horizontal x vertical) and sample structure 4:2:2 or 4:2:0 must be known explicitly
WebM – video file format for web video using HTML5
Video editing, production
BRAW – Blackmagic Design RAW video file name
FCP – Final Cut Pro project file
MSWMM – Windows Movie Maker project file
PPJ, PRPROJ – Adobe Premiere Pro video editing file
IMOVIEPROJ – iMovie project file
VEG & VEG-BAK – Sony Vegas project file
SUF – Sony camera configuration file (setup.suf) produced by XDCAM-EX camcorders
WLMP – Windows Live Movie Maker project file
KDENLIVE – Kdenlive project file
VPJ – VideoPad project file
MOTN – Apple Motion project file
IMOVIEMOBILE – iMovie project file for iOS users
WFP / WVE – Wondershare Filmora Project
PDS - Cyberlink PowerDirector project
VPROJ - VSDC Free Video Editor project file
Video game data
List of common file formats of data for video games on systems that support filesystems, most commonly PC games.
Minecraft – formats used by Mojang's Minecraft
MCADDON – format used by the Bedrock Edition of Minecraft for add-ons; Resource packs for the game
MCFUNCTION – format used by Minecraft for storing functions
MCMETA – format used by Minecraft for storing data for customizable texture packs for the game
MCPACK – format used by the Bedrock Edition of Minecraft for in-game texture packs; full addons for the game
MCR – format used by Minecraft for storing data for in-game worlds before version 1.2
MCTEMPLATE – format used by the Bedrock Edition of Minecraft for world templates
MCWORLD – format used by the Bedrock Edition of Minecraft for in-game worlds
NBS – format used by Note Block Studio, a tool that can be used to make note block songs for Minecraft.
TrackMania/Maniaplanet Engine – Formats used by games based on the TrackMania engine.
GBX - All user-created content is stored in this file type.
REPLAY.GBX - Stores the replay of a race.
CHALLENGE.GBX/MAP.GBX - Stores tracks/maps.
SYSTEMCONFIG.GBX - Launcher info.
TRACKMANIAVEHICLE.GBX - Info about a certain car type.
VEHICLETUNINGS.GBX - Vehicle physics.
SOLID.GBX - A block's model.
ITEM.GBX - Custom Maniaplanet item.
BLOCK.GBX - Custom Maniaplanet block.
TEXTURE.GBX - Info about a texture that are used in materials.
MATERIAL.GBX - Info about a material such as surface type that are used in Solids.
TMEDCLASSIC.GBX - Block info.
GHOST.GBX - Player ghosts in Trackmania and TrackMania Turbo.
CONTROLSTYLE.GBX - Menu files.
SCORES.GBX - Stores info about the player's best times.
PROFILE.GBX - Stores a player's info such as their login.
DDS - Almost every texture in the game uses this format.
PAK - Stores environment data such as valid blocks.
LOC - A locator. Locators allow the game to download content such as car skins from an external server.
SCRIPT.TXT - Scripts for Maniaplanet such as menus and game modes.
XML - ManiaLinks.
Doom engine – Formats used by games based on the Doom engine.
DEH – DeHackEd files to mutate the game executable (not officially part of the DOOM engine)
DSG – Saved game
LMP – A lump is an entry in a DOOM wad.
LMP – Saved demo recording
MUS – Music file (usually contained within a WAD file)
WAD – Data storage (contains music, maps, and textures)
Quake engine – Formats used by games based on the Quake engine.
BSP – (For Binary space partitioning) compiled map format
MAP – Raw map format used by editors like GtkRadiant or QuArK
MDL/MD2/MD3/MD5 – Model for an item used in the game
PAK/PK2 – Data storage
PK3/PK4 – used by the Quake II, Quake III Arena and Quake 4 game engines, respectively, to store game data, textures etc. They are actually .zip files.
.dat – not specific file type, often generic extension for "data" files for a variety of applications
sometimes used for general data contained within the .PK3/PK4 files
.fontdat – a .dat file used for formatting game fonts
.roq – Video format
.sav – Savegame format
Unreal Engine – Formats used by games based on the Unreal engine.
U – Unreal script format
UAX – Animations format for Unreal Engine 2
UMX – Map format for Unreal Tournament
UMX – Music format for Unreal Engine 1
UNR – Map format for Unreal
UPK – Package format for cooked content in Unreal Engine 3
USX – Sound format for Unreal Engine 1 and Unreal Engine 2
UT2 – Map format for Unreal Tournament 2003 and Unreal Tournament 2004
UT3 – Map format for Unreal Tournament 3
UTX – Texture format for Unreal Engine 1 and Unreal Engine 2
UXX – Cache format; these are files a client downloaded from server (which can be converted to regular formats)
Duke Nukem 3D Engine – Formats used by games based on this engine
DMO – Save game
GRP – Data storage
MAP – Map (usually constructed with BUILD.EXE)
Diablo Engine – Formats used by Diablo by Blizzard Entertainment.
SV – Save Game
ITM – Item File
Real Virtuality Engine – Formats used by Bohemia Interactive. Operation:Flashpoint, ARMA 2, VBS2
SQF – Format used for general editing
SQM – Format used for mission files
PBO – Binarized file used for compiled models
LIP – Format that is created from WAV files to create in-game accurate lip-synch for character animations.
Source Engine – Formats used by Valve. Half-Life 2, Counter-Strike: Source, Day of Defeat: Source, Half-Life 2: Episode One, Team Fortress 2, Half-Life 2: Episode Two, Portal, Left 4 Dead, Left 4 Dead 2, Alien Swarm, Portal 2, Counter-Strike: Global Offensive, Titanfall, Insurgency, Titanfall 2, Day of Infamy
VMF – Valve Hammer Map editor raw map file
VMX - Valve Hammer Map editor backup map file
BSP – Source Engine compiled map file
MDL – Source Engine model format
SMD – Source Engine uncompiled model format
PCF – Source Engine particle effect file
HL2 – Half-Life 2 save format
DEM – Source Engine demo format
VPK – Source Engine pack format
VTF – Source Engine texture format
VMT – Source Engine material format.
Pokemon Generation V
CGB - Pokemon Black and White/Pokemon Black 2 and White 2 C-Gear skins.
Other Formats
ARC - used to store New Super Mario Bros. Wii level data
B – used for Grand Theft Auto saved game files
BOL – used for levels on Poing!PC
DBPF – The Sims 2, DBPF, Package
DIVA – Project DIVA timings, element coordinates, MP3 references, notes, animation poses and scores.
ESM, ESP – Master and Plugin data archives for the Creation Engine
HAMBU - format used by the Aidan's Funhouse game RGTW for storing map data
HE0, HE2, HE4 – HE games file
GCF – format used by the Steam content management system for file archives
IMG – format used by Renderware-based Grand Theft Auto games for data storage
LOVE – format used by the LOVE2D Engine
MAP – format used by Halo: Combat Evolved for archive compression, Doom³, and various other games
MCA – format used by Minecraft for storing data for in-game worlds
NBT – format used by Minecraft for storing program variables along with their (Java) type identifiers
OEC – format used by OE-Cake for scene data storage
OSB - osu! storyboard data
OSC - osu!stream combined stream data
OSF2 - free osu!stream song file
OSR – osu! replay data
OSU – osu! beatmap data
OSZ2 - paid osu!stream song file
P3D – format for panda3d by Disney
PLAGUEINC - format used by Plague Inc. for storing custom scenario information
POD – format used by Terminal Reality
RCT – Used for templates and save files in RollerCoaster Tycoon games
REP – used by Blizzard Entertainment for scenario replays in StarCraft.
SimCity 4, DBPF (.dat, .SC4Lot, .SC4Model) – all game plugins use this format, commonly with different file extensions
SMZIP – ZIP-based package for StepMania songs, themes and announcer packs.
SOLITAIRETHEME8 - A solitaire theme for Windows solitaire
USLD – format used by Unison Shift to store level layouts.
VVVVVV – format used by VVVVVV
CPS – format used by The Powder Toy, Powder Toy save
STM – format used by The Powder Toy, Powder Toy stamp
PKG – format used by Bungie for the PC Beta of Destiny 2, for nearly all the game's assets.
CHR – format used by Team Salvato, for the character files of Doki Doki Literature Club!
Z5 – format used by Z-machine for story files in interactive fiction.
scworld – format used by Survivalcraft to store sandbox worlds.
scskin – format used by Survivalcraft to store player skins.
scbtex – format used by Survivalcraft to store block textures.
prison – format used by Prison Architect to save prisons
escape – format used by Prison Architect to save escape attempts
Video game storage media
List of the most common filename extensions used when a game's ROM image or storage medium is copied from an original read-only memory (ROM) device to an external memory such as a hard disk for backup purposes or for making the game playable with an emulator. In the case of cartridge-based software, if the platform-specific extension is not used, then the filename extensions ".rom" or ".bin" are usually used to clarify that the file contains a copy of the contents of a ROM. ROM, disk or tape images usually do not consist of a single file or ROM; rather, an entire file or ROM structure is contained within one file on the backup medium.
A26 – Atari 2600 (.a26)
A52 – Atari 5200 (.a52)
A78 – Atari 7800 (.a78)
LNX – Atari Lynx (.lnx)
JAG,J64 – Atari Jaguar (.jag, .j64)
ISO, WBFS, WAD, WDF – Wii and WiiU (.iso, .wbfs, .wad, .wdf)
GCM, ISO – GameCube (.gcm, .iso)
min - Pokemon mini (.min)
NDS – Nintendo DS (.nds)
3DS – Nintendo 3DS (.3ds)
CIA – Installation File (.cia)
GB – Game Boy (.gb) (this applies to the original Game Boy and the Game Boy Color)
GBC – Game Boy Color (.gbc)
GBA – Game Boy Advance (.gba)
SAV – Game Boy Advance Saved Data Files (.sav)
SGM – Visual Boy Advance Save States (.sgm)
N64, V64, Z64, U64, USA, JAP, PAL, EUR, BIN – Nintendo 64 (.n64, .v64, .z64, .u64, .usa, .jap, .pal, .eur, .bin)
PJ – Project 64 Save States (.pj)
NES – Nintendo Entertainment System (.nes)
FDS – Famicom Disk System (.fds)
JST – Jnes Save States (.jst)
FC? – FCEUX Save States (.fc#, where # is any character, usually a number)
GG – Game Gear (.gg)
SMS – Master System (.sms)
SG – SG-1000 (.sg)
SMD,BIN – Mega Drive/Genesis (.smd or .bin)
32X – Sega 32X (.32x)
SMC,078,SFC – Super NES (.smc, .078, or .sfc) (.078 is for split ROMs, which are rare)
FIG – Super Famicom (Japanese releases are rarely .fig, above extensions are more common)
SRM – Super NES Saved Data Files (.srm)
ZST – ZSNES Save States (.zst, .zs1-.zs9, .z10-.z99)
FRZ – Snes9X Save States (.frz, .000-.008)
PCE – TurboGrafx-16/PC Engine (.pce)
NPC, NGP – Neo Geo Pocket (.npc, .ngp)
NGC – Neo Geo Pocket Color (.ngc)
VB – Virtual Boy (.vb)
INT – Intellivision (.int)
MIN – Pokémon Mini (.min)
VEC – Vectrex (.vec)
BIN – Odyssey² (.bin)
WS – WonderSwan (.ws)
WSC – WonderSwan Color (.wsc)
TZX – ZX Spectrum (.tzx) (for exact copies of ZX Spectrum games)
TAP – for tape images without copy protection
Z80,SNA – (for snapshots of the emulator RAM)
DSK – (for disk images)
TAP – Commodore 64 (.tap) (for tape images including copy protection)
T64 – (for tape images without copy protection, considerably smaller than .tap files)
D64 – (for disk images)
CRT – (for cartridge images)
ADF – Amiga (.adf) (for 880K diskette images)
ADZ – GZip-compressed version of the above.
DMS – Disk Masher System, previously used as a disk-archiving system native to the Amiga, also supported by emulators.
Virtual machines
Microsoft Virtual PC, Virtual Server
VFD – Virtual Floppy Disk (.vfd)
VHD – Virtual Hard Disk (.vhd)
VUD – Virtual Undo Disk (.vud)
VMC – Virtual Machine Configuration (.vmc)
VSV – Virtual Machine Saved State (.vsv)
EMC VMware ESX, GSX, Workstation, Player
LOG – Virtual Machine Logfile (.log)
VMDK, DSK – Virtual Machine Disk (.vmdk, .dsk)
NVRAM – Virtual Machine BIOS (.nvram)
VMEM – Virtual Machine paging file (.vmem)
VMSD – Virtual Machine snapshot metadata (.vmsd)
VMSN – Virtual Machine snapshot (.vmsn)
VMSS,STD – Virtual Machine suspended state (.vmss, .std)
VMTM – Virtual Machine team data (.vmtm)
VMX,CFG – Virtual Machine configuration (.vmx, .cfg)
VMXF – Virtual Machine team configuration (.vmxf)
VirtualBox
VDI – VirtualBox Virtual Disk Image (.vdi)
Vbox-extpack – VirtualBox extension pack. (.vbox-extpack)
Parallels Workstation
HDD – Virtual Machine hard disk (.hdd)
PVS – Virtual Machine preferences/configuration (.pvs)
SAV – Virtual Machine saved state (.sav)
QEMU
COW – Copy-on-write
QCOW – QEMU copy-on-write
QCOW2 – QEMU copy-on-write, version 2
QED – QEMU enhanced disk format
Web page
Static
DTD – Document Type Definition (standard), MUST be public and free
HTML (.html, .htm) – HyperText Markup Language
XHTML (.xhtml, .xht) – eXtensible HyperText Markup Language
MHTML (.mht, .mhtml) – Archived HTML, store all data on one web page (text, images, etc.) in one big file
MAF (.maff) – web archive based on ZIP
Dynamically generated
ASP (.asp) – Microsoft Active Server Page
ASPX – (.aspx) – Microsoft Active Server Page for .NET (ASP.NET)
ADP – AOLserver Dynamic Page
BML – (.bml) – Better Markup Language (templating)
CFM – (.cfm) – ColdFusion
CGI – (.cgi)
iHTML – (.ihtml) – Inline HTML
JSP – (.jsp) JavaServer Pages
Lasso – (.las, .lasso, .lassoapp) – A file created or served with the Lasso Programming Language
PL – Perl (.pl)
PHP – (.php, .php?, .phtml) – where ? is the version number (PHP originally stood for Personal Home Page; it now stands for PHP: Hypertext Preprocessor)
SSI – (.shtml) – HTML with Server Side Includes (Apache)
SSI – (.stm) – HTML with Server Side Includes (Apache)
Markup languages and other web standards-based formats
Atom – (.atom, .xml) – Another syndication format.
EML – (.eml) – Format used by several desktop email clients.
JSON-LD – (.jsonld) – A JSON-based serialization for linked data.
KPRX – (.kprx) – An XML-based serialization for workflow definitions generated by K2.
PS – (.ps) – An XML-based serialization for test automation scripts, called PowerScripts, for K2-based applications.
Metalink – (.metalink, .met) – A format to list metadata about downloads, such as mirrors, checksums, and other information.
RSS – (.rss, .xml) – Syndication format.
Markdown – (.markdown, .md) – Plain text formatting syntax, which is popularly used to format "readme" files.
Shuttle – (.se) – Another lightweight markup language.
Other
AXD – cookie extensions found in temporary internet folder
BDF – Binary Data Format – raw data from recovered blocks of unallocated space on a hard drive
CBP – CD Box Labeler Pro, CentraBuilder, Code::Blocks Project File, Conlab Project
CEX – SolidWorks Enterprise PDM Vault File
COL – Nintendo GameCube proprietary collision file (.col)
CREDX – CredX Dat File
DDB – generating code for a Vocaloid singer's voice (see .DDI)
DDI – Vocaloid phoneme library (Japanese, English, Korean, Spanish, Chinese, Catalan)
DUPX – DuupeCheck database management tool project file
FTM – Family Tree Maker data file
FTMB – Family Tree Maker backup file
GA3 – Graphical Analysis 3
GEDCOM (.ged) – (GEnealogical Data COMmunication) format to exchange genealogy data between different genealogy software
HLP – Windows help file
IGC – flight tracks downloaded from GPS devices in the FAI's prescribed format
INF – similar format to INI file; used to install device drivers under Windows, inter alia.
JAM – JAM Message Base Format for BBSes
KMC – tests made with KatzReview's MegaCrammer
KCL – Nintendo GameCube/Wii proprietary collision file (.kcl)
KTR – Hitachi Vantara Pentaho Data Integration/Kettle Transformation Project file
LNK – Microsoft Windows format for Hyperlinks to Executables
LSM – LSMaker script file (program using layered .jpg to create special effects; specifically designed to render lightsabers from the Star Wars universe) (.lsm)
NARC – Archive format used in Nintendo DS games.
OER – AU OER Tool, Open Educational Resource editor
PA – Used to assign sound effects to materials in KCL files (.pa)
PIF – Used to run MS-DOS programs under Windows
POR – So-called "portable" SPSS files, readable by PSPP
PXZ – Compressed file to exchange media elements with PSALMO
RISE – File containing RISE generated information model evolution
SCR – Windows Screen Saver file
TOPC – TopicCrunch SEO Project file holding keywords, domain, and search engine settings (ASCII)
XLF – Utah State University Extensible LADAR Format
XMC – Assisted contact lists format, based on XML and used in kindergartens and schools
ZED – My Heritage Family Tree
Zone file – a text file containing a DNS zone
Cursors
ANI – Animated cursor
CUR – Cursor file
Smes – Hawk's Dock configuration file
Generalized files
General data formats
These file formats are fairly well defined by long-term use or a general standard, but the content of each file is often highly specific to particular software or has been extended by further standards for specific uses.
Text-based
CSV – comma-separated values
HTML – hyper text markup language
CSS – cascading style sheets
INI – a configuration text file whose format is substantially similar between applications
JSON – JavaScript Object Notation is an openly used data format now used by many languages, not just JavaScript
TSV – tab-separated values
XML – an open data format
YAML – an open data format
ReStructuredText – an open text format for technical documents used mainly in the Python programming language
Markdown (.md) – an open lightweight markup language to create simple but rich text, often used to format README files
AsciiDoc – an open human-readable markup document format semantically equivalent to DocBook
Generic file extensions
These are filename extensions and broad types reused frequently with differing formats or no specific format by different programs.
Binary files
Bak file (.bak, .bk) – various backup formats: some just copies of data files, some in application-specific data backup formats, some formats for general file backup programs
BIN – binary data, often memory dumps of executable code or data to be re-used by the same software that originated it
DAT – data file, usually binary data proprietary to the program that created it, or an MPEG-1 stream of Video CD
DSK – file representations of various disk storage images
RAW – raw (unprocessed) data
Text files
configuration file (.cnf, .conf, .cfg) – substantially software-specific
logfiles (.log) – usually text, but sometimes binary
plain text (.asc or .txt) – human-readable plain text, usually no more specific
Partial files
Differences and patches
diff – text file differences created by the program diff and applied as updates by patch
Incomplete transfers
!UT (.!ut) – partly complete uTorrent download
CRDOWNLOAD (.crdownload) – partly complete Google Chrome download
OPDOWNLOAD (.opdownload) – partly complete Opera download
PART (.part) – partly complete Mozilla Firefox or Transmission download
PARTIAL (.partial) – partly complete Internet Explorer or Microsoft Edge download
Temporary files
Temporary file (.temp, .tmp, various others) – sometimes in a specific format, but often just raw data in the middle of processing
Pseudo-pipeline file – used to simulate a software pipe
See also
List of filename extensions
MIME#Content-Type, a standard for referring to file formats
List of motion and gesture file formats
List of file signatures, or "magic numbers"
References
External links |
1191588 | https://en.wikipedia.org/wiki/Edward%20Thackeray | Edward Thackeray | Colonel Sir Edward Talbot Thackeray (19 October 1836 – 3 September 1927) was an English recipient of the Victoria Cross, the highest and most prestigious award for gallantry in the face of the enemy that can be awarded to British and Commonwealth forces.
The son of Rev. Francis Thackeray and Mary Anne Shakespear, he was the first cousin of the novelist, William Makepeace Thackeray. He was educated at Marlborough College and Addiscombe Military Seminary.
Thackeray was 20 years old and a second lieutenant in the Bengal Engineers, Bengal Army, during the Indian Mutiny when he performed the deed at Delhi, British India, on 16 September 1857 for which he was awarded the Victoria Cross.
He later achieved the rank of colonel, and was elected to the Athenaeum in 1876. Thackeray retired from the Army in 1888 and in 1898 he went to live in Italy where he spent the rest of his life.
His medal is currently displayed at the National Museum of Military History in Johannesburg, South Africa.
Works
Biographical notices of officers of the Royal (Bengal) engineers; (1900)
References and sources
References
Sources
Monuments to Courage (David Harvey, 1999)
The Register of the Victoria Cross (This England, 1997)
The Sapper VCs (Gerald Napier, 1998)
External links
Sappers VCs
1836 births
1927 deaths
British recipients of the Victoria Cross
Alumni of Addiscombe Military Seminary
British Indian Army officers
Knights Commander of the Order of the Bath
Indian Rebellion of 1857 recipients of the Victoria Cross
People from Broxbourne
People educated at Marlborough College
British military personnel of the Second Anglo-Afghan War
Royal Engineers officers
Bengal Engineers officers
Edward |
48672234 | https://en.wikipedia.org/wiki/List%20of%20engineering%20colleges%20in%20Nepal | List of engineering colleges in Nepal |
Engineering colleges of Far-western University
School of Engineering
Civil Engineering
Computer Engineering
Engineering colleges of Kathmandu University
School of Science
Environmental Engineering
School of Engineering
Civil Engineering
Mechanical Engineering
Computer Engineering
Electrical & Electronics Engineering
Geomatics Engineering
Chemical Engineering
Engineering colleges of Mid Western University
School of Engineering
Civil Engineering
Computer Engineering
Hydropower Engineering
Engineering colleges of Pokhara University
School of Engineering
Civil Engineering
Electrical & Electronics Engineering
Equivalent Colleges
Madan Bhandari Memorial Academy Nepal
Civil Engineering
Computer Engineering
Bachelor in Architecture
Masters in Construction Management
Affiliated Engineering Colleges of Pokhara University
Gandaki College of Engineering
Software Engineering
Computer Engineering
Cosmos College of Management & Technology
Civil Engineering
Electronics & Communication Engineering
Computer Engineering
Information Technology
National Academy of Science & Technology, Dhangadhi Engineering College
Civil Engineering
Computer Engineering
Nepal College of Information Technology
Civil Engineering
Electronics & Communication Engineering
Computer Engineering
Software Engineering
Information Technology
Nepal Engineering College
Civil Engineering
Electronics & Communication Engineering
Computer Engineering
Architecture Engineering
Electrical & Electronics Engineering
Civil & Rural Engineering
Lumbini Engineering, Management & Science College
Civil Engineering
Electronics & Communication Engineering
Computer Engineering
Electrical & Electronics Engineering
Pokhara Engineering College
Civil Engineering
Computer Engineering
Electronics & Communication Engineering
Oxford College of Engineering & Management
Civil Engineering
Computer Engineering
Electrical & Electronics Engineering
Everest Engineering & Management College
Civil Engineering
Electronics & Communication Engineering
Computer Engineering
Rapti Engineering College
Civil Engineering
Electronics & Communication Engineering
United Technical College
Civil Engineering
Electrical & Electronics Engineering
Universal Science College
Civil Engineering
College Of Engineering & Management
Civil Engineering
Nepal Western Academy
Civil Engineering
Engineering colleges of Purbanchal University
School of Engineering & Technology
Electronics & Communication Engineering
Computer Engineering
Affiliated Engineering Colleges of Purbanchal University
Acme Engineering College
Civil Engineering
Electronics & Communication Engineering
Computer Engineering
Architecture Engineering
Himalayan White house International College
Civil Engineering
Electronics & Communication Engineering
Computer Engineering
Kantipur City College
Civil Engineering
Electronics & Communication Engineering
Computer Engineering
Eastern College of Engineering
Civil Engineering
Electronics & Communication Engineering
Computer Engineering
Khwopa Engineering College
Civil Engineering
Electrical Engineering
Electronics & Communication Engineering
Computer Engineering
Architecture Engineering
Himalayan Institute of Science & Tech.
Civil Engineering
Electronics & Communication Engineering
College of Biomedical Engineering & Applied Science
Biomedical Engineering
Aryan School of Engineering
Civil Engineering
Central Engineering College
Civil Engineering
Geomatics Institute Of Technology
Geomatics Engineering
Kantipur International College
Civil Engineering
Architecture Engineering
Nepal Polytechnic Institute
Civil Engineering
Electronics Engineering
Hillside College of Engineering
Civil Engineering
Electrical Engineering
Morgan Engineering & Management College
Civil Engineering
Electrical Engineering
Lord Buddha College of Engineering & Management (LBCEM)
Computer Engineering
Electrical Engineering
Caliber College of Engineering
Civil Engineering
Electronics & Communication Engineering
Pathibhara Centre for Advance Studies
Civil Engineering
Electronics & Communication Engineering
Computer Engineering
Engineering colleges of Tribhuwan University
Institute of Engineering (IOE), Purwanchal Campus
Civil Engineering
Agriculture Engineering
Mechanical Engineering
Electrical Engineering
Electronics Communication and Information Engineering
Computer Engineering
Architecture Engineering
Institute of Engineering (IOE), Pashchimanchal Campus
Civil Engineering
Electronics Communication and Information Engineering
Mechanical Engineering
Electrical Engineering
Computer Engineering
Geomatics Engineering
Automobile Engineering
Institute of Engineering (IOE), Pulchowk Campus
Civil Engineering
Electrical Engineering
Electronics Communication and Information Engineering
Mechanical Engineering
Computer Engineering
Architecture Engineering
Aerospace Engineering
Chemical Engineering
Institute of Engineering (IOE), Thapathali Campus
Civil Engineering
Electronics Communication and Information Engineering
Mechanical Engineering
Industrial Engineering
Automobile Engineering
Computer Engineering
Architecture Engineering
Institute of Engineering (IOE), Chitwan Engineering Campus
Architecture Engineering
Affiliated Engineering Colleges of Tribhuwan University
Advanced Colleges of Engineering & Management
Civil Engineering
Electrical Engineering
Electronics Communication and Information Engineering
Computer Engineering
Himalaya College of Engineering
Civil Engineering
Electronics Communication and Information Engineering
Computer Engineering
Architecture Engineering
Kantipur Engineering College
Civil Engineering
Electronics Communication and Information Engineering
Computer Engineering
Kathmandu Engineering College
Civil Engineering
Electrical Engineering
Electronics Communication and Information Engineering
Computer Engineering
Architecture Engineering
Khwopa College of Engineering
Civil Engineering
Electrical Engineering
Janakpur Engineering College
Civil Engineering
Electronics Communication and Information Engineering
Computer Engineering
Kathford College of Engineering and Management
Civil Engineering
Electronics Communication and Information Engineering
Computer Engineering
Sagarmatha Engineering College
Civil Engineering
Electronics Communication and Information Engineering
Computer Engineering
Lalitpur Engineering College
Civil Engineering
Computer Engineering
National College of Engineering
Civil Engineering
Electrical Engineering
Electronics Communication and Information Engineering
Computer Engineering
References
External links
Far-Western University
Kathmandu University
Mid-Western University
Pokhara University
Purbanchal University
Tribhuwan University
Nepal |
50707157 | https://en.wikipedia.org/wiki/1987%20Florida%20Citrus%20Bowl | 1987 Florida Citrus Bowl | The 1987 Florida Citrus Bowl was held on January 1, 1987 at the Florida Citrus Bowl in Orlando, Florida. The #10 Auburn Tigers defeated the USC Trojans by a score of 16–7.
The first score of the game came when the Trojans intercepted an Auburn pass and returned it 24 yards to take the lead. No other scoring took place in the first quarter, which ended 7–0. The second quarter saw Auburn retaliate, as the Tigers found the end zone twice (a 3-yard pass and a 4-yard run) to lead 14–7 at halftime. The third quarter saw no scoring and Auburn capped off the game with a safety in the fourth quarter, and the game ended 16–7.
Auburn finished the game with 9 more first downs, 156 more rushing yards, and 133 more total yards. However, the Trojans out-passed the Tigers by 23 yards.
References
Florida Citrus Bowl
Citrus Bowl (game)
Auburn Tigers football bowl games
USC Trojans football bowl games
Florida Citrus Bowl
Florida Citrus Bowl |
29141650 | https://en.wikipedia.org/wiki/Jeremy%20Gibbons | Jeremy Gibbons | Jeremy Gibbons is a computer scientist and professor of computing at the University of Oxford. He serves as Deputy Director of the Software Engineering Programme in the Department of Computer Science, Governing Body Fellow at Kellogg College and Pro-Proctor of the University of Oxford.
Academic
Professor Gibbons obtained a Bachelor of Science (BSc) (Hons) in computer science from the University of Edinburgh (1983–1987), and a Doctor of Philosophy (DPhil) in Computation from the University of Oxford (1987–1991).
Before taking his current post, he was, first, lecturer in computer science, University of Auckland (1991–1996), next, lecturer and senior lecturer in computing, Oxford Brookes University (1996–1999), and then reader in software engineering at the University of Oxford.
His research activities include programming languages and methods; functional programming; generic programming; object technology; program specification, derivation and transformation.
His current projects include CancerGrid; Datatype-Generic Programming; Automatic Generation of Software Components; Workflow for Cancer Bioinformatics.
His publications cover generic programming, functional programming, formal methods, computational biology and bioinformatics.
He is a member of the International Federation for Information Processing (IFIP) Working Group 2.1 on Algorithmic Languages and Calculi, which specified, maintains, and supports the programming languages ALGOL 60 and ALGOL 68. He has been its chairperson since 2009.
References
External links
Patterns in Functional Programming – his blog
Members of the Department of Computer Science, University of Oxford
Fellows of Kellogg College, Oxford
British computer scientists
Living people
Academics of Oxford Brookes University
Alumni of the University of Oxford
Alumni of the University of Edinburgh
University of Auckland faculty
Year of birth missing (living people)
People educated at Boroughmuir High School |
4617475 | https://en.wikipedia.org/wiki/Comparison%20of%20VoIP%20software | Comparison of VoIP software | This is a comparison of voice over IP (VoIP) software used to conduct telephone-like voice conversations across Internet Protocol (IP) based networks. For residential markets, voice over IP phone service is often cheaper than traditional public switched telephone network (PSTN) service and can remove geographic restrictions to telephone numbers, e.g., have a PSTN phone number in a New York area code ring in Tokyo.
For businesses, VoIP obviates separate voice and data pipelines, channelling both types of traffic through the IP network while giving the telephony user a range of advanced abilities.
Softphones are client devices for making and receiving voice and video calls over the IP network with the standard functions of most original telephones and usually allow integration with VoIP phones and USB phones instead of using a computer's microphone and speakers (or headset). Most softphone clients run on the open Session Initiation Protocol (SIP) supporting various codecs. Skype runs on a closed proprietary networking protocol but additional business telephone system (PBX) software can allow a SIP based telephone system to connect to the Skype network. Online chat programs now also incorporate voice and video communications.
Other VoIP software applications include conferencing servers, intercom systems, virtual foreign exchange services (FXOs) and adapted telephony software which concurrently support VoIP and public switched telephone network (PSTN) like Interactive Voice Response (IVR) systems, dial in dictation, on hold and call recording servers.
Some entries below are Web-based VoIP; most are standalone Desktop applications.
Desktop applications
Discontinued softphone service
Mobile phones
For mobile VoIP clients:
Frameworks and libraries
Server software
Secure VoIP software
VoIP software with client-to-client encryption
The following table is an overview of those VoIP clients which (can) provide end-to-end encryption.
VoIP software with client-to-server encryption
The following table is an overview of those VoIP clients which (normally) provide client-to-server encryption.
Notes
See also
Comparison of audio coding formats
Comparison of instant messaging clients
Comparison of web conferencing software
List of codecs
List of SIP software
List of video telecommunication services and product brands
Matrix (communication protocol)
Secure communication
Comparison of user features of messaging platforms
References
VoIP software
Cryptographic software
VoIP software
VoIP software |
10571915 | https://en.wikipedia.org/wiki/Tom%20Thacker%20%28musician%29 | Tom Thacker (musician) | Thomas Arnold Thacker (born April 11, 1974), nicknamed Brown Tom, is a Canadian musician. He is the lead guitarist, lead singer and co-founder of the Canadian punk rock group Gob and rhythm guitarist for Sum 41. Thacker formed Gob with Theo Goutzinakis in 1993. Following Dave Baksh’s departure from Sum 41 on May 11, 2006, Thacker was recruited as their touring guitarist, and then became an official member in 2009. He has remained with Sum 41 ever since, even after Baksh rejoined the band in 2015, contributing to three studio releases.
Thacker was born in Langley, British Columbia, Canada. He has been in the films Going the Distance and Sharp as Marbles.
Professional career
Gob (1993–present)
Thacker is one of the vocalists and guitarists in the punk rock band Gob. He formed the band with Theo Goutzinakis, Wolfman Pat Integrity on drums, and Kelly Macaulay on bass. They released their self-titled EP in 1994. Since signing a deal with Nettwerk and EMI, the band has released six studio albums. The current bassist and drummer, Steven Fairweather and Gabe Mantle, joined the band after the band went through several other bassists.
Sum 41 (2006–present)
In late 2006, Thacker joined Sum 41 as touring guitarist, replacing former guitarist Dave Baksh. He also plays keyboards for Sum 41, and provides backing vocals.
On June 26, 2009, in a special chat taking place on the website AbsolutePunk.net, Sum 41 frontman Deryck Whibley made it clear that Thacker was now an official member of the band and not just a touring member. On July 20, 2009, Steve Jocz of Sum 41 stated on the band's official website that the band finished all their tour dates for 2009 and confirmed that Thacker will be appearing on the upcoming Sum 41 album.
Even after Dave Baksh rejoined Sum 41 in 2015, Thacker has remained with the group ever since, which expanded them into a five-piece.
Other musical projects
Under the name Tommy, Tom Thacker played drums in the Canadian pop punk band The McRackins for their 1995 album What Came First?. He returned to play on their 1999 release Comicbooks and Bubblegum.
On September 2, 2011, in an interview with Todd Morse of The Operation M.D., Morse said that the band planned to go on their first ever European tour in December, possibly with Tom Thacker playing lead guitar. Matt Brann was rumored to join the touring line-up on drums. In an interview, Thacker confirmed that he was the one who suggested that Cone and Todd tour with The Operation M.D. in the winter, after Sum 41 cancelled all their tour dates for the rest of 2011.
In 2013, Thacker played two shows with The Offspring, filling in for Todd Morse. In July and August 2017 he toured with The Offspring again, filling in for Noodles.
Discography
Gob
Gob (1994)
Too Late... No Friends (1995)
How Far Shallow Takes You (1998)
World According to Gob (2001)
Foot in Mouth Disease (2003)
Muertos Vivos (2007)
Gob documentary (2012)
Apt. 13 (2014)
Sum 41
All the Good Shit (2009) - live tracks
Screaming Bloody Murder (2011) - co-wrote the first single
Live at the House of Blues, Cleveland 9.15.07 (2011)
13 Voices (2016)
Order in Decline (2019)
Others
Various Artists – FUBAR: The Album (2002)
By a Thread (2011) (producer)
Floodlight (2009) (producer)
Steven Fairweather (2014) (producer)
References
External links
Gob official site
Tom Thacker on Myspace
Gob on Myspace
Interview with Tom and Theo
1974 births
Living people
Canadian punk rock guitarists
Gob (band) members
Lead guitarists
Musicians from British Columbia
Sum 41 members
People from Langley, British Columbia (city) |
27368908 | https://en.wikipedia.org/wiki/6090%20Aulis | 6090 Aulis | 6090 Aulis, provisional designation: , is a Jupiter trojan from the Greek camp, approximately in diameter. It was discovered on 27 February 1989, by Belgian astronomer Henri Debehogne at ESO's La Silla Observatory in northern Chile. The dark Jovian asteroid belongs to the 50 largest Jupiter trojans and has a rotation period of 18.5 hours. It was named for the ancient Greek port Aulis, mentioned in the Iliad.
Orbit and classification
Aulis is a dark Jovian asteroid orbiting in the leading Greek camp at Jupiter's Lagrangian point, 60° ahead of the Gas Giant's orbit in a 1:1 resonance. It is also a non-family asteroid in the Jovian background population.
It orbits the Sun at a distance of 5.0–5.6 AU once every 12 years and 3 months (4,470 days; semi-major axis of 5.31 AU). Its orbit has an eccentricity of 0.06 and an inclination of 20° with respect to the ecliptic. The body's observation arc begins with a precovery taken at Palomar Observatory in March 1954, almost 35 years prior to its official discovery observation.
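The quoted period and semi-major axis are mutually consistent under Kepler's third law; the short sketch below (illustrative only, using the approximation P² = a³ in years and AU and a Julian year of 365.25 days) reproduces the figures given above.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Kepler's third law in solar units: P[years]^2 = a[AU]^3 (asteroid mass negligible)
    double a = 5.31;                          // semi-major axis in AU, as quoted above
    double period_years = std::pow(a, 1.5);   // ~12.2 years, i.e. about 12 years and 3 months
    double period_days  = period_years * 365.25;
    std::printf("P = %.2f yr = %.0f d\n", period_years, period_days);  // ~4,470 days
    return 0;
}
```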
Numbering and naming
This minor planet was numbered on 19 September 1994. On 14 May 2021, the object was named by the Working Group Small Body Nomenclature (WGSBN) for the ancient Greek port Aulis, mentioned in the Iliad. In Greek mythology, it was the place where the Greek fleet gathered to set off for Troy and where King Agamemnon had sacrificed his daughter Iphigenia.
Physical characteristics
Aulis is an assumed C-type asteroid. Its V–I color index of 0.98 is typical for that of most Jovian D-types, the dominant spectral type among the larger Jupiter trojans.
Rotation period
Italian astronomer Stefano Mottola obtained two concurring rotational lightcurves from photometric observations. In June 1994, together with astronomer Anders Erikson, he constructed a lightcurve from observations made with the 0.9-meter Dutch telescope at La Silla, showing a rotation period of hours and a brightness variation of magnitude (). In September 2009, he used the 1.2-meter reflector at Calar Alto Observatory, Spain, and measured a refined period of hours with an amplitude of in magnitude (), confirming his previous result.
Diameter and albedo
According to the space-based surveys carried out by the Infrared Astronomical Satellite IRAS, the Japanese Akari satellite, and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Aulis measures between 59.57 and 81.92 kilometers in diameter and its surface has an albedo between 0.046 and 0.087. The Collaborative Asteroid Lightcurve Link adopts an albedo of 0.0553 from IRAS, and derives a similar diameter of 74.53 kilometers based on an absolute magnitude of 9.4.
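The derivation mentioned above follows the standard relation between an asteroid's diameter, geometric albedo and absolute magnitude, D [km] = (1329 / √p) × 10^(−H/5); a minimal sketch of the arithmetic (illustrative only) is:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Standard asteroid size relation: D [km] = (1329 / sqrt(albedo)) * 10^(-H/5)
    double H = 9.4;         // absolute magnitude quoted above
    double albedo = 0.0553; // geometric albedo adopted from IRAS
    double diameter_km = 1329.0 / std::sqrt(albedo) * std::pow(10.0, -H / 5.0);
    std::printf("D ~ %.2f km\n", diameter_km);  // ~74.5 km, matching the quoted 74.53 km
    return 0;
}
```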
References
External links
Asteroid Lightcurve Database (LCDB), query form (info )
Discovery Circumstances: Numbered Minor Planets (5001)-(10000) – Minor Planet Center
Asteroid (6090) 1989 DJ at the Small Bodies Data Ferret
006090
Discoveries by Henri Debehogne
Minor planets named from Greek mythology
Minor planets named for places
Named minor planets
19890227 |
2691029 | https://en.wikipedia.org/wiki/DRTE%20Computer | DRTE Computer | The DRTE Computer was a transistorized computer built at the Defence Research Telecommunications Establishment (DRTE), part of the Canadian Defence Research Board. It was one of the earlier fully transistorized machines, running in prototype form in 1957, and fully developed form in 1960. Although the performance was quite good, equal to that of contemporary machines like the PDP-1, no commercial vendors ever took up the design, and the only potential sale to the Canadian Navy's Pacific Naval Laboratories, fell through. The machine is currently part of the Canadian national science and technology collection housed at the Canada Science and Technology Museum.
Transistor research
In the early 1950s transistors had not yet replaced vacuum tubes in most electronics. Tubes varied widely in their actual characteristics from tube to tube even of the same model. Engineers had developed techniques to ensure that the overall circuit was not overly sensitive to these changes so they could be replaced without causing trouble. The same techniques had not yet been developed for transistor-based systems, they were simply too new. While smaller circuits could be "hand tuned" to work, larger systems using many transistors were not well understood. At the same time transistors were still expensive; a tube cost about $0.75 while a similar transistor cost about $8. This limited the amount of experimentation most companies were able to perform.
DRTE was originally formed to improve communications systems, and to this end, they started a research program into using transistors in complex circuits in a new Electronics Lab under the direction of Norman Moody. Between 1950 and 1960, the Electronics Lab became a major center of excellence in the field of transistors, and through an outreach program, the Electronic Component Research and Development Committee, were able to pass on their knowledge to visiting engineers from major Canadian electronics firms who were entering the transistor field.
The key development that led to the eventual construction of the computer was Moody's invention of a new type of flip-flop circuit, a key component of all computer systems. Moody's design used a P-N-P-N junction, consisting of a PNP and NPN transistor connected back-to-back. Most machines of the era used Eccles-Jordan flip-flops; this was originally a tube-based concept that was being used by replacing the tubes with transistors. The P-N-P-N circuit offered much higher power output, allowing it to drive a greater number of "downstream" circuits without additional amplifiers. The overall effect was to reduce, sometimes greatly, the total number of transistors needed to implement a digital circuit. Moody published his circuit in 1956.
One downside, only realized later, is that the current draw of Moody's flip-flop was not balanced, so storing different numbers in them could lead to dramatically different current needs on the power supply. Generally this sort of changing load is something that should be avoided wherever possible to reduce noise generated when the power draw increases or decreases. At very low power levels, as in a computer, these pulses of noise can be as powerful as the signals themselves.
The computer
Although it appears it was never an official recommendation, by the mid-1950s the DRTE decided that the best way to really develop transistor techniques in a complex system was to build a computer. This was not something they needed for their own use at the time, it was simply an example of an extremely complex system that would test their capabilities like few other systems could. But as development continued, many of the engineers involved became more interested in computer design than electronics. This was outside the DRTE's charter and eventually a source of friction between the group and the DRB who funded them.
Starting about 1955, David Florida drove the development of a computer using Moody's flip-flop design. He examined existing computer designs and concluded that the main limitation in computer complexity was due largely to the burnout rate of the tubes; a more powerful design required more tubes, which meant more frequent burnouts. Although a number of truly massive machines had been built, like SAGE, most machines were much smaller in order to improve uptime. With transistors this limitation was removed; more complex machines could be built with little effect on reliability, as long as one was willing to pay the price for more transistors. With the price of transistors falling all the time, Florida's design included every feature he imagined would be useful in a scientific machine.
In particular, the design ultimately included a number of subsystems for input/output, a hardware binary/decimal converter, floating-point hardware including a square root function, a number of loop instructions and index registers to support them, and used a complex three-address instruction format. The three-address system meant that every instruction included the address of up to two operands and the result. The system did not include an accumulator, the results of all operations being written back to main memory. This was desirable at the time, when computer memories were generally comparable in speed to the processors (today memory is much slower than processors).
Processor design
Florida had previously worked with the team building the Manchester Mark 1, and following their lead he designed the DRTE machine with 40-bit words. An instruction was broken down into four 10-bit parts, the instruction and three 10-bit addresses. This allowed a total main memory size of 2^10 = 1024 40-bit words, or 40 kB in modern terminology. Integers used 39 bits and one bit for a sign, while floating point numbers had an 8-bit exponent with one bit for the sign and a 32-bit mantissa with one bit for the sign. Florida felt that the three-address instruction format, including the addresses of two parameters and a result, would make programming easier than a register-based system.
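As a rough illustration of the three-address layout (a hypothetical sketch only; the actual bit ordering of the DRTE word is not documented here and is assumed), a 40-bit instruction can be split into one 10-bit opcode and three 10-bit addresses like this:

```cpp
#include <cstdint>

// Hypothetical decode of a 40-bit, three-address instruction word.
// Field order (opcode, operand A, operand B, result) is an assumption for illustration.
struct Instruction {
    uint16_t opcode;  // 10-bit operation code
    uint16_t src_a;   // address of first operand  (0..1023)
    uint16_t src_b;   // address of second operand (0..1023)
    uint16_t dest;    // address the result is written back to
};

Instruction decode(uint64_t word40) {
    const uint64_t mask10 = 0x3FF;  // low 10 bits
    return Instruction{
        static_cast<uint16_t>((word40 >> 30) & mask10),
        static_cast<uint16_t>((word40 >> 20) & mask10),
        static_cast<uint16_t>((word40 >> 10) & mask10),
        static_cast<uint16_t>( word40        & mask10)
    };
}
```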
An experimental version of the machine consisted of the basic math unit and memory handling. Construction of the complete system started in 1958 and was completed in 1960. The machine ran on a 5 microseconds/cycle clock, or 200 kHz, fairly competitive for a machine of the era. A floating point add took between 50 and 365 microseconds (μs). The longest instructions, divide or square root, took 5.3 milliseconds (ms) for floating point. Integer adds took about 200 μs, but other operations were handled in subroutines as opposed to hardware and took much longer; an integer division/square root required 8.2 ms, for instance.
Memory system
The computer used core memory for all storage, lacking "secondary" systems such as a memory drum. Normally the memory for a machine would be built up by stacking a number of core assemblies, or "planes", each one holding a single bit of the machine's word. For instance, with a 40-bit word as in the DRTE, the system would use 40 planes of core. Addresses would be looked up by translating each 10-bit address into an X and Y address in the planes; for 1,024 words in the DRTE this needed 32×32 planes.
One problem with using core on the DRTE machine was that core required fairly high power in order to operate. Providing such power from transistors, which at the time were low-power only, represented a major challenge. Although one solution, commonly used at the time, was to build the core machinery out of tubes, for the DRTE machine this was considered one more challenge in transistor design. The eventual solution, designed primarily by Richard Cobbald, was entirely transistor-based, and later patented.
Another improvement introduced in their core design involved the handling of the read wire. Reading a location in core works by powering the address in question, as if you wanted to write a "1" to that location. If the core was already holding a "1" nothing will happen. However, if the core was holding a "0", the power will cause the core to change polarity to a "1". The small amount of energy used to do this causes a pulse to be output on a different wire, the read line. So to read data, you write "1" to that location, if a pulse is seen on the read line the location originally held "0", and no pulse means it held "1".
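Conceptually, the destructive read just described can be sketched as follows (an illustrative model in C++, not DRTE circuitry): the controller drives a "write 1", watches the sense line for a pulse, and rewrites the original value if it was destroyed.

```cpp
// Illustrative model of a destructive core read: a pulse on the sense wire
// means the core flipped from 0 to 1, so the stored bit was 0.
bool read_core_bit(bool& core) {
    bool sense_pulse = (core == false);  // flipping 0 -> 1 induces the pulse
    core = true;                         // after the drive, the core holds 1 either way
    bool stored_value = !sense_pulse;
    if (!stored_value)
        core = false;                    // rewrite cycle: restore the original 0
    return stored_value;
}
```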
One problem with this system is that other cores on the same lines (X or Y) will give off a very small signal as well, potentially masking the signal being looked for. The conventional solution was to wire the read line diagonally back and forth through the plane, so that these smaller signals would cancel out—the positive signal from one would be a negative signal from the next as the wire passed through it in the opposite direction. However this solution also made wiring the core fairly difficult, and considerable amounts of research went into various ways to improve the cost of wiring core.
Cobbald's design made what in retrospect seems like an obvious change; the read wire was threaded across the planes instead of one per plane. In this system the read wire really did pass through only one set of powered lines, and the problems of the "extra signal" were avoided completely. It is not entirely surprising that this solution was not hit on before; cores were constructed a plane at a time and then wired together, whereas this method required the entire core to be built before the read wires could be added. The only major downside to the design is that it required more power to run.
Input/Output
I/O devices on the DRTE design were extremely limited, consisting of a Flexowriter for output, and a paper tape reader at about 600 CPS for input. In particular, the system added a hardware binary-to-decimal/decimal-to-binary converter that was implemented inline with the I/O systems. This allowed the paper tape to be punched in decimal codes which would be converted invisibly into binary and stored in memory while being read. The reverse was also true, allowing the machine to print the contents of memory directly to tape again. The system was tuned so that the machine could read or write data essentially for free; that is, the system could read and store data exactly as fast as the paper tape could feed it.
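Functionally, the inline decimal-to-binary path amounted to accumulating tape digits straight into a binary word as they streamed in, so no separate conversion pass was needed; a minimal sketch (hypothetical code, not the hardware logic) is:

```cpp
#include <cstdint>
#include <string>

// Hypothetical illustration: decimal digits read from paper tape accumulate
// directly into a binary word, mirroring the inline hardware converter.
uint64_t decimal_tape_to_binary(const std::string& digits) {
    uint64_t word = 0;
    for (char c : digits)
        word = word * 10 + static_cast<uint64_t>(c - '0');
    return word;  // e.g. "1023" -> binary 1111111111
}
```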
The system also offered a crude sort of assembler language support. Using the shift key, characters entered into the system represented mnemonics instead of numerical data, which would then be translated differently. For instance, the letters "AA" would add two floating point numbers, the numbers being stored in the two decimal addresses following. While being read, the paper tape's shift column would signal the BDC decoder to ignore the next codes.
The hardware implementation eventually revealed itself as an anti-feature. If one assumed that all the data being read and written was a decimal representation of binary data the system made perfect sense, but if the data was in some other form, more complex assembler language character codes for instance, it ended up simply adding complexity that then had to be turned off. The system was eventually removed when assembler programming became common. It also seriously limited the sorts of devices that could be hooked up, due to the careful tuning of the interface speed.
Further development and use
Parallel math unit
As soon as the prototype math unit was completed in 1957, a new unit that operated on an entire word in parallel was started. This new unit was ready around the same time as the "full version" of the machine (1960–61) and was later retrofitted into the design. This improved speeds by about ten times; for instance, a floating-point add improved from 300 μs to only 40, multiplication from 2200 to 180 μs, and a square root from 5300 to 510 microseconds. Integer math was likewise improved by about the same factor, although "complex" arithmetic like multiplication remained in code as opposed to hardware. With the new math unit the machine was faster than the average contemporary system, although slower than "high end" machines like the IBM 7090 by about two to five times.
As with any research machine, the DRTE system was used for a number of "household" calculations, as well as the development of a number of simple computer games. These included tic-tac-toe and hangman, as well as a simple music generator that could play the Colonel Bogey March by attaching a speaker to a particular flip-flop.
DAR
In the late 1950s the US was in the midst of rolling out the SAGE system, and became interested in the effects of aurora borealis on radar operation. An agreement was eventually signed between the DRB and US Air Force, with the Air Force providing two million dollars to build a radar research center modelled on MIT's Lincoln Laboratory, which had provided much of the US technical lead in radar systems.
The DRB proposed a site between five and six hundred miles from the Churchill Rocket Research Range, which was already being used for extensive aurora research with their rocketry program. Such a location would allow the radars to directly measure the effects of aurora on radar by tracking the rocket launches. Eventually a site outside Prince Albert, Saskatchewan, was selected; it has been suggested this was due to it being Prime Minister John Diefenbaker's home riding. The new site was opened in June 1959, known as the Prince Albert Radar Laboratory, or PARL.
In order to quickly record data during test runs, the DRTE built a custom system known as DAR, the Digital Analyzer and Recorder. DAR was a fairly high-priority project, and some of the manpower originally working on the DRTE computer were put on DAR instead. The machine itself consisted of a non-programmable computer that read the data into 40,000 bits of core memory, tagged it with timecode and other information, and then wrote it to magnetic tape. DAR was used for a number of years, and had to be rebuilt after a fire in 1962.
Alouette
In 1958 the DRB sent a proposal to NASA to launch a "topside sounder", which would take measurements of the Earth's ionosphere from space. This was a topic of some importance at the time; the DRB was conducting a major ionospheric research program in order to build a very-long-distance communications system (which would later be used on the Mid-Canada Line and DEW Line). The various US agencies that commented on the system were highly sceptical that the DRB could build such a device, but suggested they do so anyway as a backup to their own much simpler design. In the end the US design ran into lengthy delays, and the "too advanced" Canadian design was eventually launched in 1962 as Alouette I.
While Alouette was being designed, a major question about the lifetime of the solar cells powering the system came to be solved on the DRTE computer. They developed a program that simulated the effects of precession on the satellite's orbit, and used this information to calculate the percentage of time that sunlight fell on it. The result proved the system would have more than enough power. While it was designed with a lifetime of only one year, Alouette I eventually ran for ten years before being shut off.
The computer was also put into use generating tracking commands for the receiver dish antenna in Ottawa that downloaded data from Alouette. The antenna could not track through "straight up", and had to be rotated 180 degrees to track back down to the opposite horizon. The movement was controlled by a simple system reading a paper tape. The computer produced tapes so the dish would be slowly rotated as it tracked the satellite, thereby guaranteeing no "dead time". Eventually a library of tapes was built up for any possible pass.
References
External links
The DRTE Computer - in-depth article on the machine and its history. This article is based on an unpublished report for the Canada Science and Technology Museum, written by John Vardalas in 1985.
Dirty Gertie: The DRTE Computer - smaller article by Linda Petiot
The Prince Albert Radar Laboratory
One-of-a-kind computers
Transistorized computers |
11625077 | https://en.wikipedia.org/wiki/Threading%20Building%20Blocks | Threading Building Blocks | oneAPI Threading Building Blocks (oneTBB; formerly Threading Building Blocks or TBB), is a C++ template library developed by Intel for parallel programming on multi-core processors. Using TBB, a computation is broken down into tasks that can run in parallel. The library manages and schedules threads to execute these tasks.
Overview
A oneTBB program creates, synchronizes, and destroys graphs of dependent tasks according to algorithms, i.e. high-level parallel programming paradigms (a.k.a. Algorithmic Skeletons). Tasks are then executed respecting graph dependencies. This approach groups TBB in a family of techniques for parallel programming aiming to decouple the programming from the particulars of the underlying machine.
oneTBB implements work stealing to balance a parallel workload across available processing cores in order to increase core utilization and therefore scaling. Initially, the workload is evenly divided among the available processor cores. If one core completes its work while other cores still have a significant amount of work in their queue, oneTBB reassigns some of the work from one of the busy cores to the idle core. This dynamic capability decouples the programmer from the machine, allowing applications written using the library to scale to utilize the available processing cores with no changes to the source code or the executable program file. In a 2008 assessment of the work stealing implementation in TBB, researchers from Princeton University found that it was suboptimal for large numbers of processor cores, causing up to 47% of computing time to be spent in scheduling overhead when running certain benchmarks on a 32-core system.
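As a rough illustration of this model in user code (a minimal sketch assuming the oneapi/tbb headers of a current oneTBB installation; the data and loop body are arbitrary), a loop is expressed as a task over a splittable range, which the scheduler divides into chunks that idle workers can steal:

```cpp
#include <oneapi/tbb/parallel_for.h>
#include <oneapi/tbb/blocked_range.h>
#include <cstddef>
#include <vector>

int main() {
    std::vector<double> data(1'000'000, 2.0);
    // The blocked_range is recursively split into sub-ranges; each sub-range
    // becomes a task that any worker thread may execute or steal.
    oneapi::tbb::parallel_for(
        oneapi::tbb::blocked_range<std::size_t>(0, data.size()),
        [&](const oneapi::tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                data[i] *= data[i];
        });
    return 0;
}
```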
oneTBB, like the STL (and the part of the C++ standard library based on it), uses templates extensively. This has the advantage of low-overhead polymorphism, since templates are a compile-time construct which modern C++ compilers can largely optimize away.
oneTBB is available commercially as a binary distribution with support, and as open-source software in both source and binary forms.
oneTBB does not provide guarantees of determinism or freedom from data races.
Library contents
oneTBB is a collection of components for parallel programming:
Basic algorithms: parallel_for, parallel_reduce, parallel_scan (a brief usage sketch follows this list)
Advanced algorithms: parallel_pipeline, parallel_sort
Containers: concurrent_queue, concurrent_priority_queue, concurrent_vector, concurrent_hash_map
Memory allocation: scalable_malloc, scalable_free, scalable_realloc, scalable_calloc, scalable_allocator, cache_aligned_allocator
Mutual exclusion: mutex, spin_mutex, queuing_mutex, spin_rw_mutex, queuing_rw_mutex, recursive_mutex
Timing: portable fine grained global time stamp
Task scheduler: direct access to control the creation and activation of tasks
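A brief sketch of two of these components (illustrative only, assuming the same oneapi/tbb headers as in the earlier example): parallel_reduce combines per-chunk partial results, and concurrent_vector accepts concurrent push_back calls from parallel tasks.

```cpp
#include <oneapi/tbb/parallel_reduce.h>
#include <oneapi/tbb/parallel_for.h>
#include <oneapi/tbb/blocked_range.h>
#include <oneapi/tbb/concurrent_vector.h>
#include <cstddef>
#include <vector>

int main() {
    std::vector<int> v(100'000, 1);

    // parallel_reduce: each sub-range computes a partial sum; partials are then combined.
    int total = oneapi::tbb::parallel_reduce(
        oneapi::tbb::blocked_range<std::size_t>(0, v.size()), 0,
        [&](const oneapi::tbb::blocked_range<std::size_t>& r, int acc) {
            for (std::size_t i = r.begin(); i != r.end(); ++i) acc += v[i];
            return acc;
        },
        [](int a, int b) { return a + b; });

    // concurrent_vector: push_back is safe to call from many tasks at once.
    oneapi::tbb::concurrent_vector<std::size_t> evens;
    oneapi::tbb::parallel_for(std::size_t(0), v.size(), [&](std::size_t i) {
        if (i % 2 == 0) evens.push_back(i);
    });

    return total == static_cast<int>(v.size()) ? 0 : 1;
}
```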
Systems supported
The hardware, operating system, and software prerequisites for oneTBB.
Supported Hardware
Intel Celeron processor family
Intel Core processor family
Intel Xeon processor family
Intel Xeon Phi processor family
Intel Atom processor family
Non-Intel processors compatible with the processors above
Supported Operating Systems
Systems with Microsoft Windows operating systems:
Microsoft Windows 10
Microsoft Windows Server 2016
Microsoft Windows Server 2019
Systems with Linux operating systems:
Clear Linux
Amazon Linux 2
CentOS 8
Debian 10
Fedora 34
Red Hat Enterprise Linux 7, 8
SuSE Linux Enterprise Server 15
Ubuntu 18.04 LTS, 20.04, 21.04
Systems with macOS operating systems:
macOS 10.15, 11.x
Systems with Android operating systems:
Android 9
Supported Compilers
Intel oneAPI DPC++/C++ Compiler
Intel C++ Compiler 19.0 and 19.1 version
Microsoft Visual C++ 14.2 (Microsoft Visual Studio 2019, Windows OS only)
GNU Compilers (gcc) 4.8.5 - 11.1.1
GNU C Library (glibc) version 2.17 - 2.33
Clang 6.0.0-12.0.0
See also
Intel oneAPI Base Toolkit
Intel Integrated Performance Primitives (IPP)
Intel oneAPI Data Analytics Library (oneDAL)
Intel oneAPI Math Kernel Library (oneMKL)
Intel Advisor
Intel Inspector
Intel VTune Profiler
Intel Concurrent Collections (CnC)
Algorithmic skeleton
Parallel computing
List of C++ multi-threading libraries
List of C++ template libraries
Parallel Patterns Library
Grand Central Dispatch (GCD)
Notes
References
External links
oneTBB Industry Specification
at Intel
Concurrent programming libraries
Application programming interfaces
C++ programming language family
Generic programming
Threads (computing)
C++ libraries
Intel software |
17180190 | https://en.wikipedia.org/wiki/Baltimore%20Steam%20Packet%20Company | Baltimore Steam Packet Company | The Baltimore Steam Packet Company, nicknamed the , was an American steamship line from 1840 that provided overnight steamboat service on the Chesapeake Bay, primarily between Baltimore, Maryland, and Norfolk, Virginia. Called a "packet" for the mail packets carried on government mail contracts, the term in the 19th century came to mean a steamer line operating on a regular, fixed daily schedule between two or more cities. When it closed in 1962 after 122 years of existence, it was the last surviving overnight steamship passenger service in the United States.
In addition to regularly calling on Baltimore and Norfolk, the Baltimore Steam Packet Company at various times provided freight, passenger and vehicle transport to Washington, D.C., Old Point Comfort, and Richmond, Virginia. The Old Bay Line, as it came to be known by the 1860s, was acclaimed for its genteel service and fine dining, serving Chesapeake Bay specialties. Walter Lord, famed author of A Night to Remember (and whose grandfather had been the packet line's president from 1893 to 1899), mused that its reputation for excellent service was attributable to "some magical blending of the best in the North and the South, made possible by the Company's unique role in 'bridging' the two sections ... the North contributed its tradition of mechanical proficiency, making the ships so reliable; while the South contributed its gracious ease".
In 1947 a former Old Bay Line steamship, President Warfield, became Exodus 1947, carrying Jewish refugees from Europe in an unsuccessful attempt to emigrate to Mandatory Palestine. The voyage was commemorated in a book in 1958 and a movie in 1960.
History
Just seven years after Robert Fulton proved the commercial viability of steam-powered ships with his North River Steamboat (more commonly known today as Clermont) in 1807, small wood-burning steamers began to ply the Chesapeake Bay. Before the arrival of railroads and river steamboats in the early 19th century, overland travel was exceedingly slow and tedious. Rivers were the main means of transportation and most cities were founded on them. This was especially so in North America, where journeys over vast distances of hundreds or even thousands of miles required months of hazardous, uncomfortable travel by stagecoach or wagon on rutted, unpaved trails. In the 1830s, railroads were being built, but the technology was crude and average passenger train speed was only . Perhaps more importantly, most early railroads did not connect. It would be many years before the various lines were knitted together to make intercity rail travel in the U.S. a reality. Not until 1863, for example, was it possible to travel between New York City and Washington, D. C., without changing trains en route.
In this period, steamships on rivers such as the Ohio and Mississippi or large inland bodies of water such as the Great Lakes and the Chesapeake Bay offered a comfortable and relatively fast mode of transportation. The first steamboat to serve Baltimore was the locally built Chesapeake, constructed in 1813 to link Baltimore with Philadelphia, Pennsylvania. Operated by the Union Line, the boat connected with a stagecoach for the overland portion of the journey. Two years later, the Briscoe-Partridge Line's Eagle was the first steamboat to sail the length of Chesapeake Bay.
The direct ancestor of the Baltimore Steam Packet Company was the Maryland & Virginia Steam Boat Company formed in 1828 to link Baltimore, Richmond, and Norfolk, traversing the Chesapeake Bay and the James River. By 1839, the Maryland & Virginia was heavily in debt from the purchase of two new, large ships the year before: the long Alabama and the Jewess. Alabama was costly to operate and proved impractical for Chesapeake Bay operations, causing the bankruptcy of the Maryland & Virginia later that year.
1840s–1850s
When the Maryland & Virginia collapsed in late 1839, the Maryland legislature convened to grant a charter to the Baltimore Steam Packet Company, organized in Baltimore to provide overnight steamship service on the Chesapeake Bay. The company's incorporators were Benjamin Bush, Andrew F. Henderson (who became the line's first president), John B. Howell, Thomas Kelso (who would become a director of the line), John S. McKim, Samuel McDonald, Gen. William McDonald, Robert A. Taylor, and Joel Vickers, all of Baltimore.
The company was granted a 20-year charter on March 18, 1840, by the Maryland legislature and then acquired three of the former Maryland & Virginia's steamboats: Pocahontas, Georgia, and Jewess. The company began overnight paddlewheel steamship passenger and freight service daily except Sundays between Baltimore and Norfolk. By 1848, the company's steamship Herald was making the trip in less than 12 hours, a time which the line would maintain until the end in 1962. An affiliate, the Powhatan Line, started service between Norfolk and Richmond in 1845, interchanging freight and passengers with the Old Bay Line.
By the 1850s, competition was keen as steamships grew in size and efficiency to serve the fast-growing nation. The Old Bay Line, in particular, served as a link between the antebellum South and northern markets, hauling large quantities of cotton north and manufactured goods south, along with a thriving passenger business between Baltimore and Norfolk. Railroads also began acquiring steamship lines in the 1850s, and the Seaboard & Roanoke Railroad, a predecessor of the Richmond, Fredericksburg and Potomac Railroad (RF&P), acquired a controlling interest in the Baltimore Steam Packet Company in 1851. As competitors entered the field, each line vied to outdo its competitors in the luxurious appointments of their ships' staterooms and dining service. The company acquired newer and larger ships in the 1850s, such as North Carolina in 1852 and Louisiana in 1854, the latter at in length being the largest wooden vessel the company would own. A passenger on Georgia was effusive in his description of an overnight trip in 1853:
North Carolina similarly impressed a Baltimore Patriot reporter in 1852, who described the ship's dining saloon as "having imported Belgian carpets, velvet chairs with marble-topped tables, and white panelling with gilded mouldings".
On February 20, 1858, the northbound steamer Louisiana collided with a sailing vessel hauling a catch of oysters, William K. Perrin, causing the sailboat to founder near the mouth of the Rappahannock River. In a case that reached all the way to the U.S. Supreme Court, Haney et al. v Baltimore Steam Packet Company, Louisiana was found to be at fault. The high court considered the rules of the sea pertaining to steamers and sailing ships approaching one another and concluded (with Chief Justice Roger B. Taney dissenting) that "entire disregard of these rules of navigation by the steamer" caused the collision, reversing a Circuit Court ruling.
North Carolina burned on January 29, 1859, when a fire started in a passenger stateroom. She sank the following day with the loss of two lives. The following month, the line acquired to replace the lost steamer.
1860s–1910s
The outbreak of the Civil War in April 1861 immediately affected the Baltimore Steam Packet Company. On April 19, two days after Virginia's secession, a violently pro-Southern mob in Baltimore attacked Union soldiers en route to Washington, D.C. as the troops marched through the city's streets between railroad stations. Thereafter known as the Baltimore riot of 1861, the resulting loss of life and local unrest also threatened the , a U.S. Navy ship in Baltimore at the time. Later that same day, the Baltimore Steam Packet Company declined to transport Union forces from Baltimore to the beleaguered Union naval yard facility at Portsmouth, Virginia.
Two weeks later, on May 7, the US Navy chartered Adelaide and attached her to the Atlantic Blockading Squadron. In that role, she was used to transport Federal troops in support of operations in North Carolina's Outer Banks, directed against the Confederate-held forts guarding Hatteras Inlet. Later that year, Adelaide was returned to the Baltimore Steam Packet Company.
As a steamship line connecting northern cities and the south, the Old Bay Line hauled a considerable volume of freight between the two regions and their ships' cargo holds were filled with bales of cotton, produce, and other goods. When hostilities commenced, Southern ports were blockaded by the Federal Navy and the Old Bay Line was unable to serve Norfolk for the duration of the war, going no further south than Old Point Comfort. Passenger traffic as well as cargo shipping declined significantly. The Powhatan Line discontinued operations altogether between Norfolk and Richmond until the war's end.
As soon as the war ended in 1865, the Leary Line of New York briefly challenged the Baltimore Steam Packet Company on the Chesapeake, starting its own Baltimore-Norfolk steamship service. A fare war ensued, with one-way prices reduced to $3.00. Emphasizing the longevity of its service compared to its upstart rival, the Baltimore Steam Packet Company began referring to itself as the "Old Established Bay Line" in advertising, a moniker that would soon become simply the Old Bay Line for the next century.
The Leary Line withdrew in January 1867, selling its George Leary to the Old Bay Line. Two years later, the Norfolk Journal of August 2, 1869, described the vessel as having a "gorgeous style of furniture and elegant fittings ... magnificently furnished with upholstered sofas and lounges of rich red velvet ...". Another competitor, the Chesapeake Steamship Company, began directly competing on the Baltimore-Norfolk route in 1874. Controlled by the Southern Railway, a rival of the RF&P, it would be a formidable competitor until 1941, when the two steamship lines merged. Cargo traffic was also booming in the 1870s as the South recovered from the Civil War, resulting in the Old Bay Line's freight revenue surpassing passenger revenue by the end of the decade.
By the time of John Moncure Robinson's retirement as president of the company in 1893, the Old Bay Line had upgraded its fleet with propeller-driven, steel-hulled steamers equipped with modern conveniences such as electric lighting and staterooms with private baths. Georgia, introduced in 1887, was the first Old Bay Line boat to have a modern screw propeller instead of old-fashioned side paddlewheels, and Alabama, launched in 1893, was the company's first steel-hulled vessel. Robinson served the Old Bay Line as president for 26 years (1867–1893), longer than any other person in the company's history.
The halcyon days of the 1890s were the company's heyday, under president Richard Curzon Hoffman (the grandfather of noted author Walter Lord), when the prosperous line's gleaming steamships were heavily patronized by passengers enjoying the well-appointed staterooms and Chesapeake Bay culinary delights while dining to the accompaniment of live music. The nightly menu on board included oyster fritters, diamondback terrapin, duck, and turkey.
The company built a new terminal and headquarters in Baltimore on Light Street in 1898 to accommodate the increasing traffic. Rebuilt after the Great Baltimore Fire of 1904, the building with its four-sided clocktower would be a landmark for decades on Baltimore's Inner Harbor waterfront. (The location of the now-demolished terminal is between the present Harborplace and Maryland Science Center.)
The Richmond, Fredericksburg and Potomac Railroad, which had first acquired a controlling interest in the Baltimore Steam Packet Company in 1851, gained total control of the company's stock on September 5, 1901. The Old Bay Line continued to be managed separately from the RF&P, however.
World War I and aftermath
In contrast to the Civil War, when hostilities sharply curtailed business on the Old Bay Line, World War I doubled freight and passenger business on the line to the busy ports of Norfolk and the Hampton Roads area, with 107,664 passengers using the line in 1917. As a result of congestion on the nation's railroads and ports when the U.S. entered the war in April 1917, the Federal government established the wartime U.S. Railroad Administration (USRA) to take charge of railroads and steamship companies, including the Baltimore Steam Packet Company. The USRA directed the operations of the Old Bay Line and the rival Chesapeake line for the duration of the war and more than a year thereafter, until March 1, 1920.
Baltimore-native John Roberts Sherwood, who had joined the Old Bay Line as a 22-year-old engineer in 1868 and became president in 1907, retired in October 1918 after 49 years with the company. The Baltimore Sun extolled Sherwood's distinguished half-century of service to the steamer line when he retired, noting approvingly that his oft-expressed philosophy was, "Stand up for your home city wherever you may go." (His son, John W. Sherwood, founded Baltimore's celebrated Sherwood Gardens in the mid-1920s.) Sherwood was succeeded by S. Davies Warfield as president (1918–1927).
Catastrophe struck the Old Bay Line on May 24, 1919, when Virginia II caught fire shortly after midnight in the middle of Chesapeake Bay with 156 passengers and a crew of 82 on board. The ship burned completely as many passengers jumped overboard and a lifeboat capsized. The Chesapeake Line's City of Norfolk and other vessels came to the rescue and pulled people from the water to safety. Virginia's captain, Walter Lane, remained with his ship to the end and suffered burns.
1920s–1930s
The corporate ownership of the Baltimore Steam Packet Company changed again in 1922, when the Seaboard Air Line Railroad (SAL) formed the Seaboard–Bay Line Company, which owned all of the outstanding shares of the Baltimore Steam Packet Company, making the steamship company a wholly owned subsidiary of the SAL on February 6, 1922. In addition to the infusion of capital from the SAL, the Old Bay Line also obtained a $4.4 million federal loan to build two new steamers for the Old Bay Line: State of Maryland and State of Virginia. The Old Bay Line's president, S. Davies Warfield, was named president of SAL railroad as well as the Old Bay Line in 1922.
In 1928, the Baltimore Steam Packet Company took delivery of two more new ships – President Warfield and Yorktown. President Warfield, built by Pusey and Jones Corp. in Wilmington, Delaware, was named for the Old Bay Line's president of the time, S. Davies Warfield. She would be the last new ship built for the Old Bay Line.
As the new-fangled Ford Model Ts and other early automobiles increasingly took to the roads in the 1920s, inland steamship lines in the U.S. initially resisted carrying automobiles on their boats. By the Depression-ravaged 1930s, however, the Old Bay Line became one of the first inland steamship companies to promote the carriage of automobiles as a means of filling its ships' empty cargo holds. The Depression and loss of business to improved highways took an increasing toll on many U.S. steamship lines in the 1930s, as historic companies such as the Fall River Line ceased operation in 1937, preceded by the Lake Champlain company, which was the oldest steamboat line in the U.S. at its demise in 1932.
Fortunately for the Old Bay Line, its freight and passenger traffic remained relatively strong in the 1930s and the company embarked on a modernization program for its main boats of the line. President Warfield and State of Maryland were converted from coal to oil burning in 1933 and had sprinkler systems installed in 1938. In 1939, State of Virginia was converted to oil burning and all three ships were equipped with radio direction finders and ship-to-shore telephones.
1940s
As the Old Bay Line celebrated its centennial in 1940 with parades and other events in Baltimore, the company's future seemed bright. Business was steady and the company's facilities were in sound condition. Commemorative dinner plates in blue and pink decorated with a map of the Chesapeake Bay were introduced.
On June 14, 1941, the Baltimore Steam Packet Company's owner, the Seaboard Air Line Railroad, entered into an agreement with a consortium of railroads and steamship companies to merge the Chesapeake Steamship Company into the Old Bay Line. The railroad group, consisting of the Atlantic Coast Line Railroad, Southern Railway, and the SAL, together controlled the Baltimore Steam Packet Company and the Chesapeake Steamship Company. As a result, the Old Bay Line took over the Chesapeake Line's business and assets and became the sole operator of passenger and freight steamship transportation between the important ports of Baltimore and Norfolk. As part of this agreement, half of the outstanding shares of the Baltimore Steam Packet Company were assigned to Chesapeake Steamship Company, which was one-third owned by Southern Railway and two-thirds owned by the Atlantic Coast Line Railroad. With the amalgamation, two of the Chesapeake Line's steamships, City of Norfolk and City of Richmond, were transferred to the Old Bay Line. As it turned out, these would be the last two vessels operated by the Old Bay Line when it went out of business in 1962. Robert E. Dunn was named president of the Old Bay Line in 1941, remaining at the helm of the company to the end of service in 1962.
World War II
After the United States entered World War II on December 7, 1941, the Federal government set up the War Shipping Administration to manage the vitally important maritime shipping and Naval support needs of the U.S. and its Allies, including the power to expropriate civilian-owned boats. On April 1, 1942, the government acquired the Old Bay Line's State of Virginia and State of Maryland. On July 13, President Warfield and Yorktown were also taken over. Thus, by mid-1942, four of the Old Bay Line's six ships had become government property, leaving the company only the two oldest and smallest ships in its fleet for the duration of the war, City of Norfolk and City of Richmond.
Postwar and Exodus
After World War II, the line promoted its automobile service to Florida-bound motorists, advertising the elimination of of driving by taking the family car on an overnight cruise down the Chesapeake to Virginia, while enjoying a sumptuous dinner and relaxing stateroom aboard an Old Bay Line steamer instead of a roadside motel. In March 1946, the Old Bay Line installed radar on City of Richmond and City of Norfolk, the first commercial passenger ships to be equipped with radar.
After President Warfield was expropriated in 1942 by the War Shipping Administration for national defense as a transport in World War II, she was transferred to the United Kingdom on September 21, 1942. Later in the war, she was returned to the US Navy and commissioned as USS President Warfield on May 21, 1944. Following the end of World War II, President Warfield was decommissioned and returned to the War Shipping Administration for disposal as surplus. After inspecting President Warfield, Old Bay Line officials decided that the expense for reconditioning the badly deteriorated ship was excessive, and accepted a cash settlement from the War Shipping Administration instead of taking back the war surplus vessel.
The old President Warfield was eventually acquired in early 1947 by Mossad Le'aliyah Bet, a Jewish organization helping Holocaust survivors illegally reach Mandatory Palestine, then under British rule. She was renamed Exodus 1947 when she embarked from France for Palestine on July 11, 1947, carrying 4,515 passengers. Two Royal Navy destroyers rammed her as she entered Palestinian waters near Haifa on July 18. British forces boarded the damaged ship and eventually deported her passengers. Exodus remained in Haifa harbor until 1952, when the derelict caught fire and burned completely. The 1960 film Exodus depicted the refugees' odyssey aboard her.
1950s and demise
The Bay Line's Light Street terminal and headquarters building in Baltimore, where it had been located since 1898, were sold to the city in October 1950 for the widening of Light Street and later development as the acclaimed Inner Harbor waterfront festival marketplace. The company relocated to a pier on Pratt Street at the foot of Gay Street, where it remained until it went out of business in 1962.
Various travel writers in the 1950s extolled the pleasures of the nightly cruises and meals on the Old Bay Line's antique steamers. By the mid-1950s, however, improved highways and the increase in air travel meant that the Old Bay Line's 12-hour transit time between Baltimore and Norfolk was a comparatively slow means of transportation. Old Bay Line officials hoped that the steamship line's unique service might continue to appeal to travellers seeking the pleasures of a cruise on the scenic Chesapeake with fine dining en route and a well-furnished, private stateroom. The Sunday travel section of The New York Times in 1954 featured the "long established, more leisurely water route across Chesapeake Bay", as the writer described the Old Bay Line, recommending "the boat trip can be made comfortably and comparatively inexpensively every night between Baltimore, Old Point Comfort and Norfolk, and on alternate nights between Washington, D. C., and the Virginia communities".
In the end too few people opted for this leisurely form of travel and passenger volume steadily declined. As deficits rose during the 1950s, the Old Bay Line began cutting back. On September 30, 1957, it abandoned service to Washington, D.C., discontinuing its Washington–Norfolk overnight service on the Potomac River. By 1960, the Old Bay Line reduced operation of its mainstay Baltimore–Norfolk route to freight service only during the lightly travelled winter months of October–April, eliminating all passenger service on the Chesapeake Bay during those months. In October 1961, the company announced that its passenger service was "temporarily suspended until further notice", indicating that resumption of passenger service was expected the following summer season beginning in April 1962. Finally, on April 14, 1962, the venerable Old Bay Line discontinued all operations entirely, ending one of the last remaining overnight steamship passenger services in the United States (The Georgian Bay Line still operated out of Georgian Bay along with Canadian Pacific and Canada Steamship Lines but those companies engaged in purely cruising and all were out of service by 1967). The following month, the stockholders of the Baltimore Steam Packet Company formally voted on May 25, 1962, to liquidate the 122-year-old corporation.
Routes operated
The routes over which the Baltimore Steam Packet Company operated passenger, mail, and freight service on a scheduled basis were:
Old Bay Line fleet
The company owned 54 ships during its 122 years of existence, many being small cargo vessels. Originally, all of the line's steamboats were of wooden construction with side paddlewheels and used wood logs for fuel. The first boat with an iron hull acquired by the Old Bay Line was Georgeanna, in 1860. By the late 1870s, the company had acquired its last paddlewheel steamers: Florida, Carolina, and Virginia. Later, ships would use coal for fuel until the 1930s, when oil began to be used. Beginning with Georgia built in 1887, their ships used the more modern propeller or "screw" design. Georgia also was the first Old Bay Line vessel to be equipped with electric lighting and steam heating. Passenger ships of the line provided large, lavishly furnished staterooms to accommodate passengers on the overnight trip. Alabama built in 1892 represented the inception of modern shipbuilding and design for the Old Bay Line: the first vessel to have a steel hull instead of iron or wood and propelled by a four-cylinder triple-expansion reciprocating engine, the same type engine that all of the line's later steamers would have. Notable Old Bay Line passenger vessels used in scheduled overnight service, with dates acquired and gross tonnages, were:
At the time of the Old Bay Line's dissolution in April 1962, three ships remained docked at the Pratt Street pier: District of Columbia, which had been kept as a spare since the Washington–Norfolk service ended in 1957, was scrapped soon afterwards. City of Richmond was sold for use as a floating restaurant in the Virgin Islands, but sank in the Atlantic Ocean off Georgetown, South Carolina, while under tow to her new home. City of Norfolk was idled in Norfolk until 1966, when it was towed to Fieldsboro, New Jersey on the Delaware River and scrapped.
See also
Notes
References
Shipping companies of the United States
Packet (sea transport)
History of Baltimore
American companies established in 1840
Transport companies disestablished in 1962
1840 establishments in Maryland
1962 disestablishments in Maryland
Transport companies established in 1840 |
9629153 | https://en.wikipedia.org/wiki/Authoring%20system | Authoring system | An authoring system is a program that has pre-programmed elements for the development of interactive multimedia software titles. Authoring systems can be defined as software that allows its user to create multimedia applications for manipulating multimedia objects.
In the development of educational software, an authoring system is a program that allows a non-programmer, usually an instructional designer or technologist, to easily create software with programming features. The programming features are built in but hidden behind buttons and other tools, so the author does not need to know how to program. Generally, authoring systems provide extensive graphics, interactivity, and the other tools educational software needs. The three main components of an authoring system are: content organization, control of content delivery, and type(s) of assessment. Content organization allows the user to structure and sequence the instructional content and media. Control of content delivery refers to the ability for the user to set the pace at which the content is delivered, and how learners engage with the content. Assessment refers to the ability to test learning outcomes within the system, usually in the form of tests, discussions, assignments, and other activities which can be evaluated.
An authoring system usually includes an authoring language, a programming language built (or extended) with functionality for representing the tutoring system. The functionality offered by the authoring language may be programming functionality for use by programmers or domain representation functionality for use by subject experts. There is overlap between authoring languages with domain representation functionality and domain-specific languages.
Authoring language
An authoring language is a programming language used to create tutorials, computer-based training courseware, websites, CD-ROMs and other interactive computer programs. Authoring systems (packages) generally provide high-level visual tools that enable a complete system to be designed without writing any programming code, although the authoring language is there for more in-depth usage.
Examples of authoring languages
DocBook
DITA
PILOT
TUTOR
Examples of Web authoring languages
Bigwig
See also
Chamilo
Hollywood (programming language) with its Hollywood Designer graphical interface.
Learning management system
SCORM
Experience API
Web design program
XML editor
Game engine
References
External links
Authoring system at IFWiki
Locatis, C., Ullmer, E., Carr, V. et al. "Authoring systems: An introduction and assessment." J. Comput. High. Educ. 3, 23–35 (1991). https://doi.org/10.1007/BF02942596
Kearsley, Greg. "Authoring Systems in Computer Based Education." Communications of the ACM, Volume 25, Issue 7, July 1982, pp 429–437. https://doi.org/10.1145/358557.358569
Learning
E-learning
Educational software |
13074220 | https://en.wikipedia.org/wiki/Panorama%20%28typesetting%20software%29 | Panorama (typesetting software) | Panorama is a line layout and text composition engine to render text in various worldwide languages made by Bitstream Inc. Panorama uses Font Fusion as the base to support rendering of the text. The engine allows the user to manage different text formatting aspects like spacing, alignment, style effects (bold, embossed, outline, drop shadows etc.).
Panorama provides support for OpenType font tables leading to automatic character substitution for ligatures, swashes, scientific figures, etc. Panorama supports three anti-aliasing modes - monochrome, grayscale, and LCD optimized (Horizontal and Vertical).
Version history
Panorama has undergone several changes since its initial release as well as numerous additions of APIs to the core engine.
Features
Support for Thai shaping and OpenType rules.
Enhanced support for the Unicode line breaking algorithm.
Better support for TV screens.
Enhanced font weight management and formatting support with font ratio, shadow width and shadow color.
Unicode Compliance — Full layout support for Unicode 5.0 and all international languages including complex scripting languages, such as Arabic, Indic, and Thai.
Supports the bi-directional algorithms required to reorder characters for display. For example, in languages such as Arabic, Hebrew, and Urdu, characters may be entered on a keyboard in one (logical) order but must be rendered in the correct visual order on the display device.
Contextual Shaping — Applies contextual shaping to the characters, i.e., the characters are substituted, combined, or repositioned depending on the rules of the language.
Composes text in all worldwide languages, which includes various complex scripting languages such as, Arabic, Indic, and Hebrew.
Supports key OpenType tables required for line layout such as, BASE, glyph definition (GDEF), glyph positioning (GPOS), and glyph substitution (GSUB).
Supports kerning information in OpenType fonts.
Text on Path — Enables text rendering along a path, outline, or a predefined shape.
Font Mapping — Supports script-based font mapping enabling the application to support multiple scripts at a single instance.
Style Mapping — Allows grouping of style-linked fonts to be treated as a single font. The engine "knows" to access a font’s own true-drawn style when you apply styles from the style menu.
Unicode Mapping — Supports automatic font switching based on the Unicode values of the text stream to be rendered (a generic illustration of this kind of mapping appears after this feature list).
Unicode-Image Mapping — Enables the developers to map a Unicode sequence to any image.
Paragraph Styling — Supports paragraph-specific formatting attributes including text alignment, letter/line spacing, and indentation functions.
Termination style — Allows the application to end truncated text with an ellipsis-style termination when the string does not fit inside the designated area.
Inline Images — Supports floating graphic object types that are inline with the text.
Rich-text Editing features, such as space wrap, tab stops, and dynamic property changes for inter-character space, line indents, and line gaps.
Supports industry-standard color formats, including monochrome, RGB, and BGR, with alpha channel support.
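As a rough illustration of the script-based font mapping and Unicode mapping features above, the following sketch maps Unicode code points to font names by block range. It is written in plain C purely for illustration; the structure, font names, and function are invented for this example and are not part of Panorama's actual API.
#include <stddef.h>
/* Toy script-based font-mapping table; the ranges are standard Unicode blocks,
   the font names are placeholders. Not Panorama's API. */
struct script_range {
    unsigned long first, last;
    const char *font;
};
static const struct script_range font_map[] = {
    {0x0600, 0x06FF, "ExampleArabicFont"},     /* Arabic block */
    {0x0900, 0x097F, "ExampleDevanagariFont"}, /* Devanagari block */
    {0x0E00, 0x0E7F, "ExampleThaiFont"},       /* Thai block */
};
/* Return a font name for a code point, falling back to a default font. */
static const char *font_for(unsigned long codepoint) {
    for (size_t i = 0; i < sizeof font_map / sizeof font_map[0]; i++) {
        if (codepoint >= font_map[i].first && codepoint <= font_map[i].last)
            return font_map[i].font;
    }
    return "ExampleDefaultFont";
}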
Font Formats Supported
Multiple master fonts
WOFF fonts
Type 1
TrueType
TrueType collections
OpenType
Compact font format (CFF)/Type 2
TrueDoc Portable Font Resources (PFRs)
Bitstream Speedo
T2K
Font Fusion Stroke (FFS)
Embedded bitmaps (TrueType, TrueDoc, and T2K)
Windows bitmap font format FNT/FON
Bitmap Distribution Format (BDF)
Mac font suitcase (Dfont)
Character Sets Supported
Color Formats Supported
Supports monochrome and grayscale format.
Supports industry-standard screen color formats including monochrome, RGB, and BGR.
Supports eight different pixel depths for R, G, B, and alpha channel in RGB or BGR format.
Text Style and Effects
Embossed
Engraved
Left and right drop shadows
Algorithmic obliquing
Algorithmic emboldening
Underline/Overline/Strikethrough (Single/Double/Dotted line)
Outlines
Colored border text styles
Superscript
Subscript
Flicker filter
User defined filter
Applications/Operating Systems Supported
Cross-platform applications
Web (HTML) applications
Macintosh & Windows
BREW
Linux & UNIX
Embedded operating systems
Real time operating systems
Devices Supported
Consumer Electronic Devices, Mobile Handset, Set-top box, Digital TV, Printer, Medical Imaging Device, GPS System, Automobile Display, and other Embedded System
See also
Font Fusion
Bitstream Inc.
References
External links
Line layout engine for worldwide text layout, multilanguage, multilingual fonts, and international complex scripts
2007 Bitstream Press Releases
Embedded Technology Journal
BITSTREAM INC 10-K, BITSTREAM INC Annual Report
Layout engines |
1209805 | https://en.wikipedia.org/wiki/Depaneling | Depaneling | Depaneling is a process step in high-volume electronics assembly production. In order to increase the throughput of printed circuit board (PCB) manufacturing and surface mount (SMT) lines, PCBs are often designed so that they consist of many smaller individual PCBs that will be used in the final product. This PCB cluster is called a panel or multiblock. The large panel is broken up or "depaneled" as a certain step in the process - depending on the product, it may happen right after SMT process, after in-circuit test (ICT), after soldering of through-hole elements, or even right before the final assembly of the PCBA into the enclosure.
Risks
When selecting a depaneling technique, it is important to be mindful of the risks, including:
Mechanical Strain: depaneling can be a violent operation and may bend the PCB causing some components to fracture, or in the worst case, break traces. Ways to mitigate this are avoiding placing components near the edge of the PCBA, and orienting components parallel to the break line.
Tolerance: some methods of depaneling may result in the PCBA being a different size than intended. Ways to mitigate are to communicate with the manufacturer about which dimensions are critical, and selecting a depaneling method that meets your needs. Hand depaneling will have the loosest tolerance, laser depaneling the tightest.
Main depanel technologies
There are six main depaneling cutting techniques currently in use:
hand break
pizza cutter / V-cut
punch
router
saw
laser
Hand break
This method is suitable for strain-resistant circuits (e.g. without SMD components). The operator simply breaks the PCB, usually along a prepared V-groove line, with the help of a proper fixture.
Pizza cutter / V-cut
A pizza cutter is a rotary blade, sometimes rotating using its own motor. The operator moves a pre-scored PCB along a V-groove line, usually with the help of a special fixture. This method is often used only for cutting huge panels into smaller ones. The equipment is cheap and requires only sharpening of the blade and greasing as maintenance.
It uses an aluminum-based jig to secure the PCB in place.
Punch
Punching is a process where single PCBs are punched out of the panel through the use of special fixture. It is a two-part fixture, with sharp blades on one part and supports on the other. The production capacity of such a system is high, but fixtures are quite expensive and require regular sharpening.
Router
A Depaneling router is a machine similar to wood router. It uses a router bit to mill the material of the PCB. The hardness of the PCB material wears down the bit, which must be replaced periodically.
Routing requires that single boards are connected using tabs in a panel. The bit mills the whole material of the tab. It produces much dust that has to be vacuumed. It is important for the vacuum system to be ESD-safe. Also the fixturing of the PCB must be tight - usually an aluminium jig or a vacuum holding system is used.
The two most important parameters of the routing process are: feed rate and rotational speed. They are chosen according to the bit type and diameter and should remain proportional (i.e. increasing feed rate should be done together with increasing the rotational speed).
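As a rough numerical illustration of that proportionality, the sketch below uses the common milling chip-load relation (feed rate = chip load × number of flutes × spindle speed). The relation and all values are generic machining rules of thumb, not figures from any depaneling equipment vendor.
#include <stdio.h>
/* Common chip-load rule of thumb for rotating cutters:
   feed (mm/min) = chip load (mm/tooth) x flutes x spindle speed (rev/min).
   All numbers below are placeholders for illustration only. */
static double feed_rate(double chip_load_mm, int flutes, double rpm)
{
    return chip_load_mm * flutes * rpm;
}
int main(void)
{
    double rpm = 30000.0;  /* example spindle speed for a small router bit */
    /* Raising the rotational speed calls for a proportionally higher feed rate. */
    printf("feed at %.0f rpm: %.0f mm/min\n", rpm, feed_rate(0.05, 2, rpm));
    printf("feed at %.0f rpm: %.0f mm/min\n", 2.0 * rpm, feed_rate(0.05, 2, 2.0 * rpm));
    return 0;
}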
Routers generate vibrations of the same frequency as their rotational speed (and higher harmonics), which might be important if there are vibration-sensitive components on the surface of the board. The strain level is lower than for other depaneling methods. Their advantage is that they are able to cut arcs and turn at sharp angles. Their disadvantage is lower capacity.
Saw
A saw is able to cut through panels at high feed rates. It can cut both V-grooved and not-V-grooved PCBs. It does not cut much material and therefore generates low amounts of dust.
The disadvantages are: ability to cut in straight lines only and higher stress than for routing.
Laser
Laser cutting is now being offered as an additional method by some manufacturers.
UV laser depaneling makes use of a 355 nm wavelength (ultraviolet), diode-pumped, Nd:YAG laser source. At this wavelength the laser is capable of cutting, drilling and structuring on rigid and flex circuit substrates. The laser beam, capable of cut widths under 25μm, is controlled by high-precision, galvo-scanning mirrors with repeat accuracy of +/- 4 μm.
A variety of substrate materials can be cut with a UV laser source including FR4 and similar resin-based substrates, polyimide, ceramics, PTFE, PET, Aluminum, Brass and Copper.
Advantages: accuracy, precision, low mechanical stress and flexible contour and cut capabilities.
Disadvantages: the initial capital investment is often higher than for traditional depaneling technologies, and the optimal board thickness is recommended to be no more than 1 mm.
CO2 laser sources have also been used for depaneling, but are considered outdated as UV laser technology provides cleaner cuts, less-thermal stress and higher precision capabilities.
References
External links
Depaneling: a study in yield and productivity: saw systems can provide a low stress and fast alternative to hand breaking methods
CircuitPeople PCB Panel Calculator
Printed circuit board manufacturing |
231835 | https://en.wikipedia.org/wiki/C%20preprocessor | C preprocessor | The C preprocessor is the macro preprocessor for the C, Objective-C and C++ computer programming languages. The preprocessor provides the ability for the inclusion of header files, macro expansions, conditional compilation, and line control.
In many C implementations, it is a separate program invoked by the compiler as the first part of translation.
The language of preprocessor directives is only weakly related to the grammar of C, and so is sometimes used to process other kinds of text files.
History
The preprocessor was introduced to C around 1973 at the urging of Alan Snyder and also in recognition of the usefulness of the file-inclusion mechanisms available in BCPL and PL/I. Its original version offered only file inclusion and simple string replacement using #include and #define for parameterless macros, respectively. It was extended shortly after, firstly by Mike Lesk and then by John Reiser, to incorporate macros with arguments and conditional compilation.
The C preprocessor was part of a long macro-language tradition at Bell Labs, which was started by Douglas Eastwood and Douglas McIlroy in 1959.
Phases
Preprocessing is defined by the first four (of eight) phases of translation specified in the C Standard.
Trigraph replacement: The preprocessor replaces trigraph sequences with the characters they represent.
Line splicing: Physical source lines that are continued with escaped newline sequences are spliced to form logical lines.
Tokenization: The preprocessor breaks the result into preprocessing tokens and whitespace. It replaces comments with whitespace.
Macro expansion and directive handling: Preprocessing directive lines, including file inclusion and conditional compilation, are executed. The preprocessor simultaneously expands macros and, since the 1999 version of the C standard, handles _Pragma operators.
Including files
One of the most common uses of the preprocessor is to include another file:
#include <stdio.h>
int main(void)
{
printf("Hello, world!\n");
return 0;
}
The preprocessor replaces the line #include <stdio.h> with the textual content of the file 'stdio.h', which declares the printf() function among other things.
This can also be written using double quotes, e.g. #include "stdio.h". If the filename is enclosed within angle brackets, the file is searched for in the standard compiler include paths. If the filename is enclosed within double quotes, the search path is expanded to include the current source file directory. C compilers and programming environments all have a facility that allows the programmer to define where include files can be found. This can be introduced through a command-line flag, which can be parameterized using a makefile, so that a different set of include files can be swapped in for different operating systems, for instance.
By convention, include files are named with either a .h or .hpp extension. However, there is no requirement that this is observed. Files with a .def extension may denote files designed to be included multiple times, each time expanding the same repetitive content; #include "icon.xbm" is likely to refer to an XBM image file (which is at the same time a C source file).
#include often compels the use of #include guards or #pragma once to prevent double inclusion.
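For illustration, a minimal include guard might look like the following sketch; the guard macro name and the declaration inside are arbitrary examples, not part of any standard header.
/* point.h -- hypothetical header protected by an include guard */
#ifndef POINT_H
#define POINT_H
struct point {
    int x;
    int y;
};
#endif /* POINT_H */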
Conditional compilation
The if-else directives #if, #ifdef, #ifndef, #else, #elif and #endif can be used for conditional compilation. #ifdef and #ifndef are simple shorthands for #if defined(...) and #if !defined(...).
#if VERBOSE >= 2
printf("trace message");
#endif
Most compilers targeting Microsoft Windows implicitly define _WIN32. This allows code, including preprocessor commands, to compile only when targeting Windows systems. A few compilers define WIN32 instead. For such compilers that do not implicitly define the _WIN32 macro, it can be specified on the compiler's command line, using -D_WIN32.
#ifdef __unix__ /* __unix__ is usually defined by compilers targeting Unix systems */
# include <unistd.h>
#elif defined _WIN32 /* _WIN32 is usually defined by compilers targeting 32 or 64 bit Windows systems */
# include <windows.h>
#endif
The example code tests if a macro __unix__ is defined. If it is, the file <unistd.h> is then included. Otherwise, it tests if a macro _WIN32 is defined instead. If it is, the file <windows.h> is then included.
A more complex #if example can use operators, for example something like:
#if !(defined __LP64__ || defined __LLP64__) || defined _WIN32 && !defined _WIN64
// we are compiling for a 32-bit system
#else
// we are compiling for a 64-bit system
#endif
Translation can also be caused to fail by using the #error directive:
#if RUBY_VERSION == 190
# error 1.9.0 not supported
#endif
Macro definition and expansion
There are two types of macros, object-like and function-like. Object-like macros do not take parameters; function-like macros do (although the list of parameters may be empty). The generic syntax for declaring an identifier as a macro of each type is, respectively:
#define <identifier> <replacement token list> // object-like macro
#define <identifier>(<parameter list>) <replacement token list> // function-like macro, note parameters
The function-like macro declaration must not have any whitespace between the identifier and the first, opening, parenthesis. If whitespace is present, the macro will be interpreted as object-like with everything starting from the first parenthesis added to the token list.
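As a short illustration of this rule, the two definitions below differ only by a space after the identifier; the macro names are invented for the example.
#define SQUARE(x) ((x) * (x))    /* function-like: SQUARE(3) expands to ((3) * (3)) */
#define SQUARE2 (x) ((x) * (x))  /* object-like: the replacement list is "(x) ((x) * (x))",
                                    so SQUARE2(3) expands to (x) ((x) * (x))(3) */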
A macro definition can be removed with #undef:
#undef <identifier> // delete the macro
Whenever the identifier appears in the source code it is replaced with the replacement token list, which can be empty. For an identifier declared to be a function-like macro, it is only replaced when the following token is also a left parenthesis that begins the argument list of the macro invocation. The exact procedure followed for expansion of function-like macros with arguments is subtle.
Object-like macros were conventionally used as part of good programming practice to create symbolic names for constants, e.g.,
#define PI 3.14159
instead of hard-coding numbers throughout the code. An alternative in both C and C++, especially in situations in which a pointer to the number is required, is to apply the const qualifier to a global variable. This causes the value to be stored in memory, instead of being substituted by the preprocessor.
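A brief sketch contrasting the two approaches (the names here are illustrative):
#define PI_MACRO 3.14159            /* replaced textually by the preprocessor; has no address */
static const double pi = 3.14159;   /* stored in memory; &pi is a valid pointer */
double circumference(double radius)
{
    return 2.0 * pi * radius;       /* the const object is referenced like any variable */
}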
An example of a function-like macro is:
#define RADTODEG(x) ((x) * 57.29578)
This defines a radians-to-degrees conversion which can be inserted in the code where required, i.e., RADTODEG(34). This is expanded in-place, so that repeated multiplication by the constant is not shown throughout the code. The macro here is written as all uppercase to emphasize that it is a macro, not a compiled function.
The second x is enclosed in its own pair of parentheses to avoid the possibility of incorrect order of operations when it is an expression instead of a single value. For example, the expression RADTODEG(r + 1) expands correctly as ((r + 1) * 57.29578); without the inner parentheses, (r + 1 * 57.29578) gives precedence to the multiplication.
Similarly, the outer pair of parentheses maintains the correct order of operations. For example, 1 / RADTODEG(r) expands to 1 / ((r) * 57.29578); without the outer parentheses, 1 / (r) * 57.29578 gives precedence to the division.
Order of expansion
Function-like macro expansion occurs in the following stages:
Stringification operations are replaced with the textual representation of their argument's replacement list (without performing expansion).
Parameters are replaced with their replacement list (without performing expansion).
Concatenation operations are replaced with the concatenated result of the two operands (without expanding the resulting token).
Tokens originating from parameters are expanded.
The resulting tokens are expanded as normal.
This may produce surprising results:
#define HE HI
#define LLO _THERE
#define HELLO "HI THERE"
#define CAT(a,b) a##b
#define XCAT(a,b) CAT(a,b)
#define CALL(fn) fn(HE,LLO)
CAT(HE, LLO) // "HI THERE", because concatenation occurs before normal expansion
XCAT(HE, LLO) // HI_THERE, because the tokens originating from parameters ("HE" and "LLO") are expanded first
CALL(CAT) // "HI THERE", because parameters are expanded first
Special macros and directives
Certain symbols are required to be defined by an implementation during preprocessing. These include __FILE__ and __LINE__, predefined by the preprocessor itself, which expand into the current file and line number. For instance the following:
// debugging macros so we can pin down message origin at a glance
// is bad
#define WHERESTR "[file %s, line %d]: "
#define WHEREARG __FILE__, __LINE__
#define DEBUGPRINT2(...) fprintf(stderr, __VA_ARGS__)
#define DEBUGPRINT(_fmt, ...) DEBUGPRINT2(WHERESTR _fmt, WHEREARG, __VA_ARGS__)
// OR
// is good
#define DEBUGPRINT(_fmt, ...) fprintf(stderr, "[file %s, line %d]: " _fmt, __FILE__, __LINE__, __VA_ARGS__)
DEBUGPRINT("hey, x=%d\n", x);
prints the value of x, preceded by the file and line number to the error stream, allowing quick access to which line the message was produced on. Note that the WHERESTR argument is concatenated with the string following it. The values of __LINE__ and __FILE__ can be manipulated with the #line directive. The #line directive determines the line number and the file name of the line below. E.g.:
#line 314 "pi.c"
printf("line=%d file=%s\n", , );
generates the printf function:
printf("line=%d file=%s\n", 314, "pi.c");
Source code debuggers also refer to the source position defined with __FILE__ and __LINE__.
This allows source code debugging when C is used as the target language of a compiler, for a totally different language.
The first C Standard specified that the macro __STDC__ be defined to 1 if the implementation conforms to the ISO Standard and 0 otherwise, and the macro __STDC_VERSION__ defined as a numeric literal specifying the version of the Standard supported by the implementation. Standard C++ compilers support the __cplusplus macro. Compilers running in non-standard mode must not set these macros or must define others to signal the differences.
Other Standard macros include __DATE__, the current date, and __TIME__, the current time.
The second edition of the C Standard, C99, added support for __func__, which contains the name of the function definition within which it is contained, but because the preprocessor is agnostic to the grammar of C, this must be done in the compiler itself using a variable local to the function.
Macros that can take a varying number of arguments (variadic macros) are not allowed in C89, but were introduced by a number of compilers and standardized in C99. Variadic macros are particularly useful when writing wrappers to functions taking a variable number of parameters, such as printf, for example when logging warnings and errors.
One little-known usage pattern of the C preprocessor is known as X-Macros. An X-Macro is a header file. Commonly these use the extension ".def" instead of the traditional ".h". This file contains a list of similar macro calls, which can be referred to as "component macros". The include file is then referenced repeatedly.
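As an illustration, a minimal X-Macro sketch might look like the following; the file name colors.def and the color names are hypothetical.
/* colors.def (hypothetical) would contain only the component-macro calls:
       X(RED)
       X(GREEN)
       X(BLUE)
*/
/* First expansion: generate an enum from the list. */
#define X(name) COLOR_##name,
enum color {
#include "colors.def"
    COLOR_COUNT
};
#undef X
/* Second expansion of the same list: generate matching name strings. */
#define X(name) #name,
static const char *color_names[] = {
#include "colors.def"
};
#undef X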
Many compilers define additional, non-standard macros, although these are often poorly documented. A common reference for these macros is the Pre-defined C/C++ Compiler Macros project, which lists "various pre-defined compiler macros that can be used to identify standards, compilers, operating systems, hardware architectures, and even basic run-time libraries at compile-time".
Token stringification
The # operator (known as the "Stringification Operator") converts a token into a C string literal, escaping any quotes or backslashes appropriately.
Example:
#define str(s) #s
str(p = "foo\n";) // outputs "p = \"foo\\n\";"
str(\n) // outputs "\n"
If you want to stringify the expansion of a macro argument, you have to use two levels of macros:
#define xstr(s) str(s)
#define str(s) #s
#define foo 4
str (foo) // outputs "foo"
xstr (foo) // outputs "4"
You cannot combine a macro argument with additional text and stringify it all together. You can however write a series of adjacent string constants and stringified arguments: the C compiler will then combine all the adjacent string constants into one long string.
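For instance, the following sketch combines a stringified argument with adjacent string literals; a similar example appears in the GNU cpp documentation, and the macro name here is arbitrary.
#include <stdio.h>
#define WARN_IF(EXP) \
    do { if (EXP) fprintf(stderr, "Warning: " #EXP " is true\n"); } while (0)
int main(void)
{
    int x = 0;
    WARN_IF(x == 0);   /* prints: Warning: x == 0 is true */
    return 0;
}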
Token concatenation
The ## operator (known as the "Token Pasting Operator") concatenates two tokens into one token.
Example:
#define DECLARE_STRUCT_TYPE(name) typedef struct name##_s name##_t
DECLARE_STRUCT_TYPE(g_object); // Outputs: typedef struct g_object_s g_object_t;
User-defined compilation errors
The #error directive outputs a message through the error stream.
#error "error message"
Implementations
All C, C++ and Objective-C implementations provide a preprocessor, as preprocessing is a required step for those languages, and its behavior is described by official standards for these languages, such as the ISO C standard.
Implementations may provide their own extensions and deviations, and vary in their degree of compliance with written standards. Their exact behavior may depend on command-line flags supplied on invocation. For instance, the GNU C preprocessor can be made more standards compliant by supplying certain flags.
Compiler-specific preprocessor features
The #pragma directive is a compiler-specific directive, which compiler vendors may use for their own purposes. For instance, a #pragma is often used to allow suppression of specific error messages, manage heap and stack debugging and so on. A compiler with support for the OpenMP parallelization library can automatically parallelize a for loop with #pragma omp parallel for.
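For example, a compiler with OpenMP support may distribute the iterations of the loop below across threads; the function and array here are invented for illustration.
/* Requires an OpenMP-enabled compiler; otherwise the unknown pragma is
   ignored and the loop runs serially. */
void scale(double *a, int n, double k)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        a[i] *= k;
    }
}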
C99 introduced a few standard #pragma directives, taking the form #pragma STDC ..., which are used to control the floating-point implementation. The alternative, macro-like form _Pragma was also added.
Many implementations do not support trigraphs or do not replace them by default.
Many implementations (including, e.g., the C compilers by GNU, Intel, Microsoft and IBM) provide a non-standard directive to print out a warning message in the output, but not stop the compilation process. A typical use is to warn about the usage of some old code, which is now deprecated and only included for compatibility reasons, e.g.:// GNU, Intel and IBM
#warning "Do not use ABC, which is deprecated. Use XYZ instead."// Microsoft
#pragma message("Do not use ABC, which is deprecated. Use XYZ instead.")
Some Unix preprocessors traditionally provided "assertions", which have little similarity to assertions used in programming.
GCC provides #include_next for chaining headers of the same name.
Objective-C preprocessors have #import, which is like #include but only includes the file once. A common vendor pragma with a similar functionality in C is #pragma once.
Other uses
As the C preprocessor can be invoked separately from the compiler with which it is supplied, it can be used separately, on different languages. Notable examples include its use in the now-deprecated imake system and for preprocessing Fortran. However, such use as a general purpose preprocessor is limited: the input language must be sufficiently C-like. The GNU Fortran compiler automatically calls "traditional mode" (see below) cpp before compiling Fortran code if certain file extensions are used. Intel offers a Fortran preprocessor, fpp, for use with the ifort compiler, which has similar capabilities.
CPP also works acceptably with most assembly languages and Algol-like languages. This requires that the language syntax not conflict with CPP syntax, which means no lines starting with # and that double quotes, which cpp interprets as string literals and thus ignores, don't have syntactical meaning other than that. The "traditional mode" (acting like a pre-ISO C preprocessor) is generally more permissive and better suited for such use. A more flexible variant of the C preprocessor called GPP is preferred for more complex cases.
The C preprocessor is not Turing-complete, but it comes very close: recursive computations can be specified, but with a fixed upper bound on the amount of recursion performed. However, the C preprocessor is not designed to be, nor does it perform well as, a general-purpose programming language. As the C preprocessor does not have features of some other preprocessors, such as recursive macros, selective expansion according to quoting, and string evaluation in conditionals, it is very limited in comparison to a more general macro processor such as m4.
See also
C syntax
Make
Preprocessor
m4 (computer language)
PL/I preprocessor
References
Sources
External links
ISO/IEC 9899. The official C standard. As of 2014, the latest publicly available version is a working paper for C11.
GNU CPP online manual
Visual Studio .NET preprocessor reference
Pre-defined C/C++ Compiler Macros project: lists "various pre-defined compiler macros that can be used to identify standards, compilers, operating systems, hardware architectures, and even basic run-time libraries at compile-time"
C (programming language)
Transformation languages
Macro programming languages |
43232347 | https://en.wikipedia.org/wiki/Astra%20Linux | Astra Linux | Astra Linux is a Russian Linux-based computer operating system (OS) developed to meet the needs of the Russian army, other armed forces and intelligence agencies. It provides data protection up to the level of "top secret" in Russian classified information grade by featuring mandatory access control. It has been officially certified by Russian Defense Ministry, Federal Service for Technical and Export Control and Federal Security Service.
Specifications
The creator of the OS is the Scientific/Manufacturing Enterprise Rusbitech which is applying solutions according to Russian Government decree No.2299-р of 17/10/2010 that orders federal authorities and budget institutions to implement Free Software use.
The OS releases are named after Hero Cities in Russia and the Commonwealth of Independent States (CIS). There is one general-purpose release, code-named Oryol, aimed at "achieving small and mid-business goals". Other releases are marked "special purpose": Smolensk for x86-64 PCs, Tula for networking hardware, Novorossiysk for ARM mobile devices, and Murmansk for IBM System Z mainframe computers.
Rusbitech also manufactures a "soft/hardware trusted boot control module", MAKSIM-M1 ("М643М1"), with a PCI bus. It prevents unauthorized access and offers several other enhanced digital security features. The module, besides Astra Linux, also supports OSes with Linux kernel 2.6.x up to 5.x.x, as well as several Microsoft Windows OSes.
The developer declares that the Astra Linux licenses comply with Russian and international law and "don't contradict with the spirit and demands of GPL license". The system uses .deb packages.
Astra Linux is a recognized Debian derivative. Rusbitech has a partnership with The Linux Foundation. It was part of the advisory board of The Document Foundation, but was suspended on 26 February 2022 because of the Ukraine crisis.
Use
The Special Edition version (paid) is used in many Russian state-related organizations. Particularly, it is used in the Russian National Center for Defence Control.
There are talks of deploying Astra Linux widely in state institutions of the Republic of Crimea, where legitimate use of other popular OSes is questionable because of international sanctions imposed during the Ukrainian crisis.
There are also plans for cooperation between Rusbitech and Huawei.
In January 2018, it was announced that Astra Linux was going to be deployed to all Russian Army computers, and Microsoft Windows will be dropped.
In February 2018, Rusbitech announced it has ported Astra Linux to Russian-made Elbrus microprocessors.
In February 2019, Astra Linux was announced to be implemented at Tianwan Nuclear Power Plant in China.
Since 2019, "super-protected" tablet computers branded MIG have been available with Astra Linux; smartphones are expected.
In 2019, the national gas and oil holding Gazprom announced an Astra Linux implementation; the nuclear corporation Rosatom followed in 2020, and Russian Railways was reported to do so in early 2021.
In 2020, Astra Linux sold more than a million copies in licenses and generated 2 billion rubles in sales.
In 2021, several Russian nuclear power plants and subsidiaries of Rosatom are planned to switch to Astra Linux, with a total of 15,000 users.
Version history
References
X86-64 Linux distributions
Debian-based distributions
Linux distributions
Russian-language Linux distributions
State-sponsored Linux distributions |
37503253 | https://en.wikipedia.org/wiki/RjDj | RjDj | RjDj (Reality Jockey Ltd.) was a startup founded in late 2008 by last.fm co-founder Michael Breidenbruecker. The company was based in London and was run by a small team of four employees.
Its mission was to create sonic experiences specifically designed for the latest generation of personal music players. RjDj produced and distributed a network of mobile applications and sold additional musical content within this network.
RjDj developed a new genre of music that it called reactive music, a non-linear form of music that was able to react to the listener and their environment in real-time. Reactive music is closely connected to generative music, interactive music, and augmented reality. Similar to music in video games, which changes in response to specific in-game events, reactive music is affected by events occurring in the listener's real life. Reactive music adapts to a listener and their environment by using built-in sensors (e.g. camera, microphone, accelerometer, touch-screen and GPS) in mobile media players. The main difference to generative music is that listeners are part of the creative process, co-creating the music with the composer. Reactive music is also able to augment and manipulate the listener's real-world auditory environment.
What is distributed in reactive music is not the music itself, but software that generates the music. Applications made by RjDj were available on Apple’s iOS platform. The technology behind it was based on the Pure Data (or Pd) digital signal processing framework. Reactive music pieces, so-called Scenes, could be made using rjlib, an open source library of software building blocks for constructing reactive music.
RjDj closed its website and removed its apps from circulation in 2013.
Apps Developed
RjDj
RjDj Shake
Love by Air
Inception – The App
Kids on DSP – reactive minimal techno
Little Boots – Reactive Remixer
Dimensions – The Game
Rj Voyager
MusicZones
Trippy
Dark Night Rises Z+
The app formerly known as H _ _ r
Artists who have released music as RjDj apps
ookoi
Carl Craig
Little Boots
Air
Acid Pauli
Booka Shade
Jimmy Edgar
Chiddy Bang
Son of Dave
Kirsty Hawkshaw
Venus Hum
Easy Star Allstars
Sophie Barker
Kids on DSP
Hans Zimmer
Netsky
YouTube videos
– Dimensions – from the makers of Inception The App
– Hans Zimmer about Inception The App
– Inception The App – Sleep Dream Review
– RjDj Rj Voyager featuring Booka Shade
– Little Boots Reactive Remixer iPhone app tutorial
References
Defunct companies based in London
Computer music software
IOS software
British companies established in 2008
Software companies established in 2008
2008 establishments in England
British companies disestablished in 2013
Software companies disestablished in 2013
2013 disestablishments in England |
22364665 | https://en.wikipedia.org/wiki/55th%20Mobile%20Command%20and%20Control%20Squadron | 55th Mobile Command and Control Squadron | The United States Air Force's 55th Mobile Command and Control Squadron (55 MCCS) was a mobile command and control unit located at Offutt AFB, Nebraska.
History
Personnel of the 55 MCCS were trained in their primary specialty, in addition to vital expeditionary capabilities that ensure survival.
Logo Significance
Blue and yellow are the Air Force colors. Blue alludes to the sky, the primary theater of Air Force operations. Yellow refers to the sun and the excellence required of Air Force personnel.
Previous designations
55th Mobile Command and Control Squadron (1 July 1994 – 30 September 2006)
Bases stationed
Offutt AFB, Nebraska (1 July 1994 – 30 September 2006)
Commanders
Lt Col John J. Jordan (2000–2002)
Maj. Karen Hibbard (2005-2006)
Lt Col Ronald J. Hefner (1997-1999)
Equipment Utilized
Mobile Consolidated Command Center (1998–Present),
MILSTAR
DSCS
Single Channel Anti-jam Man Portable (SCAMP) terminals
Decorations
Air Force Outstanding Unit Award
1 July 1994 – 31 July 1995
1 June 1997 – 31 May 1999
1 June 1999 – 31 May 2001
1 June 2002 – 31 May 2004
1 June 2004 – 31 May 2006
See also
4th Command and Control Squadron
153d Command and Control Squadron
721st Mobile Command and Control Squadron
References
External links
Unofficial 55th MCCS Reaper Locator
Military units and formations in Nebraska
Mobile Command and Control 0055 |
1300026 | https://en.wikipedia.org/wiki/Limited-access%20road | Limited-access road | A limited-access road, known by various terms worldwide, including limited-access highway, dual-carriageway, expressway, and partial controlled access highway, is a highway or arterial road for high-speed traffic which has many or most characteristics of a controlled-access highway (also known as a freeway or motorway), including limited or no access to adjacent property, some degree of separation of opposing traffic flow, use of grade separated interchanges to some extent, prohibition of some modes of transport such as bicycles or horses, and very few or no intersecting cross-streets or level crossings. The degree of isolation from local traffic allowed varies between countries and regions. The precise definition of these terms varies by jurisdiction.
History
The first implementation of limited-access roadways in the United States was the Bronx River Parkway in New York, in 1907. The New York State Parkway System was constructed as a network of high-speed roads in and around New York City. The first limited access highway built is thought to be the privately built Long Island Motor Parkway in Long Island, New York. The Southern State Parkway opened in 1927, while the Long Island Motor Parkway was closed in 1937 and replaced by the Northern State Parkway (opened in 1931) and the contiguous Grand Central Parkway (opened in 1936).
Bike freeway
The first Dutch bike freeway route opened in 2004 between Breda and Etten-Leur; many others have been added since then.
Regional implementations
In the United States, the national Manual on Uniform Traffic Control Devices (MUTCD) uses "full control of access" only for freeways. Expressways are defined as having "partial control of access" (or semi-controlled access). This means that major roads typically use interchanges and commercial development is accessed via cross roads or frontage roads, while minor roads can cross at grade and farms can have direct access. This definition is also used by some states, some of which also restrict freeways only to motor vehicles capable of maintaining a certain speed. Some other states use "controlled access" to mean a higher standard than "limited access", while others reverse the two terms.
Oceania
Australia
While Australia's larger capital cities feature controlled-access highway networks, the smaller metropolitan areas mostly rely on limited-access highways for high-speed local traffic.
In South Australia the terms "expressway" and "freeway" can be synonymous. The Southern and Northern Expressways are both controlled-access highways. However, perhaps confusingly, the Port River Expressway is a limited-access highway.
Dual carriageways that connect capital cities and regional centres, such as the M31 Hume Highway between Sydney and Melbourne, are almost all limited-access highways. In spite of this, 'freeway' terminology is used on signage for most regional limited access highways in the state of Victoria.
New Zealand: Expressway, Motorway
The terms Motorway and Expressway in New Zealand both encompass multi-lane divided freeways as well as narrower 2-4-lane undivided expressways with varying degrees of grade separation; the difference being that in New Zealand a Motorway has certain additional legal traffic restrictions.
Asia
China
The Expressway Network of the People's Republic of China is the longest highway system in the world. The network is also known as National Trunk Highway System (NTHS). By the end of 2016, the total length of China's expressway network reached 131,000 kilometers (82,000 mi).
Expressways in China are a fairly recent addition to a complex network of roads. China's first expressway was built in 1988. Until 1993, very few expressways existed. The network has expanded rapidly since 2000. In 2011, 11,000 kilometres (6,800 mi) of expressways were added to the network.
Pakistan
The Expressways of Pakistan are a network of multiple-lane, high-speed highways in Pakistan, which are owned, maintained and operated federally by Pakistan's National Highway Authority. They are one class lower than the country's motorways and are usually upgraded versions of the national highways. The total length of Pakistan's expressways is as of November 2016. Around of expressways are currently under construction in different parts of the country. Most of these expressways will be completed between 2017 and 2020.
India
Expressways in India make up more than of the Indian National Highway System on which they are the highest class of road. The National Highways Development Project is underway to add an additional of expressways to the network by the year 2022.
Iran
Expressways in Iran are one class lower than freeways and are used in large urban areas such as Isfahan, Mashhad, or Tehran and between other important cities (usually two provincial capitals) in rural and desert areas. The speed limit in urban areas is between and in rural and desert areas between .
Japan
The term Expressway as used in English in Japan refers to both freeway-style highways and narrower, more winding, often undivided Regional High-Standard Highways. Both types of expressways have a combined length of as of April 2012.
Malaysia
Limited-access roads in Malaysia usually, but not always, take the name ( – this is also the name for expressways). Highways normally have a lower speed limit than expressways (but still higher than the rest of the local road network), and permit at-grade intersections and junctions to residential roads and shopfronts, although grade separation is still typical. Highways are normally toll-free and are owned and operated by the federal government. Notable examples of limited-access roads are the Federal Highway, Skudai Highway, Gelugor Highway, Kuantan Bypass and Kuching Bypass.
Singapore
Limited-access roads in Singapore are formally known as semi-expressways (in contrast to controlled-access highways, which are known as expressways). While still functioning as high-speed roads, semi-expressways may still have at-grade intersections with traffic lights, and speed limits are not uniform. Grade separation is, however, still typical at major junctions. Five roads have been designated as semi-expressways: Bukit Timah Road, Jurong Island Highway, Nicoll Highway, Outer Ring Road System and West Coast Highway.
South Korea
Motorways in South Korea (자동차전용도로 Jadongcha jeonyong doro, literally 'motor-car-only road') includes various grades of highways other than expressways. Contrary to the expressway in South Korea, motorway is a measure of traffic control, rather than a class of the road. For example, Jayu-ro is a segment of the national route 77 as well as a motorway. As of June 2011, 1,610 km of highways in total were designated as motorways. (1,052 km national highways, 351 km metropolitan highways, 185 km regional highways and 20 km municipal highways)
Like expressways, motorcycles are not permitted.
Sri Lanka
Sri Lanka classifies its expressways using E-grade route designations. Three expressways, namely the Southern Expressway, the Outer Circular Expressway and the Colombo–Katunayake Expressway, have been built. A toll structure has been proposed for travel on the expressways, and speed limits in the range of 80–100 km/h apply. Two further expressways, the Northern Expressway and the Ruwanpura Expressway, are in progress to meet public transport needs.
Taiwan (R.O.C.)
Expressways in Taiwan may be controlled-access highways similar to National Freeways or limited-access roads. Most have Provincial (as opposed to National) Highway status, although some are built and maintained by cities. All provincial expressways run east–west except for Provincial Highway No. 61, which runs north–south along the west coast. Some provincial expressway routes are still under construction.
Europe
Austria: Schnellstraße
In Austria the speed limit on a Schnellstraße is . Schnellstraßen are very similar to Austrian Autobahnen (freeways/motorways); the chief difference is that they are more cheaply built, with smaller curve radii, are often undivided, and have fewer bridges and tunnels.
Belgium: Autoweg
In Belgium an autoweg is a public road, the beginning of which is indicated by the first signboard (F9) and the end by the second sign (F11).
An important difference with an autosnelweg is that crossroads as well as traffic lights can be on an autoweg.
In Belgium there is no specific speed regulation for an autoweg.
Only motor vehicles and their trailers (with the exception of mopeds), agricultural vehicles and the towing of fairground vehicles, as well as four-wheelers (without passenger compartment), are allowed to drive on an autoweg.
An autoweg can consist of two or more lanes. The driving directions can be separated by a roadmarking, or by a central reservation. If a public road (autosnelweg, autoweg, weg) consists of two or more lanes that are clearly separated from each other by a roadside or a space that is not accessible to vehicles, the drivers may not drive on the lane opposite to them.
Croatia: Brza cesta
In Croatia, the term brza cesta (lit. "fast road") is used to describe a motor vehicle-only road, usually grade-separated, without an emergency lane, with a speed limit of , although it can be lowered, usually to . They range from 2+2 lane dual carriageways with grade-separated intersections and speed limit (D2 in Osijek), four or six-lane urban streets with at-grade intersections with traffic lights (D1 in Karlovac) or two-lane single carriageways with grade-separated intersections (D33 in Šibenik). They are either a standalone state road (D10) or a part of one (Southern Osijek bypass, D2). Some portions of motorways are expressways since they are either under construction (A8 between Pazin and Matulji) or designed as such (A7 in Rijeka). As a rule, the expressways are not tolled; however, major tunnels on expressways are tolled.
Czech Republic: Rychlostní silnice
Expressways in the Czech Republic (rychlostní silnice) are defined as dual carriageways with a narrower emergency lane. The speed limit is 110 km/h (70 mph). Expressway road signs are white on blue.
Denmark: Motortrafikvej
In Denmark, a 'motortrafikvej' (Danish for "motor traffic road") is a high-speed highway with a speed limit between . The most common 'motortrafikvej' has two lanes (1+1) or 2+1. There are no at-grade intersections. The signs for 'motortrafikvej' have white text on a blue background.
Finland: Moottoriliikennetie
In Finland, highways are separated into three categories: all-access valtatie ("main road"), limited-access moottoriliikennetie ("motor traffic road") and finally moottoritie ("motorway"); the latter two are marked with green signage, while valtatie signage is blue. While most of the network is all-access road, of it is motorway, and is limited-access road. The access is limited to motor vehicles faster than 50 km/h, thus excluding pedestrian, bicycle, moped or tractor traffic; furthermore, towing is not allowed. Limited-access roads are generally similar to motorways, but do not fulfill all the technical requirements, such as several lanes in one direction or separation of opposite directions. Limited-access roads are usually built because the local population density is too low to justify a motorway. Often space has been left during construction for an eventual upgrade to a motorway. Limited-access roads also function as feeder routes for motorways. The general speed limit on main roads and limited-access roads is 100 km/h (summertime) and 80 km/h (wintertime). On motorways the speed limits are 120 and 100 km/h respectively. Especially during winter the speed limits can be changed due to weather conditions.
Germany: Kraftfahrstraße
A Kraftfahrstrasse (German for "motor-power road", also colloquially called Schnellstraße, literally "fast road") in Germany is any road with access limited to motor vehicles with a maximum design speed of more than , excluding pedestrian, bicycle, moped or tractor traffic. Oversized vehicles are banned.
Transregional Kraftfahrstraßen highways (Autostraßen) are built to a standard below that of German autobahns. With regard to the general German speed limits, on roads with lanes separated by a median or with a minimum of two marked lanes per direction, an advisory speed limit (Richtgeschwindigkeit) of applies. At-grade intersections are admissible, and regulation at junctions is usually provided by traffic lights or roundabouts. U-turns and any deliberate stopping are prohibited. Kraftfahrstraßen are out of bounds to pedestrians, except at special crosswalks.
Hungary: Autóút
Expressways in Hungary are called Autóút (Auto/car road). They are mostly dual carriageways.
The main difference between Hungarian motorways and expressways is that expressways are more cheaply built, with a narrower width, and are often undivided.
Maximum speed limit is reduced to 110 km/h for vehicles under 3.5 tons, and 70 km/h for vehicles over 3.5 tons.
In Hungary there are multiple types of dual carriageways. One type is almost identical to motorways, but the driving lanes are narrower.
Parameters of a 2+2 lane dual carriageway outside built-up areas:
Total width of road: 25.60 m
Driving lane width: 3.50 m
Pavement width: 2x10.25 m
Parking lane: 3.00 m
Middle separation area width: 3.60 m
Parameters of a 2+2 lane dual carriageway in built-up (town/city) areas:
Total width of road: 24.10 m
Driving lane width: 3.50 m
Pavement width: 2x10.75 m
Parking lane: 3.00 m
Middle separation area width: 3.60 m
There are also semi-motorways with only one side of the motorway built.
After the missing carriageway is built, they will become standard motorways.
Ireland: HQDC
A High-quality dual carriageway (HQDC) in Ireland is normally completed to a motorway standard, including no right-turns, but with no motorway restrictions. These are common on the final stretches of motorways nearing a major city, generally in order to enable use of bus stops and city bus services on the particular stretch of road.
There are not yet any specific signs for this type of road, but the National Roads Authority have hinted that they are looking at implementing the German-style Autostrasse sign in Ireland.
Speed limits are normally 100 km/h, compared to 120 km/h on motorways.
Italy: Superstrada
In Italy there are:
Type B highway (or strada extraurbana principale), commonly but unofficially known as superstrada, is a divided highway with at least two lanes for each direction, a paved shoulder on the right, no cross-traffic and no at-grade intersections. Access restrictions on such highways are exactly the same as on Italian motorways (autostrade), as is the signage at the beginning and the end of the highway (with the only difference being the background colour, blue instead of green). The speed limit on type-B roads is 110 km/h.
Type C highway (or strada extraurbana secondaria), a single carriageway with at least one lane for each direction and shoulders. It may have at-grade junctions, level crossings with railways, roundabouts and traffic lights. This category also contains dual carriageways that cannot be classified as type-B highways because they lack one or more required features. In the absence of specific regulation signs, a type-C road is accessible to all vehicles and pedestrians, even if it has separate carriageways and no cross-traffic.
A dedicated sign allows access only to motorized vehicles. The speed limit on type-C roads is 90 km/h.
Netherlands: Autoweg
The Netherlands has many more kilometres of motorways (snelwegen) than expressways (autowegen). The latter only form a complementary part of the country's main highway network. They are typically shorter than motorways, offering connections of a more regional significance. The general speed limit is 100 km/h. Only motor vehicles that are both capable of and legally allowed to travel at least 50 km/h may use the road. Autowegen are always numbered and mostly signposted with an N (for non-motorway highway) and up to three digits, like . For the most part they fall under national or provincial management.
Dutch expressways are built to significantly varying standards. Designs range from fully controlled-access dual carriageways with grade separation, center dividers and full hard shoulders, to single carriageways with just one lane per direction and only intermittent shoulder patches called Vluchthavens (small Lay-bys). Intersections are frequently at grade with traffic lights, or they are roundabouts. There can be moveable bridges in these roads. In either case, the speed limit is frequently reduced to 70 km/h before reaching the junction or the bridge.
Since 1997 a national traffic safety program called Sustainable Safety has introduced a new road categorisation and new design standards. Although autowegen don't have to conform completely to the new Dutch design standard for regional flow roads (stroomwegen), many of these roads require at least some upgrades. The ideal is to make expressways divided and grade-separated, as much as possible. Otherwise these roads are downgraded to the safety category of distributor roads, thereby losing their expressway status.
Norway: Motortrafikkvei
In Norway, a motortrafikkvei (Norwegian for "motor traffic road"), formerly called motorvei klasse B ("class-B motorway"), is a high-speed highway with a speed limit of up to 90 km/h. There are no at-grade intersections. Direction signs for motortrafikkvei have black text on a yellow background, while the same signs on motorvei have white text on a blue background. As of October 2017, the Norwegian Road DataBase showed approximately 455 km of motortrafikkvei in Norway.
Poland: droga ekspresowa
Droga ekspresowa (plural: drogi ekspresowe) in Poland refers to a network of roads fulfilling the role of bringing traffic to the motorways, and serving major international and inter-regional purposes. They are often built as ring roads since they take less space than motorways and allow more entrances and exits. All expressways start with the letter S, followed by a number. They can be dual or single carriageways and have a reduced number of at-grade intersections. As of May 2004, Polish government documents indicated that the country had plans for an expressway and motorway network totalling (including about of motorways). The speed limit is 120 km/h (dual carriageway) and 100 km/h (single carriageway).
Portugal: via rápida
In Portugal, a non-motorway limited-access road is commonly referred to as a via rápida (rapid way, plural: vias rápidas), although there is no specific official technical designation for it.
The legal term via reservada a automóveis e motociclos (reserved way for automobiles and motorcycles) is used to designate a non-motorway road where motorway rules apply (except the speed limit which is lower). However, this term refers only to the road rules and not to the road technical characteristics.
There are two main types of roads commonly referred to as vias rápidas in Portugal. The first type is a limited-access road with a dual carriageway and grade-separated interchanges. Many of these roads have all or almost all the technical characteristics of full motorways. Examples are the several urban highways in cities like Lisbon, Oporto, Coimbra and Braga. In Madeira, the main regional highways that connect the cities and other important places of the island are mainly of this type; two vias rápidas in the region, VR1 and VR2, are classified as motorways.
The second Portuguese type of via rápida is a highway with all the same characteristics as the first type, except that it has only one carriageway. Examples of this type of road are the old IP4 and IP5 (before being transformed into full motorways), the Portalegre–Beja section of the IP2, the Coimbra–Viseu section of the IP3 and several complementary routes (IC).
The dual carriageway vias rápidas can be classified and signposted as reserved ways for automobiles and motorcycles, in which case general motorway rules apply, except the speed limit, which is never above 100 km/h. In dual carriageway vias rápidas not signposted as reserved ways, normal road rules apply, including the speed limit, which is never above 90 km/h. Single carriageway vias rápidas cannot be classified and signposted as reserved ways, and so normal road rules always apply there.
Romania
In Romania, such roads are called drumuri expres (or drum expres in singular form). While there are no expressways in Romania so far, their main difference from regular motorways is the lack of hard shoulders and a slightly lower speed limit of 120 km/h; they are otherwise similar to a motorway in terms of grade separation and feature at least two lanes per direction.
Expressways were introduced for the first time in the 2014 roads masterplan. This masterplan envisaged building most planned motorways to expressway standards, provided that in the future they would be converted to actual motorways. However, by mid-2019 no expressway had been built, nor had work started on one, although contracts had been signed to allow for their construction, meaning that the first expressways are likely to be completed in the 2020s.
Planned expressways according to CNADNR (Romanian National Company of Motorways and National Roads), based on the 2014 roads masterplan:
Russia
Russia has a large federal highway network that totals approximately . Federal highways in the country are classified into two categories: "motorways" (, not the same as the English term motorway) and "other".
In the Road Rules, there are two designations for a limited-access road, one being "motorway" and the other being "road for cars" (), on both of which special motorway rules apply.
"Roads for cars" are different from motorways by the fact that they don't have to be dual-carriageway, at-grade traffic light intersections are permitted, and the speed limit is still 90 km/h.
Spain: Autovía
Unlike Spain's Autopistas, specifically reserved for vehicles able to sustain at least 60 km/h (37 mph), and usually tolled, Autovías are usually upgrades from older roads, and never toll roads. In general, slow vehicles like bicycles and agricultural machinery are allowed under certain restrictions.
Sweden
The Swedish road type motortrafikled is a road with limited access (all grade-separated, no slow traffic) and two or three lanes. According to the EU's multilingual term base, motortrafikled should be translated as expressway, rapid road or road with limited access. The same rules apply to a motortrafikled as to a motorway; it is basically a half motorway. The speed limit is usually 90–100 km/h. Many motortrafikleder are built as 2+1 roads, alternating two lanes in one direction and one in the other, with a narrow fence in between.
Switzerland
In Switzerland an Autostrasse (German, "auto road"), semi-autoroute, or semiautostrade (French and Italian for "semi-freeway") is a highway reserved for high-speed traffic, with no crossings, but it is not of the highest road class, the motorways (Autobahn/autoroute/autostrada). The speed limit on these roads in Switzerland is . Most of the Autostrasse / semi-autoroutes / semiautostrade have no central barrier separating the lanes in different directions.
United Kingdom
In the United Kingdom, the second tier of high-speed roads below motorways are typically dual carriageways. Many roads, such as the A1, the A14, the A19 and the A42, are built to a high standard: in many places they are intersected only by grade-separated junctions, have full barriers at both the roadside and the central reservation, and in some cases carry three or more lanes of traffic. However, they are not subject to motorway restrictions, as they are typically built to a lower standard or have existing rights of way for non-motorised vehicles. They may lack some features that a motorway would have, such as hard shoulders, and may have tighter bends and steeper gradients than would be allowed on a motorway, or have established rights of way that cannot be removed. The standard motorway speed limit for cars of also applies to many dual carriageways.
In March 2015, it was announced that a new standard would be developed to formally designate certain high-quality routes in England as Expressways. Roads built to this new standard would be subject to the same regulations as traditional motorways but would lack a hard shoulder and use traffic management systems like those on smart motorways. An "expressway" is limited to three through lanes and is to be built largely to the same standards as a smart motorway, although some non-standard existing alignments are allowed to remain if they fall just short of the standard.
Some roads, such as the Aston Expressway and the North Wales Expressway, have "expressway" in their name; this has no bearing on the purpose or standard of the road.
North America
Canada
In Ontario, expressway is synonymous with freeway and is used to mean limited-access divided highways with no at-grade intersections. An example of this is the Gardiner Expressway through western and downtown Toronto: once it turns into a six-lane arterial road (Lake Shore Boulevard) east of the Don River, there is a sign warning of the end of the freeway. The E. C. Row Expressway in Windsor, Ontario is a controlled-access divided freeway with grade-separated interchanges between Ojibway Parkway at its western terminus and Banwell Road at its eastern terminus, with traffic intersections at both termini. The Macdonald–Cartier Freeway is an example of a route that uses the term freeway; however, that name is being phased out by the Ministry of Transportation. In general, the term "expressway" is used more frequently for municipally maintained roads, while provincial freeways are known more by their route number (particularly the 400-series highways, known as Highway 4__), despite some of them having an "expressway" name for all or part of their length, such as the Chedoke Expressway/Hamilton Expressway, Belfield Expressway, and Airport Expressway.
The Veterans Memorial Parkway in London, Ontario, has intersections instead of interchanges, and thus is considered an expressway and not a freeway. It was originally designed with sufficient right-of-way to be built as a full freeway, but a lack of funding forced it to be built with at-grade intersections. Similarly, the Hanlon Parkway in Guelph and Highway 40 in Sarnia, Ontario were originally opened with intersections in lieu of interchanges, save for a couple of grade-separated interchanges. Regional Road 420 in Niagara Falls is also an expressway. While Allen Road and Highway 400 were originally full freeways, their extensions (for Allen Road to meet Sheppard Avenue and Dufferin Street, and the 400 South Extension, which became Black Creek Drive and was handed over to Metro Toronto upon completion) were built as expressways with at-grade intersections.
Two sections of Highway 11, between Barrie and Orillia as well as between Orillia and Gravenhurst, are right-in/right-out (RIRO) expressways rather than full freeways. The joint route of Highway 35/115 in Durham Region is also a RIRO expressway.
In most of Western Canada, an expressway is a high-speed arterial road along the lines of the California definition, while a freeway is fully controlled access with no at-grade intersections. In Alberta, the term "Trail" refers both to full freeways (Stoney Trail) and to high-speed arterials with a mix of signalized intersections and interchanges (Crowchild Trail). The Yellowhead Trail as it passes through Edmonton, Alberta has both intersections and interchanges. It is the main east–west artery for the northern half of the city. There are plans to upgrade many of the most congested remaining intersections into interchanges in the near future.
In Quebec, the term freeway is never used, with the terms expressway (in English) and autoroute (in English and French) being preferred. English terms are rare, and only found on bilingual signage of expressways (abbreviated "expy") found in Montreal around bridges and on the Bonaventure Expressway; these signs are controlled by the federal government. Most of the Autoroutes are built, or at least designed, to be upgraded to full freeways (initially constructed as two-lane expressways); a notable exception is the section of Autoroute 20 through Vaudreuil-Dorion and L'Île-Perrot, which is an 8 km urban boulevard.
United States
In the United States, an expressway is defined by the federal government’s Manual on Uniform Traffic Control Devices as a divided highway with partial control of access. In contrast, a freeway is defined as a divided highway with full control of access. The difference between partial and full access control is that expressways may have a limited number of driveways and at-grade intersections (thus making them a form of high-speed arterial road), while access to freeways is allowed only at grade-separated interchanges. Expressways under this definition do not conform to Interstate highway standards (which ban all driveways and at-grade intersections) and are therefore usually numbered as state highways or U.S. Highways.
This distinction was first developed in 1949 by the Special Committee on Nomenclature of what is now the American Association of State Highway and Transportation Officials (AASHTO). In turn, the definitions were incorporated into AASHTO's official standards book, the Manual on Uniform Traffic Control Devices, which would become the national standards book of the U.S. Department of Transportation under a 1966 federal statute. The same distinction has also been codified into the statutory law of eight states: California, Minnesota, Mississippi, Missouri, Nebraska, North Dakota, Ohio, and Wisconsin.
However, each state codified the federal distinction slightly differently. California expressways do not necessarily have to be divided, though they must have at least partial access control. For both terms to apply in Wisconsin, a divided highway must be at least four lanes wide. In Missouri, both terms apply only to divided highways at least 10 miles long that are not part of the Interstate Highway System. In North Dakota and Mississippi, an expressway may have "full or partial" access control and "generally" has grade separations at intersections; a freeway is then defined as an expressway with full access control. Ohio's statute is similar, but instead of the vague word "generally," it imposes a requirement that 50% of an expressway's intersections must be grade-separated for the term to apply. Only Minnesota enacted the exact MUTCD definitions, in May 2008.
However, many states around the Great Lakes region and along the Eastern Seaboard have refused to conform their terminology to the federal definition. The following states officially prefer the term expressway instead of freeway to describe what are technically freeways in federal parlance: Connecticut, Florida, Illinois, Maryland, and West Virginia. In those states, it is common to find Interstate highways that bear the name expressway. Ultimately, it is the federal definition that determines whether a road is classified as an expressway or a freeway, no matter the preferred term. No state, for instance, could have what is technically an expressway given Interstate status just because it semantically uses the term interchangeably with freeway.
Most expressways under the federal definition have speed limits of 45-55 mph (70–90 km/h) in urban areas and 55-70 mph (90–110 km/h) in rural areas. Urban expressways are usually free of private driveways, but occasional exceptions include direct driveways to gas stations and shopping malls at major intersections (which would never be allowed on a true freeway).
The vast majority of expressways are built by state governments, or by private companies, which then operate them as toll roads pursuant to a license from the state government.
A famous example of a local government getting into the expressway business is Santa Clara County in California, which deliberately built its own expressway system in the 1960s to supplement the freeway system then planned by Caltrans. Although the county originally planned to upgrade the expressways into full-fledged freeways, such a project became politically infeasible after the rise of the tax revolt movement in the mid-1970s, which culminated in the passage of California Proposition 13 in 1978.
South America
Brazil
In Brazil, an expressway is known as Via Expressa and its function is to connect the most important streets and avenues of certain cities with their adjacent highways. Because of this, some expressways are numbered (in the same way as highways). According to the Código Brasileiro de Trânsito (Brazilian Traffic Code), expressways are officially defined as Vias de Trânsito Rápido (Rapid Transit Routes) and are considered the most important urban roads, with standard speed limits of 80 km/h (unless specified). A few examples of expressways include Marginal Tietê and Marginal Pinheiros in São Paulo; Avenida Brasil, Red Line and Yellow Line in Rio de Janeiro; among others.
See also
Supercorridor
References
Types of roads |
47926105 | https://en.wikipedia.org/wiki/Open%20Energy%20Modelling%20Initiative | Open Energy Modelling Initiative | The Open Energy Modelling Initiative (openmod) is a grassroots community of energy system modellers from universities and research institutes across Europe and elsewhere. The initiative promotes the use of open-source software and open data in energy system modelling for research and policy advice. The Open Energy Modelling Initiative documents a variety of open-source energy models and addresses practical and conceptual issues regarding their development and application. The initiative runs an email list, an internet forum, and a wiki and hosts occasional academic workshops. A statement of aims is available.
Context
The application of open-source development to energy modelling dates back to around 2003. This section provides some background for the growing interest in open methods.
Growth in open energy modelling
Just two active open energy modelling projects were cited in a 2011 paper: OSeMOSYS and TEMOA. Balmorel was also public at that time, having been made available on a website in 2001.
the openmod wiki lists 24 such undertakings.
the Open Energy Platform lists 17 open energy frameworks and about 50 open energy models.
Academic literature
A 2012 paper presents the case for using "open, publicly accessible software and data as well as crowdsourcing techniques to develop robust energy analysis tools". The paper claims that these techniques can produce high-quality results and are particularly relevant for developing countries.
There is an increasing call for the energy models and datasets used for energy policy analysis and advice to be made public in the interests of transparency and quality. A 2010 paper concerning energy efficiency modeling argues that "an open peer review process can greatly support model verification and validation, which are essential for model development". One 2012 study argues that the source code and datasets used in such models should be placed under publicly accessible version control to enable third-parties to run and check specific models. Another 2014 study argues that the public trust needed to underpin a rapid transition in energy systems can only be built through the use of transparent open-source energy models. The UK TIMES project (UKTM) is open source, according to a 2014 presentation, because "energy modelling must be replicable and verifiable to be considered part of the scientific process" and because this fits with the "drive towards clarity and quality assurance in the provision of policy insights". In 2016, the Deep Decarbonization Pathways Project (DDPP) was seeking to improve its modelling methodologies, a key motivation being "the intertwined goals of transparency, communicability and policy credibility." A 2016 paper argues that model-based energy scenario studies, wishing to influence decision-makers in government and industry, must become more comprehensible and more transparent. To these ends, the paper provides a checklist of transparency criteria that should be completed by modelers. The authors note, however, that they "consider open source approaches to be an extreme case of transparency that does not automatically facilitate the comprehensibility of studies for policy advice." An editorial from 2016 opines that closed energy models providing public policy support "are inconsistent with the open access movement [and] funded research". A 2017 paper lists the benefits of open data and models and the reasons that many projects nonetheless remain closed. The paper makes a number of recommendations for projects wishing to transition to a more open approach. The authors also conclude that, in terms of openness, energy research has lagged behind other fields, most notably physics, biotechnology, and medicine.
A one-page opinion piece in Nature News from 2017 advances the case for using open energy data and modeling to build public trust in policy analysis. The article also argues that scientific journals have a responsibility to require that data and code be submitted alongside text for scrutiny; currently, only Energy Economics makes this practice mandatory within the energy domain.
Copyright and open energy data
Issues surrounding copyright remain at the forefront with regard to open energy data. Most energy datasets are collated and published by official or semi-official sources, for example, national statistics offices, transmission system operators, and electricity market operators. The doctrine of open data requires that these datasets be available under free licenses (such as ) or be in the public domain. But most published energy datasets carry proprietary licenses, limiting their reuse in numerical and statistical models, open or otherwise. Measures to enforce market transparency have not helped because the associated information is normally licensed to preclude downstream usage. Recent transparency measures include the 2013 European energy market transparency regulation 543/2013 and a 2016 amendment to the German Energy Industry Act to establish a national energy information platform, slated to launch on 1 July 2017. Energy databases may also be protected under general database law, irrespective of the copyright status of the information they hold.
In December 2017, participants from the Open Energy Modelling Initiative and allied research communities made a written submission to the European Commission on the of public sector information. The document provides a comprehensive account of the data issues faced by researchers engaged in open energy system modeling and energy market analysis and quoted extensively from a German legal opinion.
In May 2020, participants from the Open Energy Modelling Initiative made a further submission on the European strategy for data. In mid-2021, participants made two written submissions on a proposed Data Act legislative work-in-progress intended primarily to improve public interest business-to-government (B2G) information transfers within the European Economic Area (EEA). More specifically, the two Data Act submissions drew attention to restrictive but nonetheless compliant public disclosure reporting practices deployed by the European Energy Exchange (EEX).
Public policy support
In May 2016, the European Union announced that "all scientific articles in Europe must be freely accessible as of 2020". This is a step in the right direction, but the new policy makes no mention of open software and its importance to the scientific process. In August 2016, the United States government announced a new federal source code policy which mandates that at least 20% of custom source code developed by or for any agency of the federal government be released as open-source software (OSS). The US Department of Energy (DOE) is participating in the program. The project is hosted on a dedicated website and subject to a three-year pilot. Open-source campaigners are using the initiative to advocate that European governments adopt similar practices. In 2017 the Free Software Foundation Europe (FSFE) issued a position paper calling for free software and open standards to be central to European science funding, including the flagship EU program Horizon 2020. The position paper focuses on open data and open data processing, and the question of open modeling is not traversed per se.
Workshops
The Open Energy Modelling Initiative participants take turns to host regular academic workshops.
The Open Energy Modelling Initiative also holds occasional specialist meetings.
See also
Crowdsourcing
Energy modeling
Energy system – the interpretation of the energy sector in system terms
Free Software Foundation Europe – a non-profit organization advocating for free software in Europe
Open data
Open energy system models – a review of energy system models that are also open source
Open energy system databases – database projects which collect, clean, and republish energy-related datasets
Notes
Further reading
GenerationR open science blog on the openmod community
Introductory video on open energy system modeling using the python language as an example
Introductory video on the Open Energy Outlook (OEO) project specific to the United States
External links
Related to openmod
Open Energy Modelling Initiative website
Open Energy Modelling Initiative wiki
Open Energy Modelling Initiative discussion forum
Open Energy Modelling Initiative email list archive
Open Energy Modelling Initiative YouTube channel
Open Energy Modelling Initiative GitHub account
Open Energy Modelling Initiative twitter feed
Open energy data
Open Energy Platform – a collaborative versioned database for storing open energy system model datasets
Enipedia – a semantic wiki-site and database covering energy systems data worldwide
Energypedia – a wiki-based collaborative knowledge exchange covering sustainable energy topics in developing countries
Open Power System Data project – triggered by the work of the Open Energy Modelling Initiative
OpenEI – a US-based open energy data portal
Similar initiatives
soundsoftware.ac.uk – an open modelling community for acoustic and music software
Other
REEEM – a scientific project modeling sustainable energy futures for Europe
EERAdata – a project exploring FAIR energy data for Europe
References
Economics models
Energy development
Energy models
Energy organizations
Energy policy
Free and open-source software organizations
Mathematical modeling
Open data
Open science
Simulation |
8666284 | https://en.wikipedia.org/wiki/Cardkey | Cardkey | Cardkey was a producer of electronic access control products and was based in Simi Valley, California. They were the first company to develop and widely distribute "Electronic Access Control Systems".
The company's original readers used cards which were made from barium ferrite and worked by magnetically attracting and repelling locking/unlocking cores within the reader module mechanism. These cards were primarily used by fraternal organizations and clubs, such as BPOE (Elks) and others.
From there they were the first to develop Wiegand cards and readers which were again magnetically based but were more reliable and did not require calibration as used in the barium ferrite readers. These cards and readers were highly programmable, and used in applications ranging from ADT (American District Telegraph) to government installations worldwide.
In the UK they had offices in Manchester and Reading and sold their systems to companies such as British Telecom, Shell and BP. The main facility for Cardkey was located at Nordhoff and Mason in Chatsworth, with an additional location on Cozycroft in Chatsworth, which housed Customer Engineering and a single engineering project (the PASS system) in approximately 1978.
They were once a division of Greer Hydraulics, Inc., and in January 1999 they were bought out by Johnson Controls, a company whose founder invented the electric room thermostat in the late 1800s.
See also
card reader
key card
References
Security companies of the United States
Access control |
15161 | https://en.wikipedia.org/wiki/I486 | I486 | The Intel 486, officially named i486 and also known as 80486, is a microprocessor. It is a higher-performance follow-up to the Intel 386. The i486 was introduced in 1989. It represents the fourth generation of binary compatible CPUs following the 8086 of 1978, the Intel 80286 of 1982, and 1985's i386.
It was the first tightly-pipelined x86 design as well as the first x86 chip to include more than one million transistors. It offered a large on-chip cache and an integrated floating-point unit.
A typical 50 MHz i486 executes around 40 million instructions per second (MIPS), reaching 50 MIPS peak performance. It is approximately twice as fast as the i386 or i286 per clock cycle. The i486's improved performance is thanks to its five-stage pipeline with all stages bound to a single cycle. The enhanced on-chip FPU was significantly faster than the i387 per cycle. The Intel 80387 FPU ("i387") was a separate, optional math coprocessor that was installed in a motherboard socket alongside the i386.
The i486 was succeeded by the original Pentium.
History
The i486 was announced at Spring Comdex in April 1989. At the announcement, Intel stated that samples would be available in the third quarter and production quantities would ship in the fourth quarter. The first i486-based PCs were announced in late 1989.
The first major update to the i486 design came in March 1992 with the release of the clock-doubled 486DX2 series. It was the first time that the CPU core clock frequency was separated from the system bus clock frequency by using a dual clock multiplier, supporting 486DX2 chips at 40 and 50 MHz. The faster 66 MHz 486DX2-66 was released that August.
The fifth-generation Pentium processor launched in 1993, while Intel continued to produce i486 processors, including the triple-clock-rate 486DX4-100 with a 100 MHz clock speed and an L1 cache doubled to 16 KB.
Earlier, Intel had decided not to share its 80386 and 80486 technologies with AMD. However, AMD believed that their technology sharing agreement extended to the 80386 as a derivative of the 80286. AMD reverse-engineered the Intel 386 chip and produced the 40 MHz Am386DX-40 chip, which was cheaper and had lower power consumption than Intel's best 33 MHz version. Intel attempted to prevent AMD from selling the processor, but AMD won in court, which allowed it to establish itself as a competitor.
AMD continued to create clones, releasing the first-generation Am486 chip in April 1993 with clock frequencies of 25, 33 and 40 MHz. Second-generation Am486DX2 chips with 50, 66 and 80 MHz clock frequencies were released the following year. The Am486 series was completed with a 120 MHz DX4 chip in 1995.
AMD's long-running 1987 arbitration lawsuit against Intel was settled in 1995, and AMD gained access to Intel's 80486 microcode. This led to the creation of two versions of AMD's 486 processor: one reverse-engineered from Intel's microcode, and the other using AMD's own microcode developed in a cleanroom process. However, the settlement also concluded that the 80486 would be AMD's last Intel clone.
Another 486 clone manufacturer was Cyrix, which was a fabless co-processor chip maker for 80286/386 systems. The first Cyrix 486 processors, the 486SLC and 486DLC, were released in 1992 and used the 80386 package. Both Texas Instruments-manufactured Cyrix processors were pin-compatible with 386SX/DX systems, which allowed them to become an upgrade option. However, these chips could not match the Intel 486 processors, having only 1 KB of cache memory and no built-in math coprocessor. In 1993, Cyrix released its own Cx486DX and DX2 processors, which were closer in performance to Intel's counterparts. Intel and Cyrix sued each other, with Intel alleging patent infringement and Cyrix making antitrust claims. In 1994, Cyrix won and dropped its antitrust claim.
In 1995, both Cyrix and AMD began looking at a ready market for users wanting to upgrade their processors. Cyrix released a derivative 486 processor called the 5x86, based on the Cyrix M1 core, which was clocked up to 120 MHz and was an option for 486 Socket 3 motherboards. AMD released a 133 MHz Am5x86 upgrade chip, which was essentially an improved 80486 with double the cache and a quad multiplier that also worked with the original 486DX motherboards. Am5x86 was the first processor to use AMD's performance rating and was marketed as Am5x86-P75, with claims that it was equivalent to the Pentium 75. Kingston Technology launched a 'TurboChip' 486 system upgrade that used a 133 MHz Am5x86.
Intel responded by making a Pentium OverDrive upgrade chip for 486 motherboards, which was a modified Pentium core that ran at up to 83 MHz on boards with a 25 or 33 MHz front-side bus clock. The OverDrive was unpopular due to its speed and price. The 486 was declared obsolete as early as 1996, with a Florida school district's purchase of a fleet of 486DX4 machines in that year sparking controversy. New computers equipped with 486 processors became scarce in discount warehouses, and an IBM spokesperson called the 486 a "dinosaur". Even after the Pentium series of processors gained a foothold in the market, however, Intel continued to produce 486 cores for industrial embedded applications. Intel discontinued production of i486 processors in late 2007.
Improvements
The instruction set of the i486 is very similar to the i386, with the addition of a few extra instructions, such as CMPXCHG, a compare-and-swap atomic operation, and XADD, a fetch-and-add atomic operation that returned the original value (unlike a standard ADD, which returns flags only).
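To illustrate the semantics of the two new atomic primitives, the following sketch models in C what XADD and CMPXCHG do with their operands; the function names are illustrative only, and the hardware additionally performs the whole operation atomically when a LOCK prefix is used, which plain C cannot express.

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdbool.h>

/* Model of XADD dest, src: the destination receives dest + src and
 * the source operand receives the original destination value. */
static uint32_t xadd_model(uint32_t *dest, uint32_t *src)
{
    uint32_t old = *dest;   /* value before the add */
    *dest = old + *src;     /* destination gets the sum */
    *src  = old;            /* source gets the original value */
    return old;             /* the returned value enables fetch-and-add loops */
}

/* Model of CMPXCHG dest, desired: the accumulator (EAX) holds the
 * expected value; if it matches *dest, desired is stored and the
 * zero flag is set, otherwise the accumulator is loaded with *dest. */
static bool cmpxchg_model(uint32_t *dest, uint32_t *accumulator, uint32_t desired)
{
    if (*dest == *accumulator) {
        *dest = desired;
        return true;        /* corresponds to ZF = 1 */
    }
    *accumulator = *dest;
    return false;           /* corresponds to ZF = 0 */
}
</syntaxhighlight>

A lock-free counter, for example, can be built by retrying cmpxchg_model until it succeeds, or more directly with a single xadd_model call.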
The i486's performance architecture is a vast improvement over the i386. It has an on-chip unified instruction and data cache, an on-chip floating-point unit (FPU) and an enhanced bus interface unit. Due to the tight pipelining, sequences of simple instructions (such as ALU reg,reg and ALU reg,im) could sustain single-clock-cycle throughput (one instruction completed every clock). These improvements yielded a rough doubling in integer ALU performance over the i386 at the same clock rate. A 16 MHz i486 therefore had performance similar to a 33 MHz i386. The older design had to reach 50 MHz to be comparable with a 25 MHz i486 part.
Differences between i386 and i486
An 8 KB on-chip (level 1) SRAM cache stores the most recently used instructions and data (16 KB and/or write-back on some later models). The i386 had no internal cache but supported a slower off-chip cache (not officially a level 2 cache because i386 had no internal level 1 cache).
An enhanced external bus protocol to enable cache coherency and a new burst mode for memory accesses to fill a cache line of 16 bytes within five bus cycles. The 386 needed eight bus cycles to transfer the same amount of data.
Tightly coupled pipelining completes a simple instruction like ALU reg,reg or ALU reg,im every clock cycle (after a latency of several cycles). The i386 needed two clock cycles.
Integrated FPU (disabled or absent in SX models) with a dedicated local bus; together with faster algorithms on more extensive hardware than in the i387, this performed floating-point calculations faster than the i386/i387 combination.
Improved MMU performance.
New instructions: XADD, BSWAP, CMPXCHG, INVD, WBINVD, INVLPG.
Just as in the i386, a flat 4 GB memory model could be implemented. All "segment selector" registers could be set to a neutral value in protected mode, or to zero in real mode, and using only the 32-bit "offset registers" (x86-terminology for general CPU registers used as address registers) as a linear 32-bit virtual address bypassing the segmentation logic. Virtual addresses were then normally mapped onto physical addresses by the paging system except when it was disabled. (Real mode had no virtual addresses.) Just as with the i386, circumventing memory segmentation could substantially improve performance for some operating systems and applications.
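As a minimal sketch of the segmentation step that a flat memory model effectively bypasses, the following C fragment computes a linear address from a segment base and a 32-bit offset; with a flat descriptor whose base is zero, the offset is the linear address. Paging, limit checks and real-mode address formation are deliberately omitted, and the helper name is illustrative only.

<syntaxhighlight lang="c">
#include <stdint.h>

/* Protected-mode segmentation step only: linear = base + offset,
 * wrapping modulo 2^32 as on the hardware. Paging (when enabled)
 * then maps the linear address to a physical address. */
static uint32_t linear_address(uint32_t segment_base, uint32_t offset)
{
    return segment_base + offset;  /* flat model: base == 0, so linear == offset */
}
</syntaxhighlight>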
On a typical PC motherboard, either four matched 30-pin (8-bit) SIMMs or one 72-pin (32-bit) SIMM per bank were required to fit the i486's 32-bit data bus. The address bus used 30 bits (A31..A2) complemented by four byte-select pins (instead of A0, A1) to allow for any 8/16/32-bit selection. This meant that the limit of directly addressable physical memory was 4 gigabytes as well (2^30 32-bit words = 2^32 8-bit words).
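The following simplified sketch shows how the four byte-select pins stand in for A0 and A1: for an access that stays within one 32-bit doubleword, the asserted byte-enable lines are determined by the low two address bits and the access size. Misaligned accesses that cross a doubleword boundary, which the real bus splits into two cycles, are ignored here, and the function name is illustrative.

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

/* Return a 4-bit mask of asserted byte enables for an access of
 * 'size' bytes (1, 2 or 4) at 'addr'; bit n stands for byte lane n
 * (BE0#..BE3#) of the 32-bit data bus. Accesses crossing a
 * doubleword boundary are not handled in this simplified model. */
static unsigned byte_enables(uint32_t addr, unsigned size)
{
    unsigned first_lane = addr & 3;   /* what A1..A0 would have selected */
    return (((1u << size) - 1u) << first_lane) & 0xFu;
}

int main(void)
{
    printf("16-bit access at 0x100: mask 0x%X\n", byte_enables(0x100, 2)); /* lanes 0-1 */
    printf("8-bit access at 0x103: mask 0x%X\n", byte_enables(0x103, 1));  /* lane 3 */
    return 0;
}
</syntaxhighlight>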
Models
Intel offered several suffixes and variants (see table). Variants include:
Intel RapidCAD: a specially packaged Intel 486DX and a dummy floating-point unit (FPU) designed as pin-compatible replacements for an i386 processor and 80387 FPU.
i486SL-NM: i486SL based on i486SX.
i487SX (P23N): i486DX with one extra pin sold as an FPU upgrade to i486SX systems; when the i487SX was installed, it ensured that an i486SX was present on the motherboard but disabled it, taking over all of its functions.
i486 OverDrive (P23T/P24T): i486SX, i486SX2, i486DX2 or i486DX4. Marked as upgrade processors, some models had different pinouts or voltage-handling abilities from "standard" chips of the same speed. Fitted to a coprocessor or "OverDrive" socket on the motherboard, they worked the same as the i487SX.
The maximal internal clock frequency (on Intel's versions) ranged from 16 to 100 MHz. The 16 MHz i486SX model was used by Dell Computers.
One of the few i486 models specified for a 50 MHz bus (486DX-50) initially had overheating problems and was moved to the 0.8-micrometer fabrication process. However, problems continued when the 486DX-50 was installed in local-bus systems due to the high bus speed, making it unpopular with mainstream consumers. Local-bus video was considered a requirement at the time, though the 486DX-50 remained popular with users of EISA systems. The 486DX-50 was soon eclipsed by the clock-doubled i486DX2, which, although running the internal CPU logic at twice the external bus speed (50 MHz), was nevertheless slower overall because the external bus ran at only 25 MHz. The i486DX2 at 66 MHz (with a 33 MHz external bus) was faster than the 486DX-50 overall.
More powerful i486 iterations such as the OverDrive and DX4 were less popular (the latter available as an OEM part only), as they came out after Intel had released the next-generation Pentium processor family. Certain steppings of the DX4 also officially supported 50 MHz bus operation, but it was a seldom-used feature.
{| class="wikitable"
! || Model || CPU/bus clock speed || Voltage || L1 cache* || Introduced
! width="520px" | Notes
|-
| || i486DX (P4) || 20, 25 MHz; 33 MHz; 50 MHz || 5 V || 8 KB WT || April 1989; May 1990; June 1991 || The original chip without clock multiplier
|-
| || i486SL || 20, 25, 33 MHz || 5 V or 3.3 V || 8 KB WT || November 1992 || Low-power version of the i486DX, reduced VCore, SMM (System Management Mode), stop clock, and power-saving features — mainly for use in portable computers
|-
| || i486SX (P23) || 16, 20, 25 MHz; 33 MHz || 5 V || 8 KB WT || September 1991; September 1992 || An i486DX with the FPU part disabled; later versions had the FPU removed from the die to reduce area and hence cost.
|-
| || i486DX2 (P24) || 40/20, 50/25 MHz; 66/33 MHz || 5 V || 8 KB WT || March 1992; August 1992 || The internal processor clock runs at twice the clock rate of the external bus clock
|-
| || i486DX-S (P4S) || 33 MHz; 50 MHz || 5 V or 3.3 V || 8 KB WT || June 1993 || SL Enhanced 486DX
|-
| || i486DX2-S (P24S) || 40/20 MHz, 50/25 MHz, (66/33 MHz) || 5 V or 3.3 V || 8 KB WT || June 1993 ||
|-
| || i486SX-S (P23S) || 25, 33 MHz || 5 V or 3.3 V || 8 KB WT || June 1993 || SL Enhanced 486SX
|-
| || i486SX2 || 50/25, 66/33 MHz || 5 V || 8 KB WT || March 1994 || i486DX2 with the FPU disabled
|-
| || IntelDX4 (P24C) || 75/25, 100/33 MHz || 3.3 V || 16 KB WT || March 1994 || Designed to run at triple clock rate (not quadruple, as often believed; the DX3, which was meant to run at 2.5× the clock speed, was never released). DX4 models that featured write-back cache were identified by an "&EW" laser-etched into their top surface, while the write-through models were identified by "&E".
|-
| || i486DX2WB (P24D) || 50/25 MHz, 66/33 MHz || 5 V || 8 KB WB || October 1994 || Enabled write-back cache.
|-
| || IntelDX4WB || 100/33 MHz || 3.3 V || 16 KB WB || October 1994 ||
|-
| || i486DX2 (P24LM) || 90/30 MHz, 100/33 MHz || 2.5–2.9 V || 8 KB WT || 1994 ||
|-
| || i486GX || up to 33 MHz || 3.3 V || 8 KB WT || || Embedded ultra-low-power CPU with all features of the i486SX and 16-bit external data bus. This CPU is for embedded battery-operated and hand-held applications.
|}
*WT = write-through cache strategy, WB = write-back cache strategy
Other makers of 486-like CPUs
Processors compatible with the i486 were produced by companies such as IBM, Texas Instruments, AMD, Cyrix, UMC, and STMicroelectronics (formerly SGS-Thomson). Some were clones (identical at the microarchitectural level), others were clean room implementations of the Intel instruction set. (IBM's multiple-source requirement was one of the reasons behind its x86 manufacturing since the 80286.) The i486 was, however, covered by many Intel patents, including from the prior i386. Intel and IBM had broad cross-licenses of these patents, and AMD was granted rights to the relevant patents in the 1995 settlement of a lawsuit between the companies.
AMD produced several clones using a 40 MHz bus (486DX-40, 486DX/2-80, and 486DX/4-120) which had no Intel equivalent, as well as a part specified for 90 MHz, using a 30 MHz external clock, that was sold only to OEMs. The fastest running i486-compatible CPU, the Am5x86, ran at 133 MHz and was released by AMD in 1995. 150 MHz and 160 MHz parts were planned but never officially released.
Cyrix made a variety of i486-compatible processors, positioned at the cost-sensitive desktop and low-power (laptop) markets. Unlike AMD's 486 clones, the Cyrix processors were the result of clean-room reverse engineering. Cyrix's early offerings included the 486DLC and 486SLC, two hybrid chips that plugged into 386DX or SX sockets respectively, and offered 1 KB of cache (versus 8 KB for the then-current Intel/AMD parts). Cyrix also made "real" 486 processors, which plugged into the i486's socket and offered 2 or 8 KB of cache. Clock-for-clock, the Cyrix-made chips were generally slower than their Intel/AMD equivalents, though later products with 8 KB caches were more competitive, albeit late to market.
The Motorola 68040, while not i486 compatible, was often positioned as its equivalent in features and performance. On a clock-for-clock basis, the Motorola 68040 could significantly outperform the Intel chip. However, the i486 had the ability to be clocked significantly faster without overheating, and 68040 performance lagged behind later production i486 systems.
Motherboards and buses
Early i486-based computers were equipped with several ISA slots (using an emulated PC/AT-bus) and sometimes one or two 8-bit-only slots (compatible with the PC/XT-bus). Many motherboards allowed these to be overclocked from the default 6 or 8 MHz to perhaps 16.7 or 20 MHz (half the i486 bus clock) in several steps, often from within the BIOS setup. Older peripheral cards in particular usually worked well at such speeds, as they often used standard MSI chips instead of the slower (at the time) custom VLSI designs. This could give significant performance gains, for example with old video cards moved from a 386 or 286 computer. However, operation beyond 8 or 10 MHz could sometimes lead to stability problems, at least in systems equipped with SCSI or sound cards.
Some motherboards came equipped with a 32-bit EISA bus that was backward compatible with the ISA standard. EISA offered attractive features such as increased bandwidth, extended addressing, IRQ sharing, and card configuration through software (rather than through jumpers, DIP switches, etc.). However, EISA cards were expensive and therefore mostly employed in servers and workstations. Consumer desktops often used the simpler but faster VESA Local Bus (VLB), which was unfortunately prone to electrical and timing-based instability; typical consumer desktops had ISA slots combined with a single VLB slot for a video card. VLB was gradually replaced by PCI during the final years of the i486 period. Few Pentium-class motherboards had VLB support, as VLB was based directly on the i486 bus, which differs considerably from the P5 Pentium bus. ISA persisted through the P5 Pentium generation and was not completely displaced by PCI until the Pentium III era.
Late i486 boards were normally equipped with both PCI and ISA slots, and sometimes a single VLB slot. In this configuration, VLB or PCI throughput suffered depending on how buses were bridged. Initially, the VLB slot in these systems was usually fully compatible only with video cards (fitting as "VESA" stands for Video Electronics Standards Association); VLB-IDE, multi I/O, or SCSI cards could have problems on motherboards with PCI slots. The VL-Bus operated at the same clock speed as the i486-bus (basically a local bus) while the PCI bus also usually depended on the i486 clock but sometimes had a divider setting available via the BIOS. This could be set to 1/1 or 1/2, sometimes even 2/3 (for 50 MHz CPU clocks). Some motherboards limited the PCI clock to the specified maximum of 33 MHz and certain network cards depended on this frequency for correct bit-rates. The ISA clock was typically generated by a divider of the CPU/VLB/PCI clock.
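The relationship between the CPU bus clock and the derived bus clocks described above can be illustrated with a short sketch. The PCI divider values are the ones mentioned in the text, while the ISA divider used in the example is only an assumed illustrative figure.

```python
from fractions import Fraction

def derived_bus_clocks(cpu_bus_mhz: float, pci_divider: Fraction, isa_divider: Fraction):
    """Return the VLB, PCI and ISA clock rates derived from the i486 bus clock.

    The VL-Bus runs at the CPU bus clock itself; PCI and ISA are divided
    down from it by ratios configured in the BIOS or chipset.
    """
    vlb = cpu_bus_mhz                       # VLB is a local bus: same clock as the CPU bus
    pci = cpu_bus_mhz * float(pci_divider)  # e.g. 1/1, 1/2 or 2/3 of the bus clock
    isa = cpu_bus_mhz * float(isa_divider)  # ISA is divided further, typically to around 8 MHz
    return vlb, pci, isa

# A 50 MHz bus clock with the 2/3 divider keeps PCI at its specified 33 MHz maximum;
# the 1/6 ISA divider here is only an illustrative value.
print(derived_bus_clocks(50.0, Fraction(2, 3), Fraction(1, 6)))
```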
One of the earliest complete systems to use the i486 chip was the Apricot VX FT, produced by the British hardware manufacturer Apricot Computers. Even overseas in the United States it was promoted as "The World's First 486".
Later i486 boards supported Plug and Play, a specification designed by Microsoft that began as part of Windows 95 to make component installation easier for consumers.
Obsolescence
The AMD Am5x86 and Cyrix Cx5x86 were the last i486 processors, often used in late-generation i486 motherboards with PCI slots and 72-pin SIMMs that were designed to run Windows 95; they were also used to upgrade older 80486 motherboards. While the Cyrix Cx5x86 faded when the Cyrix 6x86 took over, the AMD Am5x86 remained important given the delays to the AMD K5.
Computers based on the i486 remained popular through the late 1990s, serving as low-end processors for entry-level PCs. Production for traditional desktop and laptop systems ceased in 1998, when Intel introduced the Celeron brand, though it continued to be produced for embedded systems through the late 2000s.
In the general-purpose desktop computer role, i486-based machines remained in use into the early 2000s, especially as Windows 95 through Windows 98 and Windows NT 4.0 were the last Microsoft operating systems to officially support i486-based systems. As newer operating systems overtook them, i486 systems fell out of use except for backward compatibility with older programs (most notably games) that had problems running on newer systems. DOSBox, available for later operating systems, emulates the i486 instruction set and provides full compatibility with most DOS-based programs.
The i486 was eventually overtaken by the Pentium for personal computer applications, although Intel continued production for use in embedded systems. In May 2006, Intel announced that production of the i486 would stop at the end of September 2007.
See also
List of Intel microprocessors
Motorola 68040, although not compatible, was often positioned as the Motorola equivalent to the Intel 486 in terms of performance and features.
VL86C020, ARM3 core of similar time frame and comparable MIPS performance on integer code (25 MHz for both), with 310,000 transistors (in a 1.5 µm process) instead of 1 million
Notes
References
External links
Intel486 datasheets
Low power SX and DX with variable freq. Dec 1992
EMBEDDED ULTRA-LOW POWER Intel 486 SX
Embedded Write-Back Enhanced Intel DX4. Oct 1995
Intel i486 DX images and descriptions at cpu-collection.de
Die photo of Intel 386DX
Computer-related introductions in 1989
486
32-bit microprocessors |
7430463 | https://en.wikipedia.org/wiki/Lake%20Worth%20Community%20High%20School | Lake Worth Community High School | Lake Worth Community High School is a public high school located in Lake Worth Beach, Florida. Established in 1922 as Lake Worth High School, it is currently one of Palm Beach County's largest schools.
The Palm Beach County School Board added the word Community to the names of all public high schools, including Lake Worth High School, in the 1980s.
Academics
Lake Worth Community High School offers many Advanced Placement (AP) and AICE courses, as well as many honors courses.
English
Four credits of English are required. Courses range from English 1 up to AP English Language and Literature.
Social Studies
One credit of World History, one credit of American History, 0.5 credit of Economics, and 0.5 credit of Government are required to graduate. World History is usually taken during sophomore year, American History during junior year, and Economics and Government during senior year.
Math
Four credits of math are required to graduate. Algebra 1, Geometry, Algebra II, Pre-Calculus, AP Calculus AB, AP Calculus BC, and AP Statistics are offered.
Science
Three credits of science are needed to graduate. Offerings include Integrated Science I (formerly known as Earth Science), Integrated Science II, Biology, Chemistry, Anatomy & Physiology, Physics, and Environmental Science.
Some courses are also available in AP versions.
Foreign Languages
Foreign language study is required to graduate from high school, and at least two credits are required for college acceptance. Spanish and French are both offered, from Spanish 1 and French 1 up to AP Spanish Language and Literature and AP French Language.
Choice Academies
Lake Worth Community High School has seven Choice Academies.
3DE by Junior Achievement
Air Force JROTC
Criminal Justice
Culinary Arts
Drafting and Design
Early Childhood
Medical Science / Biomedical Sciences
The school also has one in-house Academy which is Construction Technology.
Athletics
The Lake Worth Community High School mascot is the Trojan (Troy Trojan), and the school colors are maroon, silver, and white. The current athletic director is Frank Baxley.
Trojan Athletics include:
Baseball
Basketball - Girls & Boys
Bowling- Girls & Boys
Cheerleading
Cross Country - Girls & Boys
Football - Boys
Flag Football - Girls
Golf - Girls & Boys
Soccer - Girls & Boys
Softball
Swimming - Girls & Boys
Tennis - Girls & Boys
Track and Field - Girls & Boys
Volleyball - Girls & Boys
Water Polo - Girls & Boys
Weightlifting - Girls & Boys
Wrestling
Lake Worth has long-running crosstown rivalries with the Atlantic Community High School Eagles, the Santaluces Community High School Chiefs, the John I. Leonard Community High School Lancers, and most recently the Park Vista Community High School Cobras.
Notable alumni
Daniel Cane, co-creator of Blackboard Learning Management Systems.
Mayo Smith, former Major League Baseball player and manager (Class of 1932)
Mark Foley, former member of the United States House of Representatives (Class of 1973)
Deidre Hall and Andrea Hall, twin sisters; both stars of the soap opera Days of Our Lives (Class of 1965)
Herb Score, former MLB All-Star and 1955 American League Rookie of the Year for the Cleveland Indians and Chicago White Sox (Class of 1952)
Otis Thorpe, former Providence College Friar and NBA basketball player for the Kansas City Kings (Class of 1980)
Robert McKnight, former Florida State Senator and Representative (Class of 1962)
Scott Levy, professional wrestler known as Raven (Class of 1982)
Joe Looney, professional NFL football player for the Dallas Cowboys (Class of 2008)
LaVon Brazill, former American football player (NFL) with the Indianapolis Colts (Class of 2007)
Stanley Shakespeare, former American football player (NFL)
Scott Henderson, award-winning jazz guitarist (Class of 1972)
James Looney, NFL player
References
External links
Lake Worth High School Alumni Foundation
Lake Worth Dollars for Scholars (DFS)
School District of Palm Beach County (SDPBC)
Buildings and structures in Lake Worth Beach, Florida
High schools in Palm Beach County, Florida
Educational institutions established in 1922
Public high schools in Florida
1922 establishments in Florida |
212098 | https://en.wikipedia.org/wiki/Geometric%20primitive | Geometric primitive | In vector computer graphics, CAD systems, and geographic information systems, a geometric primitive (or prim) is the simplest (i.e. 'atomic' or irreducible) geometric shape that the system can handle (draw, store). Sometimes the subroutines that draw the corresponding objects are called "geometric primitives" as well. The most "primitive" primitives are the point and the straight line segment, which were all that early vector graphics systems had.
In constructive solid geometry, primitives are simple geometric shapes such as a cube, cylinder, sphere, cone, pyramid, or torus.
Modern 2D computer graphics systems may operate with primitives which are lines (segments of straight lines, circles and more complicated curves), as well as shapes (boxes, arbitrary polygons, circles).
A common set of two-dimensional primitives includes lines, points, and polygons, although some people prefer to consider triangles primitives, because every polygon can be constructed from triangles. All other graphic elements are built up from these primitives. In three dimensions, triangles or polygons positioned in three-dimensional space can be used as primitives to model more complex 3D forms. In some cases, curves (such as Bézier curves, circles, etc.) may be considered primitives; in other cases, curves are complex forms created from many straight, primitive shapes.
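One way to make the point that every polygon can be constructed from triangles concrete is a fan triangulation. The sketch below assumes a convex polygon given as an ordered vertex list; concave polygons require a more general method such as ear clipping.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def fan_triangulate(polygon: List[Point]) -> List[Tuple[Point, Point, Point]]:
    """Split a convex polygon into triangles that all share the first vertex."""
    if len(polygon) < 3:
        raise ValueError("a polygon needs at least three vertices")
    anchor = polygon[0]
    # Each consecutive pair of remaining vertices forms one triangle with the anchor.
    return [(anchor, polygon[i], polygon[i + 1]) for i in range(1, len(polygon) - 1)]

# A unit square decomposes into two triangles.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(fan_triangulate(square))
```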
Common primitives
The set of geometric primitives is based on the dimension of the region being represented:
Point (0-dimensional), a single location with no height, width, or depth.
Line or curve (1-dimensional), having length but no width, although a linear feature may curve through a higher-dimensional space.
Planar surface or curved surface (2-dimensional), having length and width.
Volumetric region or solid (3-dimensional), having length, width, and depth.
In GIS, the terrain surface is often spoken of colloquially as "2 1/2 dimensional," because only the upper surface needs to be represented. Thus, elevation can be conceptualized as a scalar field property or function of two-dimensional space, affording it a number of data modeling efficiencies over true 3-dimensional objects.
A shape of any of these dimensions greater than zero consists of an infinite number of distinct points. Because digital systems are finite, only a sample set of the points in a shape can be stored. Thus, vector data structures typically represent geometric primitives using a strategic sample of points, organized in structures that let the software interpolate the remainder of the shape at analysis or display time using algorithms from computational geometry.
A Point is a single coordinate in a Cartesian coordinate system. Some data models allow for Multipoint features consisting of several disconnected points.
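A minimal sketch of how point and multipoint primitives might be stored follows; the class names and example coordinates are illustrative rather than taken from any particular data model.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Point:
    """A single coordinate pair in a planar Cartesian system."""
    x: float
    y: float

@dataclass(frozen=True)
class MultiPoint:
    """Several disconnected points treated as one feature."""
    points: Tuple[Point, ...]

city = Point(x=-80.06, y=26.62)                      # illustrative coordinates
islands = MultiPoint(points=(Point(0, 0), Point(3, 4)))
print(city, len(islands.points))
```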
A Polygonal chain or Polyline is an ordered list of points (termed vertices in this context). The software is expected to interpolate the intervening shape of the line between adjacent points in the list as a parametric curve, most commonly a straight line, but other types of curves are frequently available, including circular arcs, cubic splines, and Bézier curves. Some of these curves require additional points to be defined that are not on the line itself, but are used for parametric control.
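The interpolation role described above can be sketched for the simplest case, straight-line segments between stored vertices. The function below is an illustrative helper, not part of any GIS API; it evaluates the polyline at a fraction of its total length.

```python
import math
from typing import List, Tuple

Vertex = Tuple[float, float]

def point_along_polyline(vertices: List[Vertex], t: float) -> Vertex:
    """Return the point a fraction t (0..1) along a polyline, assuming
    straight-line interpolation between consecutive vertices."""
    seg_lengths = [math.dist(a, b) for a, b in zip(vertices, vertices[1:])]
    target = t * sum(seg_lengths)
    for (x0, y0), (x1, y1), length in zip(vertices, vertices[1:], seg_lengths):
        if target <= length and length > 0:
            f = target / length
            return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
        target -= length
    return vertices[-1]

# Halfway along an L-shaped line of total length 2 lands at the corner vertex.
print(point_along_polyline([(0, 0), (1, 0), (1, 1)], 0.5))
```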
A Polygon is a polyline that closes at its endpoints, representing the boundary of a two-dimensional region. The software is expected to use this boundary to partition 2-dimensional space into an interior and exterior. Some data models allow for a single feature to consist of multiple polylines, which could collectively connect to form a single closed boundary, could represent a set of disjoint regions (e.g., the state of Hawaii), or could represent a region with holes (e.g., a lake with an island).
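How a closed boundary partitions the plane into interior and exterior is commonly tested with ray casting. The sketch below is a simplified version that ignores edge cases such as points lying exactly on the boundary.

```python
from typing import List, Tuple

def point_in_polygon(pt: Tuple[float, float], ring: List[Tuple[float, float]]) -> bool:
    """Ray-casting test: count how often a ray going right from pt crosses the boundary."""
    x, y = pt
    inside = False
    n = len(ring)
    for i in range(n):
        x0, y0 = ring[i]
        x1, y1 = ring[(i + 1) % n]
        # The edge counts only if it straddles the horizontal line through pt
        # and the crossing lies to the right of pt.
        if (y0 > y) != (y1 > y):
            x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x_cross > x:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), square), point_in_polygon((5, 1), square))
```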
A Parametric shape is a standardized two-dimensional or three-dimensional shape defined by a minimal set of parameters, such as an ellipse defined by two points at its foci, or three points at its center, vertex, and co-vertex.
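The ellipse-by-foci parameterisation can be made concrete with a small membership test: a point lies on the ellipse when the sum of its distances to the two foci equals a constant. The tolerance and the example figures below are arbitrary choices for the sketch.

```python
import math
from typing import Tuple

Point = Tuple[float, float]

def on_ellipse(p: Point, f1: Point, f2: Point, distance_sum: float, tol: float = 1e-9) -> bool:
    """True if p lies on the ellipse defined by foci f1, f2 and a fixed
    sum of focal distances (equal to the major-axis length 2a)."""
    return abs(math.dist(p, f1) + math.dist(p, f2) - distance_sum) <= tol

# Foci at (-3, 0) and (3, 0) with distance sum 10 give the ellipse
# x^2/25 + y^2/16 = 1, so (0, 4) is a co-vertex on the curve.
print(on_ellipse((0.0, 4.0), (-3.0, 0.0), (3.0, 0.0), 10.0))
```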
A Polyhedron or Polygon mesh is a set of polygon faces in three-dimensional space that are connected at their edges to completely enclose a volumetric region. In some applications, such as modeling terrain, closure may not be required or may be implied. The software is expected to use this surface to partition 3-dimensional space into an interior and exterior. A triangle mesh is a subtype of polyhedron in which all faces must be triangles, the only polygon that is always planar; the Triangulated irregular network (TIN) widely used in GIS is one example.
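A triangle mesh is commonly stored as a shared vertex array plus faces given as index triples. The sketch below uses that layout and derives a face normal from a cross product; the tetrahedron data are only an example.

```python
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def face_normal(vertices: List[Vec3], face: Tuple[int, int, int]) -> Vec3:
    """Un-normalised normal of one triangular face (cross product of two edge vectors)."""
    a, b, c = (vertices[i] for i in face)
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# A tetrahedron: four shared vertices, four triangular faces indexing into them.
verts: List[Vec3] = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
print([face_normal(verts, f) for f in faces])
```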
A parametric mesh represents a three-dimensional surface by a connected set of parametric functions, similar to a spline or Bézier curve in two dimensions. The most common structure is the Non-uniform rational B-spline (NURBS), supported by most CAD and animation software.
Application in GIS
A wide variety of vector data structures and formats have been developed during the history of Geographic information systems, but they share a fundamental basis of storing a core set of geometric primitives to represent the location and extent of geographic phenomena. Locations of points are almost always measured within a standard Earth-based coordinate system, whether the spherical Geographic coordinate system (latitude/longitude), or a planar coordinate system, such as the Universal Transverse Mercator. They also share the need to store a set of attributes of each geographic feature alongside its shape; traditionally, this has been accomplished using the data models, data formats, and even software of relational databases.
Early vector formats, such as POLYVRT, the ARC/INFO Coverage, and the Esri shapefile, support a basic set of geometric primitives (points, polylines, and polygons), only in two-dimensional space, with the latter two limited to straight-line interpolation. TIN data structures for representing terrain surfaces as triangle meshes were also added. Since the mid-1990s, new formats have been developed that extend the range of available primitives, generally standardized by the Open Geospatial Consortium's Simple Features specification. Common geometric primitive extensions include: three-dimensional coordinates for points, lines, and polygons; a fourth "dimension" to represent a measured attribute or time; curved segments in lines and polygons; text annotation as a form of geometry; and polygon meshes for three-dimensional objects.
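The Simple Features specification also defines text and binary encodings for these primitives. The sketch below emits well-known text (WKT) for a point (with an optional third coordinate) and a polyline; production GIS code would normally rely on a library rather than hand-formatting strings.

```python
from typing import Optional, Sequence, Tuple

def wkt_point(x: float, y: float, z: Optional[float] = None) -> str:
    """Format a 2D or 3D point as Simple Features well-known text."""
    if z is not None:
        return f"POINT Z ({x} {y} {z})"
    return f"POINT ({x} {y})"

def wkt_linestring(coords: Sequence[Tuple[float, float]]) -> str:
    """Format an ordered list of vertices as a WKT LINESTRING."""
    return "LINESTRING (" + ", ".join(f"{x} {y}" for x, y in coords) + ")"

print(wkt_point(30, 10))                               # POINT (30 10)
print(wkt_point(30, 10, 5))                            # POINT Z (30 10 5)
print(wkt_linestring([(30, 10), (10, 30), (40, 40)]))  # LINESTRING (30 10, 10 30, 40 40)
```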
Frequently, a representation of the shape of a real-world phenomenon may have a different (usually lower) dimension than the phenomenon being represented. For example, a city (a two-dimensional region) may be represented as a point, or a road (a three-dimensional volume of material) may be represented as a line. This dimensional generalization correlates with tendencies in spatial cognition. For example, asking the distance between two cities presumes a conceptual model of the cities as points, while giving directions involving travel "up," "down," or "along" a road imply a one-dimensional conceptual model. This is frequently done for purposes of data efficiency, visual simplicity, or cognitive efficiency, and is acceptable if the distinction between the representation and the represented is understood, but can cause confusion if information users assume that the digital shape is a perfect representation of reality (i.e., believing that roads really are lines).
In 3D modelling
In CAD software or 3D modelling, the interface may present the user with the ability to create primitives which may be further modified by edits. For example, in the practice of box modelling the user will start with a cuboid, then use extrusion and other operations to create the model. In this use the primitive is just a convenient starting point, rather than the fundamental unit of modelling.
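The extrusion step mentioned above can be sketched without any modelling package: duplicate a closed 2D profile at a given height and stitch the two rings together with quadrilateral side faces. The function below is purely illustrative and omits the top and bottom caps.

```python
from typing import List, Tuple

def extrude(profile: List[Tuple[float, float]], height: float):
    """Extrude a closed 2D profile into a prism.

    Returns (vertices, side_faces): the bottom ring followed by the top ring,
    plus one quadrilateral face (as vertex indices) per profile edge.
    """
    n = len(profile)
    bottom = [(x, y, 0.0) for x, y in profile]
    top = [(x, y, height) for x, y in profile]
    vertices = bottom + top
    # Each side face joins an edge of the bottom ring to the matching edge of the top ring.
    sides = [(i, (i + 1) % n, (i + 1) % n + n, i + n) for i in range(n)]
    return vertices, sides

# Extruding a unit square gives the eight corners and four side faces of a cuboid.
verts, faces = extrude([(0, 0), (1, 0), (1, 1), (0, 1)], height=2.0)
print(len(verts), faces)
```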
A 3D package may also include a list of extended primitives which are more complex shapes that come with the package. For example, a teapot is listed as a primitive in 3D Studio Max.
In graphics hardware
Various graphics accelerators exist with hardware acceleration for rendering specific primitives such as lines or triangles, frequently with texture mapping and shaders. Modern 3D accelerators typically accept sequences of triangles as triangle strips.
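A triangle strip encodes a run of triangles in which each new vertex forms a triangle with the two preceding vertices, so a long strip needs roughly one vertex per triangle instead of three. A minimal decoder, with the conventional winding-order flip on every other triangle, might look like this.

```python
from typing import List, Tuple

def strip_to_triangles(indices: List[int]) -> List[Tuple[int, int, int]]:
    """Expand a triangle strip into individual triangles.

    Every vertex after the first two produces one triangle; the winding order
    is flipped on odd triangles so all faces keep a consistent orientation.
    """
    triangles = []
    for i in range(len(indices) - 2):
        a, b, c = indices[i], indices[i + 1], indices[i + 2]
        triangles.append((a, b, c) if i % 2 == 0 else (b, a, c))
    return triangles

# A strip of 6 vertices encodes 4 triangles.
print(strip_to_triangles([0, 1, 2, 3, 4, 5]))
```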
See also
2D geometric model
Sculpted prim
References
External links
Peachpit.com Info On 3D Primitives
Computer graphics
Geometric algorithms |