id | url | title | text |
---|---|---|---|
29154943 | https://en.wikipedia.org/wiki/Hackers%20Are%20People%20Too | Hackers Are People Too | Hackers Are People Too is a 2008 documentary film about the hacker community, written and directed by Ashley Schwartau. The film was recorded at the DEF CON conference.
References
External links
2008 films
Hacking (computer security)
Computing culture
Documentary films about the Internet
Hacker culture
Works about computer hacking |
156252 | https://en.wikipedia.org/wiki/Star%20Control | Star Control | Star Control: Famous Battles of the Ur-Quan Conflict, Volume IV, or simply Star Control, is a science fiction video game developed by Toys for Bob and published by Accolade in 1990. It was originally released for Amiga and MS-DOS in 1990, followed by ports for the Sega Mega Drive/Genesis, Amstrad CPC, Commodore 64 and ZX Spectrum in 1991. The game was a commercial and critical success, and is remembered as one of the best games of all time, as well as the foundation for its highly praised sequel. Two sequels were released: Star Control II in 1992 (and the free open-source remake The Ur-Quan Masters in 2002) and Star Control 3 in 1996.
Gameplay
Star Control is a combination of a strategy game and real-time one-on-one ship combat game. The ship combat is based on the classic game Spacewar!, while the turn-based strategy is inspired by Paul Reiche III's 1983 game Archon. Players have the option to play the full game with the turn-based campaign, or to practice the one-on-one ship battles.
The full game allows players to select one of 15 different scenarios, with opposing fleets arranged on a rotating star map. The player has up to three ship actions per turn, which are used to explore new stars and colonize or fortify worlds. These colonies provide resources to the player's ships, such as currency and crew. The goal is to move these ships across the galaxy, claiming planets along the way, and ultimately to destroy the opponent's star base.
When two rival ships meet on the battlefield, an arcade-style combat sequence begins. The game offers different ships to pilot, which are deliberately imbalanced in ability. Match-ups between these ships have a major influence over combat. There are 14 different ships, each with unique abilities. Ships typically have a unique firing attack, as well as some kind of secondary ability. Both actions consume the ship's battery, which recharges automatically (with few exceptions). Ships have a limited number of crew, representing the total damage a ship can take before being destroyed. This ties into the strategic metagame between battles, where crew can be replenished at colonies.
During combat, the screen frames the action between the two ships with an overhead view, zooming in as they approach each other. Players try to outgun and outmaneuver each other. There is a planet in the middle of the battlefield, providing a centre of gravity, which players can either crash into, or glide nearby to gain momentum.
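The gravity well mechanic can be illustrated with a short sketch. This is a toy simulation, not code from the game: the constants, the crash radius, and the starting trajectory are all invented for the example, and C is used purely for illustration.

```c
/*
 * Toy sketch of the combat arena's central gravity well: a ship orbiting a
 * planet at the origin speeds up as it swings close and slows down again as
 * it climbs away, and is destroyed if it falls inside the crash radius.
 * All constants are invented for this example; nothing here is taken from
 * the actual Star Control code.
 */
#include <math.h>
#include <stdio.h>

struct ship { double x, y, vx, vy; };

/* Advance one time step; return 0 if the ship hit the planet. */
static int step(struct ship *s, double g, double crash_radius, double dt)
{
    double dx = -s->x, dy = -s->y;          /* direction toward the planet */
    double r  = sqrt(dx * dx + dy * dy);

    if (r < crash_radius)
        return 0;

    double a = g / (r * r);                 /* inverse-square pull */
    s->vx += a * (dx / r) * dt;
    s->vy += a * (dy / r) * dt;
    s->x  += s->vx * dt;
    s->y  += s->vy * dt;
    return 1;
}

int main(void)
{
    struct ship s = { 200.0, 0.0, 0.0, 4.0 };   /* gliding past the planet */
    double vmin = 1e9, vmax = 0.0;

    for (int i = 0; i < 8000; i++) {
        if (!step(&s, 5000.0, 20.0, 0.25)) {
            puts("ship crashed into the planet");
            return 0;
        }
        double v = sqrt(s.vx * s.vx + s.vy * s.vy);
        if (v < vmin) vmin = v;
        if (v > vmax) vmax = v;
    }
    printf("speed ranged from %.2f (far away) to %.2f (closest approach)\n",
           vmin, vmax);
    return 0;
}
```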
The story framing the gameplay is minimal compared to the sequel, described mostly in the game's scenario introductions. Some background can be found in the manuals about two warring factions. The game can be played by one player against the computer, or two players head to head. As was typical of copy protection at the time, Star Control requested a special pass phrase that players found by using a three-ply code wheel, called "Professor Zorq's Instant Etiquette Analyzer".
Development
Concept and origins
Star Control is the first collaboration between Paul Reiche III and Fred Ford. Reiche had started his career working for Dungeons & Dragons publisher TSR, before developing PC games for Free Fall Associates. After releasing World Tour Golf, Reiche created an advertising mock-up for what would become Star Control, showing a dreadnaught and some ships fighting. He pitched the game to Electronic Arts, before instead securing an agreement with Accolade as a publisher, thanks to Reiche's former producer taking a job there. Meanwhile, Ford had started his career creating games for Japanese personal computers before transitioning to more corporate software development. After a few years working at graphics companies in Silicon Valley, Ford realized he missed working in the game industry. At this point, Reiche needed a programmer-engineer and Ford was seeking a designer-artist, so their mutual friends set up a gaming night to re-introduce them. The meeting was hosted at game designer Greg Johnson's house, and one of the friends who encouraged the meeting was fantasy artist Erol Otus.
Originally called Starcon, the game began as an evolution of concepts that Reiche first created in Archon: The Light and the Dark and Mail Order Monsters. The vision for the game was science-fiction Archon, where asymmetric combatants fight using different abilities in space. According to Ford, "StarCon is really just Archon with an S-T in front of it", pointing to the one-on-one combat and strategic modes of both games. Star Control would base its combat sequences on the classic game Spacewar!, as well as the core experience of space combat game Star Raiders. As Ford and Reiche were still building their workflow as a team, the game took on a more limited scope compared to the sequel.
Design and production
Fred Ford's first prototype was a two-player action game where the VUX and Yehat ships blow up asteroids, which led them to build the entire universe around that simple play experience. Ford designed the Yehat ship with a crescent shape, and the ship's shield generator led them to optimize the ship for close combat. They built on these two original ships with many additional ships and character concepts, and play-tested them with friends such as Greg Johnson and Robert Leyland. The team preferred to iterate on ship designs rather than plan them, as they discovered different play-styles during testing. The asymmetry between the combatants became essential to the experience. Ford explained: "Our ships weren't balanced at all, one on one... but the idea was, your fleet of ships, your selection of ships in total was as strong as someone else's, and then, it came down to which matchup did you find". Still, the ships were given some balance by having their energy recharge at different rates.
Although the story does not factor heavily into the game, the character concepts were created based on the ship designs. The team would begin with paper illustrations, followed by logical abilities for those ships, and a character concept that suited the ship's look-and-feel. The first ship sketches were based on popular science fiction, such as SpaceWars! or Battlestar Galactica, and slowly evolved into original designs as they discussed why the ships were fighting each other. Paul Reiche III describes their character creation process: "I know it probably sounds weird, but when I design a game like this, I make drawings of the characters and stare at them. I hold little conversations with them. 'What do you guys do?' And they tell me". By the end of this process, they wrote a short summary for each alien, describing their story and personality.
After creating a large ship that launches fighters on command, Reiche and Ford decided this would be a dominating race. These antagonists would be called the Ur-Quan, with a motivation to dominate the galaxy to hunt for slaves, and an appearance based on a National Geographic image of a predatory caterpillar dangling over its prey. They decided to organize the characters into nominally "good" and "bad" factions, each with seven unique races and ships, with the humans on the good side. As they were creating the alien characters based on the ship abilities, the Spathi's cowardly personality was inspired by their backwards-shooting missiles. A more robotic ship inspired an alien race called the Androsynth, whose appearance was imagined as Devo flying a spaceship. The team also decided that the game would need more humanoid characters, and created the Syreen as a powerful and attractive humanoid female race. Reiche and Ford were also inspired by character concepts in David Brin's The Uplift War. The designers asked what kind of race would be uplifted by the fiercely heroic Yehat, and decided to create the Shofixti as a ferocious super rodent.
Each alien race also had a short victory theme song, composed by Reiche's friend Tommy Dunbar of The Rubinoos. The longer Ur-Quan theme played at the end of the game was composed by fantasy artist Erol Otus.
Porting and compatibility
The number of visible colors was a major technological limitation at the time, and the team created different settings for CGA, EGA, and VGA monitors. A separate team ported a stripped-down version of the game to the Commodore 64, Spectrum and Amstrad, which reduced the number of ships to eight and introduced new bugs and balance issues. Additional problems were caused by the number of simultaneous key-presses required for a multiplayer game, which required Ford to code a solution that would work across multiple different computer keyboards.
Star Control was ultimately ported to the Sega Genesis, in a team led by Fred Ford. Because the Genesis port was a cartridge-based game with no battery backup, it lacked the scenario-creator of its PC cousin, but it came pre-loaded with a few additional scenarios not originally in the game. Where the PC version featured synthesized audio, the team discovered the digital MOD file format to help port the music to console, which would later become the core music format for the sequel. It took nearly five months to convert the code and color palettes, leaving little time to optimize the game under Accolade's tight schedule, leading to slowdown issues. Released under Accolade's new "Ballistic" label for high quality games, the game was touted as the first 12-megabit cartridge created for the system. The box art for the Sega version was adapted from the original PC version, this time re-painted by famed artist Boris Vallejo.
The Genesis port was not authorized by Sega, which led to a lawsuit between Accolade and Sega of America. Sega v. Accolade became an important legal case, creating a precedent to allow reverse engineering under fair use. This led Sega to settle the lawsuit in Accolade's favor, making them a licensed Sega developer.
Reception
Star Control was a commercial success at the time, reaching the top 5 on the sales charts by September 1990. According to a retrospective by Finnish gaming magazine Pelit, the game would go on to sell 120,000 copies, leading Accolade to request a sequel from creators Reiche and Ford.
Critics also praised Star Control for its arcade combat, as well as its character designs, animations, and sound. MegaTech described it as "one of the best two-player Mega Drive games ever", and gave it their editorial Hyper Game Award. Similarly, Computer and Video Games chose Star Control for their editorial "CVG Hit" award, citing the sound effects and the playability of the game's two-player mode. The two-player mode earned additional praise from Digital Press, who also highlighted the game's artistic detail and lore. Strategy Plus similarly praised the humor and personality of the aliens, and declared the graphics as "truly spectacular in 256 color VGA". Italian publication The Games Machine rated the game 88%, describing it as a modern re-invention of Spacewar! with many entertaining artistic details. Similarly, Videogame & Computer World praised the game's unique animations and replayable arcade mode, giving a rating of 8/10 on the PC, 8/10 on the Commodore 64, and 9/10 on the Amiga. Entertainment Weekly praised the game for evolving the Spacewar! formula with a variety of unique ships.
Some reviews were more mixed. Computer Gaming World criticized Star Control for its thin strategic gameplay, but still praised the game's arcade combat. Advanced Computer Entertainment called the Amiga version "disappointing", criticizing the load times and "tacky two-dimensional combat sequences that look as if they've been borrowed from an early Eighties coin-op". Computer and Video Games similarly compared Star Control to the "aging co-op Spacewars!", rating the game at 68%. Raze Magazine rated the Sega version at 70/100 for lacking the polish of the PC version. Joystick rated the game 75%, with strongest praise for the game's sound design.
At the end of the year, Video Games & Computer Entertainment gave Star Control an award for "Best Computer Science Fiction Game", noting that "the two creators have put together a game that is great either as a full simulation or an action-combat contest". They later highlighted the game in a list of science fiction releases, proclaiming "Reiche and Ford's action-strategy tour de force is one of the most absorbing and challenging science fiction games of all-time". Star Control was also highlighted by Strategy Plus in their review of 1990, praising the game among other strategy titles for its unique humor. The game was additionally nominated for Best Action/Arcade Program at the 1991 Spring Symposium of the Software Publishers Association.
Legacy and impact
Star Control has earned a legacy for combining different kinds of gameplay into an artistically detailed space setting. Years after its release, Retro Gamer described Star Control as "a textbook example of good game design", where "two genres were brilliantly combined, making for a finely balanced and well-rounded game experience". Sega-16 also called the game "superb in its simplicity", noting that "Star Control graphically does borrow from existing concepts, the design and presentation is so impeccably done that it stands well on its own".
In 1996, Video Games & Computer Entertainment ranked it as the 127th best game of all time, describing it as "Space War enters the 90s with a touch of humor". In 2001, PC Gameplay ranked Star Control as the 45th most influential game of all time, based on a survey of dozens of game studios. In 2017, Polygon mentioned it in their top 500 games of all time, with its flexibility "as a melee or strategic game, it helped define the idea that games can be malleable and dynamic and players can make an experience wholly their own". The game is also celebrated for the debut of the Ur-Quan, as "one of the all-time villainous races in the history of computer games".
The original Star Control's legacy also rests on its role as the foundation for future games, including the critically acclaimed sequel Star Control II. Retro Gamer highlighted the numerous "elements that gave Star Control 'soul'", describing it as "the seed from which the vastly expanded narrative found in Star Control 2 grew". Sega-16 explains that "Star Control remains a fantastic game and a blueprint for what many would call one of if not the best game ever, Star Control II". BioWare founder Ray Muzyka cites the Star Control series as an inspiration for the Mass Effect series of games, stating that "the uncharted worlds in Mass Effect comes from imagining what a freely explorable universe would be like inside a very realistic next-gen game". Former BioWare writer Mike Laidlaw also praised the creativity of the Star Control ship designs, and credited the game with laying the foundation for a sequel, which influenced him as a writer on Mass Effect.
Sequels and open-source remake
Star Control II
Star Control II is an action-adventure science fiction game, set in an open universe. The game was originally published by Accolade in 1992 for MS-DOS, and was later ported to the 3DO with an enhanced multimedia presentation. Created by Fred Ford and Paul Reiche III, it vastly expands on the story and characters introduced in the first game. When the player discovers that Earth has been encased in a slave shield, they must recruit allies to liberate the galaxy. The game features ship-to-ship combat based on the original Star Control, but removes the first game's strategy elements to focus on story and dialog, as seen in other adventure games. Star Control II has earned acclaim as one of the best games of all time through the 1990s, 2000s, and 2010s. It is also ranked among the best games in several creative areas, including writing, world design, character design, and music.
Star Control 3
Star Control 3 is an adventure science fiction video game developed by Legend Entertainment, and published by Accolade in 1996. The story takes place after the events of Star Control II when the player must travel deeper into the galaxy to investigate the mysterious collapse of hyperspace. Several game systems from Star Control II are changed. Hyperspace navigation is replaced with instant fast travel, and planet landing is replaced with a colony system inspired by the original Star Control game. Accolade hired Legend Entertainment to develop the game after original creators Paul Reiche III and Fred Ford decided to pursue other projects. Though the game was considered a critical and commercial success upon release, it would receive unfavourable comparisons to the award-winning Star Control II.
Cancelled Star Control 4
In the late 1990s, Accolade was developing Star Control 4. Also known as StarCon, it was designed as a 3D space combat game. By this time, Electronic Arts had agreed to become the distributor for all games developed by Accolade. Accolade producer George MacDonald announced that "we want to move away from the adventure element and concentrate on what it seems the players really want – action!" Though heavier on combat than previous titles, players would still have the opportunity to fly to planets and communicate with different aliens. The team also created a Star Control History Compendium, to help them resolve storylines from the previous games. In a playable alpha version of the game, players could control a fleet carrier, with the ability to launch a fighter that could be controlled by either the same player or a second player. The game was later announced for the Sony PlayStation with plans for release in 1999, featuring a 40-hour variable storyline, and both competitive and co-operative multiplayer. Electronic Arts and Accolade promoted the choice of playing as "one of two alliances (Hyperium or Crux)", with the option of operating a fighter, carrier, or turrets. Another publication described the ability to select from three different alien races, with different missions that impact the storyline, and the ability to destroy entire planets.
Development on the game was halted at the end of 1998. Unhappy with the game's progress, Accolade put the project on hold, intending to re-evaluate its plans for the Star Control license. In 1999, Accolade was acquired by Infogrames SA for $50 million, one of many corporate restructurings that eventually led to Infogrames merging with Atari and re-branding under a revived Atari name. Star Control 3 thus marked the last official instalment in the series.
The Ur-Quan Masters
By the early 2000s, Accolade's copyright license for Star Control had expired, triggered by a contractual clause that took effect when the games were no longer generating royalties. As the games were no longer available in stores, Reiche and Ford wanted to keep their work in the public eye, to maintain an audience for a potential sequel. Reiche and Ford still owned the copyrights in Star Control I and II, but they could not successfully purchase the Star Control trademark from Accolade, leading them to consider a new title for a potential follow-up. This led them to remake Star Control II as The Ur-Quan Masters, which they released in 2002 as a free download under an open-source license. The official free release has been maintained by an active fan community, which prevented Star Control II from becoming abandonware.
Aftermath
Fans continued to demand a new Star Control game well into the late 2000s. In the early 2000s, thousands of fans signed a petition in hopes of a sequel. Toys for Bob producer Alex Ness responded in April 2006 with an article on the company website, stating that "if enough of you people out there send me emails requesting that Toys For Bob do a legitimate sequel to Star Control 2, I'll be able to show them to Activision, along with a loaded handgun, and they will finally be convinced to roll the dice on this thing". In the months that followed, Ness announced the petition's impact, reporting that "there did honestly seem to be some real live interest on [Activision's] part. At least on the prototype and concept-test level. This is something we may in fact get to do when we finish our current game". In a 2011 interview about their next game, Skylanders: Spyro's Adventure, Paul Reiche declared that they would one day make the real sequel.
Intellectual property split
By the early 2000s, the Star Control trademark was held by Infogrames Entertainment. Star Control publisher Accolade had sold their company to Infogrames in 1999, who merged with Atari and re-branded under the Atari name in 2003. In September 2007, Atari released an online flash game with the name "Star Control", created by independent game developer Iocaine Studios. Atari ordered the game to be delivered in just four days, which Iocaine produced in two days. Also in September, Atari applied to renew the Star Control trademark with the United States Patent and Trademark Office, citing images of Iocaine's flash game to demonstrate their Declaration of Use in Commerce.
Atari declared bankruptcy in 2013, and their assets were listed for auction. When Stardock became the top bidder for Atari's Star Control assets, Paul Reiche indicated that he still owned the copyrighted materials from the first two Star Control games, which implied that Stardock must have purchased the Star Control trademark and the copyright in any original elements of Star Control 3. Stardock confirmed this intellectual property split soon after. As Stardock began developing their new Star Control game, they reiterated that they did not acquire the copyright to the first two games, and that they would need a license from Reiche and Ford to use their content and lore. Reiche and Ford echoed this understanding in their 2015 Game Developers Conference interview, stating that Stardock's game would use the Star Control trademark only. After a lawsuit, the parties ultimately agreed on the same intellectual property split.
Notes and references
External links
Creators of Star Control—developer blog
The Pages of Now and Forever—a fan site
Star Control on classic reload
1990 video games
Accolade (company) games
Amiga games
Amstrad CPC games
Commodore 64 games
DOS games
Games about extraterrestrial life
Games commercially released with DOSBox
MacOS games
Multidirectional shooters
Multiplayer and single-player video games
Sega Genesis games
Space combat simulators
Space opera video games
Toys for Bob games
Video games developed in the United States
Video games using code wheel copy protection
ZX Spectrum games |
18949571 | https://en.wikipedia.org/wiki/OpenBSD | OpenBSD | OpenBSD is a security-focused, free and open-source, Unix-like operating system based on the Berkeley Software Distribution (BSD). Theo de Raadt created OpenBSD in 1995 by forking NetBSD. According to the website, the OpenBSD project emphasizes "portability, standardization, correctness, proactive security and integrated cryptography."
The OpenBSD project maintains portable versions of many subsystems as packages for other operating systems. Because of the project's preferred BSD license, many components are reused in proprietary and corporate-sponsored software projects. The firewall code in Apple's macOS is based on OpenBSD's PF firewall code, Android's Bionic C standard library is based on OpenBSD code, LLVM uses OpenBSD's regular expression library, and Windows 10 uses OpenSSH (OpenBSD Secure Shell) with LibreSSL.
The word "open" in the name OpenBSD refers to the availability of the operating system source code on the Internet, although the word "open" in the name OpenSSH means "OpenBSD". It also refers to the wide range of hardware platforms the system supports.
History
In December 1994, Theo de Raadt, a founding member of the NetBSD project, was asked to resign from the NetBSD core team over disagreements and conflicts with the other members of the NetBSD team. In October 1995, De Raadt founded OpenBSD, a new project forked from NetBSD 1.0. The initial release, OpenBSD 1.2, was made in July 1996, followed by OpenBSD 2.0 in October of the same year. Since then, the project has issued a release every six months, each of which is supported for one year.
On 25 July 2007, OpenBSD developer Bob Beck announced the formation of the OpenBSD Foundation, a Canadian non-profit organization formed to "act as a single point of contact for persons and organizations requiring a legal entity to deal with when they wish to support OpenBSD."
Usage statistics
It is hard to determine how widely OpenBSD is used, because the developers do not publish or collect usage statistics.
In September 2005, the BSD Certification Group surveyed 4330 individual BSD users, showing that 32.8% used OpenBSD, behind FreeBSD with 77% and ahead of NetBSD with 16.3% and DragonFly BSD with 2.6%. However, the authors of this survey clarified that it is neither "exhaustive" nor "completely accurate", since the survey was spread mainly through mailing lists, forums and word of mouth. This, combined with other factors such as the lack of a control group, a pre-screening process, or significant outreach outside of the BSD community, makes the survey unreliable for judging BSD usage globally.
Uses
Network appliances
OpenBSD features a robust TCP/IP networking stack, and can be used as a router or wireless access point. OpenBSD's security enhancements, built-in cryptography, and packet filter make it suitable for security purposes such as firewalls, intrusion-detection systems, and VPN gateways.
Several proprietary systems are based on OpenBSD, including devices from Armorlogic (Profense web application firewall), Calyptix Security, GeNUA, RTMX, and .vantronix.
Foreign operating systems
Some versions of Microsoft's Services for UNIX, an extension to the Windows operating system to provide Unix-like functionality, use much of the OpenBSD code base that is included in the Interix interoperability suite, developed by Softway Systems Inc., which Microsoft acquired in 1999. Core Force, a security product for Windows, is based on OpenBSD's pf firewall. The pf firewall is also found in other operating systems, including FreeBSD and macOS.
Personal computers
OpenBSD ships with Xenocara, an implementation of the X Window System, and is suitable as a desktop operating system for personal computers, including laptops. OpenBSD includes approximately 8000 packages in its software repository, including desktop environments such as GNOME, Plasma 4, and Xfce, and web browsers such as Firefox and Chromium. The project also includes three window managers in the main distribution: cwm, FVWM (part of the default configuration for Xenocara), and twm.
Servers
OpenBSD features a full server suite and can be configured as a mail server, web server, FTP server, DNS server, router, firewall, NFS file server, or any combination of these. Since version 6.8, OpenBSD has also shipped with native in-kernel WireGuard support.
Security
Shortly after OpenBSD was created, De Raadt was contacted by a local security software company named Secure Networks (later acquired by McAfee). They were developing a network security auditing tool called Ballista, which was intended to find and exploit software security flaws. This coincided with De Raadt's interest in security, so the two cooperated leading up to the release of OpenBSD 2.3. This collaboration helped to define security as the focus of the OpenBSD project.
OpenBSD includes numerous features designed to improve security, such as:
Secure alternatives to POSIX functions in the C standard library, such as strlcat for strcat and strlcpy for strcpy (see the sketch after this list)
Toolchain alterations, including a static bounds checker
Memory protection techniques to guard against invalid accesses, such as ProPolice and the W^X page protection feature
Strong cryptography and randomization
System call and filesystem access restrictions to limit process capabilities
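As a sketch of the size-bounded string functions named in the list above, the following contrasts strlcpy and strlcat with the unbounded strcpy they replace. The buffer size and strings are arbitrary; the code compiles as-is on OpenBSD and other BSDs, while other systems need libbsd or a sufficiently recent glibc to provide these functions.

```c
/*
 * Sketch of the size-bounded string APIs mentioned above. strlcpy() and
 * strlcat() originated on OpenBSD; unlike strcpy()/strcat() they take the
 * full destination size, always NUL-terminate, and return the length they
 * tried to create so that truncation can be detected.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char path[16];

    /* strcpy(path, some_long_input) could silently overflow path[]. */

    if (strlcpy(path, "/usr/local/share/doc", sizeof(path)) >= sizeof(path))
        fprintf(stderr, "warning: path truncated\n");

    if (strlcat(path, "/openbsd", sizeof(path)) >= sizeof(path))
        fprintf(stderr, "warning: path truncated\n");

    printf("result: %s\n", path);
    return 0;
}
```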
To reduce the risk of a vulnerability or misconfiguration allowing privilege escalation, many programs have been written or adapted to make use of privilege separation, privilege revocation and chrooting. Privilege separation is a technique, pioneered on OpenBSD and inspired by the principle of least privilege, where a program is split into two or more parts, one of which performs privileged operations and the other—almost always the bulk of the code—runs without privilege. Privilege revocation is similar and involves a program performing any necessary operations with the privileges it starts with then dropping them. Chrooting involves restricting an application to one section of the file system, prohibiting it from accessing areas that contain private or system files. Developers have applied these enhancements to OpenBSD versions of many common applications, such as tcpdump, file, tmux, smtpd, and syslogd.
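A minimal sketch of the privilege-revocation and chroot pattern just described, assuming BSD-style APIs: a daemon does its privileged setup, locks itself into an empty directory, and permanently drops to an unprivileged account. The "_example" user and the /var/empty path are placeholders for illustration, not taken from any particular OpenBSD daemon, and the program must start as root for these calls to succeed.

```c
/*
 * Sketch of privilege revocation plus chrooting, as used by many OpenBSD
 * daemons: do the privileged setup first, then confine the process to an
 * empty directory and permanently drop root. The "_example" user and
 * /var/empty path are placeholders for this illustration.
 */
#include <sys/types.h>
#include <err.h>
#include <grp.h>
#include <pwd.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct passwd *pw;

    /* ... privileged setup (e.g. binding a low port) would happen here ... */

    if ((pw = getpwnam("_example")) == NULL)
        errx(1, "unknown user _example");

    if (chroot("/var/empty") == -1 || chdir("/") == -1)
        err(1, "chroot");

    /* Drop supplementary groups first, then the group and user IDs. */
    if (setgroups(1, &pw->pw_gid) == -1 ||
        setresgid(pw->pw_gid, pw->pw_gid, pw->pw_gid) == -1 ||
        setresuid(pw->pw_uid, pw->pw_uid, pw->pw_uid) == -1)
        err(1, "failed to drop privileges");

    /* From here on the process can no longer regain root or see the
     * filesystem outside /var/empty. */
    printf("running unprivileged as uid %d\n", (int)getuid());
    return 0;
}
```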
OpenBSD developers were instrumental in the creation and development of OpenSSH (aka OpenBSD Secure Shell), which is developed in the OpenBSD CVS repositories. OpenBSD Secure Shell is based on the original SSH. It first appeared in OpenBSD 2.6 and is now by far the most popular SSH client and server, available on many operating systems.
The project has a policy of continually auditing source code for problems, work that developer Marc Espie has described as "never finished ... more a question of process than of a specific bug being hunted." He went on to list several typical steps once a bug is found, including examining the entire source tree for the same and similar issues, "try[ing] to find out whether the documentation ought to be amended", and investigating whether "it's possible to augment the compiler to warn against this specific problem."
Security record
The OpenBSD website features a prominent reference to the system's security record. Until June 2002, it read:
In June 2002, Mark Dowd of Internet Security Systems disclosed a bug in the OpenSSH code implementing challenge–response authentication. This vulnerability in the OpenBSD default installation allowed an attacker remote access to the root account, which was extremely serious not only to OpenBSD, but also to the large number of other operating systems that were using OpenSSH by that time. This problem necessitated the adjustment of the slogan on the OpenBSD website to:
The quote remained unchanged as time passed, until on 13 March 2007, when Alfredo Ortega of Core Security Technologies disclosed a network-related remote vulnerability. The quote was subsequently changed to:
This statement has been criticized because the default install contains few running services, and many use cases require additional services. Also, because the ports tree contains unaudited third-party software, it is easy for users to compromise security by installing or improperly configuring packages. However, the project maintains that the slogan is intended to refer to a default install and that it is correct by that measure.
One of the fundamental ideas behind OpenBSD is a drive for systems to be simple, clean, and secure by default. The default install is quite minimal, which the project states is to ensure that novice users "do not need to become security experts overnight"; this fits with the open-source and code-auditing practices considered important elements of a security system.
Alleged backdoor
On 11 December 2010, Gregory Perry, a former technical consultant for the Federal Bureau of Investigation (FBI), emailed De Raadt alleging that the FBI had paid some OpenBSD ex-developers 10 years prior to insert backdoors into the OpenBSD Cryptographic Framework. De Raadt made the email public on 14 December by forwarding it to the openbsd-tech mailing list and suggested an audit of the IPsec codebase. De Raadt's response was skeptical of the report and he invited all developers to independently review the relevant code. In the weeks that followed, bugs were fixed but no evidence of backdoors was found. De Raadt stated "I believe that NetSec was probably contracted to write backdoors as alleged. If those were written, I don't believe they made it into our tree. They might have been deployed as their own product."
Criticisms
In December 2017, Ilja van Sprundel, director at IOActive, gave a talk at the CCC as well as DEF CON, entitled "Are all BSDs created equally? — A survey of BSD kernel vulnerabilities", in which he stated that although OpenBSD was the clear winner of the BSDs in terms of security, "Bugs are still easy to find in those kernels, even in OpenBSD".
Two years later, in 2019, a talk named "A systematic evaluation of OpenBSD's mitigations" was given at the CCC, arguing that while OpenBSD has some effective mitigations, a significant part of them are "useless at best and based on pure luck and superstition", arguing for a more rational approach when it comes to designing them.
Hardware compatibility
Supported platforms and devices are listed in the OpenBSD Supported Platforms Notes. Other configurations may also work, but simply have not been tested or documented yet. Rough, automatically extracted lists of supported device IDs are available in a third-party repository.
In 2020, a new project was introduced to automatically collect information about tested hardware configurations.
Subprojects
Many open source projects started as components of OpenBSD, including:
bioctl, a generic RAID management interface similar to ifconfig
CARP, a free alternative to Cisco's patented HSRP/VRRP redundancy protocols
cwm, a stacking window manager
doas, a safer replacement for sudo
OpenBSD httpd, an HTTP server
hw.sensors, a sensors framework used by over 100 drivers
LibreSSL, an implementation of the SSL and TLS protocols, forked from OpenSSL 1.0.1g
OpenBGPD, an implementation of BGP-4
OpenIKED, an implementation of IKEv2
OpenNTPD, a simpler alternative to ntp.org's NTP daemon
OpenOSPFD, an implementation of OSPF
OpenSMTPD, an SMTP daemon with IPv4/IPv6, PAM, Maildir, and virtual domains support
OpenSSH, an implementation of SSH
PF, an IPv4/IPv6 stateful firewall with NAT, PAT, QoS and traffic normalization support
pfsync, a firewall state synchronization protocol for PF with high availability support using CARP
sndio, a compact audio and MIDI framework
spamd, a spam filter with greylisting support designed to inter-operate with PF
Xenocara, a customized X.Org build infrastructure
Some subsystems have been integrated into other BSD operating systems, and many are available as packages for use in other Unix-like systems.
Linux administrator Carlos Fenollosa commented on moving from Linux to OpenBSD that the system is faithful to the Unix philosophy of small, simple tools that work together well: "Some base components are not as feature-rich, on purpose. Since 99% of the servers don't need the flexibility of Apache, OpenBSD's httpd will work fine, be more secure, and probably faster". He characterized the developer community's attitude to components as: "When the community decides that some module sucks, they develop a new one from scratch. OpenBSD has its own NTPd, SMTPd and, more recently, HTTPd. They work great". As a result, OpenBSD is relatively prolific in creating components that become widely reused by other systems.
OpenBSD runs nearly all of its standard daemons within chroot and privsep security structures by default, as part of hardening the base system.
The Calgary Internet Exchange was formed in 2012, in part to serve the needs of the OpenBSD project.
Third-party components
OpenBSD includes a number of third-party components, many with OpenBSD-specific patches, such as X.Org, Clang (the default compiler on several architectures), GCC, Perl, NSD, Unbound, ncurses, GNU binutils, GDB, and AWK.
Development
Development is continuous, and team management is open and tiered. Anyone with appropriate skills may contribute, with commit rights being awarded on merit and De Raadt acting as coordinator. Two official releases are made per year, with the version number incremented by 0.1, and these are each supported for twelve months (two release cycles). Snapshot releases are also available at frequent intervals.
Maintenance patches for supported releases may be applied using syspatch, manually, or by updating the system against the patch branch of the CVS source repository for that release. Alternatively, a system administrator may opt to upgrade to the next snapshot release using sysupgrade, or by using the -current branch of the CVS repository, in order to gain pre-release access to recently added features. The sysupgrade tool can also upgrade to the latest stable release version.
The generic OpenBSD kernel provided by default is strongly recommended for end users, in contrast to operating systems that recommend user kernel customization.
Packages outside the base system are maintained by CVS through a ports tree and are the responsibility of the individual maintainers, known as porters. As well as keeping the current branch up to date, porters are expected to apply appropriate bug-fixes and maintenance fixes to branches of their package for OpenBSD's supported releases. Ports are generally not subject to the same continuous auditing as the base system due to lack of manpower.
Binary packages are built centrally from the ports tree for each architecture. This process is applied for the current version, for each supported release, and for each snapshot. Administrators are recommended to use the package mechanism rather than build the package from the ports tree, unless they need to perform their own source changes.
OpenBSD's developers regularly meet at special events called hackathons, where they "sit down and code", emphasizing productivity.
Most new releases include a song.
Open source and open documentation
OpenBSD is known for its high-quality documentation.
When OpenBSD was created, De Raadt decided that the source code should be available for anyone to read. At the time, a small team of developers generally had access to a project's source code. Chuck Cranor and De Raadt concluded this practice was "counter to the open source philosophy" and inconvenient to potential contributors. Together, Cranor and De Raadt set up the first public, anonymous revision control system server. De Raadt's decision allowed users to "take a more active role", and established the project's commitment to open access. OpenBSD is notable for its continued use of CVS (more precisely an unreleased, OpenBSD-managed fork named OpenCVS), when most other projects that used it have migrated to other systems.
OpenBSD does not include closed source binary drivers in the source tree, nor do they include code requiring the signing of non-disclosure agreements.
Since OpenBSD is based in Canada, no United States export restrictions on cryptography apply, allowing the distribution to make full use of modern algorithms for encryption. For example, the swap space is divided into small sections and each section is encrypted with its own key, ensuring that sensitive data does not leak into an insecure part of the system.
OpenBSD randomizes various behaviors of applications, making them less predictable and thus more difficult to attack. For example, PIDs are created and associated randomly to processes; the bind system call uses random port numbers; files are created with random inode numbers; and IP datagrams have random identifiers. This approach also helps expose bugs in the kernel and in user space programs.
The OpenBSD policy on openness extends to hardware documentation: in the slides for a December 2006 presentation, De Raadt explained that without it "developers often make mistakes writing drivers", and pointed out that "the [oh my god, I got it to work] rush is harder to achieve, and some developers just give up." He went on to say that vendor-supplied binary drivers are unacceptable for inclusion in OpenBSD, that they have "no trust of vendor binaries running in our kernel" and that there is "no way to fix [them] ... when they break."
Licensing
OpenBSD maintains a strict license policy, preferring the ISC license and other variants of the BSD license. The project attempts to "maintain the spirit of the original Berkeley Unix copyrights," which permitted a "relatively un-encumbered Unix source distribution." The widely used Apache License and GNU General Public License are considered overly restrictive.
In June 2001, triggered by concerns over Darren Reed's modification of IPFilter's license wording, a systematic license audit of the OpenBSD ports and source trees was undertaken. Code in more than a hundred files throughout the system was found to be unlicensed, ambiguously licensed or in use against the terms of the license. To ensure that all licenses were properly adhered to, an attempt was made to contact all the relevant copyright holders: some pieces of code were removed, many were replaced, and others, such as the multicast routing tools, were relicensed so that OpenBSD could continue to use them. Also removed during this audit was all software produced by Daniel J. Bernstein. At the time, Bernstein requested that all modified versions of his code be approved by him prior to redistribution, a requirement to which OpenBSD developers were unwilling to devote time or effort.
Because of licensing concerns, the OpenBSD team has reimplemented software from scratch or adopted suitable existing software. For example, OpenBSD developers created the PF packet filter after unacceptable restrictions were imposed on IPFilter. PF first appeared in OpenBSD 3.0 and is now available in many other operating systems. OpenBSD developers have also replaced GPL-licensed tools (such as CVS, diff, grep and pkg-config) with permissively licensed equivalents.
Funding
Although the operating system and its portable components are used in commercial products, De Raadt says that little of the funding for the project comes from the industry: "traditionally all our funding has come from user donations and users buying our CDs (our other products don't really make us much money). Obviously, that has not been a lot of money."
For a two-year period in the early 2000s, the project received funding from DARPA through the POSSE project, which "paid the salaries of 5 people to work completely full-time, bought about $30k in hardware, and paid for 3 hackathons".
In 2006, the OpenBSD project experienced financial difficulties. The Mozilla Foundation and GoDaddy are among the organizations that helped OpenBSD to survive. However, De Raadt expressed concern about the asymmetry of funding: "I think that contributions should have come first from the vendors, secondly from the corporate users, and thirdly from individual users. But the response has been almost entirely the opposite, with almost a 15-to-1 dollar ratio in favor of the little people. Thanks a lot, little people!"
On 14 January 2014, Bob Beck issued a request for funding to cover electrical costs. If sustainable funding was not found, Beck suggested the OpenBSD project would shut down. The project soon received a US$20,000 donation from Mircea Popescu, the Romanian creator of the MPEx bitcoin stock exchange, paid in bitcoins. The project raised US$150,000 in response to the appeal, enabling it to pay its bills and securing its short-term future.
OpenBSD Foundation
The OpenBSD Foundation is a Canadian federal non-profit organization founded by the OpenBSD project as a "single point of contact for persons and organizations requiring a legal entity to deal with when they wish to support OpenBSD." It was announced to the public by OpenBSD developer Bob Beck on 25 July 2007. It also serves as a legal safeguard over other projects which are affiliated with OpenBSD, including OpenSSH, OpenBGPD, OpenNTPD, OpenCVS, OpenSMTPD and LibreSSL.
Since 2014, several large contributions to the OpenBSD Foundation have come from corporations such as Microsoft, Facebook, and Google as well as the Core Infrastructure Initiative.
In 2015, Microsoft became the foundation's first gold-level contributor, donating between $25,000 and $50,000 to support development of OpenSSH, which had been integrated into PowerShell in July and later into Windows Server in 2018. Other contributors include Google, Facebook and DuckDuckGo.
During the 2016 and 2017 fundraising campaigns, Smartisan, a Chinese company, was the leading financial contributor to the OpenBSD Foundation.
Distribution
OpenBSD is freely available in various ways: the source can be retrieved by anonymous CVS, and binary releases and development snapshots can be downloaded by FTP, HTTP, and rsync. Prepackaged CD-ROM sets through version 6.0 can be ordered online for a small fee, complete with an assortment of stickers and a copy of the release's theme song. These, with their artwork and other bonuses, have been one of the project's few sources of income, funding hardware, Internet service, and other expenses. Beginning with version 6.1, CD-ROM sets are no longer released.
OpenBSD provides a package management system for easy installation and management of programs which are not part of the base operating system. Packages are binary files which are extracted, managed and removed using the package tools. On OpenBSD, the source of packages is the ports system, a collection of Makefiles and other infrastructure required to create packages. In OpenBSD, the ports and base operating system are developed and released together for each version: this means that the ports or packages released with, for example, 4.6 are not suitable for use with 4.5 and vice versa.
Songs and artwork
Initially, OpenBSD used a haloed version of the BSD daemon mascot drawn by Erick Green, who was asked by De Raadt to create the logo for the 2.3 and 2.4 versions of OpenBSD. Green planned to create a full daemon, including head and body, but only the head was completed in time for OpenBSD 2.3. The body, as well as the pitchfork and tail, was completed for OpenBSD 2.4.
Subsequent releases used variations such as a police daemon by Ty Semaka, but eventually settled on a pufferfish named Puffy. Since then, Puffy has appeared on OpenBSD promotional material and featured in release songs and artwork.
The promotional material of early OpenBSD releases did not have a cohesive theme or design, but later the CD-ROMs, release songs, posters and tee-shirts for each release have been produced with a single style and theme, sometimes contributed to by Ty Semaka of the Plaid Tongued Devils. These have become a part of OpenBSD advocacy, with each release expounding a moral or political point important to the project, often through parody.
Themes have included Puff the Barbarian in OpenBSD 3.3, which included an 80s rock song and parody of Conan the Barbarian alluding to open documentation, The Wizard of OS in OpenBSD 3.7, related to the project's work on wireless drivers, and Hackers of the Lost RAID, a parody of Indiana Jones referencing the new RAID tools in OpenBSD 3.8.
Releases
The following table summarizes the version history of the OpenBSD operating system.
See also
Comparison of BSD operating systems
Comparison of open-source operating systems
KAME project, responsible for OpenBSD's IPv6 support
OpenBSD Journal
OpenBSD security features
Security-focused operating system
Unix security
Notes
References
External links
GitHub mirror
OpenBSD manual pages
OpenBSD ports & packages (latest)
OpenBSD source code search
OpenBSD
Cryptographic software
Free software programmed in C
Lightweight Unix-like systems
OpenBSD software using the ISC license
PowerPC operating systems
Software forks
Software using the BSD license
1996 software
ARM operating systems
IA-32 operating systems
X86-64 operating systems
Foundation |
1291003 | https://en.wikipedia.org/wiki/Internet%20Server%20Application%20Programming%20Interface | Internet Server Application Programming Interface | The Internet Server Application Programming Interface (ISAPI) is an N-tier API of Internet Information Services (IIS), Microsoft's collection of Windows-based web server services. The most prominent application of IIS and ISAPI is Microsoft's web server.
The ISAPI has also been implemented by Apache's mod_isapi module, so that server-side web applications written for Microsoft's IIS can be used with Apache, and other third-party web servers like Zeus Web Server offer ISAPI interfaces.
Microsoft's web server application software is called Internet Information Services, which is made up of a number of "sub-applications" and is very configurable. ASP.NET is one such slice of IIS, allowing a programmer to write web applications in their choice of programming language (VB.NET, C#, F#) supported by the Microsoft .NET CLR. ISAPI is a much lower-level programming system, giving much better performance at the expense of simplicity.
ISAPI applications
ISAPI consists of two components: Extensions and Filters. These are the only two types of applications that can be developed using ISAPI. Both Filters and Extensions must be compiled into DLL files which are then registered with IIS to be run on the web server.
ISAPI applications can be written using any language which allows the export of standard C functions, for instance C, C++, or Delphi. Several libraries are available which help to ease the development of ISAPI applications, such as the IntraWeb components for web-application development in Delphi. MFC includes classes for developing ISAPI applications. Additionally, there is the ATL Server technology, which includes a C++ library dedicated to developing ISAPI applications.
Extensions
ISAPI Extensions are true applications that run on IIS. They have access to all of the functionality provided by IIS. ISAPI extensions are implemented as DLLs that are loaded into a process that is controlled by IIS. Clients can access ISAPI extensions in the same way they access a static HTML page. Certain file extensions or a complete folder or site can be mapped to be handled by an ISAPI extension.
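The following sketch shows the general shape of an extension, assuming the standard httpext.h header from the Windows SDK; the description string and response text are purely illustrative. The DLL must export both entry points and be mapped to a file extension or virtual directory in IIS.

```c
/*
 * Minimal ISAPI extension sketch (assumes the standard <httpext.h> header
 * from the Windows SDK). Build as a DLL that exports these two entry
 * points, then map a file extension or virtual directory to it in IIS.
 * The description text and response body are purely illustrative.
 */
#include <windows.h>
#include <httpext.h>

BOOL WINAPI GetExtensionVersion(HSE_VERSION_INFO *pVer)
{
    pVer->dwExtensionVersion = MAKELONG(HSE_VERSION_MINOR, HSE_VERSION_MAJOR);
    lstrcpynA(pVer->lpszExtensionDesc, "Hello extension (sketch)",
              HSE_MAX_EXT_DLL_NAME_LEN);
    return TRUE;
}

DWORD WINAPI HttpExtensionProc(EXTENSION_CONTROL_BLOCK *pECB)
{
    static const char body[] = "Hello from an ISAPI extension.\r\n";
    char headers[] = "Content-Type: text/plain\r\n\r\n";
    DWORD len = sizeof(body) - 1;

    /* Send the status line and headers, then the body, via IIS callbacks. */
    pECB->ServerSupportFunction(pECB->ConnID, HSE_REQ_SEND_RESPONSE_HEADER,
                                "200 OK", NULL, (LPDWORD)headers);
    pECB->WriteClient(pECB->ConnID, (LPVOID)body, &len, 0);
    return HSE_STATUS_SUCCESS;
}
```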
Filters
ISAPI filters are used to modify or enhance the functionality provided by IIS. They always run on an IIS server and filter every request until they find one they need to process. Filters can be programmed to examine and modify both incoming and outgoing streams of data. Internally programmed and externally configured priorities determine in which order filters are called.
Filters are implemented as DLLs and can be registered on an IIS server at a site level or a global level (i.e., they apply to all sites on an IIS server). Filters are initialised when the worker process is started and listen to all requests to the site on which they are installed.
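The skeleton below sketches a filter that registers for the header-preprocessing notification and merely counts requests; a real filter would inspect or rewrite the request through the callbacks passed in pvNotification. It assumes the standard httpfilt.h declarations from the Windows SDK, and the description string is illustrative.

```c
/*
 * Minimal ISAPI filter sketch (assumes the standard <httpfilt.h> header
 * from the Windows SDK). The filter registers for the header-preprocessing
 * notification and simply counts requests; a real filter would examine or
 * modify the request via the callbacks passed in pvNotification.
 */
#include <windows.h>
#include <httpfilt.h>

static LONG g_requests = 0;

BOOL WINAPI GetFilterVersion(PHTTP_FILTER_VERSION pVer)
{
    pVer->dwFilterVersion = HTTP_FILTER_REVISION;
    pVer->dwFlags = SF_NOTIFY_ORDER_DEFAULT | SF_NOTIFY_PREPROC_HEADERS;
    lstrcpynA(pVer->lpszFilterDesc, "Request counter (sketch)",
              SF_MAX_FILTER_DESC_LEN);
    return TRUE;
}

DWORD WINAPI HttpFilterProc(PHTTP_FILTER_CONTEXT pfc, DWORD notificationType,
                            LPVOID pvNotification)
{
    if (notificationType == SF_NOTIFY_PREPROC_HEADERS)
        InterlockedIncrement(&g_requests);

    /* Let IIS continue with the next filter in the chain. */
    return SF_STATUS_REQ_NEXT_NOTIFICATION;
}
```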
Common tasks performed by ISAPI filters include:
Changing request data (URLs or headers) sent by the client
Controlling which physical file gets mapped to the URL
Controlling the user name and password used with anonymous or basic authentication
Modifying or analyzing a request after authentication is complete
Modifying a response going back to the client
Running custom processing on "access denied" responses
Running processing when a request is complete
Running processing when a connection with the client is closed
Performing special logging or traffic analysis
Performing custom authentication
Handling encryption and compression
Common ISAPI applications
This is a list of common ISAPI applications implemented as ISAPI extensions:
Active Server Pages (ASP), installed as standard
ActiveVFP, Active Visual FoxPro installed on IIS
ASP.NET, installed as standard on IIS 6.0 onwards
ColdFusion, later versions of ColdFusion are installable on IIS
Perl ISAPI (aka Perliis), available for free to install
PHP, available for free to install (its ISAPI module is no longer maintained)
ISAPI Development
ISAPI applications can be developed using any development tool that can generate a Windows DLL. Wizards for generating ISAPI framework applications have been available in Microsoft development tools since Visual C++ 4.0.
See also
Internet Information Services
ATL Server
SAPI
C++
PHP
FastCGI
References
Microsoft application programming interfaces |
2964397 | https://en.wikipedia.org/wiki/One%20Per%20Desk | One Per Desk | The One Per Desk, or OPD, was an innovative hybrid personal computer/telecommunications terminal based on the hardware of the Sinclair QL. The One Per Desk was built by International Computers Limited (ICL) and launched in the UK in 1984. It was the result of a collaborative project between ICL, Sinclair Research and British Telecom begun in 1983, originally intended to incorporate Sinclair's flat-screen CRT technology.
Rebadged versions of the OPD were sold in the United Kingdom as the Merlin Tonto and as the Computerphone by Telecom Australia and the New Zealand Post Office. The initial orders placed for the One Per Desk were worth £4.5 million (for 1500 units) from British Telecom and £8 million from Telecom Australia, with ICL focusing on telecommunications providers as the means to reach small- and medium-sized businesses.
Hardware
From the QL, the OPD borrowed the 68008 CPU, ZX8301/8302 ULAs, 128 KB of RAM and dual Microdrives (re-engineered by ICL for greater reliability) but not the 8049 Intelligent Peripheral Controller. Unique to the OPD was a "telephony module" incorporating an Intel 8051 microcontroller (which also controlled the keyboard), two PSTN lines and a V.21/V.23 modem, plus a built-in telephone handset and a TI TMS5220 speech synthesiser (for automatic answering of incoming calls).
The OPD was supplied with either a 9-inch monochrome (white) monitor, priced at £1,195 plus VAT, or with a 14-inch colour monitor, priced at £1,625 plus VAT. Both monitors also housed the power supply for the OPD itself.
Later, 3.5" floppy disk drives were also available from third-party vendors.
Software
The system firmware (BFS or "Basic Functional Software") was unrelated to the QL's Qdos operating system, although a subset of SuperBASIC was provided on Microdrive cartridge. The BFS provided application-switching, voice/data call management, call answering, phone number directories, viewdata terminal emulation and a simple calculator.
The Psion applications suite bundled with the QL was also ported to the OPD as Xchange and was available as an optional ROM pack, priced at £130.
Other optional application software available on ROM included various terminal emulators such as Satellite Computing's ICL7561 emulator, plus their Action Diary and Presentation Software, address book, and inter-OPD communications utilities.
An ICL-supplied application was used to synchronise a national bingo game across hundreds of bingo halls in the UK. The integral V.23 dial-up modem was used to provide remote communications to the central server.
Several UK ICL Mainframe (Series 39) customers, in Local Government and Ministry of Defence sectors, used statistics applications on OPD systems to view graphical representations of mainframe reports. Once again, the integral V.23 modem was used to download from the mainframe.
Merlin Tonto
British Telecom Business Systems sold the OPD as the Merlin M1800 Tonto. BT intended the Tonto to be a centralised desktop information system able to access online services, mainframes and other similar systems through the BT telephone network. The Tonto retailed at £1,500 at launch. OPD peripherals and software ROM cartridges were also badged under the Merlin brand. BT withdrew support for the Tonto in February 1993.
The name Tonto was derived from "The Outstanding New Telecoms Opportunity".
A data communications adapter was introduced for the Tonto as a plug-in option or fitted on new units, providing a standard RS423 interface for use with mainframe computers or data communications networks, permitting the use of the Tonto as a VT100 terminal. A separate VT Link product provided support for VT52 and VT100 emulation for mainframe access over dial-up connections.
Work on the Tonto influenced the design of a follow-on product by BT's Communications Terminal Products Group and Rathdown Industries known as the QWERTYphone, this aiming to provide the telephony features of the Tonto at "a much lower cost and in a more user-friendly manner".
ComputerPhone
Aimed at the "office automation" market and seeking to integrate computing and telecommunications technology, combining support for both voice and data, the One Per Desk product was perceived as the first of its kind designed to meet the needs of managers, who would otherwise be relying on old-fashioned paper-based practices to perform their "complex and heavy workloads" involving a variety of ongoing activities, including meetings, telephone calls, research, administration and numerous other tasks. Such potential users of information technology had apparently been ignored by office automation efforts, and personal computers were perceived as "exceeding most managers' requirements". The ComputerPhone attempted to sit between more specialised telephony devices and more advanced workstations, being marketed as an "executive" workstation in Australia and somewhat more towards middle management in New Zealand. Advertisements emphasised the telephony, office suite, desktop calculator, videotex, terminal and electronic messaging capabilities.
MegaOPD
An enhanced version of the OPD was produced in small numbers for the United States market. This had a 68008FN CPU, 256 KB of RAM as standard, an RS-232 port and enhanced firmware.
The telephone answering function had a female voice, with a slight New Jersey accent.
Legacy
ICL were the preferred supplier for UK local government, and OPDs found their way onto desks of council officers. Due to the cost, they tended to be issued only to the most senior, who were often elderly, had no interest in computers, and had secretaries to handle their administrative work, so many devices were simply used as telephones.
References
External links
Description of Merlin Tonto from BT Engineering
ICL One Per Desk page at rwapsoftware.co.uk including a floppy disk project
Computer-related introductions in 1984
Personal computers
Sinclair Research
ICL workstations
BT Group
68k architecture |
337862 | https://en.wikipedia.org/wiki/Table%20%28information%29 | Table (information) | A table is an arrangement of information or data, typically in rows and columns, or possibly in a more complex structure. Tables are widely used in communication, research, and data analysis. Tables appear in print media, handwritten notes, computer software, architectural ornamentation, traffic signs, and many other places. The precise conventions and terminology for describing tables vary depending on the context. Further, tables differ significantly in variety, structure, flexibility, notation, representation and use. Information or data conveyed in table form is said to be in tabular format (adjective). In books and technical articles, tables are typically presented apart from the main text in numbered and captioned floating blocks.
Basic description
A table consists of an ordered arrangement of rows and columns. This is a simplified description of the most basic kind of table. Certain considerations follow from this simplified description:
the term row has several common synonyms (e.g., record, k-tuple, n-tuple, vector);
the term column has several common synonyms (e.g., field, parameter, property, attribute, stanchion);
a column is usually identified by a name;
a column name can consist of a word, phrase or a numerical index;
the intersection of a row and a column is called a cell.
The elements of a table may be grouped, segmented, or arranged in many different ways, and even nested recursively. Additionally, a table may include metadata, annotations, a header, a footer or other ancillary features.
Simple table
The following illustrates a simple table with three columns and nine rows. The first row is not counted, because it is only used to display the column names. This is called a "header row".
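Because the table itself does not survive in this text-only rendering, the short Python sketch below stands in for it: a header row naming three columns, followed by a few data rows (all column names and values are invented purely for illustration, and fewer than nine rows are shown).

```python
# A simple table: a header row naming the columns, followed by data rows.
# Column names and values are invented purely for illustration.
header = ["First name", "Last name", "Age"]
rows = [
    ["Ada", "Lovelace", 36],
    ["Alan", "Turing", 41],
    ["Grace", "Hopper", 85],
]

# Compute a width for each column, then print the header, a separator and rows.
widths = [max(len(str(v)) for v in [h] + [r[i] for r in rows])
          for i, h in enumerate(header)]
fmt = "  ".join("{:<" + str(w) + "}" for w in widths)
print(fmt.format(*header))
print(fmt.format(*["-" * w for w in widths]))
for row in rows:
    print(fmt.format(*row))
```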
Multi-dimensional table
The concept of dimension is also a part of basic terminology. Any "simple" table can be represented as a "multi-dimensional" table by normalizing the data values into ordered hierarchies. A common example of such a table is a multiplication table.
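As a concrete sketch of that example, the following Python snippet builds a small multiplication table as a mapping from (row value, column value) to a cell, then prints it with the factors in the header row and stub column (the size 5 is arbitrary).

```python
# A multiplication table stored as a mapping from (row, column) to cell value.
n = 5
table = {(r, c): r * c for r in range(1, n + 1) for c in range(1, n + 1)}

# Print it: the header row and the stub column both list the factors.
print("     " + "".join(f"{c:5d}" for c in range(1, n + 1)))
for r in range(1, n + 1):
    print(f"{r:5d}" + "".join(f"{table[(r, c)]:5d}" for c in range(1, n + 1)))
```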
In multi-dimensional tables, each cell in the body of the table (and the value of that cell) relates to the values at the beginnings of the column (i.e. the header), the row, and other structures in more complex tables. This is an injective relation: each combination of the values of the headers row (row 0, for lack of a better term) and the headers column (column 0, for lack of a better term) is related to a unique cell in the table:
Column 1 and row 1 will only correspond to cell (1,1);
Column 1 and row 2 will only correspond to cell (2,1) etc.
The first column often presents the dimension by which the rest of the table is navigated; this column is called the "stub column". Tables may contain three or more dimensions and can be classified by the number of dimensions. Multi-dimensional tables may have super-rows (rows that describe additional dimensions for the rows presented below them), usually grouped in a tree-like structure. This structure is typically presented visually with an appropriate amount of white space in front of each stub's label.
In the literature, tables often present numerical values, cumulative statistics, categorical values, and at times parallel descriptions in the form of text. They can condense a large amount of information into a limited space and are therefore popular in the scientific literature of many fields of study.
Generic representation
As a communication tool, a table allows a form of generalization of information from an unlimited number of different social or scientific contexts. It provides a familiar way to convey information that might otherwise not be obvious or readily understood.
For example, in the following diagram, two alternate representations of the same information are presented side by side. On the left is the NFPA 704 standard "fire diamond" with example values indicated and on the right is a simple table displaying the same values, along with additional information. Both representations convey essentially the same information, but the tabular representation is arguably more comprehensible to someone who is not familiar with the NFPA 704 standard. The tabular representation may not, however, be ideal for every circumstance (for example because of space limitations, or safety reasons).
Specific uses
There are several specific situations in which tables are routinely used as a matter of custom or formal convention.
Publishing
Cross-reference (Table of contents)
Mathematics
Arithmetic (Multiplication table)
Logic (Truth table)
Natural sciences
Chemistry (Periodic table)
Oceanography (tide table)
Information technology
Software applications
Modern software applications give users the ability to generate, format, and edit tables and tabular data for a wide variety of uses, for example:
word processing applications;
spreadsheet applications;
presentation software;
tables specified in HTML or another markup language.
Software development
Tables have uses in software development for both high-level specification and low-level implementation.
Usage in software specification can encompass ad hoc inclusion of simple decision tables in textual documents through to the use of tabular specification methodologies, examples of which include Software Cost Reduction and Statestep.
Proponents of tabular techniques, among whom David Parnas is prominent, emphasize their understandability, as well as the quality and cost advantages of a format allowing systematic inspection, while corresponding shortcomings experienced with a graphical notation were cited in motivating the development of at least two tabular approaches.
At a programming level, software may be implemented using constructs generally represented or understood as tabular, whether to store data (perhaps to memoize earlier results), for example, in arrays or hash tables, or control tables determining the flow of program execution in response to various events or inputs.
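A minimal Python sketch of both uses mentioned above (all names are illustrative): a dictionary serves as a memoization table for earlier results, and a small control table maps incoming events to handler routines instead of a chain of if/else branches.

```python
# Memoization table: earlier results are stored keyed by the argument.
fib_cache = {0: 0, 1: 1}

def fib(n):
    if n not in fib_cache:
        fib_cache[n] = fib(n - 1) + fib(n - 2)
    return fib_cache[n]

# Control table: map each event name to the routine that handles it.
def on_open():
    print("opening")

def on_close():
    print("closing")

handlers = {"open": on_open, "close": on_close}

def dispatch(event):
    handlers.get(event, lambda: print("unknown event"))()

print(fib(30))    # 832040, each value computed only once thanks to the table
dispatch("open")  # the flow of execution is chosen by a table lookup
```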
Databases
Database systems often store data in structures called tables, in which columns are data fields and rows represent data records.
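For illustration, the snippet below uses Python's built-in sqlite3 module to create and query such a table; the table and column names are invented, and any relational database would behave analogously.

```python
import sqlite3

# Create an in-memory database with one table: columns are data fields,
# rows are data records.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, birth_year INTEGER)")
conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [("Ada Lovelace", 1815), ("Alan Turing", 1912)])

# Each fetched row comes back as a tuple of field values.
for row in conn.execute("SELECT name, birth_year FROM person ORDER BY birth_year"):
    print(row)
conn.close()
```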
Historical relationship to furniture
In medieval counting houses, the tables were covered with a piece of checkered cloth used for counting money. Exchequer is an archaic term for the English institution which accounted for money owed to the monarch. The checkerboard table, with its stacks of coins, was thus a concrete realization of this tabular information.
See also
Chart
Diagram
Abstract data type
Column (database)
Information graphics
Periodic table
Reference table
Row (database)
Table (database)
Table (HTML)
Tensor
Dependent and independent variables
References
External links
Infographics
Data modeling |
142981 | https://en.wikipedia.org/wiki/UNIVAC%20I | UNIVAC I | The UNIVAC I (UNIVersal Automatic Computer I) was the first general-purpose electronic digital computer design for business application produced in the United States. It was designed principally by J. Presper Eckert and John Mauchly, the inventors of the ENIAC. Design work was started by their company, Eckert–Mauchly Computer Corporation (EMCC), and was completed after the company had been acquired by Remington Rand (which later became part of Sperry, now Unisys). In the years before successor models of the UNIVAC I appeared, the machine was simply known as "the UNIVAC".
The first Univac was accepted by the United States Census Bureau on March 31, 1951, and was dedicated on June 14 that year. The fifth machine (built for the U.S. Atomic Energy Commission) was used by CBS to predict the result of the 1952 presidential election. With a sample of a mere 5.5% of the voter turnout, it famously predicted an Eisenhower landslide.
History
Market positioning
The UNIVAC I was the first American computer designed at the outset for business and administrative use with fast execution of relatively simple arithmetic and data transport operations, as opposed to the complex numerical calculations required of scientific computers. As such, the UNIVAC competed directly against punch-card machines, though the UNIVAC originally could neither read nor punch cards. That shortcoming hindered sales to companies concerned about the high cost of manually converting large quantities of existing data stored on cards. This was corrected by adding offline card processing equipment, the UNIVAC Tape to Card converter, to transfer data between cards and UNIVAC magnetic tapes. However, the early market share of the UNIVAC I was lower than the Remington Rand Company wished.
To promote sales, the company joined with CBS to have UNIVAC I predict the result of the 1952 Presidential election. After it predicted Eisenhower would have a landslide victory over Adlai Stevenson, as opposed to the final Gallup Poll which had predicted that Eisenhower would win the popular vote by 51–49 in a close contest, the CBS crew was so certain that UNIVAC was wrong that they believed it was not working.
As the election continued, it became clear it was correct all along: UNIVAC had predicted Eisenhower would receive 32,915,949 votes and win the Electoral College 438–93, while the final result had Eisenhower receive 34,075,029 votes in a 442–89 Electoral College victory. UNIVAC had come within 3.5% of Eisenhower's popular vote tally, and four votes of his electoral vote total.
After the announcers admitted their sleight of hand, and their reluctance to believe the prediction, the machine became famous. This gave rise to greater public awareness of computing technology, and computerized predictions became a must-have part of election-night broadcasts.
Installations
The first contracts were with government agencies such as the Census Bureau, the U.S. Air Force, and the U.S. Army Map Service. Contracts were also signed by the ACNielsen Company, and the Prudential Insurance Company. Following the sale of Eckert–Mauchly Computer Corporation to Remington Rand, due to the cost overruns on the project, Remington Rand convinced Nielsen and Prudential to cancel their contracts.
The first sale, to the Census Bureau, was marked with a formal ceremony on March 31, 1951, at the Eckert–Mauchly Division's factory at 3747 Ridge Avenue, Philadelphia. The machine was not actually shipped until the following December, because, as the sole fully set-up model, it was needed for demonstration purposes, and the company was apprehensive about the difficulties of dismantling, transporting, and reassembling the delicate machine. As a result, the first installation was with the second computer, delivered to the Pentagon in June 1952.
UNIVAC installations, 1951–1954
Originally priced at US$159,000, the UNIVAC I rose in price until systems sold for between $1,250,000 and $1,500,000. A total of 46 systems were eventually built and delivered.
The UNIVAC I was too expensive for most universities, and Sperry Rand, unlike companies such as IBM, was not strong enough financially to afford to give many away. However, Sperry Rand donated UNIVAC I systems to Harvard University (1956), the University of Pennsylvania (1957), and Case Institute of Technology in Cleveland, Ohio (1957). The UNIVAC I at Case was still operable in 1965 but had been supplanted by a UNIVAC 1107.
A few UNIVAC I systems stayed in service long after they were made obsolete by advancing technology. The Census Bureau used its two systems until 1963, amounting to 12 and 9 years of service, respectively. Sperry Rand itself used two systems in Buffalo, New York until 1968. The insurance company Life and Casualty of Tennessee used its system until 1970, totaling over 13 years of service.
Technical description
Major physical features
UNIVAC I used about 5,000 vacuum tubes, weighed , consumed 125 kW, and could perform about 1,905 operations per second running on a 2.25 MHz clock. The Central Complex alone (i.e. the processor and memory unit) was 4.3 m by 2.4 m by 2.6 m high. The complete system occupied more than 35.5 m2 (382 ft²) of floor space.
Main memory details
The main memory consisted of 1000 words of 12 characters each. When representing numbers, they were written as 11 decimal digits plus sign. The 1000 words of memory consisted of 100 channels of 10-word mercury delay-line registers. The input/output buffers were 60 words each, consisting of 12 channels of 10-word mercury delay-line registers. There were 6 channels of 10-word mercury delay-line registers as spares. With modified circuitry, seven more channels controlled the temperature of the seven mercury tanks, and one more channel was used for the 10-word "Y" register. The total of 126 mercury channels was contained in the seven mercury tanks mounted on the backs of sections MT, MV, MX, NT, NV, NX, and GV. Each mercury tank was divided into 18 mercury channels.
Each 10-word mercury delay-line channel is made up of three sections:
A channel in a column of mercury, with receiving and transmitting quartz piezo-electric crystals mounted at opposite ends.
An intermediate frequency chassis, connected to the receiving crystal, containing amplifiers, detector, and compensating delay, mounted on the shell of the mercury tank.
A recirculation chassis, containing cathode follower, pulse former and retimer, modulator, which drives the transmitting crystal, and input, clear, and memory-switch gates, mounted in the sections adjacent to the mercury tanks.
Instructions and data
Instructions were six alphanumeric characters, packed two instructions per word. The addition time was 525 microseconds and the multiplication time was 2150 microseconds. A non-standard modification called "Overdrive" existed that allowed three four-character instructions per word under some circumstances. (Ingerman's simulator for the UNIVAC, referenced below, also makes this modification available.)
Digits were represented internally using excess-3 ("XS3") binary-coded decimal (BCD) arithmetic with six bits per digit using the same value as the digits of the alphanumeric character set (and one parity bit per digit for error checking), allowing 11-digit signed magnitude numbers. But with the exception of one or two machine instructions, UNIVAC was considered by programmers to be a decimal machine, not a binary machine, and the binary representation of the characters was irrelevant. If a non-digit character was encountered in a position during an arithmetic operation the machine passed it unchanged to the output, and any carry into the non-digit was lost. (Note, however, that a peculiarity of UNIVAC I's addition/subtraction circuitry was that the "ignore", space, and minus characters were occasionally treated as numeric, with values of –3, –2, and –1, respectively, and the apostrophe, ampersand, and left parenthesis were occasionally treated as numeric, with values 10, 11, and 12.)
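The sketch below illustrates the excess-three idea rather than UNIVAC's exact codes: each decimal digit is stored as its value plus three (shown here in the classic four-bit form, whereas UNIVAC used six-bit character codes), with a parity bit added for error checking; the parity convention used here is chosen only for demonstration. A convenient property of excess-three, shown at the end, is that inverting a digit's data bits yields its nines' complement.

```python
# Illustrative excess-three (XS-3) coding: digit d is stored as d + 3, shown
# here in the classic four-bit form rather than UNIVAC's six-bit character
# codes, plus an even-parity check bit (the parity convention is illustrative).
def xs3_encode(digit):
    bits = format(digit + 3, "04b")        # four data bits, excess-three offset
    parity = str(bits.count("1") % 2)      # makes the total number of 1s even
    return parity + bits

def xs3_decode(code):
    return int(code[1:], 2) - 3            # drop parity bit, remove the offset

for d in range(10):
    print(d, xs3_encode(d))

# Nines' complement by bit inversion: inverting the four data bits of d
# yields the code for 9 - d, which simplifies subtraction circuitry.
d = 2
inverted = "".join("1" if b == "0" else "0" for b in xs3_encode(d)[1:])
print(int(inverted, 2) - 3)                # prints 7, the nines' complement of 2
```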
Input/output
Besides the operator's console, the only I/O devices connected to the UNIVAC I were up to 10 UNISERVO tape drives, a Remington Standard electric typewriter and a Tektronix oscilloscope. The UNISERVO was the first commercially sold computer tape drive. It recorded at a density of 128 bits per inch (with an effective transfer rate of 7,200 characters per second) on magnetically plated phosphor bronze tape. The UNISERVO could also read and write UNITYPER-created tapes at 20 bits per inch. The UNITYPER was an offline typewriter-to-tape device, used by programmers and for minor data editing. Backward and forward tape read and write operations were possible on the UNIVAC and were fully overlapped with instruction execution, permitting high system throughput in typical sort/merge data processing applications. Large volumes of data could be submitted as input via magnetic tapes created on an offline card-to-tape system and produced as output via a separate offline tape-to-printer system. The operator's console had three columns of decimal-coded switches that allowed any of the 1000 memory locations to be displayed on the oscilloscope. Since the mercury delay-line memory stored bits in a serial format, a programmer or operator could monitor any memory location continuously and, with sufficient patience, decode its contents as displayed on the scope. The online typewriter was typically used for announcing program breakpoints, checkpoints, and for memory dumps.
Operations
A typical UNIVAC I installation had several ancillary devices. There were:
The UNIPRINTER read metal UNIVAC magnetic tape using a tape reader and typed the data at 10 characters per second using a modified Remington typewriter.
The UNIVAC Card to Tape converter read punched cards at 240 cards per minute and wrote their data on metal UNIVAC magnetic tape using a UNISERVO tape drive.
A tape-to-card converter, which read a magnetic tape and produced punched cards.
UNIVAC did not provide an operating system. Operators mounted a program tape on a UNISERVO, from which it could be loaded automatically by processor logic. The appropriate source and output data tapes would be mounted and the program started. Result tapes then went to the offline printer or, typically for data processing, into short-term storage to be updated with the next set of data produced on the offline card-to-tape unit. The mercury delay-line memory tank temperature was very closely controlled, as the speed of sound in mercury varies with temperature. In the event of a power failure, many hours could elapse before the temperature stabilized.
Reliability
Eckert and Mauchly were uncertain about the reliability of digital logic circuits, about which little was known at the time. The UNIVAC I was designed with parallel computation circuits and result comparison. In practice, only failing components yielded comparison faults, as the circuit designs were very reliable. Tricks were used to manage the reliability of the tubes. Prior to use in the machine, large lots of the predominant tube type, the 25L6, were burned in and carefully tested; often half of a production lot would be thrown away. Technicians installed a tested and burned-in tube in an easily diagnosed location such as the memory recirculation amplifiers. Then, once aged further, this "golden" tube was sent to stock to be used in a difficult-to-diagnose logic position. It took about 30 minutes to turn on the computer, as all filament power supplies were stepped up to operating value over that time to reduce in-rush current and thermal stress on the tubes. As a result, uptimes (MTBF) of many days to weeks were obtained on the processor. The UNISERVO did not have vacuum columns but springs and strings to buffer tape from the reels to the capstan; these were a frequent source of failures.
See also
List of UNIVAC products
History of computing hardware
List of vacuum-tube computers
Ferranti Mark 1
Grace Hopper
Notes
External links
UNIVAC Conference Oral history on 17–18 May 1990. Charles Babbage Institute, University of Minnesota, Minneapolis. 171-page transcript of oral history with computer pioneers, including Jean Bartik, involved with the Univac computer, held on 17–18 May 1990. The meeting involved 25 engineers, programmers, marketing representatives, and salesmen who were involved with the UNIVAC, as well as representatives from users such as General Electric, Arthur Andersen, and the U.S. Census.
Margaret R. Fox Papers, 1935–1976, Charles Babbage Institute, University of Minnesota. collection contains reports, including the original report on the ENIAC, UNIVAC, and many early in-house National Bureau of Standards (NBS) activity reports; memoranda on and histories of SEAC, SWAC, and DYSEAC; programming instructions for the UNIVAC, LARC, and MIDAC; patent evaluations and disclosures relevant to computers; system descriptions; speeches and articles written by Margaret Fox's colleagues; and correspondence of Samuel Alexander, Margaret Fox, and Samuel Williams.
UNIVAC I documentation – From computer documentation repository www.bitsavers.org
The UNIVAC and the Legacy of the ENIAC – From the University of Pennsylvania Library (PENN UNIVERSITY/exhibitions)
UNIVAC 1 Computer System – By Allan G. Reiter, formerly of the ERA division of Remington Rand
UNIVAC I & II Simulator – By Peter Zilahy Ingerman; Shareware simulator of the UNIVAC I and II. Archived download
Core memory slide show – This slide show contains a photo of a 1951 core memory module for a UNIVAC I
Remington-Rand Presents UNIVAC – Promotional film from the collection of the Computer History Museum, Mountain View, California
"Want To Buy A Brain", May 1949, Popular Science early illustrated article on the UNIVAC for the general public
YouTube Video: 1951 UNIVAC 1 Computer Basic System Components – Computer History Archives Project
UNIVAC 0001
Vacuum tube computers
Computer-related introductions in 1951 |
2280818 | https://en.wikipedia.org/wiki/Broadcast%20encryption | Broadcast encryption | Broadcast encryption is the cryptographic problem of delivering encrypted content (e.g. TV programs or data on DVDs) over a broadcast channel in such a way that only qualified users (e.g. subscribers who have paid their fees or DVD players conforming to a specification) can decrypt the content. The challenge arises from the requirement that the set of qualified users can change in each broadcast emission, and therefore revocation of individual users or user groups should be possible using broadcast transmissions, only, and without affecting any remaining users. As efficient revocation is the primary objective of broadcast encryption, solutions are also referred to as revocation schemes.
Rather than directly encrypting the content for qualified users, broadcast encryption schemes distribute keying information that allows qualified users to reconstruct the content encryption key, whereas revoked users find insufficient information to recover the key. The typical setting considered is that of a unidirectional broadcaster and stateless users (i.e., users do not keep records of previous messages from the broadcaster), which is especially challenging. In contrast, the scenario where users are supported with a bi-directional communication link with the broadcaster, and thus can more easily maintain their state, and where users are not only dynamically revoked but also added (joined), is often referred to as multicast encryption.
The problem of practical broadcast encryption was first formally studied by Amos Fiat and Moni Naor in 1994. Since then, several solutions have been described in the literature, including combinatorial constructions, one-time revocation schemes based on secret sharing techniques, and tree-based constructions. In general, they offer various trade-offs between the increase in the size of the broadcast, the number of keys that each user needs to store, and the feasibility of an unqualified user or a collusion of unqualified users being able to decrypt the content. Luby and Staddon have used a combinatorial approach to study the trade-offs for some general classes of broadcast encryption algorithms. A particularly efficient tree-based construction is the "subset difference" scheme, which is derived from a class of so-called subset cover schemes. The subset difference scheme is notably implemented in the AACS for HD DVD and Blu-ray Disc encryption. A rather simple broadcast encryption scheme is used for the CSS for DVD encryption.
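As a baseline for these trade-offs, the simplest conceivable revocation scheme wraps a fresh content key separately under each qualified user's long-term key, so the broadcast header grows linearly with the number of qualified users; subset-cover constructions such as subset difference exist precisely to shrink that overhead. The Python sketch below illustrates only this baseline, using a hash-derived XOR mask as a stand-in for a real cipher, with all names invented.

```python
import os, hashlib

def mask(user_key, nonce):
    # Derive a keystream-like mask from a long-term key and a fresh nonce.
    return hashlib.sha256(user_key + nonce).digest()[:16]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Setup: every receiver is given a long-term key before deployment.
user_keys = {uid: os.urandom(16) for uid in ["alice", "bob", "carol"]}
revoked = {"bob"}

# Broadcast: wrap a fresh content key once per non-revoked user.
content_key = os.urandom(16)
nonce = os.urandom(16)
header = {uid: xor(content_key, mask(key, nonce))
          for uid, key in user_keys.items() if uid not in revoked}

# Reception: qualified users unwrap the key; revoked users find nothing usable.
def recover(uid):
    return xor(header[uid], mask(user_keys[uid], nonce)) if uid in header else None

assert recover("alice") == content_key
assert recover("bob") is None   # revoked in this emission; no key change needed
```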
The problem of rogue users sharing their decryption keys or the decrypted content with unqualified users is mathematically insoluble. Traitor tracing algorithms aim to minimize the damage by retroactively identifying the user or users who leaked their keys, so that punitive measures, legal or otherwise, may be undertaken. In practice, pay TV systems often employ set-top boxes with tamper-resistant smart cards that impose physical restraints on a user learning their own decryption keys. Some broadcast encryption schemes, such as AACS, also provide tracing capabilities.
See also
Multicast encryption
Threshold cryptosystem
Digital Rights Management
References
Digital rights management
Copy protection
Broadcasting
Key management
Cryptographic protocols |
18934464 | https://en.wikipedia.org/wiki/Embrace%2C%20extend%2C%20and%20extinguish | Embrace, extend, and extinguish | "Embrace, extend, and extinguish" (EEE), also known as "embrace, extend, and exterminate", is a phrase that the U.S. Department of Justice found was used internally by Microsoft to describe its strategy for entering product categories involving widely used standards, extending those standards with proprietary capabilities, and then using those differences in order to strongly disadvantage its competitors.
Origin
The strategy and phrase "embrace and extend" were first described outside Microsoft in a 1996 article in The New York Times titled "Tomorrow, the World Wide Web! Microsoft, the PC King, Wants to Reign Over the Internet", in which writer John Markoff said, "Rather than merely embrace and extend the Internet, the company's critics now fear, Microsoft intends to engulf it." The phrase "embrace and extend" also appears in a facetious motivational song by an anonymous Microsoft employee, and in an interview of Steve Ballmer by The New York Times.
A variant of the phrase, "embrace, extend then innovate", is used in J Allard's 1994 memo "Windows: The Next Killer Application on the Internet" to Paul Maritz and other executives at Microsoft. The memo starts with background on the Internet in general, and then proposes a strategy for turning Windows into the next "killer app" for the Internet.
The addition of "extinguish" in the phrase "embrace, extend and extinguish" was first introduced in the United States v. Microsoft Corp. antitrust trial when then vice president of Intel, Steven McGeady, used the phrase to explain Microsoft vice president Paul Maritz's statement in a 1995 meeting with Intel that described Microsoft's strategy to "kill HTML by extending it".
Strategy
The strategy's three phases are:
Embrace: Development of software substantially compatible with a competing product, or implementing a public standard.
Extend: Addition and promotion of features not supported by the competing product or part of the standard, creating interoperability problems for customers who try to use the "simple" standard.
Extinguish: When extensions become a de facto standard because of their dominant market share, they marginalize competitors that do not or cannot support the new extensions.
Microsoft has claimed that the original strategy is not anti-competitive, but rather an exercise of its discretion to implement features it believes customers want.
Examples by Microsoft
Browser incompatibilities:
The plaintiffs in an antitrust case claimed that Microsoft had added support for ActiveX controls in the Internet Explorer Web browser to break compatibility with Netscape Navigator, which used components based on Java and Netscape's own plugin system.
On CSS, data:, etc.: A decade after the original Netscape-related antitrust suit, the Web browser company Opera Software filed an antitrust complaint against Microsoft with the European Union, saying it "calls on Microsoft to adhere to its own public pronouncements to support these standards, instead of stifling them with its notorious 'Embrace, Extend and Extinguish' strategy".
Office documents: In a memo to the Office product group in 1998, Bill Gates stated: "One thing we have got to change in our strategy – allowing Office documents to be rendered very well by other people's browsers is one of the most destructive things we could do to the company. We have to stop putting any effort into this and make sure that Office documents very well depends on PROPRIETARY IE capabilities. Anything else is suicide for our platform. This is a case where Office has to avoid doing something to destroy Windows."
Breaking Java's portability: The antitrust case's plaintiffs also accused Microsoft of using an "embrace and extend" strategy with regard to the Java platform, which was designed explicitly with the goal of developing programs that could run on any operating system, be it Windows, Mac, or Linux. They claimed that, by omitting the Java Native Interface (JNI) from its implementation and providing J/Direct for a similar purpose, Microsoft deliberately tied Windows Java programs to its platform, making them unusable on Linux and Mac systems. According to an internal communication, Microsoft sought to downplay Java's cross-platform capability and make it "just the latest, best way to write Windows applications". Microsoft paid Sun US$20 million in January 2001 to settle the resulting legal implications of their breach of contract.
More Java issues: Sun sued Microsoft over Java again in 2002 and Microsoft agreed to settle out of court for US$2 billion.
Instant messaging: In 2001, CNET's News.com described an instance concerning Microsoft's instant messaging program: Microsoft first "embraced" AOL's IM protocol, the de facto standard of the 1990s and early 2000s; then "extended" the standard with proprietary Microsoft add-ons which added new features but broke compatibility with AOL's software; gained dominance, since Microsoft had 95% OS share and its MSN Messenger was provided for free; and finally "extinguished" and locked out AOL's IM software, since AOL was unable to use the modified MS-patented protocol.
Adobe fears: Adobe Systems refused to let Microsoft implement built-in PDF support in Microsoft Office, citing fears of EEE. Current versions of Microsoft Office have built-in support for PDF, as well as several other ISO standards.
Employee testimony: In 2007, Microsoft employee Ronald Alepin gave sworn expert testimony for the plaintiffs in Comes v. Microsoft in which he cited internal Microsoft emails to justify the claim that the company intentionally employed this practice.
Email protocols: Microsoft supported POP3, IMAP, and SMTP email protocols in their Microsoft Outlook email client. At the same time, they developed their own email protocol, MAPI, which has since been documented but is largely unused by third parties. Microsoft has announced that they will end support for basic authentication access to Exchange Online APIs for Office 365 customers, which disables most use of IMAP or POP3 and requires significant upgrades to applications in order to continue to use those protocols; some customers have responded by simply shutting off older protocols.
Unix/Linux: Microsoft included a bare-minimum POSIX layer from the beginning of NT, later replaced with Windows Services for UNIX, a more full-featured UNIX environment based on Interix with various unique features that were not portable to other Unix-like operating systems. Windows Subsystem for Linux replaced it in 2018; this compatibility layer, which reimplemented Linux interfaces on top of Windows rather than running an unmodified Linux kernel, caused fears of EEE. The current WSL2 has moved away from reimplementing Linux to virtualizing an actual Linux kernel and allowing full distribution installations, beginning with Ubuntu.
Web browsers
Netscape
During the browser wars, Netscape implemented the "font" tag, among other HTML extensions, without seeking review from a standards body. With the rise of Internet Explorer, the two companies became locked in a dead heat to out-implement each other with non-standards-compliant features. In 2004, to prevent a repeat of the "browser wars", and the resulting morass of conflicting standards, the browser vendors Apple Inc. (Safari), Mozilla Foundation (Firefox), and Opera Software (Opera browser) formed the Web Hypertext Application Technology Working Group (WHATWG) to create open standards to complement those of the World Wide Web Consortium. Microsoft refused to join, citing the group's lack of a patent policy as the reason.
Google Chrome
With its dominance in the web browser market, Google has been accused of using Google Chrome and Blink development to push new web standards that are proposed in-house by Google and subsequently implemented by its services first and foremost. These have led to performance disadvantages and compatibility issues with competing browsers, and in some cases, developers intentionally refusing to test their websites on any other browser than Chrome. Tom Warren of The Verge went as far as comparing Chrome to Internet Explorer 6, the default browser of Windows XP that was often targeted by competitors due to its similar ubiquity in the early 2000s.
See also
Criticism of Microsoft
Halloween documents
Microsoft and open source
Network effect
Path dependence
Vendor lock-in
32-bit vs 64-bit
AARD code
References
External links
Report on Microsoft documents relating to Office and IE Embrace, extend and extinguish
Microsoft criticisms and controversies
Interoperability
Marketing techniques |
32984923 | https://en.wikipedia.org/wiki/TELEMAC | TELEMAC | In computational fluid dynamics, TELEMAC is short for the open TELEMAC-MASCARET system, a suite of finite-element computer programs owned by the Laboratoire National d'Hydraulique et Environnement (LNHE), part of the R&D group of Électricité de France. After many years of commercial distribution, a consortium (the TELEMAC-MASCARET Consortium) was officially created in January 2010 to organize the open-source distribution of the open TELEMAC-MASCARET system, which is now available under GPLv3.
Available modules
TELEMAC-2D
Its 2D hydrodynamics module, TELEMAC-2D, solves the so-called shallow water equations, also known as the Saint-Venant equations (sketched after the list below). TELEMAC-2D solves the Saint-Venant equations using the finite-element or finite-volume method and a computation mesh of triangular elements. It can perform simulations in transient and permanent conditions. TELEMAC-2D can take into account the following phenomena:
Propagation of long waves, taking into account non-linear effects
Bed friction
Influence of Coriolis force
Influence of meteorological factors: atmospheric pressure and wind
Turbulence
Torrent and river flows
Influence of horizontal temperature or salinity gradients on density
Cartesian or spherical coordinates for large domains
Dry areas in the computational domain: intertidal flats and flood plains
Current entrainment and diffusion of a tracer, with source and sink terms
Monitoring of floats and Lagrangian drifts
Treatment of singular points: sills, dikes, pipes.
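For reference, the depth-averaged equations solved by TELEMAC-2D are commonly written in the following non-conservative form, with water depth h, depth-averaged velocity components u and v, free-surface elevation Z, gravity g, turbulent diffusion coefficient nu_t and source terms S_h, S_x, S_y; the notation follows standard textbook presentations rather than any particular TELEMAC release.

```latex
\begin{aligned}
\frac{\partial h}{\partial t} + \nabla\cdot(h\,\mathbf{u}) &= S_h,\\
\frac{\partial u}{\partial t} + \mathbf{u}\cdot\nabla u &=
  -g\,\frac{\partial Z}{\partial x} + S_x
  + \frac{1}{h}\,\nabla\cdot\bigl(h\,\nu_t\,\nabla u\bigr),\\
\frac{\partial v}{\partial t} + \mathbf{u}\cdot\nabla v &=
  -g\,\frac{\partial Z}{\partial y} + S_y
  + \frac{1}{h}\,\nabla\cdot\bigl(h\,\nu_t\,\nabla v\bigr).
\end{aligned}
```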
TELEMAC-2D is used in many fields of application. In the maritime field, particular mention may be made of harbour structure design, studies of the effect of building submersible breakwaters or dredging works, the impact of discharges from a sea outfall, study of thermal plumes; and, with regard to rivers, the impact of various types of construction (bridges, sills, groynes), dam breaks, flood studies, transport of dissipating or non-dissipating tracers. TELEMAC-2D can also be used for a number of special applications, such as industrial reservoir failures, avalanches falling into reservoirs, etc.
TELEMAC-3D
Its 3D hydrodynamics module, TELEMAC-3D, uses the same horizontally unstructured mesh as TELEMAC-2D but solves the Navier-Stokes equations, whether in hydrostatic or non-hydrostatic mode, so allowing shorter waves than those in a shallow-water context (where wavelengths are required to be at least twenty times the water depth). The wave formulation for the updating of the free surface is used for efficiency. The 3D mesh is developed as a series of meshed surfaces between the bed and the free surface. Flexibility in the placement of these planes permits the use of a sigma grid (each plane at a given proportion of the spacing between bed and surface) or a number of other strategies for intermediate surface location. One useful example is to include some planes which are at a fixed distance below the water surface, or above the bed. In the presence of a near-surface thermocline or halocline this is advantageous in so far as mixing water between the near-surface planes, where the greatest density gradients are located, can be avoided. When drying occurs the water depth falls to zero exactly and the planes collapse to a zero inter-layer spacing.
MASCARET
MASCARET includes 1-dimensional free-surface flow modelling engines ("MASCARET: a 1-D Open-Source Software for Flow Hydrodynamic and Water Quality in Open Channel Networks", N. Goutal, J.-M. Lacombe, F. Zaoui and K. El-Kadi-Abderrezzak, River Flow 2012 – Murillo (Ed.), pp. 1169-1174). Based on the Saint-Venant equations, its different modules can model various phenomena over large areas and for varied geometries: meshed or branched networks, subcritical or supercritical flows, steady or unsteady flows. MASCARET can represent:
Flood propagation and modelling of floodplains
Submersion wave resulting from dam break
Regulation of managed rivers
Flow in torrents
Canal wetting
Sediment Transport
Water quality (temperature, passive tracers ...)
ARTEMIS
ARTEMIS is a scientific software package dedicated to the simulation of wave propagation towards the coast or into harbours, over a geographical domain of a few square kilometres. The domain may be larger for the simulation of long waves or resonance. The frequency dependence and directional spreading of the wave energy are taken into account by ARTEMIS. The computation retrieves the main wave characteristics over the computational domain: significant wave height, wave incidence, orbital velocities, breaking rate, ...
ARTEMIS solves Berkhoff's equation, also known as the mild-slope equation (sketched after the list below), through a finite-element formulation. The mild-slope equation has been extended to integrate dissipation processes. With a consistent set of boundary conditions, ARTEMIS is able to model the following processes:
Bottom refraction
Diffraction by obstacles
Depth induced wave breaking
Bottom friction
Full or partial reflections against walls, breakwaters, dikes, ...
Radiation or free outflow conditions
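In its classical form (quoted here as background rather than as ARTEMIS's exact implementation), the mild-slope equation reads, for the complex wave potential or amplitude phi, phase celerity c, group celerity c_g and wavenumber k:

```latex
\nabla\cdot\bigl(c\,c_g\,\nabla\varphi\bigr) + k^{2}\,c\,c_g\,\varphi = 0
```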
ARTEMIS has been validated on a set of reference tests and has been successfully used for numerous studies. The software has shown its ability to provide reliable wave agitation results in coastal areas, in the vicinity of maritime works and structures, or in the surf zone. ARTEMIS is an operational tool for determining project conditions:
structure design,
coastal management,
wave conditions for wave-driven currents and associated sand transport,
breaking rate in the surroundings of a harbour for different wave directions.
ARTEMIS is easily put into use with the help of adapted pre- and post-processors for mesh generation and results visualization.
TOMAWAC
TOMAWAC is used to model wave propagation in coastal areas. By means of a finite-element type method, it solves a simplified equation for the spectro-angular density of wave action. This is done for steady-state conditions (i.e. with a fixed depth of water throughout the simulation).
TOMAWAC is particularly simple to use. It can take into account any of the following physical phenomena:
Wind-generated waves
Refraction on the bottom
Refraction by currents
Dissipation through bathymetric wave breaking
Dissipation through counter-current wave breaking
At each point of the computational mesh, TOMAWAC calculates the following information:
Significant wave height (see the relation sketched after this list)
Mean wave frequency
Mean wave direction
Peak wave frequency
Wave-induced currents
Radiation stresses
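These integral quantities follow from the computed spectrum; for example, the significant wave height is conventionally obtained from the zeroth moment of the variance density spectrum E(f, theta) (a standard spectral definition, not specific to TOMAWAC's implementation):

```latex
m_0 = \int_0^{2\pi}\!\int_0^{\infty} E(f,\theta)\,\mathrm{d}f\,\mathrm{d}\theta,
\qquad H_{m_0} = 4\sqrt{m_0}.
```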
Validated with a variety of test cases and already used in numerous studies, TOMAWAC is ideal for engineering projects: design of maritime structures, sediment transport by waves, current studies, etc.
Like all the other modules of the open TELEMAC-MASCARET system, TOMAWAC has the benefit of the system's powerful mesh generation and results display functions. It is also easy to link TOMAWAC and the hydrodynamic or solid transport modules, and to use the same computation grid for various modules (TELEMAC-2D, SISYPHE, TELEMAC-3D, etc.).
Like all the modules of the open TELEMAC-MASCARET system, TOMAWAC was developed in accordance with the quality assurance procedures followed in Electricité de France's Studies and Research Division. The software is supplied with a complete set of documents: theoretical description, user's manual and first steps, validation file, etc.
SISYPHE
SISYPHE is the state-of-the-art sediment transport and bed evolution module of the TELEMAC-MASCARET modelling system. SISYPHE can be used to model complex morphodynamic processes in diverse environments, such as coasts, rivers, lakes and estuaries, for different flow rates, sediment size classes and sediment transport modes.
In SISYPHE, sediment transport processes are grouped as bed-load, suspended-load or total-load, with an extensive library of bed-load transport relations. SISYPHE is applicable to non-cohesive sediments that can be uniform (single-sized) or non-uniform (multiple-sized), cohesive sediments (multi-layer consolidation models), as well as sand-mud mixtures. A number of physically-based processes are incorporated into SISYPHE, such as the influence of secondary currents to precisely capture the complex flow field induced by channel curvature, the effect of bed slope associated with the influence of gravity, bed roughness predictors, and areas of unerodable bed, among others.
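A widely used member of that library of bed-load relations is the Meyer-Peter and Müller formula, quoted here in its classical form as an example rather than as SISYPHE's default, where theta is the Shields parameter, theta_c its critical value (about 0.047), s the ratio of sediment to water density, d the grain diameter and q_b the volumetric bed-load rate per unit width:

```latex
\Phi_b = 8\,\bigl(\theta - \theta_c\bigr)^{3/2},
\qquad q_b = \Phi_b\,\sqrt{g\,(s-1)\,d^{3}}.
```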
For currents only, SISYPHE can be tightly coupled to the depth-averaged shallow water module TELEMAC-2D or to the three-dimensional Reynolds-averaged Navier-Stokes module TELEMAC-3D. In order to account for the effect of waves or combined waves and currents, SISYPHE can be internally coupled to the waves module TOMAWAC.
SISYPHE can be easily expanded and customized to particular requirements by modifying friendly, easy to read Fortran files. To help the community of users and developers, SISYPHE includes a large number of examples, verification and validation tests for a range of applications.
Common techniques
Common to all its modules, finite volume style numerical techniques are used to ensure that both water and tracer can be well conserved in the presence of drying and subsequent wetting.
External links
www.opentelemac.org, www.openmascaret.org
docs.opentelemac.org
wiki.opentelemac.org
cis.opentelemac.org
svn.opentelemac.org
References
Finite element software
Computational fluid dynamics
Finite element software for Linux |
6636707 | https://en.wikipedia.org/wiki/East%20Asia%20%28album%29 | East Asia (album) | East Asia is the 20th studio album recorded by Japanese singer-songwriter Miyuki Nakajima, released in October 1992.
The album features "Shallow Sleep (Asai Nemuri)", a hit single released in July 1992. Nakajima wrote the song as the theme for Shin'ai Naru Mono e, a television drama in which she made a guest appearance as a doctor. The theme song peaked at No. 2 on Japan's Oricon chart in the summer of 1992 and became her first single to sell more than a million copies.
Prior to the release of the album, the songs "Two Boats" and "Haginohara" had already been performed at Yakai, the series of experimental theatre shows that Nakajima has staged annually since 1989.
"Thread (Ito)" is a love song Nakajima dedicated to Zenji Nakayama, a later leader of Tenrikyo who got married at that time. In 1998, it was featured on the television drama Seija no Koushin and was also released as a double A-Side single with "Another Name for Life". This song has become well known through a cover version recorded by the Bank Band, a project which Kazutoshi Sakurai and Takeshi Kobayashi launched for a charity. Their interpretation, featuring Sakurai's vocals, was included on their 2004 Soushi Souai album.
In December 1992, East Asia won an Excellent Album prize at the 34th Japan Record Awards, an award that honors ten exceptional studio albums.
Track listing
All songs written and composed by Miyuki Nakajima, arranged by Ichizo Seo (except "East Asia" co-arranged by David Campbell)
"East Asia" – 6:48
"" – 4:39
"" – 5:21
"" – 6:35
"" – 6:49
"" – 4:49
"" – 4:39
"" – 8:12
"" – 5:07
Personnel
Miyuki Nakajima – lead and backing vocals
Hideo Yamaki – drums
Eiji Shimamura – drums
Jun Aoyama – drums programming
Kenji Takamizu – electric bass
Yasuo Tomikura – electric bass
Chiharu Mikuzuki – electric bass
Tsuyoshi Kon – electric guitar, pedal steel guitar
Takayuki Hijikata – electric guitar
Shigeru Suzuki – electric guitar
Chuei Yoshikawa – acoustic guitar
Elton Nagata – acoustic piano, keyboards
Yasuharu Nakanishi – acoustic piano, keyboards
Nobuo Kurata – acoustic piano, keyboards, synth bass
Nobu Saito – percussion
Toshihiko Furumura – alto sax
Joe's Group – strings
Neko Saito Group – strings
Syd Page Group – strings
Keishi Urata – computer programming
Nobuhiko Nakayama – computer programming
Tatsuhiko Mori – computer programming
Ichizo Seo – computer programming, backing vocals
Yuiko Tsubokura – backing vocals, featuring vocals on "Two Boats"
Kazuyo Sugimoto – backing vocals, featuring vocals on "Two Boats"
Keiko Wada – backing vocals
Yoko Yamauchi – backing vocals
Raven Kane – backing vocals
Julia Waters – backing vocals
Maxine Waters – backing vocals
Akiya – backing vocals
Production
Recording engineer: Tad Goto
Additional engineers: Takanobu Ichikawa, Ray Blair
Assistant engineers: Yutaka Uematsu, Yoshiyuki Yokoyama, Hajime Nagai, Masataka Itoh, Takamasa Kido, Naomi Matsuo, Nobuhiko Nakayama, Tomotaka Takehara, Masashi Kudo, Shouji Sekine, Kenji Nakamura, Jim Gillens
Mixing engineers: Tad Goto, Joe Chiccarelli
Assistants for the mixing engineer: Tomotaka Takehara, Jamie Seyberth
Music coordinators: Koji Kimura, Fumio Miyata, Tomoko Takaya, Ruriko Duer
Art direction and photographer: Jin Tamura
Cover designer: Hirofumi Arai
Illustrator: Shigeko Kashima
Hair and make-up: Noriko Izumisawa
Artist management: Kouji Suzuki
Assistant: Maki Nishida
Management desk: Atsuko Hayashi
General management: Takahiro Uno
Promoters: Tadayoshi Okamoto, Shoko Aoki, Narihiko Yoshida
Artists and repertoire: Yuzo Watanabe, Koichi Suzuki
Assistant for the record producer: Tsuyoshi Ito
Promoter for the recording artist: Yoshio Kan
Dad: Genichi Kawakami
Mastering at Future Disc Systems in Los Angeles, by Tom Baker
Chart positions
Album
Singles
Awards
Release history
References
Miyuki Nakajima albums
1992 albums
Pony Canyon albums |
39803047 | https://en.wikipedia.org/wiki/Dina%20St%20Johnston | Dina St Johnston | Dina St Johnston (née Aldrina Nia Vaughan, 20 September 1930 – 30 June/1 July 2007) was a British computer programmer credited with founding the UK's first software house in 1959.
Early life and education
Born Aldrina Nia Vaughan in south London, St Johnston was educated at Selhurst Grammar School for Girls before leaving school at 16 or 17 (accounts vary) to work for the British Non-Ferrous Metals Research Association. St Johnston worked and studied part-time, studying at Croydon Polytechnic and later Sir John Cass College before gaining an external London University degree in Mathematics.
Early career
In 1953, St Johnston left the British Non-Ferrous Metals Research Association and joined Borehamwood Laboratories of Elliott Brothers (London) Ltd, where she worked in the Theory Division. The company was an early computer company and had produced its first computer in 1950. St Johnston learned to programme at the company and also at the 1954 Cambridge Summer School on Programming and, showing a real flair for programming, began working on EDSAC and the Elliott 400 and 800 series computers. By 1954, St Johnston was responsible for the programming of the Elliott 153 Direction Finding (DF) digital computer for the Admiralty and soon after for programming Elliott's own payroll computer; her work was said to have been inventive and structured, but also very accurate, hardly ever requiring ‘de-bugging’.
Vaughan Computers
Shortly after her marriage to Andrew St Johnston - head of the Elliott computing department - in 1958, St Johnston founded Vaughan Programming Services (VPS) in Hertfordshire in 1959, taking software contracts, and training and hiring additional programmers as needed. On its tenth anniversary in 1969, company literature stated that "VPS was the first registered independent Software unit in the UK (February 1959), that was not a part of a computer manufacturer, not a part of a computer bureau, not a part of a users' organisation and not a part of a consultancy operation."
Very significant contracts came to St Johnston and VPS, such as programming early nuclear power stations, but in 1970 she branched out into hardware, producing her own computer, the 4M, and the company changed its name to Vaughan Systems and Programming in 1975 to reflect the new area of work. One of the 4M Vaughan computers is in The National Museum of Computing.
St Johnston and Vaughan produced software for companies like the BBC, Unilever, and GEC, flight simulators for the RAF, and software that provided real-time information for passengers on British Rail, the type of work for which the company became best known; it developed a particular reputation for transport signalling and display systems.
Later life
In 1996, Vaughan Systems and Programming was sold to Harmon Industries, an American railway signalling company.
St Johnston continued programming until the mid-1990s. She retired in 1999 and died on 30 June/1 July 2007.
See also
Steve Shirley
References
1930 births
2007 deaths
British computer programmers
British women in business
20th-century British businesspeople
People from London |
18698799 | https://en.wikipedia.org/wiki/HTC%20Dream | HTC Dream | The HTC Dream (also known as the T-Mobile G1 in the United States and parts of Europe, and as the Era G1 in Poland) is a smartphone developed by HTC. First released in September 2008, the Dream was the first commercially released device to use the Linux-based Android operating system, which was purchased and further developed by Google and the Open Handset Alliance to create an open competitor to other major smartphone platforms of the time, such as Symbian, BlackBerry OS, and iPhone OS. The operating system offers a customizable graphical user interface, integration with Google services such as Gmail, a notification system that shows a list of recent messages pushed from apps, and Android Market for downloading additional apps.
The Dream was released to mostly positive reception. While the Dream was praised for its solid and robust hardware design, the introduction of the Android operating system was met with criticism for its lack of certain functionality and third-party software in comparison to more established platforms, but was still considered to be innovative due to its open nature, notifications system, and heavy integration with Google services, like Gmail.
History
Development
In July 2005, Google acquired Android Inc., a company led by Andy Rubin which was working on unspecified software for mobile devices. Under the leadership of Google, the team was in the process of developing a standardized, Linux-based operating system for mobile phones to compete against the likes of Symbian and Windows Mobile, which would be offered for use by individual original equipment manufacturers. Initial development of what would become Android was targeted towards a prototype device codenamed "Sooner"; the device was a messaging phone in the style of BlackBerry, with a small, non-touch screen, navigation keys, and a physical QWERTY keyboard. The January 2007 unveiling of the iPhone, Apple's first smartphone, and its pioneering design aspects, caught Rubin off-guard and led to a change in course for the project. The operating system's design was quickly reworked, and attention shifted to a new prototype device codenamed "Dream", a touchscreen device with a sliding, physical keyboard. The inclusion of a physical keyboard was intentional, as Android developers recognized that users did not like the idea of a virtual keyboard, which lacked the physical feedback that makes hardware keyboards useful.
The Android operating system was officially unveiled in November 2007 along with the founding of the Open Handset Alliance (OHA); a consortium of hardware, software, and telecommunication companies devoted to advancing open standards for mobile devices. These companies included Google, along with HTC, a company which was at the time, one of the largest manufacturers of phones. While Google indicated in 2008 that several Linux devices were being tested in preparation for the official public launch of Android, only one was to be released in the United States that year—the HTC Dream. Plans called for the Dream to be released on T-Mobile USA by the end of the year (with some reports suggesting October 2008), targeting the holiday shopping season. Sprint had worked with the OHA, but had not yet unveiled any plans to release an Android phone of its own, while Verizon Wireless and AT&T did not have any plans for Android devices yet at all.
Release
HTC officially announced the Dream on 23 September 2008. It would first be released by T-Mobile as the T-Mobile G1, starting in the United States on 20 October 2008 in its 3G-enabled markets only (it became available in all markets on 24 January 2009), followed by a British release in November 2008, and a release in other European territories in early 2009. On 10 March 2009, it became available in Poland as the Era G1 on Era. On 2 June 2009, both the Dream and its successor (the HTC Magic) were released by Rogers Wireless in Canada.
The Dream was discontinued by T-Mobile on 27 July 2010. The G1 was spiritually succeeded in October 2010 by the T-Mobile G2, a new HTC device which also featured stock Android and a sliding keyboard, and was T-Mobile USA's first "4G" smartphone. In Canada, Rogers suspended sales of the Dream on 15 January 2010 due to a bug affecting the proper use of emergency calls.
Features
Hardware
The Dream's exterior uses a soft, smooth matte plastic shell, and was made available in white, black, and bronze colors. The Dream's design features a distinctive "chin" on the bottom, which houses 5 navigation buttons ("Call", "Home", "Menu", "Back", and "End Call") and a clickable trackball in the center which can be used for scrolling and selecting. The device uses a capacitive touchscreen LCD at a resolution of 320×480; the screen can be slid along a curved hinge to expose a five-row QWERTY keyboard, and as the first releases of Android did not include a virtual keyboard, this keyboard was originally the only method of text input on the device. Although the hardware supported multitouch, the Linux kernel in the Dream's Android distribution was patched to remove multitouch support from its touchscreen drivers for undisclosed reasons. The Dream does not include a traditional headphone jack, requiring an adapter for HTC's proprietary (but Mini-USB compatible) "ExtUSB" port located on the bottom of the device. The rear of the device houses a 3.15-megapixel rear camera with auto-focus.
The Dream uses a 528 MHz Qualcomm MSM7201A system on a chip with 192 MB of RAM, and comes with 256 MB of internal storage, which can be expanded by up to 16 GB using a Micro SD card slot. For network connectivity, the Dream supports Quad-band GSM 850/900/1800/1900 MHz and GPRS/EDGE, plus Dual band UMTS Bands I and IV (1700 & 2100 MHz) and HSDPA/HSUPA (in US/Europe) at 7.2/2 Mbit/s. The device also supports standalone GPS and A-GPS.
Software
The HTC Dream was the first ever smartphone to ship with the Android operating system. The operating system heavily integrates with, and provides apps for various Google services, such as Gmail (with push email support), Maps, Search, Talk, and YouTube, while the contacts and calendar apps can sync with the online Google Contacts and Google Calendar services respectively. The device also ships with an email app supporting other POP3 and IMAP-based mail services, an instant messaging app with support for multiple services, and a WebKit-based web browser. A notification system displays icons for certain events (such as e-mails and text messages) on the left side of the status bar across the top of the screen; dragging down from the top of the screen exposes a tray with more detailed information for each notification. The Android Market can be used to download additional apps for the device. The G1 as sold by T-Mobile also shipped with an Amazon MP3 app, allowing users to purchase DRM-free music online, and download them straight to the device via Wi-Fi.
The Dream could also be upgraded to newer versions of Android, which added new features and enhancements to the platform. The latest version of Android officially made available for the Dream, 1.6 "Donut", was released for T-Mobile USA's G1 in October 2009. The 1.6 update was not released on the Rogers HTC Dream in Canada (which stayed on 1.5 "Cupcake"); Rogers claimed that the update was only being made available for "'Google'-branded" models of the device.
Development and modding
Due to the open source nature of the Android platform, the Dream became a popular target for modding. Shortly after the release of the Dream, developers discovered a software exploit which would allow a user to gain superuser access to the phone—a process which would be referred to as "rooting". As a parallel to "jailbreaking" on iOS devices, root access would enable users to perform tweaks and other changes at the system level that cannot be performed under normal circumstances (such as adding auto-rotation, and installing a custom kernel that restored the aforementioned multitouch support).
After the Dream's bootloader was dumped, work began on modifying it so that it could install third-party firmware, and on converting official Android update files into a format that could be installed using the modified bootloader. Around the same time, Google made the Android Dev Phone 1 available for registered Android developers; the Dev Phone 1 was a SIM- and hardware-unlocked version of the HTC Dream that came pre-configured for superuser access to the internal files of the phone, allowing users to completely replace the bootloader and operating system.
As a result of these developments, a dedicated community, centered on forums such as XDA Developers, emerged surrounding the creation of custom firmware ("ROMs") built from the Android source code. Projects such as CyanogenMod continued to produce ports of newer versions of Android for the Dream and later Android devices, while adding their own features and enhancements to the operating system as well.
On later Android devices, a number of factors (including carrier practices and custom software provided by device manufacturers that sits atop Android, such as HTC Sense and Samsung TouchWiz) led to fragmentation in the availability of newer versions of the OS for certain devices. As a result, the development and use of custom ROMs (which are usually based on the "stock" version of Android) ultimately became an important, yet controversial, aspect of the Android ecosystem. In August 2012, a group of users released an unofficial port of a later version of Android, 4.1 "Jelly Bean", for the Dream as a proof of concept. However, the port lacked key functionality and had severe performance issues due to the phone's relatively weak hardware in comparison to the modern devices that 4.1 was designed for.
Reception
Critical reception
The Dream was released to mixed reviews. The design of the Dream was considered to be solid and robust; Joshua Topolsky of Engadget considered its hardware design a contrast to that of the iPhone, due to its numerous navigation buttons (in comparison to just a home button) and its "charming, retro-future look; like a gadget in a 1970's sci-fi movie set in the year 2038." The Dream's keyboard, as the only method of text input prior to Android 1.5's introduction of a virtual keyboard, was considered to be sufficient, although some felt that its keys were too small. Its display was considered sufficient for a phone of its class, but John Brandon of TechRadar felt that it was not good enough for watching videos due to its poor contrast and small size in comparison to the iPhone. Android itself was considered to still be in its infancy (primarily due to its bare-bones functionality in certain areas, limited application catalog, lack of multitouch gestures, or syncing with certain enterprise platforms), but showed promise through its customizable interface, increased flexibility over iOS, its notification system, ability to display security permissions when downloading apps, and its heavy integration with Google services.
Brandon gave the Dream a 4.5/5, despite stating that it was "no Apple iPhone killer", given the lower quality of its application selection and multimedia features in comparison. He concluded that the Dream was a "stellar" phone that "points to a future when a phone is as flexible and useful as the PC on your desk." Engadget felt that the Dream "isn't going to blow anyone's mind right out of the gate" due to its hardware, but that the Android platform as a whole held its own against its competitors, and that early adopters of Android through the G1 were "buying into one of the most exciting developments in the mobile world in recent memory." GSMArena noted that the Dream would have been "another average smart QWERTY messenger" had it not been for its introduction of Android; it concluded that the Dream was "far from the perfect package", but still believed that "it gets the things that matter done and gets them done right."
Commercial reception
In April 2009, T-Mobile announced that it had sold over a million G1s in the United States, accounting for two-thirds of the devices on its 3G network. AdMob estimated in March 2009 that Android and the G1 had reached a market share of 6% in the United States.
See also
HTC Hero, HTC's first Android device with its Sense software.
Nexus One, an Android device developed for Google by HTC to launch the Nexus series of flagship devices
HTC Touch Diamond, HTC's Windows Mobile flagship at the time
References
External links
T-Mobile G1 product page (archived)
Deutsche Telekom
Dream
Smartphones
Touchscreen portable media players
Mobile phones introduced in 2008
Discontinued smartphones
Android (operating system) devices
Mobile phones with an integrated hardware keyboard
Mobile phones with user-replaceable battery
Slider phones |
84777 | https://en.wikipedia.org/wiki/Fingerprint | Fingerprint | A fingerprint is an impression left by the friction ridges of a human finger. The recovery of partial fingerprints from a crime scene is an important method of forensic science. Moisture and grease on a finger result in fingerprints on surfaces such as glass or metal. Deliberate impressions of entire fingerprints can be obtained by ink or other substances transferred from the peaks of friction ridges on the skin to a smooth surface such as paper. Fingerprint records normally contain impressions from the pad on the last joint of fingers and thumbs, though fingerprint cards also typically record portions of lower joint areas of the fingers.
Human fingerprints are detailed, nearly unique, difficult to alter, and durable over the life of an individual, making them suitable as long-term markers of human identity. They may be employed by police or other authorities to identify individuals who wish to conceal their identity, or to identify people who are incapacitated or deceased and thus unable to identify themselves, as in the aftermath of a natural disaster.
Biology
Fingerprints are impressions left on surfaces by the friction ridges on the finger of a human. The matching of two fingerprints is among the most widely used and most reliable biometric techniques. Fingerprint matching considers only the obvious features of a fingerprint.
A friction ridge is a raised portion of the epidermis on the digits (fingers and toes), the palm of the hand or the sole of the foot, consisting of one or more connected ridge units of friction ridge skin. These are sometimes known as "epidermal ridges" which are caused by the underlying interface between the dermal papillae of the dermis and the interpapillary (rete) pegs of the epidermis. These epidermal ridges serve to amplify vibrations triggered, for example, when fingertips brush across an uneven surface, better transmitting the signals to sensory nerves involved in fine texture perception. These ridges may also assist in gripping rough surfaces and may improve surface contact in wet conditions.
Classification systems
Before computerization, manual filing systems were used in large fingerprint repositories. A fingerprint classification system groups fingerprints according to their characteristics and therefore helps in the matching of a fingerprint against a large database of fingerprints. A query fingerprint that needs to be matched can therefore be compared with a subset of fingerprints in an existing database. Early classification systems were based on the general ridge patterns, including the presence or absence of circular patterns, of several or all fingers. This allowed the filing and retrieval of paper records in large collections based on friction ridge patterns alone. The most popular systems used the pattern class of each finger to form a numeric key to assist lookup in a filing system. Fingerprint classification systems included the Roscher System, the Juan Vucetich System and the Henry Classification System. The Roscher System was developed in Germany and implemented in both Germany and Japan. The Vucetich System was developed in Argentina and implemented throughout South America. The Henry Classification System was developed in India and implemented in most English-speaking countries.
In the Henry Classification System there are three basic fingerprint patterns: loop, whorl, and arch, which constitute 60–65 percent, 30–35 percent, and 5 percent of all fingerprints respectively. There are also more complex classification systems that break down patterns even further, into plain arches or tented arches, and into loops that may be radial or ulnar, depending on the side of the hand toward which the tail points. Ulnar loops start on the pinky-side of the finger, the side closer to the ulna, the lower arm bone. Radial loops start on the thumb-side of the finger, the side closer to the radius. Whorls may also have sub-group classifications including plain whorls, accidental whorls, double loop whorls, peacock's eye, composite, and central pocket loop whorls.
The system used by most experts, although complex, is similar to the Henry Classification System. It consists of five fractions, in which R stands for right, L for left, i for index finger, m for middle finger, t for thumb, r for ring finger and p (pinky) for little finger. The fractions are as follows:
Ri/Rt + Rr/Rm + Lt/Rp + Lm/Li + Lp/Lr
The numbers assigned to each print are based on whether or not they are whorls. A whorl in the first fraction is given a value of 16, in the second 8, in the third 4, in the fourth 2, and in the last fraction 1. Arches and loops are assigned values of 0. Lastly, the numbers in the numerator and denominator are added up, using the scheme:
(Ri + Rr + Lt + Lm + Lp)/(Rt + Rm + Rp + Li + Lr)
A 1 is added to both top and bottom, to exclude any possibility of division by zero. For example, if the right ring finger and the left index finger have whorls, the fraction used is:
0/0 + 8/0 + 0/0 + 0/2 + 0/0 + 1/1
The resulting calculation is:
(0 + 8 + 0 + 0 + 0 + 1)/(0 + 0 + 0 + 2 + 0 + 1) = 9/3 = 3
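The arithmetic above can be summarised in a short sketch. The finger codes and per-fraction weights follow the fractions given in the text; the dictionary of whorl flags passed in at the end is a made-up example input rather than anything prescribed by the system.

```python
# A minimal sketch of the primary-classification arithmetic described above.
# Finger codes and per-fraction weights follow the fractions in the text;
# the dictionary of whorl flags is a made-up example input.

def primary_classification(whorls):
    """whorls maps finger codes ('Ri', 'Rt', ...) to True if that finger
    shows a whorl; arches and loops contribute 0."""
    # Fractions Ri/Rt + Rr/Rm + Lt/Rp + Lm/Li + Lp/Lr, weighted 16, 8, 4, 2, 1.
    pairs = [("Ri", "Rt", 16), ("Rr", "Rm", 8), ("Lt", "Rp", 4),
             ("Lm", "Li", 2), ("Lp", "Lr", 1)]
    numerator = sum(w for top, _, w in pairs if whorls.get(top))
    denominator = sum(w for _, bottom, w in pairs if whorls.get(bottom))
    # A 1 is added to both parts to rule out division by zero.
    return numerator + 1, denominator + 1

# Worked example from the text: whorls on the right ring and left index fingers.
num, den = primary_classification({"Rr": True, "Li": True})
print(f"{num}/{den} = {num // den}")   # 9/3 = 3
```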
Fingerprint identification
Fingerprint identification, known as dactyloscopy, or hand print identification, is the process of comparing two instances of friction ridge skin impressions (see Minutiae), from human fingers or toes, or even the palm of the hand or sole of the foot, to determine whether these impressions could have come from the same individual. The flexibility of friction ridge skin means that no two finger or palm prints are ever exactly alike in every detail; even two impressions recorded immediately after each other from the same hand may be slightly different. Fingerprint identification, also referred to as individualization, involves an expert, or an expert computer system operating under threshold scoring rules, determining whether two friction ridge impressions are likely to have originated from the same finger or palm (or toe or sole).
An intentional recording of friction ridges is usually made with black printer's ink rolled across a contrasting white background, typically a white card. Friction ridges can also be recorded digitally, usually on a glass plate, using a technique called Live Scan. A "latent print" is the chance recording of friction ridges deposited on the surface of an object or a wall. Latent prints are invisible to the naked eye, whereas "patent prints" or "plastic prints" are viewable with the unaided eye. Latent prints are often fragmentary and require the use of chemical methods, powder, or alternative light sources in order to be made clear. Sometimes an ordinary bright flashlight will make a latent print visible.
When friction ridges come into contact with a surface that will take a print, material that is on the friction ridges such as perspiration, oil, grease, ink, or blood, will be transferred to the surface. Factors which affect the quality of friction ridge impressions are numerous. Pliability of the skin, deposition pressure, slippage, the material from which the surface is made, the roughness of the surface, and the substance deposited are just some of the various factors which can cause a latent print to appear differently from any known recording of the same friction ridges. Indeed, the conditions surrounding every instance of friction ridge deposition are unique and never duplicated. For these reasons, fingerprint examiners are required to undergo extensive training. The scientific study of fingerprints is called dermatoglyphics.
Fingerprinting techniques
Exemplar
Exemplar prints, or known prints, are fingerprints deliberately collected from a subject, whether for enrollment in a system or when under arrest for a suspected criminal offense. During criminal arrests, a set of exemplar prints will normally include one print taken from each finger that has been rolled from one edge of the nail to the other, plain (or slap) impressions of each of the four fingers of each hand, and plain impressions of each thumb. Exemplar prints can be collected using live scan or by using ink on paper cards.
Latent
In forensic science, a partial fingerprint lifted from a surface is called a latent fingerprint. Moisture and grease on fingers result in latent fingerprints on surfaces such as glass, but because they are not clearly visible, their detection may require chemical development through powder dusting, the spraying of ninhydrin, iodine fuming, or soaking in silver nitrate. Depending on the surface or the material on which a latent fingerprint has been found, different methods of chemical development must be used. Forensic scientists use different techniques for porous surfaces, such as paper, and nonporous surfaces, such as glass, metal or plastic. Nonporous surfaces require the dusting process, where fine powder and a brush are used, followed by the application of transparent tape to lift the latent fingerprint off the surface.
While the police often describe all partial fingerprints found at a crime scene as latent prints, forensic scientists call partial fingerprints that are readily visible patent prints. Chocolate, toner, paint or ink on fingers will result in patent fingerprints. Fingerprint impressions left in soft material, such as soap, cement or plaster, are called plastic prints by forensic scientists.
Capture and detection
Live scan devices
Fingerprint image acquisition is considered to be the most critical step in an automated fingerprint authentication system, as it determines the final fingerprint image quality, which has a drastic effect on the overall system performance. There are different types of fingerprint readers on the market, but the basic idea behind each is to measure the physical difference between ridges and valleys.
All the proposed methods can be grouped into two major families: solid-state fingerprint readers and optical fingerprint readers. The procedure for capturing a fingerprint using a sensor consists of rolling or touching with the finger onto a sensing area, which according to the physical principle in use (optical, ultrasonic, capacitive, or thermal; see Fingerprint sensors below) captures the difference between valleys and ridges. When a finger touches or rolls onto a surface, the elastic skin deforms. The quantity and direction of the pressure applied by the user, the skin conditions and the projection of an irregular 3D object (the finger) onto a 2D flat plane introduce distortions, noise, and inconsistencies in the captured fingerprint image. These problems result in inconsistent and non-uniform irregularities in the image. During each acquisition, therefore, the results of the imaging are different and uncontrollable. The representation of the same fingerprint changes every time the finger is placed on the sensor plate, increasing the complexity of any attempt to match fingerprints, impairing the system performance and consequently limiting the widespread use of this biometric technology.
In order to overcome these problems, as of 2010, non-contact or touchless 3D fingerprint scanners have been developed. Acquiring detailed 3D information, 3D fingerprint scanners take a digital approach to the analog process of pressing or rolling the finger. By modelling the distance between neighboring points, the fingerprint can be imaged at a resolution high enough to record all the necessary detail.
Fingerprinting dead humans
The human skin itself, which is a regenerating organ until death, and environmental factors such as lotions and cosmetics, pose challenges when fingerprinting a human. Following the death of a human the skin dries and cools. Obtaining fingerprints from a dead human, to aid identification, is hindered by the fact that only the coroner or medical examiner is allowed to examine the dead body. Fingerprints of dead humans may be obtained during an autopsy.
Latent fingerprint detection
In the 1930s criminal investigators in the United States first discovered the existence of latent fingerprints on the surfaces of fabrics, most notably on the insides of gloves discarded by perpetrators.
Since the late nineteenth century, fingerprint identification methods have been used by police agencies around the world to identify suspected criminals as well as the victims of crime. The basis of the traditional fingerprinting technique is simple. The skin on the palmar surface of the hands and feet forms ridges, so-called papillary ridges, in patterns that are unique to each individual and which do not change over time. Even identical twins (who share their DNA) do not have identical fingerprints. The best way to render latent fingerprints visible, so that they can be photographed, can be complex and may depend, for example, on the type of surfaces on which they have been left. It is generally necessary to use a "developer", usually a powder or chemical reagent, to produce a high degree of visual contrast between the ridge patterns and the surface on which a fingerprint has been deposited.
Developing agents depend on the presence of organic materials or inorganic salts for their effectiveness, although the water deposited may also take a key role. Fingerprints are typically formed from the aqueous-based secretions of the eccrine glands of the fingers and palms with additional material from sebaceous glands primarily from the forehead. This latter contamination results from the common human behaviors of touching the face and hair. The resulting latent fingerprints consist usually of a substantial proportion of water with small traces of amino acids and chlorides mixed with a fatty, sebaceous component which contains a number of fatty acids and triglycerides. Detection of a small proportion of reactive organic substances such as urea and amino acids is far from easy.
Fingerprints at a crime scene may be detected by simple powders, or by chemicals applied in situ. More complex techniques, usually involving chemicals, can be applied in specialist laboratories to appropriate articles removed from a crime scene. With advances in these more sophisticated techniques, some of the more advanced crime scene investigation services from around the world were, as of 2010, reporting that 50% or more of the fingerprints recovered from a crime scene had been identified as a result of laboratory-based techniques.
Forensic laboratories
Although there are hundreds of reported techniques for fingerprint detection, many of these are only of academic interest and there are only around 20 really effective methods which are currently in use in the more advanced fingerprint laboratories around the world.
Some of these techniques, such as ninhydrin, diazafluorenone and vacuum metal deposition, show great sensitivity and are used operationally. Some fingerprint reagents are specific, for example ninhydrin or diazafluorenone reacting with amino acids. Others, such as ethyl cyanoacrylate polymerisation, work apparently by water-based catalysis and polymer growth. Vacuum metal deposition using gold and zinc has been shown to be non-specific, but can detect fat layers as thin as one molecule.
More mundane methods, such as the application of fine powders, work by adhesion to sebaceous deposits and possibly aqueous deposits in the case of fresh fingerprints. The aqueous component of a fingerprint, whilst initially sometimes making up over 90% of the weight of the fingerprint, can evaporate quite quickly and may have mostly gone after 24 hours. Following work on the use of argon ion lasers for fingerprint detection, a wide range of fluorescence techniques have been introduced, primarily for the enhancement of chemically developed fingerprints; the inherent fluorescence of some latent fingerprints may also be detected. Fingerprints can for example be visualized in 3D and without chemicals by the use of infrared lasers.
A comprehensive manual of the operational methods of fingerprint enhancement was last published by the UK Home Office Scientific Development Branch in 2013 and is used widely around the world.
A technique proposed in 2007 aims to identify an individual's ethnicity, sex, and dietary patterns.
Crime scene investigations
The application of the new scanning Kelvin probe (SKP) fingerprinting technique, which makes no physical contact with the fingerprint and does not require the use of developers, has the potential to allow fingerprints to be recorded whilst still leaving intact material that could subsequently be subjected to DNA analysis. A forensically usable prototype was under development at Swansea University during 2010, in research that was generating significant interest from the British Home Office and a number of different police forces across the UK, as well as internationally. The hope is that this instrument could eventually be manufactured in sufficiently large numbers to be widely used by forensic teams worldwide.
Detection of drug use
The secretions, skin oils and dead cells in a human fingerprint contain residues of various chemicals and their metabolites present in the body. These can be detected and used for forensic purposes. For example, the fingerprints of tobacco smokers contain traces of cotinine, a nicotine metabolite; they also contain traces of nicotine itself. Caution should be used, as its presence may be caused by mere contact of the finger with a tobacco product. By treating the fingerprint with gold nanoparticles with attached cotinine antibodies, and then subsequently with a fluorescent agent attached to cotinine antibodies, the fingerprint of a smoker becomes fluorescent; non-smokers' fingerprints stay dark. The same approach, as of 2010, is being tested for use in identifying heavy coffee drinkers, cannabis smokers, and users of various other drugs.
Police force databases
Most American law enforcement agencies use Wavelet Scalar Quantization (WSQ), a wavelet-based system for efficient storage of compressed fingerprint images at 500 pixels per inch (ppi). WSQ was developed by the FBI, the Los Alamos National Lab, and the National Institute of Standards and Technology (NIST). For fingerprints recorded at 1000 ppi spatial resolution, law enforcement (including the FBI) uses JPEG 2000 instead of WSQ.
Validity
Fingerprints collected at a crime scene, or on items of evidence from a crime, have been used in forensic science to identify suspects, victims and other persons who touched a surface. Fingerprint identification emerged as an important system within police agencies in the late 19th century, when it replaced anthropometric measurements as a more reliable method for identifying persons having a prior record, often under a false name, in a criminal record repository. Fingerprinting has served all governments worldwide during the past 100 years or so to provide identification of criminals. Fingerprints are the fundamental tool in every police agency for the identification of people with a criminal history.
The validity of forensic fingerprint evidence has been challenged by academics, judges and the media. In the United States fingerprint examiners have not developed uniform standards for the identification of an individual based on matching fingerprints. In some countries where fingerprints are also used in criminal investigations, fingerprint examiners are required to match a number of identification points before a match is accepted. In England 16 identification points are required and in France 12, to match two fingerprints and identify an individual. Point-counting methods have been challenged by some fingerprint examiners because they focus solely on the location of particular characteristics in fingerprints that are to be matched. Fingerprint examiners may also uphold the one dissimilarity doctrine, which holds that if there is one dissimilarity between two fingerprints, the fingerprints are not from the same finger. Furthermore, academics have argued that the error rate in matching fingerprints has not been adequately studied. And it has been argued that fingerprint evidence has no secure statistical foundation. Research has been conducted into whether experts can objectively focus on feature information in fingerprints without being misled by extraneous information, such as context.
Fingerprints can theoretically be forged and planted at crime scenes.
Professional certification
Fingerprinting was the basis upon which the first forensic professional organization was formed, the International Association for Identification (IAI), in 1915. The first professional certification program for forensic scientists was established in 1977, the IAI's Certified Latent Print Examiner program, which issued certificates to those meeting stringent criteria and had the power to revoke certification where an individual's performance warranted it. Other forensic disciplines have followed suit and established their own certification programs.
History
Antiquity and the medieval period
Fingerprints have been found on ancient clay tablets, seals, and pottery. They have also been found on the walls of Egyptian tombs and on Minoan, Greek, and Chinese pottery. In ancient China officials authenticated government documents with their fingerprints. In about 200 BC fingerprints were used to sign written contracts in Babylon. Fingerprints from 3D-scans of cuneiform tablets are extracted using the GigaMesh Software Framework.
With the advent of silk and paper in China, parties to a legal contract impressed their handprints on the document. Sometime before 851 CE, an Arab merchant in China, Abu Zayd Hasan, witnessed Chinese merchants using fingerprints to authenticate loans.
Although ancient peoples probably did not realize that fingerprints could uniquely identify individuals, references from the age of the Babylonian king Hammurabi (reigned 1792–1750 BCE) indicate that law officials would take the fingerprints of people who had been arrested. During China's Qin Dynasty, records have shown that officials took hand prints and foot prints as well as fingerprints as evidence from a crime scene. In 650 the Chinese historian Kia Kung-Yen remarked that fingerprints could be used as a means of authentication. In his Jami al-Tawarikh (Universal History), the Iranian physician Rashid-al-Din Hamadani (1247–1318) refers to the Chinese practice of identifying people via their fingerprints, commenting: "Experience shows that no two individuals have fingers exactly alike."
Europe in the 17th and 18th centuries
From the late 16th century onwards, European academics attempted to include fingerprints in scientific studies, but plausible conclusions could be established only from the mid-17th century. In 1686 Marcello Malpighi, professor of anatomy at the University of Bologna, identified ridges, spirals and loops in fingerprints left on surfaces. In 1788 the German anatomist Johann Christoph Andreas Mayer was the first European to conclude that fingerprints were unique to each individual. In 1880 Henry Faulds suggested, based on his studies, that fingerprints are unique to a human.
19th century
In 1823 Jan Evangelista Purkyně identified nine fingerprint patterns. The nine patterns include the tented arch, the loop, and the whorl, which in modern-day forensics are considered ridge details. In 1840, following the murder of Lord William Russell, a provincial doctor, Robert Blake Overton, wrote to Scotland Yard suggesting checking for fingerprints. In 1853 the German anatomist Georg von Meissner (1829–1905) studied friction ridges, and in 1858 Sir William James Herschel initiated fingerprinting in India. In 1877 he first instituted the use of fingerprints on contracts and deeds to prevent the repudiation of signatures in Hooghly near Kolkata and he registered government pensioners' fingerprints to prevent the collection of money by relatives after a pensioner's death.
In 1880 Henry Faulds, a Scottish surgeon in a Tokyo hospital, published his first paper on the usefulness of fingerprints for identification and proposed a method to record them with printing ink. Returning to Great Britain in 1886, he offered the concept to the Metropolitan Police in London but it was dismissed at that time. Up until the early 1890s police forces in the United States and on the European continent could not reliably identify criminals to track their criminal record. Francis Galton published a detailed statistical model of fingerprint analysis and identification in his 1892 book Finger Prints. He had calculated that the chance of a "false positive" (two different individuals having the same fingerprints) was about 1 in 64 billion. In 1892 Juan Vucetich, an Argentine chief police officer, created the first method of recording the fingerprints of individuals on file. In that same year, Francisca Rojas was found in a house with neck injuries, whilst her two sons were found dead with their throats cut. Rojas accused a neighbour, but despite brutal interrogation, this neighbour would not confess to the crimes. Inspector Alvarez, a colleague of Vucetich, went to the scene and found a bloody thumb mark on a door. When it was compared with Rojas' prints, it was found to be identical with her right thumb. She then confessed to the murder of her sons. This was the first known murder case to be solved using fingerprint analysis.
In Kolkata a fingerprint Bureau was established in 1897, after the Council of the Governor General approved a committee report that fingerprints should be used for the classification of criminal records. The bureau employees Azizul Haque and Hem Chandra Bose have been credited with the primary development of a fingerprint classification system eventually named after their supervisor, Sir Edward Richard Henry.
20th century
The French scientist Paul-Jean Coulier developed a method to transfer latent fingerprints on surfaces to paper using iodine fuming. It allowed the London Scotland Yard to start fingerprinting individuals and identify criminals using fingerprints in 1901. Soon after, American police departments adopted the same method and fingerprint identification became a standard practice in the United States. The Scheffer case of 1902 is the first case of the identification, arrest, and conviction of a murderer based upon fingerprint evidence. Alphonse Bertillon identified the thief and murderer Scheffer, who had previously been arrested and his fingerprints filed some months before, from the fingerprints found on a fractured glass showcase, after a theft in a dentist's apartment where the dentist's employee was found dead. It was able to be proved in court that the fingerprints had been made after the showcase was broken.
The identification of individuals through fingerprints for law enforcement has been considered essential in the United States since the beginning of the 20th century. Body identification using fingerprints has also been valuable in the aftermath of natural disasters and anthropogenic hazards. In the United States, the FBI manages a fingerprint identification system and database called the Integrated Automated Fingerprint Identification System (IAFIS), which currently holds the fingerprints and criminal records of over 51 million criminal record subjects and over 1.5 million civil (non-criminal) fingerprint records. OBIM, formerly U.S. VISIT, holds the largest repository of biometric identifiers in the U.S. government at over 260 million individual identities. When it was deployed in 2004, this repository, known as the Automated Biometric Identification System (IDENT), stored biometric data in the form of two-finger records. Between 2005 and 2009, the DHS transitioned to a ten-print record standard in order to establish interoperability with IAFIS.
In 1910, Edmond Locard established the first forensic lab in France. Criminals may wear gloves to avoid leaving fingerprints. However, the gloves themselves can leave prints that are as unique as human fingerprints. After collecting glove prints, law enforcement can match them to gloves that they have collected as evidence or to prints collected at other crime scenes. In many jurisdictions the act of wearing gloves itself while committing a crime can be prosecuted as an inchoate offense.
Use of fingerprints in schools
The non-governmental organization (NGO) Privacy International in 2002 made the cautionary announcement that tens of thousands of UK school children were being fingerprinted by schools, often without the knowledge or consent of their parents. That same year, the supplier Micro Librarian Systems, which uses a technology similar to that used in US prisons and the German military, estimated that 350 schools throughout Britain were using such systems to replace library cards. By 2007, it was estimated that 3,500 schools were using such systems. Under the United Kingdom Data Protection Act, schools in the UK do not have to ask parental consent to allow such practices to take place. Parents opposed to fingerprinting may bring only individual complaints against schools. In response to a complaint which they are continuing to pursue, in 2010 the European Commission expressed 'significant concerns' over the proportionality and necessity of the practice and the lack of judicial redress, indicating that the practice may break the European Union data protection directive.
In March 2007, the UK government was considering fingerprinting all children aged 11 to 15 and adding the prints to a government database as part of a new passport and ID card scheme and disallowing opposition for privacy concerns. All fingerprints taken would be cross-checked against prints from 900,000 unsolved crimes. Shadow Home secretary David Davis called the plan "sinister". The Liberal Democrat home affairs spokesman Nick Clegg criticised "the determination to build a surveillance state behind the backs of the British people". The UK's junior education minister Lord Adonis defended the use of fingerprints by schools, to track school attendance as well as access to school meals and libraries, and reassured the House of Lords that the children's fingerprints had been taken with the consent of the parents and would be destroyed once children left the school. An Early Day Motion which called on the UK Government to conduct a full and open consultation with stakeholders about the use of biometrics in schools, secured the support of 85 Members of Parliament (Early Day Motion 686). Following the establishment in the United Kingdom of a Conservative and Liberal Democratic coalition government in May 2010, the UK ID card scheme was scrapped.
Serious concerns about the security implications of using conventional biometric templates in schools have been raised by a number of leading IT security experts, one of whom has voiced the opinion that "it is absolutely premature to begin using 'conventional biometrics' in schools". The vendors of biometric systems claim that their products bring benefits to schools such as improved reading skills, decreased wait times in lunch lines and increased revenues. They do not cite independent research to support this view. One education specialist wrote in 2007: "I have not been able to find a single piece of published research which suggests that the use of biometrics in schools promotes healthy eating or improves reading skills amongst children... There is absolutely no evidence for such claims".
The Ottawa Police in Canada have advised parents who fear their children may be kidnapped to fingerprint their children.
Absence or mutilation of fingerprints
A very rare medical condition, adermatoglyphia, is characterized by the absence of fingerprints. Affected persons have completely smooth fingertips, palms, toes and soles, but no other medical signs or symptoms. A 2011 study indicated that adermatoglyphia is caused by the improper expression of the protein SMARCAD1. The condition has been called immigration delay disease by the researchers describing it, because the congenital lack of fingerprints causes delays when affected persons attempt to prove their identity while traveling. Only five families with this condition had been described as of 2011.
People with Naegeli–Franceschetti–Jadassohn syndrome and dermatopathia pigmentosa reticularis, which are both forms of ectodermal dysplasia, also have no fingerprints. Both of these rare genetic syndromes produce other signs and symptoms as well, such as thin, brittle hair.
The anti-cancer medication capecitabine may cause the loss of fingerprints. Swelling of the fingers, such as that caused by bee stings, will in some cases cause the temporary disappearance of fingerprints, though they will return when the swelling recedes.
Since the elasticity of skin decreases with age, many senior citizens have fingerprints that are difficult to capture. The ridges get thicker, and the height between the top of the ridge and the bottom of the furrow narrows, so the ridges are less prominent.
Fingerprints can be erased permanently, and this can potentially be used by criminals to reduce their chance of conviction. Erasure can be achieved in a variety of ways, including simply burning the fingertips, using acids, and advanced techniques such as plastic surgery. John Dillinger burned his fingers with acid, but prints taken during a previous arrest and upon death still showed a near-complete correspondence with one another.
Fingerprint verification
Fingerprints can be captured as graphical ridge and valley patterns. Because of their uniqueness and permanence, fingerprints emerged as the most widely used biometric identifier in the 2000s. Automated fingerprint verification systems were developed to meet the needs of law enforcement and their use became more widespread in civilian applications. Despite being deployed more widely, reliable automated fingerprint verification remained a challenge and was extensively researched in the context of pattern recognition and image processing. The uniqueness of a fingerprint can be established by the overall pattern of ridges and valleys, or the logical ridge discontinuities known as minutiae. In the 2000s minutiae features were considered the most discriminating and reliable feature of a fingerprint. Therefore, the recognition of minutiae features became the most common basis for automated fingerprint verification. The most widely used minutiae features used for automated fingerprint verification were the ridge ending and the ridge bifurcation.
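As a rough illustration of how minutiae can drive automated verification, the sketch below scores two minutiae sets that are assumed to be already aligned. The record layout (position, ridge angle, type) and the tolerance values are illustrative assumptions, and real matchers first estimate a global alignment and use more robust pairing.

```python
# A deliberately simplified scoring sketch for two minutiae sets that are
# assumed to be already aligned. The record layout and tolerances are
# illustrative assumptions, not a standard fixed by this article.
import math
from typing import List, NamedTuple

class Minutia(NamedTuple):
    x: float      # position in pixels
    y: float
    angle: float  # local ridge direction in radians
    kind: str     # "ending" or "bifurcation"

def match_score(template: List[Minutia], candidate: List[Minutia],
                dist_tol: float = 10.0,
                angle_tol: float = math.radians(20)) -> float:
    """Return the fraction of template minutiae with a compatible candidate."""
    matched = 0
    for t in template:
        for c in candidate:
            close = math.hypot(t.x - c.x, t.y - c.y) <= dist_tol
            # Smallest absolute difference between the two ridge directions.
            d_angle = abs((t.angle - c.angle + math.pi) % (2 * math.pi) - math.pi)
            if close and d_angle <= angle_tol and t.kind == c.kind:
                matched += 1
                break
    return matched / len(template) if template else 0.0
```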
Patterns
The three basic patterns of fingerprint ridges are the arch, loop, and whorl:
Arch: The ridges enter from one side of the finger, rise in the center forming an arc, and then exit the other side of the finger.
Loop: The ridges enter from one side of a finger, form a curve, and then exit on that same side.
Whorl: Ridges form circularly around a central point on the finger.
Scientists have found that family members often share the same general fingerprint patterns, leading to the belief that these patterns are inherited.
Minutiae features
Features of fingerprint ridges, called minutiae, include:
Ridge ending: The abrupt end of a ridge
Bifurcation: A single ridge dividing in two
Short or independent ridge: A ridge that commences, travels a short distance and then ends
Island or dot: A single small ridge inside a short ridge or ridge ending that is not connected to all other ridges
Lake or ridge enclosure: A single ridge that bifurcates and reunites shortly afterward to continue as a single ridge
Spur: A bifurcation with a short ridge branching off a longer ridge
Bridge or crossover: A short ridge that runs between two parallel ridges
Delta: A Y-shaped ridge meeting
Core: A circle in the ridge pattern
Fingerprint sensors
A fingerprint sensor is an electronic device used to capture a digital image of the fingerprint pattern. The captured image is called a live scan. This live scan is digitally processed to create a biometric template (a collection of extracted features) which is stored and used for matching. Many technologies have been used including optical, capacitive, RF, thermal, piezoresistive, ultrasonic, piezoelectric, and MEMS.
Optical scanners take a visual image of the fingerprint using a digital camera.
Capacitive or CMOS scanners use capacitors and thus electrical current to form an image of the fingerprint.
Ultrasound fingerprint scanners use high frequency sound waves to penetrate the epidermal (outer) layer of the skin.
Thermal scanners sense the temperature differences on the contact surface, in between fingerprint ridges and valleys.
Consumer electronics login authentication
Since 2000 electronic fingerprint readers have been introduced as consumer electronics security applications. Fingerprint sensors could be used for login authentication and the identification of computer users. However, some less sophisticated sensors have been discovered to be vulnerable to quite simple methods of deception, such as fake fingerprints cast in gels. In 2006, fingerprint sensors gained popularity in the laptop market. Built-in sensors in laptops, such as ThinkPads, VAIO, HP Pavilion and EliteBook laptops, and others also double as motion detectors for document scrolling, like the scroll wheel.
Two of the first smartphone manufacturers to integrate fingerprint recognition into their phones were Motorola with the Atrix 4G in 2011 and Apple with the iPhone 5S on September 10, 2013. One month after, HTC launched the One Max, which also included fingerprint recognition. In April 2014, Samsung released the Galaxy S5, which integrated a fingerprint sensor on the home button.
Following the release of the iPhone 5S model, a group of German hackers announced on September 21, 2013, that they had bypassed Apple's new Touch ID fingerprint sensor by photographing a fingerprint from a glass surface and using that captured image as verification. The spokesman for the group stated: "We hope that this finally puts to rest the illusions people have about fingerprint biometrics. It is plain stupid to use something that you can't change and that you leave everywhere every day as a security token." In September 2015, Apple included a fingerprint scanner in the iPhone home button with the iPhone 6s. The use of the Touch ID fingerprint scanner was optional and could be configured to unlock the screen or pay for mobile apps purchases. Since December 2015, cheaper smartphones with fingerprint recognition have been released, such as the $100 UMI Fair. Samsung introduced fingerprint sensors to its mid-range A series smartphones in 2014.
By 2017 Hewlett Packard, Asus, Huawei, Lenovo and Apple were using fingerprint readers in their laptops. Synaptics says the SecurePad sensor is now available for OEMs to start building into their laptops. In 2018, Synaptics revealed that their in-display fingerprint sensors would be featured on the new Vivo X21 UD smartphone. This was the first mass-produced fingerprint sensor to be integrated into the entire touchscreen display, rather than as a separate sensor.
Video
Video has also become a notable source of identifying information. Features that compare how intense certain parts of a frame are relative to others can help with identification.
Algorithms
Matching algorithms are used to compare previously stored templates of fingerprints against candidate fingerprints for authentication purposes. In order to do this either the original image must be directly compared with the candidate image or certain features must be compared.
Pre-processing
Pre-processing enhances the quality of an image by filtering and removing extraneous noise. The minutiae-based algorithm is only effective with 8-bit grayscale fingerprint images. One reason for this is that an 8-bit gray fingerprint image is a fundamental base when converting the image to a 1-bit image with value 1 for ridges and value 0 for furrows. This process allows for enhanced edge detection so that the fingerprint is revealed in high contrast, with the ridges highlighted in black and the furrows in white. To further optimize the quality of the input image, two more steps are required: minutiae extraction and false minutiae removal. Minutiae extraction is carried out by applying a ridge-thinning algorithm that removes redundant pixels from the ridges; the thinned ridges of the fingerprint image are then marked with a unique ID to facilitate further operations. After minutiae extraction, false minutiae removal is carried out, because insufficient ink or cross-links between ridges can create false minutiae that reduce the accuracy of the fingerprint recognition process.
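A compact sketch of the steps just described (binarisation, ridge thinning, minutiae extraction) is shown below. The crossing-number rule used to label ridge endings and bifurcations is a common textbook method rather than a detail taken from this article, and the NumPy/scikit-image calls are illustrative choices.

```python
# A minimal sketch of the pre-processing pipeline described above, assuming an
# 8-bit grayscale fingerprint image (dark ridges on a light background) given
# as a NumPy array. NumPy/scikit-image and the crossing-number rule are
# illustrative choices, not details taken from this article.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def extract_minutiae(gray):
    # Binarize: ridge pixels -> 1, furrow pixels -> 0 (Otsu threshold).
    ridges = gray < threshold_otsu(gray)
    # Ridge thinning: reduce every ridge to a one-pixel-wide skeleton.
    skel = skeletonize(ridges)
    minutiae = []
    h, w = skel.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not skel[y, x]:
                continue
            # Crossing number: count transitions around the 8 neighbours.
            nb = [skel[y-1, x-1], skel[y-1, x], skel[y-1, x+1], skel[y, x+1],
                  skel[y+1, x+1], skel[y+1, x], skel[y+1, x-1], skel[y, x-1]]
            cn = sum(abs(int(nb[i]) - int(nb[(i + 1) % 8])) for i in range(8)) // 2
            if cn == 1:
                minutiae.append((x, y, "ridge ending"))
            elif cn == 3:
                minutiae.append((x, y, "bifurcation"))
    # False-minutiae removal (e.g. dropping points near the border or points
    # closer together than a ridge width) would follow here, as the text notes.
    return minutiae
```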
Pattern-based (or image-based) algorithms
Pattern based algorithms compare the basic fingerprint patterns (arch, whorl, and loop) between a previously stored template and a candidate fingerprint. This requires that the images can be aligned in the same orientation. To do this, the algorithm finds a central point in the fingerprint image and centers on that. In a pattern-based algorithm, the template contains the type, size, and orientation of patterns within the aligned fingerprint image. The candidate fingerprint image is graphically compared with the template to determine the degree to which they match.
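The comparison step might be sketched as follows: both images are cropped around a supplied core point and compared with a normalised correlation score. Core detection itself is assumed to happen elsewhere, and the window size and scoring choice are illustrative assumptions rather than details given in the article.

```python
# A minimal sketch of image-based comparison: crop a fixed window around each
# image's core point and compute a normalised correlation score. Core
# detection is assumed to happen elsewhere; the window size and scoring
# choice are illustrative assumptions.
import numpy as np

def pattern_similarity(template, candidate, core_t, core_c, half=64):
    """template/candidate: 2-D grayscale arrays; core_t/core_c: (row, col)."""
    def window(img, core):
        r, c = core
        if r < half or c < half or r + half > img.shape[0] or c + half > img.shape[1]:
            return None  # core too close to the border for this window size
        return img[r - half:r + half, c - half:c + half].astype(float)

    a, b = window(template, core_t), window(candidate, core_c)
    if a is None or b is None:
        return 0.0
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    # Normalised cross-correlation in [-1, 1]; higher means a closer match.
    return float((a * b).sum() / denom) if denom else 0.0
```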
In other species
Some other animals have evolved their own unique prints, especially those whose lifestyle involves climbing or grasping wet objects; these include many primates, such as gorillas and chimpanzees, Australian koalas, and aquatic mammal species such as the North American fisher. According to one study, even with an electron microscope, it can be quite difficult to distinguish between the fingerprints of a koala and a human.
In fiction
Mark Twain
Mark Twain's memoir Life on the Mississippi (1883), notable mainly for its account of the author's time on the river, also recounts parts of his later life and includes tall tales and stories allegedly told to him. Among them is an involved, melodramatic account of a murder in which the killer is identified by a thumbprint. Twain's novel Pudd'nhead Wilson, published in 1893, includes a courtroom drama that turns on fingerprint identification.
Crime fiction
The use of fingerprints in crime fiction has, of course, kept pace with its use in real-life detection. Sir Arthur Conan Doyle wrote a short story about his celebrated sleuth Sherlock Holmes which features a fingerprint: "The Norwood Builder" is a 1903 short story set in 1894 and involves the discovery of a bloody fingerprint which helps Holmes to expose the real criminal and free his client.
The British detective writer R. Austin Freeman's first Thorndyke novel The Red Thumb-Mark was published in 1907 and features a bloody fingerprint left on a piece of paper together with a parcel of diamonds inside a safe-box. These become the center of a medico-legal investigation led by Dr. Thorndyke, who defends the accused whose fingerprint matches that on the paper, after the diamonds are stolen.
Film and television
In the television series Bonanza (1959–1973) the Chinese character Hop Sing uses his knowledge of fingerprints to free Little Joe from a murder charge.
The 1997 movie Men in Black required Agent J to remove his ten fingerprints by putting his hands on a metal ball, an action deemed necessary by the MIB agency to remove the identity of its agents.
In the 2009 science fiction movie Cold Souls, a mule who smuggles souls wears latex fingerprints to frustrate airport security terminals. She can change her identity by simply changing her wig and latex fingerprints.
See also
Biometric technology in access control
Eye vein verification
Fingerprint Verification Competition
Finger vein recognition
Footprint
Heredity
Iris recognition
Shirley McKie, misidentified fingerprint
Medical ultrasonography
Piezoelectricity
References
External links
General
Fingerprint Sourcebook Multi-organization compendium on Fingerprints
International Association for Identification
Scientific Working Group on Friction Ridge Analysis, Study and Technology International Working Group on Fingerprints
Interpol Fingerprint Research
The Science of Fingerprints FBI Publication
FBI Fingerprint Guide
The History of Fingerprints
Fingerprinting.com
Fingerprint Articles at Crime & Clues
Galton's Finger Prints
Henry, Faulds, and Herschel's works on fingerprints
Extensive bibliography So. Calif. Assn. of Fingerprint Officers.
Errors and concerns
Publications Critical of Fingerprint Identification
Will West as fable
Do Fingerprints Lie? The New Yorker (2002)
Why Experts Make Errors, Itiel E. Dror, David Charlton, Journal of Forensic Identification
Surgeon jailed for removing fingerprints – Sydney Morning Herald (news article)
Science and statistics
Fingerprint research and evaluation at the U.S. National Institute of Standards and Technology.
Fingerprint pattern distribution statistics
Biometrics
Print
Human anatomy
Identification |
27804876 | https://en.wikipedia.org/wiki/RhoMobile%20Suite | RhoMobile Suite | RhoMobile Suite, based on the Rhodes open source framework, is a set of development tools for creating data-centric, cross-platform, native mobile consumer and enterprise applications. It allows developers to build native mobile apps using web technologies such as CSS3, HTML5 and JavaScript, together with Ruby. Developers can use RhoMobile Suite to write an app once and run it on the most-used operating systems, including iOS, Android, Windows Phone, Windows Mobile, Windows CE, Windows 10 Mobile and Windows Desktop. Developers control how apps behave on different devices. RhoMobile Suite consists of a set of tools for building, testing, debugging, integrating, deploying and managing consumer and enterprise apps. It comprises the products Rhodes, RhoElements, RhoStudio, RhoConnect, and RhoGallery, and includes a built-in Model View Controller pattern, an Object Relational Mapper for data-intensive apps, integrated data synchronization, and a broad API set. These mobile development services are offered in the cloud and include hosted build, synchronization and application management.
RhoMobile was part of Zebra Technologies, following Zebra's October 2014 acquisition of Motorola Solutions' Enterprise business, until 2016, when the project was open-sourced.
RhoMobile source code is maintained by Tau Technologies, an independent software vendor founded by RhoMobile team members, who provides RhoMobile related consulting and development services.
History
Formerly known as Rhodes Framework, RhoMobile was founded by Adam Blum in September 2008, along with the creation of the Rhodes project on GitHub. The subsequent months saw releases that added iPhone, Windows Mobile, and Android development support. In May 2009, RhoMobile was a winner at Interop 2009 as the event’s "Best Start Up Company." In November 2009 RhoHub was launched as the beginning of RhoMobile’s hosted, cross-platform development services. In May 2010, RhoMobile was a Web 2.0 Expo LaunchPad winner. Motorola Solutions then acquired the company in October 2011. In 2012, RhoMobile was one of InfoWorld's 2012 Technology of the Year Award winners. In 2013, RhoMobile Suite won the About.com Reader’s Choice Award for being the Best Tool for Cross-Platform Formatting on Apps.
In April 2014, Zebra Technologies agreed to acquire Motorola Solutions' Enterprise business for $3.45 billion, with the transaction completed in October 2014.
Since 2016, the project has been maintained by Tau Technologies.
Overview
RhoMobile Suite Products
RhoMobile Suite includes Rhodes, RhoElements, RhoStudio, RhoConnect, RhoHub and RhoGallery.
Rhodes
Rhodes is a free and open source framework and the foundation for the RhoMobile application development platform. It enables developers to use their existing HTML, CSS, JavaScript and Ruby skills to build native apps for all popular operating systems, including iOS, Android, Windows Phone 8. Developers can leverage a large and mature open source community, which has developed thousands of RhoMobile apps.
RhoElements
RhoElements provides enterprise grade features on top of Rhodes - adding support for enterprise grade Zebra devices including Windows Mobile and Windows CE operating systems. It offers a built-in Model View Controller pattern, an Object Relational Mapper for data intensive apps, integrated data synchronization, and a large API set. The Model View Controller separates an app’s interface from its logic to simplify development and help with control. The Object Relational Mapper offers automatic synchronization of backend data. The broad base of enterprise APIs supports features such as RFID capture, bar code scanning and payment processing. RhoElements features automatic data encryption for data at rest security, protecting information and mitigating risk.
RhoStudio
RhoStudio is a free Eclipse plug-in, allowing users to develop an application once for deployment on many mobile platforms. Developers can generate, develop, debug and test applications in one place, with no emulators or additional hardware needed. Popular OS platforms can be simulated by selecting them from a dropdown box. The rationale is that one-time development can mean fewer errors, lower hardware costs, and faster deployment.
RhoConnect
RhoConnect allows developers to build data synchronization into apps for offline data access. It simplifies an enterprise mobile app’s basic backend application integration, enabling users to have their data with them at all times whether or not they connect. It is available on the cloud or on the premises.
RhoGallery
RhoGallery enables enterprise app distribution, which allows an app store to control and push applications. IT departments are able to deliver applications and updates as needed across multiple operating system and devices.
RhoHub
RhoHub is the cloud service that comes with a paid subscription and supports cloud builds, RhoConnect hosted service and RhoGallery hosted service.
Architecture
RhoMobile uses a Model-View-Controller pattern. Views are written in HTML (including HTML5). Controllers are written in Ruby.
RhoMobile 7.x and Simplified Pricing Structure
On July 29, 2014, the release of RhoMobile 5.0 was accompanied by a new streamlined service model offering multi-tier transparent pricing, including a free level and two paid subscription levels, Silver and Gold. This new pricing was created to meet the needs of the developer. In this pricing structure, Rhodes, the basic app framework, and RhoStudio are free to use. Both paid levels include Rhodes and RhoStudio as well as RhoElements (enhanced enterprise features such as barcode reading and automatic data encryption), Cloud Build and a Visual Studio plug-in, RhoConnect, RhoGallery and online support. The purchase of a subscription comes with one month of free services.
Since 2015, RhoMobile Suite has been distributed free of charge under the MIT license, with commercial support provided by Tau Technologies. Flexible support options are available on request from the maintaining company.
See also
Multiple phone web based application framework
Mobile application development
Zebra Technologies
References
External links
rhomobile.com
tau-platform.com
Computing platforms |
645156 | https://en.wikipedia.org/wiki/AirPort%20Extreme | AirPort Extreme | The AirPort Extreme is a residential gateway combining the functions of a router, network switch, wireless access point and NAS as well as varied other functions, and one of Apple's former AirPort products. The latest model, the 6th generation, supports 802.11ac networking in addition to older standards. Versions of the same system with a built-in network-accessible hard drive are known as the AirPort Time Capsule.
Apple discontinued developing its lineup of wireless routers in 2016, but continues limited hardware and software support.
History
The name "AirPort Extreme" originally referred to any one of Apple's AirPort products that implemented the (then) newly introduced 802.11g Wi-Fi standard, differentiating it from earlier devices that ran the slower 802.11a and b standards. At that time the gateway part of this lineup was known as the AirPort Extreme Base Station. With the addition of the even faster Draft-N standards in early 2009 this naming was dropped, and from then on only the gateway has been known as the AirPort Extreme. Several minor upgrades followed, mostly to change antenna and power in the Wi-Fi. In 2013, a major upgrade added 802.11ac support and more internal antennas.
The AirPort Extreme has gone through three distinct physical forms. The earliest models were packaged similar to the original AirPort Base Station, in a round housing known as the "flying saucer". From 2007 to 2013 the Extreme was packaged in a rounded-rectangle white plastic housing, similar in layout and size to the Mac mini or earlier Apple TVs. The 2013 802.11ac model was re-packaged into a more vertical case, taller than it is wide.
Discontinuation
In approximately 2016, Apple disbanded the wireless router team that developed the AirPort Time Capsule and AirPort Extreme router. In 2018, Apple formally discontinued both products, exiting the router market. Bloomberg News noted that "Apple rarely discontinues product categories" and that its decision to leave the business was "a boon for other wireless router makers."
Features
Overview
Fully featured 802.11ac Wi-Fi base station
Sleep Proxy Service
4 Ethernet ports (3 LAN ports, 1 WAN port) — all ports are gigabit Ethernet on newer versions
USB 2.0 interface for disk and printer sharing
Built-in file server (AFP and SMB)
Runs the VxWorks operating system by Wind River or a customized version of NetBSD.
AirPort Disk
The AirPort Disk feature allows users to plug a USB hard drive into the AirPort Extreme for use as a network-attached storage (NAS) device for Mac OS X and Microsoft Windows clients. Users may also connect a USB hub and printer. The performance of USB hard drives attached to an AirPort Extreme is slower than if the drive were connected directly to a computer. This is due to the processor speed on the AirPort Extreme. Depending on the setup and types of reads and writes, performance ranges from 0.5 to 17.5 MB/s for writing and 1.9 to 25.6 MB/s for reading. Performance for the same disk connected directly to a computer would be 6.6 to 31.6 MB/s for writing and 7.1 to 37.2 MB/s for reading. NTFS-formatted drives are not supported.
AirPort Extreme models by generation
Original generation
The original AirPort Extreme Base Station was so named because of its support for the 802.11g standard of the day, as well as for its ability to serve up to 50 Macs or PCs simultaneously. One feature found in most models of this generation was an internal 56K dial-up modem, allowing homes that lacked a broadband connection to enjoy wireless connectivity, albeit at dial-up speeds. It was the last generation to retain the "flying saucer" form factor. Later generations would adopt the short, rounded-square form factor that would be seen until 2013.
1st generation
On January 9, 2007, the AirPort Extreme began shipping with support for the 802.11n draft specification and a built-in wireless print and storage server.
2nd generation
On March 19, 2008, Apple released a firmware update for both models of the AirPort Extreme that, according to third-party reports, allowed AirPort Disks to be used in conjunction with Time Machine, similar to the functionality provided by AirPort Time Capsule.
3rd generation
On March 3, 2009, Apple unveiled a new AirPort Extreme with simultaneous dual-band 802.11 Draft-N radios. This allowed full 802.11 Draft-N 2x2 communication in both 802.11 Draft-N bands at the same time.
4th generation
On October 20, 2009, Apple unveiled an updated AirPort Extreme with antenna improvements.
5th generation
On June 21, 2011, Apple unveiled an updated AirPort Extreme, referred to as AirPort Extreme 802.11n (5th Generation).
A detailed comparison of output power between the 4th generation model MC340LL/A and the 5th generation model MD031LL/A is shown below:
{| class="wikitable" border="1" style="text-align:center"
|-
! Frequency range (MHz)
! Mode
! AirPort Extreme model
! Output power (dBm)
! Output power (mW)
! Comparison (percent)
! Difference (percent)
|-
| rowspan="6"| 2412–2462
| rowspan="2"| 802.11b
| 4th generation
| 24.57
| 286.42
| 100
| rowspan="2" style="background:none; color:red"| -10.3
|-
| 5th generation
| 24.10
| 257.04
| 89.7
|-
| rowspan="2"| 802.11g
| 4th generation
| 21.56
| 143.22
| 100
| rowspan="2" style="background:none; color:green"| +114.8
|-
| 5th generation
| 24.88
| 307.61
| 214.8
|-
| rowspan="2"| 802.11n HT20
| 4th generation
| 21.17
| 130.92
| 100
| rowspan="2" style="background:none; color:green"| +96.8
|-
| 5th generation
| 24.11
| 257.63
| 196.8
|-
| rowspan="2"| 5745–5825
| rowspan="2"| 802.11a
| 4th generation
| 23.07
| 202.77
| 100
| rowspan="2" style="background:none; color:green"| +61.1
|-
| 5th generation
| 25.14
| 326.59
| 161.1
|-
| rowspan="2"| 5745–5805
| rowspan="2"| 802.11n HT20
| 4th generation
| 22.17
| 164.82
| 100
| rowspan="2" style="background:none; color:green"| +104.6
|-
| 5th generation
| 25.28
| 337.29
| 204.6
|-
| rowspan="2"| 5755–5795
| rowspan="2"| 802.11n HT40
| 4th generation
| 21.44
| 139.32
| 100
| rowspan="2" style="background:none; color:green"| +181.8
|-
| 5th generation
| 25.94
| 392.64
| 281.8
|}
Note: A 3 dB increase is equivalent to a doubling of power output.
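For readers checking the table, the milliwatt column follows from the dBm column by the standard decibel-milliwatt conversion; as a brief worked example using the 5th generation 802.11g row:

\[ P_{\text{mW}} = 10^{P_{\text{dBm}}/10}, \qquad 10^{24.88/10} \approx 307.6\ \text{mW}, \qquad 10^{3/10} \approx 1.995 \approx 2 \]

which is why each 3 dB step in the table corresponds to roughly a factor of two in milliwatts.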
6th generation
On June 10, 2013, Apple unveiled an updated AirPort Extreme, referred to as AirPort Extreme 802.11ac (6th Generation). The 6th generation AirPort Extreme (and 5th generation AirPort Time Capsule) featured three-stream 802.11ac Wi-Fi technology with a maximum data rate of 1.3 Gbit/s, which is nearly three times faster than 802.11n. Time Machine was now supported using an external USB hard drive connected to the AirPort Extreme (802.11ac model only).
Comparison chart
*802.11n draft-specification support in 1st- to 3rd-generation models.
**802.11ac draft-specification support in 6th-generation model.
***All models support IPv6 tunnel mode.
****Supported by Apple.
Discontinuation and support
According to a Bloomberg report on November 21, 2016, "Apple Inc. has disbanded its division that develops wireless routers, another move to try to sharpen the company’s focus on consumer products that generate the bulk of its revenue, according to people familiar with the matter."
In an April 2018 statement to 9to5Mac, Apple announced the discontinuation of its AirPort line, effectively leaving the consumer router market. Apple continued supporting the AirPort Extreme.
See also
List of router firmware projects
AirPort Express
Notes
Apple Inc. peripherals
Discontinued Apple Inc. products |
43574240 | https://en.wikipedia.org/wiki/Mieczyslaw%20Rys-Trojanowski | Mieczyslaw Rys-Trojanowski | Mieczyslaw Rys-Trojanowski (October 21, 1881 in Krośniewice – April 4, 1945 in Mauthausen-Gusen concentration camp) was a General brygady (brigadier general) of the Polish Army in the Second Polish Republic.
Rys-Trojanowski was born into a patriotic Polish family: his father Szymon fought in the January Uprising. After high school, Mieczyslaw went to Kraków to study at the Jagiellonian University. There he came into contact with patriotic organizations fighting for the independence of the nation (see Partitions of Poland). Rys-Trojanowski participated in the Revolution of 1905, during which he was arrested on suspicion of attempting to kill the Russian governor of Warsaw (see Congress Poland). In 1908 he moved to Austrian Galicia, where he became one of the key members of the Union of Active Struggle and an organizer of local branches of the Riflemen's Association.
Promoted to the rank of officer, Rys-Trojanowski fought in the Polish Legions in World War I, participating in all major battles of the unit, including the Battle of Kostiuchnowka. On November 1, 1916, he was named commandant of the 5th Legions' Infantry Regiment. In July 1917, after the so-called Oath Crisis, Rys-Trojanowski was imprisoned in Beniaminow. Later, he was kept in German camps at Rastatt, Holzminden and Werl.
In early November 1918, Rys-Trojanowski became commandant of the Chełm Military District and of the Chełm Infantry Regiment (later renamed the 35th Infantry Regiment), with which he fought in the Polish–Soviet War. On May 20, 1920, Rys-Trojanowski was transferred to the 17th Infantry Brigade, and on September 2 of that year he was named commandant of the 9th Infantry Division, remaining in this post until late July 1926.
On December 1, 1924, President Stanislaw Wojciechowski, upon the request of Minister of Military Affairs Władysław Sikorski, promoted Rys-Trojanowski to the rank of General brygady. On July 31, 1926, President Ignacy Mościcki named Rys-Trojanowski commandant of Military District Nr 9 in Brzesc nad Bugiem. In 1935, he was transferred to Warsaw, becoming commandant of Military District Nr 1.
Rys-Trojanowski remained in this post until the Invasion of Poland (September 1, 1939). On September 4 he left the Polish capital, tasked with the creation of the Prusy Army. Due to the rapid German advance, this army was not created, and Rys-Trojanowski instead involved himself in the creation of the Warszawa Army. He personally visited checkpoints on roads east of Warsaw, overseeing the evacuation of civil servants, police officers and army personnel.
In mid-September, he helped with the creation of the Lublin Army, gathering soldiers scattered across northern Lesser Poland, Podlasie and eastern Mazovia. On September 20, after the Soviet invasion of Poland, Rys-Trojanowski fled to Hungary, where he remained until March 1944. He was a very active member of Polish organizations in Hungary, cooperating with both the Home Army and the Hungarian headquarters.
On March 19, 1944, upon the invasion and occupation of Hungary by Nazi Germany, Rys-Trojanowski was arrested and sent to Mauthausen-Gusen concentration camp. He was murdered there on April 4, 1945.
Promotions
Captain: September 29, 1914,
Major: June 15, 1915,
Lieutenant colonel: 1918,
Colonel: May 22, 1920,
General brygady: December 1, 1924.
Awards
Silver Cross of the Virtuti Militari
Commander's Cross with Star of the Order of Polonia Restituta,
Cross of Independence with Swords.
Golden Cross of Merit.
1881 births
1945 deaths
People from Krośniewice
People from Warsaw Governorate
Polish generals
Polish legionnaires (World War I)
Polish people of the Polish–Soviet War
Polish military personnel of World War II
Polish prisoners of war
World War II prisoners of war held by Germany
Commanders of the Virtuti Militari
Grand Crosses of the Order of Polonia Restituta
Recipients of the Gold Cross of Merit (Poland) |
19381951 | https://en.wikipedia.org/wiki/Iliad | Iliad | The Iliad (; , ; sometimes referred to as the Song of Ilion or Song of Ilium) is an ancient Greek epic poem in dactylic hexameter, traditionally attributed to Homer. Usually considered to have been written down circa the 8th century BC, the Iliad is among the oldest extant works of Western literature, along with the Odyssey, another epic poem attributed to Homer, which tells of Odysseus's experiences after the events of the Iliad. In the modern vulgate (the standard accepted version), the Iliad contains 15,693 lines, divided into 24 books; it is written in Homeric Greek, a literary amalgam of Ionic Greek and other dialects. It is usually grouped in the Epic Cycle.
Set during the Trojan War, the ten-year siege of the city of Troy (Ilium) by a coalition of Mycenaean Greek states (Achaeans), it tells of the battles and events during the weeks of a quarrel between King Agamemnon and the warrior Achilles.
Although the story covers only a few weeks in the final year of the war, the Iliad mentions or alludes to many of the Greek legends about the siege; the earlier events, such as the gathering of warriors for the siege, the cause of the war, and related concerns, tend to appear near the beginning. Then the epic narrative takes up events prophesied for the future, such as Achilles's imminent death and the fall of Troy, although the narrative ends before these events take place. However, as these events are prefigured and alluded to more and more vividly, when it reaches an end, the poem has told a more or less complete tale of the Trojan War.
Synopsis
Note: Book numbers are in parentheses and come before the synopsis of the book.
() After an invocation to the Muses, the story launches in medias res towards the end of the Trojan War between the Trojans and the besieging Achaeans. Chryses, a Trojan priest of Apollo, offers the Achaeans wealth for the return of his daughter Chryseis, held captive by Agamemnon, the Achaean leader. Although most of the Achaean army is in favour of the offer, Agamemnon refuses. Chryses prays for Apollo's help, and Apollo causes a plague to afflict the Achaean army.
After nine days of plague, Achilles, the leader of the Myrmidon contingent, calls an assembly to deal with the problem. Under pressure, Agamemnon agrees to return Chryseis to her father, but decides to take Achilles' captive, Briseis, as compensation. Achilles furiously declares that he and his men will no longer fight for Agamemnon and will go home. Odysseus takes a ship and returns Chryseis to her father, whereupon Apollo ends the plague.
In the meantime, Agamemnon's messengers take Briseis away. Achilles becomes very upset, sits by the seashore, and prays to his mother, Thetis. Achilles asks his mother to ask Zeus to bring the Achaeans to the breaking point by the Trojans, so Agamemnon will realize how much the Achaeans need Achilles. Thetis does so, and Zeus agrees.
() Zeus sends a dream to Agamemnon, urging him to attack Troy. Agamemnon heeds the dream but first decides to test the Achaean army's morale, by telling them to go home. The plan backfires, and only the intervention of Odysseus, inspired by Athena, stops a rout.
Odysseus confronts and beats Thersites, a common soldier who voices discontent about fighting Agamemnon's war. After a meal, the Achaeans deploy in companies upon the Trojan plain. The poet takes the opportunity to describe the provenance of each Achaean contingent.
When news of the Achaean deployment reaches King Priam, the Trojans respond in a sortie upon the plain. In a list similar to that for the Achaeans, the poet describes the Trojans and their allies.
() The armies approach each other, but before they meet, Paris offers to end the war by fighting a duel with Menelaus, urged by his brother and head of the Trojan army, Hector. The initial cause of the entire war is alluded to here, when Helen is said to be "embroidering the struggles between Trojans and Achaeans, that Ares had made them fight for her sake." This allusion is then made definitive at the paragraph's close, when Helen is told that Paris and "Menelaus are going to fight about yourself, and you are to be the wife of him who is the victor." Both sides swear a truce and promise to abide by the outcome of the duel. Paris is beaten, but Aphrodite rescues him and leads him to bed with Helen before Menelaus can kill him.
() Pressured by Hera's hatred of Troy, Zeus arranges for the Trojan Pandaros to break the truce by wounding Menelaus with an arrow. Agamemnon rouses the Achaeans, and battle is joined.
() In the fighting, Diomedes kills many Trojans, including Pandaros, and defeats Aeneas, whom Aphrodite rescues, but Diomedes attacks and wounds the goddess. Apollo faces Diomedes and warns him against warring with gods. Many heroes and commanders join in, including Hector, and the gods supporting each side try to influence the battle. Emboldened by Athena, Diomedes wounds Ares and puts him out of action.
() Hector rallies the Trojans and prevents a rout; the Achaean Diomedes and the Trojan Glaukos find common ground, and exchange unequal gifts, while Glaukos tells Diomedes the story of Bellerophon. Hector enters the city, urges prayers and sacrifices, incites Paris to battle, bids his wife Andromache and son Astyanax farewell on the city walls, and rejoins the battle.
() Hector duels with Ajax, but nightfall interrupts the fight, and both sides retire. The Achaeans agree to burn their dead, and build a wall to protect their ships and camp, while the Trojans quarrel about returning Helen. Paris offers to return the treasure he took and give further wealth as compensation, but not Helen, and the offer is refused. A day's truce is agreed for burning the dead, during which the Achaeans also build their wall and a trench.
() The next morning, Zeus prohibits the gods from interfering, and fighting begins anew. The Trojans prevail and force the Achaeans back to their wall, while Hera and Athena are forbidden to help. Night falls before the Trojans can assail the Achaean wall. They camp in the field to attack at first light, and their watchfires light the plain like stars.
() Meanwhile, the Achaeans are desperate. Agamemnon admits his error, and sends an embassy composed of Odysseus, Ajax, Phoenix, and two heralds to offer Briseis and extensive gifts to Achilles, who has been camped next to his ships throughout, if only he will return to the fighting. Achilles and his companion Patroclus receive the embassy well, but Achilles angrily refuses Agamemnon's offer and declares that he would only return to battle if the Trojans reached his ships and threatened them with fire. The embassy returns empty-handed.
() Later that night, Odysseus and Diomedes venture out to the Trojan lines, kill the Trojan Dolon, and wreak havoc in the camps of some Thracian allies of Troy's.
() In the morning, the fighting is fierce, and Agamemnon, Diomedes, and Odysseus are all wounded. Achilles sends Patroclus from his camp to inquire about the Achaean casualties, and while there Patroclus is moved to pity by a speech of Nestor's.
() The Trojans attack the Achaean wall on foot. Hector, ignoring an omen, leads the terrible fighting. The Achaeans are overwhelmed and routed, the wall's gate is broken, and Hector charges in.
() Poseidon takes pity on the Achaeans. Disobeying Zeus, he arrives at the battlefield and helps them. Idomeneus performs great feats of arms. Many fall on both sides. The Trojan seer Polydamas urges Hector to fall back and warns him about Achilles, but is ignored.
() Hera seduces Zeus and lures him to sleep, allowing Poseidon to help the Greeks, and the Trojans are driven back onto the plain.
() Zeus awakes and is enraged by Poseidon's intervention. Against the mounting discontent of the Achaean-supporting gods, Zeus sends Apollo to aid the Trojans, who once again breach the wall, and the battle reaches the ships.
() Patroclus cannot stand to watch any longer and begs Achilles to be allowed to defend the ships. Achilles relents and lends Patroclus his armor, but sends him off with a stern admonition not to pursue the Trojans, lest he take Achilles' glory. Patroclus leads the Myrmidons into battle and arrives as the Trojans set fire to the first ships. The Trojans are routed by the sudden onslaught, and Patroclus begins his assault by killing Zeus's son Sarpedon, a leading ally of the Trojans. Patroclus, ignoring Achilles' command, pursues and reaches the gates of Troy, where Apollo himself stops him. Patroclus is set upon by Apollo and Euphorbos, and is finally killed by Hector.
() Hector takes Achilles' armor from the fallen Patroclus, but fighting develops around Patroclus' body.
() Achilles is mad with grief when he hears of Patroclus' death and vows to take vengeance on Hector; his mother Thetis grieves, too, knowing that Achilles is fated to die young if he kills Hector. Achilles is urged to help retrieve Patroclus' body but has no armour. Bathed in a brilliant radiance by Athena, Achilles stands next to the Achaean wall and roars in rage. The Trojans are dismayed by his appearance, and the Achaeans manage to bear Patroclus' body away. Polydamas urges Hector again to withdraw into the city; again Hector refuses, and the Trojans camp on the plain at nightfall. Patroclus is mourned. Meanwhile, at Thetis' request, Hephaestus fashions a new set of armor for Achilles, including a magnificently wrought shield.
() In the morning, Agamemnon gives Achilles all the promised gifts, including Briseis, but Achilles is indifferent to them. Achilles fasts while the Achaeans take their meal, straps on his new armor, and takes up his great spear. His horse Xanthos prophesies to Achilles his death. Achilles drives his chariot into battle.
() Zeus lifts the ban on the gods' interference, and the gods freely help both sides. Achilles, burning with rage and grief, slays many.
() Driving the Trojans before him, Achilles cuts off half their number in the river Skamandros and proceeds to slaughter them, filling the river with the dead. The river, angry at the killing, confronts Achilles but is beaten back by Hephaestus' firestorm. The gods fight among themselves. The great gates of the city are opened to receive the fleeing Trojans, and Apollo leads Achilles away from the city by pretending to be a Trojan.
() When Apollo reveals himself to Achilles, the Trojans have retreated into the city, all except for Hector, who, having twice ignored the counsels of Polydamas, feels the shame of the rout and resolves to face Achilles, despite the pleas of his parents, Priam and Hecuba. When Achilles approaches, Hector's will fails him, and he is chased around the city by Achilles. Finally, Athena tricks him into stopping, and he turns to face his opponent. After a brief duel, Achilles stabs Hector through the neck. Before dying, Hector reminds Achilles that he, too, is fated to die in the war. Achilles takes Hector's body and dishonours it by dragging it behind his chariot.
() The ghost of Patroclus comes to Achilles in a dream, urging him to carry out his burial rites and to arrange for their bones to be entombed together. The Achaeans hold a day of funeral games, and Achilles gives out the prizes.
() Dismayed by Achilles' continued abuse of Hector's body, Zeus decides that it must be returned to Priam. Led by Hermes, Priam takes a wagon out of Troy, across the plains, and into the Achaean camp unnoticed. He clasps Achilles by the knees and begs for his son's body. Achilles is moved to tears, and the two lament their losses in the war. After a meal, Priam carries Hector's body back into Troy. Hector is buried, and the city mourns.
Major characters
The many characters of the Iliad are catalogued; the latter half of Book II, the "Catalogue of Ships", lists commanders and cohorts; battle scenes feature quickly slain minor characters.
Achaeans
The Achaeans, Danaans or Argives
Agamemnon – King of Mycenae, leader of the Achaeans.
Menelaus – King of Sparta, husband of Helen and brother of Agamemnon.
Achilles – Leader of the Myrmidons and King of Phthia, son of Peleus and divine Thetis, the foremost warrior.
Odysseus – King of Ithaca, Greek commander, the smartest warrior.
Nestor – King of Pylos and trusted advisor to Agamemnon, the wisest warrior.
Ajax the Great – King of Salamis, son of Telamon.
Diomedes – King of Argos, son of Tydeus.
Ajax the Lesser – Commander of the Locrians, son of Oileus.
Idomeneus – Commander of the Cretans.
Patroclus – Achilles' closest companion.
Neoptolemus – Leader of the Myrmidons after Achilles' death, killer of Priam.
Achilles and Patroclus
Much debate has surrounded the nature of the relationship of Achilles and Patroclus, as to whether it can be described as a homoerotic one or not. Some Classical and Hellenistic Athenian scholars perceived it as pederastic, while others perceived it as a platonic warrior-bond.
Trojans
The Trojan men
Dardanos – First king of Troy, and he originally named the city Dardania.
Hector – Prince of Troy, son of King Priam, and the foremost Trojan warrior.
Aeneas – son of Anchises and Aphrodite.
Deiphobus – brother of Hector and Paris.
Helenus – Troy's chief augur and brother of Hector.
Paris – Prince of Troy, son of King Priam, and Helen's lover/abductor.
Priam – the aged King of Troy.
Polydamas – a prudent commander whose advice is ignored; he is Hector's foil.
Agenor – son of Antenor, a Trojan warrior who attempts to fight Achilles (Book XXI).
Sarpedon, son of Zeus – killed by Patroclus. He was a friend of Glaucus and co-leader of the Lycians (who fought for the Trojans).
Glaucus, son of Hippolochus – friend of Sarpedon and co-leader of the Lycians (fought for the Trojans).
Euphorbus – first Trojan warrior to wound Patroclus.
Dolon – a spy upon the Greek camp (Book X).
Antenor – King Priam's advisor, who argues for returning Helen to end the war.
Polydorus – the youngest son of Priam and Hecuba.
Pandarus – famous archer and son of Lycaon.
The Trojan women
Hecuba – Priam's wife; mother of Hector, Cassandra, Paris, and others.
Helen – daughter of Zeus; Menelaus's wife; espoused first to Paris, then to Deiphobus; her being taken by Paris back to Troy precipitated the war.
Andromache – Princess of Troy, Hector's wife, mother of Astyanax.
Cassandra – Priam's daughter.
Briseis – a Trojan woman captured by Achilles from a previous siege, over whom Achilles's quarrel with Agamemnon began.
The gods
In the literary Trojan War of the Iliad, the Olympian gods, goddesses, and minor deities fight among themselves and participate in human warfare, often by interfering with humans to counter other gods. Unlike their portrayals in Greek religion, Homer's portrayal of gods suited his narrative purpose. The gods in the traditional thought of fourth-century BC Athenians were not spoken of in terms familiar from the works of Homer. The Classical-era historian Herodotus says that Homer and Hesiod, his contemporary, were the first writers to name and describe the gods' appearance and character.
Mary Lefkowitz (2003) discusses the relevance of divine action in the Iliad, attempting to answer the question of whether or not divine intervention is a discrete occurrence (for its own sake), or if such godly behaviors are mere human character metaphors. The intellectual interest of Classic-era authors, such as Thucydides and Plato, was limited to their utility as "a way of talking about human life rather than a description or a truth", because, if the gods remain religious figures, rather than human metaphors, their "existence"—without the foundation of either dogma or a bible of faiths—then allowed Greek culture the intellectual breadth and freedom to conjure gods fitting any religious function they required as a people.
The religion had no founder and was not the creation of an inspired teacher, unlike the popular origins of many other religions of the world. Individuals were free to believe what they wanted, as the Greek religion was created out of a consensus of the people. These beliefs coincide with the thoughts about the gods in polytheistic Greek religion. Adkins and Pollard (2020/1998) agree with this, saying, "the early Greeks personalized every aspect of their world, natural and cultural, and their experiences in it. The earth, the sea, the mountains, the rivers, custom-law (themis), and one's share in society and its goods were all seen in personal as well as naturalistic terms."
As a result of this thinking, each god or goddess in Polytheistic Greek religion is attributed to an aspect of the human world. For example, Poseidon is the god of the sea, Aphrodite is the goddess of beauty, Ares is the god of war, and so on and so forth for many other gods. This is how Greek culture was defined as many Athenians felt the presence of their gods through divine intervention in significant events in their lives. Oftentimes they found these events to be mysterious and inexplicable.
Psychologist Julian Jaynes (1976) uses the Iliad as a major piece of evidence for his theory of the Bicameral Mind, which posits that until about the time described in the Iliad, humans had a far different mentality from present-day humans. He says that humans during that time were lacking what is today called consciousness. He suggests that humans heard and obeyed commands from what they identified as gods, until the change in human mentality that incorporated the motivating force into the conscious self. He points out that almost every action in the Iliad is directed, caused, or influenced by a god, and that earlier translations show an astonishing lack of words suggesting thought, planning, or introspection. Those that do appear, he argues, are misinterpretations made by translators imposing a modern mentality on the characters.
Divine intervention
Some scholars believe that the gods may have intervened in the mortal world because of quarrels they may have had among each other. Homer interprets the world at this time by using the passion and emotion of the gods to be determining factors of what happens on the human level. An example of one of these relationships in the Iliad occurs between Athena, Hera, and Aphrodite. In the final book of the poem Homer writes, "He offended Athena and Hera—both goddesses." Athena and Hera are envious of Aphrodite because of a beauty pageant on Mount Olympus in which Paris chose Aphrodite to be the most beautiful goddess over both Hera and Athena. Wolfgang Kullmann further goes on to say, "Hera's and Athena's disappointment over the victory of Aphrodite in the Judgement of Paris determines the whole conduct of both goddesses in The Iliad and is the cause of their hatred for Paris, the Judge, and his town Troy."
Hera and Athena then continue to support the Achaean forces throughout the poem because Paris is part of the Trojans, while Aphrodite aids Paris and the Trojans. The emotions between the goddesses often translate to actions they take in the mortal world. For example, in Book 3 of the Iliad, Paris challenges any of the Achaeans to a single combat and Menelaus steps forward. Menelaus was dominating the battle and was on the verge of killing Paris. "Now he'd have hauled him off and won undying glory but Aphrodite, Zeus's daughter, was quick to the mark, snapped the rawhide strap." Aphrodite intervened out of her own self-interest to save Paris from the wrath of Menelaus because Paris had helped her to win the beauty pageant. The partisanship of Aphrodite towards Paris induces constant intervention by all of the gods, especially to give motivational speeches to their respective proteges, while often appearing in the shape of a human being they are familiar with. This connection of emotions to actions is just one example out of many that occur throughout the poem.
The major deities:
Zeus (Neutral)
Hera (Achaeans)
Artemis (Trojans)
Apollo (Trojans)
Hades (Neutral)
Aphrodite (Trojans)
Ares (Trojans, then Achaeans)
Athena (Achaeans)
Hermes (Neutral/Achaeans)
Poseidon (Achaeans)
Hephaestus (Achaeans)
The minor deities:
Eris (Trojans)
Iris (Neutral)
Thetis (Achaeans)
Leto (Trojans)
Proteus (Achaeans)
Scamander (Trojans)
Phobos (Trojans)
Deimos (Trojans)
Hypnos (Achaeans)
Themes
Fate
Fate propels most of the events of the Iliad. Once set, gods and men abide it, neither truly able nor willing to contest it. How fate is set is unknown, but it is told by the Fates and by Zeus through sending omens to seers such as Calchas. Men and their gods continually speak of heroic acceptance and cowardly avoidance of one's slated fate. Fate does not determine every action, incident, and occurrence, but it does determine the outcome of life—before killing him, Hector calls Patroclus a fool. Patroclus retorts:
Here, Patroclus alludes to fated death by Hector's hand, and Hector's fated death by Achilles's hand. Each accepts the outcome of his life, yet, no one knows if the gods can alter fate. The first instance of this doubt occurs in Book XVI. Seeing Patroclus about to kill Sarpedon, his mortal son, Zeus says:
About his dilemma, Hera asks Zeus:
In deciding between losing a son or abiding fate, Zeus, King of the Gods, allows it. This motif recurs when he considers sparing Hector, whom he loves and respects. This time, it is Athene who challenges him:
Again, Zeus appears capable of altering fate, but does not, deciding instead to abide set outcomes; similarly, fate spares Aeneas, after Apollo convinces the over-matched Trojan to fight Achilles. Poseidon cautiously speaks:
Divinely aided, Aeneas escapes the wrath of Achilles and survives the Trojan War. Whether or not the gods can alter fate, they do abide it, despite its countering their human allegiances; thus, the mysterious origin of fate is a power beyond the gods. Fate implies the primeval, tripartite division of the world that Zeus, Poseidon, and Hades effected in deposing their father, Cronus, for its dominion. Zeus took the Air and the Sky, Poseidon the Waters, and Hades the Underworld, the land of the dead—yet they share dominion of the Earth. Despite the earthly powers of the Olympic gods, only the Three Fates set the destiny of Man.
Kleos ("glory, fame") is the concept of glory earned in heroic battle. Yet, Achilles must choose only one of the two rewards, either nostos or kleos. In Book IX (IX.410–16), he poignantly tells Agamemnon's envoys—Odysseus, Phoenix, Ajax—begging his reinstatement to battle, about having to choose between two fates (9.411).
The passage reads:
In forgoing his nostos, he will earn the greater reward of kleos aphthiton ("fame imperishable"). In the poem, aphthiton ("imperishable") occurs five other times, each occurrence denoting an object: Agamemnon's sceptre, the wheel of Hebe's chariot, the house of Poseidon, the throne of Zeus, the house of Hephaestus. Translator Lattimore renders kleos aphthiton as 'forever immortal' and as 'forever imperishable'—connoting Achilles's mortality by underscoring his greater reward in returning to battle Troy.
Kleos is often given visible representation by the prizes won in battle. When Agamemnon takes Briseis from Achilles, he takes away a portion of the kleos he had earned.
Achilles' shield, crafted by Hephaestus and given to him by his mother Thetis, bears an image of stars in the centre. The stars conjure profound images of the place of a single man, no matter how heroic, in the perspective of the entire cosmos.
Nostos ("homecoming") occurs seven times in the poem, making it a minor theme in the Iliad itself. Yet the concept of homecoming is much explored in other Ancient Greek literature, especially in the post-war homeward fortunes experienced by the Atreidae (Agamemnon and Menelaus) and Odysseus (see the Odyssey).
Pride
Pride drives the plot of the Iliad. The Achaeans gather on the plain of Troy to wrest Helen from the Trojans. Though the majority of the Trojans would gladly return Helen to the Achaeans, they defer to the pride of their prince, Alexandros, also known as Paris. Within this frame, Homer's work begins. At the start of the Iliad, Agamemnon's pride sets forth a chain of events that leads him to take from Achilles, Briseis, the girl that he had originally given Achilles in return for his martial prowess. Due to this slight, Achilles refuses to fight and asks his mother, Thetis, to make sure that Zeus causes the Achaeans to suffer on the battlefield until Agamemnon comes to realize the harm he has done to Achilles.
Achilles' pride allows him to beg Thetis for the deaths of his Achaean friends. When in Book 9 his friends urge him to return, offering him loot and his girl, Briseis, he refuses, stuck in his vengeful pride. Achilles remains stuck until the very end, when his anger at himself for Patroclus' death overcomes his pride at Agamemnon's slight and he returns to kill Hector. He overcomes his pride again when he keeps his anger in check and returns Hector to Priam at epic's close. From epic start to epic finish, pride drives the plot.
Akin to kleos is timē ("respect, honor"), the concept denoting the respectability an honorable man accrues with accomplishment (cultural, political, martial), per his station in life. In Book I, the Achaean troubles begin with King Agamemnon's dishonorable, unkingly behavior—first, by threatening the priest Chryses (1.11), then, by aggravating the Achaeans in disrespecting Achilles, by confiscating Briseis from him (1.171). The warrior's consequent rancor against the dishonorable king ruins the Achaean military cause.
Hybris (hubris)
Hybris plays a part similar to menis. The epic takes as its thesis the anger of Achilles and the destruction it brings. Anger disturbs the distance between human beings and the gods. Uncontrolled anger destroys orderly social relationships and upsets the balance of correct actions necessary to keep the gods away from human beings. Despite the epic's focus on Achilles' rage, hybris also plays a prominent role, serving as both kindling and fuel for many destructive events.
Agamemnon refuses to ransom Chryseis out of hybris and harms Achilles' pride when he demands Briseis. Hubris forces Paris to fight against Menelaus. Agamemnon spurs the Achaeans to fight by calling into question Odysseus, Diomedes, and Nestor's pride, asking why they were cowering and waiting for help when they should be the ones leading the charge. While the events of the Iliad focus on Achilles' rage and the destruction it brings, hybris fuels and stokes them both.
The poem's initial word, mēnin (the accusative of mēnis: "wrath," "rage," "fury"), establishes the Iliad's principal theme: the "Wrath of Achilles". His personal rage and wounded soldier's pride propel the story: the Achaeans' faltering in battle, the slayings of Patroclus and Hector, and the fall of Troy. In Book I, the Wrath of Achilles first emerges in the Achilles-convoked meeting between the Greek kings and the seer Calchas. King Agamemnon dishonours Chryses, the Trojan priest of Apollo, by refusing with a threat the restitution of his daughter, Chryseis—despite the proffered ransom of "gifts beyond count." The insulted priest prays to Apollo for help, and a nine-day rain of divine plague arrows falls upon the Achaeans. Moreover, in that meeting, Achilles accuses Agamemnon of being "greediest for gain of all men." To that, Agamemnon replies:
After that, only Athena stays Achilles's wrath. He vows to never again obey orders from Agamemnon. Furious, Achilles cries to his mother, Thetis, who persuades Zeus's divine intervention—favouring the Trojans—until Achilles's rights are restored. Meanwhile, Hector leads the Trojans to almost pushing the Achaeans back to the sea (Book XII). Later, Agamemnon contemplates defeat and retreat to Greece (Book XIV). Again, the Wrath of Achilles turns the war's tide in seeking vengeance when Hector kills Patroclus. Aggrieved, Achilles tears his hair and dirties his face. Thetis comforts her mourning son, who tells her:
Accepting the prospect of death as fair price for avenging Patroclus, he returns to battle, dooming Hector and Troy, thrice chasing him around the Trojan walls, before slaying him, then dragging the corpse behind his chariot, back to camp.
Date and textual history
The poem dates to the archaic period of Classical Antiquity. Scholarly consensus mostly places it in the 8th century BC, although some favour a 7th-century date. In any case, the terminus ante quem for the dating of the Iliad is 630 BC, as evidenced by its reflection in art and literature.
Herodotus, having consulted the Oracle at Dodona, placed Homer and Hesiod at approximately 400 years before his own time, which would place them at around 850 BC.
The historical backdrop of the poem is the time of the Late Bronze Age collapse, in the early 12th century BC. Homer is thus separated from his subject matter by about 400 years, the period known as the Greek Dark Ages. Intense scholarly debate has surrounded the question of which portions of the poem preserve genuine traditions from the Mycenaean period. The Catalogue of Ships in particular has the striking feature that its geography does not portray Greece in the Iron Age, the time of Homer, but as it was before the Dorian invasion.
The title (; gen. ) is an ellipsis of , meaning "the Trojan poem". , is the specifically feminine adjective form from . The masculine adjective form would be or . It is used by Herodotus.
Venetus A, copied in the 10th century AD, is the oldest fully extant manuscript of the Iliad.
The first edition of the "Iliad", , edited by Demetrius Chalcondyles and published by Bernardus Nerlius, and Demetrius Damilas in Florence in 1489.
As oral tradition
In antiquity, the Greeks applied the Iliad and the Odyssey as the bases of pedagogy. Literature was central to the educational-cultural function of the itinerant rhapsode, who composed consistent epic poems from memory and improvisation, and disseminated them, via song and chant, in his travels and at the Panathenaic Festival of athletics, music, poetics, and sacrifice, celebrating Athena's birthday.
Originally, Classical scholars treated the Iliad and the Odyssey as written poetry, and Homer as a writer. Yet, by the 1920s, Milman Parry (1902–1935) had launched a movement claiming otherwise. His investigation of the oral Homeric style—"stock epithets" and "reiteration" (words, phrases, stanzas)—established that these formulae were artifacts of oral tradition easily applied to a hexametric line. A two-word stock epithet (e.g. "resourceful Odysseus") reiteration may complement a character name by filling a half-line, thus, freeing the poet to compose a half-line of "original" formulaic text to complete his meaning. In Yugoslavia, Parry and his assistant, Albert Lord (1912–1991), studied the oral-formulaic composition of Serbian oral poetry, yielding the Parry/Lord thesis that established oral tradition studies, later developed by Eric Havelock, Marshall McLuhan, Walter Ong, and Gregory Nagy.
In The Singer of Tales (1960), Lord presents likenesses between the tragedies of the Achaean Patroclus, in the Iliad, and of the Sumerian Enkidu, in the Epic of Gilgamesh, and claims to refute, with "careful analysis of the repetition of thematic patterns", that the Patroclus storyline upsets Homer's established compositional formulae of "wrath, bride-stealing, and rescue"; thus, stock-phrase reiteration does not restrict his originality in fitting story to rhyme. Likewise, James Armstrong (1958) reports that the poem's formulae yield richer meaning because the "arming motif" diction—describing Achilles, Agamemnon, Paris, and Patroclus—serves to "heighten the importance of…an impressive moment," thus, "[reiteration] creates an atmosphere of smoothness," wherein, Homer distinguishes Patroclus from Achilles, and foreshadows the former's death with positive and negative turns of phrase.
In the Iliad, occasional syntactic inconsistency may be an oral tradition effect—for example, Aphrodite is "laughter-loving", despite being painfully wounded by Diomedes (Book V, 375); and the divine representations may mix Mycenaean and Greek Dark Age mythologies, paralleling the hereditary nobles (lower social rank rulers) with minor deities, such as Scamander and others.
Depiction of warfare
Depiction of infantry combat
Despite Mycenae and Troy being maritime powers, the Iliad features no sea battles. The Trojan shipwright (of the ship that transported Helen to Troy), Phereclus, instead fights afoot, as an infantryman. The battle dress and armour of hero and soldier are well described. They enter battle in chariots, launching javelins into the enemy formations, then dismount for hand-to-hand combat with yet more javelin throwing, rock throwing, and, if necessary, fighting with sword and the shoulder-borne shield. Ajax the Greater, son of Telamon, sports a large, rectangular shield with which he protects himself and Teucer, his brother:
Ajax's cumbersome shield is more suitable for defence than for offence, while his cousin, Achilles, sports a large, rounded, octagonal shield that he successfully deploys along with his spear against the Trojans:
In describing infantry combat, Homer names the phalanx formation, but most scholars do not believe the historical Trojan War was so fought. In the Bronze Age, the chariot was the main battle transport-weapon (e.g. at the Battle of Kadesh). The available evidence, from the Dendra armour and the Pylos Palace paintings, indicates that the Mycenaeans used two-man chariots with a long-spear-armed principal rider, unlike the three-man Hittite chariots with short-spear-armed riders, and unlike the arrow-armed Egyptian and Assyrian two-man chariots. Nestor spearheads his troops with chariots; he advises them:
Although Homer's depictions are graphic, it can be seen in the very end that victory in war is a far more somber occasion, where all that is lost becomes apparent. On the other hand, the funeral games are lively, for the dead man's life is celebrated. This overall depiction of war runs contrary to many other ancient Greek depictions, where war is an aspiration for greater glory.
Modern reconstructions of armor, weapons and styles
Few modern (archeologically, historically and Homerically accurate) reconstructions of arms, armor and motifs as described by Homer exist. Some historical reconstructions have been done by Salimbeti et al.
Influence on classical Greek warfare
While the Homeric poems (particularly, the Iliad) were not necessarily revered scripture of the ancient Greeks, they were most certainly seen as guides that were important to the intellectual understanding of any educated Greek citizen. This is evidenced by the fact that in the late 5th century BC, "it was the sign of a man of standing to be able to recite the Iliad and Odyssey by heart." Moreover, it can be argued that the warfare shown in the Iliad, and the way in which it was depicted, had a profound and very traceable effect on Greek warfare in general. In particular, the effect of epic literature can be broken down into three categories: tactics, ideology, and the mindset of commanders. In order to discern these effects, it is necessary to take a look at a few examples from each of these categories.
Much of the detailed fighting in the Iliad is done by the heroes in an orderly, one-on-one fashion. Much like the Odyssey, there is even a set ritual which must be observed in each of these conflicts. For example, a major hero may encounter a lesser hero from the opposing side, in which case the minor hero is introduced, threats may be exchanged, and then the minor hero is slain. The victor often strips the body of its armor and military accoutrements. Here is an example of this ritual and this type of one-on-one combat in the Iliad:
The biggest issue in reconciling the connection between the epic fighting of the Iliad and later Greek warfare is the phalanx, or hoplite, warfare seen in Greek history well after Homer's Iliad. While there are discussions of soldiers arrayed in semblances of the phalanx throughout the Iliad, the focus of the poem on the heroic fighting, as mentioned above, would seem to contradict the tactics of the phalanx. However, the phalanx did have its heroic aspects. The masculine one-on-one fighting of epic is manifested in phalanx fighting in the emphasis on holding one's position in formation. This replaces the singular heroic competition found in the Iliad.
One example of this is the Spartan tale of 300 picked men fighting against 300 picked Argives. In this battle of champions, only two men are left standing for the Argives and one for the Spartans. Othryades, the remaining Spartan, goes back to stand in his formation with mortal wounds while the remaining two Argives go back to Argos to report their victory. Thus, the Spartans claimed this as a victory, as their last man displayed the ultimate feat of bravery by maintaining his position in the phalanx.
In terms of the ideology of commanders in later Greek history, the Iliad has an interesting effect. The Iliad expresses a definite disdain for tactical trickery, when Hector says, before he challenges the great Ajax:
However, despite examples of disdain for this tactical trickery, there is reason to believe that the Iliad, as well as later Greek warfare, endorsed tactical genius on the part of their commanders. For example, there are multiple passages in the Iliad with commanders such as Agamemnon or Nestor discussing the arraying of troops so as to gain an advantage. Indeed, the Trojan War is won by a notorious example of Achaean guile in the Trojan Horse. This is even later referred to by Homer in the Odyssey. The connection, in this case, between guileful tactics of the Achaeans and the Trojans in the Iliad and those of the later Greeks is not a difficult one to find. Spartan commanders, often seen as the pinnacle of Greek military prowess, were known for their tactical trickery, and, for them, this was a feat to be desired in a commander. Indeed, this type of leadership was the standard advice of Greek tactical writers.
Ultimately, while Homeric (or epic) fighting is certainly not completely replicated in later Greek warfare, many of its ideals, tactics, and instruction are.
Hans van Wees argues that the period that the descriptions of warfare relate can be pinned down fairly specifically—to the first half of the 7th century BC.
Influence on arts and pop culture
The Iliad was a standard work of great importance already in Classical Greece and remained so throughout the Hellenistic and Byzantine periods. Subjects from the Trojan War were a favourite among ancient Greek dramatists. Aeschylus' trilogy, the Oresteia, comprising Agamemnon, The Libation Bearers, and The Eumenides, follows the story of Agamemnon after his return from the war. Homer also came to be of great influence in European culture with the resurgence of interest in Greek antiquity during the Renaissance, and it remains the first and most influential work of the Western canon. In its full form the text made its return to Italy and Western Europe beginning in the 15th century, primarily through translations into Latin and the vernacular languages.
Prior to this reintroduction, however, a shortened Latin version of the poem, known as the Ilias Latina, was very widely studied and read as a basic school text. The West tended to view Homer as unreliable, as they believed they possessed much more down-to-earth and realistic eyewitness accounts of the Trojan War written by Dares and Dictys Cretensis, who were supposedly present at the events. These late antique forged accounts formed the basis of several eminently popular medieval chivalric romances, most notably those of Benoît de Sainte-Maure and Guido delle Colonne.
These in turn spawned many others in various European languages, such as the first printed English book, the 1473 Recuyell of the Historyes of Troye. Other accounts read in the Middle Ages were antique Latin retellings such as the and works in the vernaculars such as the Icelandic Troy Saga. Even without Homer, the Trojan War story had remained central to Western European medieval literary culture and its sense of identity. Most nations and several royal houses traced their origins to heroes at the Trojan War; Britain was supposedly settled by the Trojan Brutus, for instance.
William Shakespeare used the plot of the Iliad as source material for his play Troilus and Cressida, but focused on a medieval legend, the love story of Troilus, son of King Priam of Troy, and Cressida, daughter of the Trojan soothsayer Calchas. The play, often considered to be a comedy, reverses traditional views on events of the Trojan War and depicts Achilles as a coward, Ajax as a dull, unthinking mercenary, etc.
William Theed the elder made an impressive bronze statue of Thetis as she brought Achilles his new armor forged by Hephaestus. It has been on display in the Metropolitan Museum of Art in New York City since 2013.
Robert Browning's poem Development discusses his childhood introduction to the matter of the Iliad and his delight in the epic, as well as contemporary debates about its authorship.
According to Suleyman al-Boustani, a 19th-century poet who made the first translation of the Iliad into Arabic, the epic may have been widely circulated in Syriac and Pahlavi translations during the early Middle Ages. Al-Boustani credits Theophilus of Edessa with the Syriac translation, which was supposedly (along with the Greek original) widely read or heard by the scholars of Baghdad in the prime of the Abbasid Caliphate, although those scholars never took the effort to translate it into the official language of the empire, Arabic. The Iliad was also the first full epic poem to be translated into Arabic from a foreign language, upon the publication of Al-Boustani's complete work in 1904.
20th-century arts
Simone Weil wrote the essay "The Iliad or the Poem of Force" in 1939, shortly after the commencement of World War II. The essay describes how the Iliad demonstrates the way force, exercised to the extreme in war, reduces both victim and aggressor to the level of the slave and the unthinking automaton.
Lesya Ukrainka wrote a dramatic poem "Cassandra" in 1901-1907 based on the Iliad. It describes the story of Kassandra, a prophetess.
The 1954 Broadway musical The Golden Apple, by librettist John Treville Latouche and composer Jerome Moross, was freely adapted from the Iliad and the Odyssey, re-setting the action to America's Washington state in the years after the Spanish–American War, with events inspired by the Iliad in Act One and events inspired by the Odyssey in Act Two.
The opera King Priam by Sir Michael Tippett (which received its premiere in 1962) is based loosely on the Iliad.
Christopher Logue's poem War Music, an "account", not a translation, of the Iliad, was begun in 1959 as a commission for radio. He continued working on it until his death in 2011. Described by Tom Holland as "one of the most remarkable works of post-war literature", it has been an influence on Kae Tempest and Alice Oswald, who says that it "unleashes a forgotten kind of theatrical energy into the world."
Christa Wolf's novel Cassandra (1983) is a critical engagement with the Iliad. Wolf's narrator is Cassandra, whose thoughts are heard at the moment just before her murder by Clytemnestra in Sparta. Wolf's narrator presents a feminist's view of the war, and of war in general. Cassandra's story is accompanied by four essays which Wolf delivered as the Frankfurter Poetik-Vorlesungen. The essays present Wolf's concerns as a writer and rewriter of this canonical story and show the genesis of the novel through Wolf's own readings and in a trip she took to Greece.
David Melnick's Men in Aida (1983) is a postmodern homophonic translation of Book One into a farcical bathhouse scenario, preserving the sounds but not the meaning of the original.
Marion Zimmer Bradley's 1987 novel The Firebrand retells the story from the point of view of Kassandra, a princess of Troy and a prophetess who is cursed by Apollo.
Contemporary popular culture
Eric Shanower's Image Comics series Age of Bronze, which began in 1998, retells the legend of the Trojan War.
Dan Simmons' epic science fiction adaptation/tribute Ilium was released in 2003, receiving a Locus Award for best science fiction novel of 2003.
Troy (2004), a loose film adaptation of the Iliad, received mixed reviews but was a commercial success, particularly in international sales. It grossed $133 million in the United States and $497 million worldwide, making it the 188th top-grossing movie of all time.
Madeline Miller's 2011 debut novel The Song of Achilles tells the story of Achilles' and Patroclus' life together as children, lovers, and soldiers. The novel, which won the 2012 Women's Prize for Fiction, draws on the Iliad as well as the works of other classical authors such as Statius, Ovid, and Virgil.
Alice Oswald's sixth collection, Memorial (2011), is based on but departs from the narrative form of the Iliad to focus on, and so commemorate, the individually-named characters whose deaths are mentioned in that poem. Later in October 2011, Memorial was shortlisted for the T. S. Eliot Prize, but in December 2011, Oswald withdrew the book from the shortlist, citing concerns about the ethics of the prize's sponsors.
The Rage of Achilles, by American author and Yale Writers' Conference founder Terence Hawkins, recounts the Iliad as a novel in modern, sometimes graphic language. Informed by Julian Jaynes' theory of the bicameral mind and the historicity of the Trojan War, it depicts its characters as real men to whom the gods appear only as hallucinations or command voices during the sudden and painful transition to truly modern consciousness.
English translations
George Chapman published his translation of the Iliad in installments, beginning in 1598. It was written in "fourteeners", a long-line ballad metre that "has room for all of Homer's figures of speech and plenty of new ones, as well as explanations in parentheses. At its best, as in Achilles' rejection of the embassy in Iliad Nine; it has great rhetorical power." It quickly established itself as a classic in English poetry. In the preface to his own translation, Pope praises "the daring fiery spirit" of Chapman's rendering, which is "something like what one might imagine Homer, himself, would have writ before he arrived at years of discretion."
John Keats praised Chapman in the sonnet On First Looking into Chapman's Homer (1816). John Ogilby's mid-17th-century translation is among the early annotated editions; Alexander Pope's 1715 translation, in heroic couplets, is "The classic translation that was built on all the preceding versions," and, like Chapman's, it is a major poetic work in its own right. William Cowper's Miltonic, blank verse 1791 edition is highly regarded for its greater fidelity to the Greek than either the Chapman or the Pope versions: "I have omitted nothing; I have invented nothing," Cowper says in prefacing his translation.
In the lectures On Translating Homer (1861), Matthew Arnold addresses the matters of translation and interpretation in rendering the Iliad to English; commenting upon the versions contemporarily available in 1861, he identifies the four essential poetic qualities of Homer to which the translator must do justice:
[i] that he is eminently rapid; [ii] that he is eminently plain and direct, both in the evolution of his thought and in the expression of it, that is, both in his syntax and in his words; [iii] that he is eminently plain and direct in the substance of his thought, that is, in his matter and ideas; and, finally, [iv] that he is eminently noble.
After a discussion of the metres employed by previous translators, Arnold argues for a poetical dialect hexameter translation of the Iliad, like the original. "Laborious as this meter was, there were at least half a dozen attempts to translate the entire Iliad or Odyssey in hexameters; the last in 1945. Perhaps the most fluent of them was by J. Henry Dart [1862] in response to Arnold." In 1870, the American poet William Cullen Bryant published a blank verse version, that Van Wyck Brooks describes as "simple, faithful."
An 1898 translation by Samuel Butler was published by Longmans. Butler had read Classics at Cambridge University, graduating in 1859.
Since 1950, there have been several English translations. Richmond Lattimore's version (1951) is "a free six-beat" line-for-line rendering that explicitly eschews "poetical dialect" for "the plain English of today." It is literal, unlike older verse renderings. Robert Fitzgerald's version (Oxford World's Classics, 1974) strives to situate the Iliad in the musical forms of English poetry. His forceful version is freer, with shorter lines that increase the sense of swiftness and energy.
Robert Fagles (Penguin Classics, 1990) and Stanley Lombardo (1997) are bolder than Lattimore in adding dramatic significance to Homer's conventional and formulaic language. Rodney Merrill's translation (University of Michigan Press, 2007) not only renders the work in English verse like the dactylic hexameter of the original, but also conveys the oral-formulaic nature of the epic song, to which that musical meter gives full value. Barry B. Powell's translation (Oxford University Press, 2014) renders the Homeric Greek with a simplicity and dignity reminiscent of the original.
Peter Green translated the Iliad in 2015, a version published by the University of California Press.
Caroline Alexander published the first full-length English translation by a woman in 2015.
Manuscripts
There are more than 2000 manuscripts of Homer. Some of the most notable manuscripts include:
Rom. Bibl. Nat. gr. 6 + Matriti. Bibl. Nat. 4626 from 870–890 AD
Venetus A = Venetus Marc. 822 from the 10th century
Venetus B = Venetus Marc. 821 from the 11th century
Ambrosian Iliad
Papyrus Oxyrhynchus 20
Papyrus Oxyrhynchus 21
Codex Nitriensis (palimpsest)
See also
Mask of Agamemnon
Parallels between Virgil's Aeneid and Homer's Iliad and Odyssey
Heinrich Schliemann
References
Notes
Citations
Bibliography
Further reading
De Jong, Irene (2012). Iliad. Book XXII, Cambridge University Press.
Edwards, Mark W.; Janko, Richard; Kirk, G.S., The Iliad: A Commentary: Volume IV, Books 13–16, Cambridge University Press, 1992.
Edwards, Mark W.; Kirk, G.S., The Iliad: A Commentary: Volume V, Books 17–20, Cambridge University Press, 1991.
Graziosi, Barbara; Haubold, Johannes, Iliad: Book VI, Cambridge University Press, 2010.
Hainsworth, Bryan; Kirk, G.S., The Iliad: A Commentary: Volume III, Books 9–12, Cambridge University Press, 1993.
Kirk, G.S., The Iliad: A Commentary: Volume I, Books 1–4, Cambridge University Press, 1985.
Kirk, G.S., The Iliad: A Commentary: Volume II, Books 5–8, Cambridge University Press, 1990.
Murray, A.T.; Wyatt, William F., Homer: The Iliad, Books I–XII, Loeb Classical Library, Harvard University Press, 1999.
Richardson, Nicholas; Kirk, G.S., The Iliad: A Commentary: Volume VI, Books 21–24, Cambridge University Press, 1993.
West, Martin L., Studies in the text and transmission of the Iliad, München : K.G. Saur, 2001.
External links
Multiple translations of the Iliad at Project Gutenberg:
The Iliad of Homer, by George Chapman, at Project Gutenberg
The Iliad of Homer, by Alexander Pope, at Project Gutenberg
The Iliad of Homer, by William Cowper, at Project Gutenberg
The Iliad of Homer, by Theodore Alois Buckley, at Project Gutenberg
The Iliad of Homer, by Edward, Earl of Derby, at Project Gutenberg
The Iliad of Homer, by Andrew Lang, Walter Leaf and Ernest Myers, at Project Gutenberg
The Iliad of Homer, by Samuel Butler, at Project Gutenberg
Iliad: from the Perseus Project (PP), with the Murray and Butler translations and hyperlinks to mythological and grammatical commentary
Gods, Achaeans and Troyans. An interactive visualization of the Iliad's character flow and relations.
The Iliad: A Study Guide
Comments on background, plot, themes, authorship, and translation issues by 2008 translator Herbert Jordan.
Flaxman illustrations of the Iliad
The Iliad study guide, themes, quotes, teacher resources
Digital facsimile of the first printed publication (editio princeps) of the Iliad in Homeric Greek by Demetrios Chalkokondyles, Bayerische Staatsbibliothek
8th-century BC books
Ancient Greek religion
Poems adapted into films
Public domain books
Epic Cycle
Ancient Greek epic poems |
3133975 | https://en.wikipedia.org/wiki/Diskless%20Remote%20Boot%20in%20Linux | Diskless Remote Boot in Linux | DRBL (Diskless Remote Boot in Linux) is an NFS/NIS server providing a diskless or systemless environment for client machines.
It can be used for:
cloning machines with the built-in Clonezilla software,
network installation of Linux distributions such as Fedora, Debian, etc.,
booting machines via PXE (or similar means) into a small operating system (e.g., DSL, Puppy Linux, FreeDOS).
Providing a DRBL server
Installation on a machine running a supported Linux distribution via an installation script, or
booting from the Live CD.
Installation is possible on a machine with Debian, Ubuntu, Mandriva, Red Hat Linux, Fedora, CentOS or SuSE already installed. Unlike LTSP, it uses distributed hardware resources and makes it possible for clients to fully access local hardware, thus making it feasible to use server machines with less power. It also includes Clonezilla, a partitioning and disk cloning utility similar to Symantec Ghost.
DRBL is released under the terms of the GNU GPL, giving users the ability to customize it.
Features
DRBL excels in two main areas: disk cloning and diskless nodes.
Disk Cloning
Clonezilla (packaged with DRBL) uses Partimage to avoid copying free space, and gzip to compress hard disk images. The stored image can then be restored to multiple machines simultaneously using multicast packets, thus greatly reducing the time it takes to image large numbers of computers. The DRBL Live CD allows all of this to be done without installing anything on any of the machines, by simply booting one machine (the server) from the CD and PXE-booting the rest of the machines.
Diskless node
A diskless node is a way to make use of old hardware. Using old hardware as thin clients is another option, but thin clients have some disadvantages that a diskless node can make up for.
Streaming audio/video - A terminal server must decompress, recompress, and send video over the network to the client. A diskless node does all decompression locally, and can make use of any graphics hardware capabilities on the local machine.
Software that requires real-time input - Since all input at a thin client is sent over the network before it is registered by the operating system, there can be substantial delay. This is a major problem in software that requires real-time input (i.e. video games). Diskless nodes run the software locally, and as such, do not have this problem.
DRBL allows one to set up multiple diskless nodes with relative ease.
How it works
The client computer is set to boot from the network card using PXE or Etherboot. The client requests an IP address and a TFTP boot image, both of which are provided by the DRBL server. The client boots the initial RAM disk provided by the DRBL server via TFTP, and proceeds to mount an NFS share (also provided by the DRBL server) as its root (/) partition. From there, the client boots either the Linux distribution on which the DRBL server is installed, Clonezilla, or an installer for various Linux distributions, depending on how that particular client was configured on the DRBL server.
All system resources reside on the local machine except storage, which resides on the DRBL server.
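The per-client boot choice described above can be sketched in code. The short Python example below generates a PXELINUX-style per-client configuration whose kernel parameters mount an NFS export as the root filesystem; the pxelinux.cfg/01-<mac> naming convention and the root=/dev/nfs, nfsroot= and ip=dhcp parameters are standard netboot conventions, but the labels, paths and the pxe_config function itself are illustrative assumptions, not the files DRBL actually generates.

```python
# Illustrative sketch only: DRBL writes its own configuration files, with
# different names and options; this just shows the shape of a netboot entry.

BOOT_TARGETS = {
    "diskless": ("KERNEL vmlinuz\n  APPEND initrd=initrd.img root=/dev/nfs "
                 "nfsroot={server}:/tftpboot/nodes/{node} ip=dhcp"),
    "clonezilla": ("KERNEL vmlinuz-clonezilla\n"
                   "  APPEND initrd=initrd-clonezilla.img boot=live union=overlay"),
}

def pxe_config(mac: str, server: str, role: str) -> tuple[str, str]:
    """Return (file name, contents) of a hypothetical per-client PXELINUX config."""
    node = mac.lower().replace(":", "-")
    entry = BOOT_TARGETS[role].format(server=server, node=node)
    contents = "DEFAULT {role}\nLABEL {role}\n  {entry}\n".format(role=role, entry=entry)
    return "pxelinux.cfg/01-" + node, contents

name, text = pxe_config("AA:BB:CC:00:11:22", "192.168.100.254", "diskless")
print(name)   # pxelinux.cfg/01-aa-bb-cc-00-11-22
print(text)   # a boot entry that mounts the server's NFS export as the root (/) partition
```

Running the example prints a configuration file name derived from the client's MAC address and a boot entry whose kernel parameters point the root partition at the server's NFS export, which is the decision DRBL makes per client.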
Keys to a successful diskless node environment with DRBL
The main bottleneck in a DRBL installation is between the storage on the DRBL server and the client workstations. Fast storage on the server (such as RAID) and a fast network (Gigabit Ethernet) are ideal in this type of environment.
External resources
DRBL
Clonezilla
References
Booting
Embedded Linux
Linux software
Operating system distributions bootable from read-only media |
78575 | https://en.wikipedia.org/wiki/Antianeira | Antianeira | Antianeira () was the name of a number of women in Greek mythology:
Antianeira, possibly mother of the Argonaut Idmon by the god Apollo.
Antianeira, mother of the Argonauts Eurytus and Echion.
Antianeira, an Amazon who succeeded Penthesilea as Queen of the Amazons. In some versions of the myth, she was killed during the Trojan War fighting for the latter.
Notes
References
Apollonius Rhodius, Argonautica, translated by Robert Cooper Seaton (1853–1915), Loeb Classical Library Volume 001. London, William Heinemann Ltd, 1912. Online version at the Topos Text Project.
Apollonius Rhodius, Argonautica. George W. Mooney. London. Longmans, Green. 1912. Greek text available at the Perseus Digital Library.
The Orphic Argonautica, translated by Jason Colavito, 2011. Online version at the Topos Text Project.
Tzetzes, John, Posthomerica, translated by Ana Untila.
Amazons (Greek mythology)
Women of the Trojan war
People of the Trojan War
Women in Greek mythology
Characters in Greek mythology |
7805253 | https://en.wikipedia.org/wiki/The%20Advisory%20Board%20Company | The Advisory Board Company | The Advisory Board Company was a consulting firm focusing on health care organizations and educational institutions. It began in 1979 in Washington, DC. Its educational business was spun off and the remaining company was acquired by Optum in 2017.
History
The company was founded by David G. Bradley in 1979 as the Research Council of Washington. Its original mission was to answer "any question for any company for any industry," but in 1983 the company began to specialize in research for the financial services industry and changed its name to The Advisory Board Company. By 1986, the company had launched its health care-focused strategic research division, including its first membership program, the Health Care Advisory Board. Across the next four years the firm grew to 150 employees, served more than 500 health care members, and published 15 major reports and 2,000 research briefs each year.
In 1993, the firm launched a strategic research membership for large companies, bringing on almost half of the Fortune 500 within 18 months. The firm expanded in 1994 to include its first clinically based program, the Cardiology Roundtable, later the Cardiovascular Roundtable.
In 1997, the company spun off its corporate membership group, forming The Corporate Executive Board (later CEB Inc.) as an independent company. It maintained its focus on the health care sector, working with more than 1,500 health care organizations. H*Works, a consulting business offering best practice implementation support, launched in 2000.
The Advisory Board Company filed an initial public offering in 2001, in which Bradley sold his ownership interest.
By 2002, the firm topped 2,100 memberships and 500 employees, and launched The Advisory Board Academies, a leadership development division (later the company's Talent Development division), to address leadership in health care.
In 2003 the Advisory Board Company launched Compass, offering business intelligence and analytics.
In 2007, the company launched its first membership programs in higher education, working with student and academic affairs executives at several U.S. research universities. It formed an education division, the Education Advisory Board.
In 2008 it acquired Crimson, a data, analytics, and business intelligence software provider focused on physician performance, quality metrics, and cost of care outcomes. By 2009, The Advisory Board Company expanded to San Francisco, its fourth U.S. office. The firm grew to over 1,000 employees and over 2,800 health care and higher education members. That year the firm acquired Southwind, a health care industry management and consulting firm.
From 2010 to 2011, the company continued to expand, acquiring and partnering with a series of technology firms, including Milliman MedInsight, which provides population risk analytics; Cielo MedSolutions, which provides ambulatory patient registry software; and PivotHealth, a physician practice management firm.
In 2012, The Advisory Board Company acquired ActiveStrategy, a performance improvement technology firm, and 360Fresh, a leading provider of clinical data analytics. During that year, the firm also provided $1 million in benefit to non-profit organizations through its Community Impact program.
That year, the Advisory Board acquired Care Team Connect, a care management workflow platform, and launched the Student Success Collaborative, a software-based program that helps colleges and universities improve outcomes for at-risk and off-path students. The company also announced the acquisition of Medical Referral Source, a technology firm with software that facilitates a seamless referral process.
2014 brought the acquisitions of health care software firm HealthPost, and the higher education consulting firm Royall & Company.
The Advisory Board Company partnered with the de Beaumont Foundation, Kresge Foundation, and Robert Wood Johnson Foundation to launch the BUILD Health Challenge to identify and support health partnerships that were improving health in low-income, urban communities.
In 2015, the firm acquired Clinovations, a health care information technology firm, and GradesFirst, a software company that helped colleges and universities to identify and to support at-risk students. As of 2016, the company had grown to more than 3,600 employees, with offices on three continents.
In January 2017, the company reduced its healthcare workforce by 220 employees, or 5.7%, exited several businesses and announced a plan to close four offices by the end of 2017.
On August 29, 2017, the company sold and split off its two business units. The health care business was acquired by Optum, a global health company. On November 17, 2017, its education division, now EAB, announced that it would be established as an independent entity, separate from The Advisory Board Company. That business was acquired by Vista Equity Partners, a leading investment firm. The total value of the deal was roughly $2.5B. CEB, which had been spun out in 1997, had been acquired by Gartner in April 2017.
Awards
The Advisory Board Company was named to Forbes' Top 200 High-Growth Companies in 2005 and to Washingtonian magazine's Great Places to Work in 2003 and 2005. The company was also named to Modern Healthcare's Best Places to Work.
In 2012, the firm was named as one of "top 100 health care IT firms" by Healthcare Informatics and Modern Healthcare magazine's top 40 fastest-growing health care firms list. In 2013, the Advisory Board Company was named the “#1 Best Large Company to Work For" by Modern Healthcare. It also became the first for-profit company of its size to achieve 100% participation in community service.
The company's Community Impact program received the 2014 Corporate Engagement Award of Excellence from Points of Light, the world's largest organization dedicated to volunteer service.
In 2016 it was named as a Modern Healthcare Best Place to Work for the eighth straight year.
Notable current and former employees
Jeffrey Zients, White House COVID Coordinator; Former Director of the White House Office of Management and Budget
David Bradley, Owner of the Atlantic Media Company
Aneesh Chopra, Former Chief Technology Officer of the United States
Lisa Monaco, Assistant Attorney General
References
External links
Companies formerly listed on the Nasdaq
Companies based in Washington, D.C.
Consulting firms established in 1979
1979 establishments in Washington, D.C.
Consulting firms of the United States
2017 mergers and acquisitions
Consulting firms disestablished in 2017
2017 disestablishments in Washington, D.C.
2001 initial public offerings |
42160312 | https://en.wikipedia.org/wiki/Earthlock | Earthlock | Earthlock (formerly titled Earthlock: Festival of Magic) is a role-playing video game developed and published by the Norwegian company Snowcastle Games for Microsoft Windows, OS X, Linux, PlayStation 4, Xbox One, and Wii U. The Xbox One version was launched on September 1, 2016 worldwide.
The PC/Mac version, initially planned for release at the same time as the Xbox One release, was postponed to September 27, 2016. The game is planned to be the first volume of an Earthlock trilogy. A version for the Nintendo Switch was released on March 8, 2018.
A sequel, titled Earthlock 2, is scheduled for a 2022 release.
Gameplay
The game plays as a non-linear role-playing video game with turn-based battles. In combat, characters fight in pairs: a "warrior" and a "protector". The warrior uses consumable items such as ammunition, while the protector can perform spells like healing and shields. When the characters are damaged by enemies, they accumulate support points, which can be used to activate other moves. Pairing different combinations of characters unlocks different moves and tactics, greatly affecting the flow of battle.
Plot
The game's story takes place in the world of Umbra, where a cataclysmic event occurs that stops the planet from spinning. This leaves part of the world scorched by constant sunlight, part of it perpetually cold due to lack of sunlight, and a patch of livable land in between. The incident ends up burying much of the past advanced civilization, but humanity survives and society begins anew in the habitable part of the planet. The story follows the adventurer Amon, who, among his usual activities of scavenging ruins, gets entangled in a much larger conflict with the ruling Suvian Empire.
Development
The game is inspired by early Square Japanese role-playing games, though Snowcastle wanted to subvert common genre character stereotypes. Development for the game started as early as 2011, with a pre-alpha build trailer being released in May 2013. The game was originally just known by its subtitle, Festival of Magic.
In November 2013, the game was placed on the crowdfunding platform Kickstarter with the aim of securing $250,000 in funding. However, Snowcastle later pulled it from the platform after it got lost in the crowd during the holiday season. A fresh Kickstarter campaign was launched in March 2014 with a lowered goal of $150,000. This second campaign was successful, raising $178,000 in April 2014. The game's plot was written by comic book writer Magnus Aspli.
An early concept demo was released in March 2014 for the Windows, OS X and Linux platforms.
Snowcastle Games partnered with Cross Function to release the game in Japan. Despite waning interest in the Wii U in 2016 after the announcement of the Nintendo Switch, Snowcastle Games upheld their promise of a Wii U version, with an unnamed third-party handling porting duties. The Wii U version was ultimately released in September 2017 as a digital-only title.
References
2016 video games
Fantasy video games
Indie video games
Kickstarter-funded video games
Linux games
MacOS games
Nintendo Switch games
PlayStation 4 games
PlayStation Network games
Role-playing video games
Single-player video games
Steam Greenlight games
Steampunk video games
Video games developed in Norway
Video games featuring protagonists of selectable gender
Video games set on fictional planets
Wii U eShop games
Windows games
Xbox One games |
23763457 | https://en.wikipedia.org/wiki/Voddler | Voddler | Voddler was a Stockholm, Sweden-based provider of a video-on-demand (VOD) platform and a streaming technology for over-the-top (OTT) streaming on the public Internet. In Scandinavia, Voddler was primarily known for the commercial VOD service Voddler, which was launched in 2009. As a company, Voddler was founded in 2005 and developed its own streaming solution, called Vnet. Vnet is based on peer-to-peer (p2p) technology, in which all users contribute by streaming movies to each other, but, unlike traditional p2p, Vnet has a central administrator who decides which users have access to which movies. Due to this exception, Vnet has been referred to as a "hybrid p2p distribution system", "walled garden p2p" or "controlled p2p". In addition to running the consumer service Voddler, the company also offered, from 2013, Vnet as a stand-alone technology for other streaming platforms. The service Bollyvod, a global VOD service for Bollywood content that Voddler built for the Indian movie industry, was released as a pilot in 2014.
Voddler Group went bankrupt in January 2018.
Voddler's streaming technology Vnet
Voddler's streaming technology, called Vnet by the company, is a peer-to-peer-based video content delivery solution. With p2p streaming, movies are not streamed from a central server or content delivery network (CDN), but from other users who have parts of the movie on their devices after watching it earlier. This process begins when a user clicks play for a movie and continues throughout the viewing time, allowing for seamless viewing. After the viewer has finished watching the video, parts of the video file remain for a time on the user's device. Popular content, which is watched by many other users, remains longer on any one user's device than less popular content, which is removed from the network nodes more quickly.
Compared to server-based streaming, p2p-based streaming saves on data costs for the service provider, while the distribution becomes more robust, since the network grows stronger with every additional user. What distinguishes Vnet from traditional p2p is that Vnet allows an administrator to retain central control over which movies are in the network and which users can see them. Publishing into the network and access to the network are thus centrally controlled.
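This "controlled p2p" arrangement can be illustrated with a conceptual Python sketch: a central index is the only party that can publish a title, and it reveals peer addresses only to entitled users, while the peers themselves carry the actual video data. The class and method names below are invented for illustration; Vnet is closed source, and its real protocol and interfaces are not public.

```python
class ControlledP2PIndex:
    """Toy central index: peers hold the data, the administrator holds the keys."""

    def __init__(self):
        self.entitlements = {}   # movie_id -> set of user_ids allowed to stream it
        self.peers = {}          # movie_id -> peer addresses currently caching pieces

    def publish(self, movie_id: str, allowed_users: set) -> None:
        # only the central administrator can put a title into the network
        self.entitlements[movie_id] = set(allowed_users)
        self.peers.setdefault(movie_id, [])

    def register_peer(self, movie_id: str, peer_addr: str) -> None:
        # a client that has cached pieces after viewing announces itself as a source
        if movie_id in self.peers and peer_addr not in self.peers[movie_id]:
            self.peers[movie_id].append(peer_addr)

    def request_sources(self, user_id: str, movie_id: str) -> list:
        # central gatekeeping: peer addresses are revealed only to entitled users
        if user_id not in self.entitlements.get(movie_id, set()):
            raise PermissionError("user is not entitled to this title")
        return list(self.peers[movie_id])

index = ControlledP2PIndex()
index.publish("movie-42", {"alice", "bob"})
index.register_peer("movie-42", "10.0.0.5:4100")
print(index.request_sources("alice", "movie-42"))   # ['10.0.0.5:4100']
```

The data transfer itself stays peer-to-peer, as in open p2p networks, but the decision about who may fetch which title remains with the central administrator.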
The Vnet client is a separate application that uses closed-source proprietary code from Voddler and runs as a background daemon or service.
Vnet is a patented solution with 28 patents in two patent families.
Voddler's own VOD-service
The VOD-service Voddler, which is accessible via web browsers and apps in selected markets, allows registered users with a broadband connection to stream movies and TV-shows over the public Internet. The service was released in beta in Sweden on 28 October 2009, initially only for customers of Swedish ISP Bredbandsbolaget. After requiring users to have an invitation to the service during the first months, Voddler was fully opened in Sweden on 1 July 2010 and soon thereafter in Norway, Denmark and Finland.
The content catalog was initially completely free to the users and monetized via advertising. The catalog soon, however, became a mix of free movies (ad-funded or AVOD); rental movies (pay-per-view or TVOD); films that were part of a package (subscription or SVOD); and titles for purchase (Electronic Sell-Through or EST). The catalog contains primarily Hollywood- and other American titles, together with Scandinavian movies, primarily Swedish. Voddler built its catalog through license agreements with content owners such as Warner Bros., Paramount, Sony and Disney, including subsidiaries such as Touchstone Pictures and Miramax Films.
According to the company itself, Voddler reached over 1 million registered users in the Nordics, and also opened the service in Spain in 2012. To users in Spain, the catalog was more limited than in the Nordics.
Player and software clients
Voddler's first media player client required a separate download, and its graphical 10-foot user interface was primarily designed for the living-room TV with a remote control, instead of a desktop computer interface. In March 2010, Voddler updated the interface to allow for mouse and keyboard control, both for selecting movies and for playing them in the Voddler media player. This first media player was based on the GNU General Public License (GPL) source code of XBMC Media Center, a free and open-source software package that Voddler used as the application framework for the media player. After a controversy in 2010 surrounding the source code for Voddler's video player (see below under "GPL controversy"), Voddler changed the framework for its player and has since based it on Adobe Flash and Adobe AIR, neither of which is open source. At about the same time, Voddler also stopped requiring a separate download of the media player and instead started using a player embedded directly into the browser page. This new player, just like the old one, takes its stream from Voddler's streaming cloud Vnet.
Mobile units
On 23 June 2011, Voddler announced the launch of an Android app. Subsequently, Voddler also released apps for iPhone, iPad, Windows Phone, Symbian and MeeGo. Starting in 2013, Voddler increasingly used browser-based streaming instead of building dedicated apps for each platform.
GPL controversy
For its first video player, which was based on the GNU General Public License (GPL) source code of XBMC Media Center, Voddler also developed its own encryption module, to protect the movies streamed via the player from unauthorized copying or downloading. On 24 February 2010, the company closed down the service, having been hacked by anonymous programmers who had recreated the missing code parts that Voddler had added to its media player. The missing code made it possible for other media players to attach to Voddler's own, so that users could save the streamed films to their hard drives. This use case violated the licensing deals that Voddler had signed with the content owners. As motivation for their attack, the anonymous programmers said that Voddler, according to the GPL, should have published the code for the encryption module back to the open source project. Voddler claimed it had met the requirements of the GPL, which the anonymous group argued was wrong, insisting that Voddler had to distribute all of the Voddler source code needed to compile the Voddler player executable. When Voddler re-opened its service on 8 March 2010, it was with a new media player, no longer based on XBMC. Since encryption protection was essential for Voddler in order to keep its content license agreements, and since giving the code module back to the open source project was tantamount to removing the encryption, Voddler chose to completely replace the player with a commercially available framework.
The Company
Voddler Sweden AB was a venture-backed, privately held company based in Stockholm, Sweden. The company was founded in 2005 by Martin Alsen, Magnus Dalhamn and Mattias Bergström and for a while also held offices in Palo Alto and Beijing. The company was reorganized in 2008, when investors Marcus Starberger and Mathias Hjelmstedt invested in Voddler Inc. and took their place as new founders. Starberger's leading position in film and TV companies' premiere events and marketing, together with his knowledge of the film companies' value chain (cinema distribution, video stores, broadcast and satellite TV) and their transaction models, led to him being given a mandate to implement a distribution system and an end-customer interface for films and TV series via the web. At the beginning of 2010, Starberger chose to resign his leading position at Voddler after learning that Voddler had infringed copyright in its use of the GPL license. The company was financially backed by venture capital companies such as the Swedish Deseven, Starberger Group AB and the German Cipio Partners. Other investors included Nokia Growth Partners (Finland), Eqvitec (Finland), and Elisa Oyj (a Finnish telecommunications company). The company's CEO from 2009 was Marcus Bäcklund.
References
External links
Voddler Official Website
Voddler Official Community Support offered at GetSatisfaction.com
Cross-platform software
Linux media players
Internet software for Linux
MacOS media players
Multimedia software
Windows media players
Software forks |
45486848 | https://en.wikipedia.org/wiki/Teamwork%20%28project%20management%29 | Teamwork (project management) | Teamwork is an Irish, privately owned, web-based software company headquartered in Cork, Ireland. Teamwork creates task management and team collaboration software. Founded in 2007, as of 2016 the company stated that its software was in use by over 370,000 organisations worldwide (including Disney, Spotify and HP), and that it had over 2.4m users.
Company history
Peter Coppinger and Dan Mackey founded a company, Digital Crew, in 2007. This company built websites, intranets and custom web-based solutions for clients in Cork, Ireland. Frustrated by whiteboards and software management tools, Coppinger wanted a software system that would help manage client projects, be easy to use, and be generic enough to be used by different types of companies. Originally 37signals Basecamp users themselves, Coppinger and Mackey were frustrated by the limited feature set and by Basecamp's apparent inaction on their feedback. In October 2007, Coppinger and Mackey launched Teamwork Project Manager, nicknamed TeamworkPM. In March 2015, this was renamed as Teamwork Projects.
In 2014, after two years of negotiations, TeamworkPM bought the domain name 'Teamwork.com' for US$675,000 (€500,000). At the time this was one of the most expensive domain name purchases by an Irish company, and involved the transfer of a domain name which had been dormant since it was first acquired by the original owner in 1999.
In 2015, Teamwork was named by Gartner to be one of their "Cool Vendors" in the Program and Portfolio Management Category. This was followed by the launch of a new real-time messaging product, Teamwork Chat, in January 2015. In June 2015, the company announced a drive to recruit for 40 positions by the end of the year. This was followed by the announcement that the company was investing more than €1 million in a new office, and had leased office space in Park House, Blackpool.
In June 2016, Teamwork undertook a further recruitment drive to entice developers to Cork.
In July 2021, the company announced that it had raised an investment of $70 million (€59.1 million) from venture capital firm Bregal Milestone to fund further growth.
Products
Teamwork markets a number of cloud-based applications, including Teamwork, Teamwork Desk, Teamwork Spaces, Teamwork CRM and Teamwork Chat. Teamwork was launched on 4 October 2007, at which time it had time management, milestone management, file sharing, time tracking, and messaging features.
Teamwork's platform reportedly integrates with martech software like HubSpot, as well as other productivity tools like Slack, G Suite, MS Teams, Zapier, Dropbox and QuickBooks.
Awards
In 2016, Teamwork was awarded Cork's Best SME in the Cork Chamber of Commerce "Company of the Year" awards.
In 2016, Teamwork was ranked number 7 in Deloitte's Fast 50 list of fast-growing technology companies.
In 2015, Teamwork was identified as a Gartner "Cool Vendor" in the Program and Portfolio Management Category.
See also
List of collaborative software
List of project management software
References
External links
Project management software
Task management software
Web applications
Collaborative software
Groupware
Software project management
Document management systems |
3051464 | https://en.wikipedia.org/wiki/MacTheRipper | MacTheRipper | MacTheRipper is a Mac OS X application that enables users to create a playable copy of the contents of a Video DVD by defeating the Content Scramble System. During this process it may optionally modify or disable the DVD region code or the User operation prohibition features of the copied data. The previous lack of an OS X equivalent to the PC software DVDShrink gave this standalone DVD ripper widespread popularity among Macintosh users.
The current public release is version 2.6.6. The latest version, v4.2.7, is available at the MTR-4 forum, which is accessible only after registration with, and approval from, an administrator. Even documentation such as pricing (the program is no longer free) and the FAQ is locked behind this registration.
Legal issues
Previous releases of MacTheRipper violated the GNU General Public License (GPL) of the libdvdread and libdvdcss software libraries, on which MacTheRipper is built. However, with MacTheRipper 4 and newer the libdvdread and libdvdcss libraries are distributed separately and must be installed separately for MacTheRipper to work.
The creation and distribution of MacTheRipper may violate the anti-circumvention laws which the U.S. and EU have adopted as part of the WIPO Copyright Treaty. In a case against the maker of a program similar to MacTheRipper, the court found that "the downstream uses of the software [...], whether legal or illegal, are not relevant to determining whether [the manufacturer] itself is violating the statute." In that case and others that followed it, the court found the software manufacturer in violation of the DMCA.
See also
DVD ripping: an article about extracting the content of DVD, CD, and Blu-ray discs
HandBrake: a free open-source transcoder application for converting DVD content into other formats
References
External links
The original MacTheRipper page (for v1.0–v2.6.6), including usage instructions, formerly at http://mactheripper.org/, is gone; the URL now redirects to a commercial product called "Mac DVDRipper Pro" by DVDSuki Software. "MacTheRipper" and "Mac DVDRipper Pro" are different programs and should not be confused with each other.
Official MacTheRipper Support Forums
MacTheRipper on MacUpdate
Download MacTheRipper 2.6.6
What is MacTheRipper?
DVD rippers
Freeware
MacOS-only software |
2392076 | https://en.wikipedia.org/wiki/John%20Michels | John Michels | John Spiegel Michels (born March 19, 1973 in La Jolla, California) is a former American football offensive tackle in the National Football League and a current interventional pain management physician at Interventional Spine & Pain in Dallas, Texas (www.johnmichelsmd.com).
High school career
Michels attended La Jolla High School, where he was a three-time letterman in football, basketball, and track. In football, he was a two-way starter and was named First Team All-American and Western League Defensive MVP as a defensive tackle, and First Team All-San Diego County as an offensive tackle. In track and field, he was the 1991 Western League champion in the discus.
Michels was named as one of San Diego's 100 all-time greatest prep football players by the San Diego Union-Tribune.
College career
Michels played college football at the University of Southern California and was a First Team All-Pac-10 and a Second Team All-American offensive tackle, after being converted from a defensive end. He helped to lead the Trojans to a victory over Northwestern University in the 1996 Rose Bowl. After his senior year at USC, Michels was selected as a starter in the 1996 Senior Bowl All-Star game.
Professional career
Michels was drafted in the first round, 27th pick overall, of the 1996 NFL Draft by the Green Bay Packers. When then-starter and fellow Trojan Ken Ruettgers went down with a knee injury, Michels took over the left tackle duties. He started 9 games in his rookie season, helping the Packers win Super Bowl XXXI. He was named the Green Bay Packers 1996 Co-Rookie of the Year (along with Tyrone Williams) and earned NFL All-Rookie honors.
In 1997, he returned as the starting left tackle, starting the first five games of the season before injuring his right knee against the Detroit Lions. He was sidelined for the rest of the season and replaced by that year's first round pick Ross Verba. After having his best training camp as a professional in 1998, he again injured his right knee and spent the year on injured reserve. Unable to recover from his knee injury, he struggled in training camp in 1999 and was traded to the Philadelphia Eagles for defensive end Jon Harris. Michels only lasted a couple of weeks in Philadelphia before his knee injury ultimately ended his career.
Personal
Born John Spiegel Michels, Jr., Michels is the great-great-grandson of Joseph Spiegel, the founder of Spiegel Catalog, which was one of the most important firms in the mail-order industry, and arguably the first.
From 2000 to 2002, he served as the Youth Director at Canyon Hills Church in Mission Viejo, California.
In 2008 Michels received his medical degree from the Keck School of Medicine at the University of Southern California. He completed a residency in Diagnostic Radiology at Baylor College of Medicine in Houston, Texas, and a fellowship in Interventional Pain Medicine at the University of California, Irvine. He is a diplomate of the American Board of Radiology and the American Board of Pain Medicine.
He currently practices at Interventional Spine & Pain in Dallas, Texas, where he focuses on conservative therapies and minimally invasive procedures to alleviate pain and improve performance, with the aim of preventing, treating, and rehabilitating injury so that patients can perform at their optimal levels, pain free.
References
1973 births
Living people
American football offensive tackles
American people of German-Jewish descent
Players of American football from San Diego
USC Trojans football players
Keck School of Medicine of USC alumni
Baylor College of Medicine alumni
Green Bay Packers players
Philadelphia Eagles players
Spiegel family |
144787 | https://en.wikipedia.org/wiki/EDonkey2000 | EDonkey2000 | eDonkey2000 (nicknamed "ed2k") was a peer-to-peer file sharing application developed by US company MetaMachine (Jed McCaleb and Sam Yagan), using the Multisource File Transfer Protocol. This client supports both the eDonkey2000 network and the Overnet network.
On September 28, 2005, eDonkey was discontinued following a cease and desist letter from the RIAA.
eDonkey2000 network
Users on the eDonkey2000 network predominantly share large files of tens or hundreds of megabytes, such as CD images, videos, games, and software programs. To ease file searching, some websites list the checksums of sought-after files in the form of an ed2k link. Some of those websites also have lists of active servers for users to update.
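The ed2k links mentioned above follow a widely documented URI form, ed2k://|file|<file name>|<size in bytes>|<MD4 hash in hexadecimal>|/. A minimal Python parser for that form is sketched below; the parse_ed2k function is written for this sketch, and the example link is made up for illustration rather than pointing to a real file.

```python
def parse_ed2k(link: str) -> dict:
    """Split an ed2k file link into its name, size and MD4 hash fields."""
    parts = link.split("|")
    if not link.startswith("ed2k://") or len(parts) < 5 or parts[1] != "file":
        raise ValueError("not an ed2k file link")
    return {"name": parts[2], "size": int(parts[3]), "md4": parts[4].lower()}

example = "ed2k://|file|example.iso|733392896|0123456789ABCDEF0123456789ABCDEF|/"
print(parse_ed2k(example))
# {'name': 'example.iso', 'size': 733392896, 'md4': '0123456789abcdef0123456789abcdef'}
```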
MetaMachine also created another file-sharing network called Overnet, which interoperates with the eDonkey network, but without the use of servers. Most eDonkey clients also came to use the Overnet network. In 2004, MetaMachine announced it would stop development of Overnet to concentrate on eDonkey2000 (though the eDonkey2000 client still includes the Overnet protocol).
eDonkey has since been closed down.
Early history and design
eDonkey2000 was created by Jed McCaleb, cofounder of Stellar, and was first released on September 6, 2000. On September 16, 2000, client and server versions were available for Microsoft Windows and Linux.
Compared to the earlier P2P file-sharing program Napster, eDonkey2000 featured "swarming" downloads, meaning that clients could download different pieces of a single file from different peers, effectively utilizing the combined bandwidth of all of the peers instead of being limited to the bandwidth of a single peer.
At first, servers were isolated from one another as with Napster, but later versions of the eDonkey2000 server software enabled servers to form a search network. This allowed servers to forward search queries from their locally connected clients to other servers, allowing clients to effectively find peers connected to any server on the server network, thereby increasing download swarm size. It also allowed clients to find and download files not available from clients connected to the same server.
A third improvement compared to Napster was the use of file hashes instead of simple filenames in search results. File searches initiated by the user were keyword-based and matched against the filename list stored on the eDonkey2000 server, but the server returned a list of filenames paired with the hash values of those files to the client. When selecting a file from the list presented to the user, the client would actually initiate a download by hash value. This meant that a file could have many different filenames across different peers, but would be considered identical for purposes of downloading if its hash was the same.
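Taken together, swarming and hash-based identification mean that a downloader asks for chunks by file hash and chunk index, regardless of what each peer has named its copy. The toy Python simulation below illustrates the idea; all names in it are invented for this sketch, it is not the eDonkey2000 wire protocol (which splits files into fixed-size parts identified by MD4 hashes), and SHA-1 is used only because it is in the standard library.

```python
import hashlib

CHUNK = 4  # deliberately tiny chunk size, just for the demonstration
payload = b"the same file stored under different names on different peers"
file_hash = hashlib.sha1(payload).hexdigest()   # stand-in for the MD4 file hash

# each peer exposes the file under its own local name but indexes it by hash
peers = [
    {"local_name": "movie_final.bin", "data": {file_hash: payload}},
    {"local_name": "movie(1).bin",    "data": {file_hash: payload}},
    {"local_name": "mv.bin",          "data": {file_hash: payload}},
]

def fetch_chunk(peer: dict, fhash: str, index: int) -> bytes:
    data = peer["data"][fhash]
    return data[index * CHUNK:(index + 1) * CHUNK]

n_chunks = -(-len(payload) // CHUNK)                          # ceiling division
parts = [fetch_chunk(peers[i % len(peers)], file_hash, i)     # round-robin over peers
         for i in range(n_chunks)]
reassembled = b"".join(parts)
assert hashlib.sha1(reassembled).hexdigest() == file_hash     # integrity check
```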
The two-level (client and server) peer-to-peer network architecture offered a balance between centralized systems like Napster, and decentralized systems like Gnutella. Where Napster ultimately proved to be vulnerable was its centralized server cluster, which was a stable target for legal action. Gnutella's original design, featuring total elimination of the server network in favor of purely peer-to-peer searching, quickly proved to be infeasible due to massive search traffic overhead between peers.
Later 2nd-level P2P file sharing systems use a similar design to eDonkey2000 (downloading files in pieces by hash from multiple peers simultaneously) but innovate in the design of the server network, such as in the case of BitTorrent, which separates the file search feature ("torrent search") from the download peer locating feature ("torrent tracker").
eDonkey2000 client
The latest version of the official eDonkey2000 client included a plugin that allowed BitTorrent files to be downloaded. Once a torrent download begins, the search facility within eDonkey can find the same file within the eDonkey/Overnet network and synchronise its download. This effectively allowed a torrent to be used as another source for the download, vastly increasing speed as well as virtually eliminating problems with fakes. Torrents are very "clean" in terms of falsely labelled files, and their use as file-size verifiers, in addition to eDonkey2000's own user-based fake warning system, vastly improved the network's functionality. By combining the range of the existing Overnet and eDonkey networks with the fast file distribution of the BitTorrent system, eDonkey2000 was following a growing trend among peer-to-peer programs of integrating downloads from multiple networks. This has the advantage of maximising the number of files available while limiting vulnerability to problems on a single network.
eDonkey sued by RIAA
In September 2005, officials from the company MetaMachine received a cease and desist letter from the RIAA as a result of the June 2005 Supreme Court ruling MGM Studios v. Grokster that makers of software that facilitates copyright infringement are liable for that infringement. Many news sites reported that on September 22, 2005, MetaMachine's corporate offices had closed. This was apparently inaccurate, based on the aforementioned news sites checking for the old eDonkey headquarters in New York (the new offices being in New Jersey, as the company had moved there).
However, on September 28, 2005, eDonkey officially closed its doors. MetaMachine President Sam Yagan said in a statement that the company would "convert eDonkey's user base to an online content retailer operating in a closed P2P environment," with "such a transaction to take place as soon as we can reach a settlement with the RIAA." This had little effect on the network as a whole, as official eDonkey clients made up only a small minority of the clients using it.
On September 12, 2006, it was reported that MetaMachine, Inc. had agreed to settle with the RIAA for $30 million, and the website was replaced by a text advertisement reflecting the RIAA's interpretation of copyright law.
Nevertheless, the eDonkey Network is still available through other clients, such as eMule or aMule.
See also
Comparison of eDonkey software
Comparison of file sharing applications
eDonkey network
References
External links
Old client and plugin
eDonkey2000 Archived News – November 9, 2000 Internet Archive snapshot of eDonkey2000.com home page showing original release announcement
eDonkey2000 Overview – February 13, 2001 Internet Archive snapshot of eDonkey2000.com Overview page explaining network architecture
eDonkey2000 – Overnet – August 27, 2006 Internet Archive snapshot of edonkey.com home page
News
eDonkey2000 becomes the eMule Project – September 28, 2005 MP3 Newswire article
Slashdot.org article "eDonkey Pays the Recording Industry $30M" – September 12, 2006
File sharing software
Cross-platform software
Internet services shut down by a legal challenge
2000 software |
6330250 | https://en.wikipedia.org/wiki/Software%20for%20handling%20chess%20problems | Software for handling chess problems | This article covers computer software designed to solve, or assist people in creating or solving, chess problems – puzzles in which pieces are laid out as in a game of chess, sometimes based on real games of chess that have been played and recorded. The aim of such a puzzle is to challenge the problemist to find a solution to the posed situation, within the rules of chess, rather than to play a game of chess from the beginning against an opponent.
This is usually distinct from actually playing and analyzing games of chess. Many chess playing programs also have provision for solving some kinds of problem such as checkmate in a certain number of moves (directmates), and some also have support for helpmates and selfmates.
Software for chess problems can be used for creating and solving problems, including checking the soundness of a concept and position, storing problems in a database, printing and publishing them, and saving and exporting them. Such programs can solve not only direct mates, helpmates and selfmates, but at times even problems with fairy pieces and other fairy chess conditions. There have also been some attempts to have computers "compose" problems, largely autonomously.
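The kind of search these solvers perform can be illustrated with a brute-force test for a two-move directmate ("White to play and mate in two"). The sketch below assumes the third-party python-chess package, and its helper functions are written only for this example; dedicated solvers such as those described below are vastly faster and also handle selfmates, helpmates and fairy conditions, so this only shows the underlying idea.

```python
import chess  # third-party package: pip install chess

def has_immediate_mate(board: chess.Board) -> bool:
    """Can the side to move deliver checkmate on this move?"""
    for move in list(board.legal_moves):
        board.push(move)
        mate = board.is_checkmate()
        board.pop()
        if mate:
            return True
    return False

def is_mate_in_two(board: chess.Board) -> bool:
    """True if the side to move can force checkmate in at most two moves."""
    for key in list(board.legal_moves):
        board.push(key)
        if board.is_checkmate():               # the key move itself mates
            board.pop()
            return True
        defences = list(board.legal_moves)
        refuted = not defences                 # stalemate refutes the key move
        for defence in defences:
            board.push(defence)
            if not has_immediate_mate(board):  # some defence escapes the mate
                refuted = True
            board.pop()
            if refuted:
                break
        board.pop()
        if not refuted:
            return True
    return False

# usage: pass any position with the attacker to move, e.g.
# print(is_mate_in_two(chess.Board("<FEN of the problem diagram>")))
```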
Software
Alybadix
First developed in 1980 by Ilkka Blom, Alybadix is a suite of chess problem solving programs for DOS and Commodore 64. Alybadix supports solving classical problems as well as selfmates, reflex mates, series movers, Circe, maximummers, and many fairy types. It comes with a large problem collection and supports quality printing. In 1993, Schach und Spiele magazine considered Alybadix to be six times faster than other playing machines including the RISC 2500.
Popeye
Popeye is a chess problem-solving program accommodating many fairy chess rules and able to investigate set play and tries. It can be used with several operating systems and can be connected to several existing graphical interfaces, since it comes with freely available source code. Since its origin, Popeye was designed as a general-purpose, extensible tool for checking fairy and heterodox chess problems. The original author of Popeye was Philippe Schnoebelen, who wrote it in Pascal under MS-DOS around 1983–84. In 1986 the code was donated in the spirit of the free software movement. Elmar Bartel, Norbert Geissler, Thomas Maeder, Torsten Linss, Stefan Hoening, Stefan Brunzen, Harald Denker, Thomas Bark and Stephen Emmerson converted Popeye to the C programming language, and now maintain the program.
A graphical interface, "AP WIN" (freeware, for use with Windows XP or Windows 7), has since been developed by Paul H. Wiereyn. Using it, one can create diagrams and have Popeye solve problems directly from the diagram.
Chloe and Winchloe
Chloe (DOS) and Winchloe (proprietary software) are solving programs written by Christian Poisson. Winchloe not only supports classical problems — direct mates, helpmates and selfmates — but also many fairy pieces and conditions with different sized chessboards (up to 250 by 250 squares). It comes with a collection of more than 300,000 problems that can be updated via the Internet. Christian Poisson also maintains the Web site Problemesis.
Natch and iNatch
Natch and iNatch are freeware programs written by Pascal Wassong for DOS and Linux. Natch solves retrograde analysis problems by constructing a "proof game" – the shortest possible game leading to a certain position. Natch is a command-line utility, but there is a Java-based graphical interface. iNatch also supports fairy conditions such as monochrome chess, Einstein chess, and vertical cylinder.
Problemist(e)
Problemist is a shareware program written by Matthieu Leschamelle for Windows and Windows Mobile. Problemist solves direct mates, helpmates, selfmates and reflexmates. It can rotate positions, print diagrams and much more. With Problemist come two TrueType chess fonts, and from its web page one can download more than 100,000 problems. Problemist is the first chess problems exchange format.
Jacobi
Jacobi is a program by François Labelle for solving fairy chess proof game problems. It is written in JavaScript and runs in the browser. Labelle had already developed chess-related programs and published computer-generated chess problems in 2003.
Chest
Chest was created by Heiner Marxen in 1999. It is written in C and distributed as source code. It solves direct mates, selfmates, and helpmates (as well as stalemates for self- and helpmates). A UCI adapter (written by Franz Huber) is also available, allowing Chest to be used as a solving engine in any UCI-capable chess GUI.
Databases
Chess Problem Database Server
Chess Problem Database Server is an online database of all types of chess problems, maintained by Gerd Wilts and hosted by Die Schwalbe. The database incorporates the John Niemann collection and the work of many contributors, and contains 428,703 problems (as of November 2019). Problems are presented graphically with solutions and commentary.
Other
LaTeX Diagram Style
Diagram is a style file for LaTeX for typesetting chess diagrams. The style was originally created by Thomas Brand and further developed by Stefan Hoening, both based on ideas from a TeX package by Elmar Bartel. The style is used to produce the German problem chess magazine Die Schwalbe.
External links
Chess Problem Database Server
See also
Chess aesthetics
List of chess software
References
Chess problems
Chess software |
195870 | https://en.wikipedia.org/wiki/IBM%20WebSphere | IBM WebSphere | IBM WebSphere refers to a brand of proprietary computer software products in the genre of enterprise software known as "application and integration middleware". These software products are used by end-users to create and integrate applications with other applications. IBM WebSphere has been available to the general market since 1998.
History
IBM introduced the first product in this brand, IBM WebSphere Performance Pack, in June 1998. This original component forms part of IBM WebSphere Application Server Network Deployment, which is itself one of many WebSphere-branded enterprise software products.
IBM WebSphere Software
The following complete list of IBM WebSphere software uses IBM classifications. Several tools appear in more than one category.
IBM has also classified WebSphere software according to the capabilities offered for individual industries.
Application Infrastructure
Main Products
IBM WebSphere Application Server - a web application server
IBM Workload Deployer - a hardware appliance that provides access to IBM middleware virtual images and patterns
IBM WebSphere eXtreme Scale - an in-memory data grid for use in high-performance computing
IBM HTTP Server
IBM WebSphere Adapters
IBM WebSphere Business Events
IBM WebSphere Edge Components
IBM WebSphere Host On-Demand (HOD), Web-based TN3270, TN5250 and VT420 terminal emulation
IBM WebSphere Message Broker
IBM WebSphere Multichannel Bank Transformation Toolkit
IBM MQ (previously known as IBM MQSeries and IBM WebSphere MQ)
IBM WebSphere Portlet Factory
IBM WebSphere Process Server
Divested (Sold) Products
WebSphere Commerce to HCL Technologies in July 2019
WebSphere Portal to HCL Technologies in July 2019
See also
IBM InfoSphere DataStage
Enterprise application integration
Notes and references
Further reading
External links
Websphere Liberty Profile
Java enterprise platform
WebSphere
Portal software
Service-oriented architecture-related products |
12610483 | https://en.wikipedia.org/wiki/Android%20%28operating%20system%29 | Android (operating system) | Android is a mobile operating system based on a modified version of the Linux kernel and other open source software, designed primarily for touchscreen mobile devices such as smartphones and tablets. Android is developed by a consortium of developers known as the Open Handset Alliance and commercially sponsored by Google. It was unveiled in November 2007, with the first commercial Android device, the HTC Dream, being launched in September 2008.
Most versions of Android are proprietary. The core components are taken from the Android Open Source Project (AOSP), which is free and open-source software primarily licensed under the Apache License. When Android is actually installed on devices, the ability to modify the otherwise FOSS software is usually restricted, either by not providing the corresponding source code or by preventing reinstallation through technical measures, rendering the installed version proprietary. Most Android devices ship with additional proprietary software pre-installed, most notably Google Mobile Services (GMS), which includes core apps such as Google Chrome, the digital distribution platform Google Play, and the associated Google Play Services development platform.
Over 70 percent of Android smartphones run Google's ecosystem, some with a vendor-customized user interface and software suite, such as TouchWiz and later One UI by Samsung, and HTC Sense. Competing Android ecosystems and forks include Fire OS (developed by Amazon) and LineageOS. However, the "Android" name and logo are trademarks of Google, which imposes standards to restrict the use of Android branding by "uncertified" devices outside their ecosystem.
The source code has been used to develop variants of Android on a range of other electronics, such as game consoles, digital cameras, portable media players, and PCs, each with a specialized user interface. Some well-known derivatives include Android TV for televisions and Wear OS for wearables, both developed by Google. Software packages on Android, which use the APK format, are generally distributed through proprietary application stores like Google Play Store, Amazon Appstore (including for Windows 11), Samsung Galaxy Store, Huawei AppGallery, Cafe Bazaar, and GetJar, or open-source platforms like Aptoide or F-Droid.
Android has been the best-selling OS worldwide on smartphones since 2011 and on tablets since 2013. It has over three billion monthly active users, the largest installed base of any operating system, and the Google Play Store features over 3 million apps. Android 12, released on October 4, 2021, is the latest version.
History
Android Inc. was founded in Palo Alto, California, in October 2003 by Andy Rubin, Rich Miner, Nick Sears, and Chris White. Rubin described the Android project as having "tremendous potential in developing smarter mobile devices that are more aware of its owner's location and preferences". The early intentions of the company were to develop an advanced operating system for digital cameras, and this was the basis of its pitch to investors in April 2004. The company then decided that the market for cameras was not large enough for its goals, and five months later it had diverted its efforts and was pitching Android as a handset operating system that would rival Symbian and Microsoft Windows Mobile.
Rubin had difficulty attracting investors early on, and Android was facing eviction from its office space. Steve Perlman, a close friend of Rubin, brought him $10,000 in cash in an envelope, and shortly thereafter wired an undisclosed amount as seed funding. Perlman refused a stake in the company, and has stated "I did it because I believed in the thing, and I wanted to help Andy."
In 2005, Rubin tried to negotiate deals with Samsung and HTC. Shortly afterwards, Google acquired the company in July of that year for at least $50 million; this was Google's "best deal ever" according to Google's then-vice president of corporate development, David Lawee, in 2010. Android's key employees, including Rubin, Miner, Sears, and White, joined Google as part of the acquisition. Not much was known about the secretive Android Inc. at the time, with the company having provided few details other than that it was making software for mobile phones. At Google, the team led by Rubin developed a mobile device platform powered by the Linux kernel. Google marketed the platform to handset makers and carriers on the promise of providing a flexible, upgradeable system. Google had "lined up a series of hardware components and software partners and signaled to carriers that it was open to various degrees of cooperation".
Speculation about Google's intention to enter the mobile communications market continued to build through December 2006. An early prototype had a close resemblance to a BlackBerry phone, with no touchscreen and a physical QWERTY keyboard, but the arrival of 2007's Apple iPhone meant that Android "had to go back to the drawing board". Google later changed its Android specification documents to state that "Touchscreens will be supported", although "the Product was designed with the presence of discrete physical buttons as an assumption, therefore a touchscreen cannot completely replace physical buttons". By 2008, both Nokia and BlackBerry announced touch-based smartphones to rival the iPhone 3G, and Android's focus eventually switched to just touchscreens. The first commercially available smartphone running Android was the HTC Dream, also known as T-Mobile G1, announced on September 23, 2008.
On November 5, 2007, the Open Handset Alliance, a consortium of technology companies including Google, device manufacturers such as HTC, Motorola and Samsung, wireless carriers such as Sprint and T-Mobile, and chipset makers such as Qualcomm and Texas Instruments, unveiled itself, with a goal to develop "the first truly open and comprehensive platform for mobile devices". Within a year, the Open Handset Alliance faced two other open source competitors, the Symbian Foundation and the LiMo Foundation, the latter also developing a Linux-based mobile operating system like Google. In September 2007, InformationWeek covered an Evalueserve study reporting that Google had filed several patent applications in the area of mobile telephony.
Since 2008, Android has seen numerous updates which have incrementally improved the operating system, adding new features and fixing bugs in previous releases. Each major release is named in alphabetical order after a dessert or sugary treat, with the first few Android versions being called "Cupcake", "Donut", "Eclair", and "Froyo", in that order. During its announcement of Android KitKat in 2013, Google explained that "Since these devices make our lives so sweet, each Android version is named after a dessert", although a Google spokesperson told CNN in an interview that "It's kind of like an internal team thing, and we prefer to be a little bit — how should I say — a bit inscrutable in the matter, I'll say".
In 2010, Google launched its Nexus series of devices, a lineup in which Google partnered with different device manufacturers to produce new devices and introduce new Android versions. The series was described as having "played a pivotal role in Android's history by introducing new software iterations and hardware standards across the board", and became known for its "bloat-free" software with "timely ... updates". At its developer conference in May 2013, Google announced a special version of the Samsung Galaxy S4, where, instead of using Samsung's own Android customization, the phone ran "stock Android" and was promised to receive new system updates fast. The device would become the start of the Google Play edition program, and was followed by other devices, including the HTC One Google Play edition, and Moto G Google Play edition. In 2015, Ars Technica wrote that "Earlier this week, the last of the Google Play edition Android phones in Google's online storefront were listed as "no longer available for sale" and that "Now they're all gone, and it looks a whole lot like the program has wrapped up".
From 2008 to 2013, Hugo Barra served as product spokesperson, representing Android at press conferences and Google I/O, Google's annual developer-focused conference. He left Google in August 2013 to join Chinese phone maker Xiaomi. Less than six months earlier, Google's then-CEO Larry Page announced in a blog post that Andy Rubin had moved from the Android division to take on new projects at Google, and that Sundar Pichai would become the new Android lead. Pichai himself would eventually switch positions, becoming the new CEO of Google in August 2015 following the company's restructure into the Alphabet conglomerate, making Hiroshi Lockheimer the new head of Android.
In Android 4.4 "KitKat", shared write access to MicroSD memory cards was locked for user-installed applications; only each application's dedicated directory with its respective package name, located inside Android/data/, remained writable. Write access was reinstated with Android 5 Lollipop through the backwards-incompatible Google Storage Access Framework interface.
In June 2014, Google announced Android One, a set of "hardware reference models" that would "allow [device makers] to easily create high-quality phones at low costs", designed for consumers in developing countries. In September, Google announced the first set of Android One phones for release in India. However, Recode reported in June 2015 that the project was "a disappointment", citing "reluctant consumers and manufacturing partners" and "misfires from the search company that has never quite cracked hardware". Plans to relaunch Android One surfaced in August 2015, with Africa announced as the next location for the program a week later. A report from The Information in January 2017 stated that Google is expanding its low-cost Android One program into the United States, although The Verge notes that the company will presumably not produce the actual devices itself. Google introduced the Pixel and Pixel XL smartphones in October 2016, marketed as being the first phones made by Google, and exclusively featured certain software features, such as the Google Assistant, before wider rollout. The Pixel phones replaced the Nexus series, with a new generation of Pixel phones launched in October 2017.
In May 2019, the operating system became entangled in the trade war between China and the United States involving Huawei, which, like many other tech firms, had become dependent on access to the Android platform. In the summer of 2019, Huawei announced it would create an alternative operating system to Android known as Harmony OS, and has filed for intellectual property rights across major global markets. Huawei does not currently have any plans to replace Android in the near future, as Harmony OS is designed for internet of things devices, rather than for smartphones.
On August 22, 2019, it was announced that Android "Q" would officially be branded as Android 10, ending the historic practice of naming major versions after desserts. Google stated that these names were not "inclusive" to international users (due either to the aforementioned foods not being internationally known, or being difficult to pronounce in some languages). On the same day, Android Police reported that Google had commissioned a statue of a giant number "10" to be installed in the lobby of the developers' new office. Android 10 was released on September 3, 2019 to Google Pixel phones first.
With scoped storage, conventional write access to the shared internal user storage is locked, and only app-specific directories remain accessible as usual. Files and directories outside these remain accessible only through the backwards-incompatible Storage Access Framework. While these restrictions are claimed to improve user privacy, private app-specific directories had already existed under /data/ since early versions of the operating system.
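As an illustration of what this means for application code, the following minimal Kotlin sketch reads a user-selected file through the Storage Access Framework instead of a raw file path; the request code and MIME type are arbitrary placeholder choices, not part of any official sample.

    import android.app.Activity
    import android.content.Intent
    import android.net.Uri
    import android.os.Bundle

    // Illustrative sketch only: letting the user pick a document via the
    // Storage Access Framework and reading it through the content resolver.
    class PickDocumentActivity : Activity() {
        private val pickRequestCode = 42  // hypothetical request code

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            val intent = Intent(Intent.ACTION_OPEN_DOCUMENT).apply {
                addCategory(Intent.CATEGORY_OPENABLE)
                type = "text/plain"  // placeholder MIME type
            }
            startActivityForResult(intent, pickRequestCode)
        }

        override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
            super.onActivityResult(requestCode, resultCode, data)
            if (requestCode == pickRequestCode && resultCode == RESULT_OK) {
                val uri: Uri? = data?.data
                // Access goes through a content URI, not a direct file path.
                uri?.let { contentResolver.openInputStream(it)?.use { stream -> stream.readBytes() } }
            }
        }
    }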
Features
Interface
Android's default user interface is mainly based on direct manipulation, using touch inputs that loosely correspond to real-world actions, like swiping, tapping, pinching, and reverse pinching to manipulate on-screen objects, along with a virtual keyboard. Game controllers and full-size physical keyboards are supported via Bluetooth or USB. The response to user input is designed to be immediate and provides a fluid touch interface, often using the vibration capabilities of the device to provide haptic feedback to the user. Internal hardware, such as accelerometers, gyroscopes and proximity sensors are used by some applications to respond to additional user actions, for example adjusting the screen from portrait to landscape depending on how the device is oriented, or allowing the user to steer a vehicle in a racing game by rotating the device, simulating control of a steering wheel.
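As a hedged illustration of how applications consume such sensors, the following Kotlin sketch registers an accelerometer listener through the standard SensorManager API; the class name and sampling rate are arbitrary choices for this example.

    import android.content.Context
    import android.hardware.Sensor
    import android.hardware.SensorEvent
    import android.hardware.SensorEventListener
    import android.hardware.SensorManager

    // Minimal sketch: reacting to device movement via the accelerometer.
    class TiltListener(context: Context) : SensorEventListener {
        private val sensorManager =
            context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
        var latestTilt = 0f
            private set

        fun start() {
            val accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)
            sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_GAME)
        }

        fun stop() = sensorManager.unregisterListener(this)

        override fun onSensorChanged(event: SensorEvent) {
            // event.values holds acceleration along the x, y and z axes, in m/s^2.
            latestTilt = event.values[0]  // a racing game could map this to steering
        }

        override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) {}
    }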
Home screen
Android devices boot to the home screen, the primary navigation and information "hub" on Android devices, analogous to the desktop found on personal computers. Android home screens are typically made up of app icons and widgets; app icons launch the associated app, whereas widgets display live, auto-updating content, such as a weather forecast, the user's email inbox, or a news ticker directly on the home screen. A home screen may be made up of several pages, between which the user can swipe back and forth. Third-party apps available on Google Play and other app stores can extensively re-theme the home screen, and even mimic the look of other operating systems, such as Windows Phone. Most manufacturers customize the look and features of their Android devices to differentiate themselves from their competitors.
Status bar
Along the top of the screen is a status bar, showing information about the device and its connectivity. The status bar can be pulled (swiped) down to reveal a notification screen where apps display important information or updates, as well as quick access to system controls and toggles such as display brightness, connectivity settings (WiFi, Bluetooth, cellular data), audio mode, and flashlight. Vendors may implement extended settings, such as the ability to adjust the flashlight brightness.
Notifications
Notifications are "short, timely, and relevant information about your app when it's not in use", and when tapped, users are directed to a screen inside the app relating to the notification. Beginning with Android 4.1 "Jelly Bean", "expandable notifications" allow the user to tap an icon on the notification in order for it to expand and display more information and possible app actions right from the notification.
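A minimal Kotlin sketch of such an expandable notification, assuming the AndroidX NotificationCompat support library and placeholder channel id, icon and texts, might look like this (notification channels additionally require Android 8.0 or later):

    import android.app.NotificationChannel
    import android.app.NotificationManager
    import android.content.Context
    import androidx.core.app.NotificationCompat

    // Illustrative sketch: an expandable "big text" notification.
    fun showExpandableNotification(context: Context) {
        val channelId = "example_channel"  // hypothetical channel id
        val manager =
            context.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager
        manager.createNotificationChannel(
            NotificationChannel(channelId, "Example", NotificationManager.IMPORTANCE_DEFAULT)
        )

        val notification = NotificationCompat.Builder(context, channelId)
            .setSmallIcon(android.R.drawable.ic_dialog_info)
            .setContentTitle("New message")
            .setContentText("Tap to expand")
            // BigTextStyle makes the notification expandable to show more content.
            .setStyle(NotificationCompat.BigTextStyle().bigText("The full message body, shown when the notification is expanded."))
            .build()

        manager.notify(1, notification)
    }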
App lists
An "All Apps" screen lists all installed applications, with the ability for users to drag an app from the list onto the home screen. The app list may be accessed using a gesture or a button, depending on the Android version. A "Recents" screen, also known as "Overview", lets users switch between recently used apps.
The recent list may appear side-by-side or overlapping, depending on the Android version and manufacturer.
Navigation buttons
Many early Android OS smartphones were equipped with a dedicated search button for quick access to a web search engine and individual apps' internal search feature. More recent devices typically allow the former through a long press or swipe away from the home button.
The dedicated option key, also known as the menu key, and its on-screen simulation have not been supported since Android version 10. Google recommends that mobile application developers locate menus within the user interface. On more recent phones, its place is occupied by a task key, used to access the list of recently used apps when actuated. Depending on the device, a long press of this key may simulate a menu button press or engage split-screen view, the latter of which has been the default behaviour since stock Android version 7.
Split-screen view
Native support for split screen view has been added in stock Android version 7.0 Nougat.
The earliest vendor-customized Android-based smartphones known to have featured a split-screen view mode are the 2012 Samsung Galaxy S3 and Note 2, the former of which received this feature with the premium suite upgrade delivered in TouchWiz with Android 4.1 Jelly Bean.
Charging while powered off
When connecting or disconnecting charging power, and when briefly actuating the power button or home button while the device is powered off, a visual battery meter, whose appearance varies among vendors, appears on the screen, allowing the user to quickly assess the charge status of a powered-off device without having to boot it up first. Some also display the battery percentage.
Audio-coupled haptic effect
Since stock Android version 12, released on October 4, 2021, synchronous vibration can be set to complement audio. Such a feature initially existed under the name "Auto Haptic" on the Android-based 2012 Samsung Galaxy S III, released with a vendor-modified (TouchWiz) installation of Android 4.1 Jelly Bean.
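On Android 12 and later, one way an application can opt into such audio-coupled vibration is the platform HapticGenerator audio effect; the following Kotlin fragment is a hedged sketch that assumes an existing MediaPlayer instance and a device that supports the effect.

    import android.media.MediaPlayer
    import android.media.audiofx.HapticGenerator

    // Sketch (API 31+): derive vibration from the audio played on a session.
    fun enableAudioHaptics(player: MediaPlayer): HapticGenerator? {
        if (!HapticGenerator.isAvailable()) return null  // unsupported on this device
        return HapticGenerator.create(player.audioSessionId).apply { setEnabled(true) }
    }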
Applications
Most Android devices come with preinstalled Google apps, including Gmail, Google Maps, Google Chrome, YouTube, Google Play Music, Google Play Movies & TV, and many more.
Applications ("apps"), which extend the functionality of devices (and must be 64-bit), are written using the Android software development kit (SDK) and, often, Kotlin programming language, which replaced Java as Google's preferred language for Android app development in May 2019, and was originally announced in May 2017. Java is still supported (originally the only option for user-space programs, and is often mixed with Kotlin), as is C++. Java or other JVM languages, such as Kotlin, may be combined with C/C++, together with a choice of non-default runtimes that allow better C++ support. The Go programming language is also supported, although with a limited set of application programming interfaces (API).
The SDK includes a comprehensive set of development tools, including a debugger, software libraries, a handset emulator based on QEMU, documentation, sample code, and tutorials. Initially, Google's supported integrated development environment (IDE) was Eclipse using the Android Development Tools (ADT) plugin; in December 2014, Google released Android Studio, based on IntelliJ IDEA, as its primary IDE for Android application development. Other development tools are available, including a native development kit (NDK) for applications or extensions in C or C++, Google App Inventor, a visual environment for novice programmers, and various cross-platform mobile web application frameworks. In January 2014, Google unveiled a framework based on Apache Cordova for porting Chrome HTML5 web applications to Android, wrapped in a native application shell. Additionally, Google acquired Firebase in 2014, which provides helpful tools for app and web developers.
Android has a growing selection of third-party applications, which can be acquired by users by downloading and installing the application's APK (Android application package) file, or by downloading them using an application store program that allows users to install, update, and remove applications from their devices. Google Play Store is the primary application store installed on Android devices that comply with Google's compatibility requirements and license the Google Mobile Services software. Google Play Store allows users to browse, download and update applications published by Google and third-party developers; there are more than three million applications available for Android in the Play Store, and 50 billion application installations have been performed. Some carriers offer direct carrier billing for Google Play application purchases, where the cost of the application is added to the user's monthly bill. There are over one billion active users a month for Gmail, Android, Chrome, Google Play and Maps.
Due to the open nature of Android, a number of third-party application marketplaces also exist for Android, either to provide a substitute for devices that are not allowed to ship with Google Play Store, provide applications that cannot be offered on Google Play Store due to policy violations, or for other reasons. Examples of these third-party stores have included the Amazon Appstore, GetJar, and SlideMe. F-Droid, another alternative marketplace, seeks to only provide applications that are distributed under free and open source licenses.
In October 2020, Google removed several Android applications from Play Store, as they were identified breaching its data collection rules. The firm was informed by International Digital Accountability Council (IDAC) that apps for children like Number Coloring, Princess Salon and Cats & Cosplay, with collective downloads of 20 million, were violating Google's policies.
At the Windows 11 announcement event in June 2021, Microsoft showcased the new Windows Subsystem for Android (WSA) that will enable support for the Android Open Source Project (AOSP) and will allow users to run Android apps on their Windows desktop.
File manager
Since Android 6 Marshmallow, a minimalistic file manager codenamed DocumentsUI has been part of the operating system's core and is based on the file selector. It is only accessible through the storage menu in the system settings.
Adoptable storage
Android 6.0 Marshmallow brought adoptable storage, an option to format and mount the memory card as extension of the internal storage instead of default separate portable storage.
While adopted storage can facilitate on-device file management, because files stored on both the internal storage and the memory card appear in one place, it prevents data recovery in case of a technical defect and prevents immediate reuse of the card in a different device unless it is reformatted.
For these reasons, the major vendors Samsung and LG opted to exclude adoptable storage.
Applications moved to the memory card were previously stored as .asec files inside an ".android_secure" directory.
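As a hedged Kotlin sketch, an application can enumerate the storage volumes the system exposes through the StorageManager API (Android 7.0 and later), for example to see whether a memory card is mounted as separate portable storage; the function name is a placeholder.

    import android.content.Context
    import android.os.storage.StorageManager

    // Illustrative sketch: list the mounted storage volumes and their flags.
    fun describeStorage(context: Context): List<String> {
        val storageManager = context.getSystemService(StorageManager::class.java)
        return storageManager.storageVolumes.map { volume ->
            "${volume.getDescription(context)}: primary=${volume.isPrimary}, " +
                "removable=${volume.isRemovable}, state=${volume.state}"
        }
    }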
Memory management
Since Android devices are usually battery-powered, Android is designed to manage processes to keep power consumption at a minimum. When an application is not in use the system suspends its operation so that, while available for immediate use rather than closed, it does not use battery power or CPU resources. Android manages the applications stored in memory automatically: when memory is low, the system will begin invisibly and automatically closing inactive processes, starting with those that have been inactive for the longest amount of time. Lifehacker reported in 2011 that third-party task-killer applications were doing more harm than good.
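To illustrate how applications are expected to cooperate with this automatic management, the following Kotlin sketch releases caches when the system signals memory pressure; the class name and chosen threshold are illustrative only.

    import android.app.Application
    import android.content.ComponentCallbacks2

    // Sketch: release in-memory caches when Android reports memory pressure,
    // instead of relying on third-party "task killer" apps.
    class ExampleApp : Application() {
        override fun onTrimMemory(level: Int) {
            super.onTrimMemory(level)
            if (level >= ComponentCallbacks2.TRIM_MEMORY_BACKGROUND) {
                // The process is on the background list and may be killed soon;
                // drop caches here so less work is lost if that happens.
            }
        }
    }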
Developer options
Some settings for use by developers for debugging and power users are located in a "Developer options" sub menu, such as the ability to highlight updating parts of the display, show an overlay with the current status of the touch screen, show touching spots for possible use in screencasting, notify the user of unresponsive background processes with the option to end them ("Show all ANRs", i.e. "App's Not Responding"), prevent a Bluetooth audio client from controlling the system volume ("Disable absolute volume"), and adjust the duration of transition animations or deactivate them completely to speed up navigation.
Developer options have been hidden by default since Android 4.2 "Jelly Bean", but can be enabled by tapping the operating system's build number in the device information seven times. Hiding developer options again requires deleting the user data of the "Settings" app, possibly resetting some other preferences.
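As a small illustrative Kotlin sketch, an application can check whether the developer options menu has been enabled by reading the corresponding global setting (available since Android 4.2); the function name is a placeholder.

    import android.content.Context
    import android.provider.Settings

    // Sketch: returns true if the hidden "Developer options" menu has been enabled.
    fun developerOptionsEnabled(context: Context): Boolean =
        Settings.Global.getInt(
            context.contentResolver,
            Settings.Global.DEVELOPMENT_SETTINGS_ENABLED,
            0
        ) != 0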
Hardware
The main hardware platform for Android is ARM (the ARMv7 and ARMv8-A architectures), with x86 and x86-64 architectures also officially supported in later versions of Android. The unofficial Android-x86 project provided support for x86 architectures ahead of the official support. Since 2012, Android devices with Intel processors began to appear, including phones and tablets. While gaining support for 64-bit platforms, Android was first made to run on 64-bit x86 and then on ARM64. Since Android 5.0 "Lollipop", 64-bit variants of all platforms are supported in addition to the 32-bit variants. An unofficial experimental port of the operating system to the RISC-V architecture was released in 2021.
Requirements for the minimum amount of RAM for devices running Android 7.1 range in practice from 2 GB for the best hardware down to 1 GB for the most common screens. Android supports all versions of OpenGL ES and Vulkan (with version 1.1 available for some devices).
Android devices incorporate many optional hardware components, including still or video cameras, GPS, orientation sensors, dedicated gaming controls, accelerometers, gyroscopes, barometers, magnetometers, proximity sensors, pressure sensors, thermometers, and touchscreens. Some hardware components are not required, but became standard in certain classes of devices, such as smartphones, and additional requirements apply if they are present. Some other hardware was initially required, but those requirements have been relaxed or eliminated altogether. For example, as Android was developed initially as a phone OS, hardware such as microphones were required, while over time the phone function became optional. Android used to require an autofocus camera, which was relaxed to a fixed-focus camera, and the camera was later dropped as a requirement entirely when Android started to be used on set-top boxes.
In addition to running on smartphones and tablets, several vendors run Android natively on regular PC hardware with a keyboard and mouse. In addition to their availability on commercially available hardware, similar PC hardware-friendly versions of Android are freely available from the Android-x86 project, including customized Android 4.4. Using the Android emulator that is part of the Android SDK, or third-party emulators, Android can also run non-natively on x86 architectures. Chinese companies are building a PC and mobile operating system, based on Android, to "compete directly with Microsoft Windows and Google Android". The Chinese Academy of Engineering noted that "more than a dozen" companies were customizing Android following a Chinese ban on the use of Windows 8 on government PCs.
Development
Android is developed by Google until the latest changes and updates are ready to be released, at which point the source code is made available to the Android Open Source Project (AOSP), an open source initiative led by Google. The AOSP code can be found without modification on select devices, mainly the former Nexus and current Android One series of devices.
The source code is, in turn, customized by original equipment manufacturers (OEMs) to run on their hardware. Android's source code does not contain the device drivers, often proprietary, that are needed for certain hardware components. As a result, most Android devices, including Google's own, ship with a combination of free and open source and proprietary software, with the software required for accessing Google services falling into the latter category.
Update schedule
Google announces major incremental upgrades to Android on a yearly basis. The updates can be installed on devices over-the-air. The latest major release is Android 12.
The extensive variation of hardware in Android devices has caused significant delays for software upgrades and security patches. Each upgrade has had to be specifically tailored, a time- and resource-consuming process. Except for devices within the Google Nexus and Pixel brands, updates have often arrived months after the release of the new version, or not at all. Manufacturers often prioritize their newest devices and leave old ones behind. Additional delays can be introduced by wireless carriers who, after receiving updates from manufacturers, further customize Android to their needs and conduct extensive testing on their networks before sending out the upgrade. There are also situations in which upgrades are impossible due to a manufacturer not updating necessary drivers.
The lack of after-sale support from manufacturers and carriers has been widely criticized by consumer groups and the technology media. Some commentators have noted that the industry has a financial incentive not to upgrade their devices, as the lack of updates for existing devices fuels the purchase of newer ones, an attitude described as "insulting". The Guardian complained that the method of distribution for updates is complicated only because manufacturers and carriers have designed it that way. In 2011, Google partnered with a number of industry players to announce an "Android Update Alliance", pledging to deliver timely updates for every device for 18 months after its release; however, there has not been another official word about that alliance since its announcement.
In 2012, Google began de-coupling certain aspects of the operating system (particularly its central applications) so they could be updated through the Google Play store independently of the OS. One of those components, Google Play Services, is a closed-source system-level process providing APIs for Google services, installed automatically on nearly all devices running Android 2.2 "Froyo" and higher. With these changes, Google can add new system functions and update apps without having to distribute an upgrade to the operating system itself. As a result, Android 4.2 and 4.3 "Jelly Bean" contained relatively fewer user-facing changes, focusing more on minor changes and platform improvements.
HTC's then-executive Jason Mackenzie called monthly security updates "unrealistic" in 2015, and Google was trying to persuade carriers to exclude security patches from the full testing procedures. In May 2016, Bloomberg Businessweek reported that Google was making efforts to keep Android more up-to-date, including accelerated rates of security updates, rolling out technological workarounds, reducing requirements for phone testing, and ranking phone makers in an attempt to "shame" them into better behavior. As stated by Bloomberg: "As smartphones get more capable, complex and hackable, having the latest software work closely with the hardware is increasingly important". Hiroshi Lockheimer, the Android lead, admitted that "It's not an ideal situation", further commenting that the lack of updates is "the weakest link on security on Android". Wireless carriers were described in the report as the "most challenging discussions", due to their slow approval time while testing on their networks, despite some carriers, including Verizon Wireless and Sprint Corporation, already shortening their approval times. In a further effort for persuasion, Google shared a list of top phone makers measured by updated devices with its Android partners, and is considering making the list public. Mike Chan, co-founder of phone maker Nextbit and former Android developer, said that "The best way to solve this problem is a massive re-architecture of the operating system", "or Google could invest in training manufacturers and carriers 'to be good Android citizens'".
In May 2017, with the announcement of Android 8.0, Google introduced Project Treble, a major re-architect of the Android OS framework designed to make it easier, faster, and less costly for manufacturers to update devices to newer versions of Android. Project Treble separates the vendor implementation (device-specific, lower-level software written by silicon manufacturers) from the Android OS framework via a new "vendor interface". In Android 7.0 and earlier, no formal vendor interface exists, so device makers must update large portions of the Android code to move a device to a newer version of the operating system. With Treble, the new stable vendor interface provides access to the hardware-specific parts of Android, enabling device makers to deliver new Android releases simply by updating the Android OS framework, "without any additional work required from the silicon manufacturers."
In September 2017, Google's Project Treble team revealed that, as part of their efforts to improve the security lifecycle of Android devices, Google had managed to get the Linux Foundation to agree to extend the support lifecycle of the Linux Long-Term Support (LTS) kernel branch from the 2 years that it has historically lasted to 6 years for future versions of the LTS kernel, starting with Linux kernel 4.4.
In May 2019, with the announcement of Android 10, Google introduced Project Mainline to simplify and expedite delivery of updates to the Android ecosystem. Project Mainline enables updates to core OS components through the Google Play Store. As a result, important security and performance improvements that previously needed to be part of full OS updates can be downloaded and installed as easily as an app update.
Google reported rolling out new amendments in Android 12 aimed at making the use of third-party application stores easier. This announcement rectified the concerns reported regarding the development of Android apps, including a fight over an alternative in-app payment system and difficulties faced by businesses moving online because of COVID-19.
Linux kernel
Android's kernel is based on the Linux kernel's long-term support (LTS) branches. Android uses versions 4.14, 4.19 or 5.4 of the Linux kernel; the actual kernel depends on the individual device.
Android's variant of the Linux kernel has further architectural changes that are implemented by Google outside the typical Linux kernel development cycle, such as the inclusion of components like device trees, ashmem, ION, and different out of memory (OOM) handling. Certain features that Google contributed back to the Linux kernel, notably a power management feature called "wakelocks", were initially rejected by mainline kernel developers partly because they felt that Google did not show any intent to maintain its own code. Google announced in April 2010 that they would hire two employees to work with the Linux kernel community, but Greg Kroah-Hartman, the current Linux kernel maintainer for the stable branch, said in December 2010 that he was concerned that Google was no longer trying to get their code changes included in mainstream Linux. Google engineer Patrick Brady once stated in the company's developer conference that "Android is not Linux", with Computerworld adding that "Let me make it simple for you, without Linux, there is no Android". Ars Technica wrote that "Although Android is built on top of the Linux kernel, the platform has very little in common with the conventional desktop Linux stack".
In August 2011, Linus Torvalds said that "eventually Android and Linux would come back to a common kernel, but it will probably not be for four to five years". In December 2011, Greg Kroah-Hartman announced the start of Android Mainlining Project, which aims to put some Android drivers, patches and features back into the Linux kernel, starting in Linux 3.3. Linux included the autosleep and wakelocks capabilities in the 3.5 kernel, after many previous attempts at a merger. The interfaces are the same but the upstream Linux implementation allows for two different suspend modes: to memory (the traditional suspend that Android uses), and to disk (hibernate, as it is known on the desktop). Google maintains a public code repository that contains their experimental work to re-base Android off the latest stable Linux versions.
Android is a Linux distribution according to the Linux Foundation, Google's open-source chief Chris DiBona, and several journalists. Others, such as Google engineer Patrick Brady, say that Android is not Linux in the traditional Unix-like Linux distribution sense; Android does not include the GNU C Library (it uses Bionic as an alternative C library) and some other components typically found in Linux distributions.
With the release of Android Oreo in 2017, Google began to require that devices shipped with new SoCs had Linux kernel version 4.4 or newer, for security reasons. Existing devices upgraded to Oreo, and new products launched with older SoCs, were exempt from this rule.
Rooting
The flash storage on Android devices is split into several partitions, such as /system/ for the operating system itself, and /data/ for user data and application installations.
In contrast to typical desktop Linux distributions, Android device owners are not given root access to the operating system and sensitive partitions such as /system/ are read-only. However, root access can be obtained by exploiting security flaws in Android, which is used frequently by the open-source community to enhance the capabilities and customizability of their devices, but also by malicious parties to install viruses and malware.
The process of enabling root access may require the device's bootloader, which is locked by default, to be in an unlocked state. The unlocking process resets the system to factory state, erasing all user data.
Software stack
On top of the Linux kernel, there are the middleware, libraries and APIs written in C, and application software running on an application framework which includes Java-compatible libraries. Development of the Linux kernel continues independently of Android's other source code projects.
Android uses Android Runtime (ART) as its runtime environment (introduced in version 4.4), which uses ahead-of-time (AOT) compilation to entirely compile the application bytecode into machine code upon the installation of an application. In Android 4.4, ART was an experimental feature and not enabled by default; it became the only runtime option in the next major version of Android, 5.0. In versions prior to 5.0 (which are no longer supported), Android used Dalvik as a process virtual machine with trace-based just-in-time (JIT) compilation to run Dalvik "dex-code" (Dalvik Executable), which is usually translated from Java bytecode. Following the trace-based JIT principle, in addition to interpreting the majority of application code, Dalvik performed the compilation and native execution of selected frequently executed code segments ("traces") each time an application was launched.
For its Java library, the Android platform uses a subset of the now discontinued Apache Harmony project. In December 2015, Google announced that the next version of Android would switch to a Java implementation based on the OpenJDK project.
Android's standard C library, Bionic, was developed by Google specifically for Android, as a derivation of the BSD's standard C library code. Bionic itself has been designed with several major features specific to the Linux kernel. The main benefits of using Bionic instead of the GNU C Library (glibc) or uClibc are its smaller runtime footprint, and optimization for low-frequency CPUs. At the same time, Bionic is licensed under the terms of the BSD licence, which Google finds more suitable for the Android's overall licensing model.
Aiming for a different licensing model, toward the end of 2012, Google switched the Bluetooth stack in Android from the GPL-licensed BlueZ to the Apache-licensed BlueDroid. A new Bluetooth stack, called Gabeldorsche, was developed to try to fix the bugs in the BlueDroid implementation.
Android does not have a native X Window System by default, nor does it support the full set of standard GNU libraries. This made it difficult to port existing Linux applications or libraries to Android, until version r5 of the Android Native Development Kit brought support for applications written completely in C or C++. Libraries written in C may also be used in applications by injection of a small shim and usage of the JNI.
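As a hedged sketch of this mechanism, a Kotlin (or Java) class can declare an external function and load a native library built with the NDK; the package, library name and function below are placeholders, and the matching native symbol would follow the JNI naming convention (here Java_com_example_NativeBridge_versionString).

    package com.example

    // Illustrative JNI bridge: the implementation lives in a C/C++ library
    // compiled with the NDK and packaged in the APK as libexample_native.so.
    class NativeBridge {
        external fun versionString(): String  // implemented in native code

        companion object {
            init {
                System.loadLibrary("example_native")
            }
        }
    }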
In current versions of Android, "Toybox", a collection of command-line utilities (mostly for use by apps, as Android does not provide a command-line interface by default), is used (since the release of Marshmallow) replacing a similar "Toolbox" collection found in previous Android versions.
Android has another operating system, Trusty OS, within it, as a part of "Trusty" "software components supporting a Trusted Execution Environment (TEE) on mobile devices." "Trusty and the Trusty API are subject to change. [..] Applications for the Trusty OS can be written in C/C++ (C++ support is limited), and they have access to a small C library. [..] All Trusty applications are single-threaded; multithreading in Trusty userspace currently is unsupported. [..] Third-party application development is not supported in" the current version, and software running on the OS and processor for it, run the "DRM framework for protected content. [..] There are many other uses for a TEE such as mobile payments, secure banking, full-disk encryption, multi-factor authentication, device reset protection, replay-protected persistent storage, wireless display ("cast") of protected content, secure PIN and fingerprint processing, and even malware detection."
Open-source community
Android's source code is released by Google under an open source license, and its open nature has encouraged a large community of developers and enthusiasts to use the open-source code as a foundation for community-driven projects, which deliver updates to older devices, add new features for advanced users or bring Android to devices originally shipped with other operating systems. These community-developed releases often bring new features and updates to devices faster than through the official manufacturer/carrier channels, with a comparable level of quality; provide continued support for older devices that no longer receive official updates; or bring Android to devices that were officially released running other operating systems, such as the HP TouchPad. Community releases often come pre-rooted and contain modifications not provided by the original vendor, such as the ability to overclock or over/undervolt the device's processor. CyanogenMod was the most widely used community firmware, now discontinued and succeeded by LineageOS.
There are, as of August 2019, a handful of notable custom Android distributions (ROMs) of the latest Android version 9.0 Pie, which was released publicly in August 2018. See List of custom Android distributions.
Historically, device manufacturers and mobile carriers have typically been unsupportive of third-party firmware development. Manufacturers express concern about improper functioning of devices running unofficial software and the support costs resulting from this. Moreover, modified firmware such as CyanogenMod sometimes offer features, such as tethering, for which carriers would otherwise charge a premium. As a result, technical obstacles including locked bootloaders and restricted access to root permissions are common in many devices. However, as community-developed software has grown more popular, and following a statement by the Librarian of Congress in the United States that permits the "jailbreaking" of mobile devices, manufacturers and carriers have softened their position regarding third party development, with some, including HTC, Motorola, Samsung and Sony, providing support and encouraging development. As a result of this, over time the need to circumvent hardware restrictions to install unofficial firmware has lessened as an increasing number of devices are shipped with unlocked or unlockable bootloaders, similar to Nexus series of phones, although usually requiring that users waive their devices' warranties to do so. However, despite manufacturer acceptance, some carriers in the US still require that phones are locked down, frustrating developers and customers.
Device codenames
Internally, Android identifies each supported device by its device codename, a short string, which may or may not be similar to the model name used in marketing the device. For example, the device codename of the Pixel smartphone is sailfish.
The device codename is usually not visible to the end user, but is important for determining compatibility with modified Android versions. It is sometimes also mentioned in articles discussing a device, because it makes it possible to distinguish different hardware variants of a device, even if the manufacturer offers them under the same name. The device codename is available to running applications under android.os.Build.DEVICE.
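For example, a running application can read the codename and related identifiers with a few lines of Kotlin; the function name is a placeholder.

    import android.os.Build

    // Sketch: report the device codename alongside model and manufacturer.
    fun deviceIdentity(): String =
        "codename=${Build.DEVICE}, model=${Build.MODEL}, manufacturer=${Build.MANUFACTURER}"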
Security and privacy
In 2020, Google launched the Android Partner Vulnerability Initiative to improve the security of Android. They also formed an Android security team.
Common security threats
Research from security company Trend Micro lists premium service abuse as the most common type of Android malware, where text messages are sent from infected phones to premium-rate telephone numbers without the consent or even knowledge of the user. Other malware displays unwanted and intrusive advertisements on the device, or sends personal information to unauthorised third parties. Security threats on Android are reportedly growing exponentially; however, Google engineers have argued that the malware and virus threat on Android is being exaggerated by security companies for commercial reasons, and have accused the security industry of playing on fears to sell virus protection software to users. Google maintains that dangerous malware is actually extremely rare, and a survey conducted by F-Secure showed that only 0.5% of Android malware reported had come from the Google Play store.
In 2021, journalists and researchers reported the discovery of spyware called Pegasus, developed and distributed by a private company, which can be and has been used to infect both iOS and Android smartphones – often, partly via use of 0-day exploits, without the need for any user interaction or significant clues to the user – and then be used to exfiltrate data, track user locations, capture film through its camera, and activate the microphone at any time. Analysis of data traffic by popular smartphones running variants of Android found substantial by-default data collection and sharing, with no opt-out, by this pre-installed software. Neither of these issues is, or can be, addressed by security patches.
Scope of surveillance by public institutions
As part of the broader 2013 mass surveillance disclosures it was revealed in September 2013 that the American and British intelligence agencies, the National Security Agency (NSA) and Government Communications Headquarters (GCHQ), respectively, have access to the user data on iPhone, BlackBerry, and Android devices. They are reportedly able to read almost all smartphone information, including SMS, location, emails, and notes. In January 2014, further reports revealed the intelligence agencies' capabilities to intercept the personal information transmitted across the Internet by social networks and other popular applications such as Angry Birds, which collect personal information of their users for advertising and other commercial reasons. GCHQ has, according to The Guardian, a wiki-style guide of different apps and advertising networks, and the different data that can be siphoned from each. Later that week, the Finnish Angry Birds developer Rovio announced that it was reconsidering its relationships with its advertising platforms in the light of these revelations, and called upon the wider industry to do the same.
The documents revealed a further effort by the intelligence agencies to intercept Google Maps searches and queries submitted from Android and other smartphones to collect location information in bulk. The NSA and GCHQ insist their activities comply with all relevant domestic and international laws, although the Guardian stated "the latest disclosures could also add to mounting public concern about how the technology sector collects and uses information, especially for those outside the US, who enjoy fewer privacy protections than Americans."
Leaked documents published by WikiLeaks, codenamed Vault 7 and dated from 2013 to 2016, detail the capabilities of the Central Intelligence Agency (CIA) to perform electronic surveillance and cyber warfare, including the ability to compromise the operating systems of most smartphones (including Android).
Security patches
In August 2015, Google announced that devices in the Google Nexus series would begin to receive monthly security patches. Google also wrote that "Nexus devices will continue to receive major updates for at least two years and security patches for the longer of three years from initial availability or 18 months from last sale of the device via the Google Store." The following October, researchers at the University of Cambridge concluded that 87.7% of Android phones in use had known but unpatched security vulnerabilities due to lack of updates and support. Ron Amadeo of Ars Technica wrote also in August 2015 that "Android was originally designed, above all else, to be widely adopted. Google was starting from scratch with zero percent market share, so it was happy to give up control and give everyone a seat at the table in exchange for adoption. [...] Now, though, Android has around 75–80 percent of the worldwide smartphone market—making it not just the world's most popular mobile operating system but arguably the most popular operating system, period. As such, security has become a big issue. Android still uses a software update chain-of-command designed back when the Android ecosystem had zero devices to update, and it just doesn't work". Following news of Google's monthly schedule, some manufacturers, including Samsung and LG, promised to issue monthly security updates, but, as noted by Jerry Hildenbrand in Android Central in February 2016, "instead we got a few updates on specific versions of a small handful of models. And a bunch of broken promises".
In a March 2017 post on Google's Security Blog, Android security leads Adrian Ludwig and Mel Miller wrote that "More than 735 million devices from 200+ manufacturers received a platform security update in 2016" and that "Our carrier and hardware partners helped expand deployment of these updates, releasing updates for over half of the top 50 devices worldwide in the last quarter of 2016". They also wrote that "About half of devices in use at the end of 2016 had not received a platform security update in the previous year", stating that their work would continue to focus on streamlining the security updates program for easier deployment by manufacturers. Furthermore, in a comment to TechCrunch, Ludwig stated that the wait time for security updates had been reduced from "six to nine weeks down to just a few days", with 78% of flagship devices in North America being up-to-date on security at the end of 2016.
Patches to bugs found in the core operating system often do not reach users of older and lower-priced devices. However, the open-source nature of Android allows security contractors to take existing devices and adapt them for highly secure uses. For example, Samsung has worked with General Dynamics through their Open Kernel Labs acquisition to rebuild Jelly Bean on top of their hardened microvisor for the "Knox" project.
Location-tracking
Android smartphones have the ability to report the location of Wi-Fi access points, encountered as phone users move around, to build databases containing the physical locations of hundreds of millions of such access points. These databases form electronic maps to locate smartphones, allowing them to run apps like Foursquare, Google Latitude, Facebook Places, and to deliver location-based ads. Third party monitoring software such as TaintDroid, an academic research-funded project, can, in some cases, detect when personal information is being sent from applications to remote servers.
Further notable exploits
In 2018, Norwegian security firm Promon unearthed a serious Android security hole which can be exploited to steal login credentials, access messages, and track location, and which could be found in all versions of Android, including Android 10. The vulnerability came from exploiting a bug in the multitasking system, enabling a malicious app to overlay legitimate apps with fake login screens that users are not aware of when entering security credentials. Users can also be tricked into granting additional permissions to the malicious apps, which later enable them to perform various nefarious activities, including intercepting texts or calls and stealing banking credentials. Avast Threat Labs also discovered that many pre-installed apps on several hundred new Android devices contain dangerous malware and adware. Some of the preinstalled malware can commit ad fraud or even take over its host device.
In 2020, the Which? watchdog reported that more than a billion Android devices released in 2012 or earlier, which was 40% of Android devices worldwide, were at risk of being hacked. This conclusion stemmed from the fact that no security updates were issued for the Android versions below 7.0 in 2019. Which? collaborated with the AV Comparatives anti-virus lab to infect five phone models with malware, and it succeeded in each case. Google refused to comment on the watchdog's speculations.
On August 5, 2020, Twitter published a blog urging its users to update their applications to the latest version with regards to a security concern that allowed others to access direct messages. A hacker could easily use the "Android system permissions" to fetch the account credentials in order to do so. The security issue is only with Android 8 (Android Oreo) and Android 9 (Android Pie). Twitter confirmed that updating the app will restrict such practices.
Technical security features
Android applications run in a sandbox, an isolated area of the system that does not have access to the rest of the system's resources unless access permissions are explicitly granted by the user when the application is installed; however, this may not be possible for pre-installed apps. It is not possible, for example, to turn off the microphone access of the pre-installed camera app without disabling the camera completely. This also applies to Android versions 7 and 8.
Since February 2012, Google has used its Google Bouncer malware scanner to watch over and scan apps available in the Google Play store. A "Verify Apps" feature was introduced in November 2012, as part of the Android 4.2 "Jelly Bean" operating system version, to scan all apps, both from Google Play and from third-party sources, for malicious behaviour. Originally only doing so during installation, Verify Apps received an update in 2014 to "constantly" scan apps, and in 2017 the feature was made visible to users through a menu in Settings.
Before installing an application, the Google Play store displays a list of the requirements an app needs to function. After reviewing these permissions, the user can choose to accept or refuse them, installing the application only if they accept. In Android 6.0 "Marshmallow", the permissions system was changed; apps are no longer automatically granted all of their specified permissions at installation time. An opt-in system is used instead, in which users are prompted to grant or deny individual permissions to an app when they are needed for the first time. Applications remember the grants, which can be revoked by the user at any time. Pre-installed apps, however, are not always part of this approach. In some cases it may not be possible to deny certain permissions to pre-installed apps, nor to disable them. The Google Play Services app cannot be uninstalled, nor disabled. Any force stop attempt results in the app restarting itself. The new permissions model is used only by applications developed for Marshmallow using its software development kit (SDK), and older apps will continue to use the previous all-or-nothing approach. Permissions can still be revoked for those apps, though this might prevent them from working properly, and a warning is displayed to that effect.
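A minimal Kotlin sketch of this opt-in flow, assuming the AndroidX compatibility libraries and an arbitrary request code, might look as follows.

    import android.Manifest
    import android.app.Activity
    import android.content.pm.PackageManager
    import androidx.core.app.ActivityCompat
    import androidx.core.content.ContextCompat

    private const val CAMERA_REQUEST_CODE = 7  // hypothetical request code

    // Sketch: request the camera permission only when it is first needed.
    fun ensureCameraPermission(activity: Activity) {
        val granted = ContextCompat.checkSelfPermission(
            activity, Manifest.permission.CAMERA
        ) == PackageManager.PERMISSION_GRANTED

        if (!granted) {
            ActivityCompat.requestPermissions(
                activity, arrayOf(Manifest.permission.CAMERA), CAMERA_REQUEST_CODE
            )
            // The user's decision is delivered asynchronously to the activity's
            // onRequestPermissionsResult callback and can be revoked at any time.
        }
    }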
In September 2014, Jason Nova of Android Authority reported on a study by the German security company Fraunhofer AISEC in antivirus software and malware threats on Android. Nova wrote that "The Android operating system deals with software packages by sandboxing them; this does not allow applications to list the directory contents of other apps to keep the system safe. By not allowing the antivirus to list the directories of other apps after installation, applications that show no inherent suspicious behavior when downloaded are cleared as safe. If then later on parts of the app are activated that turn out to be malicious, the antivirus will have no way to know since it is inside the app and out of the antivirus’ jurisdiction". The study by Fraunhofer AISEC, examining antivirus software from Avast, AVG, Bitdefender, ESET, F-Secure, Kaspersky, Lookout, McAfee (formerly Intel Security), Norton, Sophos, and Trend Micro, revealed that "the tested antivirus apps do not provide protection against customized malware or targeted attacks", and that "the tested antivirus apps were also not able to detect malware which is completely unknown to date but does not make any efforts to hide its malignity".
In August 2013, Google announced Android Device Manager (renamed Find My Device in May 2017), a service that allows users to remotely track, locate, and wipe their Android device, with an Android app for the service released in December. In December 2016, Google introduced a Trusted Contacts app, letting users request location-tracking of loved ones during emergencies. In 2020, Trusted Contacts was shut down and the location-sharing feature rolled into Google Maps.
On October 8, 2018, Google announced new Google Play store requirements to combat over-sharing of potentially sensitive information, including call and text logs. The issue stems from the fact that many apps request permissions to access users' personal information (even if this information is not needed for the app to function) and some users unquestioningly grant these permissions. Alternatively, a permission might be listed in the app manifest as required (as opposed to optional) and the app would not install unless the user grants the permission; users can withdraw any, even required, permissions from any app in the device settings after app installation, but few users do this. Google promised to work with developers and create exceptions if their apps require Phone or SMS permissions for "core app functionality". Enforcement of the new policies started on January 6, 2019, 90 days after the policy announcement on October 8, 2018. Furthermore, Google announced a new "target API level requirement" (targetSdkVersion in the manifest) of at least Android 8.0 (API level 26) for all new apps and app updates. The API level requirement might combat the practice of app developers bypassing some permission screens by specifying early Android versions that had a coarser permission model.
Google Play Services and vendor changes
Dependence on proprietary Google Play Services and customizations added on top of the operating system by vendors who license Android from Google is causing privacy concerns.
Licensing
The source code for Android is open-source: it is developed in private by Google, with the source code released publicly when a new version of Android is released. Google publishes most of the code (including network and telephony stacks) under the non-copyleft Apache License version 2.0, which allows modification and redistribution. The license does not grant rights to the "Android" trademark, so device manufacturers and wireless carriers have to license it from Google under individual contracts. Associated Linux kernel changes are released under the copyleft GNU General Public License version 2, developed by the Open Handset Alliance, with the source code publicly available at all times. The only Android release which was not immediately made available as source code was the tablet-only 3.0 Honeycomb release. The reason, according to Andy Rubin in an official Android blog post, was that Honeycomb was rushed for production of the Motorola Xoom, and they did not want third parties creating a "really bad user experience" by attempting to put onto smartphones a version of Android intended for tablets.
Only the base Android operating system (including some applications) is open-source software, whereas most Android devices ship with a substantial amount of proprietary software, such as Google Mobile Services, which includes applications such as Google Play Store, Google Search, and Google Play Services, a software layer that provides APIs for integration with Google-provided services, among others. These applications must be licensed from Google by device makers, and can only be shipped on devices which meet its compatibility guidelines and other requirements. Custom, certified distributions of Android produced by manufacturers (such as Samsung Experience) may also replace certain stock Android apps with their own proprietary variants and add additional software not included in the stock Android operating system. With the advent of the Google Pixel line of devices, Google itself has also made specific Android features timed or permanent exclusives to the Pixel series. There may also be "binary blob" drivers required for certain hardware components in the device. The best-known fully open-source alternatives are the LineageOS distribution and microG, which acts as an open-source replacement for Google Play Services.
Richard Stallman and the Free Software Foundation have been critical of Android and have recommended the usage of alternatives such as Replicant, because drivers and firmware vital for the proper functioning of Android devices are usually proprietary, and because the Google Play Store application can forcibly install or uninstall applications and, as a result, invite non-free software. In both cases, the use of closed-source software causes the system to become vulnerable to backdoors.
It has been argued that because developers are often required to purchase the Google-branded Android license, this has turned the theoretically open system into a freemium service.
Leverage over manufacturers
Google licenses their Google Mobile Services software, along with the Android trademarks, only to hardware manufacturers for devices that meet Google's compatibility standards specified in the Android Compatibility Program document. Thus, forks of Android that make major changes to the operating system itself do not include any of Google's non-free components, stay incompatible with applications that require them, and must ship with an alternative software marketplace in lieu of Google Play Store. A prominent example of such an Android fork is Amazon's Fire OS, which is used on the Kindle Fire line of tablets, and oriented toward Amazon services. The shipment of Android devices without GMS is also common in mainland China, as Google does not do business there.
In 2014, Google also began to require that all Android devices which license the Google Mobile Services software display a prominent "Powered by Android" logo on their boot screens. Google has also enforced preferential bundling and placement of Google Mobile Services on devices, including mandated bundling of the entire main suite of Google applications, mandatory placement of shortcuts to Google Search and the Play Store app on or near the main home screen page in its default configuration, and granting a larger share of search revenue to OEMs who agree to not include third-party app stores on their devices. In March 2018, it was reported that Google had begun to block "uncertified" Android devices from using Google Mobile Services software, and display a warning indicating that "the device manufacturer has preloaded Google apps and services without certification from Google". Users of custom ROMs can register their device ID to their Google account to remove this block.
Some stock applications and components in AOSP code that were formerly used by earlier versions of Android, such as Search, Music, Calendar, and the location API, were abandoned by Google in favor of non-free replacements distributed through Play Store (Google Search, Google Play Music, and Google Calendar) and Google Play Services, which are no longer open-source. Moreover, open-source variants of some applications also exclude functions that are present in their non-free versions. These measures are likely intended to discourage forks and encourage commercial licensing in line with Google requirements, as the majority of the operating system's core functionality is dependent on proprietary components licensed exclusively by Google, and it would take significant development resources to develop an alternative suite of software and APIs to replicate or replace them. Apps that do not use Google components would also be at a functional disadvantage, as they can only use APIs contained within the OS itself. In turn, third-party apps may have dependencies on Google Play Services.
Members of the Open Handset Alliance, which include the majority of Android OEMs, are also contractually forbidden from producing Android devices based on forks of the OS; in 2012, Acer Inc. was forced by Google to halt production on a device powered by Alibaba Group's Aliyun OS with threats of removal from the OHA, as Google deemed the platform to be an incompatible version of Android. Alibaba Group disputed the allegations, arguing that the OS was a distinct platform from Android (primarily using HTML5 apps), but incorporated portions of Android's platform to allow backwards compatibility with third-party Android software. Indeed, the devices did ship with an application store which offered Android apps; however, the majority of them were pirated.
Reception
Android received a lukewarm reaction when it was unveiled in 2007. Although analysts were impressed with the respected technology companies that had partnered with Google to form the Open Handset Alliance, it was unclear whether mobile phone manufacturers would be willing to replace their existing operating systems with Android. The idea of an open-source, Linux-based development platform sparked interest, but there were additional worries about Android facing strong competition from established players in the smartphone market, such as Nokia and Microsoft, and rival Linux mobile operating systems that were in development. These established players were skeptical: Nokia was quoted as saying "we don't see this as a threat", and a member of Microsoft's Windows Mobile team stated "I don't understand the impact that they are going to have."
Since then Android has grown to become the most widely used smartphone operating system and "one of the fastest mobile experiences available". Reviewers have highlighted the open-source nature of the operating system as one of its defining strengths, allowing companies such as Nokia (Nokia X family), Amazon (Kindle Fire), Barnes & Noble (Nook), Ouya, Baidu and others to fork the software and release hardware running their own customised version of Android. As a result, it has been described by technology website Ars Technica as "practically the default operating system for launching new hardware" for companies without their own mobile platforms. This openness and flexibility is also present at the level of the end user: Android allows extensive customisation of devices by their owners and apps are freely available from non-Google app stores and third party websites. These have been cited as among the main advantages of Android phones over others.
Despite Android's popularity, including an activation rate three times that of iOS, there have been reports that Google has not been able to leverage their other products and web services successfully to turn Android into the money maker that analysts had expected. The Verge suggested that Google is losing control of Android due to the extensive customization and proliferation of non-Google apps and services; for example, Amazon's Kindle Fire line uses Fire OS, a heavily modified fork of Android which does not include or support any of Google's proprietary components, and requires that users obtain software from its competing Amazon Appstore instead of Play Store. In 2014, in an effort to improve prominence of the Android brand, Google began to require that devices featuring its proprietary components display an Android logo on the boot screen.
Android has suffered from "fragmentation", a situation where the variety of Android devices, in terms of both hardware variations and differences in the software running on them, makes the task of developing applications that work consistently across the ecosystem harder than on rival platforms such as iOS, where hardware and software vary less. For example, according to data from OpenSignal in July 2013, there were 11,868 models of Android devices, numerous screen sizes and eight Android OS versions simultaneously in use, while the large majority of iOS users have upgraded to the latest iteration of that OS. Critics such as Apple Insider have asserted that fragmentation via hardware and software pushed Android's growth through large volumes of low-end, budget-priced devices running older versions of Android. They maintain this forces Android developers to write for the "lowest common denominator" to reach as many users as possible, who have too little incentive to make use of the latest hardware or software features only available on a smaller percentage of devices. However, OpenSignal, who develops both Android and iOS apps, concluded that although fragmentation can make development trickier, Android's wider global reach also increases the potential reward.
Market share
Android is the most used operating system on phones in virtually all countries, with some countries, such as India, having over 96% market share. On tablets, usage is more evenly split, with iOS slightly more popular globally.
Research company Canalys estimated in the second quarter of 2009 that Android had a 2.8% share of worldwide smartphone shipments. By May 2010, Android had a 10% worldwide smartphone market share, overtaking Windows Mobile, whilst in the US Android held a 28% share, overtaking iPhone OS. By the fourth quarter of 2010, its worldwide share had grown to 33% of the market, becoming the top-selling smartphone platform, overtaking Symbian. In the US it became the top-selling platform in April 2011, overtaking BlackBerry OS with a 31.2% smartphone share, according to comScore.
By the third quarter of 2011, Gartner estimated that more than half (52.5%) of the smartphone sales belonged to Android. By the third quarter of 2012 Android had a 75% share of the global smartphone market according to the research firm IDC.
In July 2011, Google said that 550,000 Android devices were being activated every day, up from 400,000 per day in May, and more than 100 million devices had been activated with 4.4% growth per week. In September 2012, 500 million devices had been activated with 1.3 million activations per day. In May 2013, at Google I/O, Sundar Pichai announced that 900 million Android devices had been activated.
Android market share varies by location. In July 2012, 52% of "mobile subscribers aged 13+" in the United States were using Android, with the figure reaching 90% in China. During the third quarter of 2012, Android's worldwide smartphone shipment market share was 75%, with 750 million devices activated in total. In April 2013, Android had 1.5 million activations per day. 48 billion application ("app") installations had been performed from the Google Play store, and by September 2013, one billion Android devices had been activated.
By one count, the Google Play store had over 3 million Android applications published, and apps had been downloaded more than 65 billion times. The operating system's success has made it a target for patent litigation as part of the so-called "smartphone wars" between technology companies.
Android devices account for more than half of smartphone sales in most markets, including the US, while "only in Japan was Apple on top" (September–November 2013 numbers). At the end of 2013, over 1.5 billion Android smartphones had been sold in the four years since 2010, making Android the most-sold phone and tablet OS. Three billion Android smartphones were estimated to have been sold by the end of 2014 (including previous years). According to the research company Gartner, Android-based devices have outsold all contenders every year since 2012. In 2013, Android outsold Windows by a ratio of 2.8:1, or by 573 million units. Android has the largest installed base of all operating systems; since 2013, devices running it have also sold more than Windows, iOS and Mac OS X devices combined.
According to StatCounter, which tracks only use for browsing the web, Android has been the most popular mobile operating system since August 2013. Android is the most popular operating system for web browsing in India and several other countries (e.g. virtually all of Asia, with Japan and North Korea as exceptions). According to StatCounter, Android is the most used mobile operating system in all African countries, and it stated that "mobile usage has already overtaken desktop in several countries including India, South Africa and Saudi Arabia", with virtually all countries in Africa having done so already (seven countries, including Egypt, being exceptions); in Ethiopia and Kenya, for example, mobile usage (including tablets) is at 90.46%, with Android alone accounting for 75.81% of all use there.
While Android phones in the Western world almost always include Google's proprietary code (such as Google Play) in the otherwise open-source operating system, Google's proprietary code and trademark is increasingly not used in emerging markets; "The growth of AOSP Android devices goes way beyond just China [..] ABI Research claims that 65 million devices shipped globally with open-source Android in the second quarter of [2014], up from 54 million in the first quarter"; depending on country, percent of phones estimated to be based only on AOSP source code, forgoing the Android trademark: Thailand (44%), Philippines (38%), Indonesia (31%), India (21%), Malaysia (24%), Mexico (18%), Brazil (9%).
According to a January 2015 Gartner report, "Android surpassed a billion shipments of devices in 2014, and will continue to grow at a double-digit pace in 2015, with a 26 percent increase year over year." This made it the first time that any general-purpose operating system has reached more than one billion end users within a year: by reaching close to 1.16 billion end users in 2014, Android shipped over four times more than iOS and OS X combined, and over three times more than Microsoft Windows. Gartner expected the whole mobile phone market to "reach two billion units in 2016", including Android. Describing the statistics, Farhad Manjoo wrote in The New York Times that "About one of every two computers sold today is running Android. [It] has become Earth's dominant computing platform."
According to an estimate by Statista, Android smartphones had an installed base of 1.8 billion units in 2015, which was 76% of the estimated total number of smartphones worldwide. Android has the largest installed base of any mobile operating system and, since 2013, has been the highest-selling operating system overall, with sales in 2012, 2013 and 2014 close to the installed base of all PCs.
In the second quarter of 2014, Android's share of the global smartphone shipment market was 84.7%, a new record. This had grown to 87.5% worldwide market share by the third quarter of 2016, leaving main competitor iOS with 12.1% market share.
According to an April 2017 StatCounter report, Android overtook Microsoft Windows to become the most popular operating system for total Internet usage. It has maintained the plurality since then.
In September 2015, Google announced that Android had 1.4 billion monthly active users. This changed to 2 billion monthly active users in May 2017.
Adoption on tablets
Despite its success on smartphones, Android tablet adoption was initially slow, though it later caught up with the iPad in most countries. One of the main causes was a chicken-and-egg situation: consumers were hesitant to buy an Android tablet due to a lack of high-quality tablet applications, but developers were hesitant to spend time and resources developing tablet applications until there was a significant market for them. The content and app "ecosystem" proved more important than hardware specifications as the selling point for tablets. Due to the lack of Android tablet-specific applications in 2011, early Android tablets had to make do with existing smartphone applications that were ill-suited to larger screen sizes, whereas the dominance of Apple's iPad was reinforced by the large number of tablet-specific iOS applications.
Despite app support being in its infancy, a considerable number of Android tablets, like the Barnes & Noble Nook (alongside those using other operating systems, such as the HP TouchPad and BlackBerry PlayBook), were rushed out to market in an attempt to capitalize on the success of the iPad. InfoWorld has suggested that some Android manufacturers initially treated their first tablets as a "Frankenphone business", a short-term low-investment opportunity achieved by placing a smartphone-optimized Android OS (before Android 3.0 Honeycomb for tablets was available) on a device while neglecting the user interface. This approach, as with the Dell Streak, failed to gain market traction with consumers and also damaged the early reputation of Android tablets. Furthermore, several Android tablets such as the Motorola Xoom were priced the same as or higher than the iPad, which hurt sales. An exception was the Amazon Kindle Fire, which relied upon lower pricing as well as access to Amazon's ecosystem of applications and content.
This began to change in 2012, with the release of the affordable Nexus 7 and a push by Google for developers to write better tablet applications. According to International Data Corporation, shipments of Android-powered tablets surpassed iPads in Q3 2012.
As of the end of 2013, over 191.6 million Android tablets had been sold in the three years since 2011. This made Android tablets the most-sold type of tablet in 2013, surpassing iPads in the second quarter of 2013.
According to StatCounter's web use statistics, Android tablets represent the majority of tablet devices used in Africa (70%) and South America (65%), while accounting for less than half elsewhere, e.g. Europe (44%), Asia (44%), North America (34%) and Oceania/Australia (18%). There are countries on all continents where Android tablets are the majority, for example, Mexico.
In March 2016, Galen Gruman of InfoWorld stated that Android devices could be a "real part of your business [..] there's no longer a reason to keep Android at arm's length. It can now be as integral to your mobile portfolio as Apple's iOS devices are". A year earlier, Gruman had stated that Microsoft's own mobile Office apps were "better on iOS and Android" than on Microsoft's own Windows 10 devices.
Platform information
Just before the release of Android 12, Android 11, the then-most-recent version, was the most popular Android version on both smartphones and tablets.
Android 11 was most popular on smartphones at 31.8%, with Android 10 at 30.0%, giving the two versions together over 60% of the share. Usage of Pie 9.0 and newer, i.e. the versions still supported with security updates, stood at 77% (83% including Oreo 8.1); the remaining users were no longer receiving security updates. Android 11 was the most-used version in many countries, ranging from the United States to India, while in virtually all other countries (e.g. China) Android 10 was the most popular version.
On tablets, the latest version, Android 11, was most popular at 21%, having overtaken Android 9.0 Pie in July 2021; Pie was second at 15% (after peaking at over 20%). Usage of Pie 9.0 and newer, i.e. supported versions, was at 45% on Android tablets, or 51.1% including the until-recently-supported Oreo 8.1. The usage share varied considerably by country: Android 9.0 Pie was the single most-used version in the United States and the United Kingdom at 31.64%, while the latest version, Android 11, was most widespread in India, Canada, Australia, most European countries and elsewhere around the world, and Oreo 8.1 was the most-used version in China.
At the same point, 66% of devices had support for Vulkan (47% on the newer Vulkan 1.1), the successor to OpenGL, while 91.5% of devices supported OpenGL ES 3.0 or higher (the remaining 8.5% used version 2.0), with 73.5% on the latest version, OpenGL ES 3.2.
Application piracy
In general, paid Android applications can easily be pirated. In a May 2012 interview with Eurogamer, the developers of Football Manager stated that the ratio of pirated players vs legitimate players was 9:1 for their game Football Manager Handheld. However, not every developer agreed that piracy rates were an issue; for example, in July 2012 the developers of the game Wind-up Knight said that piracy levels of their game were only 12%, and most of the piracy came from China, where people cannot purchase apps from Google Play.
In 2010, Google released a tool for validating authorized purchases for use within apps, but developers complained that this was insufficient and trivial to crack. Google responded that the tool, especially its initial release, was intended as a sample framework for developers to modify and build upon depending on their needs, not as a finished piracy solution. Android "Jelly Bean" introduced the ability for paid applications to be encrypted, so that they may work only on the device for which they were purchased.
Legal issues
The success of Android has made it a target for patent and copyright litigation between technology companies, with both Android and Android phone manufacturers having been involved in numerous patent lawsuits and other legal challenges.
Patent lawsuit with Oracle
On August 12, 2010, Oracle sued Google over claimed infringement of copyrights and patents related to the Java programming language. Oracle originally sought damages up to $6.1 billion, but this valuation was rejected by a United States federal judge who asked Oracle to revise the estimate. In response, Google submitted multiple lines of defense, counterclaiming that Android did not infringe on Oracle's patents or copyright, that Oracle's patents were invalid, and several other defenses. They said that Android's Java runtime environment is based on Apache Harmony, a clean room implementation of the Java class libraries, and an independently developed virtual machine called Dalvik. In May 2012, the jury in this case found that Google did not infringe on Oracle's patents, and the trial judge ruled that the structure of the Java APIs used by Google was not copyrightable. The parties agreed to zero dollars in statutory damages for a small amount of copied code. On May 9, 2014, the Federal Circuit partially reversed the district court ruling, ruling in Oracle's favor on the copyrightability issue, and remanding the issue of fair use to the district court.
In December 2015, Google announced that the next major release of Android (Android Nougat) would switch to OpenJDK, which is the official open-source implementation of the Java platform, instead of using the now-discontinued Apache Harmony project as its runtime. Code reflecting this change was also posted to the AOSP source repository. In its announcement, Google claimed this was part of an effort to create a "common code base" between Java on Android and other platforms. Google later admitted in a court filing that this was part of an effort to address the disputes with Oracle, as its use of OpenJDK code is governed under the GNU General Public License (GPL) with a linking exception, and that "any damages claim associated with the new versions expressly licensed by Oracle under OpenJDK would require a separate analysis of damages from earlier releases". In June 2016, a United States federal court ruled in favor of Google, stating that its use of the APIs was fair use.
In April 2021, the United States Supreme Court ruled that Google's use of the Java APIs was within the bounds of fair use, reversing the Federal Circuit Appeals Court ruling and remanding the case for further hearing. The majority opinion began with the assumption that the APIs may be copyrightable, and thus proceeded with a review of the factors that contributed to fair use.
Anti-competitive challenges in Europe
In 2013, FairSearch, a lobbying organization supported by Microsoft, Oracle and others, filed a complaint regarding Android with the European Commission, alleging that its free-of-charge distribution model constituted anti-competitive predatory pricing. The Free Software Foundation Europe, whose donors include Google, disputed the Fairsearch allegations. On April 20, 2016, the EU filed a formal antitrust complaint against Google based upon the FairSearch allegations, arguing that its leverage over Android vendors, including the mandatory bundling of the entire suite of proprietary Google software, hindering the ability of competing search providers to be integrated into Android, and barring vendors from producing devices running forks of Android, constituted anti-competitive practices. In August 2016, Google was fined US$6.75 million by the Russian Federal Antimonopoly Service (FAS) under similar allegations by Yandex. The European Commission issued its decision on July 18, 2018, determining that Google had conducted three operations related to Android that were in violation of antitrust regulations: bundling Google's search and Chrome as part of Android, blocking phone manufacturers from using forked versions of Android, and establishing deals with phone manufacturers and network providers to exclusively bundle the Google search application on handsets (a practice Google ended by 2014). The EU fined Google €4.34 billion (about US$5 billion) and required the company to end this conduct within 90 days. Google filed its appeal of the ruling in October 2018, though it did not ask for any interim measures to delay the onset of the conduct requirements.
On October 16, 2018, Google announced that it would change its distribution model for Google Mobile Services in the EU, since part of its revenues streams for Android which came through use of Google Search and Chrome were now prohibited by the EU's ruling. While the core Android system remains free, OEMs in Europe would be required to purchase a paid license to the core suite of Google applications, such as Gmail, Google Maps and the Google Play Store. Google Search will be licensed separately, with an option to include Google Chrome at no additional cost atop Search. European OEMs can bundle third-party alternatives on phones and devices sold to customers, if they so choose. OEMs will no longer be barred from selling any device running incompatible versions of Android in Europe.
Others
In addition to lawsuits against Google directly, various proxy wars have been waged against Android indirectly by targeting manufacturers of Android devices, with the effect of discouraging manufacturers from adopting the platform by increasing the costs of bringing an Android device to market. Both Apple and Microsoft have sued several manufacturers for patent infringement, with Apple's ongoing legal action against Samsung being a particularly high-profile case. In January 2012, Microsoft said they had signed patent license agreements with eleven Android device manufacturers, whose products account for "70 percent of all Android smartphones" sold in the US and 55% of the worldwide revenue for Android devices. These include Samsung and HTC. Samsung's patent settlement with Microsoft included an agreement to allocate more resources to developing and marketing phones running Microsoft's Windows Phone operating system. Microsoft has also tied its own Android software to patent licenses, requiring the bundling of Microsoft Office Mobile and Skype applications on Android devices to subsidize the licensing fees, while at the same time helping to promote its software lines.
Google has publicly expressed its frustration with the current patent landscape in the United States, accusing Apple, Oracle and Microsoft of trying to take down Android through patent litigation, rather than innovating and competing with better products and services. In August 2011, Google purchased Motorola Mobility for US$12.5 billion, which was viewed in part as a defensive measure to protect Android, since Motorola Mobility held more than 17,000 patents. In December 2011, Google bought over a thousand patents from IBM.
Turkey's competition authority began investigating the default search engine arrangements on Android in 2017, leading to a US$17.4 million fine in September 2018 and, in November 2019, a further fine of 0.05 percent of Google's revenue per day when Google did not meet the requirements. In December 2019, Google stopped issuing licenses for new Android phone models sold in Turkey.
Other uses
Google has developed several variations of Android for specific use cases, including Android Wear, later renamed Wear OS, for wearable devices such as wrist watches, Android TV for televisions, Android Things for smart or Internet of things devices and Android Automotive for cars. Additionally, by providing infrastructure that combines dedicated hardware and dedicated applications running on regular Android, Google has opened up the platform for use in particular scenarios, such as the Android Auto app for cars and Daydream, a virtual reality platform.
The open and customizable nature of Android allows device makers to use it on other electronics as well, including laptops, netbooks, and desktop computers, cameras, headphones, home automation systems, game consoles, media players, satellites, routers, printers, payment terminals, automated teller machines, and robots. Additionally, Android has been installed and run on a variety of less-technical objects, including calculators, single-board computers, feature phones, electronic dictionaries, alarm clocks, refrigerators, landline telephones, coffee machines, bicycles, and mirrors.
Ouya, a video game console running Android, became one of the most successful Kickstarter campaigns, crowdfunding US$8.5 million for its development, and was later followed by other Android-based consoles, such as Nvidia's Shield Portable, an Android device in a video game controller form factor.
In 2011, Google demonstrated "Android@Home", a home automation technology which uses Android to control a range of household devices including light switches, power sockets and thermostats. Prototype light bulbs were announced that could be controlled from an Android phone or tablet, but Android head Andy Rubin was cautious to note that "turning a lightbulb on and off is nothing new", pointing to numerous failed home automation services. Google, he said, was thinking more ambitiously and the intention was to use their position as a cloud services provider to bring Google products into customers' homes.
Parrot unveiled an Android-based car stereo system known as Asteroid in 2011, followed by a successor, the touchscreen-based Asteroid Smart, in 2012. In 2013, Clarion released its own Android-based car stereo, the AX1. In January 2014, at the Consumer Electronics Show (CES), Google announced the formation of the Open Automotive Alliance, a group including several major automobile makers (Audi, General Motors, Hyundai, and Honda) and Nvidia, which aims to produce Android-based in-car entertainment systems for automobiles, "[bringing] the best of Android into the automobile in a safe and seamless way."
Android comes preinstalled on a few laptops (a similar functionality of running Android applications is also available in Google's Chrome OS) and can also be installed on personal computers by end users. On those platforms Android provides additional functionality for physical keyboards and mice, together with the "Alt-Tab" key combination for switching applications quickly with a keyboard. In December 2014, one reviewer commented that Android's notification system is "vastly more complete and robust than in most environments" and that Android is "absolutely usable" as one's primary desktop operating system.
In October 2015, The Wall Street Journal reported that Android will serve as Google's future main laptop operating system, with the plan to fold Chrome OS into it by 2017. Google's Sundar Pichai, who led the development of Android, explained that "mobile as a computing paradigm is eventually going to blend with what we think of as desktop today." Also, back in 2009, Google co-founder Sergey Brin himself said that Chrome OS and Android would "likely converge over time." Lockheimer, who replaced Pichai as head of Android and Chrome OS, responded to this claim with an official Google blog post stating that "While we've been working on ways to bring together the best of both operating systems, there's no plan to phase out Chrome OS [which has] guaranteed auto-updates for five years". That is unlike Android where support is shorter with "EOL dates [being..] at least 3 years [into the future] for Android tablets for education".
At Google I/O in May 2016, Google announced Daydream, a virtual reality platform that relies on a smartphone and provides VR capabilities through a virtual reality headset and controller designed by Google itself. The platform is built into Android starting with Android Nougat, differentiating from standalone support for VR capabilities. The software is available for developers, and was released in 2016.
Mascot
The mascot of Android is a green android robot, as related to the software's name. Although it has no official name, the Android team at Google reportedly call it "Bugdroid".
It was designed by then-Google graphic designer Irina Blok on November 5, 2007, when Android was announced. Contrary to reports that she was tasked with a project to create an icon, Blok confirmed in an interview that she independently developed it and made it open source. The robot design was initially not presented to Google, but it quickly became commonplace in the Android development team, with numerous variations of it created by developers who liked the figure, as it was free under a Creative Commons license. Its popularity amongst the development team eventually led to Google adopting it as an official icon as part of the Android logo when it launched to consumers in 2008.
See also
Comparison of mobile operating systems
Index of Android OS articles
List of Android smartphones
References
Explanatory notes
Citations
External links
Android Developers
Android Open Source Project
2008 software
Alphabet Inc.
ARM operating systems
Cloud clients
Computer-related introductions in 2008
Computing platforms
Embedded Linux distributions
Free mobile software
Google acquisitions
Google software
Linux distributions without systemd
Mobile Linux
Operating system families
Smartphones
Software using the Apache license
Tablet operating systems
Linux distributions |
751045 | https://en.wikipedia.org/wiki/Eric%20Schlosser | Eric Schlosser | Eric Matthew Schlosser (born August 17, 1959) is an American journalist and author known for his investigative journalism, such as in his books Fast Food Nation (2001), Reefer Madness (2003), and Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety (2013).
Biography
Schlosser was born in New York City, New York; he spent his childhood there and in Los Angeles, California. His parents are Judith (née Gassner) and Herbert Schlosser, a former Wall Street lawyer who turned to broadcasting later in his career, becoming the president of NBC in 1974 and later the vice president of RCA.
Schlosser graduated with an A.B. in history from Princeton University in 1982 after completing a 148-page senior thesis titled "Academic Freedom during the McCarthy Era: Anti-Communism, Conformity and Princeton." He then earned a graduate degree in British Imperial History from Oriel College, University of Oxford. He tried playwriting, writing two plays, Americans (1985) and We the People (2007). He is married to Shauna Redford, daughter of actor Robert Redford.
Journalism and books
Schlosser started his career as a journalist with The Atlantic Monthly in Boston, Massachusetts. He quickly gained recognition for his investigative pieces, earning two awards within two years of joining the staff: he won the National Magazine Award for his reporting in his two-part series "Reefer Madness" and "Marijuana and the Law" (The Atlantic Monthly, August and September, 1994), and he won the Sidney Hillman Foundation award for his article "In the Strawberry Fields" (The Atlantic Monthly, November 19, 1995).
Schlosser wrote Fast Food Nation (2001), an exposé on the unsanitary and discriminatory practices of the fast food industry. Fast Food Nation evolved from a two-part article in Rolling Stone. Schlosser helped adapt his book into a 2006 film directed by Richard Linklater. The film opened November 19, 2006. Chew On This (2006), co-written with Charles Wilson, is an adaptation of the book for younger readers. Fortune called Fast Food Nation the "Best Business Book of the Year" in 2001.
His 2003 book Reefer Madness discusses the history and current trade of marijuana, the use of migrant workers in California strawberry fields, and the American pornography industry and its history. William F. Buckley gave Reefer Madness a favorable review, as did BusinessWeek.
Schlosser's book Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety was published in September 2013. It focuses on the 1980 Damascus Titan missile explosion, a non-nuclear explosion of a Titan II missile near Damascus, Arkansas. The New Yorker's Louis Menand called it "excellent" and "hair-raising" and said that "Command and Control is how nonfiction should be written." It was a finalist for the 2014 Pulitzer Prize for History.
He has been working on a book on the American prison system, which has been nearly 10 years in the making.
Films
Schlosser appeared in an interview for the DVD of Morgan Spurlock's Super Size Me, having a one-on-one discussion with the filmmaker about the fast-food industry. He did not appear in the film itself. He was interviewed by Franny Armstrong in 2005 and is a feature interviewee in her film McLibel. He co-produced Food, Inc. (2008), with Robert Kenner.
Schlosser also served as co-executive producer on the 2007 film There Will Be Blood. In 2014, he was an executive producer of the farmworker documentary Food Chains, a credit he shared with Eva Longoria. They both won a James Beard Foundation Award for their roles. Schlosser also shared a director credit for the multimedia installation entitled "the bomb", an experimental film about nuclear weaponry coupled with a live score by The Acid.
References
External links
Ubben Lecture at DePauw University
Eric Schlosser at Steven Barclay Agency
1959 births
Living people
Alumni of Oriel College, Oxford
American male journalists
The Atlantic (magazine) people
Dalton School alumni
Journalists from New York City
Princeton University alumni
Journalists from California
20th-century American journalists
20th-century American male writers
21st-century American non-fiction writers
21st-century American journalists
21st-century American male writers
American male non-fiction writers |
18171857 | https://en.wikipedia.org/wiki/History%20of%20the%20floppy%20disk | History of the floppy disk | A floppy disk is a disk storage medium composed of a disk of thin and flexible magnetic storage medium encased in a rectangular plastic carrier. It is read and written using a floppy disk drive (FDD). Floppy disks were an almost universal data format from the 1970s into the 1990s, used for primary data storage as well as for backup and data transfers between computers.
In 1967, at an IBM facility in San Jose (CA), work began on a drive that led to the world's first floppy disk and disk drive. It was introduced into the market in an 8-inch format in 1972. The more conveniently sized 5¼-inch disks were introduced in 1976, and became almost universal on dedicated word processing systems and personal computers. This format was more slowly replaced by the 3½-inch format, first introduced in 1982. There was a significant period where both were popular. A number of other variant sizes were introduced over time, with limited market success.
Floppy disks remained a popular medium for nearly 40 years, but their use was declining by the mid- to late 1990s. The introduction of high speed computer networking and formats based on the new NAND flash technique (like USB flash drives and memory cards) led to the eventual disappearance of the floppy disk as a standard feature of microcomputers, with a notable point in this conversion being the introduction of the floppy-less iMac in 1998. After 2000, floppy disks were increasingly rare and used primarily with older hardware and especially with legacy industrial computer equipment.
The 8-inch disk
IBM's decision in the late 1960s to use semiconductor memory as the writeable control store for future systems and control units created a requirement for an inexpensive and reliable read-only device, and an associated medium, to store and ship the control store's microprogram and to load the microprogram into the control store at system power-on. The objective was a read-only device costing less than $200 and a medium costing less than $5.
IBM San Jose's Direct Access Storage Product Manager, Alan Shugart, assigned the job to David L. Noble, who tried to develop a new-style tape for the purpose, but without success. The project was reassigned to Donald L. Wartner, 23FD Disk Drive manager, and Herbert E. Thompson, 23FD Disk manager, along with design engineers Warren L. Dalziel, Jay Brent Nilson, and Ralph Flores; and that team developed the IBM 23FD Floppy Disk Drive System (code name Minnow). The disk is a read-only, flexible diskette called the "memory disk" and holding 80 kilobytes of data. Initially the disk was bare, but dirt became a serious problem so they enclosed it in a plastic envelope lined with fabric that would remove dust particles. The Floppy Disk Patent #3,668,658 was issued on June 6, 1972 with named inventors Ralph Flores and Herbert E. Thompson. The Floppy Disk Drive Patent #3,678,481 was issued July 18, 1972 with named inventors Warren L. Dalziel, Jay. B. Nilson, and Donald L. Wartner. IBM introduced the diskette commercially in 1971.
The new device first shipped in 1971 as the 23FD, the control store load device of the 2835 Storage Control Unit, and then as a standard part of most System 370 processing units and other IBM products. Internally IBM used another device, code-named Mackerel, to write floppy disks for distribution to the field.
Other suppliers recognized the opportunity for a read/write FDD in applications such as key entry and data logging. Shugart, by then at Memorex, shipped the Memorex 650 in 1972, the first commercially available read-write floppy disk drive. The 650 had a data capacity of 175 kB, with 50 tracks and 8 sectors per track. The Memorex disk was hard sectored; that is, it contained 8 sector holes (plus one index hole) at the outer diameter (outside data track 00) to synchronize the beginning of each data sector and the beginning of a track. Most early 8" disks were hard sectored, meaning that they had a fixed number of disk sectors (usually 8, 16, or 32), marked by physical holes punched around the disk hub, and the drive required the correct media type for its controller.
IBM was developing a read/write FDD but did not see a market opportunity for such a device, and so came close to cancelling the project. A chance encounter in San Jose between IBM's Jack Harker and Don Stephenson, the site manager of IBM's General Systems Division in Rochester, Minnesota, who needed a product to compete with Mohawk's key-to-tape system, led to the production of IBM's first read/write FDD, the 33FD, code-named "IGAR". The 33FD first shipped in May 1973 as a component of the 3740 Data Entry System, designed to directly replace IBM's punched card ("keypunch") data entry machines. The medium sold separately as "Diskette 1". The new system used a soft sector recording format that stored nearly 250 kB on a disk. Drives supporting this format were offered by a number of manufacturers and soon became common for moving smaller amounts of data. This disk format became known as the Single Sided Single Density or SSSD format. It was designed to hold the same amount of data as 3000 punch cards.
In 1973, Shugart founded Shugart Associates which went on to become the dominant manufacturer of 8-inch floppy disk drives. Its SA800 became the industry standard for form factor and interface.
In 1976, media supplier Information Terminals Corporation enhanced resilience further by adding a Teflon coating to the magnetic disk itself.
When the first microcomputers were being developed in the 1970s, the 8-inch floppy found a place on them as one of the few "high speed, mass storage" devices that were even remotely affordable to the target market (individuals and small businesses). The first microcomputer operating system, CP/M, originally shipped on 8-inch disks. However, the drives were still expensive, typically costing more than the computer they were attached to in early days, so most machines of the era used cassette tape instead.
In 1976, IBM introduced the 500 KB Double Sided Single Density (DSSD) format, and in 1977 IBM introduced the 1–1.2 MB Double Sided Double Density (DSDD) format.
Other 8-inch floppy disk formats such as the Burroughs 1 MB unit failed to achieve any market presence.
At the end of 1978 the typical floppy disk price per piece was $5 to $8. Sales in 1978 for all types of drives and media were expected to reach $135 million for media and $875 million for drives.
The 8-inch floppy disk drive interface standard, as developed from the Shugart Associates drives, involved a 50-pin interface and a spindle motor that ran directly from the AC line and spun constantly. Other later models used a DC motor, with corresponding changes to the interface to start and stop the motor.
The 5¼-inch minifloppy
In a 1976 meeting, An Wang of Wang Laboratories informed Jim Adkisson and Don Massaro of Shugart Associates that the 8-inch format was simply too large and expensive for the desktop word processing machines he was developing at the time, and argued for a US$100 drive.
According to Massaro, Adkisson proposed a smaller size and began working with cardboard mockups before the Wang meeting. George Sollman suggests the size was the average of existing tape drives of the era. It is an urban legend that the physical size came about when they met with Wang at a bar in Boston and, when asked what size would be appropriate, Wang pointed to a cocktail napkin; in fact, there was no such meeting.
The new drive of this size stored 98.5 KB, later increased to 110 KB by adding five tracks.
The 5¼ drive was considerably less expensive than 8-inch drives, and soon started appearing on CP/M machines.
Shugart's initial 5.25" drive was the 35-track, single-sided SA-400, which was widely used in many early microcomputers, and which introduced the 34-pin interface that would become an industry standard. It could be used with either a hard or soft sectored controller, and storage capacity was listed as 90k (single density) or 113k (double density). The drive went on sale in late 1976 at a list price of $400, with a box of ten disks at $60. The new, smaller disk format was taken up quickly, and by 1978 ten different manufacturers were producing 5¼-inch drives. At one point, Shugart was producing 4,000 drives a day, but their ascendancy was short-lived; the company's fortunes declined in the early 1980s. Part of this was due to their failure to develop a reliable 80-track drive, increasing competition, and the loss of several lucrative contracts—Apple by 1982 had switched to using cheaper Alps drive mechanisms in their computers, and IBM chose Tandon as their sole supplier of disk drives for the PC. By 1977 Shugart had been purchased by Xerox, who closed the operations in 1985 and sold the brand to a third party.
In 1978, I.T.C. (later called Verbatim) had approximately 35 percent of the estimated $135 million floppy disk market and sold 5¼-inch disks in large quantities for $1.50 each.
Apple purchased bare SA-400 drive mechanisms for their Disk II drive, which was then equipped with a custom Apple controller board and the faceplate stamped with the Apple logo. Steve Wozniak developed a recording scheme known as Group Coded Recording which allowed 140k of storage, well above the standard 90–113k, although the price of double density controllers fell not long after the Disk II's introduction. GCR recording used software means of detecting the track and sector being accessed, hence there was no need of hard sectored disks or even the index hole.
Commodore also elected to use GCR recording (although a different variation not compatible with Apple's format) in their disk drive line. Tandy, however, used industry-standard FM on the TRS-80's disk drives, with stock Shugart SA-400s, and so had a mere 85k of storage.
These early drives read only one side of the disk, leading to the popular budget approach of cutting a second write-enable slot and index hole into the carrier envelope and flipping it over (thus, the “flippy disk”) to use the other side for additional storage. This was considered risky by some as single sided disks were only certified by the manufacturer for single-sided use. The reasoning was that, when flipped, the disk would spin in the opposite direction inside its cover, so some of the dirt that had been collected by the fabric lining in the previous rotations would be picked up by the disk and dragged past the read/write head.
Although hard sectored disks were used on some early 8" drives prior to the IBM 33FD (May 1973), they were never widely used in 5¼-inch form, although North Star clung to the format until they went bankrupt in 1984.
Tandon introduced a double-sided drive in 1978, doubling the capacity, and this new “double sided double density” (DSDD) format increased capacity to 360 KB.
By 1979, there were also 77-track 5¼-inch drives available, mostly used in CP/M and other professional computers, and also found in some of Commodore's disk drive line.
By the early 1980s, falling prices of computer hardware and technological advances led to the near-universal adoption of soft sector, double density disk formats. In addition, more compact half-height disk drives began to appear, as did double-sided drives and 80-track drives known as "quad density", although their cost meant that single-sided drives remained the standard for most home computers.
For most of the 1970s and 1980s, the floppy drive was the primary storage device for word processors and microcomputers. Since these machines had no hard drive, the OS was usually booted from one floppy disk, which was then removed and replaced by another one containing the application. Some machines using two disk drives (or one dual drive) allowed the user to leave the OS disk in place and simply change the application disks as needed, or to copy data from one floppy to another. In the early 1980s, “quad density” 96-track-per-inch drives appeared, increasing the capacity to 720 KB. RX50 was another proprietary format, used by Digital Equipment Corporation's Rainbow 100, DECmate II, and Professional 300 Series. It held 400 KB on a single side by using 96 tracks per inch and cramming 10 sectors per track.
Floppy disks were supported on IBM's PC DOS and Microsoft's MS-DOS from their beginning on the original IBM PC. With version 1.0 of PC DOS (1981), only single-sided 160 KB floppies were supported. Version 1.1 the next year saw support expand to double-sided 320 KB disks. Finally, in 1983, DOS 2.0 supported 9 sectors per track rather than 8, providing 180 KB on a (formatted) single-sided disk and 360 KB on a double-sided disk.
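These capacities follow directly from the disk geometry. The sketch below, a back-of-the-envelope illustration assuming the standard PC layout of 40 tracks per side and 512-byte sectors (figures not stated in the text above), reproduces the quoted numbers:

```kotlin
// Formatted capacity in kilobytes for a given floppy geometry.
fun formattedCapacityKB(
    sides: Int,
    tracksPerSide: Int,
    sectorsPerTrack: Int,
    bytesPerSector: Int = 512
): Int = sides * tracksPerSide * sectorsPerTrack * bytesPerSector / 1024

fun main() {
    println(formattedCapacityKB(1, 40, 8)) // 160 KB, PC DOS 1.0
    println(formattedCapacityKB(2, 40, 8)) // 320 KB, PC DOS 1.1
    println(formattedCapacityKB(1, 40, 9)) // 180 KB, DOS 2.0 single-sided
    println(formattedCapacityKB(2, 40, 9)) // 360 KB, DOS 2.0 double-sided
}
```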
In 1984, IBM introduced the 5¼ high density disk format with its new IBM AT machines. The 5¼ HD drive was essentially a scaled-down 8" drive, using the same rotation speed and bit rate, and it provided almost three times as much storage as the 360k format, but had compatibility issues with the older drives due to the narrower read/write head.
Except for labeling, 5¼-inch high-density disks were externally identical to their double-density counterparts. This led to an odd situation wherein the drive itself was unable to determine the density of the disk inserted except by reading the disk media to determine the format. It was therefore possible to use a high-density drive to format a double-density disk to the higher capacity. This usually appeared to work (sometimes reporting a small number of bad sectors), at least for a time. The problem was that the high-density format was made possible by the creation of a new high-coercivity oxide coating (after soft sector formatting became standard, previous increases in density were largely enabled by improvements in head technology; up until that point, the media formulation had essentially remained the same since 1976). In order to format or write to this high-coercivity media, the high-density drive switched its heads into a mode using a stronger magnetic field. When these stronger fields were written onto a double-density disk (having lower coercivity media), the strongly magnetized oxide particles would begin to affect the magnetic charge of adjacent particles. The net effect is that the disk would begin to erase itself. On the other hand, the opposite procedure (attempting to format an HD disk as DD) would fail almost every time, as the high-coercivity media would not retain data written by the low-power DD field. High-density 3½-inch disks avoided this problem by the addition of a hole in the disk cartridge so that the drive could determine the appropriate density. However, the gap in coercivity between the 3½-inch DD and HD formats, 665 and 720 oersteds, is much narrower than that between the 5¼-inch formats, 300 and 600 oersteds, and consequently it was possible to format a 3½-inch DD disk as HD with no apparent problems.
By the end of the 1980s, the 5¼-inch disks had been superseded by the 3½-inch disks. Though 5¼-inch drives were still available, as were disks, they faded in popularity as the 1990s began. The main community of users was primarily those who still owned 1980s legacy machines (PCs running DOS or home computers) that had no 3½-inch drive; the advent of Windows 95 (not even sold in stores in a 5¼-inch version; a coupon had to be obtained and mailed in) and subsequent phaseout of stand-alone MS-DOS with version 6.22 forced many of them to upgrade their hardware. On most new computers, the 5¼-inch drives were optional equipment. By the mid-1990s, the drives had virtually disappeared as the 3½-inch disk became the predominant floppy disk.
The "Twiggy" disk
During the development of the Apple Lisa, Apple developed a disk format codenamed Twiggy, and officially known as FileWare. While basically similar to a standard 5¼-inch disk, the Twiggy disk had an additional set of write windows on the top of the disk with the label running down the side. The drive was also present in prototypes of the original Apple Macintosh computer, but was removed in both the Mac and later versions of the Lisa in favor of the 3½-inch floppy disk from Sony. The drives were notoriously unreliable and Apple was criticized for needlessly diverging from industry standards.
The 3-inch compact floppy disk
Throughout the early 1980s, the limitations of the 5¼-inch format were starting to become clear. Originally designed to be smaller and more practical than the 8-inch format, the 5¼-inch system was itself too large, and as the quality of the recording media grew, the same amount of data could be placed on a smaller surface. Another problem was that the 5¼-inch disks were simply scaled-down versions of the 8-inch disks, which had never really been engineered for ease of use. The thin folded-plastic shell allowed the disk to be easily damaged through bending, and allowed dirt to get onto the disk surface through the opening.
A number of solutions were developed, with drives at 2-inch, 2½-inch, 3-inch and 3½-inch (50, 60, 75 and 90 mm), all being offered by various companies. They all shared a number of advantages over the older format, including a small form factor and a rigid case with a slideable write-protect catch. The almost-universal use of the 5¼-inch format made it very difficult for any of these new formats to gain any significant market share. Some of these formats included Dysan and Shugart's 3¼-inch floppy disk, the later ubiquitous Sony 3½-inch disk and the 3-inch format:
the 3-inch BRG MCD-1, developed in 1973 by Marcell Jánosi, a Hungarian inventor at the Budapest Radiotechnic Company (Budapesti Rádiótechnikai Gyár).
the AmDisk-3 Micro-Floppy-disk cartridge system in December 1982, which was originally designed for use with the Apple II Disk II interface card
the Mitsumi Quick Disk 3-inch floppies.
The 3-inch floppy drive itself was manufactured by Hitachi, Matsushita and Maxell; outside this "network", only Teac is known to have produced drives. Similarly, only three manufacturers of media (Maxell, Matsushita and Tatung) are known (sometimes also branded Yamaha, Amsoft, Panasonic, Schneider, Tandy, Godexco and Dixons), though "no-name" disks of questionable quality have been seen in circulation.
Amstrad included a 3-inch single-sided, double-density (180 KB) drive in their CPC and some models of PCW. The PCW 8512 included a double-sided, quad-density (720 KB) drive as the second drive, and later models, such as the PCW 9512, used quad-density even for the first drive. The single-sided double density (180 KB) drive was "inherited" by the ZX Spectrum +3 computer after Amstrad bought the rights from Sinclair. The Oric-1 and Atmos systems from Oric International also used the 3-inch floppy drives, originally shipping with the Atmos, but also supported on the older Oric-1.
Since all 3-inch media were double-sided in nature, single-sided drive owners were able to flip the disk over to use the other side. The sides were termed "A" and "B" and were completely independent, but single-sided drive units could only access the upper side at one time.
The disk format itself had no more capacity than the more popular (and cheaper) 5¼-inch floppies. Each side of a double-density disk held 180 KB for a total of 360 KB per disk, and 720 KB for quad-density disks. Unlike 5¼-inch or 3½-inch disks, the 3-inch disks were designed to be reversible and sported two independent write-protect switches. It was also more reliable thanks to its hard casing.
3-inch drives were also used on a number of exotic and obscure CP/M systems such as the Tatung Einstein and occasionally on MSX systems in some regions. Other computers to have used this format include the lesser-known Gavilan Mobile Computer and Matsushita's National Mybrain 3000. The Yamaha MDR-1 also used 3-inch drives.
The main problem with this format was its high price, due to the quite elaborate and complex case mechanism. What finally tipped the scales, however, was Sony convincing Apple Computer in 1984 to use 3½-inch drives in the Macintosh 128K model, effectively making the 3½-inch drive a de facto standard.
Mitsumi's "Quick Disk" 3-inch floppies
Another 3-inch (75 mm) format was Mitsumi's Quick Disk format. The Quick Disk format is referred to in various size references: 2.8-inch, 3-inch×3-inch and 3-inch×4-inch. Mitsumi offered this as OEM equipment, expecting their VAR customers to customize the packaging for their own particular use; disks thus vary in storage capacity and casing size. The Quick Disk uses 2.8-inch magnetic media, break-off write-protection tabs (one for each side), and contains a see-through hole near the center spindle (used to ensure spindle clamping). Nintendo packaged the 2.8-inch magnetic media in a 3-inch×4-inch housing, while others packaged the same media in a 3-inch×3-inch square housing.
The Quick Disk's most successful use was in Nintendo's Famicom Disk System (FDS). The FDS package of Mitsumi's Quick Disk used a 3-inch×4-inch plastic housing called the "Disk Card". Most FDS disks did not have cover protection to prevent media contamination, but a later special series of five games did include a protective shutter.
Mitsumi's "3-inch" Quick Disk media were also used in a 3-inch×3-inch housing for many Smith Corona word processors. The Smith Corona disks are confusingly labeled "DataDisk 2.8-inch", presumably referring to the size of the medium inside the hard plastic case.
The Quick Disk was also used in several MIDI keyboards and MIDI samplers of the mid-1980s. A non-inclusive list includes: the Roland S-10 and MKS100 samplers, the Korg SQD1, the Korg SQD8 MIDI sequencer, Akai's 1985 model MD280 drive for the S-612 MIDI sampler, Akai's X7000 / S700 (rack version) and X3700, the Roland S-220, and the Yamaha MDF1 MIDI disk drive (intended for their DX7/21/100/TX7 synthesizers, RX11/21/21L drum machines, and QX1, QX21 and QX5 MIDI sequencers).
As the cost of adding 3½-inch drives was still quite high in the 1980s, the Mitsumi Quick Disk competed as a lower-cost alternative packaged in several now obscure 8-bit computer systems. Another non-inclusive list of Quick Disk versions: QDM-01, QDD (Quick Disk Drive) on French Thomson micro-computers, in the Casio QD-7 drive, in a peripheral for the Sharp MZ-700 & MZ-800 system, in the DPQ-280 Quickdisk for the Daewoo/Dynadata MSX1 DPC-200, in the Dragon 32/64 machine, in the Crescent Quick Disk 128, 128i and 256 peripherals for the ZX Spectrum, and in the Triton Quick Disk peripheral also for the ZX Spectrum.
The World of Spectrum FAQ reveals that the drives did come in different sizes: 128 to 256 kB in Crescent's incarnation, while the Triton system used a density of 4410 bits per inch, a data transmission rate of 101.6 kbit/s and a 2.8-inch double-sided disk holding up to 20 sectors per side at 2.5 kB per sector, for up to 100 kB per disk. The Quick Disk as used in the Famicom Disk System holds 64 kB of data per side, requiring a manual turn-over to access the second side.
Unusually, the Quick Disk utilizes "a continuous linear tracking of the head and thus creates a single spiral track along the disk similar to a record groove." This has led some to compare it more to a "tape-stream" unit than to what is typically thought of as a random-access disk drive.
3½-inch format
In 1981, Sony introduced their 3½-inch floppy disk cartridge (90.0 mm × 94.0 mm) having a single-sided unformatted capacity of 218.8 KB and a formatted capacity of 161.2 KB. A double-sided version was available in 1982. This initial Sony design was similar to the other smaller-than-5¼-inch designs but somewhat simpler in construction. The first computer to use this format was Sony's SMC-70 of 1982. Other than Hewlett-Packard's HP-150 of 1983 and Sony's MSX computers that year, this format suffered from a similar fate as the other new formats; the 5¼-inch format simply had too much market share.
Things changed dramatically in 1982 when the Microfloppy Industry Committee (MIC), a consortium ultimately of 23 media companies, agreed upon a 3½-inch media specification based upon but differing from the original Sony design. The first single-sided drives compatible with this new media specification shipped in early 1983, followed immediately in 1984 by double-sided compatible versions. In 1984, Apple Computer selected the format for their new Macintosh computers. Then, in 1985, Atari adopted it for their new ST line, and Commodore for their new Amiga. By 1988, the 3½-inch was outselling the 5¼-inch. In South Africa, the 3½-inch format was generally called a stiffy disk, to distinguish it from the flexible 5¼-inch format.
The term "-inch" or "3.5-inch" disk is and was rounded from the 90 mm actual dimension of one side of the rectangular cartridge. The actual disk diameter is .
The 3½-inch disks had, by way of their rigid case's slide-in-place metal cover, the significant advantage of being much better protected against unintended physical contact with the disk surface than 5¼-inch disks when the disk was handled outside the disk drive. When the disk was inserted, a part inside the drive moved the metal cover aside, giving the drive's read/write heads the necessary access to the magnetic recording surfaces. Adding the slide mechanism resulted in a slight departure from the previous square outline. The irregular, rectangular shape had the additional merit that it made it impossible to insert the disk sideways by mistake as had indeed been possible with earlier formats.
3.5" drives included several other advantages over the older drive types, including not requiring a terminating resistor pack, and no need of an index hole.
The shutter mechanism was not without its problems, however. On old or roughly treated disks, the shutter could bend away from the disk. This made it vulnerable to being ripped off completely (which does not damage the disk itself but does leave it much more vulnerable to dust), or worse, catching inside a drive and possibly either getting stuck inside or damaging the drive.
Evolution
Like the 5¼-inch, the 3½-inch disk underwent an evolution of its own. When Apple introduced the Macintosh in 1984, it used single-sided 3½-inch disk drives with an advertised capacity of 400 KB. The encoding technique used by these drives was known as GCR, or Group Coded Recording (similar recording methods were used by Commodore on its 5¼-inch drives and Sirius Systems Technology in its Victor 9000 non-PC-compatible MS-DOS machine).
Higher capacities
Somewhat later, PC-compatible machines began using single-sided 3½-inch disks with an advertised capacity of 360 KB (the same as a double-sided 5¼-inch disk), and a different, incompatible recording format called MFM (Modified Frequency Modulation). GCR and MFM drives (and their formatted disks) were incompatible, although the physical disks were the same. In 1986, Apple introduced double-sided, 800 KB disks, still using GCR, and soon after, IBM began using 720 KB double-sided double-density MFM disks in PCs like the IBM PC Convertible. IBM PC compatibles adopted it too, while the Amiga used MFM encoding on the same disks to give a capacity of 1 MB (880 KB available once formatted).
HD
An MFM-based, "high-density" format, displayed as "HD" on the disks themselves and typically advertised as "1.44 MB" was introduced in 1987; the most common formatted capacity was 1,474,560 bytes (or 1440 KiB), double that of the 720 KiB variant. The term "1.44 MB" is a misnomer caused by dividing the size of 1440 kibibytes (1440 * 1024 bytes) by 1000, thus converting 1440 KiB to "1.44 MB" - where the MB stands for neither a megabyte (1,000,000 bytes) nor a mebibyte (1,048,576 bytes) but instead 1,024,000 bytes. Correctly dividing 1440 KiB by 1024 gives a size of 1.40625 MiB. These HD disks had an extra hole in the case on the opposite side of the write-protect notch. IBM used this format on their PS/2 series introduced in 1987. Apple started using "HD" in 1988, on the Macintosh IIx, and the HD floppy drive soon became universal on virtually all Macintosh and PC hardware. Apple's FDHD (Floppy Disk High Density) drive was capable of reading and writing both GCR and MFM formatted disks, and thus made it relatively easy to exchange files with PC users. Apple later marketed this drive as the SuperDrive. Amiga included "HD" floppy drives relatively late, with releasing of Amiga 4000 in 1992, and was able to store 1760 KB on it, with ability in software to read/write PC's 1440 KB/720 KB formats.
ED
Another advance in the oxide coatings allowed for a new "extra-high density" ("ED") format at 2880 KB introduced in 1990 on the NeXTcube, NeXTstation and IBM PS/2 model 57. However, by this time the increased capacity was too small an advance over the HD format and it never became widely used.
See also
Floppy disk variants
History of hard disk drives
History of IBM magnetic disk drives
List of floppy disk formats
Notes
References
History of computing hardware
Floppy disk computer storage
History
Legacy hardware
History of Silicon Valley |
1952148 | https://en.wikipedia.org/wiki/DVD%20Decrypter | DVD Decrypter | DVD Decrypter is a deprecated software application for Microsoft Windows that can create backup disk images of the DVD-Video structure of DVDs. It can be used to make a copy of any DVD protected with Content Scrambling System (CSS). The program can also record images to disc — functionality that the author has now incorporated into a separate product called ImgBurn. The software also allows a copy of a region-specific DVD to be made region free. It also removes Macrovision content protection, CSS, region codes, and user operation prohibition.
Legality in the United States
As DVD Decrypter facilitates the removal of copy restrictions, certain uses may be illegal under the United States Digital Millennium Copyright Act unless the copies made are covered by the fair-use doctrine, and in some cases they may be illegal even when made under fair use. In countries without similar laws there may not be any legal restrictions.
On June 6, 2005, the developer, Lightning UK!, announced via the CD Freaks website that he received a cease and desist letter from Macrovision. He later stated it was within his best interests to comply with the letter, and stopped development of the program. By June 7, 2005, a mirror site was up, which allowed people to download the final version (3.5.4.0). On November 27, 2005, Afterdawn.com, a Finnish website, announced that it complied with a letter received from Macrovision demanding that DVD Decrypter be taken down from its site.
Under United States federal law, making a backup copy of a DVD-Video or an audio CD by a consumer is legal under fair-use protection. However, this conflicts with the Digital Millennium Copyright Act's prohibition on circumventing copy-protection measures.
In the "321" case, Federal District Judge Susan Illston of the Northern District of California ruled that the backup copies made with software such as DVD Decrypter are legal but that distribution of the software used to make them is illegal.
In 2010 the Librarian of Congress instituted a DMCA exemption which protects circumvention of CSS protection under certain circumstances. This exemption expired in 2013.
On October 4, 2005, Lightning UK! continued the development of the burning engine used by DVD Decrypter in his new tool, ImgBurn. However, for legal reasons, ImgBurn does not have the ability to circumvent copy protections of encrypted DVDs.
See also
DeCSS
DVD ripper (list of various related programs)
References
DVD rippers
United States Internet case law
Windows-only software
Discontinued software |
50934497 | https://en.wikipedia.org/wiki/Ikee | Ikee | Ikee was a worm that spread by Secure Shell connections between jailbroken iPhones. It was discovered in 2009 and changed the infected phone's wallpaper to a photo of Rick Astley. The code from Ikee was later used to make a more malicious piece of iPhone malware called Duh.
History
iPhone owners in Australia reported that their smartphones had been infected by a worm that changed the iPhone wallpaper to Rick Astley, a 1980s pop singer. It affected smartphones whose owners did not change the default password after installing SSH. Once the Ikee worm infected a device, it would find other vulnerable iPhones on the mobile network and infect them as well. The worm did not affect users who had not jailbroken their iPhone or installed SSH on it. The worm did nothing more than change the infected user's lock screen wallpaper. The source code of the Ikee worm says it was written by Ikex.
Two weeks after the release of Ikee, a malicious worm dubbed "Duh", built on Ikee's code, was discovered. It acted as a botnet, communicating with a command-and-control center, and it also attempted to steal banking data from ING Direct.
See also
Brain Test
Dendroid (Malware)
Computer virus
File binder
Individual mobility
Malware
Trojan horse (computing)
Worm (computing)
Mobile operating system
References
IOS malware
Computer worms
Software distribution
Privilege escalation exploits
Online advertising
Privacy |
2305534 | https://en.wikipedia.org/wiki/Grid%20Security%20Infrastructure | Grid Security Infrastructure | The Grid Security Infrastructure (GSI), formerly called the Globus Security Infrastructure, is a specification for secret, tamper-proof, delegatable communication between software in a grid computing environment. Secure, authenticatable communication is enabled using asymmetric encryption.
Authentication
Authentication is performed using digital signature technology (see digital signatures for an explanation of how this works); secure authentication allows resources to restrict data access to only those who should have it.
Delegation
Authentication introduces a problem: often a service will have to retrieve data from a resource independently of the user, and in order to do this it must be supplied with the appropriate privileges. GSI allows for the creation of delegated privileges: a new key is created, marked as delegated and signed by the user; it is then possible for a service to act on behalf of the user to fetch data from the resource.
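The delegation step described above can be loosely illustrated with generic public-key primitives. This is a conceptual sketch only, using the Python cryptography package rather than any actual GSI API (GSI itself expresses delegation with X.509 proxy certificates); the variable names are invented for the example.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The user's long-term key pair, and a fresh key pair that will serve as the delegated credential.
user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
delegated_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The user signs the delegated public key, vouching that whoever holds
# delegated_key may act on the user's behalf.
delegated_public_pem = delegated_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
signature = user_key.sign(delegated_public_pem, padding.PKCS1v15(), hashes.SHA256())

# A resource that trusts the user's public key can check the delegation;
# verify() raises InvalidSignature if the signature does not match.
user_key.public_key().verify(signature, delegated_public_pem, padding.PKCS1v15(), hashes.SHA256())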
Security Mechanisms
Communications may be secured using a combination of methods:
Transport Layer Security (TLS) can be used to protect the communication channel from eavesdropping or man-in-the-middle attacks.
Message-Level Security can be used (although currently it is much slower than TLS).
References
A Security Infrastructure for Computational Grids by Ian Foster et al.
A National-Scale Authentication Infrastructure by Randy Butler et al.
External links
Overview of the Grid Security Infrastructure
Grid computing
Cryptographic protocols |
61222518 | https://en.wikipedia.org/wiki/National%20Centre%20for%20Computing%20Education | National Centre for Computing Education | The National Centre for Computing Education is a government-funded initiative, offering teacher training and resources for computer science. The National Centre is delivered by a consortium of STEM Learning, Raspberry Pi Foundation and British Computer Society (BCS).
Function
The National Centre for Computing Education provides training in computing education for primary and secondary schools and colleges, including bursary-funded face-to-face courses around England, and free online courses, delivered through FutureLearn. It also offers a repository of teaching resources for computing through its website, teachcomputing.org.
The NCCE programme is organised around a network of school-based Computing Hubs, geographically distributed around the country. These Hubs ensure that the programme is school-led and reflects the needs of teachers on the ground.
History
The centre was set up following Digital Skills for the UK Economy, a January 2016 government report produced by the Department for Business, Innovation and Skills (BIS) that highlighted the digital skills gap in the UK economy. The report drew on research carried out by the UK Commission for Employment and Skills (UKCES), which itself closed in 2017.
Funding of £84m was announced in the November 2017 United Kingdom budget to upskill around 8,000 computer science teachers. Support would come from the Behavioural Insights Team and FutureLearn (a MOOC platform from the OU).
The centre was created in November 2018 with £84m of government funding.
Chair
Simon Peyton Jones FRS, of Microsoft Research, was appointed as the organisation's chairman in March 2019. It has been created by STEM Learning at the University of York, the BCS (British Computer Society) and the Raspberry Pi Foundation. It is funded by the Department for Education. The Department of Computer Science and Technology, University of Cambridge will also provide assistance.
Network of Computing Hubs
North East England
Cardinal Hume Catholic School, Gateshead in Tyne and Wear
Carmel College, Darlington, with Carmel College Sixth Form
Kings Priory School, Tynemouth in Tyne and Wear
North West England
Bishop Rawstorne Church of England Academy, Croston in Lancashire
The Fallibroome Academy, Cheshire
Priestley College, Warrington
Yorkshire and the Humber
All Saints Roman Catholic School, York, South Bank, York
Bingley Grammar School, West Yorkshire
Harrogate Grammar School, North Yorkshire
East Midlands
Beauchamp College, Oadby in Leicestershire
West Midlands
Bishop Challoner Catholic College, Kings Heath
The Chase School, Malvern, Worcestershire
City of Stoke-on-Trent Sixth Form College, Stoke-on-Trent in Staffordshire
East of England
Chesterton Community College, Chesterton, Cambridge
Dereham Neatherd High School, Dereham in Norfolk
Saffron Walden County High School, Saffron Walden in Essex
Sandringham School, Marshalswick, St Albans in Hertfordshire
St Clement Danes School, Chorleywood in Hertfordshire
West Suffolk College, Suffolk
Westcliff High School for Girls, Southend-on-Sea in Essex
Greater London
Newstead Wood School, Orpington in Bromley
South East England
The Mathematics and Science Learning Centre at the University of Southampton is a delivery partner.
Dartford Grammar School, Kent
Denbigh School, Milton Keynes
Langley Grammar School, Langley, Berkshire
Maidstone Grammar School for Girls, Kent
Park House School, Newbury, Berkshire
South West England
The Castle School
Exeter Mathematics School, a sixth-form in Exeter in Devon
Pate's Grammar School, Cheltenham
Truro and Penwith College, Cornwall
See also
Micro Bit
National Centre for Excellence in the Teaching of Mathematics in south Sheffield
References
2018 establishments in England
British Computer Society
Computer science education in the United Kingdom
Department for Education
Educational organisations based in England
Information technology organisations based in the United Kingdom
Organisations based in York
Research institutes in North Yorkshire
Science and technology in North Yorkshire
University of York |
5113898 | https://en.wikipedia.org/wiki/AVCHD | AVCHD | AVCHD (Advanced Video Coding High Definition) is a file-based format for the digital recording and playback of high-definition video. The format packages H.264 video and Dolby AC-3 audio into an MPEG transport stream, with a set of constraints designed around camcorder use.
Developed jointly by Sony and Panasonic, the format was introduced in 2006 primarily for use in high definition consumer camcorders. Related specifications include the professional variants AVCCAM and NXCAM.
Favorable comparisons of AVCHD against HDV and XDCAM EX solidified perception of AVCHD as a format acceptable for professional use. Both Panasonic and Sony released the first consumer AVCHD camcorders in spring of 2007. Panasonic released the first AVCHD camcorder aimed at the professional market in 2008, though it was nothing more than the (by then discontinued) FLASH card consumer model rebadged with a different model number.
In 2011 the AVCHD specification was amended to include 1080-line 50-frame/s and 60-frame/s modes (AVCHD Progressive) and stereoscopic video (AVCHD 3D). The new video modes require double the data rate of previous modes.
AVCHD and its logo are trademarks of Sony and Panasonic.
Overview
For video compression, AVCHD uses the H.264/MPEG-4 AVC standard, supporting a variety of standard, high definition, and stereoscopic (3D) video resolutions. For audio compression, it supports both Dolby AC-3 (Dolby Digital) and uncompressed linear PCM audio. Stereo and multichannel surround (5.1) are both supported.
Aside from recorded audio and video, AVCHD includes many user-friendly features to improve media presentation: menu navigation, simple slide shows and subtitles. The menu navigation system is similar to DVD-video, allowing access to individual videos from a common intro screen. Slide shows are prepared from a sequence of AVC still frames, and can be accompanied by a background audio track. Subtitles are used in some camcorders to timestamp the recordings.
Audio, video, subtitle, and ancillary streams are multiplexed into an MPEG transport stream and stored on media as binary files. Usually, memory cards and HDDs use the FAT file system, while optical discs employ UDF or ISO 9660.
At the file system level, the structure of AVCHD is derived from the Blu-ray Disc specification, but is not identical to it. In particular, it uses the legacy "8.3" file naming convention, while Blu-ray Discs utilize long filenames (this may be caused by the fact that FAT implementations utilizing long file names are patented by Microsoft and are licensed on a per-unit-sold basis). Another difference is the location of the BDMV directory, which contains media files. On a DVD-based camcorder the BDMV directory is placed at the root level, as on the Blu-ray Disc. On the HDD-based Canon HG10 camcorder the BDMV directory is located in the AVCHD directory, which is placed at the root level. Solid-state Panasonic and Canon camcorders nest the AVCHD directory inside the PRIVATE directory. Following a standard agreed upon by many still camera manufacturers, solid-state camcorders have a root-level DCIM directory for still images.
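The three BDMV locations described above can be probed programmatically; the following is a minimal sketch in Python (the function name find_bdmv and the mount_point argument are invented for the example, and the candidate paths are simply the layouts listed in this section):
import os

def find_bdmv(mount_point):
    """Return the path of the BDMV directory on an AVCHD volume, or None.
    Probes the layouts described above: BDMV at the root (DVD camcorders),
    AVCHD/BDMV (some HDD camcorders) and PRIVATE/AVCHD/BDMV (solid-state camcorders)."""
    candidates = ("BDMV",
                  os.path.join("AVCHD", "BDMV"),
                  os.path.join("PRIVATE", "AVCHD", "BDMV"))
    for rel in candidates:
        path = os.path.join(mount_point, rel)
        if os.path.isdir(path):
            return path
    return None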
AVCHD is compatible with the Blu-ray format and can be authored without re-encoding on Blu-rays or DVDs, though not all Blu-ray Disc players are compatible with AVCHD video authored on DVD media, a format known as AVCHD disc.
AVCHD recordings can be transferred to a computer by connecting the camcorder via the USB connection. Removable media like SDHC and Memory Stick cards or DVDs can be read on a computer directly. Copying files from an AVCHD camcorder or from removable media can be performed faster than from a tape-based camcorder, because the transfer speed is not limited by realtime playback.
Just as editing DVCPRO HD and HDV video once demanded an expensive high-end computer, AVCHD editing software requires powerful machines. Compared to HDV, AVCHD requires 2-4x the processing power for realtime playback, placing a greater burden on the computer's CPU and graphics card. Improvements in multi-core computing and graphics processor acceleration bring AVCHD playback to mainstream desktops and laptops.
Video formats
AVCHD supports a variety of video resolutions and scanning methods, which was further extended with the 2011 amendment of the specification. The licensing body of the specification defines a variety of labels for products compliant with specific features.
Most AVCHD camcorders support only a handful of the video and audio formats allowed in the AVCHD standard.
Interlaced video
AVCHD supports both standard definition (AVCHD-SD) and high definition (AVCHD 1080i) interlaced video. AVCHD 1080i is available on most AVCHD camcorders. For some models this is the only recording mode offered.
AVCHD-SD is used in the shoulder-mount Panasonic HDC-MDH1, as well as in its North American AG-AC7 cousin. A successor model, the AG-AC8, is also capable of recording in AVCHD-SD mode. Several models from JVC, like the consumer camcorders GZ-HM650, GZ-HM670 and GZ-HM690 as well as the professional camcorder JVC GY-HM70, can record AVCHD-SD video. AVCHD-SD is not compatible with consumer DVD players, because it employs AVC video encoding instead of MPEG-2 Part 2. AVCHD-SD can be played on a Blu-ray Disc player without re-encoding.
Interlaced video had been originally designed for watching on a cathode-ray tube television set. Material recorded for interlaced presentation may exhibit combing or ghosting when it is rescaled, filmed out or watched on a computer or another progressive-scan device without proper deinterlacing.
Some AVCHD 1080i camcorders can capture progressive video and record it within interlaced stream borrowing techniques from television industry. In particular, Progressive segmented frame (PsF) is utilized in some Panasonic (25p Digital Cinema), Canon (PF25, PF30) and Sony camcorders. The 2:3 pulldown technique is used in some 60 Hz versions of Canon (PF24) and Panasonic (24p Digital Cinema) camcorders for recording 24-frame/s progressive video. Most editing tools treat progressive video recorded within an interlaced stream as interlaced, though some editing systems and most standalone Blu-ray Disc players are capable of recognizing the pulldown pattern to recover the original frames using the process known as inverse telecine.
Progressive-scan video
Since the very beginning, the AVCHD specification has supported a 720-line progressive recording mode at frame rates of 24 and 60 frames/s for 60 Hz models and 50 frames/s for 50 Hz models. Frame rates of 25 frames/s and 30 frames/s are not directly available in 720p mode, but can be simulated with frame repeating: either every frame is recorded twice, or a special flag in the video stream instructs the decoder to play every frame twice, to adhere to an output rate of 50 or 60 frames/s.
Many of the digital compact cameras made by Panasonic, such as the DMC-ZS3/DMC-TZ7, DMC-FT1, DMC-FZ35/DMC-FZ38, and DMC-ZS-7/TZ-10 offer 720p video recording with effective frame rate of 25 or 30 frames/s in a format called AVCHD Lite (see below).
Until the advent of AVCHD Progressive mode, native progressive-scan video for 1080-line resolution had been available only in 24 frames/s variant. In 2010, Panasonic introduced a new lineup of consumer AVCHD camcorders with 1080-line 50p/60p progressive-scan mode (frame rate depending on region). Panasonic advised that not all players that support AVCHD playback could play 1080-line 50p/60p video. In 2011, this mode was officially included into the AVCHD specification as part of 2.0 addendum, and has been called AVCHD Progressive. This mode uses the same AVCHD folder structure and container files for storing video, with the maximum bitrate of 28 Mbit/s. In 2011, Sony introduced consumer and professional AVCHD models capable of AVCHD Progressive recording. In 2012 JVC announced the GY-HMQ10 model, which also can record AVCHD Progressive video.
Audio formats
Most AVCHD camcorders record audio using Dolby Digital (AC-3) compression scheme. Stereo and multichannel audio is supported. Audio data rate can range from 64 kbit/s to 640 kbit/s. In practice, data rates of 256 kbit/s and 384 kbit/s have been observed.
Some professional models allow recording uncompressed linear PCM audio.
Media
AVCHD specification allows using recordable DVDs, memory cards, non-removable solid-state memory and hard disk drives as recording media.
DVD
When the AVCHD standard was first announced, recordable DVD was the only recording medium. To reduce camcorder size, manufacturers opted for an 8 cm disc, sometimes called miniDVD. Recording capacity of an 8 cm disc ranges from 1.4 GB for a single-sided single layer disc to 5.2 GB for a double-sided double layer disc.
Pros:
DVDs are familiar to most consumers, thus considered user-friendly.
Recordable DVDs are relatively cheap.
Recorded disc can be played back in most Blu-ray Disc players.
Discs can be used for long-term storage of recorded video.
Cons:
Some argue that the longevity of recordable DVDs may be shorter than expected.
Rewritable DVDs cost more than write-once discs.
DVDs must be "finalized" to play back on set-top players (though DVD-RWs can be unfinalized again).
Double-layer recording is less robust than single-layer recording.
To use both sides of a double-sided disc it must be flipped over, because camcorders have pickup from one side only.
AVCHD DVDs can only be played back on DVD/Blu-ray players specifically designed to do so.
The AVCHD specification limits data rate for DVD-based AVCHD camcorders to 18 Mbit/s, but no DVD-based AVCHD camcorder manufactured to date is capable of recording at data rate higher than 12 Mbit/s (Canon, Sony) or 13 Mbit/s (Panasonic).
A single-sided single-layer 8 cm DVD can fit only 15 minutes of video at 12 Mbit/s, 14 minutes at 13 Mbit/s.
DVD pickup mechanism is very susceptible to vibration.
8 cm DVDs cannot be used in many slot-loading drives and may even damage the drive.
As the capacity of memory cards grew and their price dropped, the use of DVDs as recording media declined. No DVD-based AVCHD camcorders have been produced since 2008. While DVDs are no longer used for acquisition, they remain popular as distribution media. Many authoring programs offer an "AVCHD" profile for recording high-definition video on a DVD. Such AVCHD discs are incompatible with regular DVD-Video players, but play in many Blu-ray Disc players. A conventional single-layer 12 cm DVD can store 35 minutes of video recorded at the maximum bitrate the AVCHD specification allows for DVD media: 18 Mbit/s.
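The running times quoted above follow directly from disc capacity and data rate. A rough back-of-the-envelope check in Python (a sketch only; it ignores audio, navigation data and file-system overhead, so real-world figures differ slightly):
def minutes_of_video(capacity_gb, rate_mbit_s):
    # capacity in decimal gigabytes (1 GB = 8000 megabits), rate in Mbit/s
    return capacity_gb * 8000 / rate_mbit_s / 60

print(minutes_of_video(1.4, 12))   # ~15.6 min on a single-layer 8 cm DVD at 12 Mbit/s
print(minutes_of_video(4.7, 18))   # ~34.8 min on a single-layer 12 cm DVD at 18 Mbit/s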
Hard disk drive
A hard disk drive was added as an optional recording medium to AVCHD specification shortly after the new video standard had been announced. Presently, capacity of built-in HDDs ranges from 30 GB to 240 GB.
Pros:
Higher capacity than other media types, which allows for longer continuous recording.
Cons:
Sensitive to atmospheric pressure. The HDD may fail if the camcorder is used at high altitudes.
Vulnerable to mechanical shock or fast movement.
All HDD-based AVCHD camcorders employ non-removable disks. To transfer video to a computer the camcorder must be connected with a USB cable. Most camcorders require using an AC power adapter for this operation.
The sound of moving magnetic heads may be heard in the recorded video when recording in a quiet environment.
Replacing a damaged HDD requires disassembling a camcorder and cannot be done by a consumer.
Solid-state memory card
Many AVCHD camcorders employ Secure Digital or "Memory Stick" memory cards as removable recording media. Solid-state memory cards offer rewritable storage in a compact form factor with no moving parts.
Panasonic and Sony chose removable flash memory as the sole type of recording media in their professional AVCHD lineups, specifically AVCCAM and NXCAM.
Until 2010, Sony insisted on the use of its own memory card format, the Memory Stick. Since 2010, Sony has allowed the use of both Memory Stick and Secure Digital cards in its consumer and professional camcorders. Panasonic, as well as other manufacturers of AVCHD camcorders, uses Secure Digital cards as removable flash media. Most models accept Secure Digital High Capacity (SDHC) cards, while some models are also compatible with Secure Digital Extended Capacity (SDXC) cards, which offer higher transfer speed and capacity.
Pros:
Compact and lightweight.
Does not require time for spin-up and initialization.
Not vulnerable to magnetic fields.
Can withstand a wider range of air pressure, humidity and vibration than HDDs.
Can be easily backed up to DVD for viewing and for long-term archiving.
Can store mixed media content, including still images like snapshot photos and still-frame captures.
The recording section contains no moving parts, thus operation is almost silent; also a camera can be made more compact and less prone to mechanical damage in case of being dropped.
Most new computers, many TV sets and Blu-ray Disc players, as well as many personal portable media players have built-in card readers and can play AVCHD video directly from a card.
Cons:
More expensive per minute of recording than a built-in HDD or DVD media.
Not reliable for long term storage and may wear out more rapidly than expected, especially the cards made with MLC technology as opposed to cards using SLC technology.
Vulnerable to electrical damage, such as static discharge, and too high temperature.
A bad memory card can cause data corruption, causing loss of one or more clips.
Non-removable solid-state memory
Some AVCHD camcorders come with built-in solid-state memory either as a sole media, or in addition to other media.
Pros:
Allows making a camcorder smaller if no other media is used.
Always available for recording, in case other type of media is full or missing.
Cons:
Because the recording media is non-removable, recordings should be backed up, either to a computer over a USB cable, or (if the camera accepts them) to another flash card, or even to a DVD or Blu-ray Disc through an externally connected burner. Use of an AC power adapter may be required.
Non-removable media cannot be shared, sent or stored separately of the camcorder.
If damaged or worn out, non-removable media cannot easily be replaced like a memory card.
Branding
Panasonic and Sony developed several brand names for their professional as well as simplified versions of AVCHD.
AVCHD Lite
AVCHD Lite is a subset of the AVCHD format, announced in January 2009, which is limited to 720p60, 720p50 and 720p24 and does not employ Multiview Video Coding. AVCHD Lite cameras duplicate each frame of 25 fps/30 fps video acquired by the camera sensor, producing a 720p50/720p60 bitstream compliant with the AVCHD and Blu-ray Disc specifications. As of 2013, AVCHD Lite seems to have been all but replaced with other formats. For example, the Panasonic DMC-FZ200 offers an AVCHD Progressive recording mode (50 fps/60 fps acquisition and stream rate) as well as an MP4 mode (25 fps/30 fps acquisition and stream rate).
AVCCAM
Formerly known as "AVCHD with professional features," AVCCAM is the name of professional AVCHD camcorders from Panasonic's Broadcast division. Some of these professional features listed in early Panasonic advertising materials included 1/3-inch progressive 3CCD sensor, XLR microphone input, solid-state media and capability of recording at the maximum AVCHD bitrate - 24 Mbit/s. The aforementioned features are not exclusive to AVCCAM. Moreover, some of these features like CCD sensor technology have been dropped by Panasonic, while 24 Mbit/s recording rate is widely available from rival manufacturers even on consumer models.
AVCHD Pro
Panasonic uses "AVCHD Pro" moniker to describe camcorders like the HDC-MDH1, which combines consumer internal parts and controls with shoulder-mount type body. Panasonic touts that the camcorder is "shaped for Pro-Style shooting in Full-HD" with shoulder-mount type body being "preferred by professionals".
NXCAM
NXCAM is the name of Sony's professional video lineup employing the AVCHD format. NXCAM camcorders offer 1080i, 1080p and 720p recording modes. Unlike AVCCAM, not all NXCAM camcorders offer film-like frame rates — 24p, 25p, 30p — in 720p mode.
Playing back AVCHD video
Recorded AVCHD video can be played back in a variety of ways:
Direct playback — video can be played on a television set from a camcorder through HDMI or component-video cable.
AVCHD disc — AVCHD video, recorded onto DVD can be played on most Blu-ray Disc players or on a PlayStation 3 gaming console.
Blu-ray — AVCHD video, recorded onto Blu-ray can be played on most Blu-ray Disc players (see table below).
AVCHD memory card — AVCHD video, recorded on an SDHC or Memory Stick card can be played on select Blu-ray Disc players, HDTV sets, on a PlayStation 3 gaming console and on some other set-top media players.
USB playback — video files, recorded on an external storage device like a hard disk drive or a USB "stick" can be played on select Blu-ray Disc players, HDTV sets, gaming consoles, set-top media players and from a computer.
Computer playback — any media and target format supported by the computer's hardware and software can be watched on a computer monitor or TV set. Presently, the open-source VLC media player plays AVCHD video files and a wide variety of additional formats, and is freely available for most modern operating systems (including Linux, macOS, MS Windows) and some mobile platforms. Since Mountain Lion, macOS supports native AVCHD playback via the default media player, QuickTime. Some Windows 7 editions can import and play AVCHD video natively, having files with extensions M2TS, MTS and M2T pre-registered in the system. (Windows 7 Starter edition does not support AVCHD files out of the box, and so requires a third-party player.) In editions of Windows 7 that do support AVCHD files, Windows Media Player can index content in these files, and Windows Explorer can create thumbnails for each clip. Windows 7 does not support importing of AVCHD video metadata such as thumbnail images, playlists, and clip index files. Joining AVCHD video files during import is not supported either.
AVCHD as distribution format
A DVD disc with AVCHD high-definition video recorded on it is sometimes called an AVCHD disc. AVCHD discs cannot be played in a standard DVD player, but can be played in many Blu-ray Disc players. Smooth playback is not guaranteed if overall data rate exceeds 18 Mbit/s. It is possible to create simple menus similar to menus used for DVD-video discs.
AVCHD content can also be recorded on SDHC cards and played by many television sets, Blu-ray Disc players and media consoles.
The AVCHD specification does not officially support Blu-ray Disc media, though some software packages allow authoring AVCHD content on Blu-ray Discs. For better compatibility with Blu-ray Disc players AVCHD video can be authored on Blu-ray Disc media as Blu-ray Disc video. Authoring a Blu-ray Disc video title does not require re-encoding of AVCHD audio and video streams. The resultant disc plays in any Blu-ray Disc player, including those that do not explicitly support AVCHD.
Many software vendors support AVCHD mastering. In particular:
Cyberlink PowerDirector and PowerProducer can author a compliant AVCHD disc, or BDMV on DVD media.
Corel (formerly Ulead) DVD MovieFactory 7 can master AVCHD discs with menus.
Various Sonic products can author AVCHD discs using HD/BD Plug-in.
Compressor 3.5 is capable of authoring AVCHD discs; subtitles are not supported.
Nero Vision 9 can create an AVCHD disc with data rate up to 18 Mbit/s, or an AVCHD-compliant folder for distribution on an HDD or a memory card with data rate up to 24 Mbit/s.
Sony DVD Architect 5 can author AVCHD-compliant discs with menus using AVC encoding as well as non-standard discs using MPEG-2 encoding. In both cases data rate is limited to 18 Mbit/s.
Panasonic HD Writer AE can author AVCHD content on DVDs, BD discs and on SD cards.
MultiAVCHD can author AVCHD discs as well as Panasonic-compliant AVCHD memory cards.
Magix Movie Edit Pro 15 Plus with updates can author AVCHD content on DVDs, BD discs.
Pinnacle Studio 11.1.2 and higher offers AVCHD disc output.
Although AVCHD shares many format similarities with Blu-ray Disc, it is not part of the Blu-ray Disc specification. Consequently, AVCHD-playback is not universally supported across Blu-ray Disc players. Blu-ray Disc players with "AVCHD" logo play AVCHD discs authored either on 8 cm or 12 cm DVDs. Players without such a logo are not guaranteed to play AVCHD discs.
The 1080-line 50p/60p AVCHD Progressive recording mode employed in some camcorders, is not compliant with the current Blu-ray Disc specification, though many current player models unofficially support it if they support AVCHD format.
Hardware products
Canon
Depending on model, Canon camcorders offer 1080-line interlaced, PsF, and native 24p recording.
HR10 (DVD)
2007: HG10 (40 GB HDD)
April 2008: HF10 (SDHC, built-in 16 GB flash memory), HF100 (SDHC)
September 2008: HF11 (SDHC, built-in 32 GB flash memory), HG20 (60 GB HDD, SDHC), HG21 (120 GB HDD, SDHC)
January 2009: HF S10 (SDHC, built-in 32 GB flash memory), HF S100 (SDHC), HF20 (SDHC, built-in 32 GB flash memory), HF200 (SDHC)
August 2009: HF S11 (SDHC, built-in 64 GB flash memory, wired LANC remote capability)
January 2010: HF S21 (two SDHC slots, 64 GB flash memory, electronic viewfinder), HF S20 (two SDHC slots, 32 GB flash memory), HF S200 (two SDHC slots); HF M31 (SDHC, 32 GB flash memory), HF M30 (SDHC, 8 GB flash memory), HF M300 (SDHC); HF R11 (32 GB flash memory), HF R10 (SDHC, 8 GB flash memory), HF R100 (SDHC)
April 2011: HF G10 (with inch image sensor)
March 2012: HF M500 (with inch image sensor; 24pf, 30pf, and 60i; removable SDHC/SDXC flash memory) / HF G20 4:2:2
Hitachi
2008: DZ-BD10HA (Three-media recording: Blu-ray Disc, AVCHD on HDD, AVCHD on SDHC)
JVC
2008 June: GZ-HD10 (HDD, MicroSDHC), GZ-HD30/GZ-HD40(HDD, MicroSDHC card, dual AVCHD and TOD recording)
2009 January: GZ-HD320 (120 GB HDD, MicroSD), GZ-HD300 (60 GB HDD, MicroSD), GZ-HM200 (dual SDHC)
2009 February: GZ-X900 (SD/SDHC card)
2009 September: GZ-HM300, GZ-HM400
2009 December: GZ-HD620
2010 March: GZ-HM1
2011 January: GZ-HM30 (pre-released December 2010)
2011: GZ-HM4XX,GZ-HM6XX,GZ-HM8XX, GZ-HM9XX
2013: GZ-EX555
2014: GZ-R10BAA
2018: GZ-R495BE
Leica Camera
Digital still cameras
2010: LEICA D-LUX 5, LEICA V-LUX 2
2012: LEICA D-LUX 6
Panasonic
Panasonic AVCHD camcorders offer interlaced, progressive scan or native progressive recording and combinations of these modes depending on a particular model. 1080-line and 720-line recording is possible depending on a model.
Panasonic AVCHD camcorders use AVC with High Profile @ Level 4.0 for all modes except 1080p50/1080p60, which are encoded with High Profile @ Level 4.2. Maximum data rate is limited to 24 Mbit/s for AVCCAM models, to 17 Mbit/s for most consumer models and to 28 Mbit/s for 1080p50/1080p60 recording modes.
December 2006: HDC-DX1 (DVD), HDC-SD1 (SDHC)
HDC-SD3 (SDHC, available in Japan only)
AG-HSC1U - essentially a rebadged HDC-HC1 (SDHC, comes with portable 40 GB HDD storage)
August 2007: HDC-SD5 (SDHC), HDC-SX5 (DVD, SDHC), HDC-SD7 (SDHC)
January 2008: HDC-SD9 (SDHC), HDC-HS9 (60 GB HDD, SDHC)
April 2008: AG-HMC70 (SDHC)
June 2008: HDC-SD100 (SDHC), HDC-HS100 (60 GB HDD, SDHC)
September 2008: AG-HMC150 (SDHC)
January 2009: HDC-HS300 (120 GB HDD), HDC-HS200 (80 GB HDD), HDC-TM300 (32 GB built-in flash memory, SDHC), HDC-SD300 (SDHC, available in Europe only), HDC-SD200 (SDHC).
June 2009: HDC-TM30/HDC-TM10 (32 GB built-in flash memory, SDHC), HDC-SD10 (SDHC)
June 2009: HDC-TM350 (64 GB built-in flash memory, SDHC, available in Japan and as of October 2009, from Panasonic Stores across the UK)
September 2009: AG-HMC40 (SDHC)
February 2010: HDC-TM700/HDC-SD700/HDC-HS700 (introduced 1080p60/1080p50 modes, depending on region)
March 2010: HDC-SD60/HDC-TM60/HDC-HS60
December 2010: AG-AF100/AG-AF101/AG-AF102 (4/3" large sensor camera)
September 2011: AG-AC130/AG-AC160 (SDXC/SDHC/SD)
June 2014: AG-AC90A; upgrade of the AG-AC90
In 2009 Panasonic introduced AVCHD Lite and AVCHD to selected members of its Lumix line of digital cameras:
2009: DMC-ZS3/TZ7*, DMC-TS1/DMC-FT1* (AVCHD Lite)
2009: DMC-GH1 (AVCHD)
2010: Lumix DMC-ZS7/TZ10*, DMC-G2 (AVCHD lite)
2010: Lumix DMC-GH2, DMC-GF2 (AVCHD)
2011: Lumix DMC-ZS10/TZ20* (AVCHD lite)
2011: Lumix DMC-FX77/FX78*, DMC-TS3*, DMC-FZ45/47/48*
2011: Lumix DMC-GF2, DMC-G3/GF3 (AVCHD)
2012: Lumix DMC-ZS20/TZ30 (AVCHD, AVCHD Progressive: GPH, PSH)
2012: Lumix DMC-G5
2012: Lumix DMC-FZ200
2012: Lumix DMC-GH3 with a bit rate of 28 Megabit per second (AVCHD 2.0)
2012: Panasonic Lumix DMC-LX7
* to avoid European specific tax, Panasonic digital cameras for this market are limited to 30 minutes recording.
Sony
Consumer Sony AVCHD camcorders released before 2011 could record 1080-line interlaced video only, while the prosumer HDR-AX2000 and professional HXR-NX5 cameras were capable of recording in interlaced and progressive formats.
Released in March 2011, the Sony NEX-FS100 is the first professional NXCAM camcorder capable of 1080p50/p60 recording; consumer-grade HandyCam NEX-VG20 followed in August 2011.
The list of AVCHD camcorders includes:
September 2006: HDR-UX1 (DVD), HDR-UX3/UX5 (DVD), HDR-UX7 (DVD)
October 2006: HDR-SR1 (30 GB HDD)
June 2007: HDR-SR5 (40 GB HDD), HDR-SR7 (60 GB HDD)
July 2007: HDR-SR5C (100 GB HDD), HDR-SR8 (100 GB HDD)
Summer 2007: HDR-CX7 (Memory Stick Duo)
March 2008: HDR-SR10 (40GB HDD, Memory Stick), HDR-SR11 (60 GB HDD, Memory Stick), HDR-SR12 (120 GB HDD, Memory Stick)
HDR-TG1/TG3/TG7 (Memory Stick Duo)
August 2008: HDR-CX12 (Memory Stick Duo)
March 2009: HDR-CX100 (8 GB HDD, Memory Stick Duo)
March 2009: HDR-XR520V (240 GB HDD), HDR-XR500V (120 GB HDD Version)
March 2009: HDR-XR200V (120 GB HDD)
March 2009: HDR-XR200VE (120 GB HDD + GPS)
March 2009: HDR-XR100 (80 GB HDD)
July 2009: HDR-CX500E, HDR-CX520E
October 2009: HDR-CX105 (8GB Memory Stick Duo)
January 2010: HXR-NX5, HDR-AX2000.
March 2010: HDR-XR550 (240 GB HDD)
June 2010: Sony NEX-5, NEX-5C (without Eye-Fi support); both models exist only in variants with AVCHD 1080 50i or AVCHD 1080 60i
July 2010: Sony HXR-MC50E.
March 2011: Sony NEX-FS100
August 2011: NEX-VG20
October 2011: Sony SLT-A65, Sony SLT-A77V, Sony NEX-5N, Sony NEX-7
In 2010, Sony introduced AVCHD to selected members of its Cybershot line of digital cameras.
January 2010: DSC-HX5V (GPS+COMPASS), HX5V-E (European version, limited to 30 minutes recording due to European specific taxes)
March 2011: DSC-HX9V (GPS+COMPASS), HX9V-E (European version, limited to 30 minutes recording due to European specific taxes)
2012: DSC-HX10V, DSC-HX20V, DSC-RX100, DSC-WX50
2013: DSC-RX100 II, DSC-HX50V
2014: DSC-RX100 III
2015: DSC-RX100 IV
Software
Codecs
FFmpeg includes an AVCHD decoder in its libavcodec library that is used for example by ffdshow, a free, Open Source collection of codecs for Microsoft Windows.
CoreAVC is an H.264 decoder for Windows, which can decode AVCHD as well as a variety of other H.264 formats.
Gstreamer uses libavcodec to decode AVCHD on Linux, BSD, OS X, Windows, and Solaris.
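For a concrete, minimal example of decoding and converting an AVCHD clip with FFmpeg's command-line tool (which is built on libavcodec), one possible invocation wrapped in Python is shown below; the file names are placeholders, and the exact options available depend on the FFmpeg build:
import subprocess

# Re-encode an AVCHD clip (H.264 video and AC-3 audio in an MPEG transport stream)
# as H.264/AAC in an MP4 container.
subprocess.run(
    ["ffmpeg", "-i", "00000.MTS",       # input clip, e.g. from the BDMV/STREAM folder
     "-c:v", "libx264", "-crf", "20",   # re-encode the video with x264
     "-c:a", "aac",                     # re-encode the AC-3 audio as AAC
     "output.mp4"],
    check=True,
)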
Converters
Badaboom is a media converter that uses NVIDIA GPUs to accelerate conversion of AVCHD to mobile devices.
HandBrake converts AVCHD Lite format to MP4 and MKV (tested on macOS; other versions available), AVI and OGM are supported in versions before 0.9.4.
Roxio Toast 10 Titanium on macOS converts AVCHD to most formats.
Total video converter is a converter for most video formats, including converting from AVCHD and burning AVCHD disc.
iDealshare VideoGo can convert AVCHD to MP4, ProRes, MOV, AVI, WMV, FLV, DV, MKV, VOB etc.
Editors
The following video-editing software features support for the AVCHD format:
Apple iMovie for some cameras/camcorders.
Adobe Premiere Pro (from version CS4 onwards). (Creative Cloud 2013 version natively supports AVCHD Dolby Digital.)
Adobe Premiere Elements (version 7 through 9 only support import, no AVCHD output), version 10 supports AVCHD output.
Avidemux Video editor for Linux and Windows
Apple Final Cut Pro X natively supports AVCHD through Import From Camera.
Apple Final Cut Pro for macOS. The latest version of Final Cut Pro 7 claims better integration with Apple's other professional applications and improved codec support for editing HD, DV and SD video formats, including encoding presets for devices such as iPod, Apple TV, and Blu-ray Discs.
Apple Final Cut Express 4, Final Cut Pro 6.0.1, and iMovie '08-'09 (iMovie is bundled with all new Apple computers; Final Cut Express and Pro are sold separately) do not support editing of AVCHD clips directly. Imported AVCHD clips are automatically converted into the Apple Intermediate Codec format, which requires more hard disk space (40GB per hour as opposed to 13.5GB per hour for Standard Definition DV), a more powerful machine (an Intel-based Mac), and a more recent OS (Mac OS X 10.5). Final Cut Pro 6.0.5 "logs and transfers" the footage from AVCHD to AppleProRes by default and also gives the option of converting to the Apple Intermediate Codec. It does not allow native transferring of the *.m2ts clips nor directly editing them. The latest release of Apple's iLife suite (specifically, iMovie) has added support for AVCHD Lite cameras. It automatically imports AVCHD files when attaching a supported camera to the computer, and it can import older MTS or M2TS files that have been rewrapped (see above) e.g. as m4v.
Avid Media Composer (version 5.x and later) supports AVCHD via transcode import. AMA linking is available in Avid Media Composer 6 when a special AMA plugin is downloaded from the Avid download center.
AVS Video Editor supports video from HD cameras (HD video including AVCHD, MPEG-2 HD and WMV HD, as well as TOD, MOD and M2TS files). It can burn AVCHD video to CD-R/RW, DVD+/-R, DVD+/-RW, DVD-RAM and double/dual-layer discs on Windows XP, 2003, Vista and 7 (no macOS/Linux support).
Blender supports the AVCHD format by using an FFmpeg decoder. Blender has a little-known video editing system that integrates with its 3D editing tools. It supports proxy editing at down to 25% scaling, which helps with the otherwise slow editing of AVCHD video.
Corel VideoStudio supports importing, rendering and burning of AVCHD format in Windows system.
Cyberlink PowerDirector 11 is capable of editing AVCHD 2.0 3D/Progressive natively, without transcoding, intermediate formats or proxy files. Using a patented technique (SVRT), AVCHD clips can be edited and output losslessly to AVCHD or Blu-ray Disc. PowerDirector also supports OpenCL encoding acceleration on Intel, AMD and nVidia graphics platforms. PowerDirector can output the finished movie to a variety of video formats, DVD, AVCHD on DVD, removable storage device, SD/SDHC/SDXC memory card, Memory Stick or Blu-ray Disc.
Dayang Montage Extreme [ME] 1.2
Elecard AVC HD Editor AVC HD Editor affords reordering, trimming and merging of AVCHD clips without the need for transcoding.
Grass Valley Edius from version 5.5 up to 9.5 (the current version), and historically Edius Neo from version 2 until 3.5, though not on current Windows versions.
Kdenlive for Linux and BSD platforms
Lightworks for Windows and Linux, starting with version 11.1. AVCHD support is available in the Free and Pro versions, however, the free version requires transcoding into a different format upon import of AVCHD files.
Microsoft Windows Live Movie Maker 2011 (part of the Windows Live Essentials package) converts to lower resolution for editing and playback, but is capable of exporting in HD.
Nero Ultra Edition Enhanced (from version 7 onwards) includes the Nero Vision editor and the Nero Showtime player, which both support AVCHD files. NeroVision can author DVDs in the AVCHD format.
OpenShot Video Editor for Windows, macOS, and Linux
Pinnacle Studio Plus (from version 11 onwards)
Ulead Video Studio 11 announced support for MTS/M2TS; however, many users report that this claim is false and that the editor cannot import, let alone edit, video in that format.
VSDC Free Video Editor
Pitivi Video editor for Linux
Sony Vegas 7.0e
Sony Vegas Pro (from version 8 onwards)
Sony Vegas Movie Studio Platinum (from version 8 onwards)
Other developers have pledged their support but it may still take some time for the implementation.
Open Source codecs
The following open source codecs can decode AVCHD files:
ffdshow tryouts, revision 1971 May 23, 2008, decodes AVC (H.264) format video.
libavcodec (part of FFmpeg project) is a codec library that supports AVCHD. It is used in Jahshaka and Blender, notably.
Specifications
For simplicity, the combination of frame rate and video format is denoted using the common simplified notation NNx, where NN is the frame rate rounded to an integer and x is the format ("i" for interlaced and "p" for progressive). In this notation, "60" actually runs at 59.94 frames/s, "30" actually runs at 29.97 frames/s, and "24" actually runs at 23.976 frames/s, a relic of NTSC video.
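The exact NTSC-derived rates behind these rounded labels are the ratios 60000/1001, 30000/1001 and 24000/1001; a quick check in Python:
for label, numerator in (("60", 60000), ("30", 30000), ("24", 24000)):
    print(label, round(numerator / 1001, 3))   # prints 59.94, 29.97 and 23.976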
See also
AVC-Intra: an intra-frame video format based on AVC compression scheme, offered on professional Panasonic video cameras.
iFrame: an intra-frame video format based on AVC compression scheme, marketed by Apple and offered on some consumer camcorders.
AVCREC: a standard to allow recording of broadcast HD programming on recordable DVDs using AVC encoding scheme.
Comparison of video editing software
XAVC
References
External links
AVCHD Official Consortium Web site
High-definition television
Video storage |
7556838 | https://en.wikipedia.org/wiki/Victory%20Bell%20%28UCLA%E2%80%93USC%29 | Victory Bell (UCLA–USC) | The Victory Bell is the trophy that is awarded to the winner of the UCLA–USC football rivalry game. The game is an American college football rivalry between the UCLA Bruins and USC Trojans, part of the overall UCLA–USC rivalry.
The Victory Bell is a brass bell that originally rang atop a Southern Pacific railroad locomotive. It is currently mounted on a special wheeled carriage.
History
The bell was given to the UCLA student body in 1939 as a gift from the school's alumni association. Initially, the UCLA cheerleaders rang the bell after each Bruin point. However, during the opening game of UCLA's 1941 season (through 1981, both schools used the Los Angeles Memorial Coliseum for home games), six members of USC's Trojan Knights (who were also members of the SigEp fraternity) infiltrated the Bruin rooting section, assisted in loading the bell aboard a truck headed back to Westwood, took the key to the truck, and escaped with the bell while UCLA's actual handlers went to find a replacement key. The bell remained hidden from UCLA students for more than a year, first in SigEp’s basement, then in the Hollywood Hills, Santa Ana, and other locations. At one point, it was even concealed beneath a haystack. Bruin students tried to locate the bell, but to no avail. A picture of the bell appeared in a USC periodical. Tension between UCLA and USC students rose as each started to play even more elaborate and disruptive pranks on the other. When the conflict caused the USC president to threaten to cancel the rivalry, a compromise was met: on November 12, 1942, the student body presidents of both schools, in front of Tommy Trojan, signed the agreement that the bell would be the trophy for the game.
The winner of the annual football game keeps the Victory Bell for the next year, and paints it the school's color: blue for UCLA or cardinal for USC.
Team traditions
UCLA
When the bell is in UCLA's possession, the carriage is sandblasted and painted "True Blue." While in the possession of UCLA, the bell is safeguarded by the UCLA Rally Committee. During UCLA home games at the Rose Bowl and whenever UCLA faces USC at the L.A. Coliseum, it resides on the field in front of the student section. It is rung by members of the Rally Committee after each score. The Bruins also ring the bell using a rope attached to the handle, swinging the whole bell, as opposed to the Trojan style of attaching a rope to the tongue or clapper on the inside of the bell. The bell also makes special appearances at rallies and athletic events. It has been used to accompany the UCLA Band during halftime shows. In particular, the bell makes an appearance at a major gathering when it returns to UCLA's possession.
USC
Before home games, when the bell is in USC's possession, it sits along Trousdale Parkway for fans to ring as they participate in the "Trojan Walk" to the L.A. Coliseum. During home games, and whenever USC faces UCLA at the Rose Bowl, the Victory Bell is displayed at the edge of the field for the first three quarters of the game. Members of the Trojan Knights ring the bell every time the Trojans score. The carriage is painted cardinal red.
Series record
The first victory for UCLA in the series occurred after the agreement over the Victory Bell, making the Bruins the first winner of the trophy. The Bruins made their post-season appearance after the 1942 season in the Rose Bowl. The teams played each other twice in the same season in 1943, 1944, and 1945, due to travel restrictions during World War II; of those six, USC won five and tied the other.
UCLA took the bell back following the 38–28 victory over USC on November 17, 2012. The Victory Bell was held by USC during the 1999–2005 and 2007–2011 football seasons. USC leads with an overall record of in football contests with UCLA (including two wins vacated due to NCAA penalty). Before the streak of seven Trojan wins, the Bruins had won the bell for eight consecutive years, from 1991 to 1998, the longest streak in the rivalry. There have been seven ties and one overtime game (1996, 2OT) in the history of the series. In the event of a tie, the Victory Bell was retained by the last winner. With the institution of the overtime rule in FBS in 1996, the tie rule became obsolete.
Game results
From 1929 until 1981, the two teams played in the Los Angeles Memorial Coliseum; the Rose Bowl became UCLA's home field in 1982.
See also
UCLA–USC rivalry
Lexus Gauntlet
Victory Bell (disambiguation) – Other trophies also called the "Victory Bell"
Jeweled Shillelagh
1967 UCLA vs. USC football game
References
College football rivalry trophies in the United States
UCLA Bruins football
USC Trojans football
1929 establishments in California |
1908172 | https://en.wikipedia.org/wiki/INI%20file | INI file | An INI file is a configuration file for computer software that consists of text-based content with a structure and syntax comprising key–value pairs for properties, and sections that organize the properties. The name of these configuration files comes from the filename extension INI, for initialization, used in the MS-DOS operating system, which popularized this method of software configuration. The format has become an informal standard in many contexts of configuration, but many applications on other operating systems use different file name extensions, such as conf and cfg.
History
The primary mechanism of software configuration in Windows was originally a text file format that comprised text lines with one key–value pair per line, organized into sections. This format was used for operating system components, such as device drivers, fonts, and startup launchers. INI files were also generally used by applications to store individual settings.
The format was maintained in 16-bit Microsoft Windows platforms up through Windows 3.1x. Starting with Windows 95 Microsoft favored the use of the Windows Registry and began to steer developers away from using INI files for configuration. All subsequent versions of Windows have used the Windows Registry for system configuration, but applications built on the .NET Framework use special XML .config files. The initialization-file functions are still available in Windows and developers may still use them.
Linux and Unix systems also use a similar file format for system configuration. In addition, platform-agnostic software may use this file format for configuration. It is human-readable and simple to parse, so it is a usable format for configuration files that do not require much greater complexity.
Git configuration files are similar to INI files.
PHP uses the INI format for its "php.ini" configuration file in both Windows and Linux systems.
Desktop.ini files determine the display of directories in Windows, e.g., the icons for a directory.
Example
The following example file has two sections: one for the owner of the software, and one for a payroll database connection. Comments record the last person who modified the file and the reason for modification.
; last modified 1 April 2001 by John Doe
[owner]
name = John Doe
organization = Acme Widgets Inc.
[database]
; use IP address in case network name resolution is not working
server = 192.0.2.62
port = 143
file = "payroll.dat"
Format
INI is an informal format, with features that vary from parser to parser (INI dialects). Some features are more shared across different parsers than others and can be considered as the hard core of the format (e.g. square brackets for sections, newlines for delimiting different nodes, etc.). Attempts to create parsers able to support as many dialects as possible exist, and in its most advanced form the INI format is able to express a tree object with a power comparable to that of other structured formats (JSON, XML) using a more relaxed syntax.
Keys (properties)
The basic element contained in an INI file is the key or property. Every key has a name and a value, delimited by an equals sign (=). The name appears to the left of the equals sign. In the Windows implementation, the equals sign and the semicolon are reserved characters and cannot appear in the key name. The value can contain any character.
name = value
Leading and trailing whitespace around the property name is ignored.
Sections
Keys may, but need not, be grouped into arbitrarily named sections. The section name appears on a line by itself, in square brackets ([ and ]). All keys after the section declaration are associated with that section. There is no explicit "end of section" delimiter; sections end at the next section declaration, or at the end of the file. Sections cannot be nested.
[section]
key1 = a
key2 = b
Case sensitivity
Section and property names are not case sensitive in the Windows implementation, but other applications may behave differently.
Comments
Semicolons (;) at the beginning of the line indicate a comment. Comment lines are ignored.
; comment text
Order of sections and properties
The order of properties in a section and the order of sections in a file is irrelevant.
Varying features
As the INI file format is not rigidly defined, many parsers support features beyond the basics already described. The following is a list of some common features, which may or may not be implemented in any given program.
Global properties
Optional "global" properties may also be allowed, that are declared before any section is declared.
Name/value delimiter
Some implementations allow a colon (:) as the name/value delimiter (instead of the equals sign). In the Linux world, whitespace alone is occasionally used as the delimiter.
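In such a dialect, the following hypothetical lines would be equivalent:
server = 192.0.2.62
server: 192.0.2.62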
Hierarchy (section nesting)
Some parsers allow section nesting, using dots as path delimiters:
[section]
domain = wikipedia.org
[section.subsection]
foo = bar
In some cases relative nesting is supported too, where a leading dot expresses nesting to the previous section:
[section]
domain = wikipedia.org
[.subsection]
foo = bar
Historically, ways for expressing nesting alternative to the dot have existed too (for example, IBM's driver file for Microsoft Windows devlist.ini, in which the backslash was used as nesting delimiter in the form of [A\B\C]; or Microsoft Visual Studio's AEMANAGR.INI file, which used a completely different syntax in the form of [A] and B,C,P = V). Some parsers did not offer nesting support at all and were hierarchy-blind, but nesting could still be partially emulated by exploiting the fact that [A.B.C] constitutes a unique identifier.
Comments
Some software supports the use of the number sign (#) as an alternative to the semicolon for indicating comments, especially under Unix, where it mirrors shell comments. In other dialects, the number sign may instead be treated as part of the key name. For instance, the following line may be interpreted as a comment in one dialect, but create a variable named "#var" in another dialect. If the application never reads the "#var" key, the line effectively behaves as a comment.
#var = a
In some implementations, a comment may begin anywhere on a line after a space (inline comments), including on the same line after properties or section declarations.
var = a ; This is an inline comment
foo = bar # This is another inline comment
In others, including the WinAPI function GetPrivateProfileString, comments must occur on lines by themselves.
Duplicate names
Most implementations only support having one property with a given name in a section. The second occurrence of a property name may cause an abort, it may be ignored (and the value discarded), or it may override the first occurrence (with the first value discarded). Some programs use duplicate property names to implement multi-valued properties.
Interpretation of multiple section declarations with the same name also varies. In some implementations, duplicate sections simply merge their properties, as if they occurred contiguously. Others may abort, or ignore some aspect of the INI file.
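As a hypothetical illustration, a parser that uses duplicate names for multi-valued properties could interpret the following as a single path property with three values:
[environment]
path = /usr/bin
path = /usr/local/bin
path = /opt/bin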
Quoted values
Some implementations allow values to be quoted, typically using double quotes and/or apostrophes. This allows for explicit declaration of whitespace, and/or for quoting of special characters (equals, semicolon, etc.). The standard Windows function GetPrivateProfileString supports this, and will remove quotation marks that surround the values.
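For example, in a dialect that supports quoting, the following hypothetical lines preserve leading whitespace and embed characters that would otherwise be treated specially:
indent = "    four leading spaces"
note = "semicolons ; and equals = signs are kept literal"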
Escape characters
Some implementations offer varying support for an escape character, typically with the backslash (\) following the C syntax. Some support "line continuation", where a backslash followed immediately by EOL (end-of-line) causes the line break to be ignored, and the "logical line" to be continued on the next actual line from the INI file. Implementation of various "special characters" with escape sequences is also seen.
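A hypothetical dialect with C-style escapes and line continuation might accept the following, where \n denotes an embedded newline and the trailing backslash joins two physical lines into one logical line:
greeting = Hello\nWorld
long_value = the first part of a value \
  and the second part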
Accessing INI files
Under Windows, the Profile API is the programming interface used to read and write settings from classic Windows .ini files. For example, the GetPrivateProfileString function retrieves a string from the specified section in an initialization file. (The "private" profile is contrasted with , which fetches from WIN.INI.)
The following sample C program demonstrates reading property values from the above sample INI file (let the name of configuration file be dbsettings.ini):
#include <windows.h>
int main(int argc, _TCHAR *argv[])
{
_TCHAR dbserver[1000];
int dbport;
GetPrivateProfileString("database", "server", "127.0.0.1", dbserver, sizeof(dbserver) / sizeof(dbserver[0]), ".\\dbsettings.ini");
dbport = GetPrivateProfileInt("database", "port", 143, ".\\dbsettings.ini");
// N.B. WritePrivateProfileInt() does not exist
return 0;
}
The third parameter of GetPrivateProfileString and GetPrivateProfileInt is the default value, which is "127.0.0.1" and 143, respectively, in the two function calls above. If the argument supplied for this parameter is NULL, the default is an empty string, "".
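Settings can be written back with the corresponding WritePrivateProfileString function. The fragment below is a sketch that assumes the same dbsettings.ini layout; values are always stored as text, since, as noted above, no WritePrivateProfileInt counterpart exists:
// Update the server address and store the new port as a string
WritePrivateProfileString("database", "server", "192.0.2.63", ".\\dbsettings.ini");
WritePrivateProfileString("database", "port", "144", ".\\dbsettings.ini");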
Under Unix, many different configuration libraries exist to access INI files. They are often already included in frameworks and toolkits. Examples of INI parsers for Unix include GLib, iniparser and libconfini.
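As a minimal sketch of the Unix side, GLib's GKeyFile API can read the same settings, assuming the file uses a GKeyFile-compatible dialect (GKeyFile expects UTF-8 text and # comments):
#include <glib.h>

int main(void)
{
    GKeyFile *kf = g_key_file_new();
    GError *error = NULL;

    /* Load the INI-style file from the current directory */
    if (g_key_file_load_from_file(kf, "dbsettings.ini", G_KEY_FILE_NONE, &error)) {
        gchar *server = g_key_file_get_string(kf, "database", "server", NULL);
        gint port = g_key_file_get_integer(kf, "database", "port", NULL);
        g_print("%s:%d\n", server, port);
        g_free(server);
    } else {
        g_printerr("Cannot load file: %s\n", error->message);
        g_error_free(error);
    }
    g_key_file_free(kf);
    return 0;
}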
Comparison of INI parsers
File mapping
Initialization file mapping creates a mapping between an INI file and the Registry. It was introduced with Windows NT and Windows 95 as a way to migrate from storing settings in classic .ini files to the new Windows Registry. File mapping traps the Profile API calls and, using settings from the IniFileMapping Registry section, directs reads and writes to appropriate places in the Registry.
Using the example below, a string call could be made to fetch the name key from the owner section from a settings file called, say, dbsettings.ini. The returned value should be the string "John Doe":
GetPrivateProfileString("owner", "name", ... , "c:\\programs\\oldprogram\\dbsettings.ini");
INI mapping takes this Profile API call, ignores any path in the given filename and checks to see if there is a Registry key matching the filename under the directory:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\ CurrentVersion\IniFileMapping
If this exists, it looks for an entry name matching the requested section. If an entry is found, INI mapping uses its value as a pointer to another part of the Registry. It then looks up the requested INI setting in that part of the Registry.
If no matching entry name is found and there is an entry under the (Default) entry name, INI mapping uses that instead. Thus each section name does not need its own entry.
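A hypothetical set of mapping entries consistent with this discussion, with illustrative registry paths, might look as follows (the USR: prefix denotes a path relative to HKEY_CURRENT_USER, and the @ prefix suppresses fallback reads from the file on disk):
Under ...\IniFileMapping\dbsettings.ini:
(Default)   REG_SZ   @USR:Software\OldProgram
database    REG_SZ   USR:Software\OldProgram\database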
So, in this case the profile call for the [owner] section is mapped through to the corresponding location in the Registry, where the "name" Registry entry name is found to match the requested INI key. The value of "John Doe" is then returned to the Profile call. In this case, the @ prefix on the default prevents any reads from going to the dbsettings.ini file on disk. The result is that any settings not found in the Registry are not looked for in the INI file.
The "database" Registry entry does not have the @ prefix on the value; thus, for the [database] section only, settings in the Registry are taken first followed by settings in the dbsettings.ini file on disk.
Alternatives
Starting with Windows 95, Microsoft began strongly promoting the use of Windows registry over the INI file. INI files are typically limited to two levels (sections and properties) and do not handle binary data well. This decision however has not been immune to critiques, due to the fact that the registry is monolithic, opaque and binary, must be in sync with the filesystem, and represents a single point of failure for the operating system.
Later XML-based configuration files became a popular choice for encoding configuration in text files. XML allows arbitrarily complex levels and nesting, and has standard mechanisms for encoding binary data.
More recently, data serialization formats, such as JSON, TOML, and YAML can serve as configuration formats. These three alternative formats can nest arbitrarily, but have a different syntax than the INI file. Among them, TOML most closely resembles INI, but the idea to make TOML deliberately compatible with a large subset of INI was rejected.
The newest INI parsers however allow the same arbitrary level of nesting of XML, JSON, TOML, and YAML, offer equivalent support of typed values and Unicode, although keep the "informal status" of INI files by allowing multiple syntaxes for expressing the same thing.
See also
BOOT.INI
MSConfig
Sysedit
SYSTEM.INI
TOML, a very similar but more formally-specified configuration file format
WIN.INI
Amiga's IFF files
.DS Store
.properties
References
Infobox - http://filext.com/file-extension/INI
Infobox - https://wikiext.com/ini
External links
libconfini's Library Function Manual: The particular syntax allowed by libconfini.
Cloanto Implementation of INI File Format: The particular syntax allowed by a parser implemented by Cloanto.
A very simple data file metaformat: INI parser tutorial in Apache Groovy.
Microsoft's GetPrivateProfileString() and WritePrivateProfileStringA() functions
Computer file formats
Configuration files |
23739365 | https://en.wikipedia.org/wiki/Mobile%20device%20forensics | Mobile device forensics | Mobile device forensics is a branch of digital forensics relating to recovery of digital evidence or data from a mobile device under forensically sound conditions. The phrase mobile device usually refers to mobile phones; however, it can also relate to any digital device that has both internal memory and communication ability, including PDA devices, GPS devices and tablet computers.
Some mobile companies have tried to duplicate the models of phones, which is illegal, and new models arrive on the market every year. The use and cloning of mobile phones and devices in crime has been widely recognised for some years, but the forensic study of mobile devices is a relatively new field, dating from the late 1990s and early 2000s. A proliferation of phones (particularly smartphones) and other digital devices on the consumer market caused a demand for forensic examination of the devices, which could not be met by existing computer forensics techniques.
Mobile devices can be used to save several types of personal information such as contacts, photos, calendars and notes, SMS and MMS messages. Smartphones may additionally contain video, email, web browsing information, location information, and social networking messages and contacts.
There is growing need for mobile forensics due to several reasons and some of the prominent reasons are:
Use of mobile phones to store and transmit personal and corporate information
Use of mobile phones in online transactions
Law enforcement, criminals and mobile phone devices
Mobile device forensics can be particularly challenging on a number of levels:
Evidential and technical challenges exist. For example, cell site analysis, which follows from the use of mobile phone coverage data, is not an exact science. Consequently, whilst it is possible to determine roughly the cell site zone from which a call was made or received, it is not yet possible to say with any degree of certainty that a mobile phone call emanated from a specific location, e.g. a residential address.
To remain competitive, original equipment manufacturers frequently change mobile phone form factors, operating system file structures, data storage, services, peripherals, and even pin connectors and cables. As a result, forensic examiners must use a different forensic process compared to computer forensics.
Storage capacity continues to grow thanks to demand for more powerful "mini computer" type devices.
Not only the types of data but also the way mobile devices are used constantly evolve.
Hibernation behavior, in which processes are suspended when the device is powered off or idle but at the same time remain active.
As a result of these challenges, a wide variety of tools exist to extract evidence from mobile devices; no one tool or method can acquire all the evidence from all devices. It is therefore recommended that forensic examiners, especially those wishing to qualify as expert witnesses in court, undergo extensive training in order to understand how each tool and method acquires evidence; how it maintains standards for forensic soundness; and how it meets legal requirements such as the Daubert standard or Frye standard.
History
As a field of study, forensic examination of mobile devices dates from the late 1990s and early 2000s. The role of mobile phones in crime had long been recognized by law enforcement. With the increased availability of such devices on the consumer market and the wider array of communication platforms they support (e.g. email, web browsing) demand for forensic examination grew.
Early efforts to examine mobile devices used similar techniques to the first computer forensics investigations: analyzing phone contents directly via the screen and photographing important content. However, this proved to be a time-consuming process, and as the number of mobile devices began to increase, investigators called for more efficient means of extracting data. Enterprising mobile forensic examiners sometimes used cell phone or PDA synchronization software to "back up" device data to a forensic computer for imaging, or sometimes, simply performed computer forensics on the hard drive of a suspect computer where data had been synchronized. However, this type of software could write to the phone as well as reading it, and could not retrieve deleted data.
Some forensic examiners found that they could retrieve even deleted data using "flasher" or "twister" boxes, tools developed by OEMs to "flash" a phone's memory for debugging or updating. However, flasher boxes are invasive and can change data; can be complicated to use; and, because they are not developed as forensic tools, perform neither hash verifications nor (in most cases) audit trails. For physical forensic examinations, therefore, better alternatives remain necessary.
To meet these demands, commercial tools appeared which allowed examiners to recover phone memory with minimal disruption and analyze it separately. Over time these commercial techniques have developed further and the recovery of deleted data from proprietary mobile devices has become possible with some specialist tools. Moreover, commercial tools have even automated much of the extraction process, rendering it possible even for minimally trained first responders—who currently are much more likely to encounter suspects with mobile devices in their possession, compared to computers—to perform basic extractions for triage and data preview purposes.
Professional applications
Mobile device forensics is best known for its application to law enforcement investigations, but it is also useful for military intelligence, corporate investigations, private investigations, criminal and civil defense, and electronic discovery.
Types of evidence
As mobile device technology advances, the amount and types of data that can be found on a mobile device is constantly increasing. Evidence that can be potentially recovered from a mobile phone may come from several different sources, including handset memory, SIM card, and attached memory cards such as SD cards.
Traditionally mobile phone forensics has been associated with recovering SMS and MMS messaging, as well as call logs, contact lists and phone IMEI/ESN information. However, newer generations of smartphones also include wider varieties of information; from web browsing, Wireless network settings, geolocation information (including geotags contained within image metadata), e-mail and other forms of rich internet media, including important data—such as social networking service posts and contacts—now retained on smartphone 'apps'.
Internal memory
Nowadays mostly flash memory consisting of NAND or NOR types are used for mobile devices.
External memory
External memory devices are SIM cards, SD cards (commonly found within GPS devices as well as mobile phones), MMC cards, CF cards, and the Memory Stick.
Service provider logs
Although not technically part of mobile device forensics, the call detail records (and occasionally, text messages) from wireless carriers often serve as "back up" evidence obtained after the mobile phone has been seized. These are useful when the call history and/or text messages have been deleted from the phone, or when location-based services are not turned on. Call detail records and cell site (tower) dumps can show the phone owner's location, and whether they were stationary or moving (i.e., whether the phone's signal bounced off the same side of a single tower, or different sides of multiple towers along a particular path of travel). Carrier data and device data together can be used to corroborate information from other sources, for instance, video surveillance footage or eyewitness accounts; or to determine the general location where a non-geotagged image or video was taken.
The European Union requires its member countries to retain certain telecommunications data for use in investigations. This includes data on calls made and retrieved. The location of a mobile phone can be determined and this geographical data must also be retained. In the United States, however, no such requirement exists, and no standards govern how long carriers should retain data or even what they must retain. For example, text messages may be retained only for a week or two, while call logs may be retained anywhere from a few weeks to several months. To reduce the risk of evidence being lost, law enforcement agents must submit a preservation letter to the carrier, which they then must back up with a search warrant.
Forensic process
The forensics process for mobile devices broadly matches other branches of digital forensics; however, some particular concerns apply. Generally, the process can be broken down into three main categories: seizure, acquisition, and examination/analysis. Other aspects of the computer forensic process, such as intake, validation, documentation/reporting, and archiving still apply.
Seizure
Seizing mobile devices is covered by the same legal considerations as other digital media. Mobiles will often be recovered switched on; as the aim of seizure is to preserve evidence, the device will often be transported in the same state to avoid a shutdown, which would change files. In addition, the investigator or first responder would risk user lock activation.
However, leaving the phone on carries another risk: the device can still make a network/cellular connection. This may bring in new data, overwriting evidence. To prevent a connection, mobile devices will often be transported and examined from within a Faraday cage (or bag). Even so, there are two disadvantages to this method. First, most bags render the device unusable, as its touch screen or keypad cannot be used. However, special cages can be acquired that allow the use of the device with a see-through glass and special gloves. The advantage with this option is the ability to also connect to other forensic equipment while blocking the network connection, as well as charging the device. If this option is not available, network isolation is advisable either through placing the device in Airplane Mode, or cloning its SIM card (a technique which can also be useful when the device is missing its SIM card entirely).
Note that while this technique can prevent a remote wipe (or tampering) of the device from being triggered, it does nothing against a local dead man's switch.
Acquisition
The second step in the forensic process is acquisition, in this case usually referring to retrieval of material from a device (as compared to the bit-copy imaging used in computer forensics).
Due to the proprietary nature of mobiles it is often not possible to acquire data with the device powered down; most mobile device acquisition is performed live. With more advanced smartphones using advanced memory management, connecting the device to a recharger and putting it into a Faraday cage may not be good practice. The device would recognize the network disconnection and would therefore change its status information, which can trigger the memory manager to write data.
Most acquisition tools for mobile devices are commercial in nature and consist of a hardware and software component, often automated.
Examination and analysis
As an increasing number of mobile devices use high-level file systems, similar to the file systems of computers, methods and tools can be taken over from hard disk forensics or only need slight changes.
The FAT file system is generally used on NAND memory. A difference is the block size used, which is larger than the 512 bytes used by hard disks and depends on the memory type: e.g., 64, 128, or 256 kilobytes for NOR type and 16, 128, 256, or 512 kilobytes for NAND memory.
Different software tools can extract the data from the memory image. One could use specialized and automated forensic software products or generic file viewers such as a hex editor to search for characteristic file headers. The advantage of a hex editor is the deeper insight into memory management, but working with one involves a great deal of manual work and requires knowledge of the file system as well as file headers. In contrast, specialized forensic software simplifies the search and extracts the data but may not find everything. AccessData, Sleuthkit, ESI Analyst and EnCase, to mention only some, are forensic software products that analyze memory images. Since there is no tool that extracts all possible information, it is advisable to use two or more tools for examination. As of February 2010, there was no software solution to recover all evidence from flash memories.
Data acquisition types
Mobile device data extraction can be classified according to a continuum, along which methods become more technical and “forensically sound,” tools become more expensive, analysis takes longer, examiners need more training, and some methods can even become more invasive.
Manual acquisition
The examiner utilizes the user interface to investigate the content of the phone's memory. Therefore, the device is used as normal, with the examiner taking pictures of each screen's contents. This method has an advantage in that the operating system makes it unnecessary to use specialized tools or equipment to transform raw data into human interpretable information. In practice this method is applied to cell phones, PDAs and navigation systems. Disadvantages are that only data visible to the operating system can be recovered; that all data is only available in the form of pictures; and the process itself is time-consuming.
Logical acquisition
Logical acquisition implies a bit-by-bit copy of logical storage objects (e.g., directories and files) that reside on a logical storage (e.g., a file system partition). Logical acquisition has the advantage that system data structures are easier for a tool to extract and organize. Logical extraction acquires information from the device using the original equipment manufacturer application programming interface for synchronizing the phone's contents with a personal computer. A logical extraction is generally easier to work with as it does not produce a large binary blob. However, a skilled forensic examiner will be able to extract far more information from a physical extraction.
File system acquisition
Logical extraction usually does not produce any deleted information, due to it normally being removed from the phone's file system. However, in some cases—particularly with platforms built on SQLite, such as iOS and Android—the phone may keep a database file of information which does not overwrite the information but simply marks it as deleted and available for later overwriting. In such cases, if the device allows file system access through its synchronization interface, it is possible to recover deleted information. File system extraction is useful for understanding the file structure, web browsing history, or app usage, as well as providing the examiner with the ability to perform an analysis with traditional computer forensic tools.
Physical acquisition
Physical acquisition implies a bit-for-bit copy of an entire physical store (e.g. flash memory); therefore, it is the method most similar to the examination of a personal computer. A physical acquisition has the advantage of allowing deleted files and data remnants to be examined. Physical extraction acquires information from the device by direct access to the flash memories.
Generally this is harder to achieve because the device original equipment manufacturer needs to secure against arbitrary reading of memory; therefore, a device may be locked to a certain operator. To get around this security, mobile forensics tool vendors often develop their own boot loaders, enabling the forensic tool to access the memory (and often, also to bypass user passcodes or pattern locks).
Generally the physical extraction is split into two steps, the dumping phase and the decoding phase.
Brute force acquisition
Brute force acquisition can be performed by third-party passcode brute force tools that send a series of passcodes or passwords to the mobile device. This is a time-consuming method, but effective nonetheless. The technique uses trial and error in an attempt to find the correct password or PIN to authenticate access to the mobile device. Despite the process taking an extensive amount of time, it is still one of the best methods to employ if the forensic professional is unable to obtain the passcode. With currently available software and hardware it has become quite easy to break the encryption on a mobile device's password file to obtain the passcode. Two manufacturers have become public since the release of the iPhone 5: Cellebrite and GrayShift. Their products are intended for law enforcement agencies and police departments. The Cellebrite UFED Ultimate unit costs over US$40,000 and GrayShift's system costs $15,000. Brute forcing tools are connected to the device and, on iOS devices, physically send codes from 0000 to 9999 in sequence until the correct code is successfully entered. Once the code entry has been successful, full access to the device is given and data extraction can commence.
Tools
Early investigations consisted of live manual analysis of mobile devices; with examiners photographing or writing down useful material for use as evidence. Without forensic photography equipment such as Fernico ZRT, EDEC Eclipse, or Project-a-Phone, this had the disadvantage of risking the modification of the device content, as well as leaving many parts of the proprietary operating system inaccessible.
In recent years a number of hardware/software tools have emerged to recover logical and physical evidence from mobile devices. Most tools consist of both hardware and software portions. The hardware includes a number of cables to connect the mobile device to the acquisition machine; the software exists to extract the evidence and, occasionally, even to analyze it.
Most recently, mobile device forensic tools have been developed for the field. This is in response both to military units' demand for fast and accurate anti-terrorism intelligence, and to law enforcement demand for forensic previewing capabilities at a crime scene, search warrant execution, or exigent circumstances. Such mobile forensic tools are often ruggedized for harsh environments (e.g. the battlefield) and rough treatment (e.g. being dropped or submerged in water).
Generally, because it is impossible for any one tool to capture all evidence from all mobile devices, mobile forensic professionals recommend that examiners establish entire toolkits consisting of a mix of commercial, open source, broad support, and narrow support forensic tools, together with accessories such as battery chargers, Faraday bags or other signal disruption equipment, and so forth.
Commercial forensic tools
Some current tools include Belkasoft Evidence Center, Cellebrite UFED, Oxygen Forensic Detective, Elcomsoft Mobile Forensic Bundle, Susteen Secure View, MOBILEdit Forensic Express and Micro Systemation XRY.
Some tools have additionally been developed to address increasing criminal usage of phones manufactured with Chinese chipsets, which include MediaTek (MTK), Spreadtrum and MStar. Such tools include Cellebrite's CHINEX, and XRY PinPoint.
Open source
Most open source mobile forensics tools are platform-specific and geared toward smartphone analysis. Though not originally designed to be a forensics tool, BitPim has been widely used on CDMA phones as well as LG VX4400/VX6000 and many Sanyo Sprint cell phones.
Physical tools
Forensic desoldering
Commonly referred to as a "Chip-Off" technique within the industry, the last and most intrusive method to get a memory image is to desolder the non-volatile memory chip and connect it to a memory chip reader. This method contains the potential danger of total data destruction: it is possible to destroy the chip and its content because of the heat required during desoldering. Before the invention of the BGA technology it was possible to attach probes to the pins of the memory chip and to recover the memory through these probes. The BGA technique bonds the chips directly onto the PCB through molten solder balls, such that it is no longer possible to attach probes.
Desoldering the chips is done carefully and slowly, so that the heat does not destroy the chip or data. Before the chip is desoldered the PCB is baked in an oven to eliminate remaining water. This prevents the so-called popcorn effect, in which the remaining water would burst the chip package during desoldering.
There are mainly three methods to melt the solder: hot air, infrared light, and steam-phasing. The infrared light technology works with a focused infrared light beam onto a specific integrated circuit and is used for small chips. The hot air and steam methods cannot focus as much as the infrared technique.
Chip re-balling
After desoldering the chip a re-balling process cleans the chip and adds new tin balls to the chip. Re-balling can be done in two different ways.
The first is to use a stencil. The stencil is chip-dependent and must fit exactly. Then the tin-solder is put on the stencil. After cooling the tin the stencil is removed and if necessary a second cleaning step is done.
The second method is laser re-balling. Here the stencil is programmed into the re-balling unit. A bondhead (looks like a tube/needle) is automatically loaded with one tin ball from a solder ball singulation tank. The ball is then heated by a laser, such that the tin-solder ball becomes fluid and flows onto the cleaned chip. Instantly after melting the ball the laser turns off and a new ball falls into the bondhead. While reloading, the bondhead of the re-balling unit moves to the next pin.
A third method makes the entire re-balling process unnecessary. The chip is connected to an adapter with Y-shaped springs or spring-loaded pogo pins. The Y-shaped springs need to have a ball onto the pin to establish an electric connection, but the pogo pins can be used directly on the pads on the chip without the balls.
The advantage of forensic desoldering is that the device does not need to be functional and that a copy without any changes to the original data can be made. The disadvantage is that the re-balling devices are expensive, so this process is very costly and there are some risks of total data loss. Hence, forensic desoldering should only be done by experienced laboratories.
JTAG
Existing standardized interfaces for reading data are built into several mobile devices, e.g., to get position data from GPS equipment (NMEA) or to get deceleration information from airbag units.
Not all mobile devices provide such a standardized interface nor does there exist a standard interface for all mobile devices, but all manufacturers have one problem in common. The miniaturizing of device parts opens the question how to automatically test the functionality and quality of the soldered integrated components. For this problem an industry group, the Joint Test Action Group (JTAG), developed a test technology called boundary scan.
Despite the standardization there are four tasks before the JTAG device interface can be used to recover the memory. To find the correct bits in the boundary scan register one must know which processor and memory circuits are used and how they are connected to the system bus. When not accessible from outside one must find the test points for the JTAG interface on the printed circuit board and determine which test point is used for which signal. The JTAG port is not always soldered with connectors, such that it is sometimes necessary to open the device and re-solder the access port. The protocol for reading the memory must be known and finally the correct voltage must be determined to prevent damage to the circuit.
The boundary scan produces a complete forensic image of the volatile and non-volatile memory. The risk of data change is minimized and the memory chip doesn't have to be desoldered. Generating the image can be slow and not all mobile devices are JTAG enabled. Also, it can be difficult to find the test access port.
Command line tools
System commands
Mobile devices do not provide the possibility of running or booting from a CD, or of connecting to a network share or another device with clean tools. Therefore, system commands may be the only way to save the volatile memory of a mobile device. Given the risk of modified system commands, it must be assessed whether the volatile memory is really important. A similar problem arises when no network connection is available and no secondary memory can be connected to the mobile device, because the volatile memory image must then be saved on the internal non-volatile memory, where the user data is stored and where important deleted data will most likely be lost. System commands are the cheapest method, but imply some risk of data loss. Every command usage with options and output must be documented.
AT commands
AT commands are old modem commands, e.g., Hayes command set and Motorola phone AT commands, and can therefore only be used on a device that has modem support. Using these commands one can only obtain information through the operating system, such that no deleted data can be extracted.
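For example, over a serial or USB modem interface an examiner might issue standard 3GPP AT commands such as the following (a sketch; the exact command set supported varies by device):
AT+CGSN            (query the handset's IMEI)
AT+CPBR=1,100      (read phonebook entries 1 to 100)
AT+CMGF=1          (select text mode for SMS handling)
AT+CMGL="ALL"      (list all stored SMS messages)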
dd
For external memory and the USB flash drive, appropriate software, e.g., the Unix command dd, is needed to make the bit-level copy. Furthermore, USB flash drives with memory protection do not need special hardware and can be connected to any computer. Many USB drives and memory cards have a write-lock switch that can be used to prevent data changes, while making a copy.
If the USB drive has no protection switch, a blocker can be used to mount the drive in a read-only mode or, in an exceptional case, the memory chip can be desoldered. The SIM and memory cards need a card reader to make the copy. The SIM card can be analyzed in a forensically sound manner, making it possible to recover (deleted) data such as contacts or text messages.
The Android operating system includes the dd command. In a blog post on Android forensic techniques, a method to live image an Android device using the dd command is demonstrated.
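The general approach is sketched below; the block device path and port number are illustrative and root access on the handset is assumed. The image is streamed over a forwarded TCP port so that nothing is written to the handset's own storage:
adb forward tcp:8888 tcp:8888
adb shell
# on the device, as root:
dd if=/dev/block/mmcblk0 bs=4096 | busybox nc -l -p 8888
# on the examination workstation:
nc 127.0.0.1 8888 > android_image.dd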
Non-forensic commercial tools
Flasher tools
A flasher tool is programming hardware and/or software that can be used to program (flash) the device memory, e.g., EEPROM or flash memory. These tools mainly originate from the manufacturer or service centers for debugging, repair, or upgrade services. They can overwrite the non-volatile memory and some, depending on the manufacturer or device, can also read the memory to make a copy, originally intended as a backup. The memory can be protected from reading, e.g., by software command or destruction of fuses in the read circuit.
Note, this would not prevent writing or using the memory internally by the CPU. The flasher tools are easy to connect and use, but some can change the data and have other dangerous options or do not make a complete copy.
Controversies
In general there exists no standard for what constitutes a supported device in a specific product. This has led to the situation where different vendors define a supported device differently. A situation such as this makes it much harder to compare products based on vendor provided lists of supported devices. For instance a device where logical extraction using one product only produces a list of calls made by the device may be listed as supported by that vendor while another vendor can produce much more information.
Furthermore, different products extract different amounts of information from different devices. This leads to a very complex landscape when trying to overview the products. In general this leads to a situation where testing a product extensively before purchase is strongly recommended. It is quite common to use at least two products which complement each other.
Mobile phone technology is evolving at a rapid pace. Digital forensics relating to mobile devices seems to be at a standstill or evolving slowly. For mobile phone forensics to catch up with the release cycles of mobile phones, a more comprehensive and in-depth framework for evaluating mobile forensic toolkits should be developed, and data on appropriate tools and techniques for each type of phone should be made available in a timely manner.
Anti-forensics
Anti-computer forensics is more difficult because of the small size of the devices and the user's restricted data accessibility. Nevertheless, there are developments to secure the memory in hardware with security circuits in the CPU and memory chip, such that the memory chip cannot be read even after desoldering.
See also
List of digital forensics tools
References
External links
Conference 'Mobile Forensics World'
Chip-Off Forensics (forensicwiki.org)
JTAG Forensics (forensicwiki.org)
Mobile Phone Forensics Case Studies (QCC Global Ltd)
Computer security procedures
Digital forensics
Information technology audit
Mobile computers |
6130983 | https://en.wikipedia.org/wiki/Tusk%20%28song%29 | Tusk (song) | "Tusk" is a song by British-American rock band Fleetwood Mac from the 1979 double LP of the same name. The song peaked at number eight in the United States for three weeks, reached number six in the United Kingdom (where it was certified Silver for sales of over 250,000 copies), number five in Canada, and number three in Australia. It was one of the first songs to be released using a digital mixdown from an original analog source.
The single was released with two different picture sleeves in many territories: the first featured the black and white picture of producer/engineer Ken Caillat's dog Scooter snapping at a trouser leg, the same as that used for the album cover, whilst the second featured a plain cover with the same font as the album cover but without the dog picture. A limited promotional 12-inch version, featuring mono and stereo versions, was also released to US radio stations.
A slightly different mix of the track appeared on the retrospective four-disc compilation 25 Years – The Chain in 1992.
History
Looking for a title track for the as yet unnamed album, Mick Fleetwood suggested that they take the rehearsal riff that Lindsey Buckingham used for sound-checks. Producers Richard Dashut and Ken Caillat then created a drum-driven production. In addition to the standard drum kit, Fleetwood Mac also experimented with different found sounds on the song. Fleetwood and Buckingham played lamb chops and a Kleenex box on the track respectively.
At the request of Mick Fleetwood, the band recruited the University of Southern California's Trojan Marching Band to play on the single. A mobile studio was installed in Los Angeles' Dodger Stadium to record the marching band. The recording session took place on June 4, 1979. Some recorded footage of the session made it into the song's music video. John McVie was in Tahiti during the Dodger Stadium recording, but he is represented in the video by a cardboard cutout carried around by Mick Fleetwood and later positioned in the stands with the other band members.
The Trojan Marching Band's contributions set a record for the highest number of musicians performing on a single. At the time, the marching band had 112 members. During a game at the Los Angeles Memorial Coliseum on October 4, 1980, Lindsey Buckingham, Stevie Nicks, and Mick Fleetwood presented the Trojan Band with a platinum disc for their contributions on "Tusk". The song was also performed with the Trojan Band during Fleetwood Mac's 1997 concert for the recording of the live album The Dance.
For the Tusk Tour, the band used an Oberheim four-voice synthesizer played by keyboard tech Jeffery Sova to cover the horn parts. An OB-X with a cassette interface was kept backstage in case the Four-Voice broke down. Christine McVie, who was expected to handle a percussion part for live renditions of "Tusk", instead opted to play the accordion, an instrument she never intended to learn. "It was just laying around the stage one day. I wasn't sure what I was going to play on 'Tusk'. I thought I might wind up playing some kind of percussion, but I just picked it up and started doing the riff."
In 2014, it was featured on the soundtrack of Kevin Smith's movie of the same name.
Reception
Billboard Magazine described "Tusk" as "an eerie combination of vocals and a heavy percussion track." Billboard suggested that it was "not as accessible" as other Fleetwood Mac songs and that it was more difficult to "get a handle" on the hook. Cash Box said it "may mystify some with its droning drum beat, the inclusion of the USC Marching Band and dissonant break" but that it "has a mesmerizing quality."
Personnel
Lindsey Buckingham – guitars, Kleenex box, vocals
Christine McVie – electric piano, backing vocals
Stevie Nicks – backing vocals
John McVie – bass guitar
Mick Fleetwood – drums, percussion, lamb chops
Additional personnel
USC Trojan Marching Band – percussion, horns
Charts
Weekly charts
Year-end charts
Certifications
References
External links
Fleetwood Mac songs
1979 singles
Songs written by Lindsey Buckingham
Song recordings produced by Ken Caillat
Song recordings produced by Richard Dashut
Warner Records singles
1979 songs |
40624990 | https://en.wikipedia.org/wiki/Open%20Control%20Architecture | Open Control Architecture | The Open Control Architecture (OCA) is a communications protocol architecture for control, monitoring, and connection management of networked audio and video devices. Such networks are referred to as "media networks".
The official specification of OCA is the Audio Engineering Society (AES) standard known as AES70-2015, or just AES70. This document will use the newer term "AES70" to refer to the standard and the architecture it specifies.
AES70 is an open standard that may be used freely, without licenses, fees, or organization memberships.
Applicability
AES70 is intended to support media networks that combine devices from diverse manufacturers. Targeted for professional applications, AES70 is suitable for media networks of 2 to 10,000 devices, including networks with mission-critical and/or life-safety roles.
AES70 is for device control, monitoring, and connection management only. It does not provide transport of media program material. However, AES70 is designed to work with virtually any media transport scheme, as the application requires.
AES70's parts are separable and may be used independently. For example, a device may implement AES70 connection management, but use other means for operational control and monitoring.
AES70 is termed an "architecture" because it provides the basis for definition of multiple control protocols. These protocols all share a common programming model, but vary in signalling detail, depending on the form of the underlying data transport mechanism. An AES70 application will use whichever AES70 protocol is appropriate for the communications method available.
Background
OCA, the architecture of AES70, was developed by the OCA Alliance, a trade association, beginning in 2011. OCA was based on an existing control protocol named OCP, which had been created by Bosch Communications Systems in 2009 and 2010. OCP was in turn based on an embryonic control protocol standard named AES-24, developed by the AES in the early 1990s.
From the outset, it was the intention of all involved to have OCA rendered into an open public standard. The Alliance completed OCA development in the Fall of 2014, and transferred the specification to the AES for rendering into a formal standard. AES70, the formal standard, was published on January 4, 2016.
Today, the OCA Alliance works to develop and enhance the functionality of AES70 and to promote AES70's adoption throughout the professional media systems industry. The Alliance promotes understanding and adoption of AES70, facilitates the creation of AES70 implementations and related tools and technologies, and develops future functional enhancements of the AES70 standard.
Structural Overview
Scope
AES70 defines the control interface that a media device presents to a network to which it is connected. Thus, AES70 is concerned with the representation of device functions in a systematic way, and with the control and monitoring of those functions via a well-defined family of protocols.
Media networks normally include one or more devices called "controllers" with user interfaces that allow humans to control and monitor the audio and/or video functioning of the networked devices. In AES70-compliant networks, controllers use AES70 protocols to communicate with the devices they control.
AES70 defines the control protocol used between controllers and devices; its scope does not extend to cover the design or construction of controllers or their user interfaces.
AES70 is intended to be used for professional applications. Technical requirements for such applications have been detailed elsewhere. OCA's scope excludes applications in homes, automobiles, and other consumer areas.
Device Model
The AES70 Device Model is the canonical description of the control interface that an AES70-compliant device presents to the network. The AES70 Device Model is object-oriented.
It defines a required and an optional set of objects ("OCA objects") that the device's control interface implements. Using an AES70 protocol, controllers may access the properties of these objects to perform control, monitoring, and connection management operations.
OCA objects are abstractions that represent device control and monitoring points and media connections. They may or may not correspond to actual programming objects or hardware components inside the device. If a device correctly implements an AES70 protocol, it is AES70-compliant. AES70 does not define how that may or should be accomplished.
Generally speaking, the AES70 device model tends to differ from device models in other control architectures in several ways:
AES70 does not presume a hierarchical device structure.
AES70 does not predefine specific processing configurations, signal processing modules, device types, or device families.
AES70 does not define controller user interfaces or user interface elements.
AES70 has strong support for dynamically reconfigurable devices.
AES70 offers a strong and transport-agnostic model for connection management.
AES70's repertoire of management and housekeeping functions is relatively rich.
Class Structure
The AES70 Class Structure defines a set of classes ("OCA Classes") that devices may use to instantiate OCA objects. There are three kinds of classes:
Workers, which represent application functions of devices—gain controls, level meters, switches, equalizers, et cetera.
Agents, which modify and assist control functions in various ways.
Managers, which represent various global device states.
OCA classes may be broadly grouped into three functional sets:
Management classes, which provide basic device management and housekeeping functions.
Control and monitoring classes, which are concerned with device operation.
Connection management classes, which are concerned with setup, supervision, and teardown of media stream connections, and with directory (aka "discovery") services for location and identification of network devices.
Protocols
As noted above, the AES70 architecture supports multiple protocols, depending on the nature of the network medium used. At present, AES70 defines one protocol, named OCP.1. OCP.1 is the AES70 protocol for TCP/IP networks. Future plans include OCP.2, a byte-serial version for USB networks, Bluetooth connections, and point-to-point links, and OCP.3, a text version in JSON.
Each AES70 protocol defines three kinds of messages, as follows:
Commands - directives from a controller to an object in a device, requesting some kind of action or retrieving some parameter value;
Responses - replies from an object to a controller, indicating success or failure of a previous command, and returning parameter values, where requested;
Notifications - automatically generated messages from an object in a device to a controller, indicating the occurrence of some condition or periodically reporting a parameter value such as signal amplitude.
Control Repertoire
The AES70 control repertoire covers control, monitoring, and connection management of audio devices. Future versions will expand the audio control repertoire, and may address video devices as well.
AES70 includes features that allow manufacturers to extend the OCA class structure to address functions not in the standard repertoire. Such extensions may be public or confidential, as the manufacturer chooses.
Table 1 summarizes the AES70-2015 control repertoire.
Notable Features
Connection Management
Although AES70 does not itself provide media transport functions, it is designed to interface with modern media transport standards to control signal routing and other connection setup functions, and to interface with network directory/discovery services. In this capacity, AES70 provides a useful level of abstraction for applications, allowing controllers and devices to use one common software model for managing stream connections of various transport architectures.
The OCA Alliance defines recommended practices for interfacing AES70 with various well-known media transport architectures. The specification for interfacing AES70 with a given media transport scheme is called an AES70 Adaptation.
Control Grouping
AES70 includes an architectural solution to the problems of control grouping, i.e. the use of a single control input to effect multiple operating parameters. An example of control grouping is a master gain control covering multiple device channels in one or more devices.
Control grouping poses difficult problems, especially in systems where a given operating parameter may be affected by multiple control groups. For example, in a stereophonic multiway sound system, the gain of the left-channel high-frequency amplifier may be affected by settings of master controls for (a) overall high-frequency level, (b) left-channel level, and (c) overall level of the entire system. In such systems, machine intelligence is required to manage cumulative settings effects that lead to overrange or underrange parameter values. The AES70 grouping mechanism provides a basis for such management, for one or many devices.
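As a rough illustration (not part of the standard), the bookkeeping such a grouping mechanism must perform can be sketched in Python: each affected parameter combines its own setting with every applicable group offset, and the cumulative result is clamped to the device's legal range.

    def effective_gain_db(channel_gain_db, group_offsets_db, lo=-120.0, hi=12.0):
        # Combine the channel's own gain with all master-group offsets that apply
        # to it, then clamp so the result cannot overrange or underrange.
        total = channel_gain_db + sum(group_offsets_db)
        return max(lo, min(hi, total))

    # Left-channel HF amplifier affected by three masters: overall HF level,
    # left-channel level, and overall system level.
    print(effective_gain_db(0.0, [-3.0, -2.0, 20.0]))   # clamped to 12.0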
Snapshot and Preset Management
AES70 includes a powerful and general mechanism for applying, storing, recalling, uploading, and downloading sets of operating parameter values. Both partial and full snapshots are supported.
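Conceptually, and independently of AES70's actual object model, a partial snapshot can be regarded as a mapping from parameters to values that is applied on top of the current device state, as in this simplified Python sketch:

    def apply_snapshot(device_state, snapshot):
        # Parameters named in the snapshot are overwritten; all others keep
        # their current values, which is what makes the snapshot "partial".
        new_state = dict(device_state)
        new_state.update(snapshot)
        return new_state

    current = {"input1.gain_db": 0.0, "output3.mute": False}
    preset = {"input1.gain_db": -6.0, "output3.mute": True}   # a stored partial snapshot
    print(apply_snapshot(current, preset))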
Reconfigurable DSP Device Setup
AES70 includes complete support for managing the configurations of reconfigurable DSP devices, i.e. software-based devices whose signal processing topologies can be defined and redefined at runtime by external controllers. For such devices, AES70 supports creation, configuration, and deletion of signal processing elements and the internal signal paths that connect them.
Proprietary Extensibility
AES70 is designed to support proprietary extensions with maximum compatibility. Manufacturers may define their own extensions to the control repertoire, and these will coexist peacefully with standard elements.
Upward / Downward Compatibility
AES70 devices and controllers will continue to interoperate as AES70 evolves over the years. Devices that use various versions of OCA will generally be intermixable in one media network without problems.
Security
AES70 protocols offer encryption and authentication options that allow the construction of secure control and monitoring networks. Completely secure media networks will require encryption of transmitted program content as well; the mechanisms for such encryption lie outside the scope of AES70, although AES70 may be used to configure and control them.
Reliable Firmware Update Capability
AES70 defines primitives that allow reliable update of device firmware over the network. These primitives may be used by maintenance software to ensure that incomplete firmware updates do not render critical devices and networks inoperative.
Availability
AES70 is an open and license-free standard. It may be freely used in products as manufacturers choose. Although AES70 is nurtured and promoted by the OCA Alliance, membership in the Alliance is not required in order to use AES70.
AES70 Documents
AES70 documents are available from the Audio Engineering Society (AES) Standards Store. The standard is in three parts and two significant appendices, as follows:
1. AES70 Framework
Also known as OCF, this specification describes the overall architecture of AES70 and its mechanisms. OCF is published in a document named AES70-1-2015: AES standard for Audio applications of networks - Open Control Architecture - Part 1: Framework.
2. AES70 Class Structure
Also known as OCC, this specification describes the object-oriented class structure that defines the functional repertoire (connection management, control, and monitoring) of AES70. OCC is published in a document named AES70-2-2015: AES standard for Audio applications of networks - Open Control Architecture - Part 2: Class Structure
It is critical for readers also to download this document's Appendix A in either of two forms (see below for explanation):
AES70-2-2015 Appendix A (Enterprise Architect format)
or
AES70-2-2015 Appendix A (XMI format)
3. AES70 Protocols
Also known as OCP.1, OCP.2, et cetera, these specifications describe protocols that implement OCA control over various types of networks.
In AES70-2015, only one protocol, OCP.1, is defined; it is for TCP/IP networks. Future updates to the standard will define additional protocols. OCP.1 is published in a document named AES70-3-2015: AES standard for Audio applications of networks - Open Control Architecture - Part 3: Protocol for TCP/IP Networks
Readers should also download this document's Appendix B in either of two forms (see below for explanation):
AES70-3-2015 Appendix B (Enterprise Architect format)
or
AES70-3-2015 Appendix B (XMI format)
The Appendices
The two appendices listed above are Unified Modeling Language (UML) specifications.
The UML files are in two forms:
The *.eap files are master files from a UML tool named Enterprise Architect from Sparx Systems. The usual version of the tool costs US$240, but Sparx Systems offers a free viewer.
The *.xmi files are master files in XMI 2.1, a standard format for representation of UML information. XMI stands for "XML Metadata Interchange". XMI files can be opened by most UML editors, including free ones. See XML Metadata Interchange for more information.
The OCA Alliance
The OCA Alliance is a non-profit corporation originally formed to secure the standardization of OCA. With the publication of the AES70 standard in 2016, the Alliance's purposes have evolved, and are now:
Promoting the adoption of AES70 through marketing, education and training.
Developing documents and tools that complement the AES70 standard, by providing useful advice and materials to developers of AES70-compliant products and to end users of AES70 systems.
Working with other standards groups to ensure the optimum blending of AES70 with other industry media networking standards, especially those related to media program transfer.
Developing recommended enhancements to the AES70 standard.
Alliance members are large and small companies who desire to steer the evolution of AES70, and to benefit from the exchange of technology and business information that a trade association can provide. New members are always welcome.
Available development tools / code
A number of development tools and open-source code resources are available to help developers get started with AES70-compatible products.
A device implementation example in C++ can be downloaded from https://github.com/OCAAlliance/OCAMicro
AES70 demo - ALSA (Linux Sound Card Driver) as OCA Device with Cloud UI controller: https://deuso.de/alsa-demo/
Free tools can be downloaded from https://ocaalliance.github.io/downloads.html
A JavaScript controller library can be downloaded from https://github.com/DeutscheSoft/AES70.js
An npm package with AES70.js is available at https://www.npmjs.com/package/aes70
References
External links
http://ocaalliance.com/, the OCA Alliance website.
http://www.aes.org/standards, the Audio Engineering Society standards page. AES standards participation is open to all; AES membership is not necessary.
https://github.com/OCAAlliance/OCAMicro, a device implementation of the AES70 protocol. Supported by OCA Alliance members.
Audio Engineering Society standards
Application layer protocols
Network protocols |
61366469 | https://en.wikipedia.org/wiki/Kon-Boot | Kon-Boot | Kon-Boot (aka konboot, kon boot) is a software utility that allows users to bypass Microsoft Windows passwords and Apple macOS passwords (Linux support has been deprecated) without lasting or persistent changes to the system on which it is executed. It is also the first reported tool capable of bypassing Windows 10 online (live) passwords and supporting both Windows and macOS systems. It is widely used in computer security, especially in penetration testing. Since version 3.5 Kon-Boot is also able to bypass the SecureBoot feature.
History
Kon-Boot was originally designed as a proof-of-concept freeware security tool, mostly for people who tend to forget their passwords. The main idea was to allow users to log in to the target computer without knowing the correct password and without making any persistent changes to the system on which it is executed.
The first Kon-Boot release was announced in 2008 on the DailyDave mailing list. Version 1.0 (freeware) allowed users to log in to Linux-based operating systems and to bypass the authentication process (allowing access to the system without knowing the password).
In 2009 the author of the software announced Kon-Boot for Linux and 32-bit Microsoft Windows systems. This release provided additional support for bypassing Windows system passwords on any Windows operating system from Windows Server 2008 to Windows 7. This version is still available as freeware.
The newest Kon-Boot releases are available only as commercial products and are still maintained. Each commercial license ties the software to one USB pen drive.
The current version is able to bypass passwords on current Windows and macOS operating systems (Linux support has been deprecated).
Technology
Kon-Boot works like a bootkit (and thus often triggers false-positive alerts in antivirus software). It injects (hides) itself into BIOS memory and modifies the kernel code on the fly (at runtime), temporarily changing the code responsible for verifying the user's authorization data while the operating system loads.
In contrast to password-reset tools like CHNTPW (the Offline NT Password Editor), Kon-Boot does not modify system files or the SAM hive; all changes are temporary and disappear after the system reboots.
Additional Features
While by default Kon-Boot bypasses Windows passwords, it also includes some additional features that are worth noting:
Kon-Boot can change Windows passwords through its embedded Sticky-Keys feature. After a successful Windows boot with Kon-Boot, the user can tap the SHIFT key five times and Kon-Boot will open a Windows console window running with local system privileges. This fully working console can be used for a variety of purposes; for example, a Windows password can be changed with the command net user [username] [newpassword], and the selected user can later be added as a Windows administrator by typing net localgroup administrators [username] /add. Similarly, the command net user [username] * will erase the current Windows password for the selected user. Many other actions are possible, since the Windows console is running with system privileges.
The commercial Kon-Boot editions include an Automatic PowerShell Script Execution feature, which automatically executes a given PowerShell script with full system privileges after Windows boots. This can be used to automate tasks such as forensic data gathering. To use this feature, Windows needs to be installed in UEFI mode.
Limitations (prevention)
Users concerned about tools like Kon-Boot should use disk encryption software (FileVault, BitLocker, VeraCrypt, etc.), as Kon-Boot is not able to bypass disk encryption. A BIOS password and the Secure Boot feature are also good preventive measures, although Kon-Boot since version 3.5 is able to bypass Secure Boot. Kon-Boot does not support virtualization and instructs users to turn it off in the BIOS, and it does not support ARM devices such as Apple's M1 chip.
References
External links
Official website
Password cracking software |
1153898 | https://en.wikipedia.org/wiki/KVM%20switch | KVM switch | A KVM switch (with KVM being an abbreviation for "keyboard, video and mouse") is a hardware device that allows a user to control multiple computers from one or more sets of keyboards, video monitors, and mice.
Name
Switches to connect multiple computers to one or more peripherals have had a variety of names.
The earliest name was Keyboard Video Switch (KVS). With the advent of the mouse, the Keyboard, Video and Mouse (KVM) switch became popular. The name was introduced by Remigius Shatas, the founder of Cybex, a peripheral switch manufacturer, in 1995. Some companies call their switches Keyboard, Video, Mouse and Peripheral (KVMP).
Types
With the popularity of USB, USB keyboards, mice, and I/O devices are still the most common devices connected to a KVM switch. The classes of KVM switches reviewed here are based on the different core technologies used to handle USB I/O devices, including keyboards, mice, and touchscreen displays (USB-HID = USB Human Interface Device).
USB hub based KVM: Also called an enumerated KVM switch or USB switch selector. A connected/shared USB device must go through the full initiation process (USB enumeration) every time the KVM is switched to another target system/port. Switching to a different port behaves just as if the USB device were physically unplugged and plugged into the targeted system.
Emulated USB KVM: Dedicated USB console port(s) are assigned to emulate special sets of USB keyboard or mouse switching control information to each connected/targeted system. Emulated USB provides an instantaneous and reliable switching action that makes keyboard hotkeys and mouse switching possible. However, this class of KVM switch only uses generic emulations and consequently has only been able to support the most basic keyboard and mouse features. There are also USB KVM devices that allow cross-platform operating systems and basic keyboard and mouse sharing.
Semi-DDM USB KVM: Dedicated USB console port(s) work with all USB-HID devices (including keyboards and mice), but do not maintain the connected devices' presence to all of the targeted systems simultaneously. This class of KVM takes advantage of DDM (dynamic device mapping) technology.
DDM USB KVM: Dedicated dynamic-device-mapping USB console port(s) work with all USB-HID devices (including keyboards and mice) and maintain the connected devices' special functions and characteristics to each connected/targeted system. This class of KVM switch overcomes the limitations of an emulated USB KVM by presenting the true characteristics of the connected devices to all the computers simultaneously. This means that the extra function keys, wheels, buttons, and controls commonly found on modern keyboards and mice can be used.
Use
A KVM Switch is a hardware device, used in data centers, that allows the control of multiple computers from a single keyboard, monitor and mouse (KVM). This switch then allows data center personnel to connect to any server in the rack. A common example of home use is to enable the use of the full-size keyboard, mouse and monitor of the home PC with a portable device such as a laptop, tablet PC or PDA, or a computer using a different operating system.
KVM switches offer different methods of connecting the computers. Depending on the product, the switch may present native connectors on the device, where standard keyboard, monitor and mouse cables can be attached. Another method is to have a single DB25 or similar connector that aggregates the connections at the switch, with three independent keyboard, monitor and mouse cables running to the computers. Subsequently, these were replaced by a special KVM cable which combined the keyboard, video and mouse cables in a single wrapped extension cable. The advantage of the last approach is the reduction in the number of cables between the KVM switch and connected computers; the disadvantage is the cost of these cables.
The method of switching from one computer to another depends on the switch. The original peripheral switches (Rose, circa 1988) used a rotary switch while active electronic switches (Cybex, circa 1990) used push buttons on the KVM device. In both cases, the KVM aligns operation between different computers and the users' keyboard, monitor and mouse (user console).
In 1992–1993, Cybex Corporation engineered keyboard hot-key commands. Today, most KVMs are controlled through non-invasive hot-key commands (e.g. Ctrl+Ctrl, Scroll Lock+Scroll Lock and the Print Screen keys). Hot-key switching is often complemented with an on-screen display system that displays a list of connected computers.
KVM switches differ in the number of computers that can be connected. Traditional switching configurations range from 2 to 64 possible computers attached to a single device. Enterprise-grade devices interconnected via daisy-chained and/or cascaded methods can support a total of 512 computers equally accessed by any given user console.
Video bandwidth
While HDMI, DisplayPort, and DVI switches have been manufactured, VGA is still the most common video connector found with KVM switches, although many switches are now compatible with DVI connectors. Analogue switches can be built with varying capacities for video bandwidth, affecting the unit's overall cost and quality. A typical consumer-grade switch provides up to 200 MHz bandwidth, allowing for high-definition resolutions at 60 Hz.
For analog video, resolution and refresh rate are the primary factors in determining the amount of bandwidth needed for the signal. The method of converting these factors into bandwidth requirements is a point of ambiguity, in part because it is dependent on the analogue nature and state of the hardware. The same piece of equipment may require more bandwidth as it ages due to increased degradation of the source signal. Most conversion formulas attempt to approximate the amount of bandwidth needed, including a margin of safety. As a rule of thumb, switch circuitry should provide up to three times the bandwidth required by the original signal specification, as this allows most instances of signal loss to be contained outside the range of the signal that is pertinent to picture quality.
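As a worked example of that rule of thumb (the blanking-overhead factor and the use of the pixel rate as a stand-in for signal bandwidth are simplifying assumptions, not values from any standard), a quick Python estimate might look like this:

    def switch_bandwidth_mhz(h_res, v_res, refresh_hz, blanking_overhead=1.35, headroom=3.0):
        # Approximate the pixel rate of the analog signal, then apply the
        # "three times the required bandwidth" margin described above.
        pixel_rate_hz = h_res * v_res * refresh_hz * blanking_overhead
        return headroom * pixel_rate_hz / 1e6

    print(round(switch_bandwidth_mhz(1024, 768, 60)))    # roughly 190 MHz
    print(round(switch_bandwidth_mhz(1280, 1024, 85)))   # roughly 450 MHz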
As CRT-based displays are dependent on refresh rate to prevent flickering, they generally require more bandwidth than comparable flat panel displays.
Monitor
A monitor uses DDC and EDID, transmitted through specific pins, to identify itself to the system. KVM switches may have different ways of handling these data transmissions:
None: the KVM switch lacks the circuitry to handle this data, and the monitor is not "visible" to the system. The system may assume a generic monitor is attached and defaults to safe settings. Higher resolutions and refresh rates may need to be manually unlocked through the video driver as a safety precaution. However, certain applications (especially games) that depend on retrieving DDC/EDID information will not be able to function correctly.
Fake: the KVM switch generates its own DDC/EDID information that may or may not be appropriate for the monitor that is attached. Problems may arise if there is an inconsistency between the KVM's specifications and the monitor's, such as not being able to select desired resolutions.
Pass-through: the KVM switch attempts to make communication between the monitor and the system transparent. However, it may fail to do so in the following ways:
generating Hot Plug Detect (HPD) events for monitor arrival or removal upon switching, or not passing monitor power states - may cause the OS to re-detect the monitor and reset the resolution and refresh rate, or may cause the monitor to enter to or exit from power-saving mode;
not passing or altering MCCS commands - may result in incorrect orientation of the display or improper color calibration.
Microsoft guidelines recommend that KVM switches pass unaltered any I2C traffic between the monitor and the PC hosts, and do not generate HPD events upon switching to a different port while maintaining stable non-noise signal on inactive ports.
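For reference, the two most basic structural rules of a 128-byte EDID base block, which any "fake" or pass-through implementation must preserve for the host to accept the monitor description, can be checked with a few lines of Python (a simplified sketch, not a full EDID parser):

    EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

    def edid_block_plausible(block: bytes) -> bool:
        # A base EDID block is 128 bytes, starts with the fixed 8-byte header,
        # and its bytes sum to 0 modulo 256 (the last byte is a checksum).
        return (len(block) == 128
                and block[:8] == EDID_HEADER
                and sum(block) % 256 == 0)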
Passive and active (electronic) switches
KVM switches were originally passive, mechanical devices based on multi-pole switches and some of the cheapest devices on the market still use this technology. Mechanical switches usually have a rotary knob to select between computers. KVMs typically allow sharing of two or four computers, with a practical limit of about twelve machines imposed by limitations on available switch configurations. Modern hardware designs use active electronics rather than physical switch contacts with the potential to control many computers on a common system backbone.
One limitation of mechanical KVM switches is that any computer not currently selected by the KVM switch does not 'see' a keyboard or mouse connected to it. In normal operation this is not a problem, but while the machine is booting up it will attempt to detect its keyboard and mouse and either fail to boot or boot with an unwanted (e.g. mouseless) configuration. Likewise, a failure to detect the monitor may result in the computer falling back to a low resolution such as (typically) 640x480. Thus, mechanical KVM switches may be unsuitable for controlling machines which can reboot automatically (e.g. after a power failure).
Another problem encountered with mechanical devices is the failure of one or more switch contacts to make firm, low resistance electrical connections, often necessitating some wiggling or adjustment of the knob to correct patchy colors on screen or unreliable peripheral response. Gold-plated contacts improve that aspect of switch performance, but add cost to the device.
Most active (electronic rather than mechanical) KVM devices provide peripheral emulation, sending signals to the computers that are not currently selected to simulate a keyboard, mouse and monitor being connected. These are used to control machines which may reboot in unattended operation. Peripheral emulation services embedded in the hardware also provide continuous support where computers require constant communication with the peripherals.
Some types of active KVM switches do not emit signals that exactly match the physical keyboard, monitor, and mouse, which can result in unwanted behavior of the controlled machines. For example, the user of a multimedia keyboard connected to a KVM switch may find that the keyboard's multimedia keys have no effect on the controlled computers.
Software alternatives
There are software alternatives to some of the functionality of a hardware KVM switch, such as Multiplicity, Input Director and Synergy, which do the switching in software and forward input over standard network connections. This has the advantage of reducing the number of wires needed. Screen-edge switching allows the mouse to move between the monitors of two computers.
Remote KVM devices
There are two types of remote KVM devices that are best described as local remote and KVM over IP.
Local remote (Including KVM over USB)
Local remote KVM device design allows users to control computer equipment located a limited distance away from the user consoles (keyboard, monitor and mouse). These switches always need a direct cable connection from the computer to the KVM switch to the console, and include support for standard category 5 cabling between the computers and users interconnected by the switch device. In contrast, USB-powered KVM devices are able to control computer equipment using a combination of USB, keyboard, mouse and monitor cables of limited length.
KVM over IP (IPKVM)
KVM switch over IP devices use a dedicated micro-controller and potentially specialized video capture hardware to capture the video, keyboard, and mouse signals, compress and convert them into packets, and send them over an Ethernet link to a remote console application that unpacks and reconstitutes the dynamic graphical image. KVM over IP subsystem is typically connected to a system's standby power plane so that it's available during the entire BIOS boot process.
These devices allow multiple computers to be controlled locally or globally with the use of an IP connection. There are performance issues related to LAN/WAN hardware, standard protocols and network latency, so user management is commonly referred to as "near real time".
Access to most remote or "KVM" over IP devices today uses a web browser, although many of the stand-alone viewer software applications provided by manufacturers rely on ActiveX or Java.
Whitelisting
Some KVM chipsets or manufacturers require the "whitelisting" or authority to connect to be implicitly enabled. Without the whitelist addition, the device will not work. This is by design and required to connect non-standard USB devices to KVMs. This is completed by noting the device's ID (usually copied from the Device manager in Windows), or documentation from the manufacturer of the USB device.
Generally all HID or consumer-grade USB peripherals are exempt, but more exotic devices like tablets, digitisers or USB toggles require manual addition to the whitelist table of the KVM.
Implementation
In comparison to conventional methods of remote administration (for example in-band Virtual Network Computing or Terminal Services), a KVM switch has the advantage that it doesn't depend on a software component running on the remote computer, thus allowing remote interaction with base level BIOS settings and monitoring of the entire booting process before, during, and after the operating system loads. Modern KVM over IP appliances or switches typically use at least 128-bit data encryption securing the KVM configuration over a WAN or LAN (using SSL).
KVM over IP devices can be implemented in different ways. With regards to video, PCI KVM over IP cards use a form of screen scraping where the PCI bus master KVM over IP card would access and copy out the screen directly from the graphics memory buffer, and as a result it must know which graphics chip it is working with, and what graphics mode this chip is currently in so that the contents of the buffer can be interpreted correctly as picture data. Newer techniques in OPMA management subsystem cards and other implementations get the video data directly using the DVI bus. Implementations can emulate either PS/2 or USB based keyboards and mice. An embedded VNC server is typically used for the video protocol in IPMI and Intel AMT implementations.
Computer sharing devices
KVM switches are called KVM sharing devices because two or more computers can share a single set of KVM peripherals. Computer sharing devices function in reverse compared to KVM switches; that is, a single PC can be shared by multiple monitors, keyboards, and mice. A computer sharing device is sometimes referred to as a KVM Splitter or reverse KVM switch. While not as common, this configuration is useful when the operator wants to access a single computer from two or more (usually close) locations - for example, a public kiosk machine that also has a staff maintenance interface behind the counter, or a home office computer that doubles as a home theater PC.
See also
Console server
Intel Active Management Technology
Intelligent Platform Management Interface
Remote graphics unit
Dynamic device mapping
Display Control Channel
Reverse DDM
Synergy (software)
References
Computer peripherals
Input/output
Out-of-band management
Computer connectors
KVM FAQs |
17112 | https://en.wikipedia.org/wiki/HMAC | HMAC | In cryptography, an HMAC (sometimes expanded as either keyed-hash message authentication code or hash-based message authentication code) is a specific type of message authentication code (MAC) involving a cryptographic hash function and a secret cryptographic key. As with any MAC, it may be used to simultaneously verify both the data integrity and authenticity of a message.
HMAC can provide authentication using a shared secret instead of using digital signatures with asymmetric cryptography. It trades off the need for a complex public key infrastructure by delegating the key exchange to the communicating parties, who are responsible for establishing and using a trusted channel to agree on the key prior to communication.
Details
Any cryptographic hash function, such as SHA-2 or SHA-3, may be used in the calculation of an HMAC; the resulting MAC algorithm is termed HMAC-X, where X is the hash function used (e.g. HMAC-SHA256 or HMAC-SHA3-512). The cryptographic strength of the HMAC depends upon the cryptographic strength of the underlying hash function, the size of its hash output, and the size and quality of the key.
HMAC uses two passes of hash computation. The secret key is first used to derive two keys – inner and outer. The first pass of the algorithm produces an internal hash derived from the message and the inner key. The second pass produces the final HMAC code derived from the inner hash result and the outer key. Thus the algorithm provides better immunity against length extension attacks.
An iterative hash function breaks up a message into blocks of a fixed size and iterates over them with a compression function. For example, SHA-256 operates on 512-bit blocks. The size of the output of HMAC is the same as that of the underlying hash function (e.g., 256 and 512 bits in the case of SHA-256 and SHA3-512, respectively), although it can be truncated if desired.
HMAC does not encrypt the message. Instead, the message (encrypted or not) must be sent alongside the HMAC hash. Parties with the secret key will hash the message again themselves, and if it is authentic, the received and computed hashes will match.
The definition and analysis of the HMAC construction was first published in 1996 in a paper by Mihir Bellare, Ran Canetti, and Hugo Krawczyk, and they also wrote RFC 2104 in 1997. The 1996 paper also defined a nested variant called NMAC. FIPS PUB 198 generalizes and standardizes the use of HMACs. HMAC is used within the IPsec, SSH and TLS protocols and for JSON Web Tokens.
Definition
This definition is taken from RFC 2104:
HMAC(K, m) = H((K′ ⊕ opad) ∥ H((K′ ⊕ ipad) ∥ m))
K′ = H(K) if K is larger than the block size; otherwise K′ = K
where
H is a cryptographic hash function
m is the message to be authenticated
K is the secret key
K′ is a block-sized key derived from the secret key, K; either by padding to the right with 0s up to the block size, or by hashing down to less than or equal to the block size first and then padding to the right with zeros
‖ denotes concatenation
⊕ denotes bitwise exclusive or (XOR)
opad is the block-sized outer padding, consisting of repeated bytes valued 0x5c
ipad is the block-sized inner padding, consisting of repeated bytes valued 0x36
Implementation
The following pseudocode demonstrates how HMAC may be implemented. Block size is 512 bits (64 bytes) when using one of the following hash functions: SHA-1, MD5, RIPEMD-128.
function hmac is
    input:
        key:        Bytes     // array of bytes
        message:    Bytes     // array of bytes to be hashed
        hash:       Function  // the hash function to use (e.g. SHA-1)
        blockSize:  Integer   // the block size of the hash function (e.g. 64 bytes for SHA-1)
        outputSize: Integer   // the output size of the hash function (e.g. 20 bytes for SHA-1)

    // Compute the block-sized key
    block_sized_key = computeBlockSizedKey(key, hash, blockSize)

    o_key_pad ← block_sized_key xor [0x5c blockSize]   // outer padded key
    i_key_pad ← block_sized_key xor [0x36 blockSize]   // inner padded key

    return hash(o_key_pad ∥ hash(i_key_pad ∥ message))

function computeBlockSizedKey is
    input:
        key:        Bytes     // array of bytes
        hash:       Function  // the hash function to use
        blockSize:  Integer   // the block size of the hash function

    // Keys longer than blockSize are shortened by hashing them
    if (length(key) > blockSize) then
        key = hash(key)

    // Keys shorter than blockSize are padded to blockSize by appending zeros on the right
    if (length(key) < blockSize) then
        return Pad(key, blockSize)

    return key
Design principles
The design of the HMAC specification was motivated by the existence of attacks on more trivial mechanisms for combining a key with a hash function. For example, one might assume the same security that HMAC provides could be achieved with MAC = H(key ∥ message). However, this method suffers from a serious flaw: with most hash functions, it is easy to append data to the message without knowing the key and obtain another valid MAC ("length-extension attack"). The alternative, appending the key using MAC = H(message ∥ key), suffers from the problem that an attacker who can find a collision in the (unkeyed) hash function has a collision in the MAC (as two messages m1 and m2 yielding the same hash will provide the same start condition to the hash function before the appended key is hashed, hence the final hash will be the same). Using MAC = H(key ∥ message ∥ key) is better, but various security papers have suggested vulnerabilities with this approach, even when two different keys are used.
No known extension attacks have been found against the current HMAC specification which is defined as H(key ∥ H(key ∥ message)) because the outer application of the hash function masks the intermediate result of the internal hash. The values of ipad and opad are not critical to the security of the algorithm, but were defined in such a way to have a large Hamming distance from each other and so the inner and outer keys will have fewer bits in common. The security reduction of HMAC does require them to be different in at least one bit.
The Keccak hash function, which was selected by NIST as the SHA-3 competition winner, doesn't need this nested approach and can be used to generate a MAC by simply prepending the key to the message, as it is not susceptible to length-extension attacks.
Security
The cryptographic strength of the HMAC depends upon the size of the secret key that is used. The most common attack against HMACs is brute force to uncover the secret key. HMACs are substantially less affected by collisions than their underlying hashing algorithms alone. In particular, Mihir Bellare proved that HMAC is a PRF under the sole assumption that the compression function is a PRF. Therefore, HMAC-MD5 does not suffer from the same weaknesses that have been found in MD5.
RFC 2104 requires that "keys longer than B bytes are first hashed using H", which leads to a confusing pseudo-collision: if the key is longer than the hash block size (e.g. 64 bytes for SHA-1), then HMAC(k, m) is computed as HMAC(H(k), m). This property is sometimes raised as a possible weakness of HMAC in password-hashing scenarios: it has been demonstrated that it is possible to find a long ASCII string and a random value whose hash will also be an ASCII string, and both values will produce the same HMAC output.
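The key-hashing behaviour behind this pseudo-collision is easy to reproduce with Python's standard library:

    import hashlib
    import hmac

    long_key = b"A" * 100                       # longer than SHA-256's 64-byte block
    message = b"example message"
    direct = hmac.new(long_key, message, hashlib.sha256).digest()
    indirect = hmac.new(hashlib.sha256(long_key).digest(), message, hashlib.sha256).digest()
    assert direct == indirect                   # HMAC(k, m) == HMAC(H(k), m)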
In 2006, Jongsung Kim, Alex Biryukov, Bart Preneel, and Seokhie Hong showed how to distinguish HMAC with reduced versions of MD5 and SHA-1 or full versions of HAVAL, MD4, and SHA-0 from a random function or HMAC with a random function. Differential distinguishers allow an attacker to devise a forgery attack on HMAC. Furthermore, differential and rectangle distinguishers can lead to second-preimage attacks. HMAC with the full version of MD4 can be forged with this knowledge. These attacks do not contradict the security proof of HMAC, but provide insight into HMAC based on existing cryptographic hash functions.
In 2009, Xiaoyun Wang et al. presented a distinguishing attack on HMAC-MD5 without using related keys. It can distinguish an instantiation of HMAC with MD5 from an instantiation with a random function with 2^97 queries with probability 0.87.
In 2011 an informational RFC 6151 was published to summarize security considerations in MD5 and HMAC-MD5. For HMAC-MD5 the RFC summarizes that – although the security of the MD5 hash function itself is severely compromised – the currently known "attacks on HMAC-MD5 do not seem to indicate a practical vulnerability when used as a message authentication code", but it also adds that "for a new protocol design, a ciphersuite with HMAC-MD5 should not be included".
In May 2011, RFC 6234 was published detailing the abstract theory and source code for SHA-based HMACs.
Examples
Here are some non-empty HMAC values, assuming 8-bit ASCII or UTF-8 encoding:
HMAC_MD5("key", "The quick brown fox jumps over the lazy dog") = 80070713463e7749b90c2dc24911e275
HMAC_SHA1("key", "The quick brown fox jumps over the lazy dog") = de7c9b85b8b78aa6bc8a7a36f70a90701c9db4d9
HMAC_SHA256("key", "The quick brown fox jumps over the lazy dog") = f7bc83f430538424b13298e6aa6fb143ef4d59a14946175997479dbc2d1a3cd8
HMAC_SHA256("The quick brown fox jumps over the lazy dogThe quick brown fox jumps over the lazy dog", "message") = 5597b93a2843078cbb0c920ae41dfe20f1685e10c67e423c11ab91adfc319d12
References
Notes
Mihir Bellare, Ran Canetti and Hugo Krawczyk, Keying Hash Functions for Message Authentication, CRYPTO 1996, pp. 1–15 (PS or PDF).
Mihir Bellare, Ran Canetti and Hugo Krawczyk, Message authentication using hash functions: The HMAC construction, CryptoBytes 2(1), Spring 1996 (PS or PDF).
External links
RFC2104
Online HMAC Generator / Tester Tool
FIPS PUB 198-1, The Keyed-Hash Message Authentication Code (HMAC)
C HMAC implementation
Python HMAC implementation
Java implementation
Rust HMAC implementation
Message authentication codes
Hashing |
3259020 | https://en.wikipedia.org/wiki/Programmable%20interval%20timer | Programmable interval timer | In computing and in embedded systems, a programmable interval timer (PIT) is a counter that generates an output signal when it reaches a programmed count. The output signal may trigger an interrupt.
Common features
PITs may be one-shot or periodic. One-shot timers will signal only once and then stop counting. Periodic timers signal every time they reach a specific value and then restart, thus producing a signal at periodic intervals. Periodic timers are typically used to invoke activities that must be performed at regular intervals.
Counters are usually programmed with fixed intervals that determine how long the counter will count before it will output a signal.
IBM PC compatible
The Intel 8253 PIT was the original timing device used on IBM PC compatibles. It uses a 1.193182 MHz clock signal (one third of the NTSC color burst frequency, one twelfth of the system clock crystal oscillator) and contains three timers. Timer 0 is used by Microsoft Windows (uniprocessor) and Linux as a system timer, timer 1 was historically used for dynamic random access memory refreshes, and timer 2 for the PC speaker.
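A reload value (divisor) for a desired interrupt rate on timer 0 follows directly from that input clock, as in the illustrative Python sketch below (on the actual 8253/8254 a programmed value of 0 is treated as 65536):

    PIT_CLOCK_HZ = 1_193_182

    def pit_divisor(target_hz: float) -> int:
        # The counter divides the 1.193182 MHz input clock; with a 16-bit divisor
        # the achievable periodic rates range from about 18.2 Hz up to the clock itself.
        divisor = round(PIT_CLOCK_HZ / target_hz)
        return max(1, min(65536, divisor))

    d = pit_divisor(100)            # aim for a 100 Hz periodic tick
    print(d, PIT_CLOCK_HZ / d)      # 11932 -> about 99.998 Hz actual rate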
The LAPIC in newer Intel systems offers a higher-resolution (one microsecond) timer. This is used in preference to the PIT timer in Linux kernels starting with 2.6.18.
See also
High Precision Event Timer
Monostable multivibrator
NE555
References
External links
http://www.luxford.com/high-performance-windows-timers
https://stackoverflow.com/questions/10567214/what-are-linux-local-timer-interrupts
Timing on the PC family under DOS
IBM PC compatibles
Digital electronics |
534995 | https://en.wikipedia.org/wiki/X.Org%20Server | X.Org Server | X.Org Server is the free and open-source implementation of the X Window System display server stewarded by the X.Org Foundation.
Implementations of the client side of the protocol are available e.g. in the form of Xlib and XCB.
The services with which the X.Org Foundation supports X Server include the packaging of the releases; certification (for a fee); evaluation of improvements to the code; developing the web site; and handling the distribution of monetary donations. The releases are coded, documented, and packaged by global developers.
Software architecture
The X.Org Server implements the server side of the X Window System core protocol version 11 (X11) and extensions to it, e.g. RandR.
Version 1.16.0 integrates support for systemd-based launching and management which improved boot performance and reliability.
Device Independent X (DIX)
The Device Independent X (DIX) is the part of the X.Org Server that interacts with clients and implements software rendering. The main loop and the event delivery are part of the DIX.
An X server has a tremendous amount of functionality that must be implemented to support the X core protocol. This includes code tables, glyph rasterization and caching, XLFDs, and the core rendering API which draws graphics primitives.
Device Dependent X (DDX)
The Device Dependent X (DDX) is the part of the X server that interacts with the hardware. In the X.Org Server source code, each directory under "hw" corresponds to one DDX. Hardware comprises graphics cards as well as mice and keyboards. Each driver is hardware-specific and implemented as a separate loadable module.
2D graphics driver
For historical reasons the X.Org Server still contains graphics device drivers supporting some form of 2D rendering acceleration. In the past, mode-setting was done by an X-server graphics device driver specific to some video controller hardware (e.g., a GPU). To this mode-setting functionality, additional support for 2D acceleration was added when such became available with various GPUs. The mode-setting functionality was moved into the DRM and is being exposed through a DRM mode-setting interface, the new approach being called "kernel mode-setting" (KMS). But the 2D rendering acceleration remained.
In Debian the 2D graphics drivers for the X.Org Server are packaged individually and called xserver-xorg-video-*. After installation the 2D graphics driver-file is found under /usr/lib/xorg/modules/drivers/. The package xserver-xorg-video-nouveau installs nouveau_drv.so with a size of 215 KiB, the proprietary Nvidia GeForce driver installs an 8 MiB-sized file called nvidia_drv.so and Radeon Software installs fglrx_drv.so with a size of about 25MiB.
The available free and open-source graphics device drivers are being developed inside of the Mesa 3D-project. While these can be recompiled as required, the development of the proprietary DDX 2D graphics drivers is greatly eased when the X.Org Server keeps a stable API/ABI across multiple of its versions.
With version 1.17 a generic method for mode-setting was mainlined. The xf86-video-modesetting package, the Debian-package being called xserver-xorg-video-modesetting, was retired, and the generic modesetting DDX it contained was moved into the server package to become the KMS-enabled default DDX, supporting the vast majority of AMD, Intel and NVidia GPUs.
On April 7, 2016 AMD employee Michel Dänzer released xf86-video-ati version 7.7.0 and xf86-video-amdgpu version 1.1.0, the latter including support for their Polaris microarchitecture.
Acceleration architectures
There are (at least) XAA (XFree86 Acceleration Architecture), EXA, UXA and SNA.
In the X Window System, XFree86 Acceleration Architecture (XAA) is a driver architecture to make a video card's 2D hardware acceleration available to the X server. It was written by Harm Hanemaayer in 1996 and first released in XFree86 version 3.3. It was completely rewritten for XFree86 4.0. It was removed again from X.Org Server 1.13.
Most drivers implement acceleration using the XAA module. XAA is on by default, though acceleration of individual functions can be switched off as needed in the server configuration file (XF86Config or xorg.conf).
The driver for the ARK chipset was the original development platform for XAA.
In X.Org Server release 6.9/7.0, EXA was released as a replacement for XAA, as XAA supplies almost no speed advantage for current video cards. EXA is regarded as an intermediate step to converting the entire X server to using OpenGL.
Glamor
Glamor is a generic, hardware independent, 2D acceleration driver for the X server that translates the X render primitives into OpenGL operations, taking advantage of any existing 3D OpenGL drivers. In this way, it is functionally similar to Quartz Extreme and QuartzGL (2D performance acceleration) for Apple Quartz Compositor.
The ultimate goal of GLAMOR is to obsolete and replace all the DDX 2D graphics device drivers and acceleration architectures, thereby avoiding the need to write X 2D specific drivers for every supported graphic chipset. Glamor requires a 3D driver with support for shaders.
Glamor performance tuning was accepted for Google Summer of Code 2014. Glamor supports Xephyr and DRI3, and can boost some operations by 700–800%. Since its mainlining into version 1.16 of the X.Org Server, development on Glamor was continued and patches for the 1.17 release were published.
Virtualization
There is a distinct and special DDX for instances of the X.Org Server which run on a guest system inside of a virtualized environment: xf86-video-qxl, a driver for the "QXL video device". SPICE makes use of this driver though it works without it as well.
In the Debian repositories it is called xserver-xorg-video-qxl, cf. https://packages.debian.org/buster/xserver-xorg-video-qxl
Input stack
Under Debian, drivers related to input are found under /usr/lib/xorg/modules/input/. Such drivers are named e.g. evdev_drv.so, mouse_drv.so, synaptics_drv.so or wacom_drv.so.
With version 1.16, the X.Org Server obtained support for the libinput library in form of a wrapper called xf86-input-libinput. At the XDC 2015 in Toronto, libratbag was introduced as a generic library to support configurable mice. xserver-xorg-input-joystick is the input module for the X.Org server to handle classic joysticks and gamepads, which is not meant for playing games under X, but to control the cursor with a joystick or gamepad.
Other DDX components
XWayland
XWayland is a series of patches over the X.Org server codebase that implement an X server running upon the Wayland protocol. The patches are developed and maintained by the Wayland developers for compatibility with X11 applications during the transition to Wayland, and was mainlined in version 1.16 of the X.Org Server in 2014. When a user runs an X application from within Weston, it calls upon XWayland to service the request.
XQuartz
XQuartz is a series of patches from Apple Inc. to integrate support for the X11 protocol into their Quartz Compositor, in a similar way to how XWayland integrates X11 into Wayland compositors.
Xspice
Xspice is a device driver for the X.Org Server. It supports the QXL framebuffer device and includes a wrapper script which makes it possible to launch an X.Org Server whose display is exported via the SPICE protocol. This enables use of SPICE in a remote desktop environment, without requiring KVM virtualization.
Xephyr
Xephyr is an X-on-X implementation. Since version 1.16.0, Xephyr serves as the primary development environment for the new 2D acceleration subsystem (Glamor), permitting rapid development and testing on a single machine.
RandR
RandR (resize and rotate) is a communications protocol written as an extension to the X11 protocol. XRandR provides the ability to resize, rotate and reflect the root window of a screen. RandR is responsible for setting the screen refresh rate. It allows for the control of multiple monitors.
IPC
The X.Org Server and every X client run as distinct processes. On Unix/Linux, a process knows nothing about any other processes. For it to communicate with another process, it relies entirely on the kernel to moderate the communication via the available inter-process communication (IPC) mechanisms.
Unix domain sockets are used to communicate with processes running on the same machine. Special socket function calls are part of the System Call Interface. Although Internet domain sockets can be used locally, Unix domain sockets are more efficient, since they do not have the protocol overhead (checksums, byte orders, etc.).
X.Org Server does not use D-Bus.
Sockets are the most common inter-process communication (IPC) method between the processes of the X server and its various X clients. They provide an application programming interface (API) for communication in the TCP/IP domain and also locally in the UNIX domain. There are several other APIs described in the X Transport Interface, for instance TLI (Transport Layer Interface). Other options for IPC between the X client and server require X Window System extensions, for instance the MIT Shared Memory Extension (MIT-SHM).
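As a rough illustration of this socket-level communication (real clients use Xlib or XCB, which also supply authentication data; the socket path and handshake bytes below follow the X11 convention for display :0 and are shown only as a sketch):

    import socket

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/tmp/.X11-unix/X0")    # Unix domain socket of display :0
    # Connection setup request: byte order ('l' = least significant byte first),
    # protocol version 11.0, and empty authorization-name/data fields.
    setup = (b"l\x00"
             + (11).to_bytes(2, "little") + (0).to_bytes(2, "little")
             + (0).to_bytes(2, "little") + (0).to_bytes(2, "little")
             + b"\x00\x00")
    sock.sendall(setup)
    reply = sock.recv(8)
    print(reply[0])                      # 1 = accepted, 0 = refused (no auth was sent)
    sock.close()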
Multiseat configuration
Multi-seat refers to an assembly of a single computer with multiple "seats", allowing multiple users to sit down at the computer, log in, and use the computer at the same time independently. The computer has multiple keyboards, mice, and monitors attached to each, each "seat" having one keyboard, one mouse and one monitor assigned to it. A "seat" consists of all hardware devices assigned to a specific workplace. It consists of at least one graphics device (graphics card or just an output and the attached monitor) and a keyboard and a mouse. It can also include video cameras, sound cards and more.
Due to limitations of the VT system in the Linux kernel and of the X core protocol (in particular how X defines the relation between the root window and an output of the graphics card), multi-seat does not work out of the box for the usual Linux distribution but necessitates a special configuration.
There are these methods to configure a multi-seat assembly:
multiple Xephyr servers over a host xorg-server
multiple instances of an xorg-server
one graphics card per seat
a single graphics card for all seats
The utilized command-line options of the xorg-server are:
-isolateDevice bus-id Restrict device resets (output) to the device at bus-id. The bus-id string has the form bustype:bus:device:function (e.g., ‘PCI:1:0:0’). At present, only isolation of PCI devices is supported; i.e., this option is ignored if bustype is anything other than ‘PCI’.
vtXX The default for e.g. Debian 9 Stretch is 7, i.e. by pressing Ctrl+Alt+F7 the user can switch to the VT running the xorg-server.
Only the user on the first monitor has the use of VT consoles and can use Ctrl+Alt+Fx to select them. The other users have a GDM login screen and can use the xorg-server normally, but have no VTs.
Even though a single user can utilize multiple monitors connected to the different ports of a single graphics card (cf. RandR), the method which is based on multiple instances of the xorg-server seems to require multiple PCI graphics cards.
It is possible to configure multi-seat employing only one graphics card, but due to limitations of the X protocol this necessitates the usage of X Display Manager Control Protocol XDMCP.
There is also Xdmx (Distributed Multihead X).
Adoption
Unix and Linux
The X.Org Server runs on many free-software Unix-like operating systems, including being adopted for use by most Linux distributions and BSD variants. It is also the X server for the Solaris operating system. X.Org is also available in the repositories of Minix 3.
Windows
Cygwin/X, Cygwin's implementation of the X server for Microsoft Windows, uses the X.Org Server, as do VcXsrv (Visual C++ X-server) and Xming. SSH clients such as PuTTY allow launching of X applications through X11 forwarding on the condition that it is enabled on both the server and client.
OS X / macOS
OS X versions prior to Mac OS X Leopard (10.5) shipped with an XFree86-based server, but 10.5's X server adopted the X.Org codebase. Starting with OS X Mountain Lion, (10.8) X11 is not bundled in OS X; instead, it has to be installed from, for example, the open source XQuartz project. As of version 2.7.4, X11.app/XQuartz does not expose support for high-resolution Retina displays to X11 apps, which run in pixel-doubled mode on high-resolution displays.
OpenVMS
Current versions of the DECwindows X11 server for OpenVMS are based on X.org Server.
History
The modern X.Org Foundation came into being in 2004 when the body that oversaw X standards and published the official reference implementation joined forces with former XFree86 developers. X11R6.7.0, the first version of the X.Org Server, was forked from XFree86 4.4 RC2. The immediate reason for the fork was a disagreement with the new license for the final release version of XFree86 4.4, but several disagreements among the contributors surfaced prior to the split. Many of the previous XFree86 developers have joined the X.Org Server project.
In 2005, a great effort was put in the modularization of the X.Org server source code, resulting in a dual release by the end of the year. The X11R7.0.0 release added a new modular build system based on the GNU Autotools, while X11R6.9.0 kept the old imake build system, both releases sharing the same codebase. Since then the X11R6.9 branch is maintained frozen and all the ongoing development is done to the modular branch. The new build system also brought the use of dlloader standard dynamic linker to load plugins and drivers, deprecating the old own method. As a consequence of the modularization, the X11 binaries were moving out of their own /usr/X11R6 subdirectory tree and into the global /usr tree on many Unix systems.
In June 2006, another effort was made to move the X.Org server source codebase from CVS to git. Both efforts had the long-term goal of bringing new developers to the project.
In the 7.1 release, the KDrive framework (a small implementation of X written by Keith Packard, not based on XFree86, which X.Org developers used as a testing ground for new ideas such as EXA) was integrated into the main codebase of X.Org server.
In 2008, the new DRI2, based on the kernel mode-setting (KMS) driver, replaced DRI. This change also set a major milestone in the X.Org server architecture, as the drivers were moved out from the server and user space (UMS) to the kernel space.
In 2013, the initial versions of DRI3 and Present extensions were written and coded by Keith Packard to provide a faster and tearing-free 2D rendering. By the end of the year the implementation of GLX was rewritten by Adam Jackson at Red Hat.
See also
Reference implementation: part of a standard release package
X window manager: a package that is deliberately kept separate from the X server package
X video extension
evdev
xorg.conf
XQuartz
Xenocara
References
External links
X servers
Freedesktop.org
Software forks
Software that uses Meson |
492171 | https://en.wikipedia.org/wiki/Video4Linux | Video4Linux | Video4Linux (V4L for short) is a collection of device drivers and an API for supporting realtime video capture on Linux systems. It supports many USB webcams, TV tuners, and related devices, standardizing their output, so programmers can easily add video support to their applications. MythTV, tvtime and Tvheadend are typical applications that use the V4L framework.
Video4Linux is responsible for creating the V4L2 device nodes, i.e. device files (/dev/videoX, /dev/vbiX and /dev/radioX), and for tracking data from these nodes. Device node creation is handled by V4L device drivers using the video_device struct (v4l2-dev.h), which can either be allocated dynamically or embedded in another, larger struct.
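A minimal Python sketch that lists those device nodes on a running system (querying their capabilities would additionally require the VIDIOC_QUERYCAP ioctl and the struct layouts from linux/videodev2.h):

    import glob

    for node in sorted(glob.glob("/dev/video*")
                       + glob.glob("/dev/vbi*")
                       + glob.glob("/dev/radio*")):
        print(node)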
Video4Linux was named after Video for Windows (which is sometimes abbreviated "V4W"), but is not technically related to it.
While Video4Linux is only available on Linux, there is a compatibility layer available for FreeBSD called Video4BSD. This provides a way for many programs that depend on V4L to also compile and run on the FreeBSD operating system.
History
V4L was introduced late in the 2.1.x development cycle of the Linux kernel. V4L1 support was dropped in kernel 2.6.38.
V4L2 is the second version of V4L. Video4Linux2 fixed some design bugs and started appearing in the 2.5.x kernels. Video4Linux2 drivers include a compatibility mode for Video4Linux1 applications, though the support can be incomplete and it is recommended to use Video4Linux1 devices in V4L2 mode. The DVB-Wiki project is now hosted on the LinuxTV web site.
Some programs support V4L2 through the media resource locator v4l2://.
Notable software supporting Video4Linux
aMSN
Cheese (software)
Cinelerra
CloudApp
Ekiga
FFmpeg
FreeJ
GStreamer
Guvcview
kdetv
Kopete
Libav
Linphone
LiVES
Motion (surveillance software)
MPlayer
mpv
MythTV
Open Broadcaster Software
OpenCV
Peek
PyGame
Skype
Tvheadend
veejay
VLC media player
xawtv
Xine
ZoneMinder
See also
Direct Rendering Manager – defines a kernel-to-user-space interface for access to graphics rendering and video acceleration
Mesa 3D – implements video acceleration APIs
References
External links
media_tree development git
v4l-utils development git
Linux Media Infrastructure API (V4L2, DVB and Remote Controllers)
Video4Linux-DVB wiki
Video4Linux resources
Video4BSD, a Video4Linux emulation layer
Video For Linux (V4L) sample applications
Video For Linux 2 (V4L2) sample application
Access Video4Linux devices from Java
kernel.org
OpenWrt Wiki
Linux UVC driver and tools, USB video device class (UVC)
Digital television
Free video software
Interfaces of the Linux kernel
Linux drivers
Linux kernel features
Television technology |
19510277 | https://en.wikipedia.org/wiki/Killer%20NIC | Killer NIC | The Killer NIC (Network Interface Card), from Killer Gaming (now a subsidiary of Intel Corporation), is designed to circumvent the Microsoft Windows TCP/IP stack, and handle processing on the card via a dedicated network processor. Most standard network cards are host based, and make use of the primary CPU. The manufacturer claims that the Killer NIC is capable of reducing network latency and lag. The card was first introduced in 2006.
Hardware and Models
The Killer NIC comes in two models: the K1 and the M1. Both models contain a Freescale PowerQUICC processor, 64 MB RAM, a single Gigabit Ethernet port, as well as a single USB 2.0 port, intended for use with specialized programs running on the card's embedded Linux operating system.
The primary difference between the models is that the M1 has a stylized metallic heat sink and a processor running at 400 MHz, while the K1 lacks a heat sink and runs at only 333 MHz. Performance differences between the cards are currently limited, although it was believed that future programs designed for the cards would be capable of utilizing the increased processing power of the M1.
Killer NIC is offered as a stand-alone product or is bundled with computers from OEMs like the Dell XPS 630.
Some desktop motherboards ship with Killer networking interfaces built-in, such as gaming motherboards from Gigabyte, MSI, and ASRock.
Flexible Network Architecture / Game Networking DNA
The Flexible Network Architecture is a framework used to create and run Flexible Network Applications. These applications run on the embedded Linux operating system, and are accessed through a driver interface within the host computer's operating system. Aside from that, they use very little of the computer's resources, instead handling processing on the card's processing unit. Bigfoot Networks has released a software development kit (SDK) that allows third-party developers to create their own applications. Bigfoot also publishes some of its own applications; these include a firewall, BitTorrent client, FTP application, and Telnet service that allows access to the Killer NIC's OS. This was considered a breakthrough at the time, as independent reviews verified that gaming and downloading would proceed without interfering with each other.
Programs that download files often use the USB port to transfer data to external storage, making the Killer NIC useful as a NAS, albeit at the generally higher power draw of a desktop PC compared to a device like the NSLU2.
More typically, applications that depend on low latency, such as Skype, SIP clients, or older USB-based VoIP devices (NetTalk, MagicJack), may benefit. Such devices, however, increasingly connect via Ethernet directly to routers, which generally gives much lower latency than going through a PC and a USB connection.
However, as of 2012 FNA is not widely supported, and interfaces with more typical router operating systems like DD-WRT, OpenWRT, NetBSD, FreeBSD, and OpenBSD, or with proprietary router operating systems, have generally not been updated.
The successor technology, Game Networking DNA, now supported by Qualcomm Atheros, remains a Microsoft Windows-only solution.
Reception
Early reactions to the Killer NIC centered on price, with its initial MSRP of $280. This led to widespread skepticism and speculation that the product was merely attempting a money grab, hoping to cash in on the "bling" aspect of enthusiast computing. However, using a F.E.A.R. benchmark IGN found that the Killer NIC increased framerate from 15.41 to 23.5 frame/s. While recognizing the potential for real performance gains, they also noted that the test systems were already struggling with the game, and that having a higher-end machine in the first place could offset the gains made by the Killer NIC. This supposition was supported by Anandtech's findings, which showed some real, but much less dramatic gains. IGN declined to rate the card with a score, saying that more testing with other games would be needed. They remarked that the potential performance gain with the Killer NIC was probably higher than upgrading a $300 video card from one generation to the next. Anandtech noted that the people who would stand to gain the most benefit from the Killer NIC, the low-end users, would also be the ones least likely to pay $280 for a network card.
References
External links
Corporate website
Description of FNA
Networking hardware |
51304860 | https://en.wikipedia.org/wiki/Mks%20vir | Mks vir | mks_vir (formerly: MkS_Vir) is a Polish antivirus program, created by Marek Sell in 1987.
The software was originally created because the solutions available on the market at the time did not satisfy the author's needs. The first versions for DOS were distributed on floppy disks by Apexim, the company in which Marek Sell worked. The updates were issued monthly and sent by mail. Initially, the software delivered to users was personalized – the main screen contained the serial number and the details of the license owner. Despite that, the program was often used without a license, and its popularity is confirmed by the appearance of Trojan horses impersonating program updates that had not yet been issued. Later, alongside the full version of the software, demo versions were issued that allowed the program to be used for a week. For educational purposes, the program contained descriptions of the operation of some viruses (including demonstrations of their graphical and sound effects) and, from version 3.99, a lexicon of the viruses popular in Poland. In 1996, it became the winner of the third edition of the Teraz Polska contest.
In 1996, the MKS company, founded by Marek Sell, became the developer of the program. A website for the program and a BBS were created. The company continued the development of the program after the death of the author in 2004. Versions for Microsoft Windows and Unix were created. An online scanner, based on ActiveX technology, became available on the software website.
After the bankruptcy of the MKS company in 2011, the property receiver sold the rights to the mks_vir trademark to the ArcaBit company, set up a few years earlier by former MKS employees. ArcaBit reactivated the mks_vir product as a free antivirus application.
ArcaBit officially discontinued distribution and support for the software on 1 March 2014, although it released a brand-new version in May 2018.
Some of the program versions:
3.12 (January 1991 – the first version described in the program history)
4.00 (April 1993)
5.00 (July 1994)
6.00 (September 1998)
2002
2003
2004
2005 (the last version for DOS)
2006 (the last version for Windows 98/Me)
2k7 (the first 64-bit version)
9 (the last version produced by MKS)
10 (announced multilingual version – not issued)
12 (the version based on the ArcaBit engine)
13.11 (the last version produced by ArcaBit)
2013 Internet Security (the last version, announced in February 2013 – not issued)
Notes
References
Antivirus software
Discontinued software
1987 software
Freeware
DOS software
Windows security software
Unix security software
Solaris software
BSD software
Articles with underscores in the title |
16900868 | https://en.wikipedia.org/wiki/Airborne%20Networking | Airborne Networking | An Airborne Network (AN) is the infrastructure owned by the United States Air Force that provides communication transport services through at least one node that is on a platform capable of flight.
Background
Definition
The intent of the US Air Force's Airborne Network is to expand the Global Information Grid (GIG) to connect the three major domains of warfare: Air, Space, and Terrestrial. The Transformational Satellite Communications System network currently provides connectivity for all communication through space assets. The Combat Information Transport System and Theater Deployable Communications provide terrestrial connectivity for theatre based operations. The Airborne Network is engineered to utilize all airborne assets to connect with space and surface networks building a seamless communications platform across all domains.
Capabilities
The capabilities identified for this type of system go vastly beyond those of the current military. This system will enable the Air Force to provide a transportable network, flexible enough to communicate with any air, space, or ground asset in the area. The network will provide a beyond line-of-sight (LoS) communications infrastructure that can be packed up and moved in and out of the designated battlespace, enabling the military to have a reliable and secure communications network that extends globally. The network is designed to be flexible enough to provide the right communication and network packages for a specific region, mission, or technology.
Operationally, The AN is designed to be self-forming, self-organizing, and self-generating, with nodes joining and leaving the network as they enter and exit a specific region. The network consists of dedicated tactical links, wideband air-to-air links, and ad hoc networks constructed by the Joint Tactical Radio System (JTRS) networking services. JTRS is a software-defined radio that will work with many existing military and civilian radios. It includes integrated encryption and Wideband Networking Software to create mobile ad hoc networks. It also provides system performance analysis and fault diagnostics automatically, reducing the demand for human intervention and network maintenance.
Intended Use
The AN was designed as the cornerstone for the new military doctrine known as Network Centric Warfare. This doctrine was developed to use information superiority to equip warfighters with more precise information, enabling commanders and shooters to make smarter decisions faster. The AN contributes to Network Centric Warfare by enabling commanders to provide real-time information to warfighters in the air and on the ground. Warfighters can then utilize more information and make more educated decisions about how to act in a particular situation. Once the act has been carried out, commanders will have immediate information about the result and can make judgments on how to continue. All in all, the AN was designed to reduce the time necessary to identify a target, make clear and educated decisions about whether or not to pull the trigger, and assess battle damage.
Topologies
There are four main network topologies that will be deployed and vary based on the placement of backbone and subnet class networks.
Space, Air, Ground Tether
Establishing a direct connection to another aircraft or ground node, via a point-to-point link for nodes within LOS or via a Satellite Communications (SATCOM) link for nodes that are beyond line-of-sight, is known as tethering. SATCOM links provide connectivity to a network ground entry point. Strike aircraft that accompany C2 aircraft such as an AWACS are tethered via point-to-point links. Finally, C2 or intelligence, surveillance, and reconnaissance (ISR) aircraft may connect via a LOS link directly to a network ground entry point. Each of these tethered alternatives works exactly like a hub or switch that has an entry point to a larger network and allows its connected users access to that network.
Flat Ad Hoc
A flat ad hoc topology refers to establishing nonpersistent network connections as needed among AN nodes that are present at a given time. With this network the nodes dynamically “discover” other nodes to which they can interconnect and form the network. The specific interconnections between the nodes are not planned in advance, but are made as opportunities arise. The nodes join and leave the network at will, continually changing connections to neighbor nodes based upon their location and mobility characteristics.
Tiered Ad Hoc
Ad hoc networks can be flat in the sense that all nodes are peers of each other in a single network, as discussed above, or they can dynamically organize themselves into hierarchical tiers such that higher tiers are used to move data between more localized subnets. This network topology can be compared to any conventional deployed network that utilizes routers, switches, and hubs to temporarily connect users.
Persistent Backbone
A network topology characterized by a persistent backbone is established using relatively persistent wideband connections among high-value platforms flying relatively stable orbits. It provides the connectivity between the tactical subnets which are considered edge networks relative to the backbone. This provides concentration points for connectivity to the space backbone as well as to terrestrial networks. This type of network topology is comparable to a conventional permanent network with established data trunks, routers, switches, and hubs to connect users.
Architecture
Network Management
The platform management system enables operators to manage all on-board network elements. It interfaces and interoperates with the Airborne Network management system to enable operators to manage remote network elements in the airborne network. The network management system monitors the health of the network by passively testing the network for faults and latency. The system will also actively troubleshoot faults with probes to identify and isolate faulty connections, and enables operators to apply network parameters and security changes to all systems based on the status of the network.
Routing/Switching
Routing and switching enables data to be dynamically transmitted over the network to other nodes. Routing protocols must be able to identify nodes transmitted within their own platform and data to be sent to other platforms regardless of the current topology. The routing protocol must also provide seamless roaming by ensuring that no routed packets are lost when a node changes its point of attachment to the network. Maintaining scalability is important in routing as the network is constantly changing. The network must be able to function with numerous levels of platforms, varying numbers of fast moving platforms, and varying amounts of traffic per platform. Routers and switches will use metrics to determine the best paths to take when routing data. The routing protocol utilized for the AN will be an Adaptive Quality of Service routing protocol.
Gateways/Proxies
Gateways and proxies enable numerous technology types, regardless of age, to communicate across the IP-based network. Gateways and proxies are essential in the operation of this network because so many different technologies are used to communicate in each domain. These systems will facilitate the transition of the legacy on-board infrastructure, transmission systems, tactical data link systems, and user applications to the objective airborne network systems. Therefore, they are only temporary until all platforms use a standardized IP radio for transmission.
Performance Enhancing Proxies
Performance Enhancing Proxies improve the performance of user applications running across the Airborne Network by countering wireless network impairments, such as limited bandwidth, long delays, high loss rates, and disruptions in network connections. Proxy systems are implemented between the user application and the network and can be used to improve performance at the application and transport functional layers of the OSI model. Some techniques that can be employed include:
Compression: Data compression or header compression can be used to minimize the number of bits sent over the network.
Data bundling: Smaller data packets can be combined (bundled) into a single large packet for transmission over the network.
Caching: A local cache can be used to save and provide data objects that are requested multiple times, reducing transmissions over the network (and improving response times).
Store and forward: Message queuing can be used to ensure message delivery to users who become disconnected from the network or are unable to connect to the network for a period of time. Once the platform connects, the stored messages are sent; a minimal sketch of this technique follows the list.
Pipelining: Rather than opening several separate network connections pipelining can be used to share a single network connection for multiple data transfers.
Protocol streamlining: The number of transmissions to set up and take down connections and acknowledge receipt of data can be minimized through a combination of caching, spoofing, and batching.
Translation: A translation can be performed to replace particular protocols or data formats with more efficient versions developed for wireless environments.
Embedded acknowledgments: Acknowledgements can be embedded in the header of larger information carrying packets to reduce the number of packets traversing the network.
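As a hedged illustration of the store-and-forward technique above, the sketch below queues messages addressed to a disconnected platform and flushes them once it reconnects; the StoreAndForwardProxy class, its send callback and the message strings are invented for the example and are not part of any Airborne Network implementation.

from collections import deque

class StoreAndForwardProxy:
    def __init__(self, send):
        self.send = send              # callable that transmits one message
        self.connected = False
        self.queue = deque()

    def deliver(self, message):
        if self.connected:
            self.send(message)
        else:
            self.queue.append(message)    # hold until the platform reconnects

    def on_reconnect(self):
        self.connected = True
        while self.queue:                 # flush in arrival order
            self.send(self.queue.popleft())

proxy = StoreAndForwardProxy(send=lambda m: print("sent:", m))
proxy.deliver("position update 1")        # queued while disconnected
proxy.deliver("position update 2")        # queued while disconnected
proxy.on_reconnect()                      # both messages are transmitted now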
Platform Categories
To categorize a specific airborne asset or class communications equipment all aircraft are divided into three main categories. These categories are determined by the types of missions the aircraft typically performs. The aircraft also fit into each category based on the type of equipment they can equip the airframe with. Each of the following sections outlines these three main categories.
Fighter Platforms
An airborne fighter platform flight profile includes periods of stable flight patterns and dynamic maneuvers at high speeds. Its relatively small size limits the amount of space available for mounting antennas and installing equipment. It will be employed as part of a strike package or combat air patrol (CAP). The strike package or CAP will have supporting airborne C2 and ISR platform(s), tanker (refueling) platform(s), and ground C2 platform(s). Each airborne fighter platform requires connectivity to all other strike package or CAP and supporting platforms; however, a majority of information will be exchanged between airborne fighter platforms. This is driven in large part by the need for frequent situational awareness and target sorting updates in a highly mobile environment. Pilots will be provided services such as real-time data, digital voice, and interactive data sharing.
Airborne fighter platforms will participate in both tethered and flat ad hoc network topologies. A tethered topology would primarily be used for reachback and forwarding between the airborne fighter platform and supporting elements. A flat ad hoc topology would be used between airborne fighter platforms in a strike package or CAP for the more frequent information exchanges. The figure outlines the minimum equipment requirements to support a fighter platform.
C4ISR Platforms
A C4ISR platform flight profile includes periods of en route flying and repeated, stable flight patterns. The relatively large size provides space for mounting antennas and installing significant communications equipment to accommodate multiple mission crew functions. It will host up to three dozen mission crew members, including a communications operator. A C4ISR platform's mission applications and sensors will support multiple capabilities and mission types. Mission durations for any single aircraft and crew could range up to 12 hours; with aerial refueling it could be extended to 24 hours. These platforms often operate beyond line-of-sight of ground infrastructure and could be employed as a stand-alone or as part of a strike package or CAP in support of a strike package. C4ISR aircraft require a broad range of connection capability to connect peer-to-peer with other C4ISR aircraft or serve as a hub to connect fighter platform aircraft. The services provided by C4ISR aircraft include real-time data, voice, video, bulk data transfer, and interactive data.
C4ISR platforms will participate in both tethered and tiered ad hoc network topologies. A tethered topology would primarily be used for reachback and forwarding between the C4ISR platform, Ground Theater Air Control System, and strike package or CAP aircraft. A tiered ad hoc topology would be used between the C4ISR platform and airborne fighter platforms in a strike package or CAP. The figure outlines the minimum equipment requirements to implement the operations of a C4ISR platform.
Airborne Communications Relay Platforms
Airborne communications relay platform flight profile includes periods of en route flying and repeated, stable flight patterns. The relatively large size of widebodies theoretically enables space available for mounting antennas and installing significant communications equipment. UAVs offer long endurance and high altitude, which give wide area air and surface coverage and good optical paths to satellites. The mission of an airborne communications relay platform is to be employed as part of and/or support to C4ISR constellation and/or strike package(s) or CAP. The communications relay platform provides connectivity between elements of a strike package, CAP aircraft, C4ISR platforms, and Ground Theater Air Control System platforms that require range extension or internetworking and gateway functions between networks for information interoperability. The services necessary for communication relay platforms include real-time data transfer, voice, video, bulk data, and interactive data transfer.
Airborne communications relay platforms will participate in both tethered and tiered ad hoc network topologies. A tethered topology would primarily be used for reachback and forwarding between the C4ISR platform, Ground Theater Air Control System, and strike package or CAP aircraft. A tiered ad hoc topology would be used between the C4ISR platform and airborne fighter platforms in a strike package or CAP. The figure outlines the minimum equipment requirements to implement the operations of a communications relay platform.
Challenges
Current Technology Restrictions
Many challenges lie ahead before the AN will exist as described in this document. Many of the challenges currently lie in the legacy avionics systems found on all aircraft. The biggest obstacle is a lack of bandwidth. Until more optics are integrated into aircraft systems, this system will lag in data transfer speeds and latency. One technology under research to resolve this problem is the Navy's research into highly integrated photonics to manage aircraft sensor suite communications. The technique runs radio frequencies over fiber optics and is currently being integrated into the EA-6B Prowler electronic warfare jet.
Security of this network is another huge obstacle. The goal is to give the system a low probability of jamming and interception. Many ideas for how to protect the system are being investigated and tested. Traditional methods of authentication and authorization are being used, including biometrics, cryptographic tokens, and integrated Public Key Infrastructure.
Commercial Off-the-Shelf
Commercial off-the-shelf (COTS) equipment creates extreme engineering challenges. While it offers flexibility in application and saves money in production, it is incredibly difficult to adapt to various applications. Getting COTS equipment to install in applications it was not designed for continues to be a vast engineering challenge, as military researchers work to integrate civilian L-3 radio and FPGA technology into reconnaissance aircraft designed in the 1960s.
Bandwidth
Bandwidth to support the Air Force's AN does not currently exist. Only time will tell when enough bandwidth is freed up by obsolete technology. This creates the challenge of finding better ways of compressing data and developing more efficient ways to utilize the bandwidth currently available. One interim solution developed by Northrop Grumman is Dialup rate IP over existing radios (DRIER). DRIER enables airborne or ground-based tactical users to select and download mission-critical data directly from the Joint STARS platform using existing, narrowband line-of-sight or beyond-line-of-sight UHF communications links. Users can also serve as a relay point, providing critical handover information between aircraft entering and exiting mission orbits.
References
Airborne Networking Architecture, HQ ESC/NII for the USAF Airborne Network Special Interest Group, 2004
Airborne Networking, Kenneth Stranc, Mitre Corporation, 2004
Airborne Networking Challenges, Ben Ames, Military and Aerospace Electronics Magazine, 2004
Computer networks |
51310437 | https://en.wikipedia.org/wiki/InfluxDB | InfluxDB | InfluxDB is an open-source time series database (TSDB) developed by the company InfluxData. It is written in the Go programming language for storage and retrieval of time series data in fields such as operations monitoring, application metrics, Internet of Things sensor data, and real-time analytics. It also has support for processing data from Graphite.
History
Y Combinator-backed company Errplane began developing InfluxDB as an open-source project in late 2013 for performance monitoring and alerting. Errplane raised an $8.1M Series A financing led by Mayfield Fund and Trinity Ventures in November 2014. In late 2015, Errplane officially changed its name to InfluxData Inc. InfluxData raised Series B round of funding of $16 million in September 2016. In February 2018, InfluxData closed a $35 million Series C round of funding led by Sapphire Ventures.
Another round of $60 million was disclosed in 2019.
Technical overview
InfluxDB has no external dependencies and provides an SQL-like query language, exposed over an HTTP API listening on port 8086, with built-in time-centric functions for querying a data structure composed of measurements, series, and points. Each point consists of several key-value pairs called the fieldset and a timestamp. When grouped together by a set of key-value pairs called the tagset, these define a series. Finally, series are grouped together by a string identifier to form a measurement.
Values can be 64-bit integers, 64-bit floating points, strings, and booleans. Points are indexed by their time and tagset. Retention policies are defined on a measurement and control how data is downsampled and deleted. Continuous Queries run periodically, storing results in a target measurement.
Events
InfluxData regularly hosts events related to InfluxDB called InfluxDays. The InfluxDays are technical conventions focused on the evolution of InfluxDB from both technical and business points of view. The events take place once a year in three locations: New York, San Francisco, and London. The InfluxDays cover a wide variety of subjects: software engineering and coding talks as well as business-focused and practical workshops. Companies can showcase how they use InfluxDB.
Line protocol
InfluxDB accepts data via HTTP, TCP, and UDP. It defines a line protocol that is backwards compatible with Graphite and takes the form:
measurement(,tag_key=tag_val)* field_key=field_val(,field_key_n=field_value_n)* (nanoseconds-timestamp)?
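As a hedged illustration, the sketch below builds one point in line protocol and writes it to the InfluxDB 1.x HTTP /write endpoint using only the Python standard library; the host, the database name ("telemetry") and the measurement, tag and field names are invented for the example.

import time
import urllib.request

# measurement,tagset fieldset timestamp (nanoseconds)
point = "cpu,host=server01,region=us-west usage_idle=91.5 {}".format(time.time_ns())

req = urllib.request.Request(
    "http://localhost:8086/write?db=telemetry",   # assumes the database already exists
    data=point.encode("utf-8"),
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)   # 204 (No Content) means the write was accepted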
Licensing
Contributors to InfluxDB need to give InfluxData Inc. the right to license the contributions and the rest of the software in any way, including under a closed-source license. The Contributor License Agreement claims not to be a copyright transfer agreement.
Closed source clustering components
In May 2016, InfluxData announced that the computer cluster component of InfluxDB would be sold as closed-source software in order to create a sustainable source of funding for the project's development. Community reaction was mixed, with some feeling the move was a "bait and switch".
References
2013 software
Software using the MIT license
Structured storage
Time series software
Free software programmed in Go |
54442305 | https://en.wikipedia.org/wiki/Peter%20W.%20Smith | Peter W. Smith | Peter W. Smith (February 23, 1936 – May 14, 2017) was an American investment banker who had a 40-year career managing corporate acquisitions and venture investments. He was active in Republican politics. In 1998, he was identified as a major financial supporter of the 1993 Troopergate story, in which several Arkansas state troopers accused U.S. President Bill Clinton of having carried out sexual dalliances while he was Governor of Arkansas. In 2017, he confirmed to The Wall Street Journal that he (along with former national security advisor Michael Flynn) had tried in 2016 to contact computer hackers, including Russian hackers, in an attempt to obtain opposition research material to use against Hillary Clinton in the 2016 presidential election. Ten days after speaking to the paper, he committed suicide in a Minnesota hotel room, citing ill health.
Education
Smith earned a BS in electrical engineering from Northeastern University. He went on to earn an MBA from the University of Notre Dame.
Career
From 1969 to 1980 he was a senior officer at Field Enterprises, Inc. From 1975 to 1997 he was president of his own firm, Peter W. Smith & Company, specializing in buyout transactions. From 1997 to 2014 he was Managing Member of DigaComm, LLC where he primarily managed early venture investments. At the time of his death he was Chairman Emeritus of Corporate Venture Alliances, LLC.
Political involvement
He was national chairman of the College Young Republicans. He was a board member and officer of the Atlantic Council of the United States and was also active with the Heritage Foundation, the Center for Strategic and International Studies, and the Brookings Institution. He was a contributor and fundraiser for the Republican National Committee and GOPAC as well as individual candidates. Smith was a major donor to Newt Gingrich's GOPAC, giving over $100,000 from 1989 to 1995—a sum which made him one of GOPAC's top 20 donors.
In December 1993, two Arkansas state troopers publicly claimed that they and other troopers had been used to facilitate and conceal multiple extra-marital affairs of then-Governor Bill Clinton. The stories were published in an article by then-conservative author David Brock in the American Spectator and also in a series of articles in the Los Angeles Times. Smith had arranged for the troopers to meet Brock. In February 1994 Smith started a "Troopergate Whistle Blower Fund" to provide support for the troopers and pay their attorney fees; the fund ultimately raised about $40,000. Altogether Smith said he spent about $80,000 on the case, including a $5,000 payment to Brock. Smith described his donations as an independent effort to encourage anti-Clinton stories in the mainstream press.
Several emails sent by Smith to the Illinois Republican Party turned up among those hacked from party computers and posted on DCLeaks, which U.S. intelligence officials believe to be an outlet for the foreign military intelligence agency of Russia, GRU. The emails were about a 2015 congressional election.
In May 2017 Smith told Shane Harris of the Wall Street Journal that he had been actively involved, during the 2016 presidential election campaign, trying to obtain emails he believed had been deleted from Hillary Clinton's computer server. Smith stated, "We knew the people who had these were probably around the Russian government" and told various internet groups to give any emails to Wikileaks. To act as a vehicle for his efforts, Smith established the Delaware based firm KLS Research LLC on September 2, 2016, and opened a bank account for KLS at Northern Trust where he also had a personal account. For advice on obtaining Clinton's emails and with support from Smith's assistant Jonathan Safron, Smith and his associate John Szobocsan contacted several experts in cyber security including Eric York, Matt Tait, the consulting firm Flynn Intel Group associated with Michael G. Flynn and his father Michael Flynn with whom Smith had established a business together in November 2015 immediately after Flynn retired from the Army. Smith reached out to Guccifer 2.0 and Charles Johnson suggested that Smith contact Andrew Auernheimer alias Weev. According to the Mueller Report, Trump campaign advisor Mike Flynn contacted Smith shortly after July 27, 2016, when Trump had publicly invited "Russia" to find the missing emails, and asked Smith to look for them. In that quest Smith contacted several known hacker groups, including some Russian groups. He was shown some information but was not convinced it was genuine, and suggested the hackers give it to WikiLeaks instead. As part of this effort, Smith donated $150,000 to "the Washington Scholarship Fund for the Russian students", including $100,000 of solicited money and $50,000 from his personal funds.
Smith was close to Sam Clovis.
He maintained a blog called Peter W. Smith.
Personal life and death
Smith lived in Lake Forest, Illinois. He was married to Janet; the couple had three children and three grandchildren at the time of his death.
Smith died on May 14, 2017 in a hotel room in Rochester, Minnesota. He had checked into the hotel, which is near the Mayo Clinic, the day after speaking to the Wall Street Journal. Nine days later he was found with a bag over his head that was attached to a helium source. Medical records list Smith's cause of death as "asphyxiation due to displacement of oxygen in confined space with helium." Police discovered a suicide note by Smith that stated "no foul play whatsoever", and that he was in poor health and his life insurance policy was about to expire.
See also
Russian interference in the 2016 United States elections
References
1936 births
2017 deaths
American investment bankers
People from Lake Forest, Illinois
Businesspeople from Portland, Maine
University of Notre Dame alumni
Northeastern University alumni
Maine Republicans
Suicides in Minnesota
Illinois Republicans
People associated with Russian interference in the 2016 United States elections
2017 suicides |
47807559 | https://en.wikipedia.org/wiki/Oldenburger%20Computer-Museum | Oldenburger Computer-Museum | The Oldenburg Computer Museum (OCM) is a museum founded in 2008 in Oldenburg (Oldb), Lower-Saxony, Germany that is dedicated to the preservation and operational presentation of the history of home computing.
Overview
The museum presents computers, video games and arcade game machines from the 1970s through the 1990s. What is special about the Oldenburger Computer Museum is the fact that the exhibits are functional and invite visitors to try them out and use them. The museum was founded in 2008 by the non-profit association "Oldenburger Computer-Museum e. V.", which is organised and run by dedicated volunteers. The goal of the Oldenburger Computer Museum is the preservation of home computer culture as an interactive exhibition with fully functional exhibits. Everything on exhibit is equipped with software, which can - and should - be used, explored and experienced. In this way, visitors can get a sense of how computer technology has developed over time and how it relates to current technology, especially with regard to aspects like graphics, sound, speed, mass storage, and the reduction in size of components.
At the Oldenburger Computer Museum visitors can play classic games on the Commodore C64, Atari 2600 and Commodore Amiga, write their own programs on original software, and experience the history of computing hands-on.
History
The museum grew out of the private collection of Thiemo Eddiks. In the beginning, small temporary exhibitions were presented in the OFFIS - Institute for Computer Science and the Carl von Ossietzky University in Oldenburg as well as in other locations. In November 2008, the permanent exhibition was officially opened. In November 2009 the association "Oldenburger Computer-Museum e. V." was founded and recognized as a non-profit organization. Initially housed in a small space, the exhibition moved to its current, larger location in 2014. In addition to the permanent exhibition "Home computers of the 1970's and 80's", a classic video arcade hall has also been created.
Exhibition
The permanent exhibition, "Homecomputers of the 1970's and 1980's", showcases 23 functional computer systems, including a PDP-8, Commodore PET, Apple II, Osborne 1, Amstrad CPC 464, Apple Macintosh and Amiga 500, and is open every Tuesday from 6 pm until 9 pm.
Literature
References
External links
Oldenburger Computer Museum website
2014 establishments in Germany
Museums established in 2008
Buildings and structures in Oldenburg (city)
Tourist attractions in Oldenburg (city)
Museums in Lower Saxony
Technology museums in Germany
Computer museums |
3084295 | https://en.wikipedia.org/wiki/Linear%20network%20coding | Linear network coding | Network coding is a field of research that originally emerged in a series of papers from the late 1990s to the early 2000s. However, the concept of network coding, in particular linear network coding, appeared much earlier. In a 1978 paper, a scheme for improving the throughput of a two-way communication through a satellite was proposed. In this scheme, two users trying to communicate with each other will transmit their data streams to a satellite, which combines the two streams by summing them modulo 2 and then broadcasts the combined stream. Each of the two users, upon receiving the broadcast stream, can decode the other stream by using the information of their own stream.
The 2000 paper by Ahlswede et al. gave the butterfly network example (discussed below) that illustrates how linear network coding can outperform routing. This example is equivalent to the scheme for satellite communication described above. The same paper gives an optimal coding scheme for a network with one source node and three destination nodes. This represents the first example illustrating the optimality of convolutional network coding (a more general form of linear network coding) over a cyclic network.
Linear network coding may be used to improve a network's throughput, efficiency, and scalability, as well as its resilience to attacks and eavesdropping. Instead of simply relaying the packets of information they receive, the nodes of a network take several packets and combine them together for transmission. This process may be used to attain the maximum possible information flow in a network.
It has been mathematically proven that in theory linear coding is enough to achieve the upper bound in multicast problems with one source. However linear coding is not sufficient in general (e.g. multisource, multisink with arbitrary demands), even for more general versions of linearity such as convolutional coding and filter-bank coding. Finding optimal coding solutions for general network problems with arbitrary demands remains an open problem.
Encoding and decoding
In a linear network coding problem, a group of nodes are involved in moving the data from source nodes to sink nodes. Each node generates new packets which are linear combinations of earlier received packets, multiplying them by coefficients chosen from a finite field, typically a Galois field of size 2^8.
Each node with indegree S generates a new message X from the linear combination of the S received messages M_1, ..., M_S by the relation:
X = g_1·M_1 + g_2·M_2 + ... + g_S·M_S
where the values g_i are the coefficients selected from the finite field. Note that, since operations are computed in a finite field, the generated message is of the same length as the original messages. Each node forwards the computed value X along with all the coefficients g_1, ..., g_S used at its level.
Sink nodes receive these network coded messages, and collect them in a matrix. The original messages can be recovered by performing Gaussian elimination on the matrix. In reduced row echelon form, decoded packets correspond to the rows whose coefficient part is a unit vector, i.e. rows of the form (0, ..., 0, 1, 0, ..., 0 | M_i).
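A minimal sketch of this encode/decode cycle over GF(2) is given below; there, multiplication by a coefficient reduces to including or excluding a packet, addition is bitwise XOR, and the coefficients are drawn at random as in the random linear network coding scheme described in the Random Linear Network Coding section below. The packet contents and counts are invented for the example.

import random

def encode(packets, rng=random):
    # One coded packet: random GF(2) coefficients plus the XOR of the chosen packets.
    k = len(packets)
    coeffs = [rng.randint(0, 1) for _ in range(k)]
    if not any(coeffs):                      # skip the useless all-zero combination
        coeffs[rng.randrange(k)] = 1
    coded = bytearray(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            coded = bytearray(a ^ b for a, b in zip(coded, p))
    return coeffs, bytes(coded)

def decode(received, k):
    # Gauss-Jordan elimination over GF(2); `received` is a list of (coeffs, payload).
    rows = [(list(c), bytearray(p)) for c, p in received]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            raise ValueError("received combinations do not span the original packets")
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytearray(a ^ b for a, b in zip(rows[r][1], rows[col][1])))
    return [bytes(payload) for _, payload in rows[:k]]

originals = [b"ALPHA", b"BRAVO", b"DELTA"]          # equal-length source packets
received = [encode(originals) for _ in range(8)]    # collect a few extra combinations
print(decode(received, k=3) == originals)           # True with high probability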
A brief history
A network is represented by a directed graph G = (V, E, C), where V is the set of nodes or vertices, E is the set of directed links (or edges), and C gives the capacity of each link of E. Let T(s, t) be the maximum possible throughput from node s to node t. By the max-flow min-cut theorem, T(s, t) is upper bounded by the minimum capacity over all cuts between these two nodes, where the capacity of a cut is the sum of the capacities of the edges crossing it.
Karl Menger proved that there is always a set of edge-disjoint paths achieving the upper bound in a unicast scenario, a result known as the max-flow min-cut theorem. Later, the Ford–Fulkerson algorithm was proposed to find such paths in polynomial time. Then, Edmonds proved in the paper "Edge-Disjoint Branchings" that the upper bound in the broadcast scenario is also achievable, and proposed a polynomial time algorithm.
However, the situation in the multicast scenario is more complicated, and in fact, such an upper bound can't be reached using traditional routing ideas. Ahlswede, et al. proved that it can be achieved if additional computing tasks (incoming packets are combined into one or several outgoing packets) can be done in the intermediate nodes.
The butterfly network example
The butterfly network is often used to illustrate how linear network coding can outperform routing. Two source nodes (at the top of the picture) have information A and B that must be transmitted to the two destination nodes (at the bottom). Each destination node wants to know both A and B. Each edge can carry only a single value (we can think of an edge transmitting a bit in each time slot).
If only routing were allowed, then the central link would be only able to carry A or B, but not both. Suppose we send A through the center; then the left destination would receive A twice and not know B at all. Sending B poses a similar problem for the right destination. We say that routing is insufficient because no routing scheme can transmit both A and B simultaneously to both destinations. Meanwhile, it takes four time slots in total for both destination nodes to know A and B.
Using a simple code, as shown, A and B can be transmitted to both destinations simultaneously by sending the sum of the symbols through the two relay nodes – in other words, we encode A and B using the formula "A+B". The left destination receives A and A + B, and can calculate B by subtracting the two values. Similarly, the right destination will receive B and A + B, and will also be able to determine both A and B. Therefore, with network coding, it takes only three time slots and improves the throughput.
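In arithmetic terms, "A + B" here is addition over GF(2), i.e. bitwise XOR, as the following tiny sketch (with invented one-byte symbols) illustrates.

A, B = 0b10110010, 0b01101100      # illustrative one-byte symbols
coded = A ^ B                      # the value carried over the shared middle link
assert coded ^ A == B              # left destination: has A, recovers B
assert coded ^ B == A              # right destination: has B, recovers A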
Random Linear Network Coding
Random linear network coding is a simple yet powerful encoding scheme, which in broadcast transmission schemes allows close to optimal throughput using a decentralized algorithm. Nodes transmit random linear combinations of the packets they receive, with coefficients chosen from a Galois field. If the field size is sufficiently large, the probability that the receiver(s) will obtain linearly independent combinations (and therefore obtain innovative information) approaches 1. It should however be noted that, although random linear network coding has excellent throughput performance, if a receiver obtains an insufficient number of packets, it is extremely unlikely that they can recover any of the original packets. This can be addressed by sending additional random linear combinations until the receiver obtains the appropriate number of packets.
Open issues
Linear network coding is still a relatively new subject. Based on previous studies, there are three important open issues in RLNC:
High decoding computational complexity due to using the Gauss-Jordan elimination method
High transmission overhead due to attaching large coefficients vectors to encoded blocks
Linear dependency among coefficients vectors which can reduce the number of innovative encoded blocks
Wireless Network Coding
The broadcast nature of wireless (coupled with network topology) determines the nature of interference. Simultaneous transmissions in a wireless network typically result in all of the packets being lost (i.e., collision, see Multiple Access with Collision Avoidance for Wireless). A wireless network therefore requires a scheduler (as part of the MAC functionality) to minimize such interference. Hence any gains from network coding are strongly impacted by the underlying scheduler and will deviate from the gains seen in wired networks. Further, wireless links are typically half-duplex due to hardware constraints; i.e., a node can not simultaneously transmit and receive due to the lack of sufficient isolation between the two paths.
Although network coding was originally proposed for use at the network layer (see OSI model), in wireless networks it has been widely used at either the MAC layer or the PHY layer. It has been shown that network coding, when used in wireless mesh networks, needs attentive design to exploit the advantages of packet mixing; otherwise those advantages cannot be realized. There are also a variety of factors influencing throughput performance, such as the media access layer protocol, congestion control algorithms, etc. It is not evident how network coding can co-exist with, and not jeopardize, what existing congestion and flow control algorithms are doing for the Internet.
Applications
Since linear network coding is a relatively new subject, its adoption in industry is still pending. Unlike other coding schemes, linear network coding is not entirely applicable in a system due to its narrow, specific usage scenario. Theorists are trying to connect it to real-world applications. In fact, it has been found that the BitTorrent approach is far superior to network coding.
It is envisaged that network coding is useful in the following areas:
Owing to the multi-source, multicast content-delivery nature of Information-centric networking in general and Named Data Networking in particular, linear coding can improve overall network efficiency.
Alternative to forward error correction and ARQ in traditional and wireless networks with packet loss. e.g.: Coded TCP, Multi-user ARQ
Robust and resilient to network attacks like snooping, eavesdropping, replay or data corruption attacks.
Digital file distribution and P2P file sharing. e.g.: Avalanche from Microsoft
Distributed storage.
Throughput increase in wireless mesh networks. e.g. : COPE, CORE, Coding-aware routing, B.A.T.M.A.N.
Buffer and Delay reduction in spatial sensor networks: Spatial buffer multiplexing
Reduce the number of packet retransmission for a single-hop wireless multicast transmission, and hence improve network bandwidth.
Distributed file sharing
Low-complexity video streaming to mobile devices
Device-to-Device (D2D) extensions
New methods are emerging that use network coding in multiaccess systems to develop Software Defined Wide Area Networks (SD-WANs) that can offer lower delay and jitter and higher robustness. The proposal mentions that the method is agnostic to underlying technologies like LTE, Ethernet, and 5G.
Maturity & Issues
Since this area is relatively new and the mathematical treatment of the subject is currently limited to a handful of people, network coding has yet to find its way to commercialization in products and services. It is likely that the subject will not prevail and will remain merely a good mathematical exercise.
Researchers have clearly pointed out that special care is needed to explore how network coding can co-exist with existing routing, media access, congestion, flow control algorithms and TCP protocol. If not, network coding may not offer any advantages and can increase computation complexity and memory requirements.
See also
Secret sharing protocol
Homomorphic signatures for network coding
Triangular network coding
References
Fragouli, C.; Le Boudec, J. & Widmer, J. "Network coding: An instant primer" in Computer Communication Review, 2006.
Ali Farzamnia, Sharifah K. Syed-Yusof, Norsheila Fisa "Multicasting Multiple Description Coding Using p-Cycle Network Coding", KSII Transactions on Internet and Information Systems, Vol 7, No 12, 2013.
External links
Network Coding Homepage
A network coding bibliography
Raymond W. Yeung, Information Theory and Network Coding, Springer 2008, http://iest2.ie.cuhk.edu.hk/~whyeung/book2/
Raymond W. Yeung et al., Network Coding Theory, now Publishers, 2005, http://iest2.ie.cuhk.edu.hk/~whyeung/netcode/monograph.html
Christina Fragouli et al., Network Coding: An Instant Primer, ACM SIGCOMM 2006, http://infoscience.epfl.ch/getfile.py?mode=best&recid=58339.
Avalanche Filesystem, http://research.microsoft.com/en-us/projects/avalanche/default.aspx
Random Network Coding, https://web.archive.org/web/20060618083034/http://www.mit.edu/~medard/coding1.htm
Digital Fountain Codes, http://www.icsi.berkeley.edu/~luby/
Coding-Aware Routing, https://web.archive.org/web/20081011124616/http://arena.cse.sc.edu/papers/rocx.secon06.pdf
MIT offers a course: Introduction to Network Coding
Network coding: Networking's next revolution?
Coding-aware protocol design for wireless networks: http://scholarcommons.sc.edu/etd/230/
Coding theory
Information theory
Finite fields
Network performance
Wireless sensor network |
8809543 | https://en.wikipedia.org/wiki/James%20D.%20Foley | James D. Foley | James David Foley (born July 20, 1942) is an American computer scientist and computer graphics researcher. He is a Professor Emeritus and held the Stephen Fleming Chair in Telecommunications in the School of Interactive Computing at Georgia Institute of Technology (Georgia Tech). He was Interim Dean of Georgia Tech's College of Computing from 2008–2010. He is perhaps best known as the co-author of several widely used textbooks in the field of computer graphics, of which over 400,000 copies are in print and translated in ten languages. Foley most recently conducted research in instructional technologies and distance education.
Biography
Born in Pennsylvania, Foley attended Lehigh University, graduating with a bachelor's degree in electrical engineering in 1964. Foley was initiated into the Phi Beta Kappa Society and Tau Beta Pi while at Lehigh. He received his Ph.D. in computer, information, and control engineering from the University of Michigan in 1969.
After completing his graduate studies, Foley was first employed by the University of North Carolina. In 1977, he accepted a faculty position at George Washington University, where he became chairman of the Department of Electrical Engineering and Computer Science. Foley joined the Georgia Tech faculty in 1991.
Shortly after moving to Georgia Tech, Foley founded the GVU Center, which in 1996 was ranked first by U.S. News & World Report for graduate computer science work in graphics and user interaction. That same year, he was appointed director of the Mitsubishi Electric Research Laboratories (MERL) in Cambridge, Massachusetts. Foley also served as editor-in-chief of ACM Transactions on Graphics from 1991 to 1995.
In 1997, Foley was recognized by ACM SIGGRAPH with the prestigious Steven A. Coons Award. The receipt of this biannual award places Foley among the company of computer graphics pioneers such as Andy van Dam, Jim Blinn, Edwin Catmull and Ivan Sutherland.
In 2007 he was recognized by ACM SIGCHI with their Lifetime Achievement Award.
Foley accepted the position of chairman and CEO of Mitsubishi Electric Information Technology Center America (MEITCA) in 1998, directing corporate R&D at four labs in North America. He returned to Georgia as Executive Director and then CEO of Yamacraw, Georgia's economic development initiative in the design of broadband systems, devices and chips.
Foley became chairman of the Computing Research Association (CRA) in 2001. He stepped down from this position in 2005 but remained on CRA's board of directors until 2006.
Foley Scholars Endowment
The Foley Scholars Endowment was established in honor of James Foley as part of the GVU Center's 15th anniversary celebration. The endowment funds two $5,000 scholarships awarded annually to GVU-affiliated students who demonstrate "overall brilliance and potential impact." The first two Foley Scholars were named in 2008.
Notable awards
IEEE Fellow, 1986. "For contributions to computer graphics."
ACM SIGGRAPH Steven A. Coons Award for Outstanding Creative Contributions to Computer Graphics, 1997.
ACM Fellow, 1999. "Through his books, courses, papers, organizational, and professional contributions, Foley has had a broad and lasting impact on the computer graphics field and on ACM."
ACM SIGCHI Lifetime Achievement Award, 2007. "It is difficult to think of anyone who had a larger role in the institutionalization of HCI as a discipline."
National Academy of Engineering Member, 2008. "For contributions to the establishment of the fields of computer graphics and human-computer interaction."
Georgia Tech Class of 1934 Distinguished Professor Award, 2008. "The highest honor Georgia Tech bestows on faculty."
Selected publications
References
External links
James Foley's home page
Living people
1942 births
American computer scientists
Computer graphics researchers
Georgia Tech faculty
Fellows of the Association for Computing Machinery
Fellow Members of the IEEE
Human–computer interaction researchers
Lehigh University alumni
University of Michigan College of Engineering alumni
Members of the United States National Academy of Engineering
Mitsubishi Electric people |
24554388 | https://en.wikipedia.org/wiki/Women%27s%20College%20of%20the%20University%20of%20Denver | Women's College of the University of Denver | Colorado Women’s College (CWC) is one of eight undergraduate colleges at the University of Denver and the Rocky Mountain Region’s only all-women’s college. It was formerly the Colorado Women’s College before merging with the University of Denver in 1982.
History
The Colorado Women’s College was founded in 1888 and opened its doors to its first students as a two-year college in 1909, with enrollment at 59 students its first year. Over the next ten years it changed into a four-year institution, and offered both BS and BA degrees.
The 1920s and the 1930s brought change to the campus as two more buildings, the Foote and Porter Halls, were constructed. Meanwhile, the Colorado Women’s College once again became a two-year institution. In 1932, the college received accreditation by the North Central Association.
By 1967, enrollment at the Colorado Women’s College reached over 1,000 students and it switched back to being a four-year institution. At around the same time the Colorado Women’s College began a program targeting working women over the age of 25, which was a precursor to The Women’s College of today. However, it wasn’t until 1982, after a study conducted by both institutions, that the Colorado Women’s College merged with the University of Denver to form The Weekend College. The University of Denver housed The Weekend College on its campus, incorporating it into its College of Business.
During the 1990s the Weekend College changed its name to its present-day name of The Women’s College. This change coincided with The Women's College becoming an individual undergraduate college within the University of Denver’s academic system. In 2004, it moved into the Merle Catherine Chambers Center for the Advancement of Women.
Merle Catherine Chambers Center for the Advancement of Women
Opened in 2004, the Merle Catherine Chambers Center for the Advancement of Women currently houses The Women’s College of the University of Denver, The Women’s Foundation of Colorado, the Women in Engineering ProActive Network (WEPAN), and Higher Education Resource Services (HERS). Over 600 people contributed $9 million including a million dollar lead gift by Merle Chambers towards the construction of the center. The building has office spaces, multi-purpose meeting rooms, a technology center, and two gathering rooms to serve the needs of all the organizations.
Academics
Bachelor Degree Programs
Business Administration (BBA)
Communication (BA)
Information Technology Studies (BA)
Law & Society (BA)
Minor Programs
Business Administration
Communication
Information Technology Studies
Law & Society
Gender and Women's Studies
Leadership Studies
Certificate Programs
Community Based Research
Conflict Management Studies
Entrepreneurial Studies
Information Technology Studies
Gender & Women's Studies
Leadership Studies
Philanthropic Studies
Writing
Student organizations
Book and Theater Club
Lambda Pi Eta
"Voices" Editorial Board
Film Club
Writer's Club
Student Advisory Board (SAB)
Business-Minded Women (BMW)
DU Women in Technology (DUWIT)
Women's Communication Network (WCN)
Law and Society Student Association (LASSA)
Sisterhood of Speakers (SOS)
WebCentral TWC Online Student Community Group
The Women's College Alumnae Association
References
The Women’s College
Higher Education Resource Services (HERS)
The Women’s Foundation of Colorado
Merle Chambers
Architects of the Merle Catherine Chambers Center for the Advancement of Women
University of Denver
Women's universities and colleges in the United States
1908 establishments in Colorado
Residential colleges
Women in Colorado |
8045367 | https://en.wikipedia.org/wiki/BIOVIA | BIOVIA | BIOVIA is a software company headquartered in the United States, with representation in Europe and Asia. It provides software for chemical, materials and bioscience research for the pharmaceutical, biotechnology, consumer packaged goods, aerospace, energy and chemical industries.
Previously named Accelrys, it is a wholly owned subsidiary of Dassault Systèmes after an April 2014 acquisition and has been renamed BIOVIA.
History
Accelrys was formed in 2001 as a wholly owned subsidiary of Pharmacopeia, Inc. from the fusion of five companies: Molecular Simulations Inc. (MSI), Synopsys Scientific Systems, Oxford Molecular, the Genetics Computer Group (GCG), and Synomics Ltd. MSI was itself the result of the combination of Biodesign, Cambridge Molecular Design, Polygen and, later, Biocad and Biosym Technologies.
In late 2003, Pharmacopeia, Inc. separated its drug discovery and software development businesses. The drug discovery company retained the name Pharmacopeia and remained in Princeton, New Jersey, with ~170 employees. The software company, with ~530 employees and 2002 revenue of $95.1 million, would move to San Diego, California.
In 2004, Accelrys acquired SciTegic, producer of the Pipeline Pilot software.
On December 22, 2005, Accelrys, Inc. announced that it would restate its historical financial statements to reflect changes in the timing of revenue recognition on certain historical term-based contracts, substantially all of which were entered into before January 2004.
Accelrys managed a nanotechnology consortium producing software tools for rational nanodesign from 2004 to 2010.
In 2010, Symyx Technologies was merged with Accelrys.
In May 2011, the company acquired Contur Software AB, an electronic lab notebook software firm.
In January 2012, Accelrys acquired VelQuest, a maker of pharmaceutical and medical device-related software, for $35 million in cash.
In May 2012, Accelrys purchased Hit Explorer Operating System (HEOS), a SaaS system that provides groups with cloud-based project information and access to biological assay results, analytics, chemical registration and pharmacokinetics data, from Scynexis.
In October 2012, Accelrys acquired Aegis Analytical Corp. for $30 million in cash, extending Accelrys’ reach from the lab to the manufacturing floor. The company's Discoverant software aggregates and analyzes manufacturing, quality and development data to let manufacturers implement quality by design.
In January 2013, Accelrys acquired Swiss biosciences systems integrator Vialis AG for $5 million in cash.
In September 2013, Accelrys acquired Environmental Health & Safety (EH&S) compliance provider ChemSW.
On January 30, 2014, Dassault Systèmes of France announced the acquisition of Accelrys in an all-cash tender offer at $12.50 per share, representing a fully diluted equity value for Accelrys of approximately $750 million. After the acquisition, Accelrys was renamed BIOVIA.
Products
The Accelrys Enterprise Platform, a scientifically aware, service-oriented architecture (SOA) spanning data management and informatics, enterprise lab management, modeling and simulation, and workflow automation.
Pipeline Pilot, a program that aggregates and provides immediate access to the volumes of disparate research data locked in silos, automates the scientific analysis of that data, and enables researchers to rapidly explore, visualize and report research results.
ISIS/Draw, a chemical drawing tool.
ISIS/Base, a personal chemical database counterpart.
ISIS/Host, a chemical structure database that uses Oracle.
Accelrys Draw, a chemical drawing tool.
Accelrys Direct, a chemical substance database that uses Oracle's data cartridge technology.
The Available Chemicals Directory (ACD) a compilation of supplier catalogues that is searchable by substructure.
The Accelrys Process Management and Compliance Suite, a "combination of software products for scientists working in early and mid-stage analytical, formulation and process/bioprocess development ... through to stability, material and release testing during late-stage quality control and commercial production." The Suite streamlines product development.
Symyx Notebook by Accelrys, an Electronic lab notebook.
Materials Studio, a suite of modeling and simulation programs for material science.
Discovery Studio, a suite of modeling and simulation programs for life sciences.
Contur ELN
Externalized Collaboration Suite
Discoverant
iLabber
Experiment Knowledge Base
Lab Execution System (LES)
Commercial versions of otherwise academically licensed programs:
CHARMM (Chemistry at Harvard Macromolecular Mechanics) is commercially available from Accelrys. In October 2013, Martin Karplus of Harvard University, Michael Levitt of Stanford University and Arieh Warshel of the University of Southern California were awarded the 2013 Nobel Prize in Chemistry for their work in modeling and simulation, including CHARMM.
MODELLER
See also
Other institutions developing software for computational chemistry:
MolSoft
Scilligence
ChemAxon
Dotmatics
Chemical Computing Group
Inte:Ligand
Cresset Biomolecular Discovery
OpenEye Scientific Software
Pharmacelera
Schrödinger
CambridgeSoft
VLifeMDS Software
ChemoSophia on-line computations
NovaMechanics Ltd Cheminformatics Solutions
Software for molecular mechanics modeling
Molecular modelling software
Clinical data management
Nanotechnology companies
Software companies established in 2001
Software companies of the United Kingdom
Software companies based in California
Companies based in San Diego
Software companies of the United States
Companies formerly listed on the Nasdaq
2014 mergers and acquisitions |
58258730 | https://en.wikipedia.org/wiki/Rene%20Tammist | Rene Tammist | Rene Tammist (born 5 July 1978 in Tartu, Estonia) is an Estonian politician, serving as Minister of Entrepreneurship and Information Technology of the Republic of Estonia in the cabinet of Prime Minister Jüri Ratas since August 22, 2018.
Education
Tammist graduated from Tartu Secondary School No. 7 (Tartu Karlova School) in 1996 and received his bachelor's degree in public administration from Tartu University in 2001. He holds a master's degree in public policy and administration from Manchester University. Since 2006 Tammist has been enrolled in University College London’s PhD program. In 2009 he participated in the US Congress’ Trans-Atlantic climate and energy policy exchange program.
Professional career
2001–2002 Advisor to the Foreign Minister of the Republic of Estonia
2004–2011 Energy and climate policy advisor to the Socialists and Democrats Group at the European Parliament
2007–2009 Supervisory board member of Eesti Energia
2009–2011 Supervisory board member of the Estonian Climate and Energy Agency
2011–2012 Lecturer on EU energy and climate policy at Tartu University
2011–2018 Executive director at the Estonian Renewable Energy Association
2012–2018 European Parliament appointed member of the Administrative Board of the Agency for the Cooperation of Energy Regulators
2014–2018 Supervisory board member of Kredex Foundation
2018– Minister of Entrepreneurship and Information Technology
Political career
Member of the Estonian Social Democratic Party since 1996. Tammist was sworn in as Minister of Entrepreneurship and Information Technology on August 22, 2018.
Hobbies
History and long-distance running.
1978 births
Living people
Politicians from Tartu
Government ministers of Estonia
21st-century Estonian politicians |
33231513 | https://en.wikipedia.org/wiki/List%20of%20moths%20of%20Turkey | List of moths of Turkey | There are about 4,200 known moth species of Turkey. The moths (mostly nocturnal) and butterflies (mostly diurnal) together make up the taxonomic order Lepidoptera.
This is a list of moth species which have been recorded in Turkey.
Micropterigidae
Microptericina amasiella Staudinger, 1880
Micropterix allionella (Fabricius, 1794)
Micropterix aruncella Scopoli, 1763
Micropterix klimeschi Heath, 1973
Micropterix maschukella Alphéraky, 1870
Micropterix paykullella Akermann, 1792
Micropterix schaefferi Heath, 1975
Micropterix wockei Staudinger, 1870
Eriocraniidae
Dyseriocrania subpurpurella Haworth, 1828
Hepialidae
Hepialus humuli (Linnaeus, 1758)
Korscheltellus lupulinus (Linnaeus, 1758)
Triodia amasina Herrich-Schäffer, 1851
Triodia sylvina (Linnaeus, 1761)
Zenophassus schamyl Christoph, 1888
Nepticulidae
Ectoedemia caradjai Hering, 1932
Ectoedemia terebinthivora Klimesch, 1975
Etainia sericopeza Zeller, 1839
Fomoria louisae Klimesch, 1978
Glaucolepis albiflorella Klimesch, 1978
Stigmella aceris Frey, 1857
Stigmella azaroli Klimesch, 1978
Stigmella basiguttella Heinemann, 1862
Stigmella centifoliella Zeller, 1848
Stigmella fagella Heinemann, 1862
Stigmella mespilicola Frey, 1856
Stigmella minusculella Herrich-Schäffer, [1855]
Stigmella muricatella Klimesch, 1978
Stigmella paliurella Gerasimov, 1937
Stigmella prunetorum Stainton, 1855
Stigmella pyrellicola Klimesch, 1978
Stigmella rhamnophila Amsel, 1935
Stigmella samiatella Zeller, 1839
Trifurcula eurema Tutt, 1899
Trifurcula pallidella Zeller, 1845
Opostegidae
Opostega auritella (Hübner, [1813])
Opostega crepusculella Zeller, 1839
Opostega salaciella Treitschke, 1833
Opostega spatulella Herrich-Schäffer, [1855]
Tischeriidae
Tischeria angusticollella Duponchel, 1843
Tischeria ekebladella Bjerkander, 1795
Tischeria gaunacella Duponchel, 1843
Tischeria marginea Haworth, 1828
Heliozelidae
Antispila pfeifferella (Hübner, [1813])
Incurvariidae
Alloclemensia devotella Rebel, 1893
Incurvaria masculella ([Denis & Schiffermüller], 1775)
Prodoxidae
Lampronia koerneriella Zeller, 1839
Lampronia rubiella Bjerkander, 1781
Lampronia rupella ([Denis & Schiffermüller], 1775)
Adelidae
Adela anatolica Rebel, 1902
Adela annae Zeller, 1853
Adela auricella Ragonot, 1874
Adela barbatella Zeller, 1847
Adela chlorista Meyrick, 1912
Adela croesella Scopoli, 1763
Adela dumerilella Duponchel, 1838
Adela fasciella (Fabricius, 1775)
Adela fibulella ([Denis & Schiffermüller], 1775)
Adela florella Staudinger, 1870
Adela istrianella Heydenreich, 1851
Adela leucocerella Scopoli, 1763
Adela mazzolella (Hübner, [1801])
Adela metallica (Poda, 1761)
Adela mollella (Hübner, [1816])
Adela prodigella Mann, 1853
Adela raddella (Hübner, 1793)
Adela reaumurella (Linnaeus, 1758)
Adela repetitella Mann, 1861
Adela rufifrontella Treitschke, 1833
Adela rufimitrella Scopoli, 1763
Adela tridesma Meyrick, 1912
Nematopogon panzerella (Fabricius, 1794)
Nematopogon pilella ([Denis & Schiffermüller], 1775)
Nematopogon robertella Clerck, 1759
Nematopogon schwarziella Zeller, 1839
Nematopogon swammerdammella (Linnaeus, 1758)
Deuterotineidae
Deuterotinea casanella Eversmann, 1844
Deuterotinea palaestinensis Rebel, 1901
Deuterotinea paradoxella Staudinger, 1859
Deuterotinea syriaca Lederer, 1857
Psychidae
Acanthopsyche atra (Linnaeus, 1767)
Anaproutia reticulatella Bruand, 1853
Apterona helicoidella Vallot, 1827
Bankesia pallida Staudinger, 1879
Bijugis pectinella ([Denis & Schiffermüller], 1775)
Dahlica triquetrella (Hübner, [1813])
Diplodoma laichartingella Goeze, 1783
Dissoctena granigerella Staudinger, 1859
Eochorica balcanica Rebel, 1919
Eumasia parietariella Heydenreich, 1851
Lepidopsyche unicolor (Hufnagel, 1766)
Masonia rassei Sieder, 1975
Megalophanes viciella ([Denis & Schiffermüller], 1775)
Melasina ciliaris Ochsenheimer, 1810
Melasina punctatella Bruand, 1853
Oiketicoides caucasica Bang-Haas, 1921
Oiketicoides febretta Boyer, 1835
Oiketicoides lutea Staudinger, 1871
Oiketicoides senex Staudinger, 1871
Oiketicoides taurica Wehrli, 1932
Pachytelia villosella Ochsenheimer, 1810
Penestoglossa tauricella Rebel, 1935
Phalacropterix bruandi Lederer, 1855
Psyche casta Pallas, 1767
Psyche crassiorella Bruand, [1851]
Ptilocephala mediterranea Lederer, 1853
Ptilocephala plumifera Ochsenheimer, 1810
Rebelia surientella Bruand, 1858
Reisseronia flavociliella Mann, 1864
Taleporia pseudoimprovisella Freina & Witt, 1984
Taleporia tubulosa Retzius, 1783
Eriocottidae
Eriocottis fuscanella Zeller, 1847
Tineidae
Anemallota praetoriella Christoph, 1872
Anemapogon quercicolella Herrich-Schäffer, [1851]
Ateliotum cypellias Meyrick, 1937
Ateliotum hungaricellum Zeller, 1839
Ateliotum syriacum Caradja, 1920
Cephimallota angusticostella Zeller, 1839
Cephimallota crassiflavella Bruand, 1851
Cephimallota libanotica Petersen, 1959
Cephimallota tunesiella Zagulyaev, 1966
Ceratuncus affinitellus Rebel, 1901
Ceratuncus danubiellus Mann, 1866
Crassicornella crassicornella Zeller, 1847
Edosa ditella Pierce & Diakonoff, 1938
Edosa fuscoviolacella Ragonot, 1895
Edosa lardatella Lederer, 1858
Eudarcia forsteri Petersen, 1964
Euplocamus anthracinalis Scopoli, 1763
Euplocamus delagrangei Ragonot, 1895
Euplocamus ophisus Cramer, [1779]
Fermocelina inquinatella Zeller, 1852
Fermocelina latiusculella Stainton, 1867
Haplotinea ditella Pierce & Diakonoff, 1938
Haplotinea insectella (Fabricius, 1794)
Hapsifera luridella Zeller, 1847
Hapsifera multiguttella Ragonot, 1895
Infurcitinea albicomella Stainton, 1851
Infurcitinea anatolica Petersen, 1968
Infurcitinea nedae Gaedike, 1983
Infurcitinea nigropluviella Walsingham, 1907
Infurcitinea rumelicella Rebel, 1903
Infurcitinea tauridella Petersen, 1968
Infurcitinea turcica Petersen, 1968
Lichenotinea pustulatella Zeller, 1852
Monopis imella (Hübner, [1813])
Monopis laevigella ([Denis & Schiffermüller], 1775)
Monopis meleodes Meyrick, 1917
Monopis ustella Haworth, 1828
Morophaga choragella ([Denis & Schiffermüller], 1775)
Morophaga morella Duponchel, 1838
Morophagoides orientalis Petersen,
Myrmecozela lutosella Eversmann, 1844
Nemapogon anatolica Gaedike, 1986
Nemapogon arenbergeri Gaedike, 1986
Nemapogon cloacella Haworth, 1828
Nemapogon gliriella Heyden, 1865
Nemapogon granella (Linnaeus, 1758)
Nemapogon gravosaella Petersen, 1957
Nemapogon hungarica Gozmany, 1960
Nemapogon inconditella Lucas, 1956
Nemapogon kasyi Gaedike, 1986
Nemapogon levantina Petersen, 1961
Nemapogon orientalis Petersen, 1961
Nemapogon ruricolella Stainton, 1849
Nemapogon signatella Petersen, 1957
Nemapogon teberdellus Zagulyaev, 1963
Nemapogon variatella Clemens, 1859
Nemapogon vartianae Gaedike, 1986
Neomeessia gracilis Petersen, 1968
Neurothaumasia ankerella Mann, 1867
Niditinea fuscipunctella Haworth, 1828
Novotinea fasciata Staudinger, 1879
Opogona panchalcella Staudinger, 1871
Paratinea merdella Zeller, 1847
Perissomastix wiltshirella Petersen, 1964
Reisserita relicinella Herrich-Schäffer, [1851]
Rhodobates laevigatellus Herrich-Schäffer, [1854]
Rhodobates pallipalpellus Rebel, 1901
Tinea basifasciella Ragonot, 1895
Tinea flavescentella Haworth, 1828
Tinea murariella Staudinger, 1859
Tinea pellionella Linnaeus, 1758
Triaxomera fulvimitrella Sodoffsky, 1830
Triaxomera parasitella (Hübner, 1796)
Trichophaga bipartitella Ragonot, 1892
Trichophaga tapetzella (Linnaeus, 1758)
Bucculatricidae
Bucculatrix albedinella Zeller, 1839
Bucculatrix anthemidella Deschka, 1972
Bucculatrix basifuscella Staudinger, 1880
Bucculatrix crataegi Zeller, 1839
Bucculatrix infans Staudinger, 1880
Bucculatrix nigricomella Zeller, 1839
Bucculatrix oppositella Staudinger, 1880
Bucculatrix pseudosylvella Rebel, 1941
Bucculatrix ulmella Zeller, 1848
Douglasiidae
Klimeschia cinereipunctella Turati & Fiori, 1930
Klimeschia transversella Zeller, 1839
Klimeschia vibratoriella Mann, 1862
Tinagma anchusellum Benander, 1936
Tinagma columbellum Staudinger, 1880
Tinagma minutissimum Staudinger, 1880
Tinagma ocnerostomellum Stainton, 1850
Gracillariidae
Acrocercops brongniardella (Fabricius, 1798)
Aspilapteryx tringipennella Zeller, 1839
Callisto denticulella Thunberg, 1794
Caloptilia alchimilella Scopoli, 1763
Caloptilia braccatella Staudinger, 1870
Caloptilia coruscans Walsingham, 1907
Caloptilia cuculipennella (Hübner, 1796)
Caloptilia elongella (Linnaeus, 1761)
Caloptilia fidella Reutti, 1853
Caloptilia fribergensis Fritzsche, 1871
Caloptilia hemidactylella ([Denis & Schiffermüller], 1775)
Caloptilia mutilata Staudinger, 1879
Caloptilia onustella (Hübner, [1813])
Caloptilia pallescens Staudinger, 1879
Caloptilia rhodinella Herrich-Schäffer, [1854]
Caloptilia roscipennella (Hübner, 1796)
Caloptilia stigmatella (Fabricius, 1781)
Calybites auroguttella Stephens, 1835
Calybites quadrisignella Zeller, 1839
Cupedia cupediella Herrich-Schäffer, 1855
Dialectica imperialella Mann, 1847
Dialectica scalariella Zeller, 1850
Gracillaria syringella (Fabricius, 1794)
Micrurapteryx kollariella Zeller, 1839
Parornix anglicella Stainton, 1850
Parornix anguliferella Zeller, 1847
Parornix devoniella Stainton, 1850
Parornix finitimella Zeller, 1850
Parornix oculata (Triberti, 1979)
Parornix torquillella Zeller, 1850
Phyllocnistis unipunctella Stephens, 1834
Phyllonorycter abrasella Duponchel, 1843
Phyllonorycter acaciella Duponchel, [1843]
Phyllonorycter acerifoliella Zeller, 1839
Phyllonorycter anatolica Deschka, 1970
Phyllonorycter belotella Staudinger, 1859
Phyllonorycter cerasicolella Herrich-Schäffer, 1855
Phyllonorycter corylifoliella (Hübner, 1796)
Phyllonorycter deleta Staudinger, 1880
Phyllonorycter emberizaepennella Bouché, 1834
Phyllonorycter flava Deschka, 1975
Phyllonorycter fraxinella Zeller, 1846
Phyllonorycter gerasimovi Hering, 1930
Phyllonorycter harrisella (Linnaeus, 1761)
Phyllonorycter helianthemella Herrich-Schäffer, 1861
Phyllonorycter klemannella (Fabricius, 1781)
Phyllonorycter kusdasi Deschka, 1970
Phyllonorycter lautella Zeller, 1846
Phyllonorycter maestingella (Müller, 1764)
Phyllonorycter mannii Zeller, 1846
Phyllonorycter messaniella Zeller, 1846
Phyllonorycter millierella Staudinger, 1870
Phyllonorycter muelleriella Zeller, 1839
Phyllonorycter nivalis Deschka, 1986
Phyllonorycter oxyacanthae Frey, 1856
Phyllonorycter platani Staudinger, 1871
Phyllonorycter pyrispinosae Deschka, 1986
Phyllonorycter quercifoliella Zeller, 1839
Phyllonorycter quinnata Geoffroy, 1785
Phyllonorycter roboris Zeller, 1849
Phyllonorycter saportella Duponchel, [1840]
Phyllonorycter schreberella (Fabricius, 1781)
Phyllonorycter trifasciella Haworth, 1828
Polymitia eximipalpella Gerasimov, 1930
Povolnya leucopennella Stephens, 1835
Sabulopteryx inquinata Triberti, 1985
Sabulopteryx limosella Duponchel, 1843
Sauterina hofmanniella Schleich, 1867
Spulerina simploniella F.R., [1840]
Roeslerstammiidae
Roeslerstammia pronubella ([Denis & Schiffermüller], 1775)
Pterolonchidae
Pterolonche albescens Zeller, 1847
Pterolonche inspersa Staudinger, 1859
Pterolonche pulverulenta Zeller, 1847
Agonoxenidae
Chrysoclista linneella (Linnaeus, 1761)
Spuleria flavicaput Haworth, 1828
Batrachedridae
Batrachedra ledereriella Zeller, 1850
Blastobasidae
Blastobasis phycidella Zeller, 1839
Holcocera inunctella Zeller, 1839
Coleophoridae
About 187 species - see: List of moths of Turkey (Coleophoridae)
Elachistidae
Elachista adscitella Stainton, 1851
Elachista albifrontella (Hübner, [1817])
Elachista anatoliensis Traugott-Olsen, 1990
Elachista anserinella Zeller, 1839
Elachista argentella Clerck, 1759
Elachista atrisquamosa Staudinger, 1880
Elachista blancella Traugott-Olsen, 1992
Elachista chionella Mann, 1861
Elachista chrysodesmella Zeller, 1850
Elachista cingillella Herrich-Schäffer, [1855]
Elachista collitella Duponchel, [1843]
Elachista contaminatella Zeller, 1847
Elachista deceptricula Staudinger, 1880
Elachista deresyensis Traugott-Olsen, 1988
Elachista disertella Herrich-Schäffer, [1855]
Elachista dispilella Zeller, 1839
Elachista dispositella Frey, 1859
Elachista festucicolella Zeller, 1853
Elachista flavescens Parenti, 1981
Elachista gangabella Zeller, 1850
Elachista gebzeensis Traugott-Olsen, 1990
Elachista gleichenella (Fabricius, 1781)
Elachista griseella Zeller, 1850
Elachista grotenfelti Kaila, 2012
Elachista incanella Herrich-Schäffer, [1855]
Elachista kleini Amsel, 1935
Elachista maculata Parenti, 1978
Elachista melancholica Frey, 1859
Elachista minusculella Traugott-Olsen, 1992
Elachista monosemiella Roesler, 1881
Elachista nuraghella Amsel, 1935
Elachista pollinariella Zeller, 1839
Elachista pollutella Herrich-Schäffer, [1855]
Elachista pollutissima Staudinger, 1880
Elachista revinctella Zeller, 1850
Elachista rudectella Stainton, 1851
Elachista rufocinerea Haworth, 1828
Elachista turkensis Traugott-Olsen, 1990
Elachista unifasciella Haworth, 1828
Elachista vegliae Parenti, 1978
Elachista zonariella Tengström, 1847
Perittia echiella de Joannis, 1902
Perittia huemeri (Traugott-Olsen, 1990)
Perittia junnilaisella Kaila, 2009
Perittia karadaghella Sinev & Budashkin, 1991
Perittia ravida Kaila, 2009
Stephensia abbreviatella Stainton, 1851
Stephensia brunnichiella (Linnaeus, 1767)
Oecophoridae
Agonopterix adspersella Kollar, 1832
Agonopterix alstroemeriana Clerck, 1759
Agonopterix assimilella Treitschke, 1832
Agonopterix atomella ([Denis & Schiffermüller], 1775)
Agonopterix capreolella Zeller, 1839
Agonopterix cnicella Treitschke, 1832
Agonopterix comitella Lederer, 1855
Agonopterix despoliatella Erschoff, 1874
Agonopterix epicachritis Ragonot, 1895
Agonopterix flavella (Hübner, 1796)
Agonopterix furvella Treitschke, 1832
Agonopterix imbutella Christoph, 1888
Agonopterix kaekeritziana (Linnaeus, 1767)
Agonopterix latipennella Zerny, 1934
Agonopterix nanatella Stainton, 1849
Agonopterix nervosa Haworth, [1811]
Agonopterix pavida Meyrick, 1913
Agonopterix purpurea Haworth, [1811]
Agonopterix ramosella Stainton, 1867
Agonopterix rotundella Douglas, 1846
Agonopterix rutana (Fabricius, 1794)
Agonopterix squamosa Mann, 1864
Agonopterix subpropinquella Stainton, 1849
Agonopterix subumbellana Heinemann, 1959
Agonopterix thapsiella Zeller, 1847
Agonopterix xyleuta Meyrick, 1913
Agonopterix zephyrella (Hübner, [1813])
Alabonia kindermanni Herrich-Schäffer, [1855]
Alabonia staintoniella Zeller, 1850
Amselina cedestiella Zeller, 1868
Amselina emir Gozmany, 1961
Amselina minorita Gozmany, 1968
Amselina olympi Gozmany, 1957
Amselina parapsesta Gozmany, 1986
Anchinia grandis Stainton, 1867
Apatema mediopallidum Walsingham, 1900
Apiletria endopercna Meyrick, 1936
Apiletria luella Lederer, 1855
Apiletria purulentella Stainton, 1867
Aprominta aga Gozmany, 1962
Aprominta arenbergeri Gozmany, 1968
Aprominta bifasciata Staudinger, 1871
Aprominta designatella Herrich-Schäffer, [1855]
Aprominta syriacella Ragonot, 1895
Arragonia anatolica Gozmany, 1986
Athopeutis crinitella Herrich-Schäffer, [1855]
Batia lunaris Haworth, 1828
Borkhausenia cinerariella Mann, 1859
Borkhausenia coeruleopicta Christoph, 1888
Borkhausenia haasi Rebel, 1902
Borkhausenia minutella (Linnaeus, 1758)
Borkhausenia trigutta Christoph, 1888
Cacochroa permixtella Herrich-Schäffer, [1855]
Callima icterinella Mann, 1867
Carcina quercana (Fabricius, 1775)
Crossotocera wagnerella Zerny, 1930
Denisia similella (Hübner, 1796)
Depressaria badiella (Hübner, 1796)
Depressaria chaerophylli Zeller, 1839
Depressaria corticinella Zeller, 1854
Depressaria depressella (Hübner, [1813])
Depressaria douglasella Stainton, 1849
Depressaria floridella Mann, 1864
Depressaria hirtipalpis Zeller, 1854
Depressaria hofmanni Stainton, 1861
Depressaria marcella Rebel, 1901
Depressaria tenebricosa Zeller, 1854
Depressaria veneficella Zeller, 1847
Depressaria zelleri Staudinger, 1879
Diurnea fagella ([Denis & Schiffermüller], 1775)
Donaspastus undecimpunctellus Mann, 1864
Dysspastus cinerascens Gozmany, 1968
Endrosis sarcitrella (Linnaeus, 1758)
Eratophyes amasiella Herrich-Schäffer, [1855]
Esperia imitatrix Zeller, 1847
Esperia intermediella Stainton, 1867
Esperia oliviella (Fabricius, 1794)
Esperia sulphurella (Fabricius, 1775)
Exaeretia ledereri Zeller, 1854
Exaeretia lutosella Herrich-Schäffer, [1854]
Exaeretia nigromaculata Hannemann, 1989
Fabiola pokornyi Nickerl, 1864
Harpella eseliensis Rebel, 1908
Harpella forficella Scopoli, 1763
Hecestoptera kyra Gozmany, 1961
Holcopogon bubulcellus Staudinger, 1859
Holoscolia berytella Rebel, 1902
Holoscolia huebneri Koçak, 1980
Holoscolia majorella Rebel, 1902
Horridopalpus radiatus Staudinger, 1879
Hypercallia citrinalis Scopoli, 1763
Mylothra pyrrhella Ragonot, 1895
Oecogonia caradjai Popescu-Gorj & Capuse, 1965
Oecophora bractella (Linnaeus, 1758)
Orophia denisella ([Denis & Schiffermüller], 1775)
Orophia sordidella (Hübner, 1796)
Pleurota amaniella Mann, 1873
Pleurota armeniella Caradja, 1920
Pleurota christophi Lvovskiy, 1993
Pleurota eximia Lederer, 1861
Pleurota generosella Rebel, 1901
Pleurota issicella Staudinger, 1879
Pleurota malatya Back, 1973
Pleurota metricella Zeller, 1847
Pleurota pungitiella Herrich-Schäffer, [1854]
Pleurota pyropella ([Denis & Schiffermüller], 1775)
Pleurota tristatella Staudinger, 1871
Pleurota tristictella Seebold, 1898
Protasis punctella Costa, [1846]
Pseudatemelia flavifrontella (Denis & Schiffermüller, 1775)
Pseudatemelia sordida Staudinger, 1879
Schiffermuelleria irroratella Staudinger, 1879
Schiffermuelleria schaefferella (Linnaeus, 1758)
Semioscopis osthelderi Rebel, 1935
Symmoca caliginella Mann, 1867
Symmoca deprinsi Gozmány, 2001
Symmoca latiusculella Stainton, 1867
Symmoca salinata Gozmany, 1986
Symmoca sparsella de Joannis, 1891
Symmoca straminella Gozmany, 1986
Symmoca vitiosella Zeller, 1868
Telechrysis tripuncta Haworth, 1828
Ethmiidae
Ethmia amasina Staudinger, 1879
Ethmia aurifluella (Hübner, [1801])
Ethmia bipunctella (Fabricius, 1775)
Ethmia candidella Alphéraky, 1908
Ethmia caradjae Rebel, 1907
Ethmia chrysopyga Zeller, 1844
Ethmia defreinai Ganev, 1984
Ethmia distigmatella Erschoff, 1874
Ethmia dodocea Haworth, 1828
Ethmia fumidella Wocke, 1850
Ethmia funerella (Fabricius, 1787)
Ethmia haemorrhoidella Eversmann, 1844
Ethmia hakkarica Koçak, 1986
Ethmia infelix Meyrick, 1914
Ethmia iranella Zerny, 1940
Ethmia pseudoscythrella Rebel, 1902
Ethmia pusiella (Linnaeus, 1758)
Ethmia quadrinotella Mann, 1861
Ethmia rothshildi Rebel, 1912
Ethmia similis Sattler, 1967
Ethmia suspecta Sattler, 1967
Ethmia terminella T. B. Fletcher, 1938
Ethmia treitschkeella Staudinger, 1879
Ethmia tripunctella Staudinger, 1879
Gelechiidae
Acanthophila alacella Duponchel, 1838
Acompsia cinerella Clerck, 1759
Anacampsis obscurella ([Denis & Schiffermüller], 1775)
Anacampsis populella Clerck, 1759
Anarsia aleurodes Meyrick, 1922
Anarsia lineatella Zeller, 1839
Anarsia spartiella Schrank, 1802
Apatetris mirabella Staudinger, 1880
Apodia bifractella Duponchel, [1843]
Aproaerema anthyllidella (Hübner, [1813])
Aproaerema cincticulella Herrich-Schäffer, [1855]
Aristotelia arnoldella Rebel, 1905
Aristotelia brucinella Mann, 1872
Aristotelia cupreella Zerny, 1934
Aristotelia decoratella Staudinger, 1879
Aristotelia decurtella (Hübner, [1813])
Aristotelia euprepella Zerny, 1934
Aristotelia fervidella Mann, 1864
Aristotelia jactatrix Meyrick, 1926
Aristotelia maculata Staudinger, 1879
Aristotelia osthelderi Rebel, 1935
Aristotelia pancaliella Staudinger, 1870
Aristotelia parvula Staudinger, 1879
Aristotelia punctatella Staudinger, 1879
Aristotelia remissella Zeller, 1847
Aristotelia retusella Rebel, 1891
Aristotelia servella Zeller, 1839
Aristotelia striatopunctella Rebel, 1891
Aristotelia subericinella Herrich-Schäffer, [1855]
Aristotelia unifasciella Rebel, 1929
Aroga aristotelis Milliere, 1875
Aroga pascuicola Staudinger, 1871
Aroga velocella Duponchel, 1838
Bryotropha desertella Douglas, 1850
Bryotropha dryadella Zeller, 1850
Bryotropha terrella ([Denis & Schiffermüller], 1775)
Caryocolum albithoracellum Huemer, 1989
Caryocolum anatolicum Huemer, 1989
Caryocolum gypsophilae Stainton, 1869
Caryocolum horoscopa Meyrick, 1926
Caryocolum iranicum Huemer, 1989
Ceuthomadarus tenebrionellus Mann, 1864
Chionodes distinctella Zeller, 1839
Chionodes hayreddini Koçak, 1985
Chrysoesthia drurella (Fabricius, 1775)
Chrysoesthia sexguttella Thunberg, 1794
Coloptilia conchylidella Hofmann, [1898]
Compsolechia scintillella F.R., 1841
Compsolechia subsequella (Hübner, 1796)
Crossobela trinotella Herrich-Schäffer, [1856]
Deroxena venosulella Moeschler, 1862
Dichomeris barbella ([Denis & Schiffermüller], 1775)
Dichomeris derasella ([Denis & Schiffermüller], 1775)
Dichomeris juniperella (Linnaeus, 1761)
Dichomeris limosella Schläger, 1849
Dichomeris unguiculatus (Fabricius, 1798)
Dirhinosia arnoldiella (Rebel, 1905)
Dirhinosia cervinella (Eversmann, 1844)
Dirhinosia nitidula (Stainton, 1867)
Dirhinosia unifasciella (Rebel, 1929)
Ephysteris deserticolella Staudinger, 1870
Ephysteris promptella Staudinger, 1859
Epilechia magnetella Staudinger, 1870
Ergasiola ergasima Meyrick, 1916
Eulamprotes superbella Zeller, 1839
Eulamprotes wilkella (Linnaeus, 1758)
Eurodachtha flavissimella Mann, 1862
Eurodachtha nigralba Gozmany, 1978
Euscrobipalpa acuminatella Sircom, 1850
Euscrobipalpa artemisiella Treitschke, 1833
Euscrobipalpa atriplicella F.R., 1841
Euscrobipalpa chetitica Povolny, 1974
Euscrobipalpa dividella Rebel, 1936
Euscrobipalpa erichi Povolny, 1964
Euscrobipalpa grossa Povolny, 1966
Euscrobipalpa obsoletella F.R., 1841
Euscrobipalpa ocellatella Boyd, 1858
Euscrobipalpa pulchra Povolny, 1967
Euscrobipalpa smithi Povolny & Bradley, 1964
Euscrobipalpa vladimiri Povolny, 1966
Evippe penicillata Amsel, 1961
Filatima spurcella Duponchel, 1843
Gelechia fuscantella Heinemann, 1870
Gelechia indignella Staudinger, 1879
Gelechia invenustella Berg., 1875
Gelechia pistaciae Filipjev, 1933
Gelechia repetitrix Meyrick, 1931
Gelechia sabinella Zeller, 1839
Gelechia senticetella Staudinger, 1859
Gelechia stramentella Rebel, 1935
Gnorimoschema antiquum Povolny, 1966
Gnorimoschema tetrameris Meyrick, 1926
Homaloxestis briantiella Turati, 1879
Homaloxestis hades Gozmany, 1978
Inotica gaesata Meyrick, 1913
Isophrictis anthemidella Wocke, 1871
Isophrictis invisella Constant, 1885
Isophrictis kefersteiniella Zeller, 1850
Isophrictis lineatella Zeller, 1850
Isophrictis striatella ([Denis & Schiffermüller], 1775)
Iwaruna biguttella Duponchel, 1843
Lecithocera anatolica Gozmany, 1978
Lecithocera nigrana Duponchel, 1836
Lecithocera syriella Gozmany, 1978
Megacraspedus argyroneurellus Staudinger, 1870
Megacraspedus attritellus Staudinger, 1870
Megacraspedus imparellus F.R., 1837
Megacraspedus incertellus Rebel, 1930
Megacraspedus monolorellus Rebel, 1906
Megacraspedus separatellus F.R., 1837
Mesophleps pudicellus Mann, 1861
Mesophleps pyropella (Hübner, 1793)
Metanarsia modesta Staudinger, 1870
Metzneria aestivella Zeller, 1839
Metzneria agraphella Ragonot, 1895
Metzneria aprilella Herrich-Schäffer, [1850]
Metzneria ehikeella Gozmany, 1954
Metzneria intestinella Mann, 1864
Metzneria litigiosella Milliere, 1879
Metzneria metzneriella Stainton, 1851
Metzneria paucipunctella Zeller, 1839
Metzneria tenuiella Mann, 1864
Mirificarma aflavella Duponchel, 1844
Mirificarma eburnella ([Denis & Schiffermüller], 1775)
Mirificarma lentiginosella Zeller, 1839
Mirificarma maculatella (Hübner, 1796)
Mirificarma rhodoptera Mann, 1866
Monochroa lutulentella Zeller, 1839
Monochroa tenebrella (Hübner, [1817])
Neofaculta confidella Rebel, 1935
Neofaculta ericetella Geyer, [1832]
Neofaculta stictella Rebel
Neofriseria sceptrophora Meyrick, 1926
Nothris chinganella Christoph, 1882
Nothris sabulosella Rebel, 1935
Nothris sulcella Staudinger, 1879
Nothris verbascella Brahm, 1791
Onebala lamprostoma Zeller, 1847
Ornativalva heluanensis Debski, 1913
Ornativalva mixolitha Meyrick, 1918
Ornativalva ochraceofusca Sattler, 1967
Ornativalva ornatella Sattler, 1967
Ornativalva plutelliformis Staudinger, 1859
Palumbina guerinii Stainton, 1857
Pexicopia umbrella ([Denis & Schiffermüller], 1775)
Phthorimaea sabulosella Rebel, 1906
Platyedra gossypiella Saunders, 1843
Platyedra subcinerea Haworth, 1828
Platyedra vilella Zeller, 1847
Pogochaetia solitaria Staudinger, 1879
Prolita solutella Zeller, 1839
Prolita virgella Thunberg, 1794
Pseudotelphusa fugitivella Zeller, 1839
Pseudotelphusa scalella Scopoli, 1763
Psoricoptera gibbosella Stainton, 1854
Ptocheuusa campicolella Mann, 1857
Ptocheuusa paupella Zeller, 1847
Recurvaria leucatella Clerck, 1759
Recurvaria nanella ([Denis & Schiffermüller], 1775)
Scrobipalpa anatolica Povolny, 1973
Scrobipalpa bazae Povolny, 1977
Scrobipalpa bryophiloides Povolny, 1966
Scrobipalpa fraterna Povolny, 1973
Scrobipalpa halophila Povolny, 1973
Scrobipalpa heliopa Lower, 1900
Scrobipalpa heretica Povolny, 1973
Scrobipalpa meteorica Povolny, 1984
Scrobipalpa nana Povolny, 1973
Scrobipalpa remota Povolny, 1972
Scrobipalpula psilella Herrich-Schäffer, [1854]
Sitotroga cerealella Olivier, 1789
Sophronia consanguinella Herrich-Schäffer, [1855]
Sophronia finitimella Rebel, 1906
Sophronia humerella ([Denis & Schiffermüller], 1775)
Sophronia illustrella (Hübner, 1796)
Stenolechia gemmella (Linnaeus, 1758)
Stenolechia nigrinotella Zeller, 1847
Stenolechia sagittella Caradja, 1920
Stomopteryx detersella Zeller, 1847
Stomopteryx patruella Mann, 1857
Syncopacma captivella Herrich-Schäffer, [1854]
Syncopacma coronilella Treitschke, 1833
Syncopacma maraschella Caradja, 1920
Syncopacma polychromella Rebel, 1902
Syncopacma sangiella Stainton, 1863
Syncopacma splendens Staudinger, 1881
Syncopacma syncrita Meyrick, 1926
Syncopacma taeniolella Zeller, 1839
Syncopacma vorticella Scopoli, 1763
Teleiodes decorella Haworth, 1812
Teleiodes luculella (Hübner, [1813])
Teleiodes ostentella Zerny, 1933
Teleiodes paripunctella Thunberg, 1794
Teleiodes proximella (Hübner, 1796)
Teleiodes vulgella ([Denis & Schiffermüller], 1775)
Teleiopsis bagriotella Duponchel, [1840]
Teleiopsis diffinis Haworth, 1828
Teleiopsis latisacculus Pitkin, 1988
Teleiopsis terebinthinella Herrich-Schäffer, [1856]
Telphusa comedonella Staudinger, 1879
Telphusa mersinella Staudinger, 1879
Telphusa praedicata Meyrick, 1923
Telphusa wagneriella Rebel, 1926
Tila sequanda Povolny, 1974
Turcopalpa glaseri Povolny, 1973
Xenolechia scriptella (Hübner, 1796)
Xenolechia tristis Staudinger, 1879
Xystophora arundinetella Stainton, 1858
Xystophora carchariella Zeller, 1839
Xyloryctidae
Odites kollarella Costa, [1836]
Momphidae
Mompha lacteella Stephens, 1834
Mompha miscella ([Denis & Schiffermüller], 1775)
Mompha subbistrigella Haworth, 1828
Cosmopterigidae
Allotalanta autophaea Meyrick, 1913
Ascalenia vanelloides Gerasimov, 1930
Coccidiphila lederiella Zeller, 1850
Cosmopterix scribaiella Zeller, 1850
Cosmopterix zieglerella (Hübner, [1810])
Eteobalea albiapicella Duponchel, 1843
Eteobalea beata Walsingham, 1907
Eteobalea dohrnii Zeller, 1847
Eteobalea intermediella Riedl, 1966
Eteobalea isabellella Costa, 1836
Eteobalea serratella Treitschke, 1833
Eteobalea sumptuosella Lederer, 1855
Pancalia leuwenhoekella (Linnaeus, 1761)
Pancalia nodosella Mann, 1854
Pyroderces argyrogrammos Zeller, 1847
Sorhagenia lophyrella Douglas, 1846
Sorhagenia rhamniella Zeller, 1839
Tolliella fulguritella Ragonot, 1895
Vulcaniella cognatella Riedl, 1991
Vulcaniella fiordalisa Petry, 1904
Vulcaniella glaseri Riedl, 1966
Vulcaniella grabowiella Staudinger, 1859
Vulcaniella pomposella Zeller, 1839
Scythridae
Scythris aerariella Herrich-Schäffer, [1854]
Scythris amphonycella Geyer, [1836]
Scythris anomaloptera Staudinger, 1880
Scythris apicalis Zeller, 1847
Scythris asiatica Staudinger, 1880
Scythris basistrigella Staudinger, 1880
Scythris canescens Staudinger, 1880
Scythris caramani Staudinger, 1880
Scythris cupreella Staudinger, 1859
Scythris discimaculella Rebel, 1935
Scythris dissimilella Herrich-Schäffer, [1855]
Scythris emichi Anker, 1870
Scythris fallacella Schläger, 1847
Scythris flabella Mann, 1861
Scythris flaviventrella Herrich-Schäffer, 1850
Scythris gravotella Zeller, 1847
Scythris iconiensis Rebel, 1903
Scythris jaeckhi Bengtsson, 1989
Scythris limbella (Fabricius, 1775)
Scythris moldavicella Rebel, 1906
Scythris monochreella Ragonot, 1895
Scythris obscurella Scopoli, 1763
Scythris ottomana Jäckh, 1978
Scythris paelopyga Staudinger, 1880
Scythris pascuella Zeller, 1852
Scythris pfeifferella Rebel, 1935
Scythris platypyga Staudinger, 1880
Scythris punctivittella Costa, [1836]
Scythris seliniella Zeller, 1839
Scythris senescens Stainton, 1854
Scythris subclavella Rebel, 1900
Scythris subfasciata Staudinger, 1880
Scythris tabescentella Staudinger, 1880
Scythris tabidella Herrich-Schäffer, [1854]
Scythris taurella Caradja, 1920
Scythris tenuisquamata Staudinger, 1880
Scythris tenuivittella Stainton, 1867
Scythris triguttella Zeller, 1839
Scythris unimaculella Rebel, 1905
Scythris vagabundella Herrich-Schäffer, [1854]
Scythris vittella Costa, [1836]
Syringopais temperatella Lederer, 1855
Alucitidae
Alucita cancellata Meyrick, 1908
Alucita cinnerethella Amsel, 1935
Alucita cymatodactyla Zeller, 1852
Alucita grammodactyla Zeller, 1841
Alucita hexadactyla Linnaeus, 1758
Alucita huebneri Wallengren, 1859
Alucita major Rebel, 1905
Alucita palodactyla Zeller, 1847
Alucita tridentata Scholz & Jäckh, 1994
Alucita zonodactyla Zeller, 1847
Epermeniidae
Epermenia aequidentella Hofmann, 1867
Epermenia chaerophyllella Goeze, 1776
Epermenia insecurella Stainton, 1849
Epermenia ochreomaculella Milliere, 1854
Epermenia orientalis Gaedike, 1966
Epermenia pontificella (Hübner, 1796)
Epermenia strictella Wocke, 1867
Epermenia wockeella Staudinger, 1880
Ochromolopis ictella (Hübner, [1813])
Ochromolopis staintoniella Milliere, 1869
Phaulernis fulviguttella Zeller, 1839
Yponomeutidae
Acrolepiopsis vesperella Zeller, 1850
Atemelia torquatella Lienig & Zeller, 1846
Digitivalva glaseri Gaedike, 1971
Digitivalva occidentella Klimesch, 1956
Digitivalva reticulella (Hübner, 1796)
Eidophasia messingiella F.R., 1837
Eidophasia syenitella Herrich-Schäffer, [1851]
Eidophasia tauricella Staudinger, 1879
Inuliphila pulicariae Klimesch, 1956
Inuliphila wolfschlaegeri Klimesch, 1956
Kessleria caucasica Friese, 1960
Kessleria impura Staudinger, 1879
Kessleria osyridella Milliere, 1869
Niphonympha albella Zeller, 1847
Paraswammerdamia lutarea Haworth, 1828
Phrealcia friesei Mey, 2012
Plutella porrectella (Linnaeus, 1758)
Plutella xylostella (Linnaeus, 1758)
Prays fraxinella Bjerkander, 1784
Prays oleae Bernard, 1788
Pseudoswammerdamia combinella (Hübner, 1796)
Rhigognostis senilella Zetterstedt, [1839]
Theristis mucronella Scopoli, 1763
Yponomeuta evonymellus (Linnaeus, 1758)
Yponomeuta irrorellus (Hübner, 1796)
Yponomeuta padellus (Linnaeus, 1758)
Yponomeuta plumbellus ([Denis & Schiffermüller], 1775)
Yponomeuta rorrellus (Hübner, 1796)
Ypsolopha albiramella Mann, 1861
Ypsolopha asperella (Linnaeus, 1761)
Ypsolopha dentella ([Denis & Schiffermüller], 1775)
Ypsolopha excisella Lederer, 1855
Ypsolopha instabilella Mann, 1866
Ypsolopha kristalleniae Rebel, 1916
Ypsolopha manniella Staudinger, 1880
Ypsolopha paranthesella (Linnaeus, 1761)
Ypsolopha persicella (Fabricius, 1787)
Ypsolopha sculpturella (Herrich-Schäffer, 1854)
Ypsolopha semitessella Mann, 1861
Ypsolopha sequella Clerck, 1759
Ypsolopha trichoriella Mann, 1861
Ypsolopha ustella Clerck, 1759
Ypsolopha vittella (Linnaeus, 1758)
Zelleria hepariella Stainton, 1849
Ochsenheimeriidae
Ochsenheimeria taurella ([Denis & Schiffermüller], 1775)
Lyonetiidae
Bedellia somnulentella Zeller, 1847
Leucoptera malifoliella Costa, 1836
Lyonetia prunifoliella (Hübner, 1796)
Glyphipterigidae
Glyphipterix equitella Scopoli, 1763
Glyphipterix forsterella (Fabricius, 1787)
Glyphipterix simpliciella Stephens, 1834
Glyphipterix thrasonella Scopoli, 1763
Argyresthiidae
Argyresthia abdominalis Zeller, 1839
Argyresthia conjugella Zeller, 1839
Argyresthia mendica Haworth, 1828
Argyresthia pretiosa Staudinger, 1880
Argyresthia pruniella (Linnaeus, 1761)
Heliodinidae
Heliodines roesella (Linnaeus, 1758)
Schreckensteiniidae
Schreckensteinia festaliella (Hübner, [1819])
Brachodidae
Brachodes anatolicus Kallies, 2001
Brachodes appendiculata (Esper, [1783])
Brachodes buxeus Kallies, 2001
Brachodes candefacta Lederer, 1858
Brachodes caradjae Rebel, 1902
Brachodes dispar Herrich-Schäffer, [1854]
Brachodes orientalis Rebel, 1905
Brachodes pumila Ochsenheimer, 1808
Brachodes tristis Staudinger, 1879
Phycodes chalcocrossa Meyrick, 1909
Phycodes radiata Ochsenheimer, 1808
Sesiidae
Bembecia ichneumoniformis ([Denis & Schiffermüller], 1775)
Bembecia illustris Staudinger & Rebel, 1901
Bembecia lomatiaeformis Lederer, 1853
Bembecia pontica Staudinger, 1891
Bembecia sanguinolenta Lederer, 1853
Bembecia scopigera Scopoli, 1763
Bembecia stiziformis Herrich-Schäffer, 1851
Chamaesphecia albiventris Lederer, 1853
Chamaesphecia alysoniformis Herrich-Schäffer, 1846
Chamaesphecia anatolica Schwingenschuss, 1938
Chamaesphecia annellata Zeller, 1847
Chamaesphecia aurifera Romanoff, 1885
Chamaesphecia bibioniformis (Esper, [1800])
Chamaesphecia chalciformis (Esper, [1804])
Chamaesphecia colpiformis Staudinger, 1856
Chamaesphecia doleriformis Herrich-Schäffer, 1846
Chamaesphecia doryceraeformis Lederer, 1853
Chamaesphecia elampiformis Herrich-Schäffer, 1851
Chamaesphecia empiformis (Esper, [1783])
Chamaesphecia euceraeformis Ochsenheimer, 1816
Chamaesphecia gorbunovi Spatenka, 1992
Chamaesphecia haberhaueri Staudinger, 1879
Chamaesphecia leucopsiformis (Esper, [1800])
Chamaesphecia masariformis Ochsenheimer, 1808
Chamaesphecia minor Staudinger, 1856
Chamaesphecia proximata Staudinger, 1891
Chamaesphecia regula Staudinger, 1891
Chamaesphecia schmidtiiformis Freyer, 1836
Chamaesphecia tahira Kallies & Petersen, 1995
Chamaesphecia tenthrediniformis ([Denis & Schiffermüller], 1775)
Euhagena palariformis Lederer, 1858
Osminia fenusaeformis Herrich-Schäffer, 1852
Paranthrene insolita Le Cerf, 1914
Paranthrene tabaniformis (Rottemburg, 1775)
Pennisetia hylaeiformis Laspeyres, 1801
Pyropteron chrysidiforme (Esper, [1782])
Pyropteron minianiforme Freyer, 1845
Sesia apiformis (Linnaeus, 1761)
Sesia bembeciformis (Hübner, [1806])
Sesia melanocephala Dalman, 1816
Sesia pimplaeformis Oberthür, 1872
Synansphecia affinis Staudinger, 1856
Synansphecia leucomelaena Zeller, 1847
Synansphecia mannii Lederer, 1853
Synansphecia muscaeformis (Esper, [1783])
Synansphecia triannuliformis Freyer, 1845
Synanthedon andrenaeforme Laspeyres, 1801
Synanthedon cephiforme Ochsenheimer, 1808
Synanthedon formicaeforme (Esper, [1783])
Synanthedon myopaeforme Borkhausen, 1789
Synanthedon pipiziforme Lederer, 1855
Synanthedon stomoxiforme (Hübner, 1790)
Synanthedon tipuliforme Clerck, 1759
Synanthedon vespiforme (Linnaeus, 1761)
Tinthia brosiformis (Hübner, [1813])
Tinthia cingulata Staudinger, 1870
Tinthia hoplisiformis Mann, 1864
Tinthia myrmosaeformis Herrich-Schäffer, 1846
Tinthia tineiformis (Esper, [1789])
Choreutidae
Anthophila fabriciana (Linnaeus, 1767)
Choreutis nemorana (Hübner, [1799])
Choreutis pariana Clerck, 1759
Millieria dolosana Herrich-Schäffer, [1854]
Prochoreutis myllerana (Fabricius, 1794)
Prochoreutis sehestediana (Fabricius, 1777)
Prochoreutis stellaris Zeller, 1847
Tebenna bjerkandrella Borgström, 1784
Cossidae
Arctiocossus striolatus Rothschild, 1912
Azygophleps regia Staudinger, 1892
Cecryphalus nubila Staudinger, 1895
Cossulus argentatus Staudinger, 1887
Cossulus intrictatus Staudinger, 1887
Cossulus lignosus Brandt, 1938
Cossus araraticus Teich, 1896
Cossus cossus (Linnaeus, 1758)
Cossus funkei Röber, 1896
Dieida ledereri Staudinger, 1871
Dyspessa argaeensis Rebel, 1902
Dyspessa emilia Staudinger, 1878
Dyspessa hethitica Daniel, 1938
Dyspessa pallidata Staudinger, 1892
Dyspessa salicicola Eversmann, 1848
Dyspessa ulula Borkhausen, 1790
Dyspessacossus fereidun Grum-Grshimailo, 1895
Dyspessacossus hadjiensis Daniel, 1953
Dyspessacossus osthelderi Daniel, 1932
Holcocerus volgensis Christoph, 1893
Isoceras bipunctatum Staudinger, 1887
Isoceras huberi Eitschberger & Ströhle, 1987
Lamellocossus terebra Denis, 1785
Paracossulus thrips (Hübner, [1813])
Parahypopta caestrum (Hübner, [1808])
Paropta paradoxa Herrich-Schäffer, [1851]
Phragmacossia albida Erschoff, 1874
Phragmataecia castaneae (Hübner, 1790)
Samagystia cuhensis Freina, 1994
Stygioides colchica Herrich-Schäffer, [1851]
Stygioides psychidion Staudinger, 1870
Stygioides tricolor Lederer, 1858
Zeuzera pyrina (Linnaeus, 1761)
Zygaenidae
Adscita drenowskii Alberti, 1939
Adscita geryon Hübner, [1813]
Adscita mannii Lederer, 1852
Adscita obscura Zeller, 1847
Adscita statices (Linnaeus, 1758)
Adscita storaiae Tarmann, 1977
Clelea syriaca Hampson, 1919
Jordanita chloronota Staudinger, 1870
Jordanita chloros (Hübner, [1813])
Jordanita globulariae (Hübner, 1793)
Jordanita graeca Jordan, 1909
Jordanita syriaca Alberti, 1937
Jordanita tenuicornis Zeller, 1847
Lucasiterna subsolana Staudinger, 1862
Praviela anatolica Naufock, 1929
Rhagades amasina Herrich-Schäffer, [1851]
Roccia budensis Speyer & Speyer, 1858
Roccia hector Jordan, 1909
Roccia kurdica Tarmann, 1987
Roccia notata Zeller, 1847
Roccia staudingeri Alberti, 1954
Roccia volgensis Möschler, 1862
Theresimima ampelophaga Bayle, 1809
Zygaena adscharica Reiss, 1935
Zygaena araxis Koch, 1936
Zygaena armena Eversmann, 1851
Zygaena brizae (Esper, [1800])
Zygaena cambysea Lederer, 1870
Zygaena carniolica Scopoli, 1763
Zygaena cuvieri Boisduval, 1828
Zygaena cynarae (Esper, [1789])
Zygaena dorycnii Ochsenheimer, 1808
Zygaena ephialtes (Linnaeus, 1767)
Zygaena filipendulae (Linnaeus, 1758)
Zygaena formosa Herrich-Schäffer, [1852]
Zygaena fraxini Ménétriés, 1832
Zygaena graslini Lederer, 1855
Zygaena haematina Kollar, [1849]
Zygaena laeta (Hübner, 1790)
Zygaena laetifica Herrich-Schäffer, [1846]
Zygaena lonicerae Scheven, 1777
Zygaena loti ([Denis & Schiffermüller], 1775)
Zygaena lydia Staudinger, 1887
Zygaena manlia Lederer, 1870
Zygaena minos ([Denis & Schiffermüller], 1775)
Zygaena olivieri Boisduval, 1828
Zygaena osterodensis Reiss, 1921
Zygaena peschmerga Eckweiler & Görgner, 1981
Zygaena problematica Naumann, 1966
Zygaena punctum Ochsenheimer, 1808
Zygaena purpuralis Brünnich, 1763
Zygaena rosinae Korb, 1902
Zygaena sedi (Fabricius, 1787)
Zygaena tamara Christoph, 1889
Zygaena viciae ([Denis & Schiffermüller], 1775)
Zygaenoprocris capitalis Staudinger, 1879
Limacodidae
Apoda limacodes (Hufnagel, 1766)
Heterogenea cruciata Knoch, 1783
Hoyosia cretica Rebel, 1906
Latoia inexpectata Staudinger, 1900
Tortricidae
Ablabia goiiana (Linnaeus, 1761)
Acleris boscana (Fabricius, 1794)
Acleris boscanoides Razowski, 1959
Acleris fuscana (Fabricius, 1787)
Acleris literana (Linnaeus, 1758)
Acleris napaea Meyrick, 1912
Acleris osthelderi Obraztsov, 1949
Acleris permutana Duponchel, 1836
Acleris quercinana Zeller, 1849
Acleris rhombana ([Denis & Schiffermüller], 1775)
Acleris scabrana ([Denis & Schiffermüller], 1775)
Acleris tripunctana (Hübner, 1793)
Acleris undulana Walsingham, 1900
Acleris variegana ([Denis & Schiffermüller], 1775)
Agapeta hamana (Linnaeus, 1758)
Agapeta zoegana (Linnaeus, 1767)
Aleimma loeflingiana (Linnaeus, 1758)
Ancylis achatana ([Denis & Schiffermüller], 1775)
Ancylis apicella ([Denis & Schiffermüller], 1775)
Ancylis badiana ([Denis & Schiffermüller], 1775)
Ancylis comptana Fröhlich, 1828
Ancylis mitterbacheriana ([Denis & Schiffermüller], 1775)
Ancylis obtusana Haworth, [1811]
Ancylis selenana Guenée, 1845
Ancylis unculana Haworth, [1811]
Anoplocnephasia orientana Alphéraky, 1876
Anoplocnephasia sedana Constant, 1884
Aphelia euxina Djakonov, 1929
Aphelia ignoratana Staudinger, 1879
Aphelia insincera Meyrick, 1912
Aphelia ochreana (Hübner, [1799])
Aphelia palaeana (Hübner, 1793)
Aphelia viburniana (Fabricius, 1787)
Archips crataeganus (Hübner, [1799])
Archips hebenstreitellus (Müller, 1764)
Archips podanus Scopoli, 1763
Archips rosanus (Linnaeus, 1758)
Archips vulpeculanus Fuchs, 1903
Archips xylosteanus (Linnaeus, 1758)
Argyrotaenia pulchellana Haworth, [1811]
Aspila funebrana Treitschke, 1835
Aspila janthinana Duponchel, 1835
Aterpia anderreggana Guenée, 1845
Bactra lanceolana (Hübner, [1799])
Bactra robustana Christoph, 1872
Bactra Stephens, 1834
Barbara herrichiana Obraztsov, 1960
Barbara osmana Obraztsov, 1952
Cacochroea turbidana Treitschke, 1835
Cacoecimorpha pronubana (Hübner, [1799])
Capricornia boisduvaliana Duponchel, 1836
Celypha anatoliana Caradja, 1916
Celypha cespitana (Hübner, [1817])
Celypha flavipalpana Herrich-Schäffer, [1851]
Celypha rurestrana Duponchel, 1843
Ceratoxanthis argentomixtana Staudinger, 1870
Choristoneura diversana (Hübner, [1817])
Cirriphora pharaonana Kollar, 1858
Clepsis senecionana (Hübner, [1819])
Cnephasia alternella Stephens, 1852
Cnephasia anatolica Obraztsov, 1950
Cnephasia asiatica Kuznetsov, 1956
Cnephasia bizensis Réal, 1953
Cnephasia chrysantheana Duponchel, 1843
Cnephasia communana Herrich-Schäffer, [1851]
Cnephasia cupressivorana Staudinger, 1871
Cnephasia facetana Kennel, 1901
Cnephasia fragosana Zeller, 1847
Cnephasia helenica Obraztsov, 1950
Cnephasia heringi Razowski, 1958
Cnephasia kenneli Obraztsov, 1956
Cnephasia korvaci Razowski, 1965
Cnephasia longana Haworth, [1811]
Cnephasia maraschana Caradja, 1916
Cnephasia orthoxyana Réal, 1951
Cnephasia osthelderi Obraztsov, 1950
Cnephasia pascuana (Hübner, [1799])
Cnephasia semibrunneata de Joannis, 1891
Cnephasia syriella Razowski, 1956
Cnephasia tianshanica Filipjev, 1934
Cnephasia tristrami Walsingham, 1900
Cnephasia virgaureana Treitschke, 1835
Cnephasia virginana Kennel, 1899
Cnephasiella abrasana Duponchel, 1843
Cnephasiella incertana Treitschke, 1835
Cochylidia rupicola Curtis, 1834
Cochylimorpha alternana Stephens, 1834
Cochylimorpha armeniana de Joannis, 1891
Cochylimorpha chamomillana Herrich-Schäffer, [1851]
Cochylimorpha diana Kennel, 1899
Cochylimorpha discolorana Kennel, 1899
Cochylimorpha eburneana Kennel, 1899
Cochylimorpha elongana F.R., 1839
Cochylimorpha fucosa Razowski, 1970
Cochylimorpha hilarana Herrich-Schäffer, [1851]
Cochylimorpha kurdistana Amsel, 1959
Cochylimorpha langeana Kalchberg, 1897
Cochylimorpha meridiana Staudinger, 1859
Cochylimorpha meridiolana Ragonot, 1894
Cochylimorpha nodulana Möschler, 1862
Cochylimorpha nomadana Erschoff, 1874
Cochylimorpha pyramidana Staudinger, 1870
Cochylimorpha sparsana Staudinger, 1879
Cochylimorpha wiltshirei Razowski, 1963
Cochylis defessana Mann, 1861
Cochylis epilinana Duponchel, 1842
Cochylis hybridella (Hübner, [1813])
Cochylis maestana Kennel, 1899
Cochylis militariana Derra, 1990
Cochylis nana Haworth, [1811]
Cochylis pallidana Zeller, 1847
Cochylis posterana Zeller, 1847
Cochylis roseana Haworth, [1811]
Cochylis salebrana Mann, 1862
Collicularia microgrammana Guenée, 1845
Commophila bilbaensis Rössler, 1877
Commophila cremonana Ragonot, 1894
Commophila deaurana Peyerimhoff, 1877
Commophila ferruginea Walsingham, 1900
Commophila flagellana Duponchel, 1836
Commophila francillana (Fabricius, 1794)
Commophila hartmanniana Clerck, 1759
Commophila iranica Razowski, 1963
Commophila kasyi Razowski, 1962
Commophila kindermanniana Treitschke, 1830
Commophila margarotana Duponchel, 1836
Commophila mauritanica Walsingham, 1898
Commophila moribundana Staudinger, 1859
Commophila nefandana Kennel, 1899
Commophila pannosana Kennel, 1913
Commophila prangana Kennel, 1900
Commophila sanguinana Treitschke, 1830
Commophila smeathmanniana (Fabricius, 1781)
Commophila speciosa Razowski, 1962
Commophila tesserana (Hübner, [1817])
Commophila tornella Walsingham, 1898
Commophila williana Brahm, 1791
Crocidosema plebejana Zeller, 1847
Croesia bergmanniana (Linnaeus, 1758)
Croesia forskaleana (Linnaeus, 1758)
Croesia holmiana (Linnaeus, 1758)
Cryptocochylis conjunctana Mann, 1864
Cydia alienana Caradja, 1916
Cydia conicolana Heylaerts, 1874
Cydia duplicana Zetterstedt, 1840
Cydia junctistrigana Walsingham, 1900
Cydia leucogrammana Hofmann, 1898
Cydia nigricana (Fabricius, 1794)
Cydia oxytropidis Martini, 1912
Cydia pfeifferi Rebel, 1935
Cydia phalacris Meyrick, 1912
Cydia pomonella (Linnaeus, 1758)
Cydia pyrivora Danilevsky, 1947
Cydia succedana ([Denis & Schiffermüller], 1775)
Diceratura ostrinana Guenée, 1845
Diceratura rhodograpta Djakonov, 1929
Diceratura roseofasciana Mann, 1855
Dichelia alexiana Kennel, 1919
Dichrorampha acuminatana Lienig & Zeller, 1846
Dichrorampha cinerosana Herrich-Schäffer, [1851]
Dichrorampha coniana Obraztsov, 1953
Dichrorampha petiverella (Linnaeus, 1758)
Dichrorampha plumbagana Treitschke, 1830
Dichrorampha proxima Danilevsky, 1948
Dichroramphoides agilana Tengström, 1847
Dichroramphoides gueneeana Obraztsov, 1953
Endothenia ericetana Humphreys & Westwood, 1854
Endothenia gentianeana (Hübner, [1799])
Endothenia illepidana Kennel, 1901
Endothenia lapideana Herrich-Schäffer, [1851]
Endothenia marginana Haworth, [1811]
Endothenia quadrimaculana Haworth, [1811]
Epagoge grotiana (Fabricius, 1781)
Epiblema farfarae T. B. Fletcher, 1938
Epiblema foenella (Linnaeus, 1758)
Epiblema gammana Mann, 1866
Epiblema graphana Treitschke, 1835
Epiblema hepaticana Treitschke, 1835
Epiblema junctana Herrich-Schäffer, [1856]
Epiblema scutulana ([Denis & Schiffermüller], 1775)
Epinotia abbreviana (Fabricius, 1794)
Epinotia brunnichiana (Linnaeus, 1767)
Epinotia cruciana (Linnaeus, 1761)
Epinotia dalmatana Rebel, 1891
Epinotia deruptana Kennel, 1901
Epinotia festivana (Hübner, [1799])
Epinotia kochiana Herrich-Schäffer, [1851]
Epinotia nigricana Herrich-Schäffer, [1851]
Epinotia ramella (Linnaeus, 1758)
Epinotia thapsiana Zeller, 1847
Eriopsela quadrana (Hübner, [1813])
Eucelis nigritana Mann, 1862
Eucosma agnatana Christoph, 1872
Eucosma albidulana Herrich-Schäffer, [1851]
Eucosma cana Haworth, [1811]
Eucosma coagulana Kennel, 1901
Eucosma conformana Mann, 1872
Eucosma directa Meyrick, 1912
Eucosma eremodora Meyrick, 1932
Eucosma gypsatana Kennel, 1921
Eucosma hohenwarthiana ([Denis & Schiffermüller], 1775)
Eucosma sparsana Rebel, 1935
Eucosma umbratana Staudinger, 1879
Eucosma urbana Kennel, 1901
Eudemis profundana ([Denis & Schiffermüller], 1775)
Eugnosta lathoniana (Hübner, [1800])
Eugnosta magnificana Rebel, 1914
Eupoecilia ambiguella (Hübner, 1796)
Eupoecilia angustana (Hübner, [1799])
Falseuncaria ruficiliana Haworth, [1811]
Fulvoclysia arguta Razowski, 1968
Fulvoclysia aulica Razowski, 1968
Fulvoclysia defectana Lederer, 1870
Fulvoclysia dictyodana Staudinger, 1879
Fulvoclysia nerminae Koçak, 1982
Fulvoclysia pallorana Lederer, 1864
Fulvoclysia proxima Razowski, 1970
Fulvoclysia subdolana Kennel, 1901
Grapholita adjunctana Kennel, 1901
Grapholita caecana Schläger, 1847
Grapholita compositella (Fabricius, 1775)
Grapholita coronillana Lienig & Zeller, 1846
Grapholita dorsana (Fabricius, 1775)
Grapholita fissana Fröhlich, 1828
Grapholita gemmiferana Treitschke, 1835
Grapholita jungiella (Linnaeus, 1761)
Grapholita lunulana ([Denis & Schiffermüller], 1775)
Grapholita nebritana Treitschke, 1830
Grapholita orobana Treitschke, 1830
Grapholita pallifrontana Lienig & Zeller, 1846
Grapholita selenana Zeller, 1847
Grapholita selliferana Kennel, 1901
Grapholita sinana Felder, 1874
Gypsonoma aceriana Duponchel, 1843
Gypsonoma dealbana Fröhlich, 1828
Gypsonoma nitidulana Lienig, 1846
Gypsonoma simulantana Staudinger, 1880
Hedya nubiferana Haworth, [1811]
Hedya ochroleucana Fröhlich, 1828
Hedya pruniana (Hübner, [1799])
Hedya salicella (Linnaeus, 1758)
Hedya schreberiana (Linnaeus, 1761)
Hedya sororiana Herrich-Schäffer, [1851]
Hysterophora maculosana Haworth, [1811]
Isotrias hybridana (Hübner, [1817])
Isotrias rectifasciana Haworth, [1811]
Kenneliola amplana (Hübner, [1799])
Kenneliola fagiglandana Zeller, 1841
Kenneliola inquinatana (Hübner, [1799])
Kenneliola molybdana Constant, 1884
Kenneliola splendana (Hübner, [1799])
Lathronympha strigana (Fabricius, 1775)
Lipoptycha grueneriana Herrich-Schäffer, [1851]
Lobesia artemisiana Zeller, 1847
Lobesia bicinctana Duponchel, 1844
Lobesia cinerariae Nolcken, 1882
Lobesia glebifera Meyrick, 1912
Lobesia littoralis Humphreys & Westwood, 1845
Lobesia porrectana Zeller, 1847
Lobesia quaggana Mann, 1855
Lobesia reliquana (Hübner, [1825])
Lobesia vitisana Jacquin, 1788
Lobesiodes euphorbiana Freyer, 1842
Lobesiodes occidentis Falkovitsh, 1970
Loxoterma aurofasciana Haworth, [1811]
Loxoterma lacunana ([Denis & Schiffermüller], 1775)
Loxoterma rivulana Scopoli, 1763
Lozotaenia djakonovi Danilevsky, 1963
Lozotaenia forsteriana (Fabricius, 1781)
Lozotaenia Stephens, 1829
Lozotaeniodes cupressana Duponchel, 1836
Neosphaleroptera nubilana Haworth, [1811]
Notocelia cynosbatella (Linnaeus, 1758)
Notocelia incarnatana Zincken, 1821
Notocelia orientana Caradja, 1916
Notocelia roborana Illiger, 1801
Notocelia suffusana Duponchel, 1843
Notocelia uddmanniana (Linnaeus, 1758)
Olethreutes arcuella (Linnaeus, 1761)
Olindia schumacherana (Fabricius, 1787)
Orthotaenia undulana ([Denis & Schiffermüller], 1775)
Oxypteron impar Staudinger, 1871
Pammene albunginana Guenée, 1845
Pammene amygdalana Duponchel, 1843
Pammene blockiana Herrich-Schäffer, [1851]
Pammene crataegophila Amsel, 1935
Pammene fasciana (Linnaeus, 1761)
Pammene germmana (Hübner, [1799])
Pammene insulana Guenée, 1845
Pammene luedersiana Sorhagen, 1885
Pammene mariana Zerny, 1920
Pammene ochsenheimeriana Lienig & Zeller, 1846
Pammene pontica Obraztsov, 1960
Pammene pullana Kuznetsov, 1986
Pammene regiana Zeller, 1849
Pammene rhediella (Linnaeus, 1761)
Pammene splendidulana Guenée, 1845
Pammene trauniana ([Denis & Schiffermüller], 1775)
Paralipoptycha plumbana Scopoli, 1763
Pelochrista agrestana Treitschke, 1830
Pelochrista arabescana Eversmann, 1844
Pelochrista caecimaculana (Hübner, [1799])
Pelochrista definitana Kennel, 1901
Pelochrista hepatariana Herrich-Schäffer, [1851]
Pelochrista infidana (Hübner, [1824])
Pelochrista medullana Staudinger, 1879
Pelochrista modicana Zeller, 1847
Pelochrista praefractana Kennel, 1901
Pelochrista seriana Kennel, 1901
Phalonidia acutana Kennel, 1913
Phalonidia albipalpana Zeller, 1847
Phalonidia amasiana Ragonot, 1894
Phalonidia contractana Zeller, 1847
Phalonidia permixtana ([Denis & Schiffermüller], 1775)
Phaneta aspidiscana (Hübner, [1817])
Phaneta paetulana Kennel, 1901
Phaneta pauperana Duponchel, 1843
Phaneta tetraplana Möschler, 1866
Phiaris delitana Staudinger, 1879
Phiaris stibiana Guenée, 1845
Phiaris umbrosana Freyer, 1842
Phtheochroa imitana Derra, 1990
Phtheochroa larseni Huemer, 1990
Phtheochroa osthelderi Huemer, 1990
Phtheochroa schreieri Derra, 1990
Prochlidonia ochromixtana Kennel, 1913
Propiromorpha rhodophana Herrich-Schäffer, [1851]
Pseudamelia rogana Guenée, 1845
Pseudeulia asinana (Hübner, [1799])
Pseudococcyx tessulatana Staudinger, 1871
Pseudosciaphila branderiana (Linnaeus, 1758)
Ptycholoma lecheana (Linnaeus, 1758)
Rhopobota myrtillana Humphreys & Westwood, 1845
Rhopobota stagnana ([Denis & Schiffermüller], 1775)
Rhyacionia buoliana ([Denis & Schiffermüller], 1775)
Rhyacionia pinicolana Doubleday, 1850
Selania leplastriana Curtis, 1831
Siclobola micromys Stringer, 1929
Siclobola neglectana Herrich-Schäffer, [1851]
Siclobola pallidana (Fabricius, 1777)
Siclobola semialbana Guenée, 1845
Siclobola unifasciana Duponchel, 1843
Sparganothis pilleriana ([Denis & Schiffermüller], 1775)
Spilonota ocellana ([Denis & Schiffermüller], 1775)
Strophedra nitidana (Fabricius, 1794)
Strophedra weirana Douglas, 1850
Syndemis musculana (Hübner, [1799])
Thiodia anatoliana Kennel, 1916
Thiodia citrana (Hübner, [1799])
Thiodia fessana Mann, 1873
Tortricodes selma Koçak, 1991
Tortrix viridana Linnaeus, 1758
Trachysmia aureopunctana Ragonot, 1894
Trachysmia chalcantha Meyrick, 1912
Trachysmia decipiens Walsingham, 1900
Trachysmia duponchelana Duponchel, 1843
Trachysmia lucentana Kennel, 1899
Trachysmia palpana Ragonot, 1894
Trachysmia procerana Lederer, 1863
Trachysmia purana Guenée, 1846
Trachysmia thiana Staudinger, 1899
Trachysmia unionana Kennel, 1900
Pterophoridae
Adaina microdactyla (Hübner, [1813])
Agdistis adactyla (Hübner, [1819])
Agdistis caradjai Arenberger, 1975
Agdistis frankeniae Zeller, 1847
Agdistis heydeni Zeller, 1852
Agdistis mevlaniella Arenberger, 1972
Agdistis tamaricis Zeller, 1847
Amblyptilia acanthodactyla (Hübner, [1813])
Anacapperia fusca Hofmann, 1898
Anacapperia hellenica Adamczewski, 1951
Calyciphora homoiodactyla Kasy, 1960
Calyciphora nephelodactyla Eversmann, 1844
Calyciphora xanthodactyla Treitschke, 1833
Calyciphora xerodactyla Zeller, 1841
Capperia britanniodactyla Gregson, 1869
Capperia celeusi Frey, 1886
Capperia polonica Adamczewski, 1951
Capperia washbourni Adamczewski, 1951
Cnaemidophorus rhododactylus (Fabricius, 1787)
Emmelina monodactyla (Linnaeus, 1758)
Geina didactyla (Linnaeus, 1758)
Marasmarcha ehrenbergiana Zeller, 1841
Merrifieldia baliodactyla Zeller, 1841
Merrifieldia leucodactyla ([Denis & Schiffermüller], 1775)
Merrifieldia tridactyla (Linnaeus, 1758)
Oidematophorus lithodactylus Treitschke, 1833
Oxyptilus chrysodactylus ([Denis & Schiffermüller], 1775)
Oxyptilus distans Zeller, 1847
Oxyptilus ericetorum Stainton, 1851
Oxyptilus kollari Stainton, 1849
Oxyptilus marginellus Zeller, 1847
Oxyptilus parvidactylus Haworth, [1811]
Oxyptilus pilosellae Zeller, 1841
Oxyptilus propedistans Bigot & Picard, 1988
Paracapperia anatolica Caradja, 1920
Paraplatyptilia metzneri Zeller, 1841
Platyptilia calodactyla ([Denis & Schiffermüller], 1775)
Platyptilia capnodactyla Zeller, 1841
Platyptilia chondrodactyla Caradja, 1920
Platyptilia gonodactyla ([Denis & Schiffermüller], 1775)
Porittia galactodactyla ([Denis & Schiffermüller], 1775)
Procapperia maculata Constant, 1865
Pselnophorus borzhomi Zagulyaev, 1987
Pselnophorus heterodactylus (Müller, 1764)
Pterophorus calcarius Lederer, 1870
Pterophorus caspius Lederer, 1870
Pterophorus ivae Kasy, 1960
Pterophorus malacodactylus Zeller, 1847
Pterophorus pentadactylus (Linnaeus, 1758)
Pterophorus phlomidis Staudinger, 1870
Pterophorus subalternans Lederer, 1869
Stenoptilia bipunctidactyla Scopoli, 1763
Stenoptilia mannii Zeller, 1852
Stenoptilia pterodactyla (Linnaeus, 1761)
Stenoptilia stigmatodactyla Zeller, 1852
Stenoptilia zophodactyla Duponchel, 1838
Tabulaephorus parthicus Lederer, 1870
Tabulaephorus punctinervis Constant, 1885
Trichoptilus siceliota Zeller, 1847
Wheeleria spilodactyla Curtis, 1827
Carposinidae
Carposina berberidella Herrich-Schäffer, [1854]
Carposina scirrhosella Herrich-Schäffer, [1854]
Pyralidae
Abrephia compositella Treitschke, 1835
Acentria nivea Olivier, 1791
Achroia grisella (Fabricius, 1794)
Achyra nudalis (Hübner, 1796)
Acigona cicatricella (Hübner, [1824])
Acrobasis atrisquamella Ragonot, 1887
Acrobasis bithynella Zeller, 1848
Acrobasis centunculella Mann, 1859
Acrobasis consociella (Hübner, [1813])
Acrobasis glaucella Staudinger, 1859
Acrobasis obliqua Zeller, 1847
Acrobasis obtusella (Hübner, 1796)
Actenia beatalis Kalchberg, 1897
Actenia brunnealis Treitschke, 1829
Actenia honestalis Treitschke, 1829
Aeschremon disparalis Herrich-Schäffer, [1855]
Aglossa asiatica Erschoff, 1872
Aglossa caprealis (Hübner, [1809])
Aglossa pinguinalis (Linnaeus, 1758)
Agriphila asiatica Ganev & Hacker, 1984
Agriphila beieri Błeszyński, 1955
Agriphila bleszynskiella Amsel, 1961
Agriphila brionella Zerny, 1914
Agriphila deliella (Hübner, [1813])
Agriphila geniculea Haworth, [1811]
Agriphila inquinatella ([Denis & Schiffermüller], 1775)
Agriphila latistria Haworth, [1811]
Agriphila paleatella Zeller, 1847
Agriphila straminella ([Denis & Schiffermüller], 1775)
Agriphila tersella Lederer, 1855
Agriphila tolli Błeszyński, 1952
Agriphila trabeatella Herrich-Schäffer, [1848]
Agriphila tristella ([Denis & Schiffermüller], 1775)
Agriphiloides longipalpellus Błeszyński, 1965
Agrotera nemoralis Scopoli, 1763
Alisa amseli Ganev & Hacker, 1984
Alophia combustella Herrich-Schäffer, [1855]
Amaurophanes stigmosalis Herrich-Schäffer, [1848]
Anagasta cypriusella Roesler, 1965
Anagasta kuehniella Zeller, 1879
Anagasta welseriella Zeller, 1848
Anania funebris Ström, 1768
Anania verbascata (Fabricius, 1787)
Anarpia incertalis Duponchel, 1833
Ancylodes pallens Ragonot, 1887
Ancylodes straminella Christoph, 1877
Ancylolomia palpella ([Denis & Schiffermüller], 1775)
Ancylolomia pectinatella Zeller, 1847
Ancylolomia tentaculella (Hübner, 1796)
Ancylosis bichordella Ragonot, 1887
Ancylosis cinnamomella Duponchel, 1836
Ancylosis dumetella Ragonot, 1887
Ancylosis iranella Ragonot, 1887
Ancylosis maculifera Staudinger, 1870
Ancylosis ochricostella Ragonot, 1887
Ancylosis sareptella Herrich-Schäffer, 1860
Ancylosis turaniella Ragonot, 1887
Anerastia ablutella Zeller, 1839
Anerastia lotella (Hübner, [1813])
Angustalius malacellus Duponchel, 1836
Anhomoeosoma nimbellum Duponchel, 1836
Antigastra catalaunalis Duponchel, 1833
Aphomia sociella (Linnaeus, 1758)
Aporodes floralis (Hübner, [1809])
Aporodes nepticulalis Hofmann, [1898]
Aproceratia albunculella Staudinger, 1879
Arimania komaroffi Ragonot, 1888
Arsissa divaricella Ragonot, 1887
Arsissa miridella Ragonot, 1893
Arsissa ramosella Herrich-Schäffer, [1855]
Asalebria venustella Ragonot, 1887
Asarta ciliciella Staudinger, 1879
Assara turciella Roesler, 1973
Atralata albofascialis Treitschke, 1829
Bazaria gilvella Ragonot, 1887
Bazaria leuchochrella Herrich-Schäffer, [1855]
Bazaria turensis Ragonot, 1887
Bradyrrhoa cantenerella Duponchel, 1836
Bradyrrhoa confiniella Zeller, 1848
Bradyrrhoa gilveolella Treitschke, 1833
Bradyrrhoa mesobaphella Ragonot, 1888
Bradyrrhoa trapezella Duponchel, 1836
Cabotia lacteicostella Ragonot, 1887
Cabotia oblitella Zeller, 1848
Cadra abstersella Zeller, 1847
Cadra calidella Guenée, 1845
Cadra cautella Walker, 1863
Cadra delattinella Roesler, 1965
Cadra figulilella Gregson, 1871
Cadra furcatella Herrich-Schäffer, [1849]
Calamotropha hierichuntica Zeller, 1867
Calamotropha paludella (Hübner, [1824])
Cataclysta lemnata (Linnaeus, 1758)
Cataonia erubescens Christoph, 1877
Cataonia mauritanica Amsel, 1953
Catastia acraspedella Staudinger, 1879
Catastia marginea ([Denis & Schiffermüller], 1775)
Catoptria ciliciella Rebel, 1903
Catoptria colchicella Lederer, 1870
Catoptria confusella Staudinger, 1881
Catoptria dimorphella Staudinger, 1881
Catoptria falsella ([Denis & Schiffermüller], 1775)
Catoptria hilarella Caradja, 1925
Catoptria incertella Herrich-Schäffer, [1852]
Catoptria laevigatella Lederer, 1870
Catoptria lithargyrella (Hübner, 1796)
Catoptria mytilella (Hübner, [1805])
Catoptria pinella (Linnaeus, 1758)
Catoptria verella Zincken, 1817
Catoptria wolfi Ganev & Hacker, 1984
Chilo luteellus Motschulsky, 1866
Chilo phragmitellus (Hübner, [1810])
Chilo pulverosellus Ragonot, 1895
Chrysocrambus craterellus Scopoli, 1763
Chrysocrambus linetellus (Fabricius, 1781)
Chrysocrambus syriellus Zerny, 1934
Chrysoteuchia culmella (Linnaeus, 1758)
Conobathra celticola Staudinger, 1879
Conobathra tumidana ([Denis & Schiffermüller], 1775)
Corcyra cephalonica Stainton, 1866
Crambus lathoniellus Zincken, 1817
Crambus monochromellus Herrich-Schäffer, [1852]
Crambus pascuellus (Linnaeus, 1758)
Crambus perlellus Scopoli, 1763
Crambus pratellus (Linnaeus, 1758)
Crambus uliginosellus Zeller, 1850
Cryptoblabes gnidiella Millière, 1867
Cybalomia lutosalis Mann, 1862
Cybalomia pentadalis Lederer, 1855
Cynaeda dentalis ([Denis & Schiffermüller], 1775)
Cynaeda gigantea Staudinger, 1880
Dattinia colchicalis Herrich-Schäffer, [1855]
Dattinia infulalis Lederer, 1858
Dattinia variabilis Zerny, 1930
Denticera divisella Duponchel, 1842
Diasemia litterata Scopoli, 1763
Diasemiopsis ramburialis Duponchel, 1834
Dioryctria abietella (Fabricius, 1787)
Dioryctria mendacella Staudinger, 1859
Dioryctria sylvestrella Ratzeburg, 1840
Dolicharthia punctalis ([Denis & Schiffermüller], 1775)
Donacaula mucronella ([Denis & Schiffermüller], 1775)
Duponchelia fovealis Zeller, 1847
Ebulea crocealis (Hübner, 1796)
Ebulea testacealis Zeller, 1847
Ecbatania holopyrrhella Ragonot, 1888
Eccopisa effractella Zeller, 1848
Ectomyelois ceratoniae Zeller, 1839
Elegia fallax Staudinger, 1881
Elophila affinialis Guenée, 1854
Elophila hederalis Amsel, 1935
Elophila nymphaeata (Linnaeus, 1758)
Ematheudes pseudopunctella Ragonot, 1888
Ematheudes punctella Treitschke, 1833
Ematheudes varicella Ragonot, 1887
Ematheudes vitellinella Ragonot, 1887
Emprepes vestalis Hampson, 1900
Endotricha flammealis ([Denis & Schiffermüller], 1775)
Epactoctena octogenalis Lederer, 1863
Epascestria peltaloides Rebel, 1932
Epascestria pustulalis (Hübner, [1823])
Ephelis cruentalis Geyer, 1832
Ephestia disparella Ragonot, 1901
Ephestia elutella (Hübner, 1796)
Epichalcia amasiella Roesler, 1969
Epidauria discella Hampson, 1901
Epidauria phoeniciella Ragonot, 1895
Epidauria strigosa Staudinger, 1879
Epidauria transversariella Zeller, 1848
Epiepischnia pseudolydella Amsel, 1953
Epilydia liturosella Erschoff, 1874
Epimetasia vestalis Ragonot, 1894
Epischidia caesariella Ragonot, 1901
Epischnia christophori Ragonot, 1887
Epischnia cretaciella Mann, 1869
Epischnia leucoloma Herrich-Schäffer, 1849
Epischnia leucomixtella Ragonot, 1887
Epischnia muscidella Ragonot, 1887
Epischnia prodromella (Hübner, [1799])
Epischnia stenopterella Rebel, 1910
Episcythrastis tetricella ([Denis & Schiffermüller], 1775)
Etiella zinckenella Treitschke, 1832
Eucarphia vinetella (Fabricius, 1787)
Euchromius anapiellus Zeller, 1847
Euchromius bellus (Hübner, 1796)
Euchromius bleszynskiellus Popescu-Gorj, 1964
Euchromius cochlearellus Amsel, 1949
Euchromius gratiosellus Caradja, 1910
Euchromius jaxartellus Erschoff, 1874
Euchromius keredjellus Amsel, 1949
Euchromius ocelleus Haworth, [1811]
Euchromius pulverosus Christoph, 1887
Euchromius rayatellus Amsel, 1949
Euchromius siuxellus Ganev & Hacker, 1986
Euchromius superbellus Zeller, 1849
Euchromius vinculellus Zeller, 1847
Euclasta splendidalis Herrich-Schäffer, [1848]
Eudonia angustea Curtis, 1827
Eudonia crataegella (Hübner, 1796)
Eudonia lineola Curtis, 1827
Eudonia mercurella (Linnaeus, 1758)
Eudonia obsoleta Staudinger, 1879
Eudonia polyphaealis Hampson, 1907
Eudonia truncicolella Stainton, 1849
Eurhobasis lutescentella Caradja, 1916
Eurhodope incompta Zeller, 1847
Eurhodope monogrammos Zeller, 1867
Eurhodope rosella Scopoli, 1763
Eurhodope sielmannella Roesler, 1969
Eurrhypara hortulata (Linnaeus, 1758)
Eurrhypis cacuminalis Eversmann, 1843
Eurrhypis pollinalis ([Denis & Schiffermüller], 1775)
Eurrhypis sartalis (Hübner, [1813])
Euzophera bigella Zeller, 1848
Euzophera cinerosella Zeller, 1839
Euzophera flagella Lederer, 1869
Euzophera imperfectella Ragonot, 1895
Euzophera luculentella Ragonot, 1888
Euzophera lunulella Costa, 1836
Euzophera osseatella Treitschke, 1832
Euzophera pinguis Haworth, [1811]
Euzophera pulchella Ragonot, 1887
Euzophera rubricetella Herrich-Schäffer, [1856]
Euzophera umbrosella Staudinger, 1879
Euzopherodes charlottae Rebel, 1914
Euzopherodes lutisignella Mann, 1869
Euzopherodes vapidella Mann, 1857
Evergestis aenealis ([Denis & Schiffermüller], 1775)
Evergestis blandalis Guenée, 1854
Evergestis boursini Amsel, 1938
Evergestis caesialis Herrich-Schäffer, [1855]
Evergestis desertalis (Hübner, [1813])
Evergestis extimalis Scopoli, 1763
Evergestis forficalis (Linnaeus, 1758)
Evergestis frumentalis (Linnaeus, 1761)
Evergestis infirmalis Staudinger, 1870
Evergestis isatidalis Duponchel, 1833
Evergestis limbata (Linnaeus, 1767)
Evergestis mundalis Guenée, 1854
Evergestis nomadalis Lederer, 1871
Evergestis pallidata (Hufnagel, 1767)
Evergestis politalis ([Denis & Schiffermüller], 1775)
Evergestis serratalis Staudinger, 1870
Evergestis sophialis (Fabricius, 1787)
Evergestis subfuscalis Staudinger, 1870
Evergestis umbrosalis F.R., [1842]
Exophora exaspersata Staudinger, 1879
Exophora florella Mann, 1862
Faveria dionysia Zeller, 1846
Galleria mellonella (Linnaeus, 1758)
Gesneria centuriella ([Denis & Schiffermüller], 1775)
Gnathogutta circumdatella Lederer, 1858
Gnathogutta luticornella Ragonot, 1887
Gnathogutta osseella Ragonot, 1887
Gnathogutta pluripunctella Ragonot, 1887
Gnathogutta pumicosa Lederer, 1855
Gnathogutta umbratella Treitschke, 1835
Gymnancyla canella ([Denis & Schiffermüller], 1775)
Hannemanneia tacapella Ragonot, 1887
Harpadispar diffusalis Guenée, 1854
Heliothela wulfeniana Scopoli, 1763
Hellula undalis (Fabricius, 1781)
Heosphora ramulosella Ragonot, 1895
Heterographis albicosta Staudinger, 1870
Heterographis candidatella Lederer, 1858
Heterographis cinerella Stainton, 1859
Heterographis faustinella Zeller, 1867
Heterographis geminella Amsel, 1961
Heterographis gracilella Ragonot, 1887
Heterographis harmoniella Ragonot, 1887
Heterographis hellenica Staudinger, 1870
Heterographis muliebris Meyrick, 1937
Heterographis nigripunctella Staudinger, 1879
Heterographis nubeculella Ragonot, 1887
Heterographis pallida Staudinger, 1870
Heterographis pectinatella Ragonot, 1887
Heterographis pyrethrella Herrich-Schäffer, 1860
Heterographis rhodochrella Herrich-Schäffer, 1852
Heterographis samaritanella Zeller, 1867
Heterographis xylinella Staudinger, 1870
Homoeosoma achroeellum Ragonot, 1887
Homoeosoma calcellum Ragonot, 1887
Homoeosoma inustellum Ragonot, 1884
Homoeosoma sinuellum (Fabricius, 1794)
Homoeosoma subalbatellum Mann, 1864
Hydriris ornatalis Duponchel, 1834
Hymenia fascialis Cramer, [1782]
Hyperlais dulcinalis Treitschke, 1835
Hyperlais nemausalis Duponchel, 1834
Hyperlais siccalis Guenée, 1854
Hypochalcia ahenella ([Denis & Schiffermüller], 1775)
Hypochalcia fasciatella Staudinger, 1881
Hypochalcia germanella Zincken, 1818
Hypotia corticalis ([Denis & Schiffermüller], 1775)
Hypsopygia costalis (Fabricius, 1775)
Hypsotropa ichorella Lederer, 1855
Hypsotropa limbella Zeller, 1848
Hypsotropa paucipunctella Ragonot, 1895
Hypsotropa syriacella Ragonot, 1888
Hypsotropa vulneratella Zeller, 1847
Isauria dilucidella Duponchel, 1836
Keradere lepidella Ragonot, 1887
Keradere noctivaga Staudinger, 1879
Lambaesia fumosella Ragonot, 1887
Lambaesia pistrinariella Ragonot, 1887
Lambaesia straminella Zerny, 1914
Lamoria anella ([Denis & Schiffermüller], 1775)
Lamoria ruficostella Ragonot, 1888
Loxostege aeruginalis (Hübner, 1796)
Loxostege comptalis Freyer, [1848]
Loxostege deliblatica Szent-Ivany & Uhrik-Meszaros, 1942
Loxostege flavivenalis Hampson, 1913
Loxostege mucosalis Herrich-Schäffer, [1848]
Loxostege straminealis Hampson, 1900
Loxostege turbidalis Treitschke, 1829
Loxostege wagneri Zerny, 1929
Mardinia ferrealis Hampson, 1900
Margaritia sticticalis (Linnaeus, 1761)
Mecyna amasialis Staudinger, 1880
Mecyna asinalis (Hübner, [1819])
Mecyna flavalis ([Denis & Schiffermüller], 1775)
Mecyna lutealis Duponchel, 1833
Mecyna lutulentalis Lederer, 1858
Mecyna pontica Staudinger, 1880
Mecyna trinalis ([Denis & Schiffermüller], 1775)
Megasis libanoticella Zerny, 1934
Megasis mimeticella Staudinger, 1879
Megasis rippertella Zeller, 1839
Melissoblaptes unicolor Staudinger, 1879
Melissoblaptes zelleri de Joannis, 1932
Merulempista cingillella Zeller, 1846
Mesocrambus candidellus Herrich-Schäffer, [1848]
Metacrambus carectellus Zeller, 1847
Metallosticha argyrogrammos Zeller, 1847
Metallostichodes nigrocyanella Constant, 1865
Metallostichodes vinaceella Ragonot, 1895
Metasia carnealis Treitschke, 1829
Metasia inustalis Ragonot, 1894
Metasia mendicalis Staudinger, 1880
Metasia ophialis Treitschke, 1829
Metasia rosealis Ragonot, 1895
Metasia subtilialis Caradja, 1916
Metasia suppandalis (Hübner, [1823])
Metasia virginalis Ragonot, 1894
Microstega hyalinalis (Hübner, 1796)
Microstega pandalis (Hübner, [1825])
Microstega praepetalis Lederer, 1869
Mutuuraia terrealis Treitschke, 1829
Myelois circumvoluta Geoffroy, 1785
Myelois cribratella Zeller, 1847
Myelois fuscicostella Mann, 1861
Myelois multiforella Ragonot, 1893
Myelois quadripunctella Zerny, 1914
Myelopsis tabidella Mann, 1864
Myrlaea albistrigata Staudinger, 1881
Myrlaea epischniella Staudinger, 1879
Nascia cilialis (Hübner, 1796)
Neocrambus wolfschlaegeri Schawerda, 1937
Nephopteryx alpigenella Duponchel, 1836
Nephopteryx gregella Eversmann, 1844
Nephopteryx hostilis Stephens, 1834
Nephopteryx impariella Ragonot, 1887
Nephopteryx insignella Mann, 1862
Nephopteryx melanotaeniella Ragonot, 1888
Nephopteryx rhenella Zincken, 1818
Nephopteryx serraticornella Zeller, 1839
Noctuelia escherichi Hofmann, [1898]
Noctuelia mardinalis Staudinger, 1892
Noctuelia superba Freyer, [1844]
Noctuelia vespertalis Herrich-Schäffer, [1855]
Nomophila noctuella ([Denis & Schiffermüller], 1775)
Nyctegretis achatinella (Hübner, [1824])
Nyctegretis ruminella Harpe, 1860
Nyctegretis triangulella Ragonot, 1901
Nymphula stagnata Donovan, 1806
Nymphula stratiotata (Linnaeus, 1758)
Oncocera combustella Herrich-Schäffer, [1852]
Opsibotys fuscalis ([Denis & Schiffermüller], 1775)
Orenaia alborivulalis Eversmann, 1844
Orthopygia almanalis Rebel, 1917
Orthopygia fulvocilialis Duponchel, 1832
Orthopygia glaucinalis (Linnaeus, 1758)
Orthopygia incarnatalis Zeller, 1847
Orthopygia rubidalis ([Denis & Schiffermüller], 1775)
Ostrinia nubilalis (Hübner, 1796)
Palmitia massilialis Duponchel, 1832
Palpita unionalis (Hübner, 1796)
Panstegia aerealis (Hübner, 1796)
Panstegia limbopunctalis Herrich-Schäffer, [1849]
Panstegia meciti Koçak, 1987
Paracorsia repandalis ([Denis & Schiffermüller], 1775)
Paralipsa gularis Zeller, 1877
Paramaxillaria meretrix Staudinger, 1879
Parastenia bruguieralis Duponchel, 1833
Paratalanta cultralis Staudinger, 1867
Pareromene euchromiella Ragonot, 1895
Pediasia aridella Thunberg, 1788
Pediasia aridelloides Błeszyński, 1965
Pediasia contaminella (Hübner, 1796)
Pediasia desertella Lederer, 1855
Pediasia fascelinella (Hübner, [1813])
Pediasia luteella ([Denis & Schiffermüller], 1775)
Pediasia matricella Treitschke, 1832
Pediasia persella Toll, 1947
Pediasia phrygius Fazekas, 1990
Pempelia albariella Zeller, 1846
Pempelia amoenella Zeller, 1848
Pempelia argillaceella Osthelder, 1935
Pempelia brephiella Staudinger, 1879
Pempelia cirtensis Ragonot, 1890
Pempelia formosa Haworth, [1811]
Pempelia johannella (Caradja, 1916)
Pempelia obductella Zeller, 1839
Pempelia obscurella Osthelder, 1935
Pempelia palumbella ([Denis & Schiffermüller], 1775)
Pempelia placidella Zerny, 1929
Pempelia romanoffiella Ragonot, 1887
Pempelia serratella Ragonot, 1893
Pempelia sordida Staudinger, 1879
Pempelia thymiella Zeller, 1846
Pempeliella ornatella ([Denis & Schiffermüller], 1775)
Pempeliella sororiella Zeller, 1839
Phlyctaenia coronata (Hufnagel, 1767)
Phlyctaenia perlucidalis (Hübner, [1809])
Phlyctaenomorpha sinuosalis Cerf, 1910
Phycita coronatella Guenée, 1845
Phycita kurdistanella Amsel, 1953
Phycita macrodontella Ragonot, 1887
Phycita meliella Mann, 1864
Phycita metzneri Zeller, 1846
Phycita pedisignella Ragonot, 1887
Phycita poteriella Zeller, 1846
Phycita strigata Staudinger, 1879
Phycitodes albatella Ragonot, 1887
Phycitodes binaevella (Hübner, [1813])
Phycitodes carlinella Heinemann, 1865
Phycitodes inquinatella Ragonot, 1887
Phycitodes lacteella Rothschild, 1915
Phycitodes nigrilimbella Ragonot, 1887
Phycitodes saxicola Vaughan, 1870
Pima boisduvaliella Guenée, 1845
Platytes cerussella ([Denis & Schiffermüller], 1775)
Pleuroptya balteata (Fabricius, 1798)
Pleuroptya ruralis Scopoli, 1763
Plodia interpunctella (Hübner, [1813])
Pollichia semirubella Scopoli, 1763
Polyocha cremoricosta Ragonot, 1895
Polyocha venosa Zeller, 1847
Praeepischnia lydella Lederer, 1865
Pristocerella solskyi Christoph, 1877
Prochoristis rupicapralis Lederer, 1855
Prophtasia platycerella Ragonot, 1887
Psammotis pulveralis (Hübner, 1796)
Psorosa albunculella Ragonot, 1901
Psorosa dahliella Treitschke, 1832
Psorosa maraschella Caradja, 1910
Psorosa nucleolella Möschler, 1866
Psorosa ochrifasciella Ragonot, 1887
Pterothrixidia ancyrensis Amsel, 1953
Pterothrixidia contectella Zeller, 1848
Pterothrixidia fimbriatella Zeller, 1848
Pterothrixidia impurella Duponchel, 1836
Pterothrixidia orientella Ragonot, 1893
Pterothrixidia rufella Duponchel, 1836
Pterothrixidia tauricella Wocke, 1871
Pyla fusca Haworth, [1811]
Pyralis farinalis Linnaeus, 1758
Pyralis imperialis Caradja, 1916
Pyralis perversalis Herrich-Schäffer, 1849
Pyralis regalis ([Denis & Schiffermüller], 1775)
Pyrasia gutturalis Staudinger, 1880
Pyrausta alborivularis Eversmann, 1843
Pyrausta amatalis Rebel, 1903
Pyrausta aurata Scopoli, 1763
Pyrausta biternalis Mann, 1862
Pyrausta castalis Treitschke, 1829
Pyrausta cingulata (Linnaeus, 1758)
Pyrausta cuprinalis Ragonot, 1895
Pyrausta despicata Scopoli, 1763
Pyrausta falcatalis Guenée, 1854
Pyrausta nigrata Scopoli, 1763
Pyrausta obfuscata Scopoli, 1763
Pyrausta pachyceralis Hampson, 1900
Pyrausta pauperalis Staudinger, 1880
Pyrausta pavidalis Zerny, 1935
Pyrausta purpuralis (Linnaeus, 1758)
Pyrausta sanguinalis (Linnaeus, 1767)
Pyrausta trimaculalis Staudinger, 1867
Pyrausta virginalis Duponchel, 1833
Pyrausta zeitunalis Caradja, 1916
Saluria chehirella Zerny, 1929
Saluria maculivittella Ragonot, 1887
Schoenobius alpherakyi Staudinger, 1874
Schoenobius forficellus Thunberg, 1794
Schoenobius gigantellus ([Denis & Schiffermüller], 1775)
Schoenobius niloticus Zeller, 1867
Scirpophaga praelata Scopoli, 1763
Sclerocona acutellus Eversmann, 1842
Scoparia ambigualis Treitschke, 1829
Scoparia anatolica Caradja, 1917
Scoparia basistrigalis Knaggs, 1866
Scoparia ingratella Zeller, 1846
Scoparia luteolalis Scopoli, 1772
Scoparia perplexella Zeller, 1839
Scoparia pyralea Haworth, [1811]
Sefidia clasperella Asselbergs, 1994
Selagia dissimilella Ragonot, 1887
Selagia spadicella (Hübner, 1796)
Selagia subochrella (Herrich-Schäffer, 1849)
Sitochroa concoloralis Lederer, 1857
Sitochroa palealis ([Denis & Schiffermüller], 1775)
Sitochroa verticalis (Linnaeus, 1758)
Spermatophthora hornigii Lederer, 1852
Staudingeria deserticola Staudinger, 1870
Staudingeria morbosella Staudinger, 1879
Stemmatophora caesarealis Ragonot, 1891
Stemmatophora combustalis F.R., [1842]
Stemmatophora subustalis Lederer, 1853
Stiphrometasia sancta Hampson, 1900
Sultania lophotalis Hampson, 1900
Surattha margherita Błeszyński, 1965
Susia uberalis Swinhoe, 1884
Synaphe armenialis Lederer, 1870
Synaphe asiatica Obraztsov, 1952
Synaphe berytalis Ragonot, 1888
Synaphe bombycalis ([Denis & Schiffermüller], 1775)
Synaphe connectalis (Hübner, 1796)
Synaphe consecretalis Lederer, 1855
Synaphe moldavica (Esper, [1789])
Synaphe morbidalis Guenée, 1854
Synaphe punctalis (Fabricius, 1775)
Synaphe syriaca Rebel, 1903
Synaphe uxorialis Lederer, 1858
Synclera traducalis Zeller, 1852
Synoria antiquella Herrich-Schäffer, [1855]
Syrianarpia osthelderi Leraut, 1982
Talis quercella ([Denis & Schiffermüller], 1775)
Talis renatae Ganev & Hacker, 1984
Tegostoma baphialis Lederer, 1868
Tegostoma comparalis (Hübner, 1796)
Tegostoma perlepidalis Guenée, 1854
Tegostoma ramalis (Hübner, 1796)
Therapne obsoletalis Mann, 1864
Thisanotia chrysonuchella Scopoli, 1763
Thopeutis galleriellus Ragonot, 1892
Thyridiphora furia Swinhoe, 1884
Titanio normalis (Hübner, 1796)
Titanio sericatalis Herrich-Schäffer, [1848]
Titanio venustalis Lederer, 1855
Trachycera advenella Zincken, 1818
Trachycera dulcella Zeller, 1848
Trachycera legatea Haworth, [1811]
Trachycera marmorea Haworth, [1811]
Trachycera niveicinctella Ragonot, 1887
Trachycera suavella Zincken, 1818
Tragonitis cristella (Hübner, 1796)
Tretopteryx pertusalis Geyer, [1832]
Udea albescenstalis Hampson, 1900
Udea dispunctalis Guenée, 1854
Udea ferrugalis (Hübner, 1796)
Udea fimbriatalis Duponchel, 1834
Udea fulvalis (Hübner, [1809])
Udea institalis (Hübner, [1819])
Udea languidalis Eversmann, 1842
Udea lutealis (Hübner, [1809])
Udea numeralis (Hübner, 1796)
Udea olivalis ([Denis & Schiffermüller], 1775)
Udea rhododendronalis Duponchel, 1834
Udea silvalis de Joannis, 1891
Udea vestalis Hampson, 1900
Ulotricha egregialis Herrich-Schäffer, 1838
Uresiphita limbalis ([Denis & Schiffermüller], 1775)
Xanthocrambus saxonellus Zincken, 1821
Zophodia grossulariella Zincken, 1818
Zophodiodes leucocostella Ragonot, 1887
Thyridae
Thyris fenestrella Scopoli, 1763
Lasiocampidae
Chilena sordida Erschoff, 1874
Chondrostega osthelderi Püngeler, 1925
Chondrostega pastrana Lederer, 1858
Dendrolimus pini (Linnaeus, 1758)
Eriogaster catax (Linnaeus, 1758)
Eriogaster czipkai Lajonquiere, 1975
Eriogaster lanestris (Linnaeus, 1758)
Eriogaster nippei Freina, 1988
Eriogaster pfeifferi Daniel, 1932
Eriogaster rimicola Schrank, 1802
Euthrix potatoria (Linnaeus, 1758)
Gastropacha populifolia (Esper, [1783])
Gastropacha quercifolia (Linnaeus, 1758)
Lasiocampa eversmanni Kindermann, 1843
Lasiocampa grandis Rogenhofer, 1891
Lasiocampa quercus (Linnaeus, 1758)
Lasiocampa trifolii ([Denis & Schiffermüller], 1775)
Macrothylacia rubi (Linnaeus, 1758)
Malacosoma alpicolum Staudinger, 1870
Malacosoma castrensis (Linnaeus, 1758)
Malacosoma franconicum ([Denis & Schiffermüller], 1775)
Malacosoma neustrium (Linnaeus, 1758)
Malacosoma paralellum Staudinger, 1887
Odonestis pruni (Linnaeus, 1758)
Pachypasa otus Drury, 1773
Phyllodesma tremulifolia (Hübner, [1810])
Poecilocampa alpina Frey & Wullschlegel, 1874
Sena proxima Staudinger, 1894
Trichiura crataegi (Linnaeus, 1758)
Trichiura verenae Witt, 1981
Bombycidae
Bombyx mori (Linnaeus, 1758)
Lemoniidae
Lemonia balcanica Herrich-Schäffer, [1843]
Lemonia ballioni Christoph, 1888
Lemonia dumi (Linnaeus, 1761)
Lemonia pauli Staudinger, 1894
Lemonia pia Püngeler, 1902
Lemonia syriensis Daniel, 1953
Endromidae
Endromis versicolora (Linnaeus, 1758)
Saturniidae
Neoris huttoni Moore, 1862
Pavonia cephalariae Romanoff, 1885
Pavonia pavonia (Linnaeus, 1758)
Pavonia spini ([Denis & Schiffermüller], 1775)
Perisomena caecigena Kupido, 1825
Saturnia pyri ([Denis & Schiffermüller], 1775)
Brahmaeidae
Brahmaea ledereri Rogenhofer, 1873
Geometridae
Abraxas grossulariatus (Linnaeus, 1758)
Agriopis ankeraria Staudinger, 1861
Agriopis aurantiaria (Hübner, [1799])
Agriopis brumaria Borkhausen, 1794
Agriopis marginaria (Fabricius, 1777)
Agriopis vittaria Sulzer, 1776
Alcis repandatus (Linnaeus, 1758)
Aleucis distinctata Herrich-Schäffer, [1839]
Aleucis mimetes Wehrli, 1932
Aleucis orientalis Staudinger, 1892
Alsophila ligustriaria Lang, 1789
Amorphogynia necessaria Zeller, 1849
Angerona prunaria (Linnaeus, 1758)
Anticlea badiata Lang, 1789
Anticlea derivata ([Denis & Schiffermüller], 1775)
Antonechloris smaragdaria (Fabricius, 1787)
Apeira syringaria (Linnaeus, 1758)
Aplasta ononaria Fuessly, 1783
Aplocera annexata Freyer, [1830]
Aplocera columbata Metzner, 1845
Aplocera dervenaria Mentzer, 1981
Aplocera efformata Guenée, 1857
Aplocera fraternata Herrich-Schäffer, 1861
Aplocera fraudulentata Herrich-Schäffer, 1861
Aplocera guneyi Riemis, 1992
Aplocera mundata Staudinger, 1892
Aplocera mundulata Guenée, 1857
Aplocera musculata Staudinger, 1892
Aplocera numidaria Herrich-Schäffer, [1852]
Aplocera obsitaria Lederer, 1853
Aplocera opificata Lederer, 1870
Aplocera plagiata (Linnaeus, 1758)
Aplocera simpliciata Treitschke, 1835
Aplocera uniformata Urbahn, 1971
Apocheima hispidarium ([Denis & Schiffermüller], 1775)
Apochima flabellaria Heeger, 1838
Apochima rjabovi Wehrli, 1936
Archiearis notha (Hübner, [1823])
Artiora evonymaria (Hübner, [1799])
Ascotis turcaria (Fabricius, 1775)
Asovia maeoticaria Alphéraky, 1876
Aspitates ochrearia Rossi, 1794
Aspitates quadripunctata Goeze, 1781
Asthena albulata (Hufnagel, 1767)
Biston achyrus Wehrli, 1936
Biston betularius (Linnaeus, 1758)
Biston stratarius (Hufnagel, 1767)
Boarmia roboraria (Fabricius, 1787)
Boarmia viertlii Bohatsch, 1883
Bupalus piniarius (Linnaeus, 1758)
Cabera pusaria (Linnaeus, 1758)
Calodyscia sicanaria Oberthür, 1923
Calospilos pantaria (Linnaeus, 1767)
Calospilos sylvata Scopoli, 1763
Campaea honoraria ([Denis & Schiffermüller], 1775)
Campaea margaritata (Linnaeus, 1767)
Camptogramma bilineata (Linnaeus, 1758)
Camptogramma grisescens Staudinger, 1892
Carsia lythoxylata (Hübner, [1799])
Casilda antophilaria (Hübner, [1813])
Cataclysme riguata (Hübner, [1823])
Catarhoe cuculata (Hufnagel, 1767)
Catarhoe cupreata Herrich-Schäffer, 1839
Catarhoe permixtaria Guenée, 1857
Catarhoe putridaria Herrich-Schäffer, [1852]
Catarhoe rubidata ([Denis & Schiffermüller], 1775)
Chemerina caliginearia Rambur, 1833
Chesias korbi Bohatsch, 1909
Chesias rufata (Fabricius, 1775)
Chesias sureyata Rebel, 1931
Chlorissa asphaleia Wiltshire, 1966
Chlorissa pretiosaria Staudinger, 1877
Chlorissa pulmentaria Guenée, 1857
Chlorissa viridata (Linnaeus, 1758)
Chloroclysta miata (Linnaeus, 1758)
Chloroclysta siterata (Hufnagel, 1767)
Chloroclysta truncata (Hufnagel, 1767)
Chloroclystis chloerata Mabille, 1870
Chloroclystis rectangulata (Linnaeus, 1758)
Chloroclystis v-ata Haworth, [1809]
Chrysocraspeda charites Oberthür, 1916
Cidaria fulvata Forster, 1771
Cinglis humifusaria Eversmann, 1837
Cleora cinctaria ([Denis & Schiffermüller], 1775)
Cleorodes lichenarius (Hufnagel, 1767)
Cleta filacearia Herrich-Schäffer, [1847]
Cleta perpusillaria Eversmann, 1847
Cleta ramosaria Villers, 1789
Cnestrognophos anthina Wehrli, 1953
Cnestrognophos libanoticus Wehrli, 1931
Cnestrognophos mutilatus Staudinger, 1879
Colostygia olivata ([Denis & Schiffermüller], 1775)
Colostygia pectinataria Knoch, 1781
Colostygia schneideraria Lederer, 1855
Colotois pennaria (Linnaeus, 1761)
Comibaena bajularia ([Denis & Schiffermüller], 1775)
Comibaena neriaria Herrich-Schäffer, [1852]
Cosmorhoe ocellata (Linnaeus, 1758)
Costaconvexa polygrammata Borkhausen, 1794
Crocallis elinguaria (Linnaeus, 1758)
Crocallis inexpectata Warnecke, 1940
Crocallis tusciaria Borkhausen, 1793
Culpinia prouti Thierry-Mieg, 1913
Cyclophora albiocellaria (Hübner, 1822)
Cyclophora annulata (Schulze, 1775)
Cyclophora linearia (Hübner, [1799])
Cyclophora porata (Linnaeus, 1767)
Cyclophora punctaria (Linnaeus, 1758)
Cyclophora puppillaria (Hübner, [1799])
Cyclophora quercimontanaria Bastelberger, 1897
Cyclophora ruficiliaria Herrich-Schäffer, [1855]
Cyclophora suppunctaria Zeller, 1847
Dasycorsa modesta Staudinger, 1879
Dicrognophos amanensis Wehrli, 1934
Dicrognophos pseudosnelleni Rjabov, 1964
Dicrognophos sartatus Treitschke, 1827
Discoloxia blomeri Curtis, 1832
Dyscia conspersaria ([Denis & Schiffermüller], 1775)
Dyscia lentiscaria Donzel, 1837
Dyscia sultanica Wehrli, 1936
Ecliptopera silaceata ([Denis & Schiffermüller], 1775)
Ectropis bistortata Goeze, 1781
Ectropis consonaria (Hübner, [1799])
Eilicrinia acardia Stichel, 1911
Eilicrinia cordiaria (Hübner, 1790)
Eilicrinia subcordiaria Herrich-Schäffer, [1852]
Eilicrinia trinotata Metzner, 1845
Ematurga atomaria (Linnaeus, 1758)
Enanthyperythra legataria Herrich-Schäffer, [1852]
Ennomos autumnaria Werneburg, 1859
Ennomos effractaria Freyer, [1842]
Ennomos erosaria (Hübner, 1790)
Ennomos fraxineti Wiltshire, 1947
Ennomos quercaria (Hübner, [1813])
Ennomos quercinaria (Hufnagel, 1767)
Entephria caesiata ([Denis & Schiffermüller], 1775)
Entephria ignorata Staudinger, 1892
Epilobophora sabinata Geyer, [1831]
Epione paralellaria ([Denis & Schiffermüller], 1775)
Epione repandaria (Hufnagel, 1767)
Epirrhoe alternata (Müller, 1764)
Epirrhoe galiata ([Denis & Schiffermüller], 1775)
Epirrhoe molluginata (Hübner, [1813])
Epirrhoe rivata (Hübner, [1813])
Epirrhoe tristata (Linnaeus, 1758)
Epirrita nebulata Borgstroem, 1784
Erannis declinans Staudinger, 1879
Erannis defoliaria (Linnaeus, 1761)
Euchoeca nebulata Scopoli, 1763
Euchrognophos dubitarius Staudinger, 1892
Euchrognophos nanodes Wehrli, 1936
Euchrognophos variegatus Duponchel, 1830
Eucrostes indigenata Villers, 1789
Eulithis prunata (Linnaeus, 1758)
Eulithis roessleraria Staudinger, 1871
Eumannia oppositaria Mann, 1864
Eumera hoeferi Wehrli, 1934
Eumera regina Staudinger, 1892
Eumera turcosyrica Wehrli, 1932
Eunychiodes amygdalaria Herrich-Schäffer, [1848]
Eunychiodes divergaria Staudinger, 1892
Eunychiodes variabila Brandt, 1938
Euphya biangulata Haworth, [1809]
Euphya chalusata Wiltshire, 1970
Euphya frustata Treitschke, 1828
Euphya sintenisi Staudinger, 1892
Eupithecia abietaria Goeze, 1781
Eupithecia achyrdaghica Wehrli, 1929
Eupithecia adscriptaria Staudinger, 1871
Eupithecia albosparsata de Joannis, 1891
Eupithecia alliaria Staudinger, 1870
Eupithecia amasina Bohatsch, 1893
Eupithecia arenbergeri Pinker, 1976
Eupithecia bastelbergeri Dietze, 1913
Eupithecia breviculata Donzel, 1837
Eupithecia brunneata Staudinger, 1900
Eupithecia buxata Pinker, 1958
Eupithecia calligraphata Wagner, 1929
Eupithecia centaureata ([Denis & Schiffermüller], 1775)
Eupithecia cerussaria Lederer, 1855
Eupithecia cuculliaria Rebel, 1901
Eupithecia denotata (Hübner, [1813])
Eupithecia denticulata Treitschke, 1828
Eupithecia distinctaria Herrich-Schäffer, [1848]
Eupithecia euxinata Bohatsch, 1893
Eupithecia expallidata Doubleday, 1856
Eupithecia extraversaria Herrich-Schäffer, [1852]
Eupithecia extremata (Fabricius, 1787)
Eupithecia furcata Staudinger, 1879
Eupithecia gemellata Herrich-Schäffer, 1861
Eupithecia graphata Treitschke, 1828
Eupithecia gratiosata Herrich-Schäffer, 1861
Eupithecia gueneata Mabille, 1862
Eupithecia haworthiana Doubleday, 1856
Eupithecia icterata Villers, 1789
Eupithecia impurata (Hübner, [1813])
Eupithecia inconspicuata Bohatsch, 1893
Eupithecia indigata (Hübner, [1813])
Eupithecia innotata (Hufnagel, 1767)
Eupithecia insigniata (Hübner, 1790)
Eupithecia intricata Zetterstedt, [1839]
Eupithecia irriguata (Hübner, [1813])
Eupithecia irritaria Staudinger, 1892
Eupithecia korvaci Prout, 1939
Eupithecia kunzi Pinker, 1976
Eupithecia lacteolata Dietze, 1906
Eupithecia laquaearia Herrich-Schäffer, [1848]
Eupithecia limbata Staudinger, 1879
Eupithecia linariata (Fabricius, 1787)
Eupithecia lutosaria Bohatsch, 1893
Eupithecia maeoticaria Bohatsch, 1893
Eupithecia marasa Wehrli, 1932
Eupithecia marginata Staudinger, 1892
Eupithecia mesogrammata Dietze, 1910
Eupithecia millefoliata Roesler, 1866
Eupithecia nigritaria Staudinger, 1879
Eupithecia novata Dietze, 1903
Eupithecia ochridata Pinker, 1968
Eupithecia ochrovittata Christoph, 1887
Eupithecia oxycedrata Rambur, 1833
Eupithecia pfeifferi Wehrli, 1929
Eupithecia pinkeri Mironov, 1991
Eupithecia plumbeolata Haworth, [1809]
Eupithecia pseudocastigata Pinker, 1976
Eupithecia pulchellata Stephens, 1831
Eupithecia pusillata (Fabricius, 1787)
Eupithecia quercetica Prout, 1938
Eupithecia reisserata Pinker, 1976
Eupithecia saueri Vojnits, 1978
Eupithecia scalptata Christoph, 1885
Eupithecia schiefereri Bohatsch, 1893
Eupithecia scopariata Rambur, 1833
Eupithecia semigraphata Bruand, [1847]
Eupithecia separata Staudinger, 1879
Eupithecia silenata Assman, 1849
Eupithecia silenicolata Mabille, 1867
Eupithecia simpliciata Haworth, [1809]
Eupithecia spadiceata Zerny, 1933
Eupithecia spissilineata Metzner, 1846
Eupithecia staudingeri Bohatsch, 1893
Eupithecia subfenestrata Staudinger, 1892
Eupithecia subfuscata Haworth, [1809]
Eupithecia subsequaria Herrich-Schäffer, [1852]
Eupithecia subumbrata ([Denis & Schiffermüller], 1775)
Eupithecia succenturiata (Linnaeus, 1758)
Eupithecia syriacata Staudinger, 1879
Eupithecia tantillaria Boisduval, 1840
Eupithecia terrenata Dietze, 1913
Eupithecia tripunctaria Herrich-Schäffer, [1852]
Eupithecia undata Freyer, [1840]
Eupithecia unedonata Mabille, 1868
Eupithecia variostrigata Alphéraky, 1876
Eupithecia venosata (Fabricius, 1787)
Eupithecia vulgata Haworth, [1809]
Eupithecia wehrlii Wagner, 1931
Eustroma mardinata Staudinger, 1895
Fritzwagneria waltheri Wagner, 1919
Geometra papilionaria (Linnaeus, 1758)
Glossotrophia confinaria Herrich-Schäffer, [1847]
Glossotrophia diffinaria Prout, 1913
Gnopharmia colchidaria Lederer, 1870
Gnopharmia objectaria Staudinger, 1892
Gnopharmia rubraria Staudinger, 1892
Gymnoscelis dearmata (Dietze, 1904)
Gymnoscelis rufifasciata Haworth, [1809]
Gypsochroa renitidata (Hübner, [1817])
Hemistola chrysoprasaria (Esper, [1794])
Hemithea aestivaria (Hübner, [1799])
Heterolocha laminaria Herrich-Schäffer, [1852]
Hierochthonia pulverata Warren, 1901
Horisme corticata Treitschke, 1835
Horisme tersata ([Denis & Schiffermüller], 1775)
Horisme vitalbata (Hübner, [1799])
Hydria cervinalis Scopoli, 1763
Hydria undulata (Linnaeus, 1758)
Hydriomena coerulata (Fabricius, 1777)
Hydriomena furcata Thunberg, 1784
Hydriomena ruberata Freyer, [1831]
Hylaea cedricola Wehrli, 1929
Hylaea fasciaria (Linnaeus, 1758)
Hylaea pinicolaria Bellier, 1861
Hypoxystis pluviaria (Fabricius, 1787)
Idaea affinitata Bang-Haas, 1907
Idaea albitorquata Püngeler, 1907
Idaea allongata Staudinger, 1898
Idaea antennata Wehrli, 1931
Idaea aureolaria ([Denis & Schiffermüller], 1775)
Idaea aversata (Linnaeus, 1758)
Idaea biselata (Hufnagel, 1767)
Idaea camparia Herrich-Schäffer, [1852]
Idaea cervantaria Millière, 1869
Idaea circuitaria (Hübner, [1819])
Idaea congruata Zeller, 1847
Idaea consanguinaria Lederer, 1853
Idaea consociata Staudinger, 1900
Idaea consolidata Lederer, 1853
Idaea degeneraria (Hübner, [1799])
Idaea determinata Staudinger, 1876
Idaea deversaria Herrich-Schäffer, [1847]
Idaea dilutaria (Hübner, [1799])
Idaea dimidiata (Hufnagel, 1767)
Idaea efflorata Zeller, 1849
Idaea elongaria Rambur, 1833
Idaea emarginata (Linnaeus, 1758)
Idaea fasciata Staudinger, 1892
Idaea fathmaria Oberthür, 1876
Idaea filicata (Hübner, [1799])
Idaea flaveolaria (Hübner, [1809])
Idaea fuscovenosa Goeze, 1781
Idaea gracilipennis Warren, 1901
Idaea holliata Homberg, 1909
Idaea humiliata (Hufnagel, 1767)
Idaea infirmaria Rambur, 1833
Idaea inquinata Scopoli, 1763
Idaea intermedia Staudinger, 1879
Idaea laevigata Scopoli, 1763
Idaea mediaria (Hübner, [1819])
Idaea metohiensis Rebel, 1900
Idaea moniliata ([Denis & Schiffermüller], 1775)
Idaea murinata (Hufnagel, 1767)
Idaea nitidata Herrich-Schäffer, 1861
Idaea obsoletaria Rambur, 1833
Idaea ochrata Scopoli, 1763
Idaea ossiculata Lederer, 1871
Idaea osthelderi Wehrli, 1932
Idaea ostrinaria (Hübner, [1813])
Idaea pallidata ([Denis & Schiffermüller], 1775)
Idaea peluraria Reisser, 1939
Idaea politaria (Hübner, [1799])
Idaea proclivata Fuchs, 1902
Idaea roseofasciata Christoph, 1882
Idaea rufaria (Hübner, [1799])
Idaea ruficostata Zeller, 1847
Idaea rusticata ([Denis & Schiffermüller], 1775)
Idaea sericeata (Hübner, [1813])
Idaea serpentata (Hufnagel, 1767)
Idaea sodaliaria Herrich-Schäffer, [1852]
Idaea straminata Borkhausen, 1794
Idaea subpurpurata Staudinger, 1900
Idaea subsericeata Haworth, [1809]
Idaea sylvestraria (Hübner, [1799])
Idaea taurica Bang-Haas, 1907
Idaea textaria Lederer, 1861
Idaea tineata Thierry-Mieg, 1910
Idaea trigeminata Haworth, [1809]
Idaea troglodytaria Herrich-Schäffer, [1852]
Idaea vulpinaria Herrich-Schäffer, [1852]
Itame vincularia (Hübner, [1813])
Itame wauaria (Linnaeus, 1758)
Jodis lactearia (Linnaeus, 1758)
Kentrognophos ciscaucasicus Rjabov, 1964
Kentrognophos mardinarius Staudinger, 1901
Kentrognophos onustarius Herrich-Schäffer, [1852]
Kentrognophos zeitunarius Staudinger, 1901
Larentia clavaria Haworth, [1809]
Libanonia semitata Prout, 1913
Ligdia adustata (Fabricius, 1787)
Ligdia lassulata Rogenhofer, 1873
Lithostege ancyrana Prout, 1938
Lithostege bosphoraria Herrich-Schäffer, [1848]
Lithostege farinata (Hufnagel, 1767)
Lithostege griseata ([Denis & Schiffermüller], 1775)
Lithostege infuscata Eversmann, 1837
Lithostege odessaria Boisduval, 1848
Lithostege palaestinensis Amsel, 1935
Lithostege witzenmanni Standfuss, 1892
Lobophora halterata (Hufnagel, 1767)
Lomaspilis marginata (Linnaeus, 1758)
Lomaspilis opis Butler, 1879
Lomographa punctata (Fabricius, 1775)
Lycia graecaria Staudinger, 1861
Lycia hirtaria (Linnaeus, 1761)
Lycia pomonaria (Hübner, 1790)
Lycia zonaria ([Denis & Schiffermüller], 1775)
Lysognophos certhiatus Rebel & Zerny, 1931
Lysognophos lividatus (Fabricius, 1787)
Lythria purpuraria (Linnaeus, 1758)
Lythria rotaria (Fabricius, 1798)
Melanthia procellata ([Denis & Schiffermüller], 1775)
Menophra abruptaria Thunberg, 1792
Menophra japygiaria Costa, 1849
Menophra trypanaria Wiltshire, 1948
Microloxia herbaria (Hübner, [1813])
Minoa murinata Scopoli, 1763
Myinodes interpunctaria Herrich-Schäffer, 1839
Narraga cappadocica Herbulot, 1943
Narraga fasciolaria (Hufnagel, 1767)
Nebula achromaria Harpe, 1852
Nebula apiciata Staudinger, 1892
Nebula approxiamata Staudinger, 1879
Nebula ludificata Lederer, 1870
Nebula reclamata Prout, 1914
Nebula salicata (Hübner, [1799])
Nebula senectaria Herrich-Schäffer, [1852]
Nebula vartianata Wiltshire, 1970
Neognopharmia stevenaria Boisduval, 1840
Neognophina pfeifferi Wehrli, 1926
Nychiodes obscuraria Villers, 1789
Nychiodes rayatica Wiltshire, 1957
Ochodontia adustaria Fischer de Waldheim, 1840
Odezia atrata (Linnaeus, 1758)
Odonthognophos zacharius Staudinger, 1879
Odontopera bidentata (Linnaeus, 1761)
Operophtera brumata (Linnaeus, 1758)
Operophtera fagata Scharfenberg, 1805
Opisthograptis luteolata (Linnaeus, 1758)
Opisthograptis niko Christoph, 1893
Organognophos wanensis Wehrli, 1936
Orthonama lignata (Hübner, [1799])
Orthonama opstipata (Fabricius, 1794)
Orthostixis calcularia Lederer, 1853
Orthostixis cribraria (Hübner, [1799])
Oulobophora externata Freyer, 1846
Oulobophora internata Püngeler, 1888
Ourapteryx malatyenis Wehrli, 1936
Ourapteryx persica Ménétriés, 1832
Ourapteryx sambucaria (Linnaeus, 1758)
Pachycnemia hippocastanaria (Hübner, [1799])
Pareulype berberata (Fabricius, 1787)
Pellonia vibicaria (Linnaeus, 1761)
Pelurga comitata (Linnaeus, 1758)
Pennithera firmata (Hübner, 1822)
Perconia strigillaria (Hübner, 1787)
Peribatodes correptarius Zeller, 1847
Peribatodes gemmaria Brahm, 1791
Peribatodes manuelaria Herrich-Schäffer, [1852]
Peribatodes perversarius Boisduval, 1840
Peribatodes secundarius (Esper, [1794])
Peribatodes syrilibanoni Wehrli, 1931
Peribatodes umbrarius (Hübner, [1809])
Perizoma albulatum ([Denis & Schiffermüller], 1775)
Perizoma alchemillatum (Linnaeus, 1758)
Perizoma blandiatum ([Denis & Schiffermüller], 1775)
Perizoma flavofasciatum Thunberg, 1792
Perizoma gigas Wiltshire, 1976
Perizoma parahydratum Alberti, 1969
Perizoma verberatum Scopoli, 1763
Petrophora chlorosata Scopoli, 1763
Petrophora narbonea (Linnaeus, 1767)
Phaselia serrularia Eversmann, 1847
Phibalapteryx virgata (Hufnagel, 1767)
Phigalia pedaria (Fabricius, 1787)
Philereme senescens Staudinger, 1892
Philereme transversata (Hufnagel, 1767)
Philereme vetulata ([Denis & Schiffermüller], 1775)
Phyllometra culminaria Eversmann, 1843
Plagodis dolabaria (Linnaeus, 1767)
Plemyria rubiginata ([Denis & Schiffermüller], 1775)
Problepsis ocellata Frivaldsky, 1845
Protorhoe corollaria Herrich-Schäffer, [1848]
Protorhoe renodata Püngeler, 1908
Protorhoe unicata Guenée, 1857
Proutictis artesiaria ([Denis & Schiffermüller], 1775)
Pseudopanthera macularia (Linnaeus, 1758)
Pseudopanthera syriacata Guenée, 1852
Pseudoterpna coronillaria (Hübner, [1817])
Pseudoterpna pruinata (Hufnagel, 1767)
Pungeleria capreolaria (Fabricius, 1787)
Pydna badiaria Freyer, [1841]
Rheumaptera hastata (Linnaeus, 1758)
Rheumaptera montivagata Duponchel, 1830
Rhodometra sacraria (Linnaeus, 1767)
Rhodostrophia calabra Petagna, 1786
Rhodostrophia jacularia (Hübner, [1813])
Rhodostrophia sieversi Christoph, 1882
Rhodostrophia tabidaria Zeller, 1847
Rhoptria asperaria (Hübner, [1817])
Rhoptria dolosaria Herrich-Schäffer, [1848]
Rhoptria mardinata Staudinger, 1900
Schistostege nubilaria (Hübner, [1799])
Scopula beckeraria Lederer, 1853
Scopula decorata ([Denis & Schiffermüller], 1775)
Scopula drenowskii Sterneck, 1941
Scopula flaccidaria Zeller, 1852
Scopula imitaria (Hübner, [1799])
Scopula immistaria Herrich-Schäffer, [1852]
Scopula immorata (Linnaeus, 1758)
Scopula immutata (Linnaeus, 1758)
Scopula incanata (Linnaeus, 1758)
Scopula luridata Zeller, 1847
Scopula marginepunctata Goeze, 1781
Scopula nigropunctata (Hufnagel, 1767)
Scopula ochroleucaria Herrich-Schäffer, [1847]
Scopula orientalis Alphéraky, 1876
Scopula ornata Scopoli, 1763
Scopula pseudohonestata Wehrli,
Scopula rubiginata (Hufnagel, 1767)
Scopula submutata Treitschke, 1828
Scopula tessellaria Boisduval, 1840
Scopula transcaspica Prout, 1935
Scopula turbidaria (Hübner, [1819])
Scopula virgulata ([Denis & Schiffermüller], 1775)
Scotopteryx alpherakii Erschoff, 1877
Scotopteryx bipunctaria ([Denis & Schiffermüller], 1775)
Scotopteryx chenopodiata (Linnaeus, 1758)
Scotopteryx coarctaria ([Denis & Schiffermüller], 1775)
Scotopteryx langi Christoph, 1885
Scotopteryx luridata (Hufnagel, 1767)
Scotopteryx moeniata Scopoli, 1763
Scotopteryx mucronata Scopoli, 1763
Scotopteryx nebulata Bang-Haas, 1907
Scotopteryx octodurensis Favre, 1902
Scotopteryx vicinaria Duponchel, [1845]
Selenia dentaria (Fabricius, 1775)
Selenia lunularia (Hübner, 1788)
Selenia tetralunaria (Hufnagel, 1767)
Selidosema brunnearium Villers, 1789
Selidosema rorarium (Fabricius, 1777)
Semiothisa aestimaria (Hübner, [1809])
Semiothisa alternata ([Denis & Schiffermüller], 1775)
Semiothisa clathrata (Linnaeus, 1758)
Semiothisa glarearia Brahm, 1791
Semiothisa liturata (Linnaeus, 1761)
Semiothisa notata (Linnaeus, 1758)
Semiothisa rippertaria Duponchel, 1830
Semiothisa signaria (Hübner, [1809])
Semiothisa syriacaria Staudinger, 1871
Serraca punctinalis Scopoli, 1763
Siona lineata Scopoli, 1763
Stamnodes depeculata Lederer, 1870
Stegania dalmataria Guenée, 1857
Stegania dilectaria (Hübner, 1790)
Stegania trimaculata Villers, 1789
Stueningia poggearia Lederer, 1855
Stueningia wolfi Hausmann, 1993
Synopsia sociaria (Hübner, [1799])
Tephrina arenacearia ([Denis & Schiffermüller], 1775)
Tephrina hopfferaria Staudinger, 1879
Tephrina inconspicuaria (Hübner, [1817])
Tephrina murinaria ([Denis & Schiffermüller], 1775)
Tephronia sepiaria (Hufnagel, 1767)
Thalera fimbrialis Scopoli, 1763
Thera britannica Turner, 1925
Thera cognata Thunberg, 1792
Thera cupressata Freyer, [1830]
Thera juniperata (Linnaeus, 1758)
Thera obeliscata (Hübner, 1787)
Thera variata ([Denis & Schiffermüller], 1775)
Therapis flavicaria ([Denis & Schiffermüller], 1775)
Theria rupicapraria ([Denis & Schiffermüller], 1775)
Thetidia volgaria Guenée, 1857
Timandra griseata Petersen, 1902
Trichodezia haberhaueri Lederer, 1864
Triphosa agnata Cerf, 1918
Triphosa dubitata (Linnaeus, 1758)
Triphosa sabaudiata Duponchel, 1830
Triphosa taochata Lederer, 1870
Warneckeella malatyana Wehrli, 1934
Wehrliola revocaria Staudinger, 1892
Xanthorhoe biriviata Borkhausen, 1794
Xanthorhoe designata (Hufnagel, 1767)
Xanthorhoe ferrugata Clerck, 1759
Xanthorhoe fluctuata (Linnaeus, 1758)
Xanthorhoe inconsiderata Staudinger, 1892
Xanthorhoe montanata ([Denis & Schiffermüller], 1775)
Xanthorhoe munitata (Hübner, [1809])
Xanthorhoe oxybiata Millière, 1872
Xanthorhoe quadrifasciaria (Linnaeus, 1761)
Xanthorhoe rectifasciaria Lederer, 1853
Xenochlorodes beryllaria Mann, 1853
Xenochlorodes olympiaria Herrich-Schäffer, [1852]
unplaced auctata Staudinger, 1879
Cimeliidae
Axia olga Staudinger, 1900
Axia theresiae Korb, 1900
Drepanidae
Cilix asiatica Bang-Haas, 1907
Cilix glaucata Scopoli, 1763
Drepana falcataria (Linnaeus, 1758)
Watsonalla binaria (Hufnagel, 1767)
Watsonalla cultraria (Fabricius, 1775)
Watsonalla uncinula Borkhausen, 1790
Thyatiridae
Cymatophorina diluta ([Denis & Schiffermüller], 1775)
Habrosyne pyritoides (Hufnagel, 1766)
Polyploca korbi Rebel, 1901
Polyploca ruficollis (Fabricius, 1787)
Tethea ocularis (Linnaeus, 1767)
Tethea or ([Denis & Schiffermüller], 1775)
Thyatira batis (Linnaeus, 1758)
Thyatira hedemanni Christoph, 1885
Sphingidae
Acherontia atropos (Linnaeus, 1758)
Agrius convolvuli (Linnaeus, 1758)
Akbesia davidi Oberthür, 1884
Clarina kotschyi Kollar, [1849]
Daphnis nerii (Linnaeus, 1758)
Deilephila elpenor (Linnaeus, 1758)
Deilephila porcellus (Linnaeus, 1758)
Deilephila suellus Staudinger, 1878
Dolbina elegans A.Bang-Haas, 1927
Hemaris croatica (Esper, [1779])
Hemaris dentata Staudinger, 1887
Hemaris fuciformis (Linnaeus, 1758)
Hemaris tityus (Linnaeus, 1758)
Hippotion celerio (Linnaeus, 1758)
Hyles centralasiae Staudinger, 1887
Hyles euphorbiae (Linnaeus, 1758)
Hyles gallii (Rottemburg, 1775)
Hyles hippophaes (Esper, [1789])
Hyles livornica (Esper, [1779])
Hyles nicaea Prunner, 1798
Hyles vespertilio (Esper, [1779])
Hyles zygophylli
Hyloicus pinastri (Linnaeus, 1758)
Hyloicus Hübner, [1819]
Laothoe populi (Linnaeus, 1758)
Macroglossum stellatarum (Linnaeus, 1758)
Marumba quercus ([Denis & Schiffermüller], 1775)
Mimas tiliae (Linnaeus, 1758)
Proserpinus proserpinus Pallas, 1772
Rethera brandti O.Bang-Haas, 1937
Rethera komarovi Christoph, 1885
Smerinthus kindermannii Lederer, 1852
Smerinthus ocellatus (Linnaeus, 1758)
Sphingaenopiopsis gorgoniades (Hübner, [1819])
Sphinx ligustri (Linnaeus, 1758)
Theretra alecto (Linnaeus, 1758)
Notodontidae
Cerura intermedia Teich, 1896
Cerura vinula (Linnaeus, 1758)
Clostera anachoreta (Fabricius, 1787)
Clostera curtula (Linnaeus, 1758)
Clostera pigra (Hufnagel, 1766)
Dicranura ulmi ([Denis & Schiffermüller], 1775)
Drymonia dodonaea ([Denis & Schiffermüller], 1775)
Drymonia querna (Fabricius, 1787)
Drymonia ruficornis (Hufnagel, 1766)
Eligmodonta ziczac (Linnaeus, 1758)
Furcula bicuspis Borkhausen, 1790
Furcula bifida Brahm, 1787
Furcula furcula (Linnaeus, 1761)
Furcula interrupta Christoph, 1867
Furcula syra Grum-Grshimailo, 1899
Harpyia milhauseri (Fabricius, 1775)
Neoharpyia pulcherrima Brandt, 1938
Notodonta dromedarius (Linnaeus, 1767)
Ochrostigma melagona (Hufnagel, 1766)
Ochrostigma velitaris (Hufnagel, 1766)
Paradrymonia vittata Staudinger, 1892
Peridea anceps Goeze, 1781
Peridea korbi Rebel, 1918
Phalera bucephala (Linnaeus, 1758)
Phalera bucephaloides Ochsenheimer, 1810
Pheosia tremula (Linnaeus, 1761)
Pterostoma palpina (Linnaeus, 1761)
Ptilodon capucina (Linnaeus, 1758)
Ptilodontella cucullina ([Denis & Schiffermüller], 1775)
Ptilodontella saerdabensis Daniel, 1938
Rhegmatophila alpina Bellier, 1881
Spatalia argentina ([Denis & Schiffermüller], 1775)
Stauropus fagi (Linnaeus, 1758)
Tritopha tritopha ([Denis & Schiffermüller], 1775)
Thaumetopoeidae
Thaumetopoea processionea (Linnaeus, 1758)
Thaumetopoea solitaria Freyer, [1838]
Traumatocampa pinivora Treitschke, 1834
Traumatocampa pityocampa ([Denis & Schiffermüller], 1775)
Traumatocampa wilkinsoni Tams, 1925
Lymantriidae
Arctornis l-nigrum (Müller, 1764)
Clethrogyna dubia Tauscher, 1806
Dicallomera fascelina (Linnaeus, 1758)
Elkneria pudibunda (Linnaeus, 1758)
Euproctis chrysorrhoea (Linnaeus, 1758)
Laelia coenosa (Hübner, [1808])
Leucoma salicis (Linnaeus, 1758)
Lymantria destituta Staudinger, 1892
Lymantria dispar (Linnaeus, 1758)
Lymantria lapidicola Herrich-Schäffer, 1852
Lymantria monacha (Linnaeus, 1758)
Ocneria detrita (Esper, [1785])
Ocneria raddei Christoph, 1885
Ocneria samarita Staudinger, 1895
Ocneria terebinthi Freyer, [1838]
Ocneria terebynthina Staudinger, 1894
Ocnerogyna amanda Staudinger, 1892
Orgyia antiqua (Linnaeus, 1758)
Orgyia trigotephras Boisduval, 1829
Sphrageidus melania Staudinger, 1891
Sphrageidus similis (Fuessly, 1775)
Arctiidae
Ammobiota festiva (Hufnagel, 1766)
Arctia caja (Linnaeus, 1758)
Atolmis rubricollis (Linnaeus, 1758)
Axiopoena maura Eichwald, 1830
Callimorpha dominula (Linnaeus, 1758)
Chelis maculosa Gerning, 1780
Conjuncta conjuncta Staudinger, 1892
Coscinia cribraria (Linnaeus, 1758)
Cybosia mesomella (Linnaeus, 1758)
Cymbalophora rivularis Ménétriés, 1832
Diacrisia sannio (Linnaeus, 1758)
Diaphora mendica (Linnaeus, 1761)
Eilema caniola (Hübner, [1808])
Eilema complana (Linnaeus, 1758)
Eilema costalis Zeller, 1847
Eilema griseola (Hübner, [1802])
Eilema lurideola Zincken, 1817
Eilema lutarella (Linnaeus, 1758)
Eilema palliatella Scopoli, 1763
Eilema pseudocomplana Daniel, 1939
Eilema pygmaeola Doubleday, 1847
Epatalmis caesarea Goeze, 1781
Epicallia villica (Linnaeus, 1758)
Euplagia quadripunctaria (Linnaeus, 1761)
Euplagia splendidior Tams, 1922
Hyphantria cunea Drury, 1773
Hyphoraia aulica (Linnaeus, 1758)
Katha deplana (Esper, [1787])
Lacydes spectabilis Tauscher, 1806
Lithosia quadra (Linnaeus, 1758)
Maurica bellieri Lederer, 1855
Miltochrista miniata Forster, 1771
Muscula muscula Staudinger, 1899
Nebrarctia semiramis Staudinger, 1892
Nudaria mundana (Linnaeus, 1761)
Ocnogyna anatolica Witt, 1980
Ocnogyna herrichi Staudinger, 1879
Ocnogyna loewii Zeller, 1846
Ocnogyna parasita (Hübner, 1790)
Paidia albescens Staudinger, 1892
Paidia cinerascens Herrich-Schäffer, [1847]
Paidia rica Freyer, [1855]
Parasemia plantaginis (Linnaeus, 1758)
Pelosia muscerda (Hufnagel, 1766)
Pelosia obtusa Herrich-Schäffer, [1847]
Phragmatobia fuliginosa (Linnaeus, 1758)
Phragmatobia placida Frivaldsky, 1835
Rhyparia purpurata (Linnaeus, 1758)
Setina aurata Ménétriés, 1832
Setina roscida ([Denis & Schiffermüller], 1775)
Spilosoma lubricipeda (Linnaeus, 1758)
Spilosoma luteum (Hufnagel, 1766)
Spilosoma urticae (Esper, [1789])
Spiris striata (Linnaeus, 1758)
Thumatha senex (Hübner, [1808])
Tyria jacobaeae (Linnaeus, 1758)
Utetheisa pulchella (Linnaeus, 1758)
Watsonarctia deserta Bartel, 1902
Wittia sororcula (Hufnagel, 1766)
Ctenuchidae
Amata aequipuncta Turati, 1917
Amata antiochena Lederer, 1861
Amata banghaasi Obraztsov, 1966
Amata caspica Staudinger, 1877
Amata hakkariana Freina, 1982
Amata phegea (Linnaeus, 1758)
Amata rossica Turati, 1917
Amata sintenisi Standfuss, 1892
Amata tanina Freina, 1982
Amata taurica Turati, 1917
Amata transcaspica Obraztsov, 1941
Amata wiltshirei Bytinski-Salz, 1939
Callitomis dimorpha Bytinski-Salz, 1939
Dysauxes ancilla (Linnaeus, 1767)
Dysauxes famula Freyer, 1836
Dysauxes punctata (Fabricius, 1781)
Dysauxes syntomida Staudinger, 1892
Noctuidae
About 1085 species - see: List of moths of Turkey (Noctuidae)
External links
Tentative Checklist of the Turkish Lepidoptera part 1
Tentative Checklist of the Turkish Lepidoptera part 2
Tentative Checklist of the Turkish Lepidoptera part 3
Fauna Europaea (European part of Turkey)
Turkey
Moths
8243357
https://en.wikipedia.org/wiki/100%20Gigabit%20Ethernet
100 Gigabit Ethernet
40 Gigabit Ethernet (40GbE) and 100 Gigabit Ethernet (100GbE) are groups of computer networking technologies for transmitting Ethernet frames at rates of 40 and 100 gigabits per second (Gbit/s), respectively. These technologies offer significantly higher speeds than 10 Gigabit Ethernet. The technology was first defined by the IEEE 802.3ba-2010 standard and later by the 802.3bg-2011, 802.3bj-2014, 802.3bm-2015, and 802.3cd-2018 standards.
The standards define numerous port types with different optical and electrical interfaces and different numbers of optical fiber strands per port. Short distances (e.g. 7 m) over twinaxial copper cable are supported, while fiber standards reach up to 80 km.
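As a rough illustration of how these port types differ, the following sketch (Python) groups a few well-known 40G/100G interfaces by medium, lane count and nominal reach; the names and figures are commonly quoted values given here for illustration only, not normative data from the standards.

# Illustrative sketch: a few well-known 40G/100G port types with their media,
# lane counts and nominal reaches (values are indicative, not normative).
PORT_TYPES = {
    "40GBASE-CR4":  {"medium": "twinaxial copper",  "lanes": 4, "reach_m": 7},
    "40GBASE-SR4":  {"medium": "multimode fiber",   "lanes": 4, "reach_m": 100},
    "40GBASE-LR4":  {"medium": "single-mode fiber", "lanes": 4, "reach_m": 10_000},
    "100GBASE-CR4": {"medium": "twinaxial copper",  "lanes": 4, "reach_m": 5},
    "100GBASE-SR4": {"medium": "multimode fiber",   "lanes": 4, "reach_m": 100},
    "100GBASE-LR4": {"medium": "single-mode fiber", "lanes": 4, "reach_m": 10_000},
    "100GBASE-ER4": {"medium": "single-mode fiber", "lanes": 4, "reach_m": 40_000},
    "100GBASE-ZR":  {"medium": "single-mode fiber (DWDM, coherent)", "lanes": 1, "reach_m": 80_000},
}

def ports_reaching(min_reach_m):
    """Return the port types whose nominal reach is at least min_reach_m."""
    return [name for name, p in PORT_TYPES.items() if p["reach_m"] >= min_reach_m]

print(ports_reaching(10_000))  # the long-reach fiber interfaces (LR4, ER4, ZR)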
Standards development
On July 18, 2006, a call for interest for a High Speed Study Group (HSSG) to investigate new standards for high-speed Ethernet was held at the IEEE 802.3 plenary meeting in San Diego.
The first 802.3 HSSG study group meeting was held in September 2006. In June 2007, a trade group called "Road to 100G" was formed after the NXTcomm trade show in Chicago.
On December 5, 2007, the Project Authorization Request (PAR) for the P802.3ba 40 Gbit/s and 100 Gbit/s Ethernet Task Force was approved with the following project scope:
The purpose of this project is to extend the 802.3 protocol to operating speeds of 40 Gbit/s and 100 Gbit/s in order to provide a significant increase in bandwidth while maintaining maximum compatibility with the installed base of 802.3 interfaces, previous investment in research and development, and principles of network operation and management. The project is to provide for the interconnection of equipment satisfying the distance requirements of the intended applications.
The 802.3ba task force met for the first time in January 2008. This standard was approved at the June 2010 IEEE Standards Board meeting under the name IEEE Std 802.3ba-2010.
The first 40 Gbit/s Ethernet Single-mode Fibre PMD study group meeting was held in January 2010 and on March 25, 2010 the P802.3bg Single-mode Fibre PMD Task Force was approved for the 40 Gbit/s serial SMF PMD.
The scope of this project is to add a single-mode fiber Physical Medium Dependent (PMD) option for serial 40 Gbit/s operation by specifying additions to, and appropriate modifications of, IEEE Std 802.3-2008 as amended by the IEEE P802.3ba project (and any other approved amendment or corrigendum).
On June 17, 2010, the IEEE 802.3ba standard was approved. In March 2011, the IEEE 802.3bg standard was approved. On September 10, 2011, the P802.3bj 100 Gbit/s Backplane and Copper Cable task force was approved.
The scope of this project is to specify additions to and appropriate modifications of IEEE Std 802.3 to add 100 Gbit/s 4-lane Physical Layer (PHY) specifications and management parameters for operation on backplanes and twinaxial copper cables, and specify optional Energy Efficient Ethernet (EEE) for 40 Gbit/s and 100 Gbit/s operation over backplanes and copper cables.
On May 10, 2013, the P802.3bm 40 Gbit/s and 100 Gbit/s Fiber Optic Task Force was approved.
This project is to specify additions to and appropriate modifications of IEEE Std 802.3 to add 100 Gbit/s Physical Layer (PHY) specifications and management parameters, using a four-lane electrical interface for operation on multimode and single-mode fiber optic cables, and to specify optional Energy Efficient Ethernet (EEE) for 40 Gbit/s and 100 Gbit/s operation over fiber optic cables. In addition, to add 40 Gbit/s Physical Layer (PHY) specifications and management parameters for operation on extended reach (>10 km) single-mode fiber optic cables.
Also on May 10, 2013, the P802.3bq 40GBASE-T Task Force was approved.
Specify a Physical Layer (PHY) for operation at 40 Gbit/s on balanced twisted-pair copper cabling, using existing Media Access Control, and with extensions to the appropriate physical layer management parameters.
On June 12, 2014, the IEEE 802.3bj standard was approved.
On February 16, 2015, the IEEE 802.3bm standard was approved.
On May 12, 2016, the IEEE P802.3cd Task Force started working to define next generation two-lane 100 Gbit/s PHY.
On May 14, 2018, the PAR for the IEEE P802.3ck Task Force was approved. The scope of this project is to specify additions to and appropriate modifications of IEEE Std 802.3 to add Physical Layer specifications and Management Parameters for 100 Gbit/s, 200 Gbit/s, and 400 Gbit/s electrical interfaces based on 100 Gbit/s signaling.
On December 5, 2018, the IEEE-SA Board approved the IEEE 802.3cd standard.
On November 12, 2018, the IEEE P802.3ct Task Force started working to define PHY supporting 100 Gbit/s operation on a single wavelength capable of at least 80 km over a DWDM system (using a combination of phase and amplitude modulation with coherent detection).
In May 2019, the IEEE P802.3cu Task Force started working to define single-wavelength 100 Gbit/s PHYs for operation over SMF (Single-Mode Fiber) with lengths up to at least 2 km (100GBASE-FR1) and 10 km (100GBASE-LR1).
In June 2020, the IEEE P802.3db Task Force started working to define a physical layer specification that supports 100 Gbit/s operation over 1 pair of MMF with lengths up to at least 50 m.
On February 11, 2021, the IEEE-SA Board approved the IEEE 802.3cu standard.
On June 16, 2021, the IEEE-SA Board approved the IEEE 802.3ct standard.
Early products
Optical signal transmission over a nonlinear medium is principally an analog design problem. As such, it has evolved more slowly than digital circuit lithography (which generally progressed in step with Moore's law). This explains why 10 Gbit/s transport systems had existed since the mid-1990s, while the first forays into 100 Gbit/s transmission happened about 15 years later – a 10× speed increase over 15 years is far slower than the 2× speed increase per 1.5 years typically cited for Moore's law.
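To put the comparison in numbers, a rough sketch using only the figures above (the 1.5-year doubling period is the commonly cited approximation, not a precise law):

```python
# Rough comparison of optical line-rate growth with a Moore's-law-style doubling
# every 1.5 years, using only the figures cited in the paragraph above.
years = 15
doubling_period = 1.5                       # years per 2x (common approximation)
moore_factor = 2 ** (years / doubling_period)
optical_factor = 100 / 10                   # 10 Gbit/s -> 100 Gbit/s

print(f"Moore's-law growth over {years} years:  ~{moore_factor:.0f}x")
print(f"Optical line-rate growth over {years} years: {optical_factor:.0f}x")
# ~1024x versus 10x: roughly two orders of magnitude slower.
```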
Nevertheless, at least five firms (Ciena, Alcatel-Lucent, MRV, ADVA Optical and Huawei) made customer announcements for 100 Gbit/s transport systems by August 2011, with varying degrees of capabilities. Although vendors claimed that 100 Gbit/s light paths could use existing analog optical infrastructure, deployment of high-speed technology was tightly controlled and extensive interoperability tests were required before moving them into service.
Designing routers or switches which support 100 Gbit/s interfaces is difficult. The need to process a 100 Gbit/s stream of packets at line rate without reordering within IP/MPLS microflows is one reason for this.
At the time, most components in the 100 Gbit/s packet-processing path (PHY chips, NPUs, memories) were not readily available off the shelf or required extensive qualification and co-design. Another problem was the low-volume production of 100 Gbit/s optical components, which were also not easily available, especially in pluggable, long-reach or tunable-laser variants.
Backplane
NetLogic Microsystems announced backplane modules in October 2010.
Multimode fiber
In 2009, Mellanox and Reflex Photonics announced modules based on the CFP agreement.
Single mode fiber
Finisar, Sumitomo Electric Industries, and OpNext all demonstrated singlemode 40 or 100 Gbit/s Ethernet modules based on the C form-factor pluggable (CFP) agreement at the European Conference and Exhibition on Optical Communication in 2009.
Compatibility
Optical fiber IEEE 802.3ba implementations were not compatible with the numerous 40 and 100 Gbit/s line rate transport systems because they had different optical layer and modulation formats as the IEEE 802.3ba Port Types show. In particular, existing 40 Gbit/s transport solutions that used dense wavelength-division multiplexing to pack four 10 Gbit/s signals into one optical medium were not compatible with the IEEE 802.3ba standard, which used either coarse WDM in 1310 nm wavelength region with four 25 Gbit/s or ten 10 Gbit/s channels, or parallel optics with four or ten optical fibers per direction.
Test and measurement
Quellan announced a test board in 2009.
Ixia developed Physical Coding Sublayer Lanes and demonstrated a working 100GbE link through a test setup at NXTcomm in June 2008. Ixia announced test equipment in November 2008.
Discovery Semiconductors introduced optoelectronics converters for 100 Gbit/s testing of the 10 km and 40 km Ethernet standards in February 2009.
JDS Uniphase introduced test and measurement products for 40 and 100 Gbit/s Ethernet in August 2009.
Spirent Communications introduced test and measurement products in September 2009.
EXFO demonstrated interoperability in January 2010.
Xena Networks demonstrated test equipment at the Technical University of Denmark in January 2011.
Calnex Solutions introduced 100GbE Synchronous Ethernet synchronisation test equipment in November 2014.
Spirent Communications introduced the Attero-100G for 100GbE and 40GbE impairment emulation in April 2015.
VeEX introduced its CFP-based UX400-100GE and 40GE test and measurement platform in 2012, followed by CFP2, CFP4, QSFP28 and QSFP+ versions in 2015.
Mellanox Technologies
Mellanox Technologies introduced the ConnectX-4 100GbE single and dual port adapter in November 2014. In the same period, Mellanox introduced availability of 100GbE copper and fiber cables. In June 2015, Mellanox introduced the Spectrum 10, 25, 40, 50 and 100GbE switch models.
Aitia
Aitia International introduced the C-GEP FPGA-based switching platform in February 2013. Aitia also produces 100G/40G Ethernet PCS/PMA+MAC IP cores for FPGA developers and academic researchers.
Arista
Arista Networks introduced the 7500E switch (with up to 96 100GbE ports) in April 2013. In July 2014, Arista introduced the 7280E switch (the world's first top-of-rack switch with 100G uplink ports).
Extreme Networks
Extreme Networks introduced a four-port 100GbE module for the BlackDiamond X8 core switch in November 2012.
Dell
Dell's Force10 switches support 40 Gbit/s interfaces. These 40 Gbit/s fiber-optical interfaces using QSFP+ transceivers can be found on the Z9000 distributed core switches, S4810 and S4820 as well as the blade-switches MXL and the IO-Aggregator. The Dell PowerConnect 8100 series switches also offer 40 Gbit/s QSFP+ interfaces.
Chelsio
Chelsio Communications introduced 40 Gbit/s Ethernet network adapters (based on the fifth generation of its Terminator architecture) in June 2013.
Telesoft Technologies Ltd
Telesoft Technologies announced the dual 100G PCIe accelerator card, part of the MPAC-IP series. Telesoft also announced the STR 400G (Segmented Traffic Router) and the 100G MCE (Media Converter and Extension).
Commercial trials and deployments
Unlike the "race to 10 Gbit/s" that was driven by the imminent need to address growth pains of the Internet in the late 1990s, customer interest in 100 Gbit/s technologies was mostly driven by economic factors. The common reasons to adopt the higher speeds were:
to reduce the number of optical wavelengths ("lambdas") used and the need to light new fiber
to utilize bandwidth more efficiently than 10 Gbit/s link aggregate groups
to provide cheaper wholesale, internet peering and data center connectivity
to skip the relatively expensive 40 Gbit/s technology and move directly from 10 to 100 Gbit/s
Alcatel-Lucent
In November 2007, Alcatel-Lucent held the first field trial of 100 Gbit/s optical transmission. Completed over a live, in-service 504 kilometre portion of the Verizon network, it connected the Florida cities of Tampa and Miami.
100GbE interfaces for the 7450 ESS/7750 SR service routing platform were first announced in June 2009, with field trials with Verizon, T-Systems and Portugal Telecom taking place in June–September 2010. In September 2009, Alcatel-Lucent combined the 100G capabilities of its IP routing and optical transport portfolio in an integrated solution called Converged Backbone Transformation.
In June 2011, Alcatel-Lucent introduced a packet processing architecture known as FP3, advertised for 400 Gbit/s rates. Alcatel-Lucent announced the XRS 7950 core router (based on the FP3) in May 2012.
Brocade
Brocade Communications Systems introduced their first 100GbE products (based on the former Foundry Networks MLXe hardware) in September 2010. In June 2011, the new product went live at the AMS-IX traffic exchange point in Amsterdam.
Cisco
Cisco Systems and Comcast announced their 100GbE trials in June 2008. However, it is doubtful that this transmission could approach 100 Gbit/s speeds when using a 40 Gbit/s per slot CRS-1 platform for packet processing. Cisco's first deployment of 100GbE at AT&T and Comcast took place in April 2011. In the same year, Cisco tested the 100GbE interface between the CRS-3 and a new generation of their ASR9K edge router model. In 2017, Cisco announced a 32-port 100GbE Cisco Catalyst 9500 Series switch, and in 2019 the modular Catalyst 9600 Series switch with a 100GbE line card.
Huawei
In October 2008, Huawei presented their first 100GbE interface for their NE5000e router. In September 2009, Huawei also demonstrated an end-to-end 100 Gbit/s link. It was mentioned that Huawei's products had the self-developed NPU "Solar 2.0 PFE2A" onboard and used pluggable CFP optics.
In a mid-2010 product brief, the NE5000e linecards were given the commercial name LPUF-100 and credited with using two Solar-2.0 NPUs per 100GbE port in opposite (ingress/egress) configuration. Nevertheless, in October 2010, the company referenced shipments of the NE5000e to the Russian cell operator "Megafon" as a "40GBPS/slot" solution, with "scalability up to" 100 Gbit/s.
In April 2011, Huawei announced that the NE5000e had been updated to carry 2x100GbE interfaces per slot using LPU-200 linecards. In a related solution brief, Huawei reported 120 thousand Solar 1.0 integrated circuits shipped to customers, but gave no Solar 2.0 numbers. Following the August 2011 trial in Russia, Huawei reported paying customers for 100 Gbit/s DWDM, but no 100GbE shipments on the NE5000e.
Juniper
Juniper Networks announced 100GbE for its T-series routers in June 2009. The 1x100GbE option followed in November 2010, when a joint press release with the academic backbone network Internet2 marked the first production 100GbE interfaces going live in a real network.
In the same year, Juniper demonstrated 100GbE operation between core (T-series) and edge (MX 3D) routers. Juniper, in March 2011, announced first shipments of 100GbE interfaces to a major North American service provider (Verizon).
In April 2011, Juniper deployed a 100GbE system on the UK education network JANET. In July 2011, Juniper announced 100GbE with Australian ISP iiNet on their T1600 routing platform. Juniper started shipping the MPC3E line card for the MX router, a 100GbE CFP MIC, and a 100GbE LR4 CFP optics in March 2012. In Spring 2013, Juniper Networks announced the availability of the MPC4E line card for the MX router that includes 2 100GbE CFP slots and 8 10GbE SFP+ interfaces.
In June 2015, Juniper Networks announced the availability of its CFP-100GBASE-ZR module which is a plug & play solution that brings 80 km 100GbE to MX & PTX based networks. The CFP-100GBASE-ZR module uses DP-QPSK modulation and coherent receiver technology with an optimized DSP and FEC implementation. The low-power module can be directly retrofitted into existing CFP sockets on MX and PTX routers.
Standards
The IEEE 802.3 working group is concerned with the maintenance and extension of the Ethernet data communications standard. Additions to the 802.3 standard are performed by task forces which are designated by one or two letters. For example, the 802.3z task force drafted the original Gigabit Ethernet standard.
802.3ba is the designation given to the higher speed Ethernet task force which completed its work to modify the 802.3 standard to support speeds higher than 10 Gbit/s in 2010.
The speeds chosen by 802.3ba were 40 and 100 Gbit/s to support both end-point and link aggregation needs respectively. This was the first time two different Ethernet speeds were specified in a single standard. The decision to include both speeds came from pressure to support the 40 Gbit/s rate for local server applications and the 100 Gbit/s rate for internet backbones. The standard was announced in July 2007 and was ratified on June 17, 2010.
The 40/100 Gigabit Ethernet standards encompass a number of different Ethernet physical layer (PHY) specifications. A networking device may support different PHY types by means of pluggable modules. Optical modules are not standardized by any official standards body but are specified in multi-source agreements (MSAs). One agreement that supports 40 and 100 Gigabit Ethernet is the CFP MSA, which was adopted for distances of 100+ meters. QSFP and CXP connector modules support shorter distances.
The standard supports only full-duplex operation. Other objectives include:
Preserve the 802.3 Ethernet frame format utilizing the 802.3 MAC
Preserve minimum and maximum frame size of current 802.3 standard
Support a bit error rate (BER) better than or equal to 10^−12 at the MAC/PLS service interface (see the worked example after this list)
Provide appropriate support for OTN
Support MAC data rates of 40 and 100 Gbit/s
Provide physical layer specifications (PHY) for operation over single-mode optical fiber (SMF), laser optimized multi-mode optical fiber (MMF) OM3 and OM4, copper cable assembly, and backplane.
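As a worked example of the BER objective listed above (a simple sketch using only the stated numbers, not text from the standard): at the full 100 Gbit/s MAC rate, a bit error ratio of 10^−12 permits at most about one errored bit every ten seconds.

```python
# Worked example for the BER objective: 10^-12 at the 100 Gbit/s MAC data rate.
bit_rate = 100e9          # bits per second
ber_limit = 1e-12         # worst-case errored bits per transmitted bit

errors_per_second = bit_rate * ber_limit
print(f"Worst-case errored bits per second: {errors_per_second:.1f}")   # 0.1
print(f"Mean time between bit errors: {1 / errors_per_second:.0f} s")   # 10 s
```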
The following nomenclature is used for the physical layers:
The 100 m laser-optimized multi-mode fiber (OM3) objective was met by parallel ribbon cable with 850 nm wavelength 10GBASE-SR-like optics (40GBASE-SR4 and 100GBASE-SR10). The backplane objective was met with four lanes of 10GBASE-KR-type PHYs (40GBASE-KR4). The copper cable objective was met with 4 or 10 differential lanes using SFF-8642 and SFF-8436 connectors. The 10 and 40 km 100 Gbit/s objectives were met with four wavelengths (around 1310 nm) of 25 Gbit/s optics (100GBASE-LR4 and 100GBASE-ER4), and the 10 km 40 Gbit/s objective with four wavelengths (around 1310 nm) of 10 Gbit/s optics (40GBASE-LR4).
In January 2010 another IEEE project authorization started a task force to define a 40 Gbit/s serial single-mode optical fiber standard (40GBASE-FR). This was approved as standard 802.3bg in March 2011. It used 1550 nm optics, had a reach of 2 km and was capable of receiving 1550 nm and 1310 nm wavelengths of light. The capability to receive 1310 nm light allows it to inter-operate with a longer reach 1310 nm PHY should one ever be developed. 1550 nm was chosen as the wavelength for 802.3bg transmission to make it compatible with existing test equipment and infrastructure.
In December 2010, a 10x10 multi-source agreement (10x10 MSA) began to define an optical Physical Medium Dependent (PMD) sublayer and establish compatible sources of low-cost, low-power, pluggable optical transceivers based on 10 optical lanes at 10 Gbit/s each. The 10x10 MSA was intended as a lower cost alternative to 100GBASE-LR4 for applications which do not require a link length longer than 2 km. It was intended for use with standard single mode G.652.C/D type low water peak cable with ten wavelengths ranging from 1523 to 1595 nm. The founding members were Google, Brocade Communications, JDSU and Santur.
Other member companies of the 10x10 MSA included MRV, Enablence, Cyoptics, AFOP, oplink, Hitachi Cable America, AMS-IX, EXFO, Huawei, Kotura, Facebook and Effdon when the 2 km specification was announced in March 2011.
The 10X10 MSA modules were intended to be the same size as the CFP specifications.
On June 12, 2014, the 802.3bj standard was approved. The 802.3bj standard specifies 100 Gbit/s 4x25G PHYs - 100GBASE-KR4, 100GBASE-KP4 and 100GBASE-CR4 - for backplane and twin-ax cable.
On February 16, 2015, the 802.3bm standard was approved. The 802.3bm standard specifies a lower-cost optical 100GBASE-SR4 PHY for MMF and a four-lane chip-to-module and chip-to-chip electrical specification (CAUI-4). The detailed objectives for the 802.3bm project can be found on the 802.3 website.
On May 14, 2018, the 802.3ck project was approved. This has objectives to:
Define a single-lane 100 Gbit/s Attachment Unit interface (AUI) for chip-to-module applications, compatible with PMDs based on 100 Gbit/s per lane optical signaling (100GAUI-1 C2M)
Define a single-lane 100 Gbit/s Attachment Unit Interface (AUI) for chip-to-chip applications (100GAUI-1 C2C)
Define a single-lane 100 Gbit/s PHY for operation over electrical backplanes supporting an insertion loss ≤ 28 dB at 26.56 GHz (100GBASE-KR1).
Define a single-lane 100 Gbit/s PHY for operation over twin-axial copper cables with lengths up to at least 2 m (100GBASE-CR1).
On November 12, 2018, the IEEE P802.3ct Task Force started working to define PHY supporting 100 Gbit/s operation on a single wavelength capable of at least 80 km over a DWDM system (100GBASE-ZR) (using a combination of phase and amplitude modulation with coherent detection).
On December 5, 2018, the 802.3cd standard was approved. The 802.3cd standard specifies PHYs using 50 Gbit/s lanes (100GBASE-KR2 for backplane, 100GBASE-CR2 for twin-axial cable and 100GBASE-SR2 for MMF) and, using 100 Gbit/s signalling, 100GBASE-DR for SMF.
In June 2020, the IEEE P802.3db Task Force started working to define a physical layer specification that supports 100 Gbit/s operation over 1 pair of MMF with lengths up to at least 50 m.
On February 11, 2021, the IEEE 802.3cu standard was approved. The IEEE 802.3cu standard defines single-wavelength 100 Gbit/s PHYs for operation over SMF (Single-Mode Fiber) with lengths up to at least 2 km (100GBASE-FR1) and 10 km (100GBASE-LR1).
100G interface types
Coding schemes
10.3125 Gbaud with NRZ ("PAM2") and 64b66b on 10 lanes per direction
One of the earliest coding schemes used, this widens the scheme used in single-lane 10GE and quad-lane 40G to 10 lanes. Due to the low symbol rate, relatively long ranges can be achieved at the cost of using a lot of cabling.
This also allows breakout to 10×10GE, provided that the hardware supports splitting the port.
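The lane arithmetic can be checked directly; the sketch below simply multiplies the figures given above (10 lanes, 10.3125 Gbaud, 64b66b) and is not taken from the standard itself.

```python
# Payload-rate check for the 10-lane NRZ scheme described above.
lanes = 10
symbol_rate = 10.3125e9        # baud per lane; NRZ carries 1 bit per symbol
coding_efficiency = 64 / 66    # 64b66b line-coding overhead

per_lane_payload = symbol_rate * coding_efficiency      # 10.0 Gbit/s
total_payload = per_lane_payload * lanes                # 100 Gbit/s

print(f"Per-lane payload:  {per_lane_payload / 1e9:.3f} Gbit/s")
print(f"Aggregate payload: {total_payload / 1e9:.1f} Gbit/s")
# Each lane carries exactly one 10GE-compatible 10 Gbit/s stream, which is why
# the port can also be broken out as 10 x 10GE (the 4-lane variant below is the
# same arithmetic at 2.5 times the symbol rate).
```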
25.78125 Gbaud with NRZ ("PAM2") and 64b66b on 4 lanes per direction
A sped-up variant of the above, this directly corresponds to 10GE/40GE signalling at 2.5× speed. The higher symbol rate makes links more susceptible to errors.
If the device and transceiver support dual-speed operation, it is possible to reconfigure a 100G port to downspeed to 40G or 4×10G. There is no autonegotiation protocol for this, so manual configuration is necessary. Similarly, a port can be broken out into 4×25G if implemented in the hardware. This is applicable even for CWDM4, if a CWDM demultiplexer and CWDM 25G optics are used appropriately.
25.78125 Gbaud with NRZ ("PAM2") and RS-FEC(528,514) on 4 lanes per direction
To address the higher susceptibility to errors at these symbol rates, an application of Reed–Solomon error correction was defined in IEEE 802.3bj / Clause 91. This replaces the 64b66b encoding with a 256b257b encoding followed by the RS-FEC application, which combines to the exact same overhead as 64b66b. To the optical transceiver or cable, there is no distinction between this and 64b66b; some interface types (e.g. CWDM4) are defined "with or without FEC."
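The claim that the 256b257b-plus-RS-FEC combination has exactly the same overhead as 64b66b can be verified with a short calculation; the sketch below only compares the two expansion ratios implied by the figures above.

```python
from fractions import Fraction

# 64b66b expands 64 payload bits to 66 transmitted bits.
overhead_64b66b = Fraction(66, 64)

# Clause 91 replaces it with 256b257b transcoding followed by RS(528,514) FEC.
overhead_clause91 = Fraction(257, 256) * Fraction(528, 514)

print(overhead_64b66b)                       # 33/32
print(overhead_clause91)                     # 33/32
print(overhead_64b66b == overhead_clause91)  # True
# Both expand the payload by exactly 3.125%, so the 25.78125 Gbaud lane rate is
# unchanged and the optics or cable cannot tell the two encodings apart.
```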
26.5625 Gbaud with PAM4 and RS-FEC(544,514) on 2 lanes per direction
This achieves a further doubling in bandwidth per lane (used to halve the number of lanes) by employing pulse-amplitude modulation with 4 distinct analog levels, making each symbol carry 2 bits. To keep up error margins, the FEC overhead is doubled from 2.7% to 5.8%, which explains the slight rise in symbol rate.
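The figures in this paragraph are mutually consistent, as the short check below demonstrates; it only re-derives the stated symbol rate and overheads from the code parameters, under the usual convention that the overhead of an RS(n,k) code is n/k.

```python
from fractions import Fraction

# Symbol-rate check: same payload, heavier FEC (544 vs 528 bits per code word).
nrz_rate = Fraction("25.78125")                   # Gbaud, 4-lane NRZ variant above
pam4_rate = nrz_rate * Fraction(544, 528)
print(float(pam4_rate))                           # 26.5625 Gbaud, as stated

# FEC overhead roughly doubles, as stated:
print(round(float(Fraction(528, 514) - 1) * 100, 1))   # 2.7 (%)
print(round(float(Fraction(544, 514) - 1) * 100, 1))   # 5.8 (%)

# Net payload of two such lanes (2 bits/symbol, 256b257b transcoding + RS(544,514)):
expansion = Fraction(257, 256) * Fraction(544, 514)     # reduces to 17/16
payload = 2 * pam4_rate * 2 / expansion                 # lanes * Gbaud * bits/symbol
print(float(payload))                                   # 100.0 Gbit/s
```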
53.125 Gbaud with PAM4 and RS-FEC(544,514) on 1 lane per direction
Further pushing silicon limits, this is a double rate variant of the previous, giving full 100GE operation over 1 medium lane.
30.14475 Gbaud with DP-DQPSK and SD-FEC on 1 lane per direction
Mirroring OTN4 developments, DP-DQPSK (dual polarization differential quadrature phase shift keying) employs polarization to carry one axis of the DP-QPSK constellation. Additionally, new soft decision FEC algorithms take additional information on analog signal levels as input to the error correction procedure.
13.59375 Gbaud with PAM4, KP4 specific coding and RS-FEC(544,514) on 4 lanes per direction
A half-speed variant of 26.5625 Gbaud with RS-FEC, with a 31320/31280 step encoding the lane number into the signal, and further 92/90 framing.
40G interface types
Clause 73 (CL73) auto-negotiation allows the two PHYs to exchange technical capability pages and settle on a common speed and media type. Completion of CL73 initiates CL72 link training, which allows each of the four lanes' transmitters to adjust pre-emphasis via feedback from the link partner.
40GBASE-T is a port type for 4-pair balanced twisted-pair Cat.8 copper cabling up to 30 m defined in IEEE 802.3bq. IEEE 802.3bq-2016 standard was approved by The IEEE-SA Standards Board on June 30, 2016. It uses 16-level PAM signaling over four lanes at 3,200 MBaud each, scaled up from 10GBASE-T.
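A back-of-the-envelope check of these figures follows; the 800 MBaud symbol rate quoted for 10GBASE-T is the commonly cited value and is included only for the scaling comparison, not taken from the 802.3bq text.

```python
# Back-of-the-envelope check of the 40GBASE-T figures quoted above.
lanes = 4                      # twisted pairs
symbol_rate = 3.2e9            # 3,200 MBaud per pair
bits_per_symbol = 4            # 16-level PAM

raw = lanes * symbol_rate * bits_per_symbol
print(f"Raw signalling capacity: {raw / 1e9:.1f} Gbit/s")                 # 51.2

net = 40e9
print(f"Capacity left for coding/framing: {(1 - net / raw) * 100:.1f}%")  # ~21.9%

# The 40/51.2 ratio equals 10/12.8, i.e. the commonly quoted 10GBASE-T figures
# (800 MBaud per pair, PAM-16) scaled up by exactly a factor of four.
```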
Chip-to-chip/chip-to-module interfaces
CAUI-10 is a 100 Gbit/s 10-lane electrical interface defined in 802.3ba.
CAUI-4 is a 100 Gbit/s 4-lane electrical interface defined in 802.3bm Annex 83E with a nominal signaling rate for each lane of 25.78125 GBd using NRZ modulation.
100GAUI-4 is a 100 Gbit/s 4-lane electrical interface defined in 802.3cd Annex 135D/E with a nominal signaling rate for each lane of 26.5625 GBd using NRZ modulation and RS-FEC(544,514), making it suitable for use with 100GBASE-CR2, 100GBASE-KR2, 100GBASE-SR2, 100GBASE-DR, 100GBASE-FR1 and 100GBASE-LR1 PHYs.
100GAUI-2 is a 100 Gbit/s 2-lane electrical interface defined in 802.3cd Annex 135F/G with a nominal signaling rate for each lane of 26.5625 GBd using PAM4 modulation and RS-FEC(544,514), making it suitable for use with 100GBASE-CR2, 100GBASE-KR2, 100GBASE-SR2, 100GBASE-DR, 100GBASE-FR1 and 100GBASE-LR1 PHYs.
100GAUI-1 is a 100 Gbit/s 1-lane electrical interface defined in 802.3ck Annex 120F/G with a nominal signaling rate for each lane of 53.125 GBd using PAM4 modulation and RS-FEC(544,514), making it suitable for use with 100GBASE-CR1, 100GBASE-KR1, 100GBASE-SR1, 100GBASE-DR, 100GBASE-FR1 and 100GBASE-LR1 PHYs.
Pluggable optics standards
The QSFP+ form factor is specified for use with 40 Gigabit Ethernet. Copper direct attach cables (DAC) or optical modules are supported (see Figure 85–20 in the 802.3 specification). QSFP+ modules at 40 Gbit/s can also be used to provide four independent ports of 10 Gigabit Ethernet.
CFP modules use the 10-lane CAUI-10 electrical interface.
CFP2 modules use the 10-lane CAUI-10 electrical interface or the 4-lane CAUI-4 electrical interface.
CFP4 modules use the 4-lane CAUI-4 electrical interface.
QSFP28 modules use the CAUI-4 electrical interface.
SFP-DD or Small Form-factor Pluggable – Double Density modules use the 100GAUI-2 electrical interface.
Cisco's CPAK optical module uses the four lane CEI-28G-VSR electrical interface.
There are also CXP and HD module standards. CXP modules use the CAUI-10 electrical interface.
Optical connectors
Short reach interfaces use Multiple-Fiber Push-On/Pull-off (MPO) optical connectors. 40GBASE-SR4 and 100GBASE-SR4 use MPO-12 while 100GBASE-SR10 uses MPO-24 with one optical lane per fiber strand.
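The connector sizes follow directly from the lane counts, since each optical lane occupies one fiber in each direction; the small sketch below only restates that arithmetic.

```python
# Fibre-count arithmetic behind the MPO connector choices above
# (one fibre per optical lane per direction).
def fibres_needed(lanes_per_direction: int) -> int:
    return 2 * lanes_per_direction          # transmit + receive

for name, lanes, mpo_positions in [
    ("40GBASE-SR4", 4, 12),
    ("100GBASE-SR4", 4, 12),
    ("100GBASE-SR10", 10, 24),
]:
    print(f"{name}: {fibres_needed(lanes)} of {mpo_positions} MPO positions used")
# SR4 uses 8 of the 12 positions of an MPO-12; SR10 uses 20 of the 24 of an MPO-24.
```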
Long reach interfaces use duplex LC connectors with all optical lanes multiplexed with WDM.
See also
Ethernet Alliance
InfiniBand
Interconnect bottleneck
Optical communication
Optical fiber cable
Optical Transport Network
Parallel optical interface
Terabit Ethernet
References
Further reading
Overview of Requirements and Applications for 40 Gigabit Ethernet and 100 Gigabit Ethernet Technology Overview White Paper (Archived 2009-08-01) – Ethernet Alliance
40 Gigabit Ethernet and 100 Gigabit Ethernet Technology Overview White Paper – Ethernet Alliance
External links
Ethernet Alliance
IEEE P802.3ba 40Gbit/s and 100Gbit/s Ethernet Task Force
IEEE P802.3ba 40Gbit/s and 100Gbit/s Ethernet Task Force public area
Higher Speed Study Group documents
Ethernet |
27750 | https://en.wikipedia.org/wiki/Script%20kiddie | Script kiddie | A script kiddie, skiddie, or skid is a relatively unskilled individual who uses scripts or programs, such as a web shell, developed by others to attack computer systems and networks and deface websites, according to the programming and hacking cultures. It is generally assumed that most script kiddies are juveniles who lack the ability to write sophisticated programs or exploits on their own and that their objective is to try to impress their friends or gain credit in computer-enthusiast communities. However, the term does not necessarily relate to the actual age of the participant. The term is considered to be derogatory.
Characteristics
In a Carnegie Mellon report prepared for the U.S. Department of Defense in 2005, script kiddies are defined as "The more immature but unfortunately often just as dangerous exploiter of security lapses on the Internet. The typical script kiddy uses existing and frequently well known and easy-to-find techniques and programs or scripts to search for and exploit weaknesses in other computers on the Internet—often randomly and with little regard or perhaps even understanding of the potentially harmful consequences."
Script kiddies have at their disposal a large number of effective, easily downloadable programs capable of breaching computers and networks.
Script kiddies vandalize websites both for the thrill of it and to increase their reputation among their peers. Some more malicious script kiddies have used virus toolkits to create and propagate the Anna Kournikova and Love Bug viruses.
Script kiddies lack, or are only developing, programming skills sufficient to understand the effects and side effects of their actions. As a result, they leave significant traces which lead to their detection, or directly attack companies which have detection and countermeasures already in place, or in some cases, leave automatic crash reporting turned on.
One of the most common types of attack utilized by script kiddies involves a form of social engineering, whereby the attacker somehow manipulates or tricks a user into sharing their information. This is often done through the creation of fake websites where users will input their login (a form of phishing), thus allowing the script kiddie access to the account.
Game hacking
A subculture of the hacking and programming communities, cheat developers, are responsible for the development and maintenance of cheat clients. These individuals must circumvent the target program's security features so as to remain undetected by anti-cheat software. Script kiddies are known to download and slightly modify something that a cheat developer created.
See also
Black hat hacker
Exploit (computer security)
Hacker (computer security)
Hacktivism
Lamer
List of convicted computer criminals
Luser
Noob
Web shell, a tool that script kiddies frequently use
References
Further reading
The Art of Intrusion: The Real Stories Behind the Exploits of Hackers, Intruders and Deceivers (2005)
External links
Honeynet.org - Know Your Enemy (Essay about script kiddies) preserved at Internet Archive
Cracking the Hacker Mindset
Hacking (computer security)
Computing culture
Pejorative terms for people |
62375013 | https://en.wikipedia.org/wiki/Secure%20access%20service%20edge | Secure access service edge | A secure access service edge (SASE) is a technology used to deliver wide area network (WAN) and security controls as a cloud computing service directly to the source of connection (user, device, branch office, Internet of things (IoT) device, or edge computing location) rather than a data center. Security is based on digital identity, real-time context, and company and regulatory compliance policies. A digital identity may be attached to anything from a person to a device, branch office, cloud service, application software, IoT system, or an edge computing location. The term was coined by marketing analyst firm Gartner.
The practice of backhauling all WAN traffic over long distances to one or a few corporate data centers for security adds network latency when users and their applications are dispersed, rather than on-premises.
Overview
SASE combines SD-WAN with computer security functions, including cloud access security brokers (CASB), Secure Web Gateways (SWG), antivirus/malware inspection, virtual private networking (VPN), firewall as a service (FWaaS), and data loss prevention (DLP), all delivered by a single cloud service at the network edge.
SASE SD-WAN service enhancements may include traffic prioritization, WAN optimization and converged backbones to enhance reliability and maximize performance.
WAN and security functions are typically delivered as a single service at dispersed SASE points of presence (PoPs) located as close as possible to dispersed users, branch offices and cloud services. To access SASE services, edge locations or users connect to the closest available PoP. SASE vendors may contract with several backbone providers and peering partners to offer customers fast, low-latency WAN performance for long-distance PoP-to-PoP connections.
History and drivers
The term SASE was coined by Gartner analysts Neil McDonald and Joe Skorupa and described in a July 29, 2019 networking hype cycle and market trends report, and an August 30, 2019 Gartner report.
SASE is driven by the rise of mobile, edge and cloud computing in the enterprise at the expense of the LAN and corporate data center. As users, applications and data move out of the enterprise data center to the cloud and network edge, moving security and WAN to the edge as well is necessary to minimize latency and performance issues.
The cloud computing model is meant to delegate and simplify delivery of SD-WAN and security functions to multiple edge computing devices and locations. Based on policy, different security functions may also be applied to different connections and sessions from the same entity, whether SaaS applications, social media, data center applications or personal banking, according to Gartner.
The cloud architecture boasts typical cloud enhancements such as elasticity, flexibility, agility, global reach and delegated management.
Characteristics
SASE principal elements are:
Convergence of WAN and network security functions.
A cloud-native architecture delivering converged WAN and security as a service that offers the scalability, elasticity, adaptability and self-healing typical of all cloud services.
Globally distributed fabric of PoPs guaranteeing a full range of WAN and security capabilities with low latency, wherever business offices, cloud applications and mobile users are located. To deliver low latency at any location, SASE PoPs have to be more numerous and extensive than those offered by typical public cloud providers and SASE providers must have extensive peering relationships.
Identity-driven services. An identity can be attached to anything from a person or branch office to a device, application, service, IoT device or edge computing location at the source of connection. Identity is the most significant context affecting SASE security policy. However, location, time of day, the risk/trust posture of the connecting device, the application in use, and data sensitivity provide other real-time context that determines the security services and policies applied to and throughout each WAN session (an illustrative sketch of such a policy decision follows this list).
Support for all edges equally, including physical locations, cloud data centers, users’ mobile devices and edge computing, with placement of all capabilities at the local PoP rather than the edge location. Edge connections to the local PoP may vary from an SD-WAN for a branch office to a VPN client or clientless Web access for a mobile user, to multiple tunnels from the cloud or direct cloud connections inside a global data center.
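There is no standardized SASE API, so the following is a purely illustrative sketch of how an identity- and context-driven policy decision of the kind described above might be expressed; every name, attribute and rule in it is hypothetical.

```python
# Purely illustrative sketch of an identity- and context-driven policy decision;
# all names, attributes and rules are hypothetical. Real SASE services implement
# this logic internally and do not expose a standard API of this shape.
from dataclasses import dataclass
from typing import List

@dataclass
class Context:
    identity: str           # user, device, branch office, IoT device, ...
    device_trust: str       # e.g. "managed" or "unmanaged"
    location: str
    application: str
    data_sensitivity: str   # e.g. "public" or "confidential"

def security_functions(ctx: Context) -> List[str]:
    """Pick the security services to apply to one WAN session."""
    functions = ["secure-web-gateway", "dlp"]        # baseline for every session
    if ctx.device_trust != "managed":
        functions += ["casb", "limited-access"]
    if ctx.data_sensitivity == "confidential":
        functions += ["full-inspection", "encryption"]
    if ctx.application == "personal-banking":
        functions = ["bypass-inspection"]            # privacy-driven exception
    return functions

print(security_functions(Context("alice@example.com", "managed",
                                 "branch-office", "saas-crm", "confidential")))
```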
Gartner and others promote a SASE architecture for the mobile, cloud enabled enterprise. Benefits include:
Reduced complexity
Reduced complexity that comes with the cloud model and a single vendor for all WAN and security functions, vs. multiple security appliances from multiple vendors at each location. Reduced complexity also comes from a single-pass architecture that decrypts the traffic stream and inspects it once with multiple policy engines rather than chaining multiple inspection services together.
Universal access
A SASE architecture is designed to provide consistently fast, secure access to any resource from any entity at any location, as opposed to access primarily based on the corporate data center.
Cost efficiency
Cost efficiency of the cloud model, which shifts up-front capital costs to monthly subscription fees, consolidates providers and vendors, and reduces the number of physical and virtual branch appliances and software agents that IT has to purchase, manage and maintain in-house. Cost reduction also comes from delegation of maintenance, upgrades and hardware refreshes to the SASE provider.
Performance
Performance of applications and services enhanced by latency-optimized routing, which is particularly beneficial for latency-sensitive video, VoIP and collaboration applications. SASE providers can optimize and route traffic through high-performance backbones contracted with carrier and peering partners.
Ease of use
Depending on the implementation, SASE is likely to reduce the number of apps and agents required for a device to a single app and provides a consistent experience to the user regardless of where they are or what they are accessing.
Consistent security
Consistent security via a single cloud service for all WAN security functions and WAN connections. Security is based on the same set of policies, with the same security functions delivered by the same cloud service to any access session, regardless of application, user or device location and destination (cloud, data center application). Once the SASE provider adapts to a new threat, the adaptation can be available to all the edges.
SASE Vendors
The following companies offer SASE products and services:
Criticism
Criticism of SASE has come from several sources, including IDC and IHS Markit, as cited in a November 9, 2019 sdxcentral post written by Tobias Mann. Both analyst firms criticize SASE as a Gartner term that is neither a new market, technology nor product, but rather an integration of existing technology with a single source of management.
Clifford Grossner of IHS Markit criticizes the lack of analytics, artificial intelligence and machine learning as part of the SASE concept and the likelihood that enterprises won't want to get all SD-WAN and security functions from a single vendor. Gartner counters that service chaining of security and SD-WAN functions from multiple vendors yields “inconsistent services, poor manageability and high latency.”
IDC analyst Brandon Butler cites IDC's position that SD-WAN will evolve to SD-Branch, defined as centralized deployment and management of virtualized SD-WAN and security functions at multiple branch office locations.
Complementary technology
SD-WAN
SD-WAN is a technology that simplifies wide area networking through centralized control of the networking hardware or software that directs traffic across the WAN. It also allows organizations to combine or replace private WAN connections with Internet broadband, LTE and/or 5G connections. The central controller sets policies and prioritizes, optimizes and routes WAN traffic, selecting the best link and path dynamically for optimum performance. SD-WAN vendors may offer some security functions with their SD-WAN virtual or physical appliances, which are typically deployed at the data center or branch office.
Typically SASE incorporates SD-WAN as part of a cloud service that also delivers mobile access and a full security stack delivered from a local PoP.
Network as a Service (NaaS)
SASE and NaaS overlap in concept. NaaS delivers virtualized network infrastructure and services using a cloud subscription business model. Like SASE, it offers reduced complexity and management costs. Typically, different NaaS providers offer different service packages, such as a package of WAN and secure VPNs as a service, bandwidth on demand, or hosted networks as a service. By contrast, SASE is meant to be a single comprehensive secure SD-WAN solution for branch offices, mobile users, data centers and any other secure enterprise WAN requirement.
Next Generation Firewall (NGFW)
NGFW combines a traditional firewall with other security and networking functions geared to the virtualized data center. Security functions include application control, deep and encrypted packet inspection, intrusion prevention, Web site filtering, anti-malware, identity management, threat intelligence and even WAN quality of service and bandwidth management.
NGFW offers a subset of the security stack offered by SASE, and typically doesn't include SD-WAN services. NGFW may be deployed on premises or as a cloud service, while SASE is a cloud architecture by definition. While SASE focuses security on WAN connections, a NGFW can be deployed anywhere including internally in the data center.
Firewall as a Service (FWaaS)
FWaaS is a firewall offered as a cloud service, rather than on premises as software or hardware. Most FWaaS providers offer NGFW capabilities. Typically, an entire organization is connected to a single FWaaS cloud with no requirement for maintaining its own firewall infrastructure. SASE combines edge FWaaS with other security functions and SD-WAN.
Marketplace
Gartner classifies SASE as an emerging market with several vendors offering a large number of SASE capabilities, but no single provider offering the entire SASE portfolio. It lists 14 companies in several market categories as SASE players, including Cato Networks, Zscaler, Cloudflare, Cisco, Akamai, Palo Alto Networks, Symantec, VMware and Netskope, and expects some of the major cloud providers to move into this category. Gartner doesn't expect a complete SASE offering to be available until sometime in 2020.
Standards
MEF, which was created as the Metro Ethernet Forum, has become a next-generation standards organization with a broad focus on software-defined network and security infrastructure services for service providers, technology manufacturers, and enterprise network design. With the aim of creating a future where interoperation between "best of breed" solutions is possible, MEF set out to create a number of industry standards that could be leveraged for training as well as integration. The MEF SASE Services Definition (MEF W117) committee was established and will be providing a draft technical specification for public use. This specification has been the work of a number of technology manufacturers as well as several service providers and is based on current MEF technical specifications such as MEF 70.1 Draft Release 1 SD-WAN Service Attributes and Service Framework.
MEF released a working draft, "MEF W117 draft 1.01 SASE (Secure Access Service Edge) Service Attributes and Service Framework", in August 2021. The document is available to MEF participating companies and members.
References
Wide area networks |
42297690 | https://en.wikipedia.org/wiki/Percona | Percona | Percona is an American company based in Durham, North Carolina and the developer of a number of open source software projects for MySQL, MariaDB, PostgreSQL, MongoDB and RocksDB users. The company’s revenue of around $25 million a year is derived from support, consultancy and managed services of database systems.
The company was founded in 2006 by Peter Zaitsev and Vadim Tkachenko.
Open source software
Percona maintains a GitHub repository for their open source software, which can also be downloaded from the Percona website.
MySQL database software:
Percona Server for MySQL
Percona XtraDB Cluster
Percona XtraBackup
Percona Kubernetes Operator for Percona XtraDB Cluster
MongoDB database software:
Percona Server for MongoDB
Percona Kubernetes Operator for Percona Server for MongoDB
Percona Backup for MongoDB
Microsoft SQL Server database software:
Percona Server for SQL Server
Percona Kubernetes Operator for Percona Server for SQL Server
Percona Backup for SQL Server
PostgreSQL database distribution:
Percona Distribution for PostgreSQL
Database Management Tools
Percona Monitoring and Management
Percona Toolkit
Other information
The company’s founders, Peter Zaitsev and Vadim Tkachenko, co-authored with Baron Schwartz the book High Performance MySQL (3rd edition), published by O’Reilly.
References
External links
MySQL
Free software companies |
9070451 | https://en.wikipedia.org/wiki/AMD%20690%20chipset%20series | AMD 690 chipset series | The AMD 690 chipset series is an integrated graphics chipset family which was developed and manufactured by AMD subsidiary ATI for both AMD and Intel platforms focusing on both desktop and mobile computing markets. The corresponding chipset for the Intel platform has a marketing name of Radeon Xpress 1200 series.
The chipsets production began in late 2006 with codenames RS690 and RS600, where both of them share similar internal chip design, targeting at the desktop market. Mobile versions of both chipsets have codenames RS690M and RS600M. The marketing name for this chipset on the Intel platform is the Radeon Xpress 1200 series (Radeon Xpress 1200 to Radeon Xpress 1270) while the name for the chipset on the AMD platform is 690G.
Both the 690G and Radeon Xpress 1200 chipsets include an integrated graphics processing unit (IGP) based on the ATI Radeon X700 series GPUs with ATI Avivo technology included for hardware video acceleration. Mobile versions have reduced power consumption with adaptive power management features (PowerPlay). The 690G and Radeon Xpress 1250 chipsets are direct successors to Xpress 1600 integrated graphics chipsets (codenamed RS480 and RS400).
Starting in late 2006, mobile versions of the 690 chipset (RS690M) were rolled out in volume by major notebook computer manufacturers, including HP, Asus, Dell, Toshiba, Acer, and others. For some OEMs (including Dell and Acer), the M690 series chipset was to replace the Radeon Xpress 1150 (codenamed RS485M) on the mobile platform, and desktop variants of the 690 chipset were announced in February 2007.
The 690 chipset series consists of three members: 690G, 690V and M690T. The planned "RD690" enthusiast chipset was canceled in the official roadmap without explanation and no release date was given for the "RX690" chipset which has no IGP and only one PCI-E x16 slot.
After ATI was acquired by AMD in July 2006, plans for the Radeon Xpress 1250 chipset for the Intel platform were canceled while the 690G/M690 chipsets for the AMD platform became the main production target. AMD released the chipsets to only two vendors, Abit and AsRock. Abit signed on prior to the AMD acquisition and AsRock was given the remaining inventory of RS600 chips for the Chinese market.
On AMD Technology Analyst Day 2007, AMD announced that 4 million units of 690 chipsets had been shipped to customers, calling it a commercial success. With that in mind, AMD announced on January 21, 2008 that the series would be further extended to embedded systems with the last member, the AMD M690E chipset.
Lineup
The chipset has several variants, they are summarized below, sorted by their northbridge codename.
The first is the RS690, the basic chipset, implemented as the 690G. The second is the RS690C, a simplified version of the 690G without TMDS support, named the 690V. The third in the series is the RS690M for mobile platforms, named the M690. The fourth is the RS690MC, a simplified version of the M690 without TMDS support, called the M690V. Another member of the lineup is the RS690T, a variant of the M690 chipset with a local frame buffer (see below). The member for embedded systems, the M690E, is basically an M690T with different display output configurations.
Key features
IGP General features
Chipset models in the series (excluding RD690 and RX690) feature an Integrated Graphics Processor (IGP) which is incorporated into the northbridge and manufactured on an 80 nm fabrication process. The IGP's 3D architecture is based on Radeon R420 and contains 4 pixel pipelines capable of Shader Model version 2.0b with DirectX 9 and OpenGL 2.0 compatibility but lacks hardware vertex processing. It uses a shared memory architecture, meaning system RAM is shared with the IGP. The IGP was the first chip in ATI's integrated lineup that included ATI Avivo capabilities (also seen in the Radeon X1000 series), and is therefore capable of decoding videos of resolution up to 720p/1080i in hardware.
Both chipsets in this family are fully compatible with the Windows Aero interface and are Windows Vista Premium certified. Also supported by the chipset are PCI slots, high definition 7.1 channel audio and Gigabit Ethernet.
The northbridge has a TDP of 13.8 watts, or an average of 8 watts, and is pin-compatible with the RS485 northbridge. The northbridge supports HyperTransport 2.0 at 1 GHz and an additional 3 PCI Express x1 slots. The northbridge and southbridge (SB600) are connected via "ALink II". This is in reality 4 PCIe lanes, providing 2 GB/s of bandwidth.
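The 2 GB/s figure follows from ordinary first-generation PCI Express lane arithmetic when both directions are counted; the sketch below is only that calculation (assuming the usual PCIe 1.x rate of 2.5 GT/s per lane with 8b/10b coding), not an AMD specification.

```python
# Bandwidth arithmetic for the ALink II interconnect (4 PCIe lanes), assuming
# the usual first-generation PCIe figures: 2.5 GT/s per lane with 8b/10b coding.
lanes = 4
transfer_rate = 2.5e9            # transfers per second per lane
coding_efficiency = 8 / 10       # 8b/10b line coding

per_lane = transfer_rate * coding_efficiency / 8        # 250 MB/s per direction
per_direction = per_lane * lanes                        # 1 GB/s
aggregate = 2 * per_direction                           # both directions

print(f"Per direction: {per_direction / 1e9:.1f} GB/s")
print(f"Aggregate (both directions): {aggregate / 1e9:.1f} GB/s")
# Matches the 2 GB/s figure quoted above when both directions are counted.
```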
690G
For the 690G, the IGP was named "Radeon X1250", operating at a 400 MHz clock frequency, with VGA, HDMI and dual-link DVI-D output, HDCP support for single-link transmission and TMDS support for HDMI output (however, a DVI-to-D-sub adapter will not work, as it is not compatible with the DVI-D interface due to the lack of the four analog pins of DVI-A and DVI-I). One HDMI output can be active at the DVI/HDMI interface or at the TMDS interface; HDCP support is limited to only one of those interfaces at any time. The chipset also supports dual output of VGA and DVI, or of DVI and HDMI, simultaneously, to achieve a maximum of two active monitors out of three attached outputs (called "SurroundView"), and up to four independent, active displays with an additional video card.
The 690G chipset also supports a maximum of 24 additional PCI Express lanes and a PCI Express x16 expansion slot, and the chipset mixes audio and video signals and output through the HDMI interface. The mobile version of the chipset is the M690 chipset (codenamed RS690M).
AMD dropped support for Windows (starting from Windows 7) and Linux drivers for the Radeon X1250 graphics integrated in the 690G chipset, stating that users should use the open-source graphics drivers instead. The latest available proprietary AMD Linux driver for the 690G chipset is fglrx version 9.3, which is outdated and no longer compatible with current Linux distributions.
The free and open-source driver for AMD graphics in the Linux kernel supports both 3D acceleration and hardware decoders as of kernel 3.12, and is unlikely to drop support for this (or any AMD graphics it already supports) in the foreseeable future. Being part of the kernel, it needs no installation or configuration.
690V
For 690V, "Radeon X1200" was the name of the IGP, with clock frequency of 350 MHz. The major differences between the 690G and 690V chipsets is that the 690V chipset lacks support for TMDS and HDMI output, and is therefore limited to VGA or LVDS output only. The mobile version of the chipset is the M690V chipset (codenamed RS690MC).
M690T
Originally codenamed "RS690T", the chipset is for mobile platforms only. Featuring an optional 16-bit DDR2 side-port memory with maximum 128 MiB capacity as local frame buffer. Sources revealed that the RS690T chipset may pair with SB700 southbridge and named as the "trevally" platform focusing the mobile market. It is worth to note that the RS690T chipset has been added to AMD "longevity programme", that is AMD committed to supply the chipset for at least five years after general availability. However, currently, M690T chipset was coupled with SB600 southbridge. The chipset was officially referred as "M690T chipset with Radeon X1270 graphics".
M690E
Announced on January 21, 2008, the M690E chipset, as the suffix "E" suggests, is solely for embedded systems, providing the same feature set as the M690T chipset, but with the analog TV output interface replaced by a secondary TMDS output interface, providing a total of two DVI/HDMI outputs with HDCP support limited to one of those interfaces at any time.
Radeon Xpress 1250
The Radeon Xpress 1250 is a version of the 690G chipset for Intel processors, codenamed RS600. It supports all 690G features, but the HyperTransport controller is replaced with a QDR FSB controller and it also contains a dual-channel DDR2 memory controller. The IGP is clocked at 500 MHz instead of the 690G's 400 MHz.
Since Intel did not grant a 1333 MHz FSB license to ATI Technologies after the company was purchased by AMD, the Radeon Xpress 1250 only comes with official support for a 1066 MHz front-side bus (FSB). However, support for a 1333 MHz FSB had evidently been given priority while the RS600 was being developed, with the result that Xpress 1250 motherboards can actually run a 1333 MHz FSB via overclocking and support all 1333 MHz FSB Core 2 Duo and Core 2 Quad microprocessors.
Only Abit released a motherboard with this chipset, as a result of an agreement signed before the AMD-ATI merger, while ASRock was reported to have purchased all of the remaining RS600 inventory in a strategic move by AMD to clear its RS600 stock, making Abit and ASRock the only RS600 motherboard manufacturers.
Northbridge issues (690G, M690, 690V, M690V, M690T, M690E)
AMD does not provide any RS690 errata publicly (AMD document ER_RS690A5 for Revision A11 & ER_RS690B4 and its addendum for Revision A12). Most OSes require patches in order to work reliably.
Windows platform:
Stop error 0x000000EA might rarely be encountered due to an internal hardware optimization on revision A12 northbridges (related to the AMD errata addendum of ER_RS690B4). AMD stated it would release a new driver in 2010 to fix it.
Southbridge issues(SB600)
AMD does not provide any SB600 errata publicly (AMD document ER_IXP600AB7 for Revision A12, ER_IXP600AC33 for Revision A13 and ER_SB600AD12 for Revision A21). Most OSes require patches in order to work reliably.
Windows platform:
Microsoft KB982091
Microsoft KB931369
Microsoft KB924051
Linux platform:
USB freeze when multiple devices are connected through hub (related to AMD Product Advisory PA_SB600AL1)
SATA soft reset fails when PMP is enabled and devices will not be detected (does not apply to A11 and A12 revisions)
SATA internal errors are ignored because SATA will set Serial ATA port Error when it should not
SATA commands in AHCI mode are limited to 255 sectors per command because of NCQ problems
SATA controller does not support MSI
List of mainboards using 690 chipset
The lineup and output features comparison for 690 chipset series motherboards are summarized below.
Note: data below does not include RS690M and RS600M - mobile editions of RS690 and RS600 chipsets.
See also
AMD 580 chipset series
Comparison of ATI Chipsets
Comparison of AMD Chipsets
References
External links
ATI official website
AMD official website
Embedded single board computers with AMD chipsets
Motherboards with AMD chipsets
Advanced Micro Devices chipsets
ATI Technologies products
Computer-related introductions in 2007 |
1014878 | https://en.wikipedia.org/wiki/Mellel | Mellel | Mellel (, the Hebrew for "text") is a word processor for Mac OS X, developed since 2002 and marketed as especially suited for technical and academic writers, and for writers with long, complex documents. It is made by Mellel AAR, a small software company. New features are added to the program every few months, many of which come from user suggestions. Its closest competitor is Nisus Writer Pro.
One notable feature of Mellel is its multilanguage support. Languages with non-Latin alphabets, including Arabic, Syriac, Hebrew, Greek, Korean and Persian, are handled well because Mellel uses its own text engine, which does not rely on macOS text support, together with support for Unicode and OpenType fonts.
Mellel also offers a feature set suited to working with long and complex documents, to match the needs of scholars and technical writers. Mellel has a distinctive way of handling footnotes and endnotes, allowing the creation of numerous "streams" of notes in a single document. This feature allows the inclusion of three or more footnote types at the same time (e.g., editor notes, translator notes, endnotes, regular notes, etc.). Cross-references are also dealt with in a singular way by Mellel. The software offers support for Outline, based on headings in the document text. With Mellel 4.0, the software added an Index tool which, according to the company website, "rivals dedicated Index applications". Mellel has a unique way of handling styles, which are also organised as part of a style set that can be used by multiple documents. Its Replace Styles feature is able to reformat a large amount of text scattered throughout a document (a feature also present in OpenOffice.org Writer and in Nisus Writer for Mac OS Classic).
Due to its unique approach to implementing many of its features, Mellel originally lacked full compatibility with standard word processors but, as of July 2020, exports accurately to .docx and to ePub, according to the developer's website. From the outset, Mellel has exported accurately to PDF.
Mellel provides tight integration with Bookends and Sente, bibliographical tools for managing citations, including Live Bibliography (instant updates of Bibliography when one adds a citation).
A version of Mellel is available for the iPad.
The owners of Mellel announced in January 2011 that they would give free updates for life to anyone who purchased the application outside the Mac App Store. According to the company, this policy was dropped in February 2014. With the release of Mellel 4.0, Mellel AAR, the developer, announced that a paid upgrade policy was reinstated (two years of free updates). This policy does not apply to users who purchased Mellel during the "lifetime" period.
The current version of Mellel, 5.0, reportedly supports macOS 10.7.x or later. Although Mellel is compatible with macOS 10.15 Catalina, users may not want to upgrade to Catalina if, for example, they use MathMagic or MathType to insert equations; these were not 64-bit compatible as of July 2020.
See also
List of word processors
MathMagic equation editor for Mellel, supporting automatic baseline alignment
MathType equation editor for Mellel.
Bookends bibliographic program, tightly integrated with Mellel
References
External links
Company homepage
About Mellel
MacOS-only software
MacOS word processors
Shareware
Software companies of Israel |
805720 | https://en.wikipedia.org/wiki/NcFTP | NcFTP | NcFTP is an FTP client program which debuted in 1990 as the first alternative FTP client. It was created as an alternative to the standard UNIX ftp program, and offers a number of additional features and greater ease of use.
NcFTP is a command-line user interface program, and runs on a large number of platforms.
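For illustration, the sketch below (Python) shows one way the package's bundled ncftpput batch utility might be driven from a script to upload a directory tree recursively; the host, credentials and paths are placeholders, and the exact flags should be checked against the installed version's documentation.

```python
# Illustrative sketch: calling NcFTP's batch upload tool from Python.
# Assumes ncftpput is installed and on PATH; host, user, password and
# paths below are placeholders, not real values.
import subprocess

def upload_tree(host, user, password, remote_dir, local_dir):
    """Recursively upload local_dir to remote_dir on host via ncftpput."""
    cmd = [
        "ncftpput",
        "-R",              # recursive upload (per the tool's documentation)
        "-u", user,
        "-p", password,
        host,
        remote_dir,
        local_dir,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    upload_tree("ftp.example.com", "demo", "secret", "/incoming", "./site")
```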
See also
NcFTPd
Lftp
Comparison of FTP client software
Wget
References
Peter Leung (March 14, 2006) Upload directories recursively with NcFTP, Linux.com
Richard Petersen, Fedora 7 & Red Hat Enterprise Linux: the complete reference, Edition 4, McGraw-Hill Professional, 2007, , p. 342
Karl Kopper, The Linux Enterprise Cluster: build a highly available cluster with commodity hardware and free software, No Starch Press, 2005, , pp. 369–371
External links
IBM - NcFTP: The flexible FTP client
Debian - Details of package ncftp in jessie
Free FTP clients
1991 software
Free software programmed in C |
4991549 | https://en.wikipedia.org/wiki/Mount%20Brydges%20Bulldogs | Mount Brydges Bulldogs | The Mount Brydges Bulldogs are a Junior ice hockey team based in Mount Brydges, Ontario, Canada. They play in the Provincial Junior Hockey League and are three-time provincial champions.
History
The Mount Brydges Bulldogs were founded in 1975 as members of the Western Junior D Hockey League.
The 1983–84 season saw the Bulldogs pull off a 22–7–5 record, which carried them to their first Western league championship, despite competition from the Exeter Hawks, who suffered only two losses during the regular season. In the provincial final, the Bulldogs met the Grand Valley Harvesters of the Northern Junior D Hockey League, winning their first series 4 games to 2.
The Bulldogs were a tough team until the mid-1980s. In 1988, the Western league absorbed the Southern league and became an eighteen-team super-league. From then into the mid-1990s, the Bulldogs struggled. In 1991, the Western League was disbanded and replaced with the OHA Junior Development League.
The 1996–97 season proved to be one of the best ever for the Bulldogs. At the end of the regular season, the Bulldogs were at the top of the league with 34 wins and only 3 losses. The Bulldogs battled through three rounds of playoffs to meet the Wellesley Applejacks at the OHAJDL finals. The Bulldogs earned their second OHA Cup with a 4-games-to-none sweep.
The Bulldogs finished the 2000-01 season in eighth place overall with a record of 23 wins and 14 losses. The Bulldogs defeated their competition in the first three rounds of the playoffs to win their conference title. In the provincial championship final they met the Wellesley Applejacks, who were caught flat-footed and defeated 4 games to 1. This marked the Bulldogs' third provincial championship.
The 2004–05 season was a strong one for the Bulldogs who finished sixth place overall in the OHAJDL. They again battled through the first three rounds of the playoffs, winning their conference. In the end, they met the Hagersville Hawks, an opponent that was not to be denied an OHA Cup. The Bulldogs lost the series 4-games-to-1.
The Bulldogs finished the 2005-06 season with a .500 record and ranked eleventh place overall. In the first round of the playoffs, the Bulldogs pulled off a major upset knocking off the number one ranked Mitchell Hawks 4-games-to-2. In round 2, they met the Thamesford Trojans who defeated the Bulldogs 4-games-to-1.
Mount Brydges pulled off a 21-win season in 2006-07 and finished in tenth place overall. In the first round of the playoffs they lost 4-games-to-1.
The 2007-2008 season proved to be their best regular season in franchise history. With 36 wins and only 3 losses, the Bulldogs were number one in the league with 73 points. The Bulldogs swept their way through the first two rounds of the playoffs before meeting the perennial powerhouse Thamesford Trojans in the Yeck Conference finals. Thamesford powered past Mount Brydges 4 games to none, en route to their 2008 championship.
The 2008-2009 season was another great season in the history of the franchise, with the Bulldogs posting 33 wins and only 6 losses and finishing first overall. They again swept their way to the Conference Final before losing to the eventual league champions, the North Middlesex Stars, 4 games to 1.
The 2009-2010 season brought changes within the organization resulting in a third-place finish for the regular season and first-round knockout in the playoffs.
During the 2017-2018 season, the Bulldogs and North Middlesex Stars engaged in the inaugural season series entitled the Battle of Highway 81. The Bulldogs would go on to win the series on the final day of the regular season and down the Stars by a 3-2 season tally.
Battle of Highway 81 Series Scores 2017-18:
Game 1: Stars 4-2 Bulldogs,
Game 2: Bulldogs 3-1 Stars,
Game 3: Stars 5-2 Bulldogs,
Game 4: Bulldogs 5-2 Stars,
Game 5: Bulldogs 4-1 Stars
Season-by-season standings
Playoffs
1984 Won league, Won OHA Championship
Mount Brydges Bulldogs defeated Grand Valley Harvesters 4-games-to-2 in OHA final
1997 Won league
Mount Brydges Bulldogs defeated Wellesley Applejacks 4-games-to-none in final
2001 Won league, Won OHA Championship
Mount Brydges Bulldogs defeated Wellesley Applejacks 4-games-to-1 in final
2005 Won League, Lost OHA Championship
Hagersville Hawks defeated Mount Brydges Bulldogs 4-games-to-1 in final
2006 Lost conference semi-final
Thamesford Trojans defeated Mount Brydges Bulldogs 4-games-to-1 in conf. semi-final
2008 Lost conference final
Mount Brydges Bulldogs defeated Exeter Hawks 4-games-to-none in conf. quarter-final
Mount Brydges Bulldogs defeated West Lorne Lakers 4-games-to-none in conf. semi-final
Thamesford Trojans defeated Mount Brydges Bulldogs 4-games-to-0 in conf. final
2009 Lost conference final
Mount Brydges Bulldogs defeated Exeter Hawks 4-games-to-none in conf. quarter-final
Mount Brydges Bulldogs defeated Port Stanley Sailors 4-games-to-none in conf. semi-final
North Middlesex Stars defeated Mount Brydges Bulldogs 4-games-to-1 in conf. final
2010 Lost quarter-final
defeated 4-games-to-none
2011 Lost semi-final
Mount Brydges Bulldogs defeated Lucan Irish 4-games-to-3 in quarter-final
Thamesford Trojans defeated Mount Brydges Bulldogs 4-games-to-none in semi-final
Notable alumni
Jason Williams
References
External links
Official OHA Bulldogs' Website
Southern Ontario Junior Hockey League teams |
16769755 | https://en.wikipedia.org/wiki/HP%202133%20Mini-Note%20PC | HP 2133 Mini-Note PC | The HP 2133 Mini-Note PC was a full-function netbook aimed at the business and education markets. It was available with SUSE Linux Enterprise Desktop, Windows Vista or Windows XP. Its retail price started at US$499 for the Linux version with 4GB of flash memory. According to DigiTimes, the netbook was manufactured by Inventec. However, according to APC magazine, it was built by Compal Electronics who also make the MSI Wind and the Dell Inspiron Mini 9. The system was replaced in early 2009 by an upgraded model, the HP Mini 2140, which was also aimed at the education and business market.
Features
The machine has a spill-resistant 92%-of-full-size keyboard which Hewlett-Packard says is specially coated to reduce wear on the keys. Unusually, the touchpad buttons are to the sides of the pad itself, rather than below it. There is a small button above the touchpad to enable/disable the pad and buttons. The machine's shell is aluminium, while the inner chassis is anodised magnesium. The screen is protected by a layer of PMMA ("plexiglass"). The system has an accelerometer-based hard drive shock protection feature called "HP 3D DriveGuard".
As of October 2008, the HP 2133 is one of the few netbooks to feature an ExpressCard/54 slot, other ones being the Lenovo IdeaPad S9, Lenovo IdeaPad S10, NTT Corrino W 100I and the Gigabyte M912. The machine is available with a three- or six-cell battery, which provides approximately two and four hours of run time respectively on the high-end Windows Vista Business configuration shipped to reviewers. The larger battery projects downwards out of the rear of the machine, tilting it upwards – some reviewers have commented that this improves keyboard ergonomics.
A variety of CPU, RAM and mass storage configurations are available, and Bluetooth is available on high-end models. All of the current configurations of the machine feature a webcam, however in HP's press release it is listed as an optional feature. Operating systems available range from SuSE Linux to Microsoft Windows Vista Home and Business. Though the machine qualifies for Microsoft's "downgrade program", allowing units to be shipped with Windows XP Professional and with the option to upgrade to Windows Vista Business in future, this comes with the expectation that the customers order at least 25 units per year.
Reception
Reviewers have been impressed by the notebook's comfortable keyboard, the high-resolution display, aesthetic design and overall build quality.
However, the unusual touchpad, with buttons placed at its sides, caused some usability issues for some users. The high reflectivity of the screen also caused difficulties in operating the netbook in bright environments. Performance was also cause for concern, with neither speed nor battery life particularly impressing reviewers.
Review machines also became hot in places on the underside of the chassis. In many revisions of the notebook, the fan vent had an additional dense plastic grill which greatly impeded airflow. The heat problem could mostly be eliminated by removing this inner grill and the additional grills behind the air intake vents, replacing the thermal compound between the heatsink and the CPU and GPU with a higher-quality type, and reducing the CPU clock speed in software. As of September 2010, the preceding two years had seen a significant number of system board failures rendering the unit useless: when the unit is turned on, the power light illuminates but there is no other activity and the system does not boot. Several owners have been successful in restoring functionality after removing the system board and heating it with a heat gun or "baking it" in an oven for a limited time. A web search for problems with the 2133 returns a large number of links to forums and discussions regarding the problem.
Several of these reviewers hoped that the machine's performance would be improved by a CPU update, to a next-generation VIA Nano, or perhaps the Intel Atom. HP notebook product marketing manager Robert Baker remarked that the decision to launch the machine with current-generation processors was driven by the education market's purchasing schedule, and that they would consider new CPUs for an "interim refresh" about six months into the machine's life.
Similar products from HP
A new HP notebook similar in appearance to the Mini-Note, called the "Digital Clutch", was unveiled in October 2008, with a launch expected for December that year. The small pink computer is a collaboration with fashion designer Vivienne Tam, and has a 10-inch screen, a 1.6 GHz Intel Atom processor, 1GB of RAM, and an 80GB hard disk drive. A few days later, a black notebook of otherwise similar appearance called the "HP Mini 1000" was informally revealed by a banner on the company's store, and officially announced on 29 October 2008. Unlike the 2133, this device is meant for the home market.
An upgrade to the 2133, the HP Mini 2140, was announced by HP in January 2009.
References
External links
Linux-based devices
Subnotebooks
Netbooks
2133 Mini-Note PC |
63283288 | https://en.wikipedia.org/wiki/Corvette%20%28computer%29 | Corvette (computer) | Corvette (Russian: Корвет) was an 8-bit personal computer in the USSR, created for Soviet schools in 1980s. The first device was a homemade computer, created in 1985 by employees of the Moscow State University for their purposes (physics experiments). The first description was made in the magazine «Microprocessor tools and systems». The PC was named "ПК 8001" (21.08.1985).
Graphics
The computer had advanced graphics capabilities for its time. It had a single video mode using four planes: three graphic planes and one text plane. The graphic planes had a resolution of 512×256 pixels. The text plane could display 32×16 or 64×16 characters, using two sets of 256 8×16 ROM characters for the two modes. Sixteen colors could be shown on screen: eight colors were freely available, and eight additional colors could be obtained by combining text symbols and pixels. Any logical color could be mapped to any physical color from 0 to 15 (RGBI). The graphic video RAM was 192 KB (4 pages) or 48 KB (1 page), and the text video RAM was 1 KB of 9-bit static RAM. There was no contention between video and processor RAM access. The Corvette also had a mechanism to accelerate filling an area with a given color, which for this task could be faster than an IBM PC AT with an EGA card.
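These memory figures follow directly from the stated resolution and plane count, assuming one bit per pixel per graphic plane (an assumption made here for illustration; the figures above are consistent with it). A minimal Python sketch of the arithmetic:

```python
# Corvette graphic video RAM arithmetic (illustrative only).
# Assumption: 1 bit per pixel per graphic plane.
WIDTH, HEIGHT = 512, 256   # graphic plane resolution
PLANES = 3                 # three graphic planes
PAGES = 4                  # graphic pages in the full configuration

bytes_per_plane = WIDTH * HEIGHT // 8        # 16384 bytes = 16 KB
page_kb = PLANES * bytes_per_plane // 1024   # 48 KB per page
total_kb = PAGES * page_kb                   # 192 KB for four pages

print(page_kb, total_kb)   # prints: 48 192
```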
Sound
An Intel 8253 programmable interval timer was used to generate sound.
Software
BASIC interpreter in ROM, fully compliant with the MSX standard, including all graphic commands (drawing points, lines, rectangles, filled rectangles, circles, ellipses, arcs, closed area filling, DRAW), working with integers, etc.
Operating systems MicroDOS (МикроДОС) and CP/M-80 (with a floppy disk driver)
Text editors «Супертекст», «Микромир» (MIM), etc.
DBMS dBase II
Spreadsheet Microsoft Multiplan
Compilers for Fortran, Pascal, C, Ada, Forth, Lisp, PL/M, etc.
Software for education
Games («Berkut», PopCorn, Stalker, Dan Dare, Continental Circus, Deflector, «Treasure», «Winnie the Pooh», «Treasure Island», Super Tetris, Karate, etc.)
Educational computer technology complex
"НИИСчётмаш" created an educational computer technology complex based on "Corvette"
It includes a teacher's computer (ПК8020, with FDD , a printer port) and about 15 computers for students (ПК8010), connected to the local network (19,5 kbit/s).
Variants
The computer was mass-produced from 1987 at plants of the Ministry of Radio Technology (Soviet Union).
[Image: Corvette printed circuit board from a 1986 unit]
Production
Even though the PC was designed in a fairly short time and the decision to produce it was approved by the Council of Ministers, the start of mass production was delayed. Although the computer consisted exclusively of components already mastered by Soviet industry, it was not possible to ramp up production volumes on time, and the supplied components were of very poor quality. In addition, it competed with another computer intended for the same purpose, the UKNC. As a result, deliveries of the new computer fell far behind the plan.
After the collapse of the USSR, production of Corvettes stopped, and incomplete cases were used to assemble numerous ZX Spectrum clones. "LINTech" ("Laboratory of Information Technologies") later modernized the "Corvette": the network was upgraded and an IBM PC-compatible computer was installed as the head machine.
Network speed increased from 19.5 kbit/s to 375 kbit/s.
This revision was recommended by the Ministry of Education of the Russian Federation for use in schools.
Links
Forum about educational computer technology complex «Corvette»
«Corvette» emulator by S. Erohin
«Corvette» emulator for the Android OS
About «Corvette»
«Corvette» documentation
«Corvette» description in the wiki Emuverse
«Corvette» characteristics
Technical documents
Scheme and description
«Corvette» ПК8010 / ПК8020 and MSX2: additional tests («Corvette» PC8020 & MSX2)
PAINT: Korvet VS UKNC
Games and other programs for «Corvette»
References
Computer-related introductions in 1987
8-bit computers
Personal computers |
4517049 | https://en.wikipedia.org/wiki/Machine%20Robo%3A%20Battle%20Hackers | Machine Robo: Battle Hackers | is a Japanese animated television series produced by Ashi Productions. It ran on TV Tokyo from June 3, 1987 through December 30, 1987.
Connection to Revenge of Cronos
In the finale of Machine Robo: Revenge of Cronos, the Machine Robo decide to leave planet Cronos after defeating Gandler so that they can fight against evil in a new dimension. When they cross the dimensional barrier, the Machine Robo appear on Electronic Planet B1 with no memory of events from the previous series.
Story
In the year 406AE, the Electronic Planet B-1 falls prey to the Mechanoid space gang Gurendos. To fight back against them, the Machine Robo join forces to form the Algo Republic. Meanwhile, a large starship with five humans in suspended animation malfunctions and crash lands on B-1. Emerging from their ship, they find themselves caught up in the struggle between the Gurendos forces and the Algo Republic, eventually joining forces with the rough-and-tumble bunch of Machine Robo known as the Battle Hackers team to help save B-1 and eventually find a way back to Earth. In the final episode it is discovered that the robotic wars of Electronic Planet B1 were created by humans as a testing ground for arms traders.
Characters
Algo Army
RIM (voice: Masako Katsuki)
The Mother Computer that commands the Algo Army.
Battle Hackers
The most dangerous unit in the Algo Army, it is composed of the misfits, hotheads, and dropouts of the Machine Robo.
R. JeTan (voice: Shin'ya Ōtaki)
The leader of the Battle Hackers. Like his name suggests, he transforms from robot to jet to tank. Carries the "R. Bazooka" and sub-rifle.
Garzack (voice: Nobuyuki Furuta)
Non-transforming sub-commander, his weapon is the "Garfire Special". Although no toy of Garzack was ever released, an electronic prototype was designed that would have interacted with the anime.
Mach Blaster (voice: Issei Futamata)
A triple changer that can go from robot to jet to gun. Former White Thunder member.
Drill Crusher (voice: Kōichi Yamadera)
"Muscle"-type character. Can transform from robot to drill-tank to rhinoceros. Former Silver Wolves member.
Fossil People
Header, Abarar, Leggar and Taildar combine into Gattai Saurer. They get around on hoverboards.
Pro Truck Racer (voice: Nobuaki Fukuda)
Former Silver Wolves member. Carries the Big Blazer Cannon.
Wheelmen
Hot Rod Joe
F-1 Jack
Buggy Wolf
Drag Sam
Rotary Kid
Twincam Jimmy
Humans
Akira Amachi (voice: Masaaki Ōkura)
Pilots the Jet Riser
Luke Stewart (voice: Ken'yū Horiuchi)
Pilots the Battle Riser
Mia White (voice: Naoko Matsui)
Pilots the Power Riser
Zen Ogawa (voice: Nozomu Sasaki)
Patricia Longfellow (voice: Yuri Amano)
Winner Robo
Roboshooter Gaiden
Testarossa Winner
Ferrari Testarossa.
Truck Winner
Race truck
Police Winner
Toyota Soarer Patrol Car
Buggy Winner
Hornet Buggy. Called Dirt Robo in the anime.
F-1 Winner
Lotus F1. Called Racer Robo in the anime.
Eagle Winner
F-15 Eagle.
Fire Winner
Chemical Fire Engine
Porsche Winner
Porsche 935.
Silver Wolves
The new team name of the Battle Clan from Revenge of Cronos. Composed of the land and sea robots.
Rod Drill (played by Ken'yū Horiuchi)
Leader of the Silver Wolves. Transforms into a drill-tank. One of his toy molds was used as the Renegade Screw Head in the GoBots series.
White Thunder
Team name of the Jet Clan from Revenge of Cronos. Composed of the aerial robots.
Blue Jet (played by Shinya Ōtaki)
Leader of the White Thunder. He can turn into a jet. Fights with "Tenkū Shin Ken (天空真剣, Sky True Sword)" style sword. One of his toy molds was used as the Renegade Fitor in the GoBots series.
Blue Dragons
Team name of the Jewel Lords and Rock Lords from Revenge of Cronos.
Dia Man (Solitare) (played by Shō Hayami)
Leader of the Blue Dragons. Transforms into a diamond.
Red Knights
Team name of the Martial Arts Robo/Cronos Clan from Revenge of Cronos.
Kendo Robo
Leader of the Red Knights.
Gurendos
The villains of the series. The group's name derives from the term guren-tai (愚連隊 hoodlums)
Dylan (ディラン総統) (voice: Ryūzaburō Ōtomo, Junichi Kagaya)
An evil computer that controls the Gurendos.
Gakurandar (ガクランダー) (voice: Ken Yamaguchi)
Second-in-command of the Gurendos. His name is based on gakuran.
Kariagen (カリアゲン) (voice: Satoru Inagaki)
Kariage (刈り上げ) loosely translates to "hair cropped close in the back".
Sorikondar (ソリコンダー) (voice: Minoru Inaba)
Name derived from sorikomi (剃り込み), again referring to his hairstyle.
Yasand (ヤーサンド) (voice: Yūji Mikimoto)
A yakuza swordsman. Rides a Mercedes-Benz. Name derived from Yassan (ヤッサン).
Shibumidas (シブミダス) (voice: Ken'ichi Ono)
Name derived from shibumi.
Rikimines (リキミネス) (voice: Minoru Inaba)
Suji (スジ) (voice: Issei Futamata)
Sorikondar's aide. Speaks in a Kansai dialect. Named after the phrase Suji no toranai (筋の通らない illogical)
Iron Eagle (アイアンイーグル) (voice: Katsumi Suzuki)
Pattsuri (パッツリ)
Zentry (ゼントリー)
Kizūn (キズーン)
Name derived from kizu (scar).
Gantsuke (ガンツケ)
Shinobis (シノビス)
Name derived from shinobi (ninja).
Igarn (イガーン)
Name derived from Iga, an ancient ninja province.
Geruka (ゲルカ)
Teppodaman (テッポダマン)
Uwappā (ウワッパー)
Sitappā (シタッパー)
Shitappa means "underling".
Bi-Bi-Bi Black (ビビビブラック)
Devil Satan 6 (played by Kenichi Ono)
Six monstrous robots that can combine into the giant Devil Satan 6 robot. In the anime they are referred to by number instead of name.
Gillhead (played by Kenichi Ono): The head. Speaks in a Kansai dialect.
Barabat: Left arm.
Deathclaw: Right arm.
Gurogiron: Torso.
Eyegos: Right Leg.
Blugoda: Left Leg.
See also
Machine Robo
Machine Robo: Revenge of Cronos
References
External links
1987 anime television series debuts
1987 Japanese television series endings
Japanese children's animated action television series
Japanese children's animated adventure television series
Japanese children's animated science fiction television series
Adventure anime and manga
Ashi Productions
Fictional robots
Super robot anime and manga
Television shows based on toys |
6414685 | https://en.wikipedia.org/wiki/Force%20Protection%20Inc | Force Protection Inc | Force Protection, Inc. was a manufacturer of ballistic- and blast-protected vehicles from the United States which have been used in Iraq, Afghanistan, Kosovo and other hot spots around the world. The company was acquired by General Dynamics in 2011.
Company
The company traces its roots to Sonic Jet Performance, Inc., a California speed boat company founded in 1997. When the boat business hit tough times after the September 11, 2001 terrorist attacks, a new investor, Frank Kavanaugh, stepped in and looked to alter the direction of the business. Kavanaugh financed and secured the rights to a new line of products in support of the company's Mission to Protect and Save Lives. Around that time, the team identified an insolvent company in South Carolina called Technical Solutions, which was building a prototype of a mine-resistant vehicle called the Buffalo and attempting to build a smaller vehicle, the Cougar/Tempest MRAP (Mine Resistant Ambush Protected). Technical Solutions was struggling and required a new strategy and additional resources. The products were designed using commercial components, and utilized a strategy to allow them to be rapidly rebuilt in the field.
Kavanaugh, the company chairman and largest investor, rebranded the company Force Protection and added rapid design capability, a focus on product quality, large-scale production, and technical talent, including the addition of Dr. Vernon Joynt, an internationally recognized blast expert. Over a three-year period, the new management team retooled the vehicles and grew the production capability of the business to almost $1.3 billion in annual sales by integrating high-quality US automotive components with an innovative and effective blast-resistant chassis.
The heavily armored trucks featured a V-hull design that deflected underbody blasts away from the passenger compartment, and with Dr. Joynt's expertise the products were enhanced to address side-blasts from IEDs (improvised explosive devices), which had become a primary cause of injury and death for soldiers in Iraq and Afghanistan. The base designs secured from Mechem (South African government) resulted in two U.S. models: the Buffalo, a huge mine-clearing truck, and the Cougar, which was smaller and more versatile; the company's R&D efforts with Dr. Joynt also produced several innovations and improvements, such as the Cheetah and spaced armor.
Force Protection initially struggled with its first small MRAP contracts in 2002 through 2005. At first the company had less than a dozen people on its early production line. At times the prototype approach took five weeks to build one Cougar. The United States Department of Defense fined Force Protection more than $1.5 million for initial delivery delays. Kavanaugh focused on moving from a prototype production environment to sustainable volume production methods.
The team overcame these issues, developing agreements with other defense industry manufacturers such as Armor Holdings and BAE Systems, as well as a joint venture company with General Dynamics ("Force Dynamics"), to merge Force Protection's proprietary designs with the manufacturing capacity necessary to meet increasing demand. Force Protection received several contracts as part of the MRAP program, supplying blast-resistant vehicles to American forces in Iraq, but with Kavanaugh's departure in 2007 the remaining team lost focus and chose not to introduce an innovative JLTV-category vehicle known as the Cheetah. This ultimately led to the sale of the company to General Dynamics as orders were increasingly placed with rival companies.
Product Line
Cheetah – Thirteen Cheetah prototypes were produced in 2005 and were available in 2006 and 2007. The vehicle was intended for urban operations, reconnaissance, and forward command and control, but no contracts were signed for it.
Cougar – The Cougar is a medium-sized mine-protected vehicle for command and control, artillery prime mover, recovery and ambulance duty. The Cougar has been in service with a number of armed forces since 2002.
Cougar H 4×4 can carry 4 troops and an EOD robot.
Cougar HE 6×6 can carry up to 12 troops.
Buffalo – The Buffalo is designed principally for route clearing activities, asset protection, urban weapons systems, and command and control. The Buffalo has been in service with a number of armed forces since 2003
SPECTRE – The SPECTRE light vehicle (formerly JAMMA) prototype. Offered to SOCOM in 2012, no contracts were signed.
Ocelot – The Ocelot, which is designed and built in the United Kingdom by Force Protection Europe, is a light protected patrol vehicle (LPPV) ordered in 2010 to replace the UK military's Snatch Land Rover.
See also
ILAV
References
External links
General Dynamics Land Systems - Force Protection website
Yahoo! - Force Protection Inc. Company Profile
Extensive article from the USA Today about Force Protection Inc.
Military vehicle manufacturers
Companies based in South Carolina |
64706065 | https://en.wikipedia.org/wiki/Boiga%20thackerayi | Boiga thackerayi | Boiga thackerayi, or Thackeray's cat snake, is arboreal, mostly seen close to forest streams, and is active during the night. It is non-venomous and is known to grow up to three feet in length. It is endemic to the Western Ghats, India.
Etymology
The epithet, thackerayi, is in honor of Indian conservationist and wildlife researcher Tejas Thackeray.
Geographic range
Boiga thackerayi is found in the Koyna region of Satara district in western Maharashtra, India.
Diet
It feeds on the eggs of Humayun's night frog (Nyctibatrachus humayuni), a behavior that had not previously been reported in cat snakes from the Western Ghats.
References
thackerayi
Snakes of Asia
Reptiles of India
Endemic fauna of the Western Ghats
Reptiles described in 2019
Taxa named by Varad B. Giri |
895299 | https://en.wikipedia.org/wiki/Atos | Atos | Atos is a French multinational information technology (IT) service and consulting company headquartered in Bezons, France and offices worldwide. It specialises in hi-tech transactional services, unified communications, cloud, big data and cybersecurity services. Atos operates worldwide under the brands Atos, Atos|Syntel, Atos Consulting, Atos Healthcare, Atos Worldgrid, Groupe Bull, Canopy and Unify.
History
The company was formed in 1997 through a merger of two French IT companies, and combined with the Dutch-based company Origin B.V. in 2000 to become Atos Origin. It subsequently acquired KPMG Consulting in 2002 and SchlumbergerSema in 2004.
In 2010 Atos Origin announced the buyout of Siemens IT Solutions and Services and finalized the acquisition in July 2011. Afterwards, the company name reverted to Atos.
Background: a series of mergers (1997–2011)
In 1996, Origin B.V. was created after a merger of the Dutch company BSO and the Philips C&P (Communications & Processing) division, while a year later in 1997, Atos was created following a merger of the French companies Axime and Sligos. In 2001, Atos Origin sold its Nordic operations to WM-data. In 2002, it made a major acquisition by buying KPMG Consulting in the United Kingdom and in the Netherlands. Then in 2004, it acquired SchlumbergerSema, the IT service division of Schlumberger and took over the infrastructure division of ITELLIUM, a subsidiary of KarstadtQuelle.
At the same time (2004), the company created a new subsidiary, Atos Worldline, and renamed its consulting activities Atos Consulting. Also in 2004, Atos Origin Australia, originating from Philips, was sold to Fujitsu. In 2005, Atos Origin sold its activities in the Nordic region, which had become part of the company with the acquisition of Sema Group, to WM-data, while in 2006, Atos Origin sold its operations in the Middle East to local management.
In October 2007, Philippe Germond replaced longtime CEO Bernard Bourigeaud. Two shareholders, the hedge funds Centaurus Capital and Pardus Capital, tried to gain control over the company via the supervisory board. In November 2008, the boardroom battle came to an end when Thierry Breton replaced Philippe Germond as chairman and CEO.
In August 2010 Atos Origin acquired Indian payment company Venture Infotek.
Siemens IT
In December 2010 Atos Origin agreed to acquire the IT subsidiary of Siemens for €850 million. As part of the transaction, Siemens agreed to take a 15% stake in the enlarged Atos, to be held for a minimum of five years.
The company dropped the "Origin" suffix of its name in July 2011 after completing its acquisition of the Siemens unit.
In November 2011 Atos and software services provider Ufida International Holdings formed the joint venture Yunano. The two companies invested €5.7 million. Atos has 70 percent and UFIDA has 30 percent. The joint venture has its HQ in Bezons, France, a suburb of Paris. In 2012 Atos announced the creation of a new company called Canopy. The CEO is Philippe Llorens. In 2011 Atos introduced a Zero Email initiative, banning email as a form of internal communications, except for use with customers and prospects. As part of the initiative, Atos acquired the French software company blueKiwi in early 2012, rolling out their ZEN social networking software across its organisation.
Bull
In August 2014, Atos announced that it had acquired a controlling stake in Bull SA through a tender offer launched in May. Atos announced plans in October 2014 to buy out or squeeze out the remaining share and bondholders of Bull.
Xerox ITO and Syntel
On 19 December 2014, Atos announced the acquisition of Xerox's IT outsourcing business for , tripling the size of its North American business. At the time of the acquisition, the unit generated (Q3 2014) and had 9,800 employees operating in 45 countries.
In October 2018, the company accelerated its expansion in North America with the (including debt) acquisition of Syntel, a company with activities in banking, financial services, healthcare, retail and insurance.
Failed acquisition bid for DXC
In February 2021, Atos ended talks for a potential acquisition of DXC Technology. Atos had proposed US$10 billion, including debt, for the acquisition.
Google Cloud
In April 2018, Atos announced a global partnership with Google Cloud to help offer secure artificial intelligence systems. As part of this partnership, the two companies created common offerings and opened "labs" dedicated to artificial intelligence in London, Dallas, Munich and Paris.
In 2019, the company divested from Worldline, its payment subsidiary, as part of its strategy to become a "digital pure player". The company gradually sold its shares, retaining only a 3.82% stake in Worldline as of April 2020.
On 1 November 2019, Elie Girard replaced Thierry Breton as chief executive officer, following Breton's appointment as EU commissioner.
In December 2019, Atos acquired Maven Wave, a US-based Google Cloud Premier Partner specialising in cloud and mobile applications, data analytics, experience design and cloud infrastructure.
In June 2020, Atos, GENCI and CEA unveiled the "Joliot-Curie" supercomputer, intended to support academic and industrial open research.
Services and Activities
Services
Atos activities are organized in four divisions:
Infrastructure & Data Management: Datacenter management, service desk and unified communications;
Business Applications & Platform Solutions: consulting and systems integration;
Big Data & Cybersecurity: production of high-performance computers & servers and cybersecurity;
Worldline: e-commerce payment services and point of sale terminal applications
Geographies
United Kingdom
According to a National Audit Office report on the government's four biggest suppliers, Atos earned £700 million in revenue from the UK public sector in 2012, out of £7.2 billion in sales worldwide. Atos holds £3 billion worth of UK government contracts providing services to a wide range of organizations including NHS Scotland, the Home Office, the Welsh Government, the Ministry of Defence, Transport for Greater Manchester and the BBC, as well as a multimillion-pound outsourcing contract with NS&I. In the United Kingdom, from 1998 to 2015, the company was at the centre of a controversy over its healthcare division's management of the Work Capability Assessment contract for the Department for Work and Pensions (DWP).
Controversy
Atos Healthcare
In the United Kingdom, from 1998 to 2015, Atos Healthcare was at the centre of a controversy over the management of contracts by their healthcare division of the Work Capability Assessment (WCA) for the Department for Work and Pensions (DWP). In August 2015, statistics from the DWP showed that 2,380 people had died between 2011 and 2014 soon after being found fit for work through disability benefit assessments. In 2014, "the DWP negotiated an early exit from the existing WCA contract with the private firm, Atos, after raising concerns about the quality of its work". Nevertheless, in 2016 Atos was still undertaking work for the DWP in assessing Personal Independence Payment (PIP) applications. The Press Association said in 2017 that Atos, used by the DWP to make its decisions, were due to be paid more than £700m for their five-year contracts against an original estimate of £512m.
When Atos took over administering PIP, estimates of how fast claims could be processed were over-optimistic as were estimates of how easily claimants could get to assessment centres. This led to delays in assessments, distress to claimants and unexpectedly high costs. Atos was accused of misleading the government.
Atos developed a computer system that would extract data from GPs' computers nationwide. Costs rose from £14 million to £40 million, and it was felt that Atos had taken insufficient care over how it spent taxpayers' money.
When Atos lost the contract for fitness to work tests, Richard Hawkes of Scope said, "I doubt there's a single disabled person who'll be sorry to hear that Atos will no longer be running the fit-for-work tests." Hawkes claimed the "fundamentally flawed" test should be "more than an exercise in getting people off benefits. It should make sure disabled people get the specialist, tailored and flexible support they need to find and keep a job."
Mark Serwotka of Public and Commercial Services Union described the assessments as "designed to harass vulnerable people and take their benefits away rather than provide support and guidance. Doctors, MPs and disabled people all believe the tests should be scrapped so, instead of replacing the failed Atos with another profit-hungry provider, the government should bring the work in-house and invest in it properly."
Liberal Democrat leader Tim Farron questioned how Atos and Capita could have been paid over £500m of taxpayers' money for assessing fitness to work when 61% of those who appealed won their appeals. Farron stated, "This adds to the suspicion that these companies are just driven by a profit motive, and the incentive is to get the assessments done, but not necessarily to get the assessments right. They are the ugly face of business."
In 2014, Atos Healthcare rebranded its occupational health business to become OH Assist. The Atos Healthcare brand was reserved for use for the PIP contract. Atos sold its OH Assist business to CBPE Capital in 2015.
For a number of years, Atos denied claimants benefits or reduced their benefits if they did not take addictive opiate-based painkillers. The Department for Work and Pensions subsequently revised its guidance, stating that "healthcare practitioners [disability benefits assessors] should be mindful that the level of analgesia used does not necessarily correlate with the level of pain".
Corporation tax
It was disclosed in November 2013 through the National Audit Office that Atos had paid no corporation tax at all in the UK in 2012. The total value of contracts that had been awarded to Atos by June 2013 was approximately £1.6 billion.
Sponsorship
Olympic/Paralympic Games
Atos has been the official IT Partner for the Olympic Games since 2001 and is expected to continue until at least 2024. Atos, through its acquisition of SchlumbergerSema, was involved in previous Games during the 1990s, starting with the Barcelona Olympic Games in 1992. Atos has been one of 11 major sponsors of the Olympic Games since 2001.
In 2011, some UK-based disability campaign groups called for a boycott of the 2012 Summer Paralympics due to Atos' sponsorship of the games and Atos Healthcare's UK contract to perform Work Capability Assessments on behalf of the Department for Work and Pensions (DWP). During the first week of the Paralympics in the summer of 2012, activists and disabled people targeted Atos in a series of nationwide protests. This culminated on Friday 31 August with a demonstration outside Atos headquarters in London, which ended in a confrontation with the police.
Through the International Olympic Committee's TOP (The Olympic Partner) programme, Atos has sponsored athletes from all over the globe in order to support their Olympic ambitions, including Danny Crates, the 2004 Paralympic Champion in the 800m.
2014 Commonwealth Games
Atos was named as an official supporter of the 2014 Commonwealth Games in Glasgow. On 26 June 2013, "Glasgow Against Atos" occupied one of the Commonwealth Games venues in protest against Atos sponsorship.
2015 Southeast Asian Games
Atos was the official sponsor of 2015 Southeast Asian Games in Singapore.
2018 European Championships
In February 2017, Atos was appointed as the first official sponsor of the Glasgow 2018 European Championships. The company was awarded a £2.5 million contract for timing, scoring and results.
See also
List of IT consulting firms
References
External links
Information technology consulting firms of France
International information technology consulting firms
Consulting firms established in 2000
French companies established in 2000
Multinational companies headquartered in France
French brands
Companies based in Paris
Disability in the United Kingdom
Protests in the United Kingdom
21st-century controversies
Medical technology companies of France
Companies listed on Euronext Paris
Societates Europaeae
Outsourcing companies |
61070489 | https://en.wikipedia.org/wiki/World%20of%20Darkness%20Preludes%3A%20Vampire%20and%20Mage | World of Darkness Preludes: Vampire and Mage | World of Darkness Preludes: Vampire and Mage is a series of two interactive fiction video games developed by White Wolf Entertainment and Fula Fisken: Vampire: The Masquerade – We Eat Blood and Mage: The Ascension – Refuge. They were released on February 15, 2017, individually for Android and iOS, and together as a set for Microsoft Windows, MacOS and Linux.
The games are set in the World of Darkness, and are based on White Wolf Publishing's tabletop role-playing games Vampire: The Masquerade and Mage: The Ascension. Vampire follows a fledgling vampire who communicates with one of their friends through text message conversations, and Mage follows a volunteer in a refugee camp, who learns that magic is real and that they can use it. Vampire was written and illustrated by Sarah Horrocks and Zak Sabbath, Mage was written by Karin Tidbeck, and both games were directed by Martin Ericsson. Following allegations against Sabbath of sexual abuse, which he has denied, the standalone Vampire and the World of Darkness Preludes set are no longer offered for sale.
Critics enjoyed the games and the return of World of Darkness video games more than a decade after Vampire: The Masquerade – Bloodlines, and praised their visual presentations, although Vampire's writing and art were criticized as confusing at times. Critics enjoyed Mage's story for its high stakes and for Tidbeck's writing, and for the authenticity that Tidbeck, as a Swedish author, could bring to its Swedish setting.
Gameplay
Vampire: The Masquerade – We Eat Blood and Mage: The Ascension – Refuge are interactive fiction games. In Vampire, the player must manage their character's vampiric hunger, and make choices determining whether to hang on to their human life or to move on. In Mage, the player must decide how to use their reality-altering power of True Magick, and whether to do good or evil with it. Vampire is presented entirely through the perspective of text message conversations with the player character's friends on their cell phone, while Mage uses a more typical Choose Your Own Adventure-style format.
Plot
Vampire: The Masquerade – We Eat Blood
Vampire: The Masquerade – We Eat Blood lets the player take the role of a young artist who wakes up to learn that they have been turned into a vampire, and follows their first nights as undead.
Mage: The Ascension – Refuge
Mage: The Ascension – Refuge is set in Malmö, Sweden in 2015, and is themed around modern political and social issues. The player takes the role of Julia Andersson, a volunteer in a Syrian refugee camp, who learns that magic exists and that she has the power of True Magick.
Development
Vampire and Mage were developed in a collaboration between White Wolf Entertainment and Fula Fisken following Paradox Interactive's purchase of White Wolf in 2016, and marked the first time in over a decade that a Vampire: The Masquerade video game had been released; White Wolf also intended the games to mark its start as a multimedia entertainment company. Vampire was written and illustrated by Sarah Horrocks and Zak Sabbath, and Mage was written by Karin Tidbeck; both games were directed by Martin Ericsson at White Wolf. The games were based on White Wolf's tabletop role-playing games Vampire: The Masquerade and Mage: The Ascension, and were inspired by Choose Your Own Adventure gamebooks. Fula Fisken developed them using the Unity game engine due to its multi-platform support, and because it allowed for a smooth production with a focus on the content rather than the technology.
Tidbeck was approached for Mage by Ericsson as he thought they would fit White Wolf's interactive fiction game concept, having known them since the 1990s, having previously worked with them on writing live action role-playing game projects, and having played tabletop role-playing games with them. Tidbeck got the idea for Mage from their time as a volunteer at a center for refugees in Malmö, and drew on their experience from that. The protagonist of Mage was written as pansexual and polyamorous, as Tidbeck likes to include LGBTQ portrayals in their works. In preparation for writing the game, Tidbeck read up on news articles to get a better understanding of the political background, and interviewed a Syrian family. As the game is based in the Mage: The Ascension setting, they additionally had to read the tabletop game's rule book, and learn and internalize its concepts. Working on an already established intellectual property was a challenge, as it meant having to stay faithful to the original, and being restrained in what they could and could not do, while also having to make something with their own flavor.
Writing a Choose Your Own Adventure-style story also brought challenges, as such stories, unlike ordinary novels and short stories, are non-linear and involve keeping track of several variables and bringing all possibilities together into the game's endings. Something Tidbeck wanted to avoid bringing over from the gamebook format was its complex rules and game mechanics, focusing instead on the story while embracing the freedom in what the player character can do. To plot out the story, they made use of post-it notes, which they then transcribed into the game engine Twine, where they wrote the story. Each section of the game was then sent to the game's producers for play testing and feedback; the tweaks Tidbeck had to make mostly involved adding minor choices for the player in addition to the bigger, more dramatic ones. Tidbeck was also involved in the game's music to an extent, offering feedback on music samples that were used when choosing the composer.
The games were published on February 15, 2017 by White Wolf Entertainment for Microsoft Windows, MacOS and Linux together as a set under the title World of Darkness Preludes: Vampire and Mage, and individually by Asmodee for Android and iOS. The standalone Vampire and the World of Darkness Preludes set stopped being offered for sale, however, following a series of allegations against Sabbath of sexual and emotional abuse, which Sabbath denied.
Reception
Several publications found it exciting to see new World of Darkness video games after such a long time since the last one, despite how different they were from 2004's Vampire: The Masquerade – Bloodlines, and considered it a new start, wondering what World of Darkness Preludes could lead to in the future; Kotaku, although finding the games too short, wished for further entries using the same format, based on World of Darkness tabletop games such as Wraith: The Oblivion and Changeling: The Dreaming. The games were also well received by users upon release.
Video game publications enjoyed Vampire's art and presentation: how it helped set the right atmosphere for the story, and how the mobile messaging interface worked well with the mobile versions of the game. Kotaku also enjoyed the story's combination of modern technology with traditional horror, saying that one's text message conversations naturally would be "fucked up" after being turned into a vampire. Pocket Gamer, however, noted that the art, while "beautiful", was sometimes difficult to read, slowing down the pacing of the game as the player deciphers a picture. They also criticized the writing, calling it at times confusing, requiring re-reads of passages, and saying that the text message-style sentences often included lengthy run-on sentences that impacted the pacing, and at times grating text talk such as abbreviations, slang and misspellings.
TouchArcade and Kotaku appreciated Tidbeck's involvement in Mage, due to the authentic Swedish touch they could bring to the game's Swedish setting, and their experience in the weird fiction genre; Pocket Gamer also enjoyed Mage's writing, preferring it over Vampire's due to its higher stakes and the use of the player character's moral perspective. Kotaku appreciated the use of visual distortion effects in Mage to communicate the strange nature of Magick and its effect on the world.
Notes
References
External links
2010s interactive fiction
2017 video games
Android (operating system) games
IOS games
LGBT-related video games
Linux games
MacOS games
Mage: The Ascension
Polyamory in fiction
Transgender-related video games
Vampire: The Masquerade
Video games about vampires
Video games developed in Sweden
Video games featuring female protagonists
Video games set in Sweden
Windows games
Works about the European migrant crisis
World of Darkness video games
Single-player video games |
3542471 | https://en.wikipedia.org/wiki/John%20Diebold | John Diebold | John Theurer Diebold (June 8, 1926 – December 26, 2005). An American businessman who was a pioneer in the field of automation, founding The Diebold Group to advise corporations around the world as well as governments in the U.S and abroad in the potential of information technology.
Early life
Diebold was born in Weehawken, New Jersey. After graduating from Weehawken High School, he enrolled at Swarthmore College; during World War II he attended the United States Merchant Marine Academy and served in the merchant marine, returning to Swarthmore in 1946 to earn a B.S. in Engineering. He then completed an MBA at the Harvard Business School in 1951.
At the Harvard Business School he worked with venture-capital pioneer Georges Doriot and his colleague Curtis Tarr, who advised Diebold's research project on "Making the Automatic Factory a Reality". Diebold made automation studies the focus of his assignments for a small Chicago-based consulting firm, then in 1954 he returned to Weehawken to found his own consulting company. By 1960, he counted more than 30 prominent clients, including such notable companies as Bear, Stearns & Company; Boeing Airplane; General Electric; Radio Corporation of America; Westinghouse Electric; and others.
Diebold's first book, Automation: The Advent of the Automatic Factory, based on his studies at the Harvard Business School, was published by Van Nostrand in 1952. Owing to independent research and ever-persistent curiosity about the whole field of technology, he originated many of the concepts of data processing and utilization that are accepted today in both automation and management. This book was reissued unchanged on its 30th anniversary as a “management classic” by the American Management Association. He is credited with coining the word automation in its present meaning, and had much to do with introducing it to general usage.
Career summary
1952 wrote first book, Automation, originating many concepts basic in today's technology.
1954 founded John Diebold & Associates, consulting in automation and management; later known as The Diebold Group, the international management consulting firm. It was sold to Daimler-Benz in 1991.
1968 founded The Diebold Institute for Public Policy Studies, an operating foundation to apply advanced computer and communications technology to the improvement of the quality of life for a broad segment of the public. In 2005, the year of his death, the Institute led an international cooperative effort to assess the value of information technology in public infrastructures: health care; road transportation; education; communications and public safety.
Business career
John Diebold & Associates soon grew into The Diebold Group, which played a unique and often central role in the development of the information technology industry. John Diebold and his company were responsible for the creation of new products and services, as well as for the definition of IT's role in the management of businesses and governments. His original wish to play a role in, and to contribute to, the development of a few of the formative issues that changed the world in which we live was fulfilled.
Starting at the founding of the firm, in 1954, Diebold found himself in a unique leadership role of teacher and concepts innovator. He recognized at the outset that computers meant much more than mechanization of existing systems. Instead, they would open hitherto undreamed of opportunities to do new things.
Only a few years after the Diebold Group's founding, books were being written about John Diebold, his ideas and his firm.
Central to all of this was the insight that for computers to achieve their potential they had to be viewed as management and strategy tools. The firm's leadership was evident not only in technical innovations but also in the highest level of strategic planning.
Working through and with the senior managements of the largest and best run corporations in the world, John Diebold and his firm had an impact that went far beyond their small professional firm. There was a multiplier effect with widespread dissemination through these organizations, their managements, employees and customers.
From its founding to its sale in 1991, the firm and John Diebold had a continuing role in the creation and dissemination of new ideas and insights and the introduction of new paradigms. An example was the concept that talent is capital, an idea whose consequences were a key to success in the new world that took shape.
From the beginning Diebold contributed to new expectations for the delivery of public services and to what citizens could expect from governments.
The firm provided counsel to over 100 cities, most U.S. states, several foreign governments and major corporations, in the U.S. and abroad.
John Diebold was active in public as well as private pursuits. He was a trustee of the Carnegie Institution of Washington, the Committee for Economic Development and the National Planning Association; a Fellow of the International Academy of Management; a member of the executive committee of the Public Agenda Foundation; Chairman of the U.S. East Asian History of Science; and Vice Chairman of the Academy for Educational Development.
He also served as Vice Chairman to John J. McCloy at the American Council on Germany. He held six honorary degrees, received the Legion of Honor from France, and was decorated by the governments of Italy, Germany and Jordan. He also received numerous professional awards.
Books
Automation: The advent of the Automatic Factory, Van Nostrand, 1952
Making the future work: Unleashing our powers of innovation for the decades ahead, Simon and Schuster, 1984
The Papers and Speeches of John Diebold, 1957-1998
Volume 1. Beyond Automation: Managerial Problems of an Exploding Technology. Foreword: Peter F. Drucker. McGraw Hill, 1964; Republished by PraegerPublishers, 1970
Volume 2. Man and the Computer: Technology as an Agent of Social Change, Frederick A. Praeger, 1969
Volume 3. Business Decisions and Technological Change, Praeger Publisher, 1970
Volume 4. The Role of Business in Society. Foreword by James L. Hayes, Chairman, American Management Associations. American Management Associations, 1982
Volume 5. Managing Information: The Challenge and the Opportunity, Foreword by Thornton F. Bradshaw,Chairman, RCA Corporation. American Management Associations, 1985
Volume 6. Business in the Age of Information. Foreword by Russell Palmer, Dean, The Wharton School. American Management Associations, 1985
Volume 7. Technology and Public Policy. Meeting Society's 21st Century Needs. Management Science Publishing Co., 1997
Volume 8. Maintaining Profitability in an Increasingly Complex Environment. Management Science Publishing Co., 1998
Volume 9. Information Technology in the 21st Century, Management Science Publishing Co., 1998
Editor, World of the Computer, for Random House in 1973
References
Additional references
Managerial Innovations of John Diebold. An Analysis of Their Content and Dissemination by Mary Stephens-Caldwell Henderson, LeBaron Foundation, 1966
John Diebold. Breaking the Confines of the Possible by Wilbur Cross. The Future Makers. James H. Heineman, 1965
Agent of Change. Forty Years of the Diebold Group Edited by Liesa Bing and Ralph E. Weindling. Diebold Institute for Public Policy Studies, 2001
The John Diebold Lectures by David W. Ewing. Harvard University Press.
Computer visionary John Diebold dies by Richard Waters. Financial Times. December 28, 2005. Accessed September 29, 2013
John Diebold on management by Carl Heyel, Prentice Hall, 1972
Other People’s Business by Howard Klein, Mason/Charter Publishers, 1976. John Diebold and his firm are principal subject in book.
Starting at the Top by John Mack Carter and Joan Feeney. William Morrow & Co., 1985. John Diebold one of subjects in book.
Archives and records
John Diebold papers at Baker Library Special Collections, Harvard Business School.
The Diebold Group, Client Reports at Charles Babbage Institute, University of Minnesota. Contains nearly 1000 client reports (1954-1990), prepared for the Diebold Group's corporate clients, government, and other public clients. The reports assess whether and how companies can make use of computers, sometimes including specific recommendations for computer purchases based on predictions for automation in a particular industry.
External links
John Diebold as an author: John Diebold ist Mister Automation
John Diebold Papers. Baker Library Historical Collections. Harvard Business School. Harvard University Library; OASIS: Online Archival Search Information System. Mss:867 1906-2003 D559
Swarthmore Friends Historical Library. Collected Papers of Individual Alumni, John T. Diebold Papers. Call number: R66/R003/002
Carnegie Institution Trustee Emeritus John Diebold dies at age 79. Carnegie Institution for Science.
1926 births
2005 deaths
People from Weehawken, New Jersey
Swarthmore College alumni
Diebold
Harvard Business School alumni
United States Merchant Marine Academy alumni
Weehawken High School alumni |
1284136 | https://en.wikipedia.org/wiki/Wireless%20WAN | Wireless WAN | Wireless wide area network (WWAN) is a form of wireless network.
The larger size of a wide area network compared to a local area network requires differences in technology.
Wireless networks of different sizes deliver data in the form of telephone calls, web pages, and video streaming.
A WWAN often differs from wireless local area network (WLAN) by using mobile telecommunication cellular network technologies such as 2G, 3G, 4G LTE, and 5G to transfer data. It is sometimes referred to as Mobile Broadband. These technologies are offered regionally, nationwide, or even globally and are provided by a wireless service provider. WWAN connectivity allows a user with a laptop and a WWAN card to surf the web, check email, or connect to a virtual private network (VPN) from anywhere within the regional boundaries of cellular service. Various computers can have integrated WWAN capabilities.
A WWAN may also be a closed network that covers a large geographic area. For example, a mesh network or MANET with nodes on buildings, towers, trucks, and planes could also be considered a WWAN.
A WWAN may also be a low-power, low-bit-rate wireless WAN (LPWAN), intended to carry small packets of information between things, often in the form of battery-operated sensors.
Since radio communications systems do not provide a physically secure connection path, WWANs typically incorporate encryption and authentication methods to make them more secure. Some of the early GSM encryption techniques were flawed, and security experts have issued warnings that cellular communication, including WWAN, is no longer secure. UMTS (3G) encryption was developed later and has yet to be broken.
See also
Private Shared Wireless Network
Wide area network
Wireless LAN
Wi-Fi
Satellite Internet access
References
Wide area networks
Wireless networking |
39096469 | https://en.wikipedia.org/wiki/Bloomfire | Bloomfire | Bloomfire is a software-as-a-service company based in Austin, Texas. The business creates web-based software applications that aim to increase virtual knowledge-and-insights-sharing in the workplace. It was founded in 2010 by Josh Little, and originally headquartered in Salt Lake City, Utah. The company is now headquartered in Austin, Texas and is backed by investors such as Austin Ventures. The current CEO is Mark Hammer.
Overview
Bloomfire products allow companies to share and search for information, insights, and research on a web-based application platform. The software application "Bloomfire", launched in 2012, allows users to create team communities where people can post questions and answers, add or create new content, and search or browse existing content. The software aims to increase accessibility to information within a company so employees have the knowledge they need to work efficiently. The application can be accessed from a device connected to the internet.
Bloomfire supports 53 file types, and content can be uploaded in the form of videos, audio files, images, slide decks, or text documents. The platform also has automatic video and audio transcription capabilities and makes the text of its transcripts searchable.
The application is available for an annual fee, and the company has several hundred customers. Bloomfire provides software for companies such as Capital One, Southwest Airlines, and Conagra. Bloomfire targets its Bloomfire software towards insights and market research, customer support, sales, and marketing teams, but it also applies to other business domains. Users have access to phone and email support, as well as online support.
History
The previous CEO of Bloomfire, Craig Malloy, had an 18-year career in videoconferencing before buying Bloomfire in 2010 with co-founder David McCann.
The current CEO is Mark Hammer, who has been with the company since 2014. Hammer has more than 20 years of experience in leadership with software companies and has previously held senior management roles at SmartBear Software, Houghton Mifflin Harcourt, and Compass Learning.
Awards and recognition
In 2012, Bersin & Associates named Bloomfire in their Learning Leaders Winners in the Vendor Innovation in Learning and Talent Management: Informal Category. In 2012, Bloomfire was named a Brandon Hall Gold Award winner for Best Advance in Social Learning Technology.
Bloomfire received a Brandon Hall Gold Award for Excellence in Technology in 2015. Bloomfire also received a Bronze Stevie Award in 2017 for Sales & Customer Service and a Silver Stevie Award in 2018 for Sales & Customer Service. The company was also named to the Austin Business Journal's list of Best Places to Work in 2015, 2016, and 2018. In 2017, Bloomfire's CFO, Bill Tole, received the Austin Business Journal's Best CFO Award.
References
Further reading
External links
Official website
American companies established in 2010
Software companies based in Texas
2010 establishments in Utah
Software companies established in 2010
Companies based in Austin, Texas |
17470253 | https://en.wikipedia.org/wiki/Henry%20Thacker | Henry Thacker | Henry Thomas Joynt Thacker (20 March 1870 – 3 May 1939) was a doctor, New Zealand Member of Parliament and Mayor of Christchurch.
Early life
Thacker was born in Okains Bay on Banks Peninsula on 20 March 1870. His parents were Essy Joynt and John Edward Thacker. His father was an editor of the Sligo Guardian and after emigration to Christchurch in 1850, launched the second newspaper in Canterbury, the Guardian and Canterbury Advertiser. The newspaper failed after only a few months.
Henry Thacker attended Boys' High School and then Canterbury College (what is now known as the University of Canterbury), from where he graduated with a Bachelor of Arts. He then enrolled at Edinburgh University where he gained his M.B. and C.M. diplomas in 1895. Two years later he gained a fellowship in the Royal College of Surgeons in Dublin.
Return to New Zealand
Thacker returned to Christchurch in 1898 and opened a practice in Latimer Square. He represented Canterbury in rugby union in 1889 and 1891 and assisted in the development of Richard Arnst. From 1899 he held the rank of captain in the Army Medical Corps.
Rugby league
Thacker was the first president of the Canterbury Rugby Football League when the organisation began holding competitions in 1913. He served in this position from 1912 until 1929 and became a life member in 1920. Thacker also donated the Thacker Shield in 1913. He was the manager of the New Zealand side during their tour of Australia in 1913.
Political career
Thacker was a member of the Christchurch Hospital Board (1907–1922), Lyttelton Harbour Board (1907–1922), Christchurch City Council (1929–1931) and Mayor of Christchurch between 1919 and 1923. The 1919 mayoral election was contested by Thacker, John Joseph Dougall (Mayor of Christchurch 1911–1912) and James McCombs (MP for Lyttelton).
Thacker unsuccessfully contested the 1908 and 1911 general elections. He then contested the Lyttelton by-election in 1913 as an independent Liberal, coming fourth with 5% of the vote in the first ballot.
Thacker was a member of the Liberal Party and represented the Christchurch East electorate in the New Zealand House of Representatives from 1914. He was re-elected in 1919 but was defeated in 1922 by Tim Armstrong from the Labour Party, when he came second out of three candidates.
In 1935, he was awarded the King George V Silver Jubilee Medal.
Death
Thacker died on 3 May 1939 at Christchurch. His wife died in 1955, and they are both buried at Waimairi Cemetery. The Thackers had no children.
References
1870 births
1939 deaths
People educated at Christchurch Boys' High School
Mayors of Christchurch
Deputy mayors of Christchurch
New Zealand Liberal Party MPs
New Zealand hospital administrators
University of Canterbury alumni
Alumni of the University of Edinburgh
New Zealand rugby league administrators
New Zealand MPs for Christchurch electorates
Burials at Waimairi Cemetery
Christchurch City Councillors
Unsuccessful candidates in the 1908 New Zealand general election
Unsuccessful candidates in the 1911 New Zealand general election
Unsuccessful candidates in the 1922 New Zealand general election
Unsuccessful candidates in the 1925 New Zealand general election
New Zealand general practitioners
New Zealand military doctors
Lyttelton Harbour Board members |
14750832 | https://en.wikipedia.org/wiki/Briarcliffe%20College%E2%80%93Patchogue | Briarcliffe College–Patchogue | Briarcliffe College - Patchogue was a campus of Briarcliffe College located in Patchogue, New York, on the south shore of Long Island in Suffolk County. It offered associate and bachelor degree coursework, covering areas including accounting, business administration, criminal justice, computer programming, graphic design, word processing and office technologies. The campus permanently closed in 2018.
History
Briarcliffe College was founded in 1966 to help Long Island residents with their higher education needs and prepare them for the growing business world. In 1980, following growth in the Long Island population, the Briarcliffe College – Patchogue campus was established. With this addition, the Patchogue campus offered day and evening classes, which contributed to growth in adult student enrollment.
Academics
Briarcliffe College – Patchogue offers a variety of programs:
Accounting Program
Briarcliffe College – Patchogue offers both a Diploma and an Associate in Applied Science degree in accounting. The Accounting Program focuses on building strong accounting skills and stresses business knowledge.
Business Administration Program
The Business Administration Program offers an Associate in Applied Science and a Bachelor of Business Administration degree. Students can learn the business, professional, communication and technical skills needed to be successful in today's business world.
Computer Programming
Briarcliffe College – Patchogue offers students a Certificate in Computer Programming. This program teaches students the front and back ends of programming, along with writing, testing and debugging techniques.
Computer Service Technician
The Computer Service Technician Diploma Program at Briarcliffe College – Patchogue offers students the opportunity to learn installation, maintenance and support for computer systems. Graduates of this program can pursue careers as contractors or self-employed professionals.
Criminal Justice Program
Briarcliffe College – Patchogue offers a Criminal Justice Program to students who have a passion for investigative work, technology and public service. By earning an Associate of Science degree, students can pursue a career in fields such as security and corrections.
Graphic Design Program
The Graphic Design Program at Briarcliffe College – Patchogue offers students the opportunity to earn an Associate in Applied Science degree. Students are able to study both traditional and digital design, with courses ranging from theory to emerging communication technology.
Word Processing Secretarial
Students at Briarcliffe College – Patchogue can choose to earn a Diploma in Word Processing Secretarial. Graduates of this program should be able to obtain entry-level employment in the administration field and/or create career flexibility and pursue advancement opportunities.
Office Technologies Program
Briarcliffe College – Patchogue offers students an Office Technologies Program with the choice of a legal, medical or computer tech concentration. Graduates will receive an Associate in Applied Science degree and will be prepared for positions in business and office environments.
Admissions
Admissions into Briarcliffe College – Patchogue may be granted upon completion of an application and a personal interview. Prospective students must also submit an application fee of $35.00.
Accreditation
Briarcliffe College - Patchogue is a higher education institution that is accredited by the Commission on Higher Education of the Middle States Association of Colleges and Schools, located at 3624 Market St., Philadelphia, PA 19104. Briarcliffe College – Patchogue is also authorized to offer Diploma, Associate degree and bachelor's degree programs.
Organizations
Students who attend Briarcliffe College – Patchogue have numerous extracurricular options, such as social groups and student government. The college currently offers no sports but is a member of the United States Collegiate Athletic Association.
References
External links
Briarcliffe College Library
Patchogue Community Profile
Patchogue Village website
United States Collegiate Athletic Association
Patchogue, New York
Former for-profit universities and colleges in the United States
Graphic design schools in the United States
Defunct private universities and colleges in New York (state)
Educational institutions established in 1980
Universities and colleges in Suffolk County, New York
1980 establishments in New York (state)
Educational institutions disestablished in 2018 |
4706989 | https://en.wikipedia.org/wiki/Phenom%20%28rock%20group%29 | Phenom (rock group) | Phenom was a progressive rock group from Bangalore, India, notable for being one of the first Indian rock groups to release their work under a Creative Commons license. Phenom last performed in concert on July 29, 2006.
Creative Commons
On January 26, 2007, Phenom's album Unbound was included on the CD distributed at the Creative Commons India License launch in recognition of it being the first Creative Commons licensed music coming out of India.
Members
2006 Lineup
Gaurav Joshua Vaz – Bass and Backing Vocals
Mrinal Kalakrishnan – Drums and Backing Vocals
Jnaneshwar "JD" Das – Keyboards and Backing Vocals
Trinity "Tiny" D'Souza – Lead Guitar
Mark Lazaro – Lead Vocals
Yashraj Jaiswal – Drums and Background music
2004 Lineup
Gaurav Joshua Vaz – Bass and Lead Vocals
Sashi Wapang – Lead Guitar and Backing Vocals (left group in June 2004)
Mrinal Kalakrishnan – Drums and Backing Vocals
Jnaneshwar "JD" Das – Keyboards and Backing Vocals
2002 Lineup
Gaurav Joshua Vaz – Guitar and Backing Vocals
Sashi Wapang – Lead Guitar
Mrinal Kalakrishnan – Drums and Backing Vocals
Jnaneshwar "JD" Das – Keyboards and Backing Vocals
Noella D'Sa – Lead Vocals (left group in early 2003)
Deepu Jobie John – Bass (left group in mid-2002)
Discography
2004: Phenom Unbound
The album was released under a Creative Commons license and contained five songs.
"Unbound"
"Coloured for this world"
"CAP 5101"
"Resurgence"
"A Little Step"
TV Series
Sacred Games - (2018)
References
External links
http://wearephenom.com – Official Website
http://www.swaroopch.info/archives/2005/12/03/linux-can/ Video of Phenom performing "Linux Can!" at FOSS.IN/2005
http://www.dnaindia.com/report.asp?NewsID=1001566&CatID=2 Article about Phenom's "Linux Can!"
The Hindu reports on Phenom
Indian progressive rock groups
Musical groups established in 2001 |
23168787 | https://en.wikipedia.org/wiki/Harwell%20CADET | Harwell CADET | The Harwell CADET was the first fully transistorised computer in Europe, and may have been the first fully transistorised computer in the world.
The electronics division of the Atomic Energy Research Establishment at Harwell, UK built the Harwell Dekatron Computer in 1951, which was an automatic calculator where the decimal arithmetic and memory were electronic, although other functions were performed by relays. By 1953, it was evident that this did not meet AERE's computing needs, and AERE director Sir John Cockcroft encouraged them to design and build a computer using transistors throughout.
E. H. Cooke-Yarborough based the design around a 64-kilobyte (65,536 bytes) magnetic drum memory store with multiple moving heads that had been designed at the National Physical Laboratory, UK. By 1953 his team had transistor circuits operating to read and write on a smaller magnetic drum from the Royal Radar Establishment. The machine used a low clock speed of only 58 kHz to avoid having to use any valves to generate the clock waveforms. This slow speed was partially offset by the ability to add together eight numbers concurrently.
The resulting machine was called CADET (Transistor Electronic Digital Automatic Computer – backward). It first ran a simple test program in February 1955. CADET used 324 point-contact transistors provided by the UK company Standard Telephones and Cables, which were the only ones available in sufficient quantity when the project started; 76 junction transistors were used for the first stage amplifiers for data read from the drum, since point-contact transistors were too noisy. CADET was built from a few standardised designs of circuit boards which never got mounted into the planned desktop unit, so it was left in its breadboard form. From August 1956 CADET was offering a regular computing service, during which it often executed continuous computing runs of 80 hours or more.
Cooke-Yarborough described CADET as being "probably the second fully transistorised computer in the world to put to use", second to an unnamed IBM machine. Both the Manchester University Transistor Computer and the Bell Laboratories TRADIC were demonstrated incorporating transistors before CADET was operational, although both required some thermionic valves to supply their faster clock power, so they were not fully transistorised. In April 1955 IBM announced the IBM 608 transistor calculator, which they claim was "the first all solid-state computing machine commercially marketed" and "the first completely transistorized computer available for commercial installation", and which may have been demonstrated in October 1954, before the CADET.
By 1956, Brian Flowers, head of the theoretical physics division at AERE, was convinced that the CADET provided insufficient computing power for the needs of his numerical analysts and ordered a Ferranti Mercury computer. In 1958, Mercury number 4 became operational at AERE to accompany the CADET for another two years before the CADET was retired after four years' operation.
See also
History of computing hardware, Second generation: transistors
References
External links
The Harwell CADET Computer
One-of-a-kind computers
Computer-related introductions in 1955
Transistorized computers
Early British computers |
55891573 | https://en.wikipedia.org/wiki/SHIFT%20Inc. | SHIFT Inc. | is a Japanese software testing company, headquartered in Tokyo, that provides software quality assurance and software testing solutions.
Overview
SHIFT Inc. was founded in 2005 by Masaru Tange, who was a manufacturing process improvement consultant.
In its earliest years, it was a small consulting company specializing in manufacturing and business process improvements. In 2007, it entered the software testing industry by undertaking consultancy work for the improvement of E-commerce testing. In 2009, Tange changed the company's direction from process improvement consultancy to the software testing business. The company then grew rapidly enough to be listed on the Tokyo Stock Exchange Mothers market in 2014. As of April 2020, it had a market capitalization of 143 billion yen ($1.3 billion), the largest among listed Japanese companies specializing in software quality assurance and testing services.
The company covers software testing outsourcing, project management office and test strategy planning supports, test execution, test design, automated testing, software inspection, and educational program services.
Notes
References
External links
SHIFT Inc.
Software companies of Japan
Service companies based in Tokyo
Software testing
Companies listed on the Tokyo Stock Exchange
Japanese companies established in 2005
Software companies established in 2005 |
55909484 | https://en.wikipedia.org/wiki/Commercial%20Operating%20System | Commercial Operating System | Commercial Operating System (COS) is a discontinued family of operating systems from Digital Equipment Corporation.
They supported the use of DIBOL, a programming language combining features of BASIC, FORTRAN and COBOL. COS also supported IBM RPG (Report Program Generator).
Implementations
The Commercial Operating System was implemented to run on hardware from the PDP-8 and PDP-11 family.
COS-310
COS-310 was developed for the PDP-8 to provide an operating environment for DIBOL. A COS-310 system was purchased as a package which included a desk, VT-52 VDT (Video Display Tube), and a pair of eight inch floppy drives. It could optionally be purchased with one or more 2.5 MB removable media hard drives. COS-310 was one of the operating systems available on the DECmate II.
Unlike under TSS/8, where each user had only a 4K virtual machine, on COS, each user had (up to) a virtual 32K.
COS-350
COS-350 was developed to support the PDP-11 port of DIBOL, and was the focus for some vendors of turnkey software packages.
Before COS-350, a single-user, batch-oriented implementation for the PDP-11/05 was released; the multi-user, PDP-11/10-based COS came about four years later. The much more powerful PDP-11/34 "added significant configuration flexibility and expansion capability."
See also
Comparison of operating systems
Timeline of operating systems
References
DEC operating systems
Time-sharing operating systems |
35256368 | https://en.wikipedia.org/wiki/Outline%20of%20C%2B%2B | Outline of C++ | The following outline is provided as an overview of and topical guide to C++:
C++ is a statically typed, free-form, multi-paradigm, compiled, general-purpose programming language. It is regarded as an intermediate-level language, as it comprises a combination of both high-level and low-level language features. It was developed by Bjarne Stroustrup starting in 1979 at Bell Labs as an enhancement to the C language.
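A minimal sketch, written for this outline rather than taken from any particular source, illustrates the mix of object-oriented and procedural styles and the static typing mentioned above; the class name Greeter and its message are invented for the example:
  #include <iostream>
  #include <string>

  // A small class: data (a name) together with the member function that uses it.
  class Greeter {
  public:
      explicit Greeter(std::string name) : name_(std::move(name)) {}
      void greet() const { std::cout << "Hello, " << name_ << "!\n"; }
  private:
      std::string name_;
  };

  int main() {
      Greeter g("world");   // statically typed: g's type is fixed at compile time
      g.greet();            // prints "Hello, world!"
      return 0;
  }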
What type of language is C++?
C++ can be described as all of the following:
Programming language — artificial language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine and/or to express algorithms precisely.
Compiled language — programming language implemented through compilers (translators which generate machine code from source code), and not interpreters (step-by-step executors of source code, where no translation takes place).
General-purpose programming language — programming language designed to be used for writing software in a wide variety of application domains.
Intermediate language — language of an abstract machine designed to aid in the analysis of computer programs. The term comes from their use in compilers, where a compiler first translates the source code of a program into a form more suitable for code-improving transformations, as an intermediate step before generating object or machine code for a target machine.
Object-oriented programming language – programming language based on "objects", which are data structures that contain data, in the form of fields, often known as attributes; and code, in the form of procedures, known as methods. An object's procedures can access and modify the data fields of the objects. In object-oriented programming, computer programs are designed by making them out of objects that interact with one another.
Statically typed programming language
General C++ concepts
Name resolution
Argument-dependent name lookup — applies to the lookup of an unqualified function name depending on the types of the arguments given to the function call. This behavior is also known as Koenig lookup, named after its inventor Andrew Koenig (see the sketch after this list).
Auto-linking — mechanism for automatically determining which libraries to link to while building a C or C++ program. It is activated by means of #pragma comment(lib, <name>) statements in the header files of the library.
Classes — Classes define types of data structures and the functions that operate on those data structures. Instances of these datatypes are known as objects and can contain member variables, constants, member functions, and overloaded operators defined by the programmer. The C++ programming language allows programmers to separate program-specific datatypes through the use of classes.
Exception guarantees
Header file
Inner class
One Definition Rule
Opaque pointer
Plain old data structure
Rule of three (C++ programming)
Run-time type information
Sequence point
Single Compilation Unit
Special member functions
Substitution failure is not an error
Template (C++)
Template metaprogramming
Traits class
Undefined behavior
Virtual function calls
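The following minimal sketch, written for this outline, illustrates argument-dependent name lookup from the list above; the namespace, type and function names are invented for the example:
  #include <iostream>

  namespace geometry {
      struct Point { int x, y; };

      // A free function declared in the same namespace as Point.
      void print(const Point& p) {
          std::cout << "(" << p.x << ", " << p.y << ")\n";
      }
  }

  int main() {
      geometry::Point p{1, 2};
      // Unqualified call: because the argument's type comes from namespace geometry,
      // that namespace is also searched, so geometry::print is found without
      // writing the qualified name.
      print(p);
      return 0;
  }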
Issues
Compatibility of C and C++
C++ Toolchain
C++ compilers
C++ libraries
C++ Standard Library
The C++ standard library is a collection of utilities that are shipped with C++ for use by any C++ programmer.
It includes input and output, multi-threading, time, regular expressions, algorithms for both common and less common tasks (find, for_each, swap, etc.), containers such as lists, maps and hash maps (and the equivalent for sets), and a class called vector that is a resizable array. Many other functions are provided by the standard library, but mainly in a form designed to be built upon when creating third-party libraries.
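A brief, self-contained sketch of some of the facilities mentioned above (the values are arbitrary and chosen only for illustration):
  #include <algorithm>
  #include <iostream>
  #include <vector>

  int main() {
      std::vector<int> values{4, 1, 3, 2};                   // vector: the resizable array

      std::sort(values.begin(), values.end());               // a common algorithm
      std::for_each(values.begin(), values.end(),            // for_each, as noted above
                    [](int v) { std::cout << v << ' '; });
      std::cout << '\n';

      auto it = std::find(values.begin(), values.end(), 3);  // find, as noted above
      if (it != values.end())
          std::cout << "3 is at index " << (it - values.begin()) << '\n';
      return 0;
  }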
Standard Template Library (STL)
Other notable libraries
Active Template Library
Adaptive Communication Environment
Algorithmic skeleton
Apache C++ Standard Library
Armadillo (C++ library)
Artefaktur
Asio C++ library
AT&T FSM Library
ATL Server
BALL
Blitz++
Boehm garbage collector
Boost (C++ libraries)
Borland Graphics Interface
Botan (programming library)
C++ AMP
CGAL
Cinder (programming library) — framework for advanced visualization capabilities.
ClanLib
CodeSynthesis XSD
CodeSynthesis XSD/e
CppUnit
Crypto++
CTPP
D-Bus
Database Management Library
Dinkumware
Effi (C++)
Eigen (C++ library)
GDAL
GDCM
GiNaC
Gtkmm
HOOPS 3D Graphics System
Integrated Performance Primitives (IPP) — a multi-threaded software library of functions for multimedia and data processing applications, produced by Intel.
Juce
Kakadu (software)
KFRlib - cross-platform, optimized audio and DSP library.
LEMON (C++ library)
LevelDB
Libarc
LibLAS
Libsigc++
Libx (graphics library)
LiteSQL
LIVE555
Loki (C++)
Math Kernel Library (MKL) — a library of optimized math routines for science, engineering, and financial applications, produced by Intel.
Matrix Template Library
Metakit
Microsoft Foundation Class Library
Object Windows Library
Object-oriented Abstract Type Hierarchy
ODB (C++)
OGRE
Open Asset Import Library
Open Inventor
OpenImageIO
Oracle Template Library
Orfeo toolbox
POCO C++ Libraries
Podofo
Poppler (software)
PTK Toolkit
Qt (framework)
RWTH FSA Toolkit
Sound Object (SndObj) Library
Stapl
SymbolicC++
Threading Building Blocks (TBB) — C++ template library developed by Intel Corporation for writing software programs that take advantage of multi-core processors.
VTD-XML
Windows Template Library
WxWidgets
Xcas
Xerces
YAAF
See also
List of C++ multi-threading libraries
List of C++ multiple precision arithmetic libraries
List of C++ template libraries
History of C++
History of C++
Programming languages that influenced C++
C
Simula
Ada 83
ALGOL 68
CLU
ML
Standardisation History
C++98 — In 1998, the C++ standards committee standardized C++ and published the international standard ISO/IEC 14882:1998 (informally known as C++98).
C++03
C++11 — Approved by ISO as of 12 August 2011, replacing C++03. The name is derived from the tradition of naming language versions by the year of the specification's publication (a short feature sketch follows this list).
C++14 — Announced by ISO on 18 August 2014, replacing C++11.
C++17 — Published by ISO in December 2017, replacing C++14.
C++20 — Published by ISO in December 2020, replacing C++17.
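As promised in the C++11 entry above, a short illustrative sketch shows a few of the language and library additions associated with these revisions; the variable names are invented and the program requires a compiler in at least C++14 mode:
  #include <iostream>
  #include <memory>
  #include <vector>

  int main() {
      std::vector<int> data{1, 2, 3};             // brace initialization (C++11)

      for (auto v : data)                         // range-based for and auto (C++11)
          std::cout << v << ' ';
      std::cout << '\n';

      auto twice = [](int x) { return 2 * x; };   // lambda expression (C++11)
      std::cout << twice(21) << '\n';

      auto p = std::make_unique<int>(42);         // std::make_unique (added in C++14)
      std::cout << *p << '\n';
      return 0;
  }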
Example source code
Articles with example C++ code
C++ publications
Books about C++
The C++ Programming Language — widely regarded as the standard textbook for the language. By Bjarne Stroustrup.
The Design and Evolution of C++ — a book by Bjarne Stroustrup about the birth of C++.
Modern C++ Design — a book by Andrei Alexandrescu on various design patterns using C++.
Magazines about C++
C++ Report — was a bi-monthly professional computer magazine published by SIGS Publications Group.
C++ personalities
Alexander Stepanov
Andrei Alexandrescu
Andrew Koenig
Bjarne Stroustrup– Danish computer scientist, most notable for the creation and development of C++.
David Abrahams
Douglas C. Schmidt
Herb Sutter
Jim Coplien (a.k.a. James O. Coplien)
Pete Becker
Robert Cecil Martin
Scott Meyers
C++ dialects
The C++ standardisation committee discourages dialects (preferring that such problems be solved by new functionality in the standard library, as was done with multi-threading for parallel programming); however, some dialects have been created for various reasons (to remove features that are harder to implement, to respond to a programming trend, etc.):
Programming language dialect — (relatively small) variation or extension of the language that does not change its intrinsic nature.
Charm++ — parallel object-oriented programming language based on C++ and developed in the Parallel Programming Laboratory at the University of Illinois. Charm++ is designed with the goal of enhancing programmer productivity by providing a high-level abstraction of a parallel program while at the same time delivering good performance on a wide variety of underlying hardware platforms.
Embedded C++ — dialect of C++ for embedded systems, built "to provide embedded systems programmers with a subset of C++ that is easy for the average C programmer to understand and use".
Embedded system — computer system designed for specific control functions for a facility, machine, or device in which it is embedded as an integrated part of the product. Embedded systems control many devices in common use today.
R++ — rule-based programming language developed by Bell Labs in the 1990s, based on C++.
Sieve C++ Parallel Programming System — C++ compiler and parallel runtime designed and released by Codeplay that aims to simplify the parallelization of code so that it may run efficiently on multi-processor or multi-core systems.
C++ language extensions
AspectC++ — aspect-oriented extension of C and C++ languages.
C++/CLI — Microsoft's language specification intended to supersede Managed Extensions for C++. It is a complete revision that aims to simplify the older Managed C++ syntax (which is now deprecated). C++/CLI is standardized by Ecma as ECMA-372. It is currently available only in Visual Studio 2005, 2008, 2010, 2012, 2013 and 2015 (also included in the Express Editions).
Common Language Infrastructure — open specification developed by Microsoft and standardized by ISO and ECMA that describes the executable code and runtime environment that form the core of the Microsoft .NET Framework and the free and open source implementations Mono and Portable.NET.
C++/CX — language extension for C++ compilers from Microsoft that enables C++ programmers to write programs for the new Windows Runtime platform, or WinRT. It brings a set of syntax and library abstractions that interface with the COM-based WinRT programming model in a way that is natural to native C++-programmers.
Cilk Plus — multithreaded parallel computing extension of C and C++ languages.
CUDA C/C++ — compiler and extensions for parallel computing using Nvidia graphics cards.
Managed Extensions for C++ — deprecated Microsoft set of deviations from C++, including grammatical and syntactic extensions, keywords and attributes, to bring the C++ syntax and language to the .NET Framework. These extensions allowed C++ code to be targeted to the Common Language Runtime (CLR) in the form of managed code as well as continue to interoperate with native code. Superseded by C++/CLI.
See also
Outline of computer programming
Outline of software
Outline of software engineering
References
External links
C++
C++ |
65498858 | https://en.wikipedia.org/wiki/Central%20Philippine%20University%20-%20College%20of%20Computer%20Studies | Central Philippine University - College of Computer Studies | The Central Philippine University College of Computer Studies, also referred to as CPU CCS, CPU College of Computer Studies or CPU Computer Studies, is one of the academic units of Central Philippine University, a private university in Iloilo City, Philippines. Founded as a department under the Central Philippine University - College of Business and Accountancy (College of Commerce) in 1995 and established as a separate college in 2003, the college confers four undergraduate degrees and one graduate degree. CPU College of Computer Studies has been accredited Level II by the Philippine Accrediting Association of Schools, Colleges and Universities (PAASCU) in the academic programs of Computer Science, Information Systems, and Information Technology.
Academic programs
The CPU College of Computer Studies is accredited with the Philippine Accrediting Association of Schools, Colleges and Universities. At present, the college confers four undergraduate degrees and one graduate degree. The Bachelor of Science in Information Technology (BSIT) program of the college is a ladderized program regulated under the Commission on Higher Education and the Technical Education and Skills Development Authority (TESDA) of the Philippines.
Undergraduate programs
B.S. in Computer science
B.S. in Digital media and interactive arts
B.S. in Information technology
B.S. in Information systems (defunct)
B.S. in Library and information science
Graduate programs
The CPU College of Computer Studies offers master's degrees under the CPU School of Graduate Studies.
M.S. in Computer science
Facilities
The CPU College of Computer Studies is housed in the two-storey Mary Thomas Hall. Its facilities include air-conditioned classrooms and computer laboratories.
To reinforce the college's course offerings, Central Philippine University formed partnerships with CISCO and ORACLE. The CISCO Networking Academy designation offers elective and certification courses for information technology skills for students and professionals. The college's consortium with the ORACLE Academic Initiative Partner involves the same program and initiatives on information technology it has with CISCO.
For library facilities, the Central Philippine University Library, with Henry Luce III Library as the main library, serves as the college's library. The university's athletic facilities, on the other hand, serve the college's needs for its students' Physical Education classes and for the training of its athletic team, the CPU-CCS Warriors.
References
External links
cpu.edu.ph/college-of-computer-studies (Official website of CPU College of Computer Studies)
facebook.com/CPUCCS.StudentCouncil/ (CPU College of Computer Studies Provincial Council)
cpu.edu.ph/ (Official website of Central Philippine University)
Computer Studies |
32244195 | https://en.wikipedia.org/wiki/DataSplice | DataSplice | DataSplice, LLC is a mobile software company headquartered in Fort Collins, Colorado and offers mobile applications which extend enterprise systems, including packaged software for Enterprise Asset Management (EAM) and computerized maintenance management systems (CMMS). The software provides an interface from these systems to handhelds, smartphones, tablet computers and mobile computers. It may also be used on a desktop system as a unified/simplified interface for multiple systems.
The software offered is a mobile middleware, with an emphasis on IBM's Maximo EAM system. Its primary client base is focused on utilities, gas/oil, defense, aerospace and other markets that utilize field service management systems requiring tracking, asset management and regulatory accountability.
DataSplice does not use a proprietary software platform, but rather utilizes the Common Language Infrastructure (CLI) of the Microsoft .NET Framework's ADO.NET, which allows for connectivity to different database systems such as MySQL and Oracle. The extensible system consists of three components, including a remote client (for handheld and/or desktop use), a server which communicates with the primary EAM, and an administration client for configuring the system.
History
Originally known as Optimization Resources, which was founded in 1991, DataSplice was spun off as both a company and product in 2001. The original management and development staff continue to be engaged in daily operations. DataSplice is a privately held company.
DataSplice was acquired by Prometheus Group in 2018.
Products
The product's main emphasis is providing a simplified mobile interface into IBM Maximo.
The product consists of three components: Remote Client, Administration Client and the Server. The primary modules are Inventory, Work Orders, Inspections, Condition Monitoring and Asset Management.
The product's remote client is HTML5-compliant and, as such, is platform-agnostic. Supported systems include iOS (iPad), Android (Droid), and Windows Mobile and 8 (Surface, Phone and Desktop). The remote client is also able to utilize bar code scanners, mobile printers, and RFID readers.
In 2010, DataSplice introduced InspecTMI, a field inspections and operator rounds mobile collection system geared toward highly regulated inspection scenarios, such as substations, power generation, and safety inspections.
References
Enterprise architecture
Mobile technology
Business software
Data |
58177654 | https://en.wikipedia.org/wiki/Data%20Facility%20Storage%20Management%20Subsystem%20%28MVS%29 | Data Facility Storage Management Subsystem (MVS) | Data Facility Storage Management Subsystem (DFSMS) is a central component of IBM's flagship operating system z/OS. It includes access methods, utilities and program management functions.
Data Facility Storage Management Subsystem is also a collective name for a collection of several products, all but two of which are included in the DFSMS/MVS product.
History
In 1972 IBM announced the first release of the OS/VS2 operating system for IBM System/370 systems; that release later was known as Single Virtual Storage (SVS). In 1974 IBM announced release 2.0; that release and all subsequent releases became known as Multiple Virtual Storage (MVS). All releases of OS/VS2 were available at no charge because the software cost was bundled with the hardware cost. OS/VS2 Release 3.8 was the last free release of MVS.
In the late seventies and early eighties IBM announced:
5740-XE1 MVS/System Extensions (MVS/SE)
MVS/SE improves the performance and RAS of OS/VS2 (MVS)
5740-AM6 Data Facility Device Support (DFDS) for OS/VS1
5740-AM7 Data Facility Device Support (DFDS) for MVS
DFDS supports an indexed VTOC, and with the proper PTF supports the Speed Matching Buffer on the IBM 3880.
5740-XYQ Data Facility Extended Function (DFEF)
DFEF offers a new type of VSAM catalog, but had reliability problems that were only resolved in DFP.
5740-AM3 Sequential Access Method Extended (SAM-E)
SAM-E improves the performance of BPAM, BSAM and QSAM on direct access storage devices.
5740-AM8 Access Method Services Cryptographic Option
5748-UT2 Offline 3800 Utility
In June 1980, IBM announced MVS/System Product (MVS/SP) as a replacement for MVS/SE.
On October 21, 1981, IBM announced new Kxx models of the 3081, supporting a new architecture known as System/370 Extended Architecture (370-XA).
IBM also announced MVS/Extended Architecture (MVS/XA), consisting of MVS/SP Version 2 and a corequisite new product, Data Facility Product (DFP), 5665-284, replacing five of the products listed above, the linkage editor and the loader.
On May 17, 1983, IBM announced MVS/370 Data Facility Product (MVS/370 DFP), 5665-295, for MVS/SP Version 1 Release 3, replacing the same five programs as DFP for MVS/XA.
On February 5, 1985, IBM announced MVS/XA Data Facility Product (MVS/XA DFP) Version 2, 5655-XA2, as a replacement for MVS/XA Data Facility Product Version 1, 5665-284.
DFP replaced BDAM, BPAM, BSAM, ISAM, QSAM and VSAM.
On February 15, 1988 IBM announced MVS/System Product Version 3 (MVS/ESA), it also announced MVS/Data Facility Product Version 3 (MVS/DFP), 5665-XA3; MVS/SP V3 required either MVS/XA Data Facility Product Version 2, 5655-XA2, or Version 3. More recent releases were corequisites for MVS/ESA SP Version 4 and MVS/ESA SP Version 5.
On April 19, 1988, IBM announced the umbrella term Data Facility Storage Management Subsystem for facilities provided by the programs
IBM MVS/Data Facility Product (MVS/DFP) Version 3 Release 1.0
IBM Data Facility Data Set Services (DFDSS) Version 2 Release 4.0
IBM Data Facility Hierarchical Storage Manager (DFHSM) Version 2 Release 4.0
IBM Resource Access Control Facility (RACF) Release 8.1
IBM Data Facility Sort (DFSORT) Release 10.0
In addition to replacing part of the device support in the base MVS/SP, DFP replaces the Linkage Editor and several utility programs and service aids.
DFP is no longer available as a separate product, but has become part of Data Facility Storage Management Subsysem, under the name DFSMSdfp.
On May 19, 1992, IBM announced DFSMS/MVS, 5695-DF1, replacing MVS/Data Facility Product (MVS/DFP) Version 3, 5665-XA3,
Data Facility Hierarchical Storage Manager (DFHSM) Version 2, 5665-329
and Data Facility Data Set Services (DFDSS) Version 2, 5665-327.
DFSMS/MVS also replaced utilities and service aids.
DFDSS and DFHSM became optional chargeable features of DFSMS; DFSORT and RACF remained separate products.
While DFSMS/MVS Release 1 still included ISAM, IBM eventually dropped it, but continued to support the ISAM compatibility interface to VSAM.
DFSMS/MVS R1 included the optional Removable Media Manager (DFSMSrmm), which supports both manual tape libraries and the 3495 Tape Library Dataserver.
On March 1, 1994, IBM announced DFSMS/MVS Release 2.
On March 1, 1994, IBM announced DFSMS/MVS Release 3.
On March 1, 1994, IBM announced DFSMS/MVS Release 4.
On March 1, 1994, IBM announced DFSMS/MVS Release 5.
Components
This section describes features of DFSMS from the perspective of z/OS; it does not distinguish between features added by, e.g., DFDS, and features added in the latest release of z/OS.
DFSMSdss
DFSMSdss is a chargeable feature of DFSMS that can dump and restore selected data sets and selected volumes based on specifications in control statements. It is also referred to in documentation as a data mover.
DFSMSdfp
DFSMSdfp replaces the older direct, index and sequential access methods, the utilities and service aids, the linkage editor, the loader and program fetch. It is the component to which new device support code is added.
DFSMSdfp adds a number of loosely related facilities.
Indexed VTOC
The VTOC structure inherited from OS/360 uses records with 44 byte keys, and a sequential search using a Search Key Equal/TIC *-8 loop. The VTOC Index (VTOCIX) is an optional data set that indexes Data Set Control Blocks (DSCBs) and allows a faster search.
ICF catalog
The Improved Catalog Facility (ICF) replaces the OS/360 Control Volume (CVOL) and the VSAM catalog with a more resilient catalog structure.
PDSE
Partitioned Data Set Extended (PDSE) is a new type of dataset that resolves several issues with the old PDS organization but that can be read and written by existing BPAM, BSAM and QSAM code.
System Managed Storage
System Managed Storage (SMS) is a set of facilities for controlling the placement, migration and retention of datasets on direct access storage devices that is more flexible than older methods, e.g., VOL=SER specifications in JCL. Prior to SMS, the installations defined unit names during system generation, and two pools of DASD volumes, called PUBLIC and STORAGE, in a member of the system parameter library. In addition, users had to explicitly define characteristics of new datasets.
With SMS, an installation can define and update several types of lists, described by IBM as
Data Class
Data definition parameters
Storage Class
Availability and accessibility requirements
Management Class
Data migration, backup, and retention attributes
Storage Group
List of storage volumes with common properties
Aggregate Group
Backup or recovery of all data sets in a group in a single operation
Copy Pool
The installation can also define automatic class selection (ACS) rules that can test, e.g., data set name, and select list names based on installation policies and user requests. A common scenario is for the installation to write a storage group ACS routine to ignore any UNIT parameter and to select the storage group, and to write a DATACLASS ACS rule to assign a dataclass that has default DCB parameters, with both making decisions based on the data set name.
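As a loose illustration of the kind of name-based policy decision an ACS routine makes, the following sketch is written in ordinary C++ rather than the ACS language; the storage class names and data set name prefixes are invented and do not come from any real installation:
  #include <iostream>
  #include <string>

  // Hypothetical policy: map a data set name to a storage class by prefix.
  std::string selectStorageClass(const std::string& datasetName) {
      if (datasetName.rfind("SYS1.", 0) == 0)      // system data sets
          return "";                               // left unmanaged in this sketch
      if (datasetName.rfind("PAYROLL.", 0) == 0)   // critical application data
          return "FASTDASD";
      return "STANDARD";                           // default policy
  }

  int main() {
      std::cout << selectStorageClass("PAYROLL.MASTER") << '\n';  // FASTDASD
      std::cout << selectStorageClass("TEST.DATA") << '\n';       // STANDARD
      return 0;
  }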
When SMS is active, several new parameters are available in dynamic allocation and the DD JCL statement, e.g., DSNTYPE.
The Binder
The Binder is a program similar to the linkage editor that can also manage program objects on a PDSE library.
Remote copy and mirroring
DFSMSdfp provides facilities for using several different protocols to duplicate or mirror DASD volumes to a remote location.
OAM
Object Access Method (OAM) maintains a library of unstructured objects. Such objects are sometimes referred to as BLOBs.
DFSORT
DFSORT is a sort/merge utility that is part of the DFSMS family but not part of the DFSMS/MVS product.
DFSMShsm
DFSMShsm, originally Hierarchical Storage Manager (HSM), 5740-XRB, and later Data Facility Hierarchical Storage Manager Version 2 (DFHSM), 5665-329, before becoming an optional component of DFSMS, is a utility for archiving and retrieving datasets. It migrates data from faster storage to less expensive storage, either based on time stamps or explicit requests. It uses DFSMSdss as a data mover.
RACF
RACF is a security program that is part of the DFSMS family but not part of the DFSMS/MVS product.
It includes an API called SAF that allows applications to do authentication and to check access privileges, and also includes an interface to LDAP.
DFSMSrmm
The Removable Media Manager (DFSMSrmm) controls libraries of tapes, whether manually mounted on a tape drive or stored in an automated tape library.
Notes
References
IBM mainframe operating systems |
31142544 | https://en.wikipedia.org/wiki/Single-chip%20Cloud%20Computer | Single-chip Cloud Computer | The Single-Chip Cloud Computer (SCC) is a computer processor (CPU) created by Intel Corporation in 2009 that has 48 distinct physical cores that communicate through architecture similar to that of a cloud computer data center. Cores are the parts of a processor that execute program instructions. The SCC was a product of a project started by Intel to research multi-core processors and parallel processing (performing multiple calculations at once). Intel also wanted to experiment with incorporating the designs and architecture of large cloud data centers into a single processing chip, taking the aspect of cloud computing in which many remote servers communicate with each other and applying it to a microprocessor. The name "Single-chip Cloud Computer" originated from this concept.
Uses
The SCC is currently still being used for research purposes. It currently can run the GNU operating system on the chip, but cannot boot Windows. Some applications of the SCC are web servers, data informatics, bioinformatics, and financial analytics.
Technical details
Intel developed this new chip architecture based on large cloud data centers: the cores are spread across the chip but are able to communicate with each other directly. The chip contains 48 P54C Pentium cores connected with a 4×6 2D mesh. This mesh is a group of 24 tiles set up in four rows and six columns. Each tile contains two cores and a 16 KB (8 KB per core) message passing buffer (MPB) shared by the two cores, which essentially acts as a router. This router allows each core to communicate with the others. Previously, cores had to send information back to the main memory, where it would be re-routed to other cores. The SCC contains 1.3 billion transistors, built on a 45-nanometer (nm) process, which can amplify signals or act as switches and turn core pairs on and off. The chip uses anywhere from 25 to 125 watts of power depending on the processing demand; for comparison, the Intel i7 processor uses 156 watts. Four DDR3 memory controllers are on each chip, connected to the 2D mesh as well. These controllers are capable of addressing 64 GB of random-access memory. The DDR3 memory is used to help each tile communicate with the others; without it the chip would not be functional. The controllers also work with the transistors to control when certain tiles are turned on and off to save power when not in use. When all of these pieces are put together and programmed appropriately, the result is a processor that is fast, powerful, and energy-efficient, with a framework resembling a network of cloud computers.
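As a loose software analogy for the message passing buffers described above (this is not the RCCE interface described below; its names and signatures are not reproduced here), two threads standing in for two cores exchange a message through a small shared buffer:
  #include <iostream>
  #include <mutex>
  #include <queue>
  #include <string>
  #include <thread>

  // Illustrative stand-in for one tile's message passing buffer.
  struct MessageBuffer {
      std::mutex m;
      std::queue<std::string> q;
      void put(const std::string& msg) { std::lock_guard<std::mutex> lk(m); q.push(msg); }
      bool get(std::string& out) {
          std::lock_guard<std::mutex> lk(m);
          if (q.empty()) return false;
          out = q.front(); q.pop();
          return true;
      }
  };

  int main() {
      MessageBuffer buffer;
      std::thread core0([&buffer] { buffer.put("hello from core 0"); });
      core0.join();                        // "core 0" has written its message

      std::string msg;
      if (buffer.get(msg))                 // "core 1" reads it from the buffer
          std::cout << msg << '\n';
      return 0;
  }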
Modes of operation
The SCC comes with RCCE, a simple message passing interface provided by Intel that supports basic message buffering operations. The SCC has two modes that it can operate under, processor mode and mesh mode:
Processor mode
In processor mode cores are on and executing code from the system memory and programmed I/O (inputs and outputs) through the system which is connected to the system board FPGA. Loading memory and configuring the processor for bootstrapping (sustaining after the initial load) is currently done by software running on the SCC's management console that's embedded in the chip.
Mesh mode
Cores are turned off. Only the routers, transistors and RAM controllers are on and they are sending and receiving large packets of data. Additionally there is no memory map.
The future
Intel plans to share this technology with other companies such as HP, Yahoo, and Microsoft so that multiple companies can research the SCC and advance the technology more quickly and efficiently. They hope to make the SCC scalable to 100+ cores. One way they hope to achieve this is by having each chip communicate with another chip, so that two chips could be put together to double the number of cores. They hope to improve parallel programming productivity and power management to take advantage of the chip's architecture and large number of cores. Additionally, they plan to experiment further with this architecture and similar chip architectures to develop many-core scalable processors that maximize the processing power of the cores while being power-efficient.
See also
Intel MIC
Intel Tera-Scale
Teraflops Research Chip
References
Cloud computing
Intel
Intel microprocessors
Manycore processors
Parallel computing |
1954693 | https://en.wikipedia.org/wiki/Gutenprint | Gutenprint | Gutenprint (formerly Gimp-Print) is a collection of free-software printer drivers for use with UNIX spooling systems, such as CUPS, lpr and LPRng. These drivers provide printing services for Unix-like systems (including Linux and macOS), RISC OS and Haiku.
It was originally developed as a plug-in for the GIMP, but later became a more general tool for use by other programs and operating systems (macOS and Windows). When Apple introduced Mac OS X, it omitted printer drivers, claiming that it was the printer manufacturers' task to produce these. Many manufacturers did not update their drivers, and since Apple had chosen to use CUPS as the core of its printing system, Gimp-Print filled the void.
Gutenprint has more than 1,300 drivers for Apollo, Apple, Brother, Canon, Citizen, Compaq, Dai Nippon, DEC, Epson, Fujifilm, Fujitsu, Gestetner, HP, IBM, Infotec, Kodak, Kyocera, Lanier, Lexmark, Minolta, NEC, NRG, Oki, Olivetti, Olympus, Panasonic, PCPI, Raven, Ricoh, Samsung, Savin, Seiko, Sharp, Shinko, Sony, Star, Tally, Tektronix and Xerox printers.
Many users incorrectly called it Gimp, so the software was renamed Gutenprint to clearly distinguish it from the GIMP. The name Gutenprint recognizes Johannes Gutenberg, the inventor of the movable type printing press.
Epson backend
The Epson backend is in active development; new printers, bug fixes and capability additions are contributed in each new release.
Canon backend
This backend is in active development, and new printers, bug fixes and capability additions are contributed in each new release.
Canon printers use intelligent printheads, which control the quality of the final output given metadata sent to the printer from the driver. A consequence of this design is that the print quality is not specified in resolution alone, but via a "resolution mode" quality setting (up to 5 quality settings available at a time). The resolution parameter in the driver-output data is only a meta-resolution, typically either 300 or 600 dpi, sometimes 1200 dpi for certain monochrome or high-quality photo modes on a limited number of printers. The firmware then controls the printhead and creates physical ink output up to the marketed resolution.
The available quality selections depend on a number of parameters (as applicable): the media to be printed on, duplex or simplex, borderless or bordered, color or monochrome printing, inkset selection, and cartridge selection. Thus, there are a number of available "resolution modes" per media type, some of which will be available depending on the other parameters set for the print job.
Since in Gutenprint all options are always available via the PPD, the driver attempts to select reasonable defaults in cases where the user settings are in contradiction. The prioritization is as follows: media type, resolution mode, cartridge selection, inkset selection, duplex selection. When a parameter clash is detected by the driver, resolution mode and other parameters are set according to the priority above, and substitution of the resolution mode is carried out to try to maintain the quality initially requested.
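A rough sketch of that substitution idea in ordinary C++ (the media and mode names are invented; this is not Gutenprint source code):
  #include <iostream>
  #include <string>
  #include <vector>

  // Keep the requested resolution mode if the medium supports it,
  // otherwise fall back to the best mode the medium does support.
  std::string substituteMode(const std::string& requestedMode,
                             const std::vector<std::string>& modesForMedia) {
      for (const auto& mode : modesForMedia)      // modes listed best-first
          if (mode == requestedMode)
              return mode;                        // no clash: keep the request
      return modesForMedia.front();               // clash: substitute a valid mode
  }

  int main() {
      std::vector<std::string> plainPaperModes{"High", "Standard", "Draft"};
      std::cout << substituteMode("Photo", plainPaperModes) << '\n';   // prints "High"
      std::cout << substituteMode("Draft", plainPaperModes) << '\n';   // prints "Draft"
      return 0;
  }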
Borderless selection, added in version 5.2.9, is currently not a part of the prioritization and replacement algorithm, as only a small number of printers have been analyzed to discover the appropriate modes and media for borderless printing.
Unmaintained backends
The PCL, color laser, and Lexmark backends are currently unmaintained. Volunteers are welcome.
Fairly often, printers that would use these backends have emulation capability for other languages, in particular Postscript. In such a case, the printer can be configured to use a standard Postscript driver.
See also
LPD
CUPS
References
External links
Gutenprint official website
Gutenprint official SourceForge.net project page
Free device drivers
Linux drivers
Linux software
MacOS graphics software
Unix software |
616266 | https://en.wikipedia.org/wiki/Connectix | Connectix | Connectix Corporation was a software and hardware company, noted for having released innovative products that were either made obsolete as Apple Computer incorporated the ideas into system software, or were sold to other companies once they became popular. It was formed in October 1988 by Jon Garber; dominant board members and co-founders were Garber, Bonnie Fought (the two were later married), and close friend Roy McDonald. McDonald was still Chief Executive Officer and president when Connectix finally closed in August 2003.
Products
Primary products included these:
Virtual: Its original flagship product, which introduced virtual memory to the Mac OS years before Apple's implementation in System 7. Virtual also runs on a motley assortment of accelerator cards for the original Mac, Mac Plus, and Mac SE, which were not supported by Apple.
HandOff II: The file launcher developed by Fred Hollander of Utilitron, Inc. This INIT for Macintosh solved the "Application Not Found" problem by launching a substitute application for the one that created the file the user was trying to open. Apple would later build a similar functionality into System 7.
SuperMenu: The first commercial hierarchical Apple menu, developed by Fred Hollander of Utilitron, Inc. Again, Apple would make a hierarchical Apple menu standard in System 7, by buying one of the many shareware versions of the same concept.
MODE32: Software which allows 32-bit memory management on "32-bit dirty" Macintosh systems. Later bought by Apple and distributed for free, at least in part to settle a class-action lawsuit brought by customers who demanded to know why their 32-bit 68020 microprocessors could not access more than 8 megabytes of RAM.
Optima: Makes System 6 32-bit clean and puts a Macintosh IIsi into 32-bit mode. This makes all of the physical RAM addressable by System 6. It can have one application open at a time.
MAXIMA: A RAM disk utility, better than the one that later came with the Mac OS because it preserved its contents across reboots and allowed booting from the RAM disk.
Connectix Desktop Utilities (CDU): A collection of utilities for desktop systems, including utilities for power management (screen dimming and automatic power down), synchronizing files when multiple disks are used, and custom desktop background images. A version of the CDU software received an Energy Star Compliant Controlling Device status from the US Environmental Protection Agency (EPA) on the basis of the software's power management functionality.
Connectix Powerbook Utilities (CPU): A collection of utilities designed to simplify common tasks for laptop users.
RAM Doubler: The first product to combine compression with virtual memory. A top selling Mac utility for many years which eventually was made obsolete as Apple improved their own virtual memory. There is also a RAM Doubler for Windows 3.1 which uses compression to increase system resources, allowing more applications to run. RAM Doubler was something of a case study for porting Macintosh products to the PowerPC processor, as CEO Roy McDonald presented a paper detailing the company's porting efforts at the Sumeria Technology and Issues Conference on June 30, 1994.
Speed Doubler: Software that combines an enhanced disk cache, better Finder copy utility, and a dynamically recompiling 68K-to-PowerPC emulator, which is faster than both the interpretive emulator that shipped in the original PowerPCs and the dynamically recompiling emulator that Apple shipped in later machines. It was made obsolete as 68K applications became less common and OS code improved.
Surf Express: A local proxy server designed to accelerate the web browsing experience by caching and auto-refreshing frequently visited web sites. Offered for both Mac OS and Windows 95.
QuickCam: The first webcam. It was originally the sole design of Jon Garber, who wanted to call it the "Mac-camera" but was vetoed by marketing, which saw the possibility of it one day becoming a cross-platform product. It became the first Connectix Windows product 14 months later, with RAM Doubler for Windows 3.1 being the next. The Mac QuickCam shipped in August 1994, RAM Doubler for Windows in April 1995, and QuickCam for Windows in October 1995. The line was later sold to Logitech. QuickCam is now considered one of the top gadgets of all time.
DoubleTalk: Access Windows-Based Network Resources - Access Windows fileservers, transfer files to and from shared Windows workstations over the network and print to shared PC-based PostScript printers.
Virtual Game Station: PlayStation emulation software. Sold to Sony, who bought it only after their lawsuit to stop it failed, and then dropped the product immediately.
Virtual PC and Virtual server: Emulation software of x86-based personal computers for the Macintosh, Windows and OS/2. Sold to Microsoft, the transaction was completed on February 18, 2003.
With the sale of Virtual PC, development and support staff were transferred to Microsoft, including Connectix's Chief Technical Officer Eric Traut, but not any of the Connectix board members or the Technical Support department. Its Macintosh products, including DoubleTalk, CopyAgent and RAM Doubler, were discontinued.
References
Software companies disestablished in 2003
Software companies established in 1988
Defunct computer hardware companies
Defunct software companies
Microsoft acquisitions
2003 mergers and acquisitions |
2179145 | https://en.wikipedia.org/wiki/Journal%20of%20Statistical%20Software | Journal of Statistical Software | The Journal of Statistical Software is a peer-reviewed open-access scientific journal that publishes papers related to statistical software. The Journal of Statistical Software was founded in 1996 by Jan de Leeuw of the Department of Statistics at the University of California, Los Angeles. Its current editors-in-chief are Achim Zeileis, Bettina Grün, Edzer Pebesma, and Torsten Hothorn. It is published by the Foundation for Open Access Statistics. The journal charges no author fees or subscription fees.
The journal publishes peer-reviewed articles about statistical software, together with the source code.
It also publishes reviews of statistical software and books (by invitation only). Articles are licensed under the Creative Commons Attribution License, while the source codes distributed with articles are licensed under the GNU General Public License.
Articles are often about free statistical software and coverage includes packages for the R programming language.
Abstracting and indexing
The Journal of Statistical Software is indexed in the Current Index to Statistics and the Science Citation Index Expanded. Its 2018 Impact Factor in Journal Citation Reports is 11.655. The journal was named a Rising Star by Science Watch in 2011.
References
External links
Foundation for Open Access Statistics
Computational statistics journals
Creative Commons Attribution-licensed journals
R (programming language)
Statistical Software
Publications established in 1996
Open access journals
Computer science journals
Online-only journals |
30855048 | https://en.wikipedia.org/wiki/William%20J.%20Donahue | William J. Donahue | William J. "Bill" Donahue is a retired lieutenant general for the United States Air Force who transformed networks and communications during his long career. He retired in May 2000 as the director of communications and information at Air Force Headquarters and commander of the Air Force Communications and Information Center in Washington, D.C. During his 33-year Air Force career, Donahue served in a variety of communications, information, command and control positions at virtually every level in the Air Force. During his active-duty career, Donahue led the Internet and information technology transformation in the Air Force.
Donahue has an undergraduate degree in mathematics and a master's degree in logistics management from the Air Force Institute of Technology. He is also a graduate of the National War College, the executive development program of the University of California, Berkeley, and the national security program of Harvard's John F. Kennedy School of Government. Since retirement from the Air Force, Donahue has worked in the government information technology industry.
He is executive vice president, federal solutions for Sytel and is a member of several corporate and advisory boards for leading information technology companies. From 2000 to 2003 Donahue was vice president and general manager for CSC's aerospace business unit, where he had operational responsibility for the delivery of information technology solutions, support services, and space systems solutions to the Air Force. He currently is involved in promoting and advocating for diabetes research.
Air Force biography
Lt. Gen. Donahue was director, communications and information, Headquarters U.S. Air Force, and commander, Air Force Communications and Information Center, Washington, D.C. He was responsible for strategic plans, doctrine, policies, architecture and standards for communications and information systems in the Air Force. He was the functional manager for more than 75,000 communications and information professionals in the Air Force. He was responsible for three field operating agencies: the Air Force Communications Agency, the Air Force Pentagon Communications Agency and the Air Force Frequency Management Agency.
The general entered active duty in September 1966 and was commissioned in November 1966 through Officer Training School. He has commanded two communications groups and has served as the chief communications-computer officer for the Iceland Defense Force, two numbered air forces and two major commands.
Education
1966 Bachelor of Arts degree in mathematics, Bellarmine College, Kentucky
1972 Master of Science degree in logistics management, Air Force Institute of Technology
1973 Squadron Officer School, Maxwell Air Force Base, Alabama
1985 National War College, Fort Lesley J. McNair, Washington, D.C.
1990 University of California's Executive Development Course
Assignments
September 1966 – November 1966, student, Officer Training School, Lackland Air Force Base, Texas
December 1966 – October 1967, student, basic communications-electronics course, Keesler Air Force Base, Mississippi
October 1967 – January 1971, communications operations officer, 2135th Communications Squadron, Ramstein Air Base, West Germany
January 1971 – February 1972, student, Wright-Patterson Air Force Base, Ohio
February 1972 – April 1973, instructor, basic communications-electronics officer course, Keesler Air Force Base, Mississippi
April 1973 – April 1975, chief, Electronic Principles Branch, School of Applied Aerospace Science, Keesler Air Force Base, Mississippi
December 1975 – November 1979, logistics plans officer, Headquarters Air Force Communications Service, Richards-Gebaur Air Force Base, Missouri
November 1979 – October 1980, assistant chief of staff, Headquarters Air Force Communications Command, Scott Air Force Base, Illinois
October 1980 – September 1981, assistant chief of staff, communications-electronics, Iceland Defense Force, Keflavik Naval Air Station, Iceland
September 1981 – December 1982, executive officer, Headquarters Air Force Communications Command, Scott Air Force Base, Illinois
December 1982 – June 1984, commander, 1901st Communications Group, Travis Air Force Base, California
June 1984 – July 1985, student, National War College, Fort Lesley J. McNair, Washington, D.C.
July 1985 – July 1987, commander, 1956th Communications Group, and deputy chief of staff for communications-computer systems, Headquarters 5th Air Force, Yokota Air Base, Japan
July 1987 – May 1989, deputy program manager, then program manager, Worldwide Military Command and Control System Information Systems Joint Program Management Office, McLean, Virginia
May 1989 – August 1991, assistant to the deputy chief of staff for command, control, communications and computers, Headquarters U.S. Air Force, Washington, D.C.
August 1991 – June 1992, deputy chief of staff for communications-computer systems, Headquarters Tactical Air Command, Langley Air Force Base, Virginia
June 1992 – August 1994, director, communications-computer systems, Headquarters Air Combat Command, Langley Air Force Base, Virginia
August 1994 – December 1996, director, command control systems for NORAD; director, command control systems for U.S. Space Command; and director, communications-computer systems for Air Force Space Command, Peterson Air Force Base, Colorado
December 1996 – March 1997 deputy chief of staff for communications and information, Headquarters U.S. Air Force, Washington, D.C.
April 1997 – May 2000, director, communications and information, Headquarters U.S. Air Force, and commander, Air Force Communications and Information Center, Washington, D.C.
Major awards and decorations
Defense Distinguished Service Medal
Defense Superior Service Medal
Legion of Merit
Meritorious Service Medal with two oak leaf clusters
Joint Service Commendation Medal
Air Force Commendation Medal
Effective dates of promotion
Second lieutenant November 24, 1966
First lieutenant May 25, 1968
Captain November 29, 1969
Major May 1, 1977
Lieutenant colonel October 1, 1980
Colonel November 1, 1984
Brigadier general July 2, 1992
Major general September 28, 1994
Lieutenant general January 1, 1997
External links
United States Air Force generals
Living people
Air Force Institute of Technology alumni
Recipients of the Legion of Merit
Harvard Kennedy School alumni
University of California, Berkeley alumni
Place of birth missing (living people)
Bellarmine University alumni
Recipients of the Defense Superior Service Medal
Recipients of the Defense Distinguished Service Medal
Year of birth missing (living people) |
69279453 | https://en.wikipedia.org/wiki/Canon%20Computer%20Systems | Canon Computer Systems | Canon Computer Systems, Inc. (CCSI), sometimes shortened to Canon Computer, was an American subsidiary of Canon Inc. formed in 1992 to develop and market the parent company's personal computers and workstations. The subsidiary also assumed the responsibility of marketing Canon's printers and photocopiers, which were formerly sold by other Canon divisions. It went defunct in January 2001.
History
Canon entered the computer industry in the 1970s, starting with the AX-1 in October 1978. It sported the form factor of a desktop calculator and was fully programmable. This was followed up with the AS-100 in 1982, a more traditional albeit heavier personal computer that used an Intel 8088 and ran MS-DOS. Canon entered the home computer market with the V-20 and V-10 in 1984 and 1985 respectively. In 1987, the company released the Canon Cat, the brainchild of Jef Raskin, who pioneered Apple's original Macintosh. In 1989, the company took a large stake in NeXT, a computer hardware company founded by Steve Jobs after he left Apple in the mid-1980s.
In April 1992, Canon spun off its computer manufacturing into Canon Computer Systems, a new subsidiary that also assumed responsibility for marketing the parent company's printers and photocopiers. The subsidiary initially comprised 100 employees in October 1992, 50 of them based in Costa Mesa, California. Yasuhiro Tsubota, who founded Epson America in 1978, was named president. Several other higher-ups also came from Epson America; Tsubota had left Epson for NeXT in 1990 to serve as a consultant for Jobs. The subsidiary's first offerings were a line of desktop computers and notebooks, branded as the Innova and Innova Book respectively. The company expected $125 million in revenue by October 1993. It allocated $10 million of its initial budget to advertising, hiring the newly formed Hajjar/Kaufman (a spinoff of Dentsu) as its advertising agency.
Most if not all of the notebooks in the Innova Book line were produced offshore by Taiwanese OEMs. Canon repeatedly turned to Chicony of Taipei, who lent their designs to Canon for their Innova Book 10 and Innova Book 200LS. The former, released in 1994, was a subnotebook four pounds in weight, while the latter, released in 1995, sported the largest screen of any laptop up to that point, at 11.3 inches diagonal. Canon Computer collaborated with IBM's Japanese subsidiary to produce the Canon NoteJet, a notebook computer with a built-in inkjet printer, introduced to market in 1993. In March 1994, Canon Computer took the reins of the NeXTstation after NeXT ceased manufacturing hardware in 1993. They later released the Object.Station, an x86-based workstation based on the NeXTstation design.
Although Canon Computer had set a goal in 1994 of $1 billion in sales by 1997, they were considered late newcomers to the personal computer market. Innovas and Innova Books continued to be sold until January 1997, when the company quietly left the desktop and notebook market, citing poor sales. The subsidiary shifted its focus to silicon-on-insulator manufacturing, spending the equivalent of US$25.8 million in 1997 to open a clean room facility at Canon's plant in Hiratsuka, Japan. As part of this refocusing, Canon sold its existing shares of NeXT to Apple, which was in the process of acquiring that company; Jobs returned to Apple as part of the acquisition in 1997. Canon Computer continued to sell printers, scanners and digital cameras until January 2001, when the subsidiary was restructured and renamed Canon Digital Home and Personal Systems. Tsubota was replaced by Ryoichi Bamba.
Computers
Desktops
Notebooks
Innova
Subnotebooks
NoteJet
Other
Workstations
References
External links
American companies established in 1992
American companies disestablished in 2001
Computer Systems
Computer companies established in 1992
Computer companies disestablished in 2001
Defunct computer companies based in California
Defunct computer companies of the United States
Defunct computer hardware companies |
1479443 | https://en.wikipedia.org/wiki/MacApp | MacApp | MacApp was Apple Computer's object oriented application framework for the classic Mac OS. Released in 1985, it transitioned from Object Pascal to C++ in 1991's version 3.0 release, which offered support for much of System 7's new functionality. MacApp was used for a variety of major applications, including Adobe Photoshop and SoftPress Freeway. Microsoft's MFC and Borland's OWL were both based directly on MacApp concepts.
Over a period of ten years, the product alternated between stretches of little development and spurts of activity. Through this period, Symantec's Think Class Library/Think Pascal had become a serious competitor to MacApp, offering a simpler model in a much higher-performance integrated development environment (IDE).
Symantec was slow to respond to the move to the PowerPC platform in the early 1990s, and when Metrowerks first introduced their CodeWarrior/PowerPlant system in 1994, it rapidly displaced both MacApp and Think as the primary development platforms on the Mac. Even Apple used CodeWarrior as its primary development platform during the Copland era in the mid-1990s.
MacApp had a brief reprieve between 2000 and 2001, as a framework for transitioning applications to Carbon on Mac OS X. However, after a version was demonstrated at the Worldwide Developers Conference (WWDC) in June 2001, all development was cancelled that October.
History
Pascal versions
MacApp was a direct descendant of the Lisa Toolkit, Apple's first effort in designing an object-oriented application framework, led by Larry Tesler. The engineering team for the Toolkit included Larry Rosenstein, Scott Wallace, and Ken Doyle. Toolkit was written in a custom language known as Clascal, which added object-oriented techniques to the Pascal language.
Initially, development for the Mac was carried out using a cross-compiler in Lisa Workshop. As Mac sales effectively ended Lisa sales, an effort began to build a new development platform for the Mac. Lisa Programmer's Workshop became in 1985 the Macintosh Programmer's Workshop, or MPW. As part of this process, Clascal was updated to become Object Pascal and Lisa Toolkit offered design notes for what became MacApp.
Writing a Mac program without an application framework is not an easy task, but at the time the object-oriented programming field was still relatively new and considered somewhat suspect by many developers. Early frameworks tended to confirm this suspicion, being large, slow, and typically inflexible.
MacApp was perhaps the first truly usable framework in all meanings of the term. Compiled applications were quite reasonable in terms of size and memory footprint, and the performance was not bad enough to make developers shy from it. Although "too simple" in its first releases, a number of follow-up versions quickly addressed the main problems. By this point, around 1987, the system had matured into a useful tool, and a number of developers started using it on major projects.
MacApp 2.0 was released in 1989. Among the improvements were a simplification of some of the UI element interactions and support for MultiFinder. As Apple announced it was dropping MPW Pascal support in 1992, this version did not get updated, not even with System 7 support, and Pascal developers were left on their own to port MacApp 2.0 to the PowerPC.
C++ versions
By this point, in the late 1980s, the market was moving towards C++, and the beta version of Apple's C++ compiler appeared in 1989, around the MacApp 2.0 release. At the same time, Apple was deep in the effort to release System 7, which had a number of major new features. The decision was made to transition to an entirely new version of MacApp, 3.0, which would use C++ in place of Object Pascal. This move was the subject of a long and heated debate between proponents of Object Pascal and C++ on Usenet and other forums. Nevertheless, 3.0 managed to garner a reasonable following after its release in 1991, even though the developer suite, MPW, was growing outdated. Apple then downsized the entire developer tools group, leaving both MacApp and MPW understaffed.
One of the reasons for this downsizing was Apple's long saga of attempting to introduce the "next great platform" for development, almost always in the form of a cross-platform system of some sort. Their first attempt was Bedrock, a class library created in partnership with Symantec that ran on the Mac and Windows, which died a lingering death as both parties eventually gave up on working with the other. One of the reasons for their problems was the creation of OpenDoc, which was itself developed into a cross-platform system that competed directly with Bedrock. There were some attempts to position Bedrock as an OpenDoc platform, but nothing ever came of this.
While these developments were taking place, MPW and MacApp were largely ignored. It was more important to put those developer resources into these new projects to help them reach the market sooner. But when Bedrock failed and OpenDoc found a lukewarm reception, the Mac was left with tools that were now almost a decade old and could not compete with the newer products from third parties. Through the early 1990s competing frameworks grew into real competitors to MacApp. First Symantec's TCL garnered a following, but then Metrowerks' PowerPlant generally took over the entire market.
Lingering death
The core developers of MacApp continued to work on the system at a low activity level throughout the 1990s. When all of Apple's "official" cross-platform projects collapsed, in late 1996 the team announced that they would be providing a cross-platform version of MacApp.
Soon after, Apple purchased NeXT and announced that OpenStep would be Apple's primary development platform moving forward, under the name Cocoa. Cocoa was already cross-platform, having by that time been ported to about six platforms, and was far more advanced than MacApp. This led to strong protests from existing Mac programmers, who complained that their programs were being sent to the "penalty box", effectively abandoned.
At WWDC'98, Steve Jobs announced that the negative feedback about the move to Cocoa was being addressed through the introduction of the Carbon system. Carbon would allow existing Mac programs to run natively under the new operating system, after some conversion. Metrowerks announced they would be porting their PowerPlant framework to Carbon, but no similar announcement was made by Apple regarding MacApp.
Through this period there remained a core of loyal MacApp users who grew increasingly frustrated at Apple's behaviour. By the late 1990s, during the introduction of Cocoa, this had grown to outright dismissal of the product. Things were so bad that a group of MacApp users went so far as to organize their own meeting at WWDC '98 under an assumed name, in order to avoid having Apple staffers refuse them a room to meet in.
This ongoing support was noticed within Apple, and in late 1999 a "new" MacApp team, consisting of members who had worked on it all along, was tasked with releasing a new version. Included was the new Apple Class Suites (ACS), a thinner layer of C++ wrappers for many of the new Mac OS features being introduced from OpenStep, and support for building in Project Builder. MacApp 3.0 Release XV was released on 28 August 2001 to the delight of many. However, in October the product was killed once again, this time for good, and support for existing versions of MacApp officially ended.
The Carbon-compliant PowerPlant X did not ship until 2004, and today Cocoa is almost universal for both MacOS and iOS programming.
MacApp today
MacApp is being kept alive by a dedicated group of developers who have maintained and enhanced the framework since Apple stopped supporting it in 2001. MacApp has been updated to fully support Carbon Events, Universal Binaries, Unicode Text, MLTE control, DataBrowser control, FSRefs, XML parsing, Custom Controls, Composite Window, Drawer Window, HIView Window and Custom Windows. MacApp also has C++ wrapper classes for HIObject and HIView. Also the Pascal version, based mainly on MacApp-2, has been ported to Mac OS X and Xcode. It features long Unicode filenames and streamed documents with automatic byte-swapping.
MacApp supports the Xcode IDE. In fact at WWDC 2005, after Apple announced the transition to Intel CPUs, it took a single developer 48 hours to update MacApp and the MacApp example apps to support Universal Binaries.
Description
This description is based on MacApp 3.0, which had a more advanced underlying model than the earlier 2.0 and differed in many significant ways.
The Mac OS itself has a very simple event handling system. The event structure passed from the operating system to the application contains only an event type such as "keypress" or "mouseclick", along with details of its location and the modifier keys being held down. It is up to the application to decode this simple information into the action the user carried out, for instance, clicking on a menu command. Decoding this could be difficult, requiring the application to run through lists of on-screen objects and check whether the event took place within their bounds.
MacApp provided a solution to this problem using the command pattern, in which user actions are encapsulated in objects containing event details, and then sent to the proper object to carry them out. The logic of mapping the event to the "proper object" was handled entirely within the framework and its runtime, greatly decreasing the complexity of this task. It is the role of MacApp's internal machinery to take the basic OS events, translate them into semantically higher-level commands, and then route the command to the proper object.
Not only did MacApp relieve the author of having to write this code, which every program requires, but as a side effect this design also cleanly separated code into commands, the user-facing actions, and their handlers, the internal code that did the work. For instance, one might have commands for "Turn Green" and "Turn Red", both of which are handled by a single function, ChangeColor(). A program that cleanly separated commands and handlers was known, in Apple parlance, as factored.
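A rough, modernized sketch of this factoring appears below; the class and function names are illustrative and are not MacApp's actual Object Pascal or C++ API.

```python
# Illustrative command-pattern sketch; the names are invented, not MacApp's API.
class Command:
    def __init__(self, name, handler, **details):
        self.name, self.handler, self.details = name, handler, details

    def do_it(self):
        self.handler(**self.details)

def change_color(color):
    print(f"document color set to {color}")

# Two user-facing commands routed to a single handler (a "factored" design):
turn_green = Command("Turn Green", change_color, color="green")
turn_red = Command("Turn Red", change_color, color="red")

# The framework decodes a raw OS event (or, later, an Apple Event) into a
# command object and dispatches it; application code only supplies handlers.
for command in (turn_green, turn_red):
    command.do_it()
```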
Factoring of a program was particularly important in later versions of the Mac OS, starting with System 7. System 7 introduced the Apple Events system, which expanded the original Mac OS's event system with a much richer one that could be sent between applications, not just from the OS to a particular application. This was combined with the AppleScript system which allowed these Events to be generated from scripting code. In MacApp 3.0, Apple Events were decoded into the same commands as if they had been initiated by direct user actions, meaning that the developer didn't have to write much, if any, code to directly handle Apple Events. This was a major problem for developers using earlier systems, including MacApp 2.0, which had no such separation and often led to Apple Event support being left out.
In keeping with its role as an application framework, MacApp also included a number of pre-rolled objects covering most of the basic Mac GUI—windows, menus, dialogs and similar widgets were all represented within the system. Unfortunately, Apple typically supplied lightweight wrappers over existing internal Mac OS code instead of providing systems that were usable in the "real world". For instance, the TTEView class was offered as the standard text editor widget, but the underlying TextEdit implementation was severely limited and Apple itself often stated it should not be used for professional applications. As a result, developers were often forced to buy add-on objects to address these sorts of needs, or roll their own. The lack of a set of professional quality GUI objects can be considered one of MacApp's biggest problems.
These problems have been addressed with the release of MacApp R16. MacApp R16 uses standard Carbon controls for all MacApp GUI objects. For instance, Carbon introduced the Multilingual Text Engine (MLTE) for full Unicode text and long-document support. In R16, the original TTEView class has been superseded by TMLTEView, which uses the MLTE control.
Notable users
Adobe Photoshop was originally written with MacApp 1.1.1, in Object Pascal, and later ported to C++ and MacApp 3.0 for Photoshop 2.5. After Apple cancelled MacApp, maintenance was taken over internally by the Photoshop development team, which ported the framework to PowerPC and adapted it to share code with the Windows port.
References
External links
Programmer's Guide to MacApp - full documentation from the Inside Macintosh series
Macintosh operating systems APIs
Classic Mac OS programming tools |
5289557 | https://en.wikipedia.org/wiki/Rop%20Gonggrijp | Rop Gonggrijp | Robbert (Rop) Valentijn Gonggrijp (born 14 February 1968) is a Dutch hacker and one of the founders of XS4ALL.
Biography
Gonggrijp was born in Amsterdam. While growing up in Wormer in the Dutch Zaanstreek area, he became known as a teenage hacker and appeared as one of the main characters in Jan Jacobs's book Kraken en Computers (Hacking and Computers, Veen uitgevers, 1985), which describes the early hacker scene in the Netherlands. He moved to Amsterdam in 1988 and founded the hacker magazine Hack-Tic in 1989. He was believed to be a major security threat by authorities in the Netherlands and the United States. In the masthead of Hack-Tic, Gonggrijp described his role as hoofdverdachte ('prime suspect'). He was convinced that the Internet would radically alter society.
In 1993, a number of people surrounding Hack-Tic, including Gonggrijp, founded XS4ALL. It was the first ISP to offer Internet access to private individuals in the Netherlands. Gonggrijp sold the company to its former adversary, Dutch telecom operator KPN, in 1997. After he left XS4ALL, Gonggrijp founded ITSX, a computer security evaluation company, which was bought by Madison Gurkha in 2006. In 2001, Gonggrijp started work on the Cryptophone, a mobile telephone that can encrypt conversations.
Since 1989, Gonggrijp has been the main organizer of hacker events held every four years. Originally organized by the crew behind Hack-Tic, these events continue to this day.
Over the years, he has repeatedly expressed concern about the increasing amount of information on individuals that government agencies and companies have access to. Together with Frank Rieger, Gonggrijp gave a controversial talk titled "We lost the war" at the Chaos Communication Congress in Berlin in 2005.
In 2006 he founded the organisation "Wij vertrouwen stemcomputers niet" ("We do not trust voting computers") which campaigns against the use of electronic voting systems without a Voter Verified Paper Audit Trail and which showed in October 2006 on Dutch television how an electronic voting machine from manufacturer Nedap could easily be hacked. These findings were taken seriously both by the Dutch government and by international election observers.
On 16 May 2008 the Dutch government decided that elections would be held using paper ballots and red pencil only. A proposal to develop a new generation of voting computers was rejected.
Gonggrijp has worked for WikiLeaks, helping prepare the Collateral Murder April 2010 release of video footage from a Baghdad airstrike.
On 14 December 2010, in relation to ongoing investigations of WikiLeaks, the US Department of Justice issued a subpoena ordering Twitter to release information regarding Gonggrijp's account as well as those of Julian Assange, Chelsea Manning, Birgitta Jónsdóttir, Jacob Appelbaum and all 637,000 users following @wikileaks. The stated reason was Gonggrijp's assistance in enabling WikiLeaks' April 2010 release of the "Collateral Murder" video.
References
External links
Personal blog
(Dutch and English)
(Hack-Tic archive)
History of XS4ALL (in Dutch)
interview with Gonggrijp (in Dutch)
Interview on Voting machines (in Dutch)
Extensive Talk with Rop Gonggrijp about his life and the hacker communities in the Netherlands and Germany (in German)
1968 births
Living people
People from Amsterdam
Computer security specialists
Cypherpunks
WikiLeaks
Dutch magazine founders
Dutch company founders
20th-century Dutch businesspeople
21st-century Dutch businesspeople |
67204826 | https://en.wikipedia.org/wiki/Front%20%28company%29 | Front (company) | Front is a privately held San Francisco, California-based software company that develops a shared email inbox and calendar product. Its collaboration software allows companies to communicate with customers.
The company received media coverage for its high percentage of female executives and staffers, considered unusual for a tech company.
History
Front was founded in Paris, France in 2013 by current CEO Mathilde Collin and CTO Laurent Perrin. Its first product was a shared inbox for employees to collectively manage and reply to generic group emails such as "[email protected]" or "[email protected]". A majority of Front's early users were in the United States, so it moved to San Francisco just a few months later.
In 2014, Front participated in the startup incubator program at Y Combinator and raised $3.1 million in seed funding. The company launched its service in June 2014.
By April 2016, the company reported it had over 1,000 paid customers, including French luxury company LVMH and marketing software company HubSpot. In May 2016, the company raised another $10 million, with participation from Slack CEO Stewart Butterfield.
In January 2018, the company raised $66 million in a Series B funding round led by investment firm Sequoia Capital. The company also opened an engineering office in Paris. In October, the company made its first acquisition, buying calendar software startup Meetingbird.
By March 2019, the company had over 4,700 customers, with over 100 employees.
In January 2020, the company raised $59 million in a Series C funding round, valuing it at an estimated $800 million. The round was notable for the funding coming from other tech executives rather than traditional venture capital companies.
Products
Front's main product is a shared inbox that integrates a business team's communications channels including email, live chat, SMS text messages, voice, and CRM software such as Salesforce into a consolidated inbox.
The company also offers a shared calendar called Front Calendar, allowing employees to manage meetings including scheduling meetings by sharing available times.
Operations
Front is headquartered in San Francisco, with an engineering office located in Paris. The company reported 200 employees as of March 2021. As of August 2020, the company reported it had over 6,000 customers. The company is also notable for being a tech company with a female CEO, which CEO Collin has written about, and for women making up over 50% of its executive team.
References
External links
Official website
Customer relationship management software
Companies based in San Francisco
2013 establishments in California
American companies established in 2013
Software companies of the United States |
4099405 | https://en.wikipedia.org/wiki/Prism%20%28chipset%29 | Prism (chipset) | The Prism brand is used for wireless networking integrated circuit (commonly called "chips") technology from Conexant for wireless LANs. They were formerly produced by Intersil Corporation.
Legacy 802.11b products (Prism 2/2.5/3)
The open-source HostAP driver supports the IEEE 802.11b Prism 2/2.5/3 family of chips.
Wireless adaptors which use the Prism chipset are known for compatibility, and are preferred for specialist applications such as packet capture.
No 64-bit Windows (win64) drivers are known to exist.
Intersil firmware
WEP
WPA (TKIP), after update
WPA2 (CCMP), after update
Lucent/Agere
WEP
WPA (TKIP in hardware)
802.11b/g products (Prism54, ISL38xx)
The chipset has undergone a major redesign for 802.11g compatibility and cost reduction, and newer "Prism54" chipsets are not compatible with their predecessors.
Intersil initially provided a Linux driver for the first Prism54 chips which implemented a large part of the 802.11 stack in the firmware. However, further cost reductions caused a new, lighter firmware to be designed and the amount of on-chip memory to shrink, making it impossible to run the older version of the firmware on the latest chips. In the meantime, the PRISM business was sold to Conexant, which never published information about the newer firmware API that would enable a Linux driver to be written.
However, a reverse engineering effort eventually made it possible to use the new Prism54 chipsets under the Linux and BSD operating systems.
See also
HostAP driver for prism chipsets
External links
PRISM solutions at Conexant
GPL drivers and firmware for the ISL38xx-based Prism chipsets (mostly reverse engineered)
Wireless networking hardware |
66848907 | https://en.wikipedia.org/wiki/Preslav%20Nakov | Preslav Nakov | Preslav Nakov (born on 26 January 1977 in Veliko Turnovo, Bulgaria) is a computer scientist who works on natural language processing. He is particularly known for his research on fake news detection, automatic detection of offensive language, and biomedical text mining. Nakov obtained a PhD in computer science under the supervision of Marti Hearst from the University of California, Berkeley. He was the first person to receive the prestigious John Atanasov Presidential Award for achievements in the development of the information society by the President of Bulgaria.
Education
Preslav Nakov grew up in Veliko Turnovo, Bulgaria, where he attended primary and secondary school, obtaining a Diploma in Mathematics from the Secondary School of Mathematics and Natural Sciences 'Vassil Drumev' in 1996. He then obtained an MSc degree in Informatics (Computer Science) with specialisations in Artificial Intelligence and Information and Communication Technologies from Sofia University in 2001. During his MSc studies, he worked as a teaching assistant at Sofia University and the Bulgarian Academy of Sciences, as well as a guest lecturer at University College London during a visit in Spring 1999. Subsequently, he enrolled in the PhD program at the Department of Electrical Engineering and Computer Science, University of California, Berkeley, partly supported by a Fulbright Scholarship. Under the supervision of Marti Hearst, he wrote a thesis on the topic of text mining from the Web, and graduated with a PhD in Computer Science from UC Berkeley in 2007.
Career
Upon graduating from the University of California, Berkeley, Nakov started work as a Research Fellow at the National University of Singapore. Since 2012, he has been a Senior Scientist at the Qatar Computing Research Institute. He maintains a position as an honorary lecturer at Sofia University.
Research
Preslav Nakov works in the area of natural language processing and text mining. He has published over 300 peer-reviewed research papers.
Preslav Nakov's early research was on lexical semantics and text mining. He published influential papers on biomedical text mining, most prominently on methods to identify citation sentences in biomedical papers.
He is, however, best known for his research on fake news detection, such as his work on predicting the factuality and bias of news sources, as well as for his research on the automatic detection of offensive language. Nakov also previously led the organisation of a popular evaluation campaign on sentiment analysis systems as part of SemEval between 2015 and 2017.
He currently coordinates the Tanbih news aggregator, a large project with partners at the Qatar Computing Research Institute and the MIT Computer Science and Artificial Intelligence Laboratory, which aims to uncover stance, bias and propaganda in news.
Selected honors and distinctions
2003 John Atanasov Presidential Award for achievements in the development of the information society
2011 RANLP 2011 Young Researcher Award
2020 Conference on Information and Knowledge Management, best paper award
References
1977 births
Living people
Computer scientists
Natural language processing researchers
University of California, Berkeley alumni
Data miners
People from Veliko Tarnovo
Sofia University alumni |
37714336 | https://en.wikipedia.org/wiki/Microsoft%20Office%20password%20protection | Microsoft Office password protection | Microsoft Office password protection is a security feature to protect Microsoft Office (Word, Excel, PowerPoint) documents with a user-provided password. As of Office 2007, this uses modern encryption; earlier versions used weaker systems and are not considered secure.
Office 2007–2013 employed 128-bit key AES password protection which remains secure.
Office 2016 employed 256-bit key AES password protection which also remains secure.
The Office 97–2003 password protection used 40-bit key RC4 which contains multiple vulnerabilities rendering it insecure.
Types
Microsoft Office applications offer two main groups of passwords that can be set on a document, depending on whether or not they encrypt the password-protected document.
Passwords that do not encrypt the document provide a different level of protection in each Microsoft Office application, as described below.
In Microsoft Word and in Microsoft PowerPoint passwords restrict modification of the entire document or presentation.
In Microsoft Excel passwords restrict modification of the workbook, a worksheet within it, or individual elements in the worksheet.
The password that encrypts a document also restricts the user from opening the document. It is possible to set this type of password in all Microsoft Office applications. If a user fails to enter the correct password in the prompt that appears when opening a password-protected document, viewing and editing the document will not be possible. Because a document protected with a password to open is encrypted, an attacker must decrypt the document to gain access to its contents. To provide improved security, Microsoft has been consistently enhancing the strength of the Office encryption algorithm.
History of Microsoft Encryption password
In Excel and Word 95 and prior editions a weak protection algorithm is used that converts a password to a 16-bit key. Hacking software is now readily available to find a 16-bit key and decrypt the password-protected document instantly.
In Excel and Word 97 and 2000 the key length was increased to 40 bits. This protection algorithm is also currently considered to be weak and presents no difficulties to hacking software.
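A back-of-the-envelope calculation illustrates why a 40-bit key is considered weak; the search rate below is an assumed figure for illustration, not a benchmark of any particular tool.

```python
# Rough exhaustive-search time for a 40-bit key.
keyspace = 2 ** 40                      # about 1.1 trillion possible keys
rate = 1_000_000_000                    # assumed 10**9 keys tried per second
print(keyspace / rate / 60, "minutes")  # roughly 18 minutes to try every key
```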
The default protection in Office XP and 2003 was not changed, but an opportunity to use a custom protection algorithm was added. Choosing a non-standard Cryptographic Service Provider allows increasing the key length. Weak passwords can still be recovered quickly even if a custom CSP is on.
In Office 2007 (Word, Excel and PowerPoint), protection was significantly enhanced, since a modern encryption algorithm, the Advanced Encryption Standard, was used. At present there is no software that can break this encryption directly. With the help of the SHA-1 hash function, the password is stretched into a 128-bit key using 50,000 hash iterations before the document can be opened; as a result, the time required to crack it is vastly increased.
Excel and Word 2010 still employ AES and a 128-bit key, but the number of SHA-1 conversions has doubled to 100,000 further increasing the time required to crack the password.
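The effect of such an iteration ("spin") count can be illustrated with the simplified sketch below; the real ECMA-376 derivation also mixes in block keys and derives the final key differently, so this is not the exact Office algorithm.

```python
import hashlib

def stretch_password(password: str, salt: bytes, spin_count: int) -> bytes:
    """Simplified SHA-1 password stretching (illustrative only, not the
    exact ECMA-376 key derivation used by Office)."""
    h = hashlib.sha1(salt + password.encode("utf-16-le")).digest()
    for i in range(spin_count):
        # Feeding the counter and previous hash back in forces an attacker
        # to pay the same per-guess cost as the legitimate user.
        h = hashlib.sha1(i.to_bytes(4, "little") + h).digest()
    return h[:16]   # 128-bit key

key = stretch_password("hunter2", b"\x00" * 16, spin_count=100_000)
```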
Office 2013 (Access, Excel, OneNote, PowerPoint, Project, and Word) uses 128-bit AES, again with hash algorithm SHA-1 by default.
Office 2016 (Access, Excel, OneNote, PowerPoint, Project, and Word) uses 256-bit AES, the SHA-1 hash algorithm, and CBC (Cipher Block Chaining) by default.
Excel Worksheet and Macro protection
The protection for worksheets and macros is necessarily weaker than that for the entire workbook, as the software itself must be able to display or use them. In Excel it is particularly weak, and an equivalent password of the form ABABABABABAx, where the first 11 characters are each either A or B and the last is an ASCII character, can easily be found.
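The sketch below shows why such an equivalent password is easy to find: the stored verifier is only 16 bits, so the tiny ABABABABABAx candidate space almost certainly contains a collision. The hash used here is a commonly published reconstruction of the legacy verifier and should be treated as illustrative rather than a normative specification.

```python
import string

def legacy_sheet_hash(password: str) -> int:
    # Commonly published reconstruction of the legacy 16-bit sheet-password
    # verifier; illustrative only.
    h = 0
    for ch in reversed(password):
        h = ((h >> 14) & 0x01) | ((h << 1) & 0x7FFF)   # 15-bit rotate left
        h ^= ord(ch)
    h = ((h >> 14) & 0x01) | ((h << 1) & 0x7FFF)
    return h ^ len(password) ^ 0xCE4B

target = legacy_sheet_hash("S3cret!Pass")   # the verifier stored in the file

# Search the small "ABABABABABAx" space: 2**11 prefixes times ~95 final chars.
for prefix in range(2 ** 11):
    head = "".join("AB"[(prefix >> bit) & 1] for bit in range(11))
    match = next((head + c for c in string.printable[:95]
                  if legacy_sheet_hash(head + c) == target), None)
    if match:
        print("equivalent password:", match)
        break
```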
Password recovery attacks
There are a number of attacks that can be employed to find a password or remove password protection from Excel and Word documents.
Password removal can be done with the help of precomputation tables or a guaranteed decryption attack.
Attacks that target the original password set in Microsoft Excel and Word include dictionary attack, rule-based attack, brute-force attack, mask attack and statistics-based attack.
The efficiency of attacks can be considerably enhanced if one of the following means is applied: multiple CPUs (distributed attack), GPGPU (applicable only to Microsoft Office 2007–2010 documents) or cloud computing. Because many passwords in use are weak, cloud computing facilities are currently capable of unlocking roughly 80% of the files saved in the Office 2007–2010 format. Passwords of sufficient length and complexity typically cannot be brute-forced.
Office 2013 introduces SHA-512 hashes in the encryption algorithm, making brute-force and rainbow table attacks slower. However, SHA-family hashes are still faster to calculate than deliberately slow key-derivation functions such as PBKDF2 or scrypt, which makes them relatively easier to brute-force; such KDFs have not yet been adopted in Office.
For xlsx files that can be opened but not edited, there is another attack. Because the file format is a group of XML files within a zip archive, the archive can be unzipped and the workbook.xml file and/or the individual worksheet XML files edited and replaced: substituting a known key and salt for the unknown pair, or removing the key altogether, allows the sheets to be edited.
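A minimal sketch of this zip-editing approach appears below; the element names follow the OOXML spreadsheet format, the regular expression assumes self-closing protection tags, and the input/output filenames are placeholders.

```python
import re
import zipfile

def strip_protection(src: str, dst: str) -> None:
    """Rewrite an .xlsx, dropping sheet/workbook protection elements."""
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            if item.filename.endswith(".xml"):
                # Remove <sheetProtection .../> and <workbookProtection .../>,
                # i.e. delete the stored key and salt entirely.
                data = re.sub(rb"<(sheet|workbook)Protection[^>]*/>", b"", data)
            zout.writestr(item, data)

strip_protection("locked.xlsx", "unlocked.xlsx")
```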
There is specialized software designed to recover lost Microsoft Office passwords on pre-AES encryption.
Ultimately, the security of a password-protected document is dependent on the user choosing a password of sufficient complexity. If the password can be determined through guesswork or social engineering, the underlying cipher is not important.
References
Microsoft Office
Cryptographic attacks
Password authentication |