https://en.wikipedia.org/wiki/Direct%20Access%20Archive
Direct Access Archive
Direct Access Archive, or DAA, is a proprietary file format developed by PowerISO Computing for disk image files. The format supports features such as compression, password protection, and splitting into multiple volumes. Popular Windows disk image mounting programs such as Alcohol 120% and Daemon Tools do not currently support the mounting of DAA images; Linux and BSD also do not support mounting images of this kind. There is currently no published information about the format. Among mainstream applications, it can be opened or converted with MagicISO and UltraISO. Various free and open-source packages are also available to convert DAA to ISO images.

File structure
Although the format lacks official documentation, DAA image files are ISO images compressed chunk by chunk with zlib or LZMA.

Conversion
PowerISO provides free command-line tools for Linux and Mac OS X which allow the user to extract DAA files or convert them into ISO format; however, these tools have not been updated to support the newest version of the DAA format. The PowerISO Windows trial version only supports converting DAA files of up to 300 MB, less than half of the capacity of a standard CD. AcetoneISO is a free CD/DVD management application for Linux that can convert DAA to ISO with the help of the external PowerISO command-line tool for Linux. daa2iso is an open-source command-line application developed to convert DAA files to ISO files. The program comes with a Windows binary and source code which compiles under Unix-like operating systems. daa2iso lets users select the .daa file and the location for the .iso output via standard Windows open and save dialogs. For Mac OS X, DAA Converter is a GUI application which wraps the daa2iso command-line tool (GNU license).

Features
Because it uses freely available compression algorithms, DAA offers the following features that are absent from plain ISO (but can be obtained by manually compressing ISO files):
Ability to compress images, thus saving space and allowing smaller downloads
Can be password protected
Can be split into multiple smaller files

References

External links
PowerISO Website

Categories: Disk images; Archive formats
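Since the format is undocumented, any description of DAA internals is partly inferred. The sketch below illustrates only the general chunk-by-chunk idea described above: read a compressed chunk, inflate it with zlib, and append the result to a growing ISO image. The fixed header size and the little-endian length prefix are hypothetical stand-ins, not the real DAA layout.

```python
# Conceptual sketch of chunk-wise decompression. The header size and
# length-prefix layout are hypothetical; the real DAA format is undocumented.
import zlib

def unpack_chunked_image(daa_path, iso_path, header_size=16):
    """Inflate a chunk-by-chunk zlib-compressed image into a raw ISO."""
    with open(daa_path, "rb") as src, open(iso_path, "wb") as dst:
        src.seek(header_size)              # skip a hypothetical fixed header
        while True:
            length_bytes = src.read(4)     # hypothetical per-chunk length prefix
            if len(length_bytes) < 4:
                break                      # end of input
            length = int.from_bytes(length_bytes, "little")
            dst.write(zlib.decompress(src.read(length)))
```

Tools like daa2iso implement the real layout; this sketch only conveys why a DAA file cannot be mounted directly: the ISO data exists only after every chunk has been inflated.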
https://en.wikipedia.org/wiki/Mandriva%20Linux
Mandriva Linux
Mandriva Linux (a fusion of the French distribution Mandrake Linux and the Brazilian distribution Conectiva Linux) is a discontinued Linux distribution developed by Mandriva S.A. Each release had a lifetime of 18 months for base updates (Linux, system software, etc.) and 12 months for desktop updates (window managers, desktop environments, web browsers, etc.). Server products received full updates for at least five years after their release. The last release of Mandriva Linux was in August 2011. Most developers who were laid off went to Mageia. Later on, the remaining developers teamed up with community members and formed OpenMandriva, a continuation of Mandriva.

History
The first release of Mandrake was based on Red Hat Linux (version 5.1) and K Desktop Environment 1 in July 1998. It subsequently moved away from Red Hat's standards and influence, developed its own design and implementation, and became a completely separate distribution. Mandriva included a number of original tools that made system configuration less difficult. Mandriva Linux was the brainchild of Gaël Duval, who wanted to focus on ease of use for new users. This goal was met, as Mandrake Linux gained a reputation as "one of the easiest to install and user-friendly Linux distributions". At this time Internet Explorer held a dominant share of the web browser market, and Microsoft a near monopoly in operating systems. Mandrake Linux earned praise as a Linux distribution that users could use all the time, without dual booting into Windows for compatibility with web sites or software unavailable under Linux. CNET called the user experience of Mandrake Linux 8.0 the most polished available at that time. Duval became a co-founder of Mandrakesoft, but was laid off from the company in 2006 along with many other employees.

Name changes
From its inception until the release of version 8.0, Mandrake named its flagship distribution Linux-Mandrake. From version 8.1 to 9.2 the name was reversed to Mandrake Linux. In February 2004, MandrakeSoft lost a court case against Hearst Corporation, owners of King Features Syndicate. Hearst contended that MandrakeSoft infringed upon King Features' trademarked character Mandrake the Magician. As a precaution, MandrakeSoft renamed its products by removing the space between the brand name and the product name and changing the first letter of the product name to lower case, thus creating one word. Starting from version 10.0, Mandrake Linux became known as mandrakelinux, and its logo changed accordingly. Similarly, MandrakeMove (a Live CD version) became Mandrakemove. In April 2005, Mandrakesoft announced the acquisition of Conectiva, a Brazilian company that produced a Linux distribution for Portuguese-speaking (Brazil) and Spanish-speaking Latin America. As a result of this acquisition and the legal dispute with Hearst Corporation, Mandrakesoft announced that the company was changing its name to Mandriva, and that its Linux distribution Mandrake Linux would henceforward be known as Mandriva Linux.

Features
Installation, control and administration
Mandriva Linux contained the Mandriva Control Center, which eased configuration of many settings. It had many programs known as Drakes or Draks, collectively named drakxtools, to configure many different settings. Examples include MouseDrake to set up a mouse, DiskDrake to set up disk partitions and drakconnect to set up a network connection.
They were written using GTK+ and Perl, and most of them could run in both graphical and text mode, the latter using the ncurses interface.

Desktops
Mandriva Linux 2011 was released only with the KDE Plasma Desktop; other desktop environments were available but not officially supported. Older Mandriva versions also used KDE as standard, but others such as GNOME were also supported.

Package manager
Mandriva Linux used a package manager called urpmi, which functions as a wrapper around the rpm tool. It is similar to apt from Debian and Ubuntu, pacman from Arch Linux, and yum or dnf from Fedora in that it allows seamless installation of a given software package by automatically installing the other packages it needs (see the sketch at the end of this article). It is also media-transparent, able to retrieve packages from various media, including network/Internet, CD/DVD and local disk. Urpmi also has an easy-to-use graphical front-end called rpmdrake, which is integrated into the Mandriva Control Center.

Live USB
A Live USB of Mandriva Linux can be created manually or with UNetbootin.

Versions
From 2007 to 2011, Mandriva was released on a six-month fixed release cycle, similar to Ubuntu and Fedora.

Latest version
The latest stable version is Mandriva Linux 2011 ("Hydrogen"), released on 28 August 2011.

Development version
The development tree of Mandriva Linux has always been known as Cooker. This tree is periodically frozen and released as a new stable version.

Version history

Editions
Each release of Mandriva Linux was split into several different editions. Each edition was derived from the same master tree, most of which is available on the public mirrors: all free and open-source software, and all non-free software under a license that allows unrestricted distribution to the general public, is available from the public mirrors. Only commercial software under a license that does not allow unrestricted distribution to the general public (but for which Mandriva negotiated an agreement to distribute it with paid copies) is not available from public mirrors.

Mandriva Linux Free
Mandriva Linux Free was a 'traditional' distribution (i.e. one that comes with a dedicated installer, used to install the distribution to the computer before it is run). It was 'free' in both senses: it consisted entirely of free and open-source software, and it was made available for public download at no charge. It was usually available in CD (three or four discs) and DVD editions for x86 32- and 64-bit CPU architectures. It was aimed at users to whom software freedom is important, and also at users who prefer a traditional installer to the installable live CD system used by One. The package selection was tailored towards regular desktop use. It consisted of a subset of packages from the 'main' and 'contrib' sections of the master tree. Mandriva Linux Free was phased out in 2011 in favor of a single-edition approach with Mandriva Desktop 2011.

Mandriva Linux One
Mandriva Linux One was a free-to-download hybrid distribution, being both a Live CD and an installer (with an installation wizard that includes disk partitioning tools). Several Mandriva Linux One versions were provided for each Mandriva Linux release preceding Mandriva 2008. Users could choose between different languages, select either the KDE or GNOME desktop, and include or exclude non-free software. The default version included the KDE desktop with non-free software included.
The One images consisted of a subset of packages from the 'main', 'contrib' and 'non-free' sections of the master tree, with the documentation files stripped from the packages to save space. Mandriva Linux One 2008 had a smaller range of versions: KDE and GNOME versions with the default set of languages, plus two KDE versions with alternative sets of languages. All versions included non-free software.

Mandriva Linux Powerpack
Mandriva Linux Powerpack was a 'traditional' distribution (in other words, one that comes with a dedicated installer, DrakX, which is first used to install the distribution to the hard disk of the computer before it is run). It was the main commercial edition of Mandriva Linux and, as such, required payment for its use. It contained several non-free packages intended to add value for the end user, including non-free drivers such as the NVIDIA and ATI graphics card drivers, non-free firmware for wireless chips and modems, browser plugins such as Java and Flash, and some full applications such as Cedega, Adobe Reader and RealPlayer. It was sold directly from the Mandriva Store website and through authorized resellers. It was also made available via a subscription service, which allowed unlimited downloads of Powerpack editions for the last few Mandriva releases for a set yearly fee. It consisted of a subset of packages from the 'main', 'contrib', 'non-free' and 'restricted' sections of the master tree. In Mandriva Linux 2008, the Discovery and Powerpack+ editions were merged into Powerpack, which became Mandriva's only commercial offering. Users were able to choose between a novice-friendly Discovery-like setup or an installation process and desktop aimed at power users.

Mandriva Linux Discovery
Mandriva Linux Discovery was a commercial distribution aimed at first-time and novice Linux users. It was sold via the Mandriva Store website and authorized resellers, or could be downloaded by some subscribers to the Mandriva Club. Mandriva Linux 2008 did not include a Discovery edition, its novice-friendly features having been added as options to the Powerpack edition. In releases prior to Mandriva Linux 2007, Discovery was a 'traditional' distribution built on the DrakX installer. In Mandriva Linux 2007 and 2007 Spring, Discovery was a hybrid "Live DVD" which could be booted without installation or installed to hard disk in the traditional manner. Discovery was a DVD rather than a CD, allowing all languages to be provided on one disc. It consisted of a subset of packages from the 'main', 'contrib', 'non-free' and 'non-free-restricted' sections of the master tree. The package selection was tailored towards novice desktop users. A theme chosen to appeal to novice users was used, and the default was the 'simplified' menu layout, in which applications are described rather than named and not all applications are included (for all other editions, the default was the 'traditional' layout, where all graphical applications installed on the system were included and listed by name).

Mandriva Linux Powerpack+
Mandriva Linux Powerpack+ was a version of Powerpack with additional packages, mostly commercial software. Like Powerpack, it was sold directly from the Mandriva Store website and through authorized resellers; it was also a free download for Mandriva Club members of the Gold level and above.
Powerpack+ was aimed at SOHO (small office / home office) users, with the expectation that it could be used to run a small home or office server machine as well as desktop and development workstations. The package selection was tailored with this in mind, including a wide range of server packages. It consisted of a subset of packages from the 'main', 'contrib', 'non-free' and 'restricted' sections of the master tree. Mandriva 2008 no longer included a Powerpack+ edition; instead, the Powerpack edition included all the available packages.

Derivatives
Derivatives are distributions based on Mandriva Linux, some by Mandriva itself, others by independent projects. Some maintain compatibility with Mandriva Linux, so that installing a Mandriva Linux .rpm also works on the derivative.
OpenMandriva Lx – a continuation of Mandriva by the community
Mageia – a fork of Mandriva by developers who had been laid off
PCLinuxOS – initially derived from Mandrake
ROSA Linux – a fork of Mandriva by developers who had been laid off

References

External links

Categories: Discontinued Linux distributions; KDE; RPM-based Linux distributions; X86-64 Linux distributions; Linux distributions
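The dependency resolution attributed to urpmi in the package manager section can be illustrated with a toy model. The package names and dependency table below are invented for illustration; real urpmi reads dependency information from rpm metadata on its configured media.

```python
# Toy resolver in the spirit of urpmi/apt: installing a package first
# installs everything it depends on. The package table is invented.
DEPENDS = {
    "rpmdrake": ["urpmi", "gtk+"],
    "urpmi": ["perl", "rpm"],
    "gtk+": [],
    "perl": [],
    "rpm": [],
}

def install(pkg, installed=None):
    """Return the install order for pkg, dependencies first."""
    if installed is None:
        installed = []
    for dep in DEPENDS[pkg]:
        if dep not in installed:
            install(dep, installed)
    if pkg not in installed:
        installed.append(pkg)
    return installed

print(install("rpmdrake"))  # ['perl', 'rpm', 'urpmi', 'gtk+', 'rpmdrake']
```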
https://en.wikipedia.org/wiki/Sketchpad
Sketchpad
Sketchpad (a.k.a. Robot Draftsman) is a computer program written by Ivan Sutherland in 1963 in the course of his PhD thesis, for which he received the Turing Award in 1988 and the Kyoto Prize in 2012. It pioneered human–computer interaction (HCI) and is considered the ancestor of modern computer-aided design (CAD) programs, as well as a major breakthrough in the development of computer graphics in general. The graphical user interface (GUI), for example, was derived from Sketchpad, as was modern object-oriented programming. Using the program, Sutherland showed that computer graphics could be used for both artistic and technical purposes, in addition to demonstrating a novel method of human–computer interaction.

History
Sutherland was inspired by the Memex from "As We May Think" by Vannevar Bush. Sketchpad, in turn, inspired Douglas Engelbart to design and develop oN-Line System at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI) during the 1960s.

Software
Sketchpad was the first program ever to utilize a complete graphical user interface. The clever way the program organized its geometric data pioneered the use of "masters" ("objects") and "occurrences" ("instances") in computing and pointed forward to object-oriented programming. The main idea was to have master drawings which one could instantiate into many duplicates: if the user changed the master drawing, all the instances would change as well. Geometric constraints were another major invention in Sketchpad, letting the user easily constrain geometric properties in the drawing; for instance, the length of a line or the angle between two lines could be fixed. As a trade magazine said, Sutherland clearly "broke new ground in 3D computer modeling and visual simulation, the basis for computer graphics and CAD/CAM". Very few programs can be called precedents for his achievements. Patrick J. Hanratty, sometimes called the "father of CAD/CAM", wrote PRONTO, a numerical control language, at General Electric in 1957, and wrote CAD software while working for General Motors beginning in 1961. Sutherland wrote in his thesis that Bolt, Beranek and Newman had a "similar program", and T-Square was developed by Peter Samson and one or more fellow MIT students in 1962, both for the PDP-1. The Computer History Museum holds program listings for Sketchpad.

Hardware
Sketchpad ran on the Lincoln TX-2 (1958) computer at MIT, which had 64K of 36-bit words. The user drew on the screen with the recently invented light pen. Of the 36 bits available to store each display spot in the display file, 20 gave the coordinates of that spot for the display system and the remaining 16 gave the address of the n-component element responsible for adding that spot to the display. In 1963, most computers ran jobs in batch mode only, using punched cards or magnetic tape reels submitted by professional programmers or engineering students. A considerable amount of work was required to make the TX-2 operate in interactive mode with a large CRT screen, and when Sutherland had finished with it, it had to be reconverted to run in batch mode again.

Publications
The Sketchpad program was part and parcel of Sutherland's Ph.D. thesis at MIT and peripherally related to the Computer-Aided Design project there at that time. Sketchpad: A Man-Machine Graphical Communication System.

See also
Comparison of CAD software

References

External links
Demo videos (archived at Ghostarchive and the Wayback Machine)

Categories: 1963 software; Computer graphics; Graphical user interfaces; History of human–computer interaction
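The master/occurrence scheme described in the Software section maps naturally onto modern code. The sketch below is a present-day paraphrase, not Sutherland's data structure: occurrences hold only a reference to the master plus a placement, so editing the master changes every instance.

```python
# Modern paraphrase of Sketchpad's master/occurrence idea: occurrences
# share the master's geometry, so a change to the master affects them all.
class Master:
    def __init__(self, points):
        self.points = list(points)          # the shared geometry

class Occurrence:
    def __init__(self, master, dx, dy):
        self.master = master                # reference, not a copy
        self.dx, self.dy = dx, dy           # placement of this instance

    def render(self):
        return [(x + self.dx, y + self.dy) for x, y in self.master.points]

square = Master([(0, 0), (1, 0), (1, 1), (0, 1)])
a, b = Occurrence(square, 0, 0), Occurrence(square, 5, 5)
square.points[2] = (2, 2)   # edit the master drawing...
print(a.render())           # ...and both occurrences reflect the change
print(b.render())
```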
https://en.wikipedia.org/wiki/Alpha%20strike%20%28engineering%29
Alpha strike (engineering)
Alpha strike is a term for the event in which an alpha particle, a composite charged particle composed of two protons and two neutrons, enters a computer and modifies the data or operation of a component in the computer. Alpha strikes can disturb the silicon substrate of the transistors in a computer through their electronic stopping power, causing a transistor to flip states if the charge imparted by the strike crosses a critical threshold (QCrit). This, in turn, can corrupt the information stored by that transistor and create a cascading effect on the operation of the component that contains it.

History
The first widely recognized radiation-generated error in a computer was the appearance of random errors in the Intel 2107 4K DRAM in the late 1970s. This problem was investigated by Timothy C. May and Murray H. Woods, who reported in a seminal 1979 paper that the errors were caused by alpha decay from trace amounts of uranium and thorium in the packaging surrounding the chip. Since then, there have been multiple incidents of computer errors due to radiation, including error reports from computers onboard spacecraft, corrupted data from voting machines, and crashes on computers onboard aircraft. According to a study from Hughes Aircraft Company, anomalies in satellite communication attributed to galactic cosmic radiation occur at a rate on the order of 3.1×10⁻³ per year; this rate is an estimate of the number of noticeable cascading communication errors per satellite per year.

Modern impact
Alpha strikes limit the computing capabilities of computers onboard high-altitude vehicles, as the energy an alpha particle imparts on the transistors of a computer is far more consequential for smaller transistors. As a result, computers with smaller transistors and higher computing capability are more prone to errors and crashes than computers with larger transistors. One potential solution for optimizing the performance of computers onboard spacecraft while limiting the number of errors is radiation protection. There are numerous materials under consideration as radiation shields, each with its own tradeoff between cost, weight, thermal diffusivity, and signal permittivity. One material being explored by scientists and engineers is hydrogenated carbon nanofiber, which is light and can absorb alpha strikes through its internal structure.

See also
Alpha decay
Cosmic ray
Satellite
Radiation protection

References

Categories: Radiation effects; Computer engineering
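The QCrit threshold described in the lead lends itself to a toy model: a strike flips a stored bit only if the deposited charge exceeds the critical charge. All numbers below are invented for illustration; real critical charge is process-dependent and is typically quoted in femtocoulombs.

```python
# Toy model of the QCrit threshold: an alpha strike flips a stored bit
# only if the deposited charge crosses the critical charge. The numeric
# values are invented for illustration.
import random

Q_CRIT_FC = 15.0  # hypothetical critical charge, in femtocoulombs

def strike(bit, deposited_fc):
    """Return the bit's value after a strike depositing the given charge."""
    return 1 - bit if deposited_fc >= Q_CRIT_FC else bit

random.seed(1)
bit = 0
for _ in range(5):
    q = random.uniform(0.0, 30.0)  # charge from one simulated particle
    bit = strike(bit, q)
    print(f"deposited {q:5.1f} fC -> bit = {bit}")
```

Shrinking Q_CRIT_FC in this model makes flips more frequent, mirroring the article's point that smaller transistors are more vulnerable.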
https://en.wikipedia.org/wiki/Buildium
Buildium
Buildium is an American property management software company. Founded in 2004 and headquartered in Boston, Massachusetts, it provides cloud-based (software-as-a-service) real estate software. Its property management software allows real estate professionals to manage property portfolios, including leasing, accounting and operations.

History
Buildium was co-founded in 2004 by Michael Monteiro and Dimitris Georgakopoulos. The company was bootstrapped for the first eight years; in 2012 and 2014, Buildium raised two rounds of funding totaling $20 million from K1 Investment Management. In June 2016, the company raised $65 million from Sumeru Equity Partners. In February 2015, Buildium acquired All Property Management, a provider of online marketing services for property managers. In April 2017, Buildium was named a Leader in the Gartner FrontRunners Quadrant for Property Management Software. In December 2019 Buildium was acquired by RealPage, which pegged the value of the acquisition at $580 million.

References

Categories: Property management companies; Organizations established in 2004; Companies based in Boston; Software companies based in Massachusetts; Cloud applications; Cloud computing providers; Software companies of the United States; 2004 establishments in the United States; 2004 establishments in Massachusetts; Software companies established in 2004; Companies established in 2004
https://en.wikipedia.org/wiki/Presentation%20logic
Presentation logic
In software development, presentation logic is concerned with how business objects are displayed to users of the software, e.g. the choice between a pop-up screen and a drop-down menu. The separation of business logic from presentation logic is an important concern for software development and an instance of the separation of presentation and content. One major rationale behind "effective separation" is the need for maximum flexibility in the code and resources dedicated to the presentation logic. Client demands, changing customer preferences and the desire to present a "fresh face" for pre-existing content often result in the need to dramatically modify the public appearance of content while disrupting the underlying infrastructure as little as possible.

The distinction between "presentation" (front end) and "business logic" is usually an important one, because:
the presentation source code language may differ from other code assets;
the production process for the application may require the work to be done at separate times and locations;
different workers have different skill sets, and presentation skills do not always coincide with skills for coding business logic;
code assets are easier to maintain and more readable when disparate components are kept separate and loosely coupled.

References

Categories: Software design; Software architecture; Software engineering terminology
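The separation argued for above can be shown in miniature. The function and field names are illustrative only: the business-logic function knows nothing about display, and either presentation function can be swapped in without touching it.

```python
# Minimal illustration of separating business logic from presentation
# logic; all names here are illustrative.

def total_price(items):
    """Business logic: compute an order total. No display concerns here."""
    return sum(qty * price for qty, price in items)

def present_text(total):
    """Presentation logic, variant 1: plain text."""
    return f"Order total: {total:.2f} EUR"

def present_html(total):
    """Presentation logic, variant 2: HTML. Swappable without touching
    the business logic above."""
    return f"<p>Order total: <strong>{total:.2f} EUR</strong></p>"

order = [(2, 9.99), (1, 4.50)]
print(present_text(total_price(order)))
print(present_html(total_price(order)))
```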
https://en.wikipedia.org/wiki/Absolute%20OpenBSD
Absolute OpenBSD
Absolute OpenBSD: Unix for the Practical Paranoid is a comprehensive guide to the OpenBSD operating system by Michael W. Lucas, author of Absolute FreeBSD and Cisco Routers for the Desperate. The book assumes basic knowledge of the design, commands, and user permissions of Unix-like operating systems. The book contains troubleshooting tips, background information on the system and its commands, and examples to assist with learning.

1st edition
The first edition was released in June 2003. Some of the information in the book became outdated when OpenBSD 3.4 was released only a few months later.

2nd edition
The second edition was released in April 2013. Peter N. M. Hansteen, author of The Book of PF, was the technical reviewer.

External links

References

Categories: OpenBSD; 2003 non-fiction books; No Starch Press books; Books about free software; Books on operating systems
https://en.wikipedia.org/wiki/Trine%202
Trine 2
Trine 2 is a side-scrolling, puzzle-platform game developed by Frozenbyte. It is the sequel to Trine and was released for Microsoft Windows, OS X, PlayStation 3, and Xbox 360 in December 2011, and later for Linux in March 2012. Trine 2 allows three players to take the roles of a wizard, a thief, and a knight in a simultaneous cooperative mode. A Director's Cut edition was released via the Wii U's eShop on the console's launch day in all regions except Australia and Japan. The game was also released as a launch title for the PlayStation 4 in North America and Europe in 2013. On February 13, 2019, it was announced that a port to the Nintendo Switch would be released on February 18, 2019.

Gameplay
Trine 2 is a puzzle-platform game, requiring the player to use the skills of the three characters, Amadeus the wizard, Zoya the thief, and Pontius the knight, to navigate each game level. As with the first game, the mystical "Trine" has bound the three characters together into one common entity, and thus the player controls one character at a time and can switch to either of the other two at any point. Each of the characters has unique abilities: Amadeus can use magic to grab onto certain objects in the game world and create boxes and planks to be used to get around; Zoya can strike at objects with her arrows and grapple onto certain surfaces; and Pontius is strong in combat against foes, can bash apart walls, and can deflect projectiles with his shield. A combination of these elements is necessary to complete each stage in the game's world. Characters have individual life meters, and if one character's meter depletes, that character cannot be used until the next checkpoint is reached. If all three characters lose their life meters, the player must restart from the last checkpoint. Scattered throughout the game world are special magical vials; for every fifty of these collected, the player receives a skill point, which can be used to gain abilities through a skill tree for each character. These skill points are pooled collectively for the three characters and can be traded between them. Trine 2 also supports up to three players in a cooperative mode. In this mode, each player controls one of the three characters, but all must be unique; three players will be forced to play as Amadeus, Zoya, and Pontius. Two players can switch characters as long as both agree to the swap. If a character dies, the other players can revive the character at the next checkpoint. The skill tree is shared among all characters, based on the hosting player's saved game. Story elements are incorporated into the game through the use of an all-knowing narrator (voiced by Terry Wilton) as well as in-game scripted sequences. Scattered throughout the levels are also letters, poems, and documents which further flesh out the backstory and provide additional insight into the game's characters.

Plot
Original game
Trine 2 takes place some years after the restoration of the kingdom in the previous game and opens with Amadeus (voiced by Kevin Howarth) sleeping after a long night trying once again to learn the elusive fireball spell. A strange light shines upon him and beckons him to follow. Although somewhat perturbed, the wizard's curiosity overcomes his fear as he pursues the unearthly glow, which eventually reveals itself to be the Trine.
Upon his arrival Pontius (voiced by Brian Bowles) appears, the Trine having already summoned him after he protected peasant farms from magically overgrown vines, and informs Amadeus that they are needed once more. The wizard is less than thrilled, not wanting to leave his wife and kids, but the knight eventually persuades Amadeus that he must come. They then reunite with Zoya (voiced by Vicky Krueger), and the Trine starts them on their adventure, taking them to a mysterious wilderness they had never heard of or seen before. The three heroes are immediately thrown into action after being attacked by bands of goblins. Along the way they encounter, of all things, a talking flower, which tasks them with finding her sister at the other side of the forest. The three heroes slowly discover the path out, while a mysterious figure (mentioned by the goblins simply as "the Witch") observes their progress. The trio then enter an eerie tree house, which Amadeus believes to be the home of a witch. After bypassing several traps meant to keep out the goblins, they finally meet the mysterious person who has been watching them all this time. She reveals herself as the crown princess Rosabel (voiced by Charlotte Moore) and asks the heroes to help her rid the kingdom of all the evil that has befallen it, to which the three readily agree. This includes going to her once glamorous castle, which has been taken over by the goblins and where the Goblin King now resides. After the three find and slay the great goblin, Zoya convinces the others to take a look around for treasure.

Through a series of book entries, looted poems, and narration scattered around the levels, a story of two sisters, Rosabel and Isabel (voiced by Alix Wilton Regan), unfolds. It begins with the sisters at the ages of eight and nine, when the two are quite close. Both are talented at magic, but as the stories progress into their later years it becomes clear that it was Isabel who got the most attention, received nicer gifts, and was eventually chosen as queen. Rosabel, looking on as her sister gained attention and ever increasing status, becomes increasingly jealous and a tad vengeful. On her birthday, Isabel is invited to a 'surprise party' by her sister. Taken to a secret hiding place where they used to play as small children, she is imprisoned by Rosabel and held captive by an enchanted tree under an irreversible sleep spell. The forest, however, starts to rob Isabel of her magical powers, causing all plant and animal life to overgrow. This throws the kingdom out of balance, allowing the goblins to conquer it. After observing the heroes and the Trine, Rosabel wishes to use the powers of the artifact to save Isabel and restore the kingdom.

The heroic trio eventually come to realize that Isabel is being held hostage, and decide to confront Rosabel. After unsuccessfully trying to get the heroes to hand over the Trine, she imprisons them in the goblins' dungeons. They quickly make their escape and, along the way, overhear goblins saying that the forest's growth has reached their homeland and that the monsters plan to attack. With Isabel using her powers to let the forest help them travel, they soon return to Rosabel's tree house. Rosabel summons her pet dragon to take care of the three heroes and steal the Trine, but they manage to defeat the creature and Rosabel, who falls into a lake. This awakens Isabel, who quickly dives in to save her sister despite all she had been put through.
She reappears with Rosabel's body, but whether Rosabel survived is unclear. Isabel then thanks the heroes for their service, and the Trine appears to help everyone one last time. Amadeus, Pontius, and Zoya are teleported back to the forest where their adventure began. The game ends with them sitting by a campfire sharing stories of their grand adventure before finally travelling back to their homes for the night. Isabel's and Rosabel's fates are left open, though it is explained that with Isabel's magic, the forest and her kingdom will eventually recover and that the heroes' own homeland is once again safe.

Goblin Menace expansion
The heroes return home after saving Isabel's kingdom, but fail to realize that they have yet to deal with the invasion of their homeland planned by the goblins they encountered in the wilderness. While they are relaxing in a tavern, the attack begins, and the three decide to reach the town walls to help defend the city; unbeknownst to them, however, a band of goblins has been dispatched to Amadeus's cottage to kidnap his (very scornful) wife Margaret. The trio reach the walls and fight a goblin siege engine, but then the goblin leader, an inventor named Wheeze, appears and shows them their hostage (who angrily demands that her husband save her). Their hands tied, the heroes are captured by a wyvern which spirits them away. They are left next to a temple in a distant desert, but are somewhat perplexed: they are still bound by the Trine's magic, yet the artifact is nowhere to be seen to guide them as it did before. Exploring the temple, they discover it was built by an ancient goblin civilization that was destroyed by desert worms; they also find a mural depicting a human-like deity worshipped by the goblins. As they are leaving the temple, they are surprised and eaten alive by one such worm. The three manage to escape from the belly of the beast and find themselves close to a factory of flying machines used by the goblins, where they overhear that Wheeze is building a battle tank on the floating Cloudy Isles. The heroes steal a flying carriage and set course for the Isles. On the Cloudy Isles, they confront and defeat Wheeze, then search for Margaret. As it turns out, the Trine was keeping her safe all along by shielding her in light. Pontius realizes that the goblins may have kidnapped her due to her resemblance to the ancient goblin deity. Relieved, Amadeus thanks the Trine, and the heroes hope they can return home without starting a new adventure on the way.

Development
The Humble Frozenbyte Bundle, one of the Humble Bundles, started on April 12, 2011 and featured five games from Frozenbyte, including the original Trine as well as Shadowgrounds and Shadowgrounds: Survivor. It also contained an executable version along with source code for an unfinished game, Jack Claw, and a pre-order for their upcoming game, Splot. By April 22, 2011, the Humble Frozenbyte Bundle had surpassed $700,000. Most of the money generated by the sale went to finishing the development of Trine 2 after lead developer Mike Donovan suggested that a sequel be made. The Linux version was delayed from the rest to allow for additional development; it was the first Frozenbyte title ported to Linux in-house, their other games having previously been ported by Alternative Games. Jukka Kokkonen, Frozenbyte's senior programmer, revealed that the porting itself was "easier than expected", although he did comment that they had some trouble with testing.
The port was released as a beta in late March and was to be released on services including Desura and Gameolith. Kokkonen stated that he hopes Trine 2 "shows that Linux can provide a proper gaming experience" and that they are "very excited to see how Linux users react to the game."

The basic porting process for the Wii U's Director's Cut edition was achieved by Frozenbyte in just two days, which gave the development team plenty of time to adjust visuals and implement the exclusive touch-screen functionality, and subsequently made the game ready to launch alongside the Wii U console in all regions. According to Frozenbyte's sales and marketing manager Mikael Haveri, Nintendo's initial approach and close contact with the company helped support the process of releasing the game for Wii U, and subsequently made it Frozenbyte's first self-published title, without the involvement of Atlus, the game's publisher for other platforms. The support for Trine 2: Director's Cut is cited as one of Nintendo's initial steps in reaching out to the independent video game development community. Haveri added that working with Nintendo had been "very freeform" and positive.

Expansions
Goblin Menace expansion
An expansion pack entitled Trine 2: Goblin Menace was released on September 7, 2012. It features six new levels, a new story, and several new skills which are also available in the original game. Frozenbyte marketing director Mikael Haveri also revealed that it would feature several new puzzles based on light, water, low gravity and magnetic elements. The expansion, originally released only for the PC platforms, is included in the Wii U release of the game, and the Wii U is the only non-PC console planned to receive it. On February 7, 2013, a Frozenbyte representative reported on the company's website that due to relatively poor sales for Trine 2 on Xbox Live Arcade and the PlayStation Network, a conversion of the expansion for those consoles would not be cost-effective, and thus Trine 2: Goblin Menace was unlikely to be seen on Xbox 360 or PlayStation 3 in the foreseeable future.

Director's Cut edition
The Trine 2: Director's Cut edition, released at first only for Nintendo's Wii U console, features the original game with the Goblin Menace expansion, alongside enhanced controls that take advantage of the GamePad controller, as well as an exclusive level called the "Dwarven Caverns". A multiplayer mode called Magic Mayhem was in development at one point but was later scrapped in favor of more focus on the Dwarven Caverns level. The Director's Cut edition also features a flexible control scheme, especially for the game's multiplayer mode, which is both local and online: alongside the GamePad, players can use Wii Remotes, Nunchuks, Wii U Pro Controllers, and even the original Wii Classic Controllers. Additionally, Frozenbyte had plans to include some Miiverse support at some point in the future, with Mikael Haveri citing that "it's opening up the possibilities of what you can do in terms of interaction with other players and so on." Other additions include minor things such as patching support, although the patching process was not detailed at the time. The Wii U release is the only version of Trine 2 to be available in Japan, where it was released on January 22, 2014 under a localized title, courtesy of Nintendo.
Complete Story edition
A patch released on June 6, 2013 for the Steam version of Trine 2 upgraded it to the Trine 2: Complete Story edition if the Goblin Menace expansion had previously been purchased. This edition includes the Dwarven Caverns level previously available only in the Director's Cut version on the Wii U. A cross-platform, DRM-free release of Trine 2: Complete Story was later made available as part of Humble Indie Bundle 9. The Complete Story edition was subsequently released on PlayStation 4 in 2013.

Reception
Trine 2 received largely positive reviews, earning a score of 9/10 from IGN and 84/100 for the Windows and Wii U versions on Metacritic, as well as a score of 85/100 for the game's release on the PlayStation 3 and Xbox 360. By October 2014, Frozenbyte announced that the Trine series had sold over seven million copies worldwide.

References

External links

Categories: 2011 video games; Action-adventure games; Android (operating system) games; Asymmetrical multiplayer video games; Atlus games; Cooperative video games; Fantasy video games; Frozenbyte games; Linux games; MacOS games; Multiplayer and single-player video games; Multiplayer online games; Nintendo Network games; PlayStation 3 games; PlayStation 4 games; PlayStation Network games; Puzzle-platform games; Side-scrolling platform games; Video games with Steam Workshop support; Video game sequels; Video games about witchcraft; Video games developed in Finland; Video games featuring female protagonists; Video games scored by Ari Pulkkinen; Video games with 2.5D graphics; Video games with stereoscopic 3D graphics; Wii U eShop games; Windows games; Xbox 360 Live Arcade games
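The vial rule in the Gameplay section (a shared skill point for every fifty vials collected) is simple enough to model. This is a reader's sketch of the stated rule, not actual game code.

```python
# Reader's sketch of the vial rule from the Gameplay section: every fifty
# collected vials yield one skill point shared by the three characters.
class Party:
    VIALS_PER_SKILL_POINT = 50

    def __init__(self):
        self.vials = 0
        self.skill_points = 0   # shared pool, spendable on any character

    def collect_vial(self):
        self.vials += 1
        if self.vials % self.VIALS_PER_SKILL_POINT == 0:
            self.skill_points += 1

party = Party()
for _ in range(120):
    party.collect_vial()
print(party.skill_points)  # 2 skill points after 120 vials
```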
https://en.wikipedia.org/wiki/Cedega%20%28software%29
Cedega (software)
Cedega (formerly known as WineX) was a proprietary fork of Wine by TransGaming Technologies, taken from the last version of Wine released under the X11 license before the project switched to the GNU LGPL. It was designed specifically for running games created for Microsoft Windows under Linux, and as such its primary focus was implementing the DirectX API. WineX was renamed Cedega on the release of version 4.0 on June 22, 2004. The Cedega Gaming Service was retired on February 28, 2011. TransGaming announced that development would continue under the GameTree Linux Developer Program; however, this proved moot, as the company's core technology divisions were shuttered in 2016.

Licenses
Though Cedega was mainly proprietary software, TransGaming did make part of the source publicly available via CVS, under a mix of licenses. While this was mainly done to give the public a means to view and submit fixes to the code, it was also frequently used as a way to obtain a quasi-demonstration version of Cedega. In response to complaints about the difficulty of building a usable version of the program from the public CVS, as well as its outdated nature, TransGaming released a proper demo of Cedega. The demo gave users a 14-day trial of a reasonably current version of the product, with a watermark of the Cedega logo which faded from almost transparent to fully opaque every few seconds. The demo was later removed without comment. While the licenses under which the code was released did permit non-commercial redistribution of precompiled public-CVS versions of the software, TransGaming strongly discouraged this, openly warning that the license would be changed if the company felt that abuse was occurring or threatened. TransGaming similarly discouraged source-based distributions like Gentoo Linux from creating automated tools to let people build their own version of Cedega from the public CVS. The Wine project originally released Wine under the same MIT License as the X Window System, but owing to concern about proprietary versions of Wine not contributing their changes back to the core project, it has used the LGPL since March 2002.

Functionality
In some cases Cedega closely mimicked the experience that Windows users have (insert disc, run Setup.exe, play); in other cases some amount of user tweaking was required to get a game installed and playable. Cedega 5.2 introduced a feature called the Games Disc Database (GDDB), which simplified many of these settings and added automatic game detection when a CD was inserted, so that the appropriate settings were applied automatically. A basic list of features:
Some types of copy protection
Pixel Shaders 3.0
Vertex Shaders 3.0
DirectX 9.0
Joystick support, including remapping axes
The ability to run some Windows games

History
Cedega's subscriber base dwindled as users complained about a lack of updates and fatal problems with supported games, while Wine achieved a number of features that had been unique to Cedega, in some cases giving even better compatibility. Users attributed TransGaming's apparent lack of interest in Cedega to its focus on Cider, a similar Wine-based API layer for Mac OS X systems, supported by Electronic Arts to bring its Windows-native games to the Mac. In its Development Status report of November 13, 2007, TransGaming explained that a number of modifications had been made to Cedega's code to add Wine's implementation of the MSI installation system and to be able to incorporate more of Wine's codebase.
It was never confirmed whether those changes were in conformance with Wine's LGPL license. The same report announced that all of the work done on Cider would be merged back into Cedega (since both share the same code). Among the new features were "new copy protection, 2.0 shader updates, a head start on shader model 3.0, performance upgrades, a self updating user interface" and others. On September 23, 2008, version 6.1 of Cedega was officially presented. The Cedega Gaming Service was retired on February 28, 2011.

Controversy
TransGaming's business practice of benefiting financially from the Wine project without contributing anything back to it drew criticism. TransGaming obtained the source of the original Wine project when it was under the MIT License, which placed no requirements on how TransGaming published its software, and TransGaming decided to release its software as proprietary. Cedega included licensed support for several types of CD-based copy protection (notably SecuROM and SafeDisc), the code for which TransGaming said it was under contract not to disclose. In 2002 the Wine project changed its license to the GNU Lesser General Public License (LGPL), which means that anyone who publishes a modified version of Wine must publish the source code under an LGPL-compatible license. TransGaming stopped using code contributed to Wine when the license was changed, though it later resumed, integrating certain LGPL portions of Wine into Cedega and placing those portions of the source code on its public servers. TransGaming offered a CVS tree for Cedega, without the copy protection code and texture compression, through its own repositories, with mixed LGPL, AFPL and bstring licensing. The Point2Play graphical frontend for Cedega was also not found in the CVS. Scripts and guides have been made by the community to facilitate building Cedega from the source tree.

See also
Wine – the free and open-source software on which Cedega is based
CrossOver – another commercial proprietary Wine-based product, targeted at running productivity/business applications and, more recently, games

References

External links
Developers Page
GameTree Linux Wiki – user-maintained database of games that work and don't work with Cedega, along with game-specific setup instructions and tweaks
Screencast for installing and testing Cedega on SuSE Linux at showmedo

Categories: Compatibility layers; Software derived from or incorporating Wine; Software forks; Discontinued software
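Cedega's core job, implementing the DirectX API on top of native Linux facilities, can be caricatured as an API translation layer. The sketch below is purely conceptual: the Direct3D-style names and the backend are illustrative stand-ins, not TransGaming's interfaces (the real implementation was native code targeting OpenGL and was never fully published).

```python
# Purely conceptual sketch of an API compatibility layer in the spirit of
# Cedega/Wine: Windows-style calls are translated to a native backend.
# All names are illustrative, not Cedega's real interfaces.

class GLBackend:
    """Stand-in for the native (OpenGL-like) side of the translation."""
    def draw_triangles(self, vertices):
        print(f"native draw of {len(vertices) // 3} triangle(s)")

class D3DDevice:
    """Windows-style facade resembling what a game would call."""
    def __init__(self, backend):
        self.backend = backend

    def DrawPrimitive(self, primitive_type, vertices):
        if primitive_type == "TRIANGLELIST":
            self.backend.draw_triangles(vertices)   # translate the call
        else:
            raise NotImplementedError(primitive_type)

device = D3DDevice(GLBackend())
device.DrawPrimitive("TRIANGLELIST", [(0, 0), (1, 0), (0, 1)])
```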
https://en.wikipedia.org/wiki/Patrick%20Breyer
Patrick Breyer
Patrick Breyer (born 29 April 1977) is a German digital rights activist, jurist, Pirate Party Germany politician, and – since 2019 – Member of the European Parliament (MEP). From 2012 to 2017 he was a member of the state parliament of Schleswig-Holstein, and from April 2016 until the end of the legislative period he was also the leader of the Pirate group in that assembly. Breyer is one of four European Pirate Party MEPs in the 2019–2024 term, along with three Czech Pirate Party members, all of whom belong to the Greens/EFA parliamentary group.

Life
Patrick Breyer lives in Kiel. He studied law and was awarded a doctorate of law in 2004 at the Goethe University Frankfurt with his thesis on the systematic recording and retention of telecommunications traffic data for government purposes in Germany (data retention). In 2004 he was appointed a judge in the state of Schleswig-Holstein. In 2006 he became a founding member of the Pirate Party Germany.

Political activity
Legal proceedings concerning digital civil rights
Breyer is involved in the Working Group on Data Retention, which campaigns for information privacy and civil and political rights, and was involved in organizing the successful class-action lawsuit against data retention together with the lawyer Meinhard Starostik, later a judge at the Constitutional Court of the State of Berlin. In 2016 he again filed a constitutional complaint against the new law on data retention. In 2012, the Federal Constitutional Court declared legislation on government access to telecommunications subscriber data partially unconstitutional in response to an appeal filed by Breyer. Breyer and Katharina Nocun again challenged the revised regulation before the Federal Constitutional Court; the complaint succeeded on 27 May 2020. Breyer also filed a complaint with the European Court of Human Rights against the compulsory identification of prepaid SIM cards, which was dismissed in 2020. In 2012 Breyer filed a lawsuit against the European Commission for the release of documents on data retention and won in two instances.

Breyer filed an action for an injunction against the Federal Republic of Germany at the Berlin-Tiergarten District Court in 2008. The action was directed against the general and indiscriminate retention of user IP addresses in logfiles when browsing government websites (so-called "surf logging"). After the district court dismissed the complaint in its judgment of 13 August 2008, the Berlin Regional Court (Landgericht) granted the request in part in its judgment of 31 January 2013. Both Breyer and the Federal Republic of Germany appealed against the decision. On 19 October 2016, the European Court of Justice ruled, on a referral from the Federal Court of Justice, that dynamically assigned IP addresses represent personal data for the operator of a website if they can be traced to the subscriber in the course of criminal proceedings. On 16 May 2017 the Federal Court of Justice ruled that dynamically assigned IP addresses are personal data, and that web site operators may only store them if this is necessary to ensure the general functioning of their services and the interests and fundamental rights and freedoms of the users do not take precedence. The Federal Court of Justice referred the case back to the Berlin Regional Court, where it is pending. In May 2018, Breyer filed a constitutional complaint against the new authority of the Federal Police to carry out automatic number plate reading at border crossings.
At the end of 2018 he announced that he would file a complaint against the automatic reading of vehicle registration plates within the framework of "Section Control" in Lower Saxony, and in March 2019 he filed the complaint together with the organization "freiheitsfoo".

Member of the Schleswig-Holstein Landtag
In the 2012 Schleswig-Holstein state election, Breyer was elected to the Landtag from the list presented by the Pirate Party of Schleswig-Holstein, and on 21 May 2012 he was chosen by that party's parliamentary group to be its leader in the chamber. He held the chair until the regular election of the parliamentary group's executive committee on 21 May 2013. Between November 2012 and April 2017, Breyer transferred parliamentary allowances totalling 75,159.18 euros to a donation account of the State of Schleswig-Holstein earmarked for the "reduction of new debt". He justified this, among other things, by the fact that only 1% of all taxpayers nationwide have an income comparable to that of parliamentary group chairmen, and that there should be no first- and second-class members of parliament on account of a group chairmanship.

In January 2013 Breyer criticized the gambling machine industry, claiming that the sector, "especially [that part of it] around Mr Paul Gauselmann, [had] been lubricating politicians of all established parties with large donations for years". Gauselmann sent him a formal warning over this, but Breyer did not issue a cease-and-desist declaration.

In the summer of 2014, Breyer published on his website police and judicial orders establishing danger zones in Schleswig-Holstein, previously sent to him by email by the Ministry of the Interior, within which police checks were permitted without individual justification. In the more than 100-page documents, the Ministry of the Interior had only partially redacted (blacked out) the names, official telephone numbers, and email addresses of the police officers in charge, some of whom were also investigating the criminal outlaw motorcycle club scene. After the problem became known, Breyer publicly apologized for not having checked the documents sufficiently before publication and deleted the police officers' data. He was criticized across party lines. The Gewerkschaft der Polizei (police union) filed criminal charges against Breyer, but the public prosecutor's office did not initiate an investigation, as there were no sufficient factual indications of criminal offences. In an expert opinion, the data protection commissioner Thilo Weichert objected on several points to the Ministry of the Interior's handling of the requested danger zone orders: the Ministry had not carried out its redaction effectively or completely; whether the documents classified as "for official use only" (VS-NfD) had in fact needed to be classified in this way was at best questionable; and an explicit reference to the intended confidential treatment should have been made. However, the member of parliament would also have had to take confidentiality and the protection of secrets into account when exercising his rights.

Breyer was "by far the most hard-working parliamentarian", with a total of 356 initiatives in the legislative period running from 2012, Die Welt reported at the end of 2015, citing the state parliament's information system. On 17 February 2016 Breyer awarded the green-red-blue coalition an "ostrich prize for extraordinary achievements in delaying important reforms in our country".
During the plenary debate, SPD chairman Ralf Stegner was presented with an ostrich mannequin. Breyer accused the coalition of using postponement and procrastination to prevent "repeated decisions by the state parliament on uncomfortable reform initiatives" by the Pirate group, including the introduction of a cooling-off period for ministers moving into industry jobs. State parliament president Klaus Schlie (CDU) issued Breyer with a formal reprimand. On 12 April 2016 Breyer was re-elected chairman of the Pirate parliamentary group.

In May 2016 Breyer made public accusations that male cadets at the Eutin Police Academy had displayed openly misogynistic, sexist, and racist behaviour towards their female fellow students, and that the Ministry of the Interior had remained inactive despite knowledge of the matter (the so-called "WhatsApp affair"). As a result of the revelations, one police student was refused entry into the service. In addition, the head of the academy was replaced in what was claimed to be no more than a routine rotation of posts. A committee of inquiry is investigating the case.

In autumn 2016 Breyer became the Pirate Party of Schleswig-Holstein's leading candidate for the state elections. In December 2016 and February 2017, Breyer criticized in the state parliament the fact that the heads of the state audit office and the state constitutional court had been appointed by the other parties according to party proportional representation, without a public call for applications. The president of the state parliament called him to order and instructed him to withdraw his remarks. In order to defend his right to criticize what he called "job pushing", Breyer appealed to the State Constitutional Court, which declared the call to order unconstitutional on 17 May 2017. On 27 March 2017 Breyer was awarded the Horst Lütje Foundation's "Backbone Prize", endowed with 1,000 euros, for his efforts.

In May 2017, Breyer made public accusations by criminal investigators that exculpatory statements in criminal proceedings against "rockers" had been suppressed and that bullying had been used in response to criticism (the so-called "Rocker affair"). As a result of the revelations, the head of the police department in the Ministry of the Interior as well as the heads of the state criminal investigation department and the state police office had to step down. A parliamentary committee of inquiry is investigating the case.

In the 2017 state elections the Pirate Party failed to win any seats in the state parliament; with 1.2% of the vote it scored seven percentage points less than five years earlier. This ended Breyer's mandate. As a representative of the "People's Initiative for Co-Determination", Breyer handed more than 20,000 citizens' signatures to the state parliament in December 2017. In May 2018 Breyer, as a representative of the "People's Initiative for the Protection of Water", which demands a legal ban on fracking, delivered more than 42,000 citizens' signatures to the president of the state parliament.

European elections 2019
In the 2019 European Parliament election in Germany, Breyer was the Pirate Party Germany's leading candidate. For the election campaign he recorded a rap music video, which featured other well-known Pirates such as Anja Hirschel.
In March 2019 Breyer filed a lawsuit against the EU Commission, which was withholding secret project documents relating to new kinds of video lie detectors intended for entry control, including an ethical and legal evaluation of the technology, on the grounds of protecting the commercial interests of the companies involved.

Breyer was the only member of the German Pirate Party to be elected to the European Parliament, where he sits, together with three members of the Czech Pirate Party elected at the same time, as one of four members representing the Pirate parties in the parliament. He joined the Greens–European Free Alliance group, as Felix Reda had done in the previous legislature, as did the three Czech Pirates. For his group, Breyer is a member of the European Parliament Committee on Civil Liberties, Justice and Home Affairs and an alternate member of the European Parliament Committee on Legal Affairs.

Works
The systematic recording and retention of telecommunications traffic data for government purposes in Germany (traffic data retention). As of November 2004; Rhombos, Berlin 2005 (full text).

References

External links
Breyer's blog

Categories: 1977 births; 21st-century German politicians; Articles containing video clips; Living people; Members of the European Parliament for Germany; Members of the Landtag of Schleswig-Holstein; MEPs for Germany 2019–2024; Pirate Party (Germany) MEPs; Pirate Party Germany politicians
18630373
https://en.wikipedia.org/wiki/Qumranet
Qumranet
Qumranet, Inc. was an enterprise software company offering a desktop virtualization platform based on hosted desktops in Kernel-based Virtual Machines (KVM) on servers, linked with their SPICE protocol. The company was also the creator, maintainer and global sponsor of the KVM open source hypervisor. History The company was founded in 2005 by CEO Benny Schnaider, with Rami Tamir as president, Moshe Bar as CTO, and chairman Dr. Giora Yaron. Qumranet had raised $20 million in two financing rounds from Norwest Venture Partners, Cisco Systems, and Sequoia Capital, in addition to investment by the founding partners. The company's first product, named "Solid ICE", hosted Windows and Linux desktops on central servers located in a data center. The Ra'anana-based company developed virtualization technology for IT data centers. A very low-profile Israeli startup, the company made waves with the rapid acceptance of KVM into the Linux kernel, and its Solid ICE desktop virtualization platform received serious attention. Avi Kivity was the lead developer and maintainer of the Kernel-based Virtual Machine project from mid-2006; KVM has been part of the Linux kernel since the 2.6.20 release in February 2007. Qumranet was on the Gartner Group's 2008 list of "Cool Vendors," an award given to small companies with advanced technology. On September 4, 2008, Qumranet was acquired by Red Hat, Inc. for $107 million. Key executives Benny Schnaider, Co-Founder, Chief Executive Officer and Director Rami Tamir, Co-Founder, President and Director Moshe Bar Ph.D., Co-Founder and Chief Technology Officer Giora Yaron Ph.D., Co-Founder and Chairman of the Board Shmil Levy, Board Member, Sequoia Capital Vab Goel, Board Member, Norwest Venture Partners References External links http://www.qumranet.com/ (archived version) Software companies established in 2005 Red Hat Remote desktop Software companies of Israel Virtualization software Israeli companies established in 2005
57559964
https://en.wikipedia.org/wiki/TOTVS
TOTVS
TOTVS S.A. is a Brazilian software company based in São Paulo. TOTVS was initially formed from the merger of Microsiga and Logocenter. TOTVS is the leader in the Brazilian ERP market according to the FGV (Getúlio Vargas Foundation) and, in addition to Brazil, has offices in the United States, Portugal and Latin America. History In 1983, entrepreneur Laércio Cosentino was 23 years old and was director of the data processing company Siga, a company created by Ernesto Haberkorn. Cosentino and Haberkorn opened Microsiga, a software company for small and medium-sized companies, becoming equal partners in the new company. Microsiga merged with Siga in 1989, and in 2005 the company changed its name to TOTVS. TOTVS acquired more than 50 corporate software manufacturers, including Datasul, RM Sistemas, Midbyte, Logocenter and BCS. TOTVS opened a California operation within the University of California campus, TOTVS Labs, a cloud computing solutions research center, in 2011. The following year, it opened an office in Silicon Valley, California. TOTVS acquired seven software companies in 2013: PRX (now TOTVS Agroindústria), ZeroPaper, RMS, 72% of Ciashop, Seventeen Tecnologia da Informação, W & D Participações (parent company of PC Sistemas and PC Informática), and the American company GoodData. In May 2014, it bought Virtual Age, a Paraná-based company that develops cloud software for textile, fashion and clothing companies, for 75.1 million dollars. On August 14, 2015 TOTVS announced the purchase of 100% of the commercial automation company Bematech for R$ 550 million. Offices TOTVS has 66 units and 11 R&D centres in Brazil (São Paulo, Rio de Janeiro, Brasília, Belo Horizonte, Salvador, Recife, Fortaleza, Manaus, Porto Alegre, Joinville and Curitiba), three R&D centres in the United States (Weston, Florida; Raleigh, North Carolina; and Mountain View, California, home of TOTVS Labs), and four units with one R&D centre in Latin America (Argentina, Bolivia, Chile, Colombia, Dominican Republic, Ecuador, Mexico, Paraguay, Peru and Uruguay), as well as offices in Portugal and New York. References External links Companies listed on B3 (stock exchange) Technology companies of Brazil Companies based in São Paulo Software companies of Brazil ERP software companies Brazilian brands
2041
https://en.wikipedia.org/wiki/Ares
Ares
Ares (Greek: Árēs) is the Greek god of courage and war. He is one of the Twelve Olympians, and the son of Zeus and Hera. The Greeks were ambivalent toward him. He embodies the physical valor necessary for success in war but can also personify sheer brutality and bloodlust, in contrast to his sister, the armored Athena, whose martial functions include military strategy and generalship. An association with Ares endows places, objects and other deities with a savage, dangerous, or militarized quality. Although Ares' name shows his origins as Mycenaean, his reputation for savagery was thought by some to reflect his likely origins as a Thracian deity. Some cities in Greece and several in Asia Minor held annual festivals to bind and detain him as their protector. In parts of Asia Minor he was an oracular deity. Still further away from Greece, the Scythians were said to ritually kill one in a hundred prisoners of war as an offering to their equivalent of Ares. The later belief that ancient Spartans had offered human sacrifice to Ares may owe more to mythical prehistory, misunderstandings and reputation than to reality. Though there are many literary allusions to Ares' love affairs and children, he has a limited role in Greek mythology. When he does appear, he is often humiliated. In the Trojan War, Aphrodite, protector of Troy, persuades Ares to take the Trojans' side. The Trojans lose, while Ares' sister Athena helps the Greeks to victory. Most famously, when the craftsman-god Hephaestus discovers his wife Aphrodite is having an affair with Ares, he traps the lovers in a net and exposes them to the ridicule of the other gods. Ares' nearest counterpart in Roman religion is Mars, who was given a more important and dignified place in ancient Roman religion as ancestral protector of the Roman people and state. During the Hellenization of Latin literature, the myths of Ares were reinterpreted by Roman writers under the name of Mars, and in later Western art and literature, the mythology of the two figures became virtually indistinguishable. Names The etymology of the name Ares is traditionally connected with the Greek word arē, the Ionic form of the Doric ara, "bane, ruin, curse, imprecation". Walter Burkert notes that "Ares is apparently an ancient abstract noun meaning throng of battle, war." R. S. P. Beekes has suggested a Pre-Greek origin of the name. The earliest attested form of the name is the Mycenaean Greek a-re, written in the Linear B syllabic script. The adjectival epithet, Areios, was frequently appended to the names of other gods when they took on a warrior aspect or became involved in warfare: Zeus Areios, Athena Areia, even Aphrodite Areia. In the Iliad, the word ares is used as a common noun synonymous with "battle." Ares' attributes are a helmet, shield, and sword or spear. Inscriptions as early as Mycenaean times, and continuing into the Classical period, attest to Enyalios as another name for the god of war. Worship, cult and ritual In mainland Greece and the Peloponnese, only a few places are known to have had a formal temple and cult of Ares. Pausanias (2nd century AD) notes an altar to Ares at Olympia, and the moving of a Temple of Ares to the Athenian agora during the reign of Augustus, essentially rededicating it (2 AD) as a Roman temple to the Augustan Mars Ultor. 
The Areopagus ("mount of Ares"), a natural rock outcrop in Athens, some distance from the Acropolis, was supposedly where Ares was tried and acquitted by the gods for his revenge-killing of Poseidon's son, Halirrhothius, who had raped Ares' daughter Alcippe. Its name was used for the court that met there, mostly to investigate and try potential cases of treason. Numismatist M. Jessop Price states that Ares "typified the traditional Spartan character", but had no important cult in Sparta; and he never occurs on Spartan coins. Gonzalez observes, in his 2005 survey of Ares' cults in Asia Minor, that cults to Ares on the Greek mainland may have been more common than some sources assert. Wars between Greek states were endemic; war and warriors provided his tribute, and fed his insatiable appetite for battle. Chained statues Gods were immortal but could be bound and restrained, both in mythic narrative and in cult practise. There was an archaic Spartan statue of Ares in chains in the temple of Enyalios (sometimes regarded as the son of Ares, sometimes as Ares himself), which Pausanias claimed meant that the spirit of war and victory was to be kept in the city. The Spartans are known to have ritually bound the images of other deities, including Aphrodite and Artemis (cf Ares and Aphrodite bound by Hephaestus), and in other places there were chained statues of Artemis and Dionysos. Statues of Ares in chains are described in the instructions given by an oracle of the late Hellenistic era to various cities of Pamphylia (in Anatolia) including Syedra, Lycia and Cilicia, places almost perpetually under threat from pirates. Each was told to set up a statue of "bloody, man-slaying Ares" and provide it with an annual festival in which it was ritually bound with iron fetters ("by Dike and Hermes") as if a supplicant for justice, put on trial and offered sacrifice. The oracle promises that "thus will he become a peaceful deity for you, once he has driven the enemy horde far from your country, and he will give rise to prosperity much prayed for." This Ares karpodotes ("giver of Fruits") is well attested in Lycia and Pisidia. Sacrifices Like most Greek deities, Ares was given animal sacrifice; in Sparta, after battle, he was given an ox for a victory by stratagem, or a rooster for victory through onslaught. The usual recipient of sacrifice before battle was Athena. Reports of historic human sacrifice to Ares in an obscure rite known as the Hekatomphonia represent a very long-standing error, repeated through several centuries and well into the modern era. The hekatomphonia was an animal sacrifice to Zeus; it could be offered by any warrior who had personally slain one hundred of the enemy. Pausanias reports that in Sparta, each company of youths sacrificed a puppy to Enyalios before engaging in a hand-to-hand "fight without rules" at the Phoebaeum. the chthonic night-time sacrifice of a dog to Enyalios became assimilated to the cult of Ares. Porphyry claims, without detail, that Apollodorus of Athens (circa second century BC) says the Spartans made human sacrifices to Ares, but this may be a reference to mythic pre-history. Thrace and Scythia A Thracian god identified by Herodotus (c. 484 – c. 425 BC) as Ares, through interpretatio Graeca, was one of three otherwise unnamed deities that Thracian commoners were said to worship. Herodotus recognises and names the other two as "Dionysus" and "Artemis", and claims that the Thracian aristocracy exclusively worshiped "Hermes". 
In Herodotus' Histories, the Scythians worship an indigenous form of the Greek Ares, who is otherwise unnamed, but ranked beneath Tabiti (whom Herodotus claims as a form of Hestia), Api and Papaios in Scythia's divine hierarchy. His cult object was an iron sword. The "Scythian Ares" was offered blood-sacrifices (or ritual killings) of cattle, horses and "one in every hundred human war-captives", whose blood was used to douse the sword. Statues, and complex platform-altars made of heaped brushwood, were devoted to him. This sword-cult, or one very similar, is said to have persisted among the Alans. Some have posited that the "Sword of Mars" in later European history alludes to the Huns having adopted Ares. Asia Minor In some parts of Asia Minor, Ares was a prominent oracular deity, something not found in any Hellenic cult to Ares or Roman cult to Mars. Ares was linked in some regions or polities with a local god or cultic hero, and recognised as a higher, more prestigious deity than in mainland Greece. His cults in southern Asia Minor are attested from the 5th century BC and well into the later Roman Imperial era, at 29 different sites, and on over 70 local coin issues. He is sometimes represented on coinage of the region by the "Helmet of Ares" or carrying a spear and a shield, or as a fully armed warrior, sometimes accompanied by a female deity. In what is now western Turkey, the Hellenistic city of Metropolis built a monumental temple to Ares as the city's protector, not before the 3rd century BC. It is now lost, but the names of some of its priests and priestesses survive, along with the temple's likely depictions on coins of the province. Crete A sanctuary of Aphrodite was established at Sta Lenika, on Crete, between the cities of Lato and Olus, possibly during the Geometric period. It was rebuilt in the late 2nd century BC as a double-sanctuary to Ares and Aphrodite. Inscriptions record disputes over the ownership of the sanctuary. The names of Ares and Aphrodite appear as witnesses to sworn oaths, and there is a Victory thanks-offering to Aphrodite, who Millington believes had capacity as a "warrior-protector acting in the realm of Ares". There were cultic links between the Sta Lenika sanctuary, Knossos and other Cretan states, and perhaps with Argos on the mainland. While the Greek literary and artistic record from both the Archaic and Classical eras connects Ares and Aphrodite as complementary companions and ideal though adulterous lovers, their cult pairing and Aphrodite as warrior-protector is localised to Crete. Aksum In Africa, Maḥrem, the principal god of the kings of Aksum prior to the 4th century AD, was invoked as Ares in Greek inscriptions. The anonymous king who commissioned the Monumentum Adulitanum in the late 2nd or early 3rd century refers to "my greatest god, Ares, who also begat me, through whom I brought under my sway [various peoples]". The monumental throne celebrating the king's conquests was itself dedicated to Ares. In the early 4th century, the last pagan king of Aksum, Ezana, referred to "the one who brought me forth, the invincible Ares". Characterisation Ares was one of the Twelve Olympians in the archaic tradition represented by the Iliad and Odyssey. 
In Greek literature, Ares often represents the physical or violent and untamed aspect of war and is the personification of sheer brutality and bloodlust ("overwhelming, insatiable in battle, destructive, and man-slaughtering", as Burkert puts it), in contrast to his sister, the armored Athena, whose functions as a goddess of intelligence include military strategy and generalship. An association with Ares endows places and objects with a savage, dangerous, or militarized quality. In the Iliad, Zeus expresses a recurring Greek revulsion toward the god when Ares returns wounded and complaining from the battlefield at Troy. This ambivalence is expressed also in the Greeks' association of Ares with the Thracians, whom they regarded as a barbarous and warlike people. Thrace was considered to be Ares's birthplace and his refuge after the affair with Aphrodite was exposed to the general mockery of the other gods. A late-6th-century BC funerary inscription from Attica emphasizes the consequences of coming under Ares's sway. Hymns Homeric Hymn 8 to Ares (trans. Evelyn-White) (Greek epic 7th to 4th centuries BC) Ares, exceeding in strength, chariot-rider, golden-helmed, doughty in heart, shield-bearer, Saviour of cities, harnessed in bronze, strong of arm, unwearying, mighty with the spear, O defence of Olympus, father of warlike Victory, ally of Themis, stern governor of the rebellious, leader of righteous men, sceptred King of manliness, who whirl your fiery sphere among the planets in their sevenfold courses through the aether wherein your blazing steeds ever bear you above the third firmament of heaven; hear me, helper of men, giver of dauntless youth! Shed down a kindly ray from above upon my life, and strength of war, that I may be able to drive away bitter cowardice from my head and crush down the deceitful impulses of my soul. Restrain also the keen fury of my heart which provokes me to tread the ways of blood-curdling strife. Rather, O blessed one, give you me boldness to abide within the harmless laws of peace, avoiding strife and hatred and the violent fiends of death. Orphic Hymn 65 to Ares (trans. Taylor) (Greek hymns 3rd century BCE to 2nd century CE) To Ares, Fumigation from Frankincense. Magnanimous, unconquered, boisterous Ares, in darts rejoicing, and in bloody wars; fierce and untamed, whose mighty power can make the strongest walls from their foundations shake: mortal-destroying king, defiled with gore, pleased with war's dreadful and tumultuous roar. Thee human blood, and swords, and spears delight, and the dire ruin of mad savage fight. Stay furious contests, and avenging strife, whose works with woe embitter human life; to lovely Kypris [Aphrodite] and to Lyaios [Dionysos] yield, for arms exchange the labours of the field; encourage peace, to gentle works inclined, and give abundance, with benignant mind. Mythology When Ares does appear in myths, he typically faces humiliation. Birth He is one of the Twelve Olympians, and the son of Zeus and Hera. Argonautica In the Argonautica, the Golden Fleece hangs in a grove sacred to Ares, until its theft by Jason. The Birds of Ares (Ornithes Areioi) drop feather darts in defense of the Amazons' shrine to Ares, as father of their queen, on a coastal island in the Black Sea. Founding of Thebes Ares played a central role in the founding myth of Thebes, as the progenitor of the water-dragon slain by Cadmus. The dragon's teeth were sown into the ground as if a crop and sprang up as the fully armored autochthonic Spartoi. 
Cadmus placed himself in the god's service for eight years to atone for killing the dragon. To further propitiate Ares, Cadmus took as a bride Harmonia, a daughter of Ares's union with Aphrodite. In this way, Cadmus harmonized all strife and founded the city of Thebes. In reality, Thebes dominated Boeotia's great and fertile plain, which in both history and myth was a battleground for competing polities. According to Plutarch, the plain was anciently described as "The dancing-floor of Ares". Aphrodite In the Odyssey, in the tale sung by the bard in the hall of Alcinous, the Sun-god Helios once spied Ares and Aphrodite having sex secretly in the hall of Hephaestus, her husband. He reported the incident to Hephaestus. Contriving to catch the illicit couple in the act, Hephaestus fashioned a finely-knitted and nearly invisible net with which to snare them. At the appropriate time, this net was sprung, and trapped Ares and Aphrodite locked in very private embrace. But Hephaestus was not satisfied with his revenge, so he invited the Olympian gods and goddesses to view the unfortunate pair. For the sake of modesty, the goddesses demurred, but the male gods went to witness the sight. Some commented on the beauty of Aphrodite, others remarked that they would eagerly trade places with Ares, but all who were present mocked the two. Once the couple was released, the embarrassed Ares returned to his homeland, Thrace, and Aphrodite went to Paphos. In a much later interpolated detail, Ares put the young soldier Alectryon, his companion in drinking and even love-making, by his door to warn them of Helios's arrival, as Helios would tell Hephaestus of Aphrodite's infidelity if the two were discovered; but Alectryon fell asleep on guard duty. Helios discovered the two and alerted Hephaestus. The furious Ares turned the sleepy Alectryon into a rooster, which now always announces the arrival of the sun in the morning, as a way of apologizing to Ares. The Chorus of Aeschylus' Suppliants (written 463 BC) refers to Ares as Aphrodite's "mortal-destroying bedfellow". In the Iliad, Ares helps the Trojans because of his affection for their divine protector, Aphrodite; she thus redirects his innate destructive savagery to her own purposes. Giants In one archaic myth, related only in the Iliad by the goddess Dione to her daughter Aphrodite, two chthonic giants, the Aloadae, named Otus and Ephialtes, bound Ares in chains and imprisoned him in a bronze urn, where he remained for thirteen months, a lunar year. "And that would have been the end of Ares and his appetite for war, if the beautiful Eriboea, the young giants' stepmother, had not told Hermes what they had done," she related. In this, Burkert suspects "a festival of licence which is unleashed in the thirteenth month." Ares was held screaming and howling in the urn until Hermes rescued him, and Artemis tricked the Aloadae into slaying each other. In Nonnus's Dionysiaca, in the war between Cronus and Zeus, Ares killed an unnamed giant son of Echidna who was allied with Cronus, and described as spitting "horrible poison" and having "snaky" feet. Iliad In Homer's Iliad, Ares has no fixed allegiance. He promises Athena and Hera that he will fight for the Achaeans but Aphrodite persuades him to side with the Trojans. During the war, Diomedes fights with Hector and sees Ares fighting on the Trojans' side. Diomedes calls for his soldiers to withdraw. Zeus grants Athena permission to drive Ares from the battlefield. 
Encouraged by Hera and Athena, Diomedes thrusts with his spear at Ares. Athena drives the spear home, and all sides tremble at Ares's cries. Ares flees to Mount Olympus, forcing the Trojans to fall back. Ares overhears that his son Ascalaphus has been killed and wants to change sides again, rejoining the Achaeans for vengeance, disregarding Zeus's order that no Olympian should join the battle. Athena stops him. Later, when Zeus allows the gods to fight in the war again, Ares attacks Athena to avenge his previous injury. Athena overpowers him by striking him with a boulder. Attendants Deimos ("Terror" or "Dread") and Phobos ("Fear") are Ares' companions in war, and according to Hesiod, are also his children by Aphrodite. Eris, the goddess of discord, or Enyo, the goddess of war, bloodshed, and violence, was considered the sister and companion of the violent Ares. In at least one tradition, Enyalius, rather than another name for Ares, was his son by Enyo. Ares may also be accompanied by Kydoimos, the daemon of the din of battle; the Makhai ("Battles"); the Hysminai ("Acts of manslaughter"); Polemos, a minor spirit of war, or only an epithet of Ares, since it has no specific dominion; and Polemos's daughter, Alala, the goddess or personification of the Greek war-cry, whose name Ares uses as his own war-cry. Ares's sister Hebe ("Youth") also draws baths for him. According to Pausanias, local inhabitants of Therapne, Sparta, recognized Thero, "feral, savage," as a nurse of Ares. Offspring and affairs Though Ares plays a relatively limited role in Greek mythology as represented in literary narratives, his numerous love affairs and abundant offspring are often alluded to. The union of Ares and Aphrodite created the gods Eros, Anteros, Phobos, Deimos, and Harmonia. Other versions include Alcippe as one of his daughters. Cycnus (Κύκνος) of Macedonia was a son of Ares who tried to build a temple to his father with the skulls and bones of guests and travellers. Heracles fought him and, in one account, killed him. In another account, Ares fought his son's killer, but Zeus parted the combatants with a thunderbolt. List of offspring and their mothers Sometimes poets and dramatists recounted ancient traditions, which varied, and sometimes they invented new details; later scholiasts might draw on either or simply guess. Thus while Phobos and Deimos were regularly described as offspring of Ares, others listed here such as Meleager, Sinope and Solymus were sometimes said to be children of Ares and sometimes given other fathers. Mars The nearest counterpart of Ares among the Roman gods is Mars, originally an agricultural deity, who, as the father of Romulus, Rome's legendary founder, was given a more important and dignified place in ancient Roman religion, as a guardian deity of the entire Roman state and its people. During the Hellenization of Latin literature, the myths of Ares were reinterpreted by Roman writers under the name of Mars. Greek writers under Roman rule also recorded cult practices and beliefs pertaining to Mars under the name of Ares. Thus in the classical tradition of later Western art and literature, the mythology of the two figures became virtually indistinguishable. Renaissance and later depictions In Renaissance and Neoclassical works of art, Ares's symbols are a spear and helmet, his animal is a dog, and his bird is the vulture. 
In literary works of these eras, Ares is replaced by the Roman Mars, a romantic emblem of manly valor rather than the cruel and blood-thirsty god of Greek mythology. In popular culture Genealogy See also Family tree of the Greek gods Footnotes Notes References Antoninus Liberalis, The Metamorphoses of Antoninus Liberalis: A Translation with a Commentary, edited and translated by Francis Celoria, Routledge, 1992. Online version at ToposText. Apollodorus, The Library, with an English Translation by Sir James George Frazer, F.B.A., F.R.S., in 2 Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library. Burkert, Walter, Greek Religion, Harvard University Press, 1985. Internet Archive. Etymologicum Magnum, Friderici Sylburgii (ed.), Leipzig: J.A.G. Weigel, 1816. Internet Archive. Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996, two volumes. Grimal, Pierre, The Dictionary of Classical Mythology, Wiley-Blackwell, 1996. Internet Archive. Hansen, William, Handbook of Classical Mythology, ABC-CLIO, 2004. Internet Archive. Hard, Robin, The Routledge Handbook of Greek Mythology: Based on H.J. Rose's "Handbook of Greek Mythology", Psychology Press, 2004. Google Books. Hesiod, Theogony, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library. Internet Archive. Homer, The Iliad with an English Translation by A.T. Murray, Ph.D., in two volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. Homer, The Odyssey with an English Translation by A.T. Murray, Ph.D., in two volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann, Ltd. 1919. Online version at the Perseus Digital Library. Homeric Hymn 8 to Ares, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library. Hyginus, Gaius Julius, De Astronomica, in The Myths of Hyginus, edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960. Online version at ToposText. Hyginus, Gaius Julius, Fabulae, in The Myths of Hyginus, edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960. Online version at ToposText. Nonnus, Dionysiaca, Volume II: Books 16–35, translated by W. H. D. Rouse, Loeb Classical Library No. 345, Cambridge, Massachusetts, Harvard University Press, 1940. Online version at Harvard University Press. Internet Archive (1940). Oxford Classical Dictionary, revised third edition, Simon Hornblower and Antony Spawforth (editors), Oxford University Press, 2003. Internet Archive. Pausanias, Pausanias Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library. Peck, Harry Thurston, Harpers Dictionary of Classical Antiquities, New York, Harper and Brothers, 1898. Online version at the Perseus Digital Library. 
Pseudo-Plutarch, De fluviis, in Plutarch's morals, Volume V, edited and translated by William Watson Goodwin, Boston: Little, Brown & Co., 1874. Online version at the Perseus Digital Library. Smith, William, Dictionary of Greek and Roman Biography and Mythology, London (1873). Online version at the Perseus Digital Library. Stephanus of Byzantium, Stephani Byzantii Ethnica: Volumen I Alpha - Gamma, edited by Margarethe Billerbeck, in collaboration with Jan Felix Gaertner, Beatrice Wyss and Christian Zubler, De Gruyter, 2006. Online version at De Gruyter. Google Books. Tripp, Edward, Crowell's Handbook of Classical Mythology, Thomas Y. Crowell Co; First edition (June 1970). Internet Archive. Greek war deities War gods Martian deities Consorts of Aphrodite Children of Hera Metamorphoses characters Deities in the Iliad Children of Zeus Characters in Greek mythology Deeds of Poseidon Dogs in religion Characters in the Odyssey
4625359
https://en.wikipedia.org/wiki/Derived%20unique%20key%20per%20transaction
Derived unique key per transaction
In cryptography, Derived Unique Key Per Transaction (DUKPT) is a key management scheme in which a unique key, derived from a fixed key, is used for every transaction. Therefore, if a derived key is compromised, future and past transaction data are still protected, since the next or prior keys cannot be determined easily. DUKPT is specified in ANSI X9.24 part 1. Overview DUKPT allows the processing of the encryption to be moved away from the devices that hold the shared secret. The encryption is done with a derived key, which is not re-used after the transaction. DUKPT is used to encrypt electronic commerce transactions. While it can be used to protect information between two companies or banks, it is typically used to encrypt PIN information acquired by Point-Of-Sale (POS) devices. DUKPT is not itself an encryption standard; rather it is a key management technique. The DUKPT scheme enables both originating and receiving parties to agree on the key being used for a given transaction; gives each transaction a key distinct from all other transactions, except by coincidence; ensures that if a present derived key is compromised, past and future keys (and thus the transactional data encrypted under them) remain uncompromised; has each device generate a different key sequence; and lets originators and receivers of encrypted messages avoid performing an interactive key-agreement protocol beforehand. History DUKPT was invented in the late 1980s at Visa but did not receive much acceptance until the 1990s, when industry practices shifted towards recommending, and later requiring, that each device have a distinct encryption key. Before DUKPT, the state of the art was known as Master/Session, which required every PIN-encrypting device to be initialized with a unique master key. In handling transactions originating from devices using Master/Session key management, an unwanted side effect was the need for a table of encryption keys as numerous as the devices deployed; at a major merchant acquirer the table could become quite large indeed. DUKPT resolved this. In DUKPT each device is still initialized with a distinct key, but all of the initialization keys of an entire family of devices are derived from a single key, the base derivation key (BDK). To decrypt encrypted messages from devices in the field, the recipient need only store the BDK. Keys As stated above, the algorithm needs an initial single key, which in the original description of the algorithm was called the super-secret key, but was later given the more official-sounding name Base Derivation Key (or BDK). The original name perhaps conveys better the true nature of this key, because if it is compromised then all devices and all transactions are similarly compromised. This is mitigated by the fact that only two parties know the BDK: the recipient of the encrypted messages (typically a merchant acquirer), and the party which initializes the encryption devices (typically the manufacturer of the device). The BDK is usually stored inside a tamper-resistant security module (TRSM), or hardware security module (HSM). Note that the BDK is not the key used to initialize the encryption device that will participate in DUKPT operations. See below for the actual encryption key generation process. First: a key is derived from the BDK; this is known as the IPEK (Initial PIN Encryption Key). Second: the IPEK is then injected into the devices, so any compromise of that key compromises only the device, not the BDK. 
Third: this creates yet another set of keys inside the device, irreversibly derived from the IPEK (nominally called the Future Keys). Fourth: afterwards the IPEK is immediately discarded. (Note: this step contradicts the "Session Keys" section, which indicates that only 21 Future Keys are generated at a time; the IPEK must be retained by the terminal in order to generate the next batch of 21 Future Keys.) Fifth: Future Keys are used to encrypt transactions in the DUKPT process. Upon detection of compromise the device itself derives a new key via the Derived Key Generation Process. Communication Origination On the originating (encrypting) end, the system works as follows: A transaction is initiated which involves data to be encrypted. The typical case is a customer's PIN. A key is retrieved from the set of "Future Keys". This is used to encrypt the message, creating a cryptogram. An identifier known as the "Key Serial Number" (KSN) is returned from the encrypting device, along with the cryptogram. The KSN is formed from the device's unique identifier and an internal transaction counter. The (cryptogram, KSN) pair is forwarded on to the intended recipient, typically the merchant acquirer, where it is decrypted and processed further. Internally, the device increments the transaction count (using an internal counter), invalidates the key just used, and, if necessary, generates more future keys. Receiving On the receiving (decrypting) end, the system works as follows: The (cryptogram, KSN) pair are received. The appropriate BDK (if the system has more than one) is located. The receiving system first regenerates the IPEK (a sketch of this derivation appears after this section), and then goes through a process similar to that used on the originating system to arrive at the same encrypting key that was used (the session key); the Key Serial Number (KSN) provides the information needed to do this. The cryptogram is decrypted with the session key. Any further processing is done. For merchant acquirers, this usually means encrypting under another key to forward on to a switch (doing a "translate"), but for certain closed-loop operations may involve directly processing the data, such as verifying the PIN. Session Keys The method for arriving at session keys is somewhat different on the originating side than on the receiving side. On the originating side, there is considerable state information retained between transactions, including a transaction counter, a serial number, and an array of up to 21 "Future Keys". On the receiving side there is no state information retained; only the BDK is persistent across processing operations. This arrangement provides convenience to the receiver (a large number of devices may be serviced while only storing one key). It also provides some additional security with respect to the originator (PIN capture devices are often deployed in security-averse environments; the security parameters in the devices are 'distant' from the sensitive BDK, and if the device is compromised, other devices are not implicitly compromised). 
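To make the IPEK regeneration step concrete, the following is a minimal Python sketch of IPEK derivation for the TDES variant of DUKPT, written against the pycryptodome library. The fixed masks and the sample BDK/KSN pair used here are the widely circulated X9.24 example values, not figures taken from this article, so treat this as an illustrative sketch rather than a reference implementation.

from Crypto.Cipher import DES3  # pycryptodome

# 80-bit mask that zeroes the 21-bit transaction counter in the KSN.
COUNTER_MASK = 0xFFFFFFFFFFFFFFE00000
# Fixed mask XORed onto the BDK to form the key for the right half.
BDK_VARIANT_MASK = bytes.fromhex("C0C0C0C000000000C0C0C0C000000000")

def derive_ipek(bdk: bytes, initial_ksn: bytes) -> bytes:
    """Derive the 16-byte IPEK from a 16-byte BDK and a 10-byte initial KSN."""
    # Clear the counter bits, then keep the left-most 8 bytes of the KSN.
    masked = (int.from_bytes(initial_ksn, "big") & COUNTER_MASK).to_bytes(10, "big")[:8]
    left = DES3.new(bdk, DES3.MODE_ECB).encrypt(masked)
    variant = bytes(a ^ b for a, b in zip(bdk, BDK_VARIANT_MASK))
    right = DES3.new(variant, DES3.MODE_ECB).encrypt(masked)
    return left + right

bdk = bytes.fromhex("0123456789ABCDEFFEDCBA9876543210")  # published example BDK
ksn = bytes.fromhex("FFFF9876543210E00000")               # published example initial KSN
print(derive_ipek(bdk, ksn).hex().upper())                # 6AC292FAA1315B4D858AB3A3D7D5933A

This is the computation the receiver performs when it "regenerates the IPEK" from the stored BDK and the KSN carried with each cryptogram; the subsequent walk from the IPEK to the per-transaction session key is omitted here.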
Registers Usage Backup Registers The following storage areas relating to key management are maintained from the time of the "Load Initial Key" command for the life of the PIN Entry Device: Initial Key Serial Number Register (59 bits) Holds the left-most 59 bits of the key serial number that was initially injected into the PIN Entry Device along with the initial PIN encryption key during the "Load Initial Key" command. The contents of this register remain fixed for the service-life of the PIN Entry Device or until another "Load Initial Key" command. Encryption Counter (21 bits) A counter of the number of PIN encryptions that have occurred since the PIN Entry Device was first initialized. Certain counter values are skipped (as explained below), so that over 1 million PIN encryption operations are possible. Note: The concatenation (left to right) of the Initial Key Serial Number Register and the Encryption Counter forms the 80-bit (20 hexadecimal digits) Key Serial Number Register. Future Key Registers (21 registers of 34 hexadecimal digits each) A set of 21 registers, numbered #1 to #21, used to store future PIN encryption keys. Each register includes a 2 hexadecimal digit longitudinal redundancy check (LRC) or a 2 hexadecimal digit cyclical redundancy check (CRC). Temporary Registers The following storage areas relating to key management are required on a temporary basis and may be used for other purposes by other PIN processing routines: Current Key Pointer (approximately 4 hexadecimal digits) Contains the address of the Future Key Register whose contents are being used in the current cryptographic operation. Shift Register (21 bits) A 21-bit register, whose bits are numbered left to right as #1 to #21. This register normally contains 20 "zero" bits and a single "one" bit. One use of this register is to select one of the Future Key Registers: the Future Key Register to be selected is the one numbered identically to the bit in the Shift Register containing the single "one". Crypto Register-1 (16 hexadecimal digits) A register used in performing cryptographic operations. Crypto Register-2 (16 hexadecimal digits) A second register used in performing cryptographic operations. Key Register (32 hexadecimal digits) A register used to hold a cryptographic key. Practical Matters (KSN scheme) In practical applications, one would have several BDKs on record, possibly for different customers, or to contain the scope of key compromise. When processing transactions, it is important for the receiver to know which BDK was used to initialize the originating device. To achieve this, the 80-bit KSN is structured into three parts: a Key Set ID, a TRSM ID, and the transaction counter. The algorithm specifies that the transaction counter is 21 bits, but treats the remaining 59 bits opaquely (the algorithm only specifies that unused bits be 0-padded to a nibble boundary, and then 'f'-padded to the 80-bit boundary). Because of this, the entity managing the creation of the DUKPT devices (typically a merchant acquirer) is free to subdivide the 59 bits according to their preference. The industry practice is to designate the partitioning as a series of three digits, indicating the number of hex digits used in each part: the Key Set ID, the TRSM ID, and the transaction counter. A common choice is '6-5-5', meaning that the first 6 hex digits of the KSN indicate the Key Set ID (i.e., which BDK is to be used), the next 5 are the TRSM ID (i.e., a device serial number within the range being initialized via a common BDK), and the last 5 are the transaction counter. This notational scheme is not strictly accurate, because the transaction counter is 21 bits, which is not an even multiple of 4 (the number of bits in a hex digit). Consequently, the transaction counter actually consumes one bit of the field that is the TRSM ID; in this example that means that the TRSM ID field can accommodate 2^(5·4−1) devices, instead of 2^(5·4), or about half a million. Also, it is common practice in the industry to use only 64 bits of the KSN (probably for reasons pertinent to legacy systems and DES encryption), which would imply that the full KSN is padded to the left with four 'f' hex digits. The remaining 4 hex digits (16 bits) are available, nonetheless, to systems which can accommodate them. The 6-5-5 scheme mentioned above would permit about 16 million BDKs, 500,000 devices per BDK, and 1 million transactions per device. 
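As a worked illustration of this partitioning, here is a short Python sketch that splits a KSN under the 6-5-5 convention. The sample KSN and the exact field widths chosen (24-bit Key Set ID, 19-bit TRSM ID, 21-bit counter) are illustrative assumptions; as just noted, the convention is not strictly hex-digit aligned, and real deployments may partition the 59 opaque bits differently.

def split_ksn(ksn_hex: str):
    """Split a KSN under the common 6-5-5 convention.
    The 21-bit counter borrows the low-order bit of the 5-hex-digit
    TRSM ID field, leaving an effective 19-bit TRSM ID."""
    n = int(ksn_hex, 16) & ((1 << 64) - 1)  # keep the low 64 bits, ignoring 'F' padding
    counter = n & 0x1FFFFF                  # low 21 bits: transaction counter
    trsm_id = (n >> 21) & 0x7FFFF           # next 19 bits: device within the BDK family
    key_set_id = ksn_hex[-16:][:6]          # first 6 hex digits of the 64-bit KSN: which BDK
    return key_set_id, trsm_id, counter

# Hypothetical KSN, padded to 80 bits with 'F's as described above.
print(split_ksn("FFFF9876543210E00003"))    # -> ('987654', 102535, 3)

These field widths reproduce the capacity figures above: 2^24 is about 16 million key sets, 2^19 is about 524,000 devices per BDK, and the 21-bit counter, even with some values skipped, leaves over 1 million usable transaction keys per device.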
Key management
276495
https://en.wikipedia.org/wiki/Small%20business
Small business
Small businesses are corporations, partnerships, or sole proprietorships which have fewer employees and/or less annual revenue than a regular-sized business or corporation. What qualifies a business as "small", in terms of being able to apply for government support and qualify for preferential tax policy, varies depending on the country and industry. Small-business thresholds range from fifteen employees under the Australian Fair Work Act 2009, to fifty employees according to the definition used by the European Union, to fewer than five hundred employees to qualify for many U.S. Small Business Administration programs. While small businesses can also be classified according to other methods, such as annual revenues, shipments, sales, assets, or by annual gross or net revenue or net profits, the number of employees is one of the most widely used measures. Small businesses in many countries include service or retail operations such as convenience stores, small grocery stores, bakeries or delicatessens, hairdressers or tradespeople (e.g., carpenters, electricians), restaurants, guest houses, photographers, very small-scale manufacturing, and Internet-related businesses such as web design and computer programming. Some professionals operate as small businesses, such as lawyers, accountants, dentists, and medical doctors (although these professionals can also work for large organizations or companies). Small businesses vary a great deal in terms of size, revenues, and regulatory authorization, both within a country and from country to country. Some small businesses, such as a home accounting business, may only require a business license. On the other hand, other small businesses, such as day cares, retirement homes, and restaurants serving liquor, are more heavily regulated and may require inspection and certification from various government authorities. Characteristics Researchers and analysts of small or owner-managed businesses generally behave as if nominal organizational forms (e.g., partnership, sole-trader, or corporation) and the consequent legal and accounting boundaries of owner-managed firms are consistently meaningful. However, owner-managers often do not distinguish between their personal and business interests. Lenders also often skirt organizational (corporate) boundaries by seeking personal guarantees or accepting privately held assets as collateral. Because of this behavior, researchers and analysts may wish to be cautious in assessing the organizational types and implied boundaries relating to owner-managed firms. This includes the analysis of traditional accounting disclosures and studies that treat the firm as defined by a formal organizational structure. Concepts of small business, self-employment, entrepreneurship, and startup The concepts of small business, self-employment, entrepreneurship, and startup overlap but also carry important distinctions. These four concepts are often conflated. Their key differences can be summarized as: self-employment: an organization created primarily to provide income to the founders, i.e. sole proprietor operations. entrepreneurship: all new organizations. startup: a new organization created to grow (and have employees). small business: an organization that is small (in employees or revenue) and may or may not have the intention to grow. Many small businesses are sole proprietor operations consisting only of the owner, but many also have additional employees. 
Some small businesses that offer a product, process, or service do not have growth as their primary objective. In contrast, a business that is created to become a big firm is known as a startup. Startups aim for growth and often offer an innovative product, process, or service. The entrepreneurs of startups typically aim to scale up the company by adding employees, seeking international sales, and so on, a process which is often but not always financed by venture capital and angel investments. Successful entrepreneurs have the ability to lead a business in a positive direction through proper planning, adapting to changing environments, and understanding their own strengths and weaknesses. Spectacular success stories stem from startups that expanded in growth; examples include Microsoft, Genentech, and Federal Express, which all embody the sense of new venture creation in small business. Self-employment provides work primarily for the founders. Entrepreneurship refers to all new businesses, including self-employment and businesses that never intend to grow big or become registered, but startups refer to new businesses that intend to grow beyond the founders, to have employees, and grow large. Size definitions The legal definition of "small business" varies by country and by industry. In addition to the number of employees, methods used to classify small companies include annual sales (turnover), the value of assets and net profit (balance sheet), alone or as a combination of factors. In India, all manufacturing and service enterprises with investment of not more than Rs 10 crore and annual turnover of not more than Rs 50 crore come under this category. In the United States, the Small Business Administration establishes small business size standards on an industry-by-industry basis but generally specifies a small business as having fewer than 500 employees for manufacturing businesses and less than $7.5 million in annual receipts for most non-manufacturing businesses. The definition can vary by circumstance; for example, a small business having fewer than 25 full-time equivalent employees with average annual wages below $50,000 qualifies for a tax credit under the health care reform bill, the Patient Protection and Affordable Care Act. By comparison, a medium-sized business or mid-sized business has fewer than 500 employees. The European Union generally defines a small business as one that has fewer than fifty employees and either turnover or balance sheet total less than €10 million, but the European Commission is undertaking a review of this definition. By comparison, a medium-sized business has fewer than 250 employees and either turnover less than €50 million or balance sheet total less than €43 million. In Australia, a small business is defined by the Fair Work Act 2009 as one with fewer than fifteen employees. By comparison, a medium-sized business or mid-sized business has fewer than two hundred employees. In South Africa, the National Small Business Amendment Act (Act 26 of 2003) defines businesses in a variety of ways using five categories previously established by the National Small Business Act (Act 102 of 1996), namely, standard industrial sector and subsector classification, size of class, equivalent of paid employees, turnover and asset value excluding fixed property. Small businesses usually do not dominate their field. The following table serves as a guide to business size nomenclature. Business size definitions (by number of employees) Most cells reflect sizes not defined in legislation. 
Some definitions are multi-parameter, e.g., by industry, revenue, or market share. Demographics In 2016 a study examining the demographics of small business owners was published. The study showed that the median American small business owner was above the age of 50. The ages were distributed as 51% over 50 years old, 33% between the ages of 35 and 49, and 16% under the age of 35. As for sex, 55% of small businesses were owned by males, 36% by females, and 9% equally by both. As for race, 72% of owners were white/Caucasian, 13.5% were Latino, 6.3% were African American, 6.2% were Asian, and 2% identified as other. As for educational background, 39% had obtained a bachelor's degree or higher, 33% had some college background, and 28% had received at least a high school diploma. United States census data for the years 2014 and 2015 shows women's ownership share of small businesses by firm size, giving the percentage owned by women for each employee count (including the owner). Generally, the smaller the business, the more likely it is to be owned by a woman. The data shows that about 22% of small businesses with 100-500 employees were owned by women, a percentage that rises the smaller the business: 41% of businesses with just 2-4 employees were run by women, and in businesses with just one person, that person was a woman in 51% of cases. Franchise businesses Franchising is a way for small business owners to benefit from the economies of scale of a big corporation (the franchiser). McDonald's and Subway are examples of franchises. The small business owner can leverage the strong brand name and purchasing power of the larger company while keeping their own investment affordable. However, some franchisees conclude that they suffer the "worst of both worlds", feeling they are too restricted by corporate mandates and lack true independence. It is often assumed that franchising involves only small businesses acting as franchisees, but in truth many franchisers are also small businesses. Although considered to be a successful way of doing business, the literature shows that there is a high failure rate in franchising as well, especially in the UK, where research indicates that out of 1658 franchising companies operating in 1984, only 601 remained in 1998, a mere 36%. Retailers' cooperative A retailers' cooperative is a type of cooperative that employs economies of scale on behalf of its retailer members. Retailers' cooperatives use their purchasing power to acquire discounts from manufacturers and often share marketing expenses. They are often recognized as "local groups" because they own their own stores within the community. It is common for locally-owned grocery stores, hardware stores, and pharmacies to participate in retailers' cooperatives. Ace Hardware, True Value, and NAPA are examples of retailers' cooperatives. Retail cooperatives also allow consumers to supply their own earnings and gain bargaining power outside of the business sector. Retail cooperatives mainly reside within small communities where local businesses are often shut down. Advantages Many small businesses can be started at a low cost and on a part-time basis, while a person continues a regular job with an employer or provides care for family members in the home. In developing countries, many small businesses are sole-proprietor operations such as selling products at a market stall or preparing hot food to sell on the street, which provide a small income. 
In the 2000s, a small business is also well suited to Internet marketing, because it can easily serve specialized niches, something that would have been more difficult before the Internet revolution which began in the late 1990s. Internet marketing gives small businesses the ability to market with smaller budgets. Adapting to change is crucial in business and particularly small business; not being tied to the bureaucratic inertia associated with large corporations, small businesses can respond to changing marketplace demand more quickly. Small business proprietors tend to be in closer personal contact with their customers and clients than large corporations, as small business owners see their customers in person each week. One study showed that small, local businesses are better for a local economy than the introduction of new chain stores. By opening up new national-level chain stores, the profits of locally owned businesses greatly decrease and many businesses end up failing and having to close. This creates a cascading effect: when one store closes, people lose their jobs, other businesses lose business from the failed business, and so on. In many cases, large firms displace just as many jobs as they create. Independence Independence is another advantage of owning a small business. A small business owner does not have to report to a supervisor or manager. Also, many people desire to make their own decisions, take their own risks, and reap the rewards of their efforts. Small business owners possess the flexibility and freedom to make their own decisions within the constraints imposed by economic and other environmental factors. However, entrepreneurs have to work very long hours and understand that ultimately their customers are their bosses. Small businesses (often carried out by family members) may adjust more quickly to changing conditions; however, they may also be closed to the absorption of new knowledge and to employing new labor from outside. Financial reporting Small businesses benefit from less extensive accounting and financial reporting requirements than those faced by larger businesses. The European Union's Directive on annual financial statements of 2013 aims to "limit administrative burdens and provide for simple and robust accounting rules, especially for small and medium-sized enterprises (SMEs)". In the UK, the Companies, Partnerships and Groups (Accounts and Reports) Regulations 2015 transposed the EU Directive into UK law and amended the reporting regime for reduced disclosure accounts for any accounting period commencing on or after 1 January 2016. "Abbreviated accounts" were permitted for smaller entities under the Financial Reporting Standard for Smaller Entities (FRSSE). Until 2015, companies deemed small under the UK Companies Act 2006 were allowed to use this standard. For accounting years ending on or after 1 January 2016, FRSSE is no longer available, but there are options known as "abridged accounts" and "filleted accounts": Abridged accounts: accounting for profit/loss begins with the declaration of gross profit or loss, not turnover. Filleted financial statements or filleted accounts: profit and loss accounts are excluded, but the balance sheet and balance sheet notes are to be disclosed. Alternatively, the smallest companies are able to file "micro-entity accounts". FRS 105 is a Financial Reporting Standard applicable to the Micro-entities Regime. Challenges Small businesses often face a variety of problems, some of which are related to their size. 
A frequent cause of bankruptcy is undercapitalization. This is often a result of poor planning rather than economic conditions. It is a common rule of thumb that the entrepreneur should have access to a sum of money at least equal to the projected revenue for the first year of business in addition to the anticipated expenses. For example, prospective owners anticipating $100,000 in revenue the first year with $150,000 in start-up expenses should have at least $250,000 available. Start-up expenses are often grossly underestimated, adding to the burden of the business. Failure to provide this level of funding for the company could leave the owner liable for all of the company's debt in bankruptcy court under the theory of undercapitalization. In addition to ensuring that the business has enough capital, the small business owner must also be mindful of contribution margin (sales minus variable costs). To break even, the business must be able to reach a level of sales where the contribution margin equals fixed costs; for example, a business with $60,000 in annual fixed costs and a contribution margin of $20 per unit sold must sell 3,000 units a year just to break even. When they first start, many small business owners underprice their products to a point where even at their maximum capacity, it would be impossible to break even. Cost controls or price increases often resolve this problem. In the United States, some of the largest concerns of small business owners are insurance costs (such as liability and health), energy costs, taxes, and tax compliance. In the United Kingdom and Australia, small business owners tend to be more concerned with perceived excessive governmental red tape. Contracting fraud has been an ongoing problem for small businesses in the United States. Small businesses are legally obligated to receive a fair portion (23 percent) of the total value of all the government's prime contracts as mandated by the Small Business Act of 1953. Since 2002, a series of federal investigations have found fraud, abuse, loopholes, and a lack of oversight in federal small business contracting, which has led to the diversion of billions of dollars in small business contracts to large corporations. Another problem for many small businesses is termed the 'Entrepreneurial Myth' or E-Myth. The mythic assumption is that an expert in a given technical field will also be an expert at running that kind of business. Additional business management skills are needed to keep a business running smoothly. Some of this misunderstanding arises from the failure to distinguish between small business managers as entrepreneurs or capitalists. While nearly all owner-managers of small firms are obliged to assume the role of capitalist, only a minority will act as entrepreneurs. The line between an owner-manager and an entrepreneur can be defined by whether or not their business is growth-oriented. In general, small business owners are primarily focused on surviving rather than growing; therefore, they do not experience the five stages of the corporate life cycle (birth, growth, maturity, revival, and decline) as an entrepreneur would. Another problem for many small businesses is the capacity of much larger businesses to influence or sometimes determine their chances for success. Business networking and social media have been used as major tools by small businesses in the UK, but most of them just use a "scattergun" approach in a desperate attempt to exploit the market, which is not very successful. Over half of small firms lack a business plan, a tool that is considered one of the most important factors for a venture's success. 
Business planning is associated with improved growth prospects. Funders and investors usually require a business plan. A plan also serves as a strategic planning document for owners and CEOs, which can be used as a "bible" for decision-making.
An international trade survey indicated that the British share of businesses that are exporting rose from 32% in 2012 to 39% in 2013. Although this may seem positive, in reality the growth is slow, as small business owners shy away from exporting due to actual and perceived barriers. Learning the basics of a foreign language could be a solution to opening doors to new trade markets, since not all foreign business partners speak English. China's economy was projected to grow by 7.6% in 2013, and yet 95% of business owners who want to export to China have neither the desire nor the knowledge to learn the local language.
Bankruptcy
When a small business fails, the owner may file for bankruptcy. In most cases, this can be handled through a personal bankruptcy filing. Corporations can file for bankruptcy, but if the business is out of business and valuable corporate assets are likely to be repossessed by secured creditors, there is little advantage to going to the expense of a corporate bankruptcy. Many states offer exemptions for small business assets so they can continue to operate during and after personal bankruptcy. However, corporate assets are normally not exempt; hence, it may be more difficult to continue operating an incorporated business if the owner files for bankruptcy. Researchers have examined small business failures in some depth, with attempts to model the predictability of failure.
Social responsibility
Small businesses can encounter several problems related to engaging in corporate social responsibility, due to characteristics inherent in their size. Owners of small businesses often participate heavily in the day-to-day operations of their companies. This results in a lack of time for the owner to coordinate socially responsible efforts, such as supporting local charities or not-for-profit activities. Additionally, a small business owner's expertise often falls outside the realm of socially responsible practices, which can contribute to a lack of participation. Small businesses also face a form of peer pressure from larger forces in their respective industries, making it difficult to oppose and work against industry expectations. Furthermore, small businesses undergo stress from shareholder expectations. Because small businesses have more personal relationships with their patrons and local shareholders, they must also be prepared to withstand closer scrutiny if they want to share in the benefits of committing to socially responsible practices.
Job quality
While small businesses employ over half the workforce in the US and have been established as a main driving force behind job creation, the quality of the jobs these businesses create has been called into question. Small businesses generally employ individuals from the secondary labor market. As a result, in the U.S., wages are 49% higher for employees of large firms. Additionally, many small businesses struggle or are unable to provide employees with benefits they would be given at larger firms. Research from the U.S. Small Business Administration indicates that employees of large firms are 17% more likely to receive benefits including salary, paid leave, paid vacation, bonuses, insurance, and retirement plans. Both lower wages and fewer benefits combine to create a job turnover rate among U.S.
small businesses that is three times higher than at large firms. Employees of small businesses also must adapt to the higher failure rate of small firms, which means that they are more likely to lose their job due to the firm going under. In the U.S., 69% of small businesses last at least two years, but this percentage drops to 51% for firms reaching five years in operation. The U.S. Small Business Administration counts companies with as much as $35.5 million in sales and 1,500 employees as "small businesses", depending on the industry. Outside government, companies with less than $7 million in sales and fewer than five hundred employees are widely considered small businesses.
Cyber crime
Cybercrime in the business world can be broken down into four main categories: loss of reputation and consumer confidence, the cost of fixing the issue, loss of capital and assets, and the legal difficulties that can follow from these problems. Reputation and consumer confidence can be greatly affected by a single attack; many small businesses struggle to regain the confidence and trust of their customers after becoming known for having had problems. Fixing a cyber attack may require experts from outside the business's field to investigate and find the problem, and downtime means losing money: an attack could halt online operations and potentially leave the business down for a long period of time. Loss of capital and assets ties in closely with the cost of fixing the issue; during a cyberattack, a business may lose funds, and in the worst case may lose all of its working capital. The legal difficulties involved with cybercrime can become pricey and hurt a business that lacks standard security measures and practices. Security, not only for the business but more importantly for the customer, should be the number one priority when establishing security protocols.
The monetary damage caused by cybercrime in 2016 was over 1.33 billion dollars in the United States alone. In 2016, California alone had over 255 million dollars in losses reported to the IC3. In the same year, the average company in the United States incurred 17.36 million dollars in damages from cybercrime attacks. The time needed to resolve a cyber attack varies with its type; an average everyday attack on a business can take upwards of 69 days to resolve. The types of attacks include viruses and malware issues. Employee activities within the workspace can also lead to a cyber attack; employees using mobile devices or remote work access off the job make it easier for a cyber attack to occur.
Marketing
Although small businesses have close relationships with their existing customers, finding new customers and reaching new markets is a major challenge for small business owners. Small businesses typically find themselves strapped for time to do marketing, as they have to run the day-to-day aspects of the business. To create a continual stream of new business and find new clients and customers, they must work on marketing their business continuously. Low sales (the result of poor marketing) is one of the major reasons for small business failure.
Common marketing techniques for small business include business networking (e.g., attending Chamber of Commerce events or trade fairs), "word of mouth" promotion by existing customers, customer referrals, Yellow pages directories, television, radio, and outdoor ads (e.g., roadside billboards), print ads, and Internet marketing. TV ads can be quite expensive, so they are normally intended to create awareness of a product or service. Another means by which small businesses can advertise is through the use of "deal of the day" websites such as Groupon and Living Social. These Internet deals encourage customers to patronize small businesses.
Many small business owners find Internet marketing more affordable. Google AdWords and Yahoo! Search Marketing are two popular options for getting small business products or services in front of motivated web searchers. Social media has also become an affordable route of marketing for small businesses. It is a fraction of the cost of traditional marketing, and small businesses can do it themselves or find small social marketing agencies that they can hire for a small fee. Statistically, social media marketing has a higher lead-to-close rate than traditional media. Successful online small business marketers are also adept at utilizing the most relevant keywords in their website content. Advertising on niche websites that are frequented by potential customers can also be effective, but with the long tail of the Internet, it can be time-intensive to advertise on enough websites to garner an effective reach.
Creating a business website has become increasingly affordable with many do-it-yourself programs now available for beginners. A website can provide significant marketing exposure for small businesses when marketed through the Internet and other channels. Some popular services are WordPress, Joomla, Squarespace, and Wix. Social media has proven to be very useful in gaining additional exposure for many small businesses. Many small business owners use Facebook and Twitter as a way to reach out to their loyal customers to give them news about specials of the day or special coupons, generate repeat business, and reach out to new potential clients. The relational nature of social media, along with its immediacy and twenty-four-hour presence, lends an intimacy to the relationships small businesses can have with their customers, while making it more efficient for them to communicate with greater numbers. Facebook ads are also a very cost-effective way for small business owners to reach a targeted audience with a very specific message.
In addition to the social networking sites, blogs have become a highly effective way for small businesses to position themselves as experts on issues that are important to their customers. This can be done with a proprietary blog and/or by using a back-link strategy wherein the marketer comments on other blogs and leaves a link to the small business' own website. Posting to a blog about the company's business or service area regularly can increase web traffic to a company website.
Marketing plan
Market research – To produce a marketing plan for small businesses, research needs to be done on similar businesses, which should include desk research (done online or with directories) and field research. This gives an insight into the target group's behavior and shopping patterns. Analyzing the competitor's marketing strategies makes it easier for small businesses to gain market share.
Marketing mix – The marketing mix is a crucial factor for any business to be successful. Especially for a small business, examining a competitor's marketing mix can be very helpful. An appropriate marketing mix, which uses different types of marketing, can help to boost sales.
Product life cycle – After the launch of the business, crucial points of focus should be the growth phase (adding customers, adding products or services, and/or expanding to new markets) and working towards the maturity phase. Once the business reaches the maturity stage, an extension strategy should be in place. Re-launching is also an option at this stage. Pricing strategy should be flexible and based on the different stages of the product life cycle.
Promotion techniques – It is preferable to keep promotion expenses as low as possible. 'Word of mouth', 'email marketing', 'print ads' in local newspapers, etc. can be effective.
Channels of distribution – Selecting an effective channel of distribution may reduce the promotional expenses as well as overall expenses for a small business.
Contribution to the economy
In the US, small businesses (fewer than five hundred employees) account for more than half the non-farm, private GDP and around half the private sector employment. Among small businesses, the top job providers are those with fewer than ten employees; those with ten or more but fewer than twenty employees come second, and those with twenty or more but fewer than one hundred employees come third (interpolation of data from the following references). The most recent data show that firms with fewer than twenty employees account for slightly more than 18% of the employment.
According to "The Family Business Review", "there are approximately seventeen million sole proprietorships in the US. It can be argued that a sole proprietorship (an unincorporated business owned by a single person) is a type of family business" and "there are twenty-two million small businesses (fewer than five hundred employees) in the US and approximately 14,000 big businesses".
Small businesses have also been found to create the most new jobs in communities: "In 1979, David Birch published the first empirical evidence that small firms (fewer than 100 employees) created the most new jobs", and Edmiston claimed that "perhaps the greatest generator of interest in entrepreneurship and small business is the widely held belief that small businesses in the United States create most new jobs. The evidence suggests that small businesses indeed create a substantial majority of net new jobs in an average year." The U.S. Small Business Administration has found that small businesses have created two-thirds of net new private-sector jobs in the US since 2007. Local businesses provide competition to each other and also challenge corporate giants. Of the 5,369,068 employer firms in 1995, 78.8 percent had fewer than ten employees, and 99.7 percent had fewer than five hundred employees.
Sources of funding
Small businesses use various sources available for start-up capital:
Self-financing by the owner through cash savings, an equity loan on his or her home, and/or other assets
Loans or financial gifts from friends or relatives
Grants from private foundations, government, or other sources
Private stock issue
Forming partnerships
Angel investors
Loans from banks, credit unions, or other financial institutions
SME finance, including collateral-based lending and venture capital, given sufficiently sound business venture plans
Some small businesses are further financed through credit card debt, usually a risky choice, given that the interest rate on credit cards is often several times the rate that would be paid on a line of credit at a bank or a bank loan, and terms can change unpredictably. Recent research suggests that the use of credit scores in small business lending by community banks is surprisingly widespread. Moreover, the scores employed tend to be the consumer credit scores of the small business owners rather than the more encompassing small business credit scores that include data on the firms as well as on the owners. Many owners seek a bank loan in the name of their business; however, banks will usually insist on a personal guarantee by the business owner.
In October 2010, Alejandro Cremades and Tanya Prive founded the first equity crowdfunding platform for small businesses in history as an alternative source of financing. The platform operates under the name of Rock The Post.
Government support
Several organizations in the United States also provide help for the small business sector, such as the Internal Revenue Service's Small Business and Self-Employed One-Stop Resource. The Small Business Administration (SBA) runs several loan programs that may help a small business secure loans. In these programs, the SBA guarantees a portion of the loan to the issuing bank and thus relieves the bank of some of the risk of extending the loan to a small business. The SBA also requires business owners to pledge personal assets and sign as a personal guarantee for the loan. The 8(a) Business Development Program assists in the development of small businesses owned and operated by African Americans, Hispanics, and Asians.
Canadian small businesses can take advantage of federally funded programs and services. See Federal financing for small businesses in Canada (grants and loans).
In the United Kingdom, the Small Business Commissioner (SBC) provides information and advice for small businesses and deals with complaint resolution, with specific reference to late payment problems and other unfavourable payment practices. The SBC's role is to make non-binding recommendations advising on how the parties can resolve a dispute.
Small businesses are also supported through public policy on taxation. For example, from January 1, 2020, Armenia introduced a special micro-entrepreneurship tax system with a non-taxable base of 24 million AMD. Accordingly, a micro-business will be exempted from taxes other than income tax, which will not exceed 5,000 AMD per employee.
Business networks and advocacy groups
Small businesses often join or come together to form organizations to advocate for their causes or to achieve economies of scale that larger businesses benefit from, such as the opportunity to buy cheaper health insurance in bulk.
These organizations include local or regional groups such as Chambers of Commerce and independent business alliances, as well as national or international industry-specific organizations. Such groups often serve a dual purpose: as business networks, providing marketing and connecting members to potential sales leads and suppliers, and as advocacy groups, bringing together many small businesses to provide a stronger voice in regional or national politics. In the case of independent business alliances, promoting the value of locally owned, independent business (not necessarily small) through public education campaigns is integral to their work.
The largest regional small business group in the United States is the Council of Smaller Enterprises, located in Greater Cleveland.
United Kingdom Trade and Investment provides exporters with research on different markets around the world, as well as help with program planning and promotional activities. The role of the British Exporters Association (BEXA) is to connect new exporters to expert services. It can provide details of regional export contacts, who can be approached informally to discuss issues. Trade associations and all major banks often provide links to international groups in foreign markets, and some help set up joint ventures and trade fairs.
Several youth organizations, including 4-H, Junior Achievement, and Scouting, have interactive programs and training to help young people run their own small business under adult supervision.
See also
American Independent Business Alliance
Big business
Distributism
Federation of Small Businesses
Home business
Independent telephone company
Localism (politics) versus transnational corporations
Market capitalization
Micro-enterprise
National Federation of Independent Business
S corporation
Small Business Administration
Small Business Commissioner
Small Business Innovation Research (SBIR)
Small business software
Small is Profitable
Small office/home office
Small-scale project management
Small start units
References
Birch, D. (1979). The Job Generation Process. Unpublished report, Massachusetts Institute of Technology, prepared for the Economic Development Administration of the U.S. Department of Commerce, Washington, D.C.
Birch, David (1987). Job Creation in America: How Our Smallest Companies Put the Most People to Work. The Free Press, New York.
Shanker, Melissa Carey, and Joseph H. Astrachan. "Family Business Review." Sage Publications 9.2 (1996): 1–123. Print.
External links
Business.usa.gov, the official website for business-related activities in the US
Federation of Small Business, UK-based resource for small business owners
Business terms
Business models
Business occupations
Entrepreneurship
Management occupations
2807
https://en.wikipedia.org/wiki/Active%20Directory
Active Directory
Active Directory (AD) is a directory service developed by Microsoft for Windows domain networks. It is included in most Windows Server operating systems as a set of processes and services. Initially, Active Directory was used only for centralized domain management. However, Active Directory eventually became an umbrella title for a broad range of directory-based identity-related services.
A server running the Active Directory Domain Service (AD DS) role is called a domain controller. It authenticates and authorizes all users and computers in a Windows domain type network, assigning and enforcing security policies for all computers, and installing or updating software. For example, when a user logs into a computer that is part of a Windows domain, Active Directory checks the submitted username and password and determines whether the user is a system administrator or a normal user. Also, it allows management and storage of information, provides authentication and authorization mechanisms, and establishes a framework to deploy other related services: Certificate Services, Active Directory Federation Services, Lightweight Directory Services, and Rights Management Services.
Active Directory uses Lightweight Directory Access Protocol (LDAP) versions 2 and 3, Microsoft's version of Kerberos, and DNS.
History
Like many information-technology efforts, Active Directory originated out of a democratization of design using Request for Comments (RFCs). The Internet Engineering Task Force (IETF), which oversees the RFC process, has accepted numerous RFCs initiated by widespread participants. For example, LDAP underpins Active Directory. X.500 directories and the Organizational Unit concept also preceded Active Directory, which makes use of those methods. The LDAP concept began to emerge even before the founding of Microsoft in April 1975, with RFCs as early as 1971. RFCs contributing to LDAP include RFC 1823 (on the LDAP API, August 1995), RFC 2307, RFC 3062, and RFC 4533.
Microsoft previewed Active Directory in 1999, released it first with the Windows 2000 Server edition, and revised it to extend functionality and improve administration in Windows Server 2003. Active Directory support was also added to Windows 95, Windows 98, and Windows NT 4.0 via patch, with some features being unsupported. Additional improvements came with subsequent versions of Windows Server. In Windows Server 2008, additional services were added to Active Directory, such as Active Directory Federation Services. The part of the directory in charge of management of domains, which was previously a core part of the operating system, was renamed Active Directory Domain Services (AD DS) and became a server role like others. "Active Directory" became the umbrella title of a broader range of directory-based services. According to Byron Hynes, everything related to identity was brought under Active Directory's banner.
Active Directory Services
Active Directory Services consist of multiple directory services. The best known is Active Directory Domain Services, commonly abbreviated as AD DS or simply AD.
Domain Services
Active Directory Domain Services (AD DS) is the foundation stone of every Windows domain network. It stores information about members of the domain, including devices and users, verifies their credentials, and defines their access rights. The server running this service is called a domain controller.
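Because AD DS exposes directory data over LDAP, the directory can be queried from any standards-compliant LDAP client. The following is a minimal sketch in Python using the third-party ldap3 package; the domain controller name, base DN, and credentials are hypothetical placeholders, not values taken from this article:
from ldap3 import Server, Connection, NTLM, ALL

# Hypothetical domain controller and service credentials.
server = Server('dc1.example.com', get_info=ALL)
conn = Connection(server, user='EXAMPLE\\jdoe', password='secret',
                  authentication=NTLM, auto_bind=True)

# Look up a user object by its sAMAccountName attribute.
conn.search('dc=example,dc=com',
            '(&(objectClass=user)(sAMAccountName=jdoe))',
            attributes=['cn', 'memberOf'])
for entry in conn.entries:
    print(entry.cn, entry.memberOf)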
A domain controller is contacted when a user logs into a device, accesses another device across the network, or runs a line-of-business Metro-style app sideloaded into a device. Other Active Directory services (excluding LDS, as described below), as well as most Microsoft server technologies, rely on or use Domain Services; examples include Group Policy, Encrypting File System, BitLocker, Domain Name Services, Remote Desktop Services, Exchange Server, and SharePoint Server. The self-managed AD DS must not be confused with managed Azure AD DS, which is a cloud product.
Lightweight Directory Services
Active Directory Lightweight Directory Services (AD LDS), formerly known as Active Directory Application Mode (ADAM), is an implementation of the LDAP protocol for AD DS. AD LDS runs as a service on Windows Server. AD LDS shares the code base with AD DS and provides the same functionality, including an identical API, but does not require the creation of domains or domain controllers. It provides a Data Store for storage of directory data and a Directory Service with an LDAP Directory Service Interface. Unlike AD DS, however, multiple AD LDS instances can run on the same server.
Certificate Services
Active Directory Certificate Services (AD CS) establishes an on-premises public key infrastructure. It can create, validate, and revoke public key certificates for internal uses of an organization. These certificates can be used to encrypt files (when used with Encrypting File System), emails (per the S/MIME standard), and network traffic (when used by virtual private networks, the Transport Layer Security protocol, or the IPSec protocol). AD CS predates Windows Server 2008, but its name was simply Certificate Services. AD CS requires an AD DS infrastructure.
Federation Services
Active Directory Federation Services (AD FS) is a single sign-on service. With an AD FS infrastructure in place, users may use several web-based services (e.g. internet forum, blog, online shopping, webmail) or network resources using only one set of credentials stored at a central location, as opposed to having to be granted a dedicated set of credentials for each service. AD FS uses many popular open standards to pass token credentials, such as SAML, OAuth, or OpenID Connect. AD FS supports encryption and signing of SAML assertions. AD FS's purpose is an extension of that of AD DS: the latter enables users to authenticate with and use the devices that are part of the same network, using one set of credentials; the former enables them to use the same set of credentials in a different network. As the name suggests, AD FS works based on the concept of federated identity. AD FS requires an AD DS infrastructure, although its federation partner may not.
Rights Management Services
Active Directory Rights Management Services (AD RMS, known as Rights Management Services or RMS before Windows Server 2008) is server software for information rights management shipped with Windows Server. It uses encryption and a form of selective functionality denial to limit access to documents such as corporate e-mails, Microsoft Word documents, and web pages, as well as the operations authorized users can perform on them. These operations can include, for example, viewing, editing, copying, saving as, or printing. IT administrators can create pre-set templates for the convenience of the end user if required. However, end users can still define who can access the content in question and set what they can do.
Logical structure
As a directory service, an Active Directory instance consists of a database and corresponding executable code responsible for servicing requests and maintaining the database. The executable part, known as the Directory System Agent, is a collection of Windows services and processes that run on Windows 2000 and later. Objects in Active Directory databases can be accessed via LDAP, ADSI (a Component Object Model interface), the messaging API, and Security Accounts Manager services.
Objects
Active Directory structures are arrangements of information about objects. The objects fall into two broad categories: resources (e.g., printers) and security principals (user or computer accounts and groups). Security principals are assigned unique security identifiers (SIDs). Each object represents a single entity, whether a user, a computer, a printer, or a group, along with its attributes. Certain objects can contain other objects. An object is uniquely identified by its name and has a set of attributes (the characteristics and information that the object represents) defined by a schema, which also determines the kinds of objects that can be stored in Active Directory.
The schema object lets administrators extend or modify the schema when necessary. However, because each schema object is integral to the definition of Active Directory objects, deactivating or changing these objects can fundamentally change or disrupt a deployment. Schema changes automatically propagate throughout the system. Once created, an object can only be deactivated, not deleted. Changing the schema usually requires planning.
Forests, trees, and domains
The Active Directory framework that holds the objects can be viewed at a number of levels. The forest, tree, and domain are the logical divisions in an Active Directory network. Within a deployment, objects are grouped into domains. The objects for a single domain are stored in a single database (which can be replicated). Domains are identified by their DNS name structure, the namespace.
A domain is defined as a logical group of network objects (computers, users, devices) that share the same Active Directory database. A tree is a collection of one or more domains and domain trees in a contiguous namespace and is linked in a transitive trust hierarchy. At the top of the structure is the forest. A forest is a collection of trees that share a common global catalog, directory schema, logical structure, and directory configuration. The forest represents the security boundary within which users, computers, groups, and other objects are accessible.
Organizational units
The objects held within a domain can be grouped into organizational units (OUs). OUs can provide hierarchy to a domain, ease its administration, and can resemble the organization's structure in managerial or geographical terms. OUs can contain other OUs; domains are containers in this sense. Microsoft recommends using OUs rather than domains for structure and to simplify the implementation of policies and administration. The OU is the recommended level at which to apply group policies, which are Active Directory objects formally named group policy objects (GPOs), although policies can also be applied to domains or sites (see below). The OU is the level at which administrative powers are commonly delegated, but delegation can be performed on individual objects or attributes as well. Organizational units do not each have a separate namespace; an object's placement is reflected only in its LDAP distinguished name, as the sketch below illustrates.
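To make the naming model concrete, the following small Python sketch (hypothetical names throughout) shows how an object's LDAP distinguished name encodes its domain and OU placement:
def dn(cn, ou, domain='example.com'):
    # Build a distinguished name from a common name, an OU, and a
    # DNS-style domain, e.g. 'cn=fred,ou=Staff,dc=example,dc=com'.
    dcs = ','.join('dc=' + part for part in domain.split('.'))
    return 'cn={},ou={},{}'.format(cn, ou, dcs)

print(dn('fred', 'Staff'))     # cn=fred,ou=Staff,dc=example,dc=com
print(dn('fred', 'Students'))  # cn=fred,ou=Students,dc=example,dc=com
# The two distinguished names are distinct, yet both accounts could
# not share the flat sAMAccountName 'fred' within one domain.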
As a consequence, and for compatibility with legacy NetBIOS implementations, user accounts with an identical sAMAccountName are not allowed within the same domain, even if the account objects are in separate OUs. This is because sAMAccountName, a user object attribute, must be unique within the domain. However, two users in different OUs can have the same common name (CN), the name under which they are stored in the directory itself, such as "fred.staff-ou.domain" and "fred.student-ou.domain", where "staff-ou" and "student-ou" are the OUs.
In general, the reason duplicate names cannot be allowed through hierarchical directory placement is that Microsoft primarily relies on the principles of NetBIOS, a flat-namespace method of network object management that, for Microsoft software, goes all the way back to Windows NT 3.1 and MS-DOS LAN Manager. Allowing duplication of object names in the directory, or completely removing the use of NetBIOS names, would prevent backward compatibility with legacy software and equipment. However, disallowing duplicate object names in this way is a violation of the LDAP RFCs on which Active Directory is supposedly based.
As the number of users in a domain increases, conventions such as "first initial, middle initial, last name" (Western order) or the reverse (Eastern order) fail for common family names like Li (李), Smith, or Garcia. Workarounds include adding a digit to the end of the username. Alternatives include creating a separate ID system of unique employee/student ID numbers to use as account names in place of actual users' names, and allowing users to nominate their preferred word sequence within an acceptable use policy. Because duplicate usernames cannot exist within a domain, account name generation poses a significant challenge for large organizations that cannot be easily subdivided into separate domains, such as students in a public school system or university who must be able to use any computer across the network.
Shadow groups
In Microsoft's Active Directory, OUs do not confer access permissions, and objects placed within OUs are not automatically assigned access privileges based on their containing OU. This is a design limitation specific to Active Directory. Other competing directories, such as Novell NDS, are able to assign access privileges through object placement within an OU.
Active Directory requires a separate step for an administrator to assign an object in an OU as a member of a group also within that OU. Relying on OU location alone to determine access permissions is unreliable, because the object may not have been assigned to the group object for that OU. A common workaround for an Active Directory administrator is to write a custom PowerShell or Visual Basic script to automatically create and maintain a user group for each OU in their directory. The scripts are run periodically to update the group to match the OU's account membership, but are unable to instantly update the security groups any time the directory changes, as occurs in competing directories where security is directly implemented into the directory itself. Such groups are known as shadow groups. Once created, these shadow groups are selectable in place of the OU in the administrative tools. Microsoft refers to shadow groups in the Server 2008 Reference documentation but does not explain how to create them. There are no built-in server methods or console snap-ins for managing shadow groups.
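The scripts mentioned above are typically written in PowerShell; purely as an illustration of the same refresh logic, here is a rough Python sketch using the third-party ldap3 package, with a hypothetical OU, group, and credentials:
from ldap3 import Server, Connection, NTLM, MODIFY_REPLACE

# Hypothetical names; a real deployment would differ.
OU_DN = 'ou=Staff,dc=example,dc=com'
GROUP_DN = 'cn=Staff-Shadow,ou=Groups,dc=example,dc=com'

conn = Connection(Server('dc1.example.com'), user='EXAMPLE\\svc_sync',
                  password='secret', authentication=NTLM, auto_bind=True)

# Collect the DNs of all user objects currently placed in the OU.
conn.search(OU_DN, '(objectClass=user)', attributes=['cn'])
members = [entry.entry_dn for entry in conn.entries]

# Overwrite the shadow group's membership to mirror the OU; this must
# run periodically, since nothing updates the group when the OU changes.
conn.modify(GROUP_DN, {'member': [(MODIFY_REPLACE, members)]})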
The division of an organization's information infrastructure into a hierarchy of one or more domains and top-level OUs is a key decision. Common models are by business unit, by geographical location, by IT service, or by object type, and hybrids of these. OUs should be structured primarily to facilitate administrative delegation and, secondarily, to facilitate group policy application. Although OUs form an administrative boundary, the only true security boundary is the forest itself, and an administrator of any domain in the forest must be trusted across all domains in the forest.
Partitions
The Active Directory database is organized in partitions, each holding specific object types and following a specific replication pattern. Microsoft often refers to these partitions as 'naming contexts'. The 'Schema' partition contains the definition of object classes and attributes within the forest. The 'Configuration' partition contains information on the physical structure and configuration of the forest (such as the site topology). Both replicate to all domains in the forest. The 'Domain' partition holds all objects created in that domain and replicates only within its domain.
Physical structure
Sites are physical (rather than logical) groupings defined by one or more IP subnets. AD also holds the definitions of connections, distinguishing low-speed (e.g., WAN, VPN) from high-speed (e.g., LAN) links. Site definitions are independent of the domain and OU structure and are common across the forest. Sites are used to control network traffic generated by replication and also to refer clients to the nearest domain controllers (DCs). Microsoft Exchange Server 2007 uses the site topology for mail routing. Policies can also be defined at the site level.
Physically, the Active Directory information is held on one or more peer domain controllers, replacing the NT PDC/BDC model. Each DC has a copy of the Active Directory. Servers joined to Active Directory that are not domain controllers are called member servers. A subset of objects in the domain partition replicates to domain controllers that are configured as global catalogs. Global catalog (GC) servers replicate to themselves all objects from all domains and hence provide a global listing of all objects in the forest. However, to minimize replication traffic and keep the GC's database small, only selected attributes of each object are replicated. This is called the partial attribute set (PAS). The PAS can be modified by modifying the schema and marking attributes for replication to the GC.
Earlier versions of Windows used NetBIOS to communicate. Active Directory is fully integrated with DNS and requires TCP/IP and DNS. To be fully functional, the DNS server must support SRV resource records, also known as service records.
Replication
Active Directory synchronizes changes using multi-master replication. Replication by default is 'pull' rather than 'push', meaning that replicas pull changes from the server where the change was effected. The Knowledge Consistency Checker (KCC) creates a replication topology of site links, using the defined sites to manage traffic. Intra-site replication is frequent and automatic as a result of change notification, which triggers peers to begin a pull replication cycle. Inter-site replication intervals are typically less frequent and do not use change notification by default, although this is configurable and can be made identical to intra-site replication.
Each link can have a 'cost' (e.g., DS3, T1, ISDN, etc.), and the KCC alters the site link topology accordingly. Replication may occur transitively through several site links on same-protocol site link bridges if the cost is low, although the KCC automatically costs a direct site-to-site link lower than transitive connections. Site-to-site replication can be configured to occur between a bridgehead server in each site, which then replicates the changes to other DCs within the site. Replication for Active Directory zones is automatically configured when DNS is activated in the domain, based on site.
Replication of Active Directory uses Remote Procedure Calls (RPC) over IP (RPC/IP). Between sites, SMTP can be used for replication, but only for changes in the Schema, Configuration, or Partial Attribute Set (Global Catalog) naming contexts. SMTP cannot be used for replicating the default Domain partition.
Implementation
In general, a network utilizing Active Directory has more than one licensed Windows server computer. Backup and restore of Active Directory is possible for a network with a single domain controller, but Microsoft recommends more than one domain controller to provide automatic failover protection of the directory. Domain controllers are also ideally single-purpose, for directory operations only, and should not run any other software or role. Certain Microsoft products, such as SQL Server and Exchange, can interfere with the operation of a domain controller, necessitating isolation of these products on additional Windows servers. Combining them can make configuration or troubleshooting of either the domain controller or the other installed software more difficult. A business intending to implement Active Directory is therefore recommended to purchase a number of Windows server licenses, to provide for at least two separate domain controllers and, optionally, additional domain controllers for performance or redundancy, a separate file server, a separate Exchange server, a separate SQL Server, and so forth to support the various server roles. Physical hardware costs for the many separate servers can be reduced through the use of virtualization, although for proper failover protection, Microsoft recommends not running multiple virtualized domain controllers on the same physical hardware.
Database
The Active Directory database, the directory store, in Windows 2000 Server uses the JET Blue-based Extensible Storage Engine (ESE98) and is limited to 16 terabytes and 2 billion objects (but only 1 billion security principals) in each domain controller's database. Microsoft has created NTDS databases with more than 2 billion objects. (NT4's Security Account Manager could support no more than 40,000 objects.) Called NTDS.DIT, it has two main tables: the data table and the link table. Windows Server 2003 added a third main table for security descriptor single-instancing. Programs may access the features of Active Directory via the COM interfaces provided by Active Directory Service Interfaces.
Trusting
To allow users in one domain to access resources in another, Active Directory uses trusts. Trusts inside a forest are automatically created when domains are created. The forest sets the default boundaries of trust, and implicit, transitive trust is automatic for all domains within a forest.
Terminology
One-way trust
One domain allows access to users on another domain, but the other domain does not allow access to users on the first domain.
Two-way trust
Two domains allow access to users on both domains.
Trusted domain
The domain that is trusted, whose users have access to the trusting domain.
Transitive trust
A trust that can extend beyond two domains to other trusted domains in the forest.
Intransitive trust
A one-way trust that does not extend beyond two domains.
Explicit trust
A trust that an admin creates. It is not transitive and is one way only.
Cross-link trust
An explicit trust between domains in different trees, or in the same tree when a descendant/ancestor (child/parent) relationship does not exist between the two domains.
Shortcut
Joins two domains in different trees; transitive, one- or two-way.
Forest trust
Applies to the entire forest; transitive, one- or two-way.
Realm
Can be transitive or nontransitive (intransitive), one- or two-way.
External
Connects to other forests or non-AD domains; nontransitive, one- or two-way.
PAM trust
A one-way trust used by Microsoft Identity Manager from a (possibly low-level) production forest to a (Windows Server 2016 functionality level) 'bastion' forest, which issues time-limited group memberships.
Management solutions
Microsoft Active Directory management tools include:
Active Directory Administrative Center (introduced with Windows Server 2012 and above)
Active Directory Users and Computers
Active Directory Domains and Trusts
Active Directory Sites and Services
ADSI Edit
Local Users and Groups
Active Directory Schema snap-ins for Microsoft Management Console (MMC)
SysInternals ADExplorer
These management tools may not provide enough functionality for an efficient workflow in large environments. Some third-party solutions extend the administration and management capabilities. They provide essential features for more convenient administration processes, such as automation, reports, and integration with other services.
Unix integration
Varying levels of interoperability with Active Directory can be achieved on most Unix-like operating systems (including Unix, Linux, Mac OS X, or Java- and Unix-based programs) through standards-compliant LDAP clients, but these systems usually do not interpret many attributes associated with Windows components, such as Group Policy and support for one-way trusts. Third parties offer Active Directory integration for Unix-like platforms, including:
PowerBroker Identity Services, formerly Likewise (BeyondTrust, formerly Likewise Software) – allows a non-Windows client to join Active Directory
ADmitMac (Thursby Software Systems)
Samba (free software under GPLv3) – can act as a domain controller
The schema additions shipped with Windows Server 2003 R2 include attributes that map closely enough to RFC 2307 to be generally usable. The reference implementation of RFC 2307, nss_ldap and pam_ldap provided by PADL.com, supports these attributes directly. The default schema for group membership complies with RFC 2307bis (proposed). Windows Server 2003 R2 includes a Microsoft Management Console snap-in that creates and edits the attributes.
An alternative option is to use another directory service: non-Windows clients authenticate to it, while Windows clients authenticate to AD. Such directory services include 389 Directory Server (formerly Fedora Directory Server, FDS), ViewDS Identity Solutions – ViewDS v7.2 XML Enabled Directory, and Sun Microsystems' Sun Java System Directory Server. The latter two are both able to perform two-way synchronization with AD and thus provide a "deflected" integration.
Another option is to use OpenLDAP with its translucent overlay, which can extend entries in any remote LDAP server with additional attributes stored in a local database. Clients pointed at the local database see entries containing both the remote and local attributes, while the remote database remains completely untouched.
Administration (querying, modifying, and monitoring) of Active Directory can be achieved via many scripting languages, including PowerShell, VBScript, JScript/JavaScript, Perl, Python, and Ruby. Free and non-free AD administration tools can help to simplify and possibly automate AD management tasks.
Since October 2017, Amazon AWS has offered integration with Microsoft Active Directory.
See also
AGDLP (implementing role-based access controls using nested groups)
Apple Open Directory
Flexible single master operation
FreeIPA
List of LDAP software
System Security Services Daemon (SSSD)
Univention Corporate Server
References
External links
Microsoft TechNet: White paper: Active Directory Architecture (single technical document that gives an overview of Active Directory)
Microsoft TechNet: Detailed description of Active Directory on Windows Server 2003
Microsoft MSDN Library: [MS-ADTS]: Active Directory Technical Specification (part of the Microsoft Open Specification Promise)
Active Directory Application Mode (ADAM)
Microsoft MSDN: [AD-LDS]: Active Directory Lightweight Directory Services
Microsoft TechNet: [AD-LDS]: Active Directory Lightweight Directory Services
Microsoft MSDN: Active Directory Schema
Microsoft TechNet: Understanding Schema
Microsoft TechNet Magazine: Extending the Active Directory Schema
Microsoft MSDN: Active Directory Certificate Services
Microsoft TechNet: Active Directory Certificate Services
Directory services
Microsoft server technology
Windows components
Windows 2000
68470706
https://en.wikipedia.org/wiki/Hack%20computer
Hack computer
The Hack computer is a theoretical computer design created by Noam Nisan and Shimon Schocken and described in their book, The Elements of Computing Systems: Building a Modern Computer from First Principles. In using the term "modern", the authors refer to a digital, binary machine that is patterned according to the von Neumann architecture model.
The Hack computer is intended for hands-on virtual construction in a hardware simulator application as part of a basic, but comprehensive, course in computer organization and architecture. One such course, created by the authors and delivered in two parts, is freely available as a massive open online course (MOOC) called Build a Modern Computer From First Principles: From Nand to Tetris. In the twelve projects included in the course, learners start with a two-input Nand gate and end up with a fully operational virtual computer, including both hardware (memory and CPU) and software (assembler, VM, Java-like programming language, and OS). In addition to the hardware simulator used for the initial implementation of the computer hardware, a complete Hack computer emulator program and an assembler that support the projects described in the book and the online course are also available at the authors' web site.
Hardware architecture
The Hack computer hardware consists of three basic elements, as shown in the block diagram. There are two separate 16-bit memory units and a central processing unit (CPU). Because data is moved and processed by the computer in 16-bit words, the Hack computer is classified as a 16-bit architecture.
The instruction memory, implemented as read-only memory from the viewpoint of the computer and designated ROM, holds assembled binary program code for execution. The random access memory, called RAM, provides storage for an executing program's data and provides services and storage areas for the computer's memory-mapped I/O mechanism. Data processing and program control management are provided by the CPU.
The three units are connected by parallel buses. The address buses (15-bit), as well as the data and instruction buses (16-bit) for the ROM and RAM units, are completely independent. Therefore, the Hack design follows the Harvard architecture model with respect to bus communication between the memory units and the CPU. All memory is word-addressable only.
Read-only memory (ROM)
The Hack computer's ROM module is presented as a linear array of individually addressable, sequential, 16-bit memory registers. Addresses start at 0 (0x0000). Since the memory elements are sequential devices, a system clock signal is supplied by the simulation application and the computer emulator application. The ROM address bus is 15 bits wide, so a total of 32,768 individual words are available for program instructions. The address of the currently active word is supplied by a program counter register within the CPU (see below). The value in the ROM memory register identified by the address placed on the instruction address bus in a particular clock cycle is available as the "current" instruction at the beginning of the next cycle. There is no instruction register; instructions are decoded in each cycle from the currently active ROM register.
Random access memory (RAM)
Although the RAM module is also viewed as a continuous linear array of individually addressable, sequential, read-write, 16-bit memory registers, it is functionally organized by address range into three segments.
Addresses 0 (0x0000) through 16383 (0x3FFF) contain conventional 16-bit, read-write registers and are meant for use as general-purpose program data storage.
The registers at addresses 16384 (0x4000) through 24575 (0x5FFF) are essentially like data RAM, but they are also designated for use by a built-in screen I/O subsystem. Data written to addresses in this range have the side effect of producing output on the computer's virtual 256 x 512 screen (see I/O). If a program does not require screen output, registers in this range may be used for general program data.
The final address in the RAM address space, at 24576 (0x6000), contains a single one-word register whose current value is controlled by the output of a keyboard attached to the computer hosting the Hack emulator program. This keyboard memory-map register is read-only (see I/O). Data memory addresses in the range 24577 (0x6001) through 32767 (0x7FFF) are invalid.
State transitions of the selected RAM memory register are also coordinated by the system clock signal.
Central Processing Unit (CPU)
As illustrated in the accompanying diagram, the Hack computer central processing unit (CPU) is an integrated logic unit with internal structure. It provides many of the functions found in simple, commercially available CPUs. The most complex element of the CPU is the arithmetic logic unit (ALU), which provides the computational functionality of the computer. The ALU is a combinational logic device having two 16-bit input operands and a single 16-bit output. The computation produced as output from the operands is specified by a set of six ordered, single-bit inputs to the ALU. The ALU also emits two single-bit status flags which indicate whether a computation result is zero (the zr flag) or negative (the ng flag).
The CPU also contains two 16-bit registers, labeled D and A. The D (data) register is a general-purpose register whose current value always supplies the ALU x operand, although for some instructions its value is ignored. While the A (address) register may also provide its current value as the y operand to the ALU when so directed by an instruction, its value may also be used for data memory addressing and as a target address in instruction memory for branching instructions. To facilitate this function, the A register is directly associated with a "pseudo-register" designated M, which is not explicitly implemented in hardware; M represents the value contained in the RAM register whose address is the current value of the A register.
The final important element in the CPU is the program counter (PC) register. The PC is a 16-bit binary counter whose low 15 bits specify the address in instruction memory of the next instruction for execution. Unless directed otherwise by a branching instruction, the PC increments its value at the end of each clock cycle. The CPU also includes logic to change, under program control, the order of the computer's instruction execution by setting the PC to a non-sequential value. The PC also implements a single-bit reset input that initializes the PC value to 0 (0x0000) when it is cycled from logic 0 to logic 1 and back.
Unlike many actual CPU designs, there is no program-accessible hardware mechanism to implement CPU external or internal interrupts, nor support for function calls.
External Input and Output (I/O)
The Hack computer employs a memory-mapped approach to I/O.
Bit-mapped, black and white output to a virtual 256 x 512 screen is effected by writing a bit map of the desired output to data memory locations 16384 (0x4000) through 24575 (0x5FFF). The data words in this address range are viewed as a linear array of bits, with each bit value representing the black/white state of a single pixel on the computer emulator's virtual screen. The least significant bit of the word at the first memory address of the screen RAM segment sets the pixel in the upper left corner of the screen to white if it is 0 and black if it is 1. The next significant bit in the first word controls the next pixel to the right, and so on. After the first 512-pixel row is described by the first 32 words of screen memory, the mapping is continued in the same fashion for the second row with the next 32 words. Logic external to the computer reads the screen RAM memory-map segment many times per clock cycle and updates the virtual screen.
If a keyboard is attached to the computer hosting the CPU emulator program, the emulator puts a left-zero-extended, 8-bit scan code corresponding to a key depressed during program execution in the keyboard register at RAM address 24576 (0x6000). If no key is depressed, this register contains the value 0. The emulator provides a toggle button to enable/disable the keyboard. The encoding scheme closely follows ASCII encoding for printable characters. The effect of the Shift key is generally honored. Codes are also provided for other keys often present on a standard PC keyboard; for example, direction control keys (←, ↑, ↓, →) and Fn keys. As with screen memory, the keyboard memory register is updated many times per clock cycle.
Operating cycle
Step-wise operation of the CPU and memory units is controlled by a clock that is built into both the hardware simulator and the computer emulator programs. At the beginning of a clock cycle, the instruction at the ROM address emitted by the current value of the program counter is decoded. The ALU operands specified in the instruction are marshalled where needed. The computation specified is performed by the ALU, and the appropriate status flags are set. The computation result is saved as specified by the instruction. Finally, the program counter is updated to the value of the next required program instruction. If no branching was specified by the current instruction, the PC value is simply incremented. If branching was specified, the PC is loaded (from the A register) with the address of the next instruction to be executed. The cycle then repeats using the now current PC value.
Because of its Harvard memory architecture model, the Hack computer is designed to execute the current instruction and "fetch" the next instruction in a single, two-part clock cycle. The speed of the clock may be varied by a control element in both the hardware simulator and the CPU emulator. Independent of the selected speed, however, each instruction is completely executed in one cycle. The user may also single-step through a program.
Execution of a program loaded in ROM is controlled by the CPU's reset bit. If the value of the reset bit is 0, execution proceeds according to the operating cycle described above. Setting the reset bit to 1 sets the PC to 0. Setting the reset bit value back to zero then begins execution of the current program at the first instruction; however, on reset RAM retains the values from any previous activity. There is no hardware or machine language support for interrupts of any kind.
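The screen map described above reduces to simple address arithmetic: each 512-pixel row occupies 32 consecutive words, and within a word the least significant bit is the leftmost pixel of its group. A small Python sketch of that arithmetic (the list 'ram' merely stands in for the Hack data memory; the helper names are illustrative):
SCREEN_BASE = 16384  # 0x4000, first word of the screen segment

def pixel_word_and_bit(row, col):
    # 32 words per 512-pixel row; 16 pixels per word, LSB leftmost.
    return SCREEN_BASE + row * 32 + col // 16, col % 16

def set_pixel(ram, row, col, black=True):
    address, bit = pixel_word_and_bit(row, col)
    if black:
        ram[address] |= (1 << bit)
    else:
        ram[address] &= ~(1 << bit) & 0xFFFF

ram = [0] * 32768
set_pixel(ram, 0, 0)          # blacken the top-left pixel
print(hex(ram[SCREEN_BASE]))  # 0x1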
Data types
Values stored in ROM memory must represent valid Hack machine language instructions, as described in the instruction set architecture section. Any 16-bit value may be stored in RAM. The data type of a value stored in RAM is inferred by its location and/or its use within a program. The primary hardware-supported data type is the 16-bit signed integer, which is represented in 2's complement format. Signed integers therefore have the range -32768 through 32767. The lower 15 bits of a value in RAM may also represent an address in ROM or RAM, in the sense of a pointer. For values in the RAM memory registers assigned for screen I/O, each value is interpreted by the computer's independent I/O subsystem as a map of 16 pixels of the 256-row x 512-column virtual screen, if the screen is "turned on". The code value in keyboard memory may be read programmatically and interpreted for use by a program. There is no hardware support for floating-point types.
Instruction set architecture (ISA) and machine language
The Hack computer's instruction set architecture (ISA) and derived machine language are sparse compared to many other architectures. Although the 6 bits used to specify a computation by the ALU could allow for 64 distinct instructions, only 18 are officially implemented in the Hack computer's ISA. Since the Hack computer hardware has direct support for neither integer multiplication (and division) nor function calls, there are no corresponding machine language instructions in the ISA for these operations. Hack machine language has only two types of instructions, each encoded in 16 binary digits.
A-instructions
Instructions whose most significant bit is "0" are called A-instructions or address instructions. The A-instruction is bit-field encoded as follows:
0 b14 b13 b12 b11 b10 b9 b8 b7 b6 b5 b4 b3 b2 b1 b0
0 – the most significant bit of an A-instruction is "0"
b14-b0 – these bits provide the binary representation of a non-negative integer in the decimal range 0 through 32767
When this instruction is executed, the remaining 15 bits are left-zero extended and loaded into the CPU's A register. As a side effect, the RAM register having the address represented by that value is enabled for subsequent read/write action in the next clock cycle.
C-instructions
The other instruction type, known as C-instructions (computation instructions), have "1" as the most significant bit. The remaining 15 bits are bit-field encoded to define the operands, the computation performed, and the storage location for the specified computation result. This instruction may also specify a program branch based on the most recent computation result. The C-instruction is bit-field encoded as follows:
1 x1 x0 a c5 c4 c3 c2 c1 c0 d2 d1 d0 j2 j1 j0
1 – the most significant bit of a C-instruction is "1"
x1 x0 – these bits are ignored by the CPU and, by convention, are each always set to "1"
a – this bit specifies the source of the "y" operand of the ALU when it is used in a computation
c5-c0 – these six control bits specify the operands and computation to be performed by the ALU
d2-d0 – these three bits specify the destination(s) for storing the current ALU output
j2-j0 – these three bits specify an arithmetic branch condition, an unconditional branch (jump), or no branching
The Hack computer encoding scheme of the C-instruction is shown in the following tables.
In these tables,
A represents the value currently contained in the A-register
D represents the value currently contained in the D-register
M represents the value currently contained in the data memory register whose address is contained in the A-register; that is, M == RAM[A]
Assembly language
The Hack computer has a text-based assembly language for creating programs for the hardware platform that implements the Hack computer ISA. Hack assembly language programs may be stored in text files having the file name extension “.asm”. Hack assembly language source files are case sensitive. Each line of text contains one of the following elements:
Blank line
Comment
Label declaration (with optional end-of-line comment)
A-instruction (with optional end-of-line comment)
C-instruction (with optional end-of-line comment)
Each of these line types has a specific syntax and may contain predefined or user-defined symbols or numeric constants. Blank lines and comments are ignored by the assembler. Label declarations, A-instructions, and C-instructions, as defined below, may not include any internal white-space characters, although leading or trailing whitespace is permitted (and ignored).
Comments
Any text beginning with the two-character sequence “//” is a comment. Comments may appear on a source code line alone, or may be placed at the end of any other program source line. All text from the comment identifier sequence to the end of the line is ignored by the assembler; consequently, comments produce no machine code.
Symbols and numeric constants
Hack assembly language allows the use of alphanumeric symbols for a number of different purposes. A symbol may be any sequence of alphabetic characters (upper and lower case) or decimal digits. Symbols may also contain any of the following characters: underscore (“_”), period (“.”), dollar sign (“$”), and colon (“:”). Symbols may not begin with a digit character. Symbols are case sensitive. User-defined symbols are used to create variable names and labels (see below).
The Hack assembly language assembler recognizes some predefined symbols for use in assembly language programs. The symbols R0, R1, …, R15 are bound respectively to the integers 0 through 15. These symbols are meant to represent general-purpose registers, and their values therefore represent data memory addresses 0 through 15. The predefined symbols SCREEN and KBD represent the data memory addresses of the start of the memory-mapped virtual screen output (16384) and the keyboard input (24576), respectively. There are a few other predefined symbols (SP, LCL, ARG, THIS, and THAT) that are used in building the operating system software stack.
A string of decimal (0-9) digits may be used to represent a non-negative decimal constant in the range 0 through 32767. The use of a minus sign to indicate a negative number is not allowed. Binary and octal representations are not supported.
Variables
User-defined symbols may be created in an assembly language program to represent variables; that is, named RAM registers. Each such symbol is bound at assembly time to a RAM address chosen by the assembler. Therefore, variables must be treated as addresses when appearing in assembly language source code.
Variables are implicitly defined in assembly language source code when they are first referenced in an A-instruction. When the source code is processed by the assembler, each variable symbol is bound to a unique RAM address, beginning at address 16.
Addresses are sequentially bound to variable symbols in the order of their first appearance in the source code. By convention, user-defined symbols that identify program variables are written in all lower case.
Labels
Labels are symbols delimited by left "(" and right ")" parentheses. They are defined on a separate source program line and are bound by the assembler to the ROM address of the next instruction in the source code. Labels may be defined only once, but they may be used multiple times anywhere within the program, even before the line on which they are defined. By convention, labels are expressed in all-caps. They are used to identify the target address of branching C-instructions.
A-instructions
The A-instruction has the syntax “@xxxx”, where xxxx is either a numeric decimal constant in the range 0 through 32767, a label, or a variable (predefined or user defined). When executed, this instruction loads the 15-bit binary value represented by “xxxx”, left-zero extended to 16 bits, into the A register; the M pseudo-register then refers to the RAM location with that address.
The A-instruction may be used for one of three purposes. It is the only means of introducing a (non-negative) numeric value into the computer under program control; that is, it may be used to create program constants. Secondly, it is used to specify a RAM memory location, via the M pseudo-register mechanism, for subsequent reference by a C-instruction. Finally, a C-instruction that specifies a branch uses the current value of the A register as the branch target address; the A-instruction is used to set that target address prior to the branch instruction, usually by reference to a label.
C-instructions
C-instructions direct the ALU computation engine and the program flow control capabilities of the Hack computer. The instruction syntax is defined by three fields, referred to as “comp”, “dest”, and “jump”. The comp field is required in every C-instruction. The C-instruction syntax is “dest=comp;jump”. The “=” and “;” characters are used to delimit the fields of the instruction. If the dest field is not used, the “=” character is omitted. If the jump field is not used, the “;” character is omitted. The C-instruction allows no internal spaces.
The comp field must be one of the 28 documented mnemonic codes defined in the table above. These codes are treated as indivisible units; they must be expressed in all-caps with no internal spaces. Note that while the 6 ALU control bits could potentially specify 64 computational functions, only the 18 presented in the table (28 mnemonics, counting both the A- and M-operand forms) are officially documented for recognition by the assembler.
The dest field may be used to specify one or more locations in which to store the result of the specified computation. If this field is omitted, along with the “=” delimiter, the computed value is not stored. The allowed storage location combinations are specified by the mnemonic codes defined in the table above.
The jump field may be used to specify the address in ROM of the next instruction to be executed. If the field is omitted, along with the “;” delimiter, execution continues with the instruction immediately following the current instruction. The branch target address, in ROM, is provided by the current value of the A register if the specified branch condition is satisfied. If the branch condition fails, execution continues with the next instruction in ROM.
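As a worked example of this encoding (constructed here for illustration; the article's encoding tables were not reproduced above, so the bit values follow the standard published Hack specification):

    D;JGT    ->  1110001100000001   // comp = D (a=0, c-bits 001100), no dest, jump = JGT (001)
    MD=D+1   ->  1110011111011000   // comp = D+1 (a=0, c-bits 011111), dest = MD (011), no jump

In both cases the two bits following the leading “1” are set to “1” by convention, as described above.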
Mnemonic codes are provided for six different comparisons based on the value of the current computation. Additionally, an unconditional branch is provided as a seventh option. Because the comp field must always be supplied, even though its value is not needed for an unconditional branch, the syntax of this instruction is given as “0;JMP”. The branch conditions supported are specified in the table above.
Assembler
Freely available software supporting the Hack computer includes a command-line assembler application. The assembler reads Hack assembly language source files (*.asm) and produces Hack machine language output files (*.hack). The machine language file is also a text file. Each line of this file is a 16-character string of binary digits that represents the encoding of the corresponding executable line of the source text file, according to the specification described in the section "Instruction set architecture (ISA) and machine language". The file created may be loaded into the Hack computer emulator by a facility provided by the emulator user interface.
Example Assembly Language Program
Following is an annotated example program written in Hack assembly language. This program sums the first 100 consecutive integers and places the result of the calculation in a user-defined variable called “sum”. It implements a “while” loop construct to iterate through the integer values 1 through 100, adding each integer to the “sum” variable. The user-defined variable “cnt” maintains the current integer value through the loop. This program illustrates all of the features of the “documented” assembly language capabilities of the Hack computer except memory-mapped I/O.
The contents of the Hack assembly language source file are shown in the second column in bold font. Line numbers are provided for reference in the following discussion but do not appear in the source code. The Hack machine code produced by the assembler is shown in the last column, with the assigned ROM address in the preceding column. Note that full-line comments, blank lines, and label definition statements generate no machine language code, and that the comments provided at the end of each line containing an assembly language instruction are ignored by the assembler. The assembler output, shown in the last column, is a text string of 16 binary characters, not a 16-bit binary integer.
Note that the instruction sequence follows the pattern of A-instruction, C-instruction, A-instruction, C-instruction, and so on. This is typical of Hack assembly language programs: the A-instruction specifies a constant or memory address that is used in the subsequent C-instruction. All three variations of the A-instruction are illustrated. In line 11 (@100), the constant value 100 is loaded into the A register. This value is used in line 12 (D=D-A) to compute the value used to test the loop branch condition. Since line 4 (@cnt) contains the first appearance of the user-defined variable "cnt", this statement binds the symbol to the next unused RAM address. In this instance, the address is 16, and that value is loaded into the A register; the M pseudo-register now references this address, and RAM[16] is made the active RAM memory location. The third use of the A-instruction is seen in line 21 (@LOOP). Here the instruction loads the bound label value, representing an address in ROM, into the A register.
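The table containing the program listing itself did not survive in this copy of the article. The following listing is a reconstruction (an assumption, not the original table): it is written to be consistent with every line number and instruction cited in the surrounding discussion, but the exact comments and formatting are illustrative only.

     1  // Sums the integers 1 through 100.
     2  // The result is left in the variable "sum";
     3  // "cnt" holds the current integer.
     4  @cnt            // bind cnt to RAM[16]; A = 16
     5  M=1             // cnt = 1
     6  @sum            // bind sum to RAM[17]; A = 17
     7  M=0             // sum = 0
     8  (LOOP)
     9  @cnt
    10  D=M             // D = cnt
    11  @100            // A = 100
    12  D=D-A           // D = cnt - 100
    13  @END
    14  D;JGT           // if cnt > 100, goto END
    15  @cnt
    16  D=M             // D = cnt
    17  @sum
    18  M=D+M           // sum = sum + cnt
    19  @cnt
    20  M=M+1           // cnt = cnt + 1
    21  @LOOP
    22  0;JMP           // goto LOOP
    23  (END)
    24  @END
    25  0;JMP           // infinite loop terminates the program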
The subsequent unconditional branch instruction in line 22 (0;JMP) loads the A register value into the CPU's program counter to effect a control transfer to the beginning of the loop.
The Hack computer provides no machine language instruction to halt program execution. The final two lines of the program (@END and 0;JMP) create an infinite loop, which Hack assembly programs conventionally use to terminate programs designed to run in the CPU emulator.
See also
Hennessy, John L., & Patterson, David A. (2019). Computer Architecture: A Quantitative Approach, 6th Edition. Cambridge, Massachusetts: Morgan Kaufmann Publishers.
Justice, Matthew. (2021). How Computers Really Work. San Francisco, California: No Starch Press.
Malvino, Albert P., & Brown, Jerald A. (1993). Digital Computer Electronics, 3rd Edition. New York, New York: Glencoe McGraw-Hill.
Null, Linda, & Lobur, Julia. (2019). The Essentials of Computer Organization and Architecture, 5th Edition. Burlington, Massachusetts: Jones and Bartlett Learning.
Patt, Yale N., & Patel, Sanjay J. (2020). Introduction to Computing Systems: From Bits and Gates to C and Beyond, 3rd Edition. New York, New York: McGraw Hill Education.
Petzold, Charles. (2009). Code: The Hidden Language of Computer Hardware and Software. Redmond, Washington: Microsoft Press.
Scott, John Clark. (2009). But How Do It Know? The Basic Principles of Computers for Everyone. Oldsmar, Florida: John C. Scott.
Whipple, Richard. (2019). Build Your Own Computer from Scratch. Seattle, Washington: Amazon Kindle.
References
Computers
4839663
https://en.wikipedia.org/wiki/VJing
VJing
VJing (pronounced: VEE-JAY-ing) is a broad designation for realtime visual performance. Characteristics of VJing are the creation or manipulation of imagery in realtime through technological mediation and for an audience, in synchronization with music. VJing often takes place at events such as concerts, nightclubs and music festivals, sometimes in combination with other performative arts. This results in a live multimedia performance that can include music, actors and dancers. The term VJing became popular through its association with MTV's video jockeys, but its origins date back to the New York club scene of the 1970s. In both situations VJing is the manipulation or selection of visuals, the same way DJing is a selection and manipulation of audio.
One of the key elements in the practice of VJing is the realtime mix of content from a "library of media": from storage media such as VHS tapes or DVDs, from video and still image files on computer hard drives, from live camera input, or from computer-generated visuals. In addition to the selection of media, VJing mostly implies realtime processing of the visual material. The term is also used to describe the performative use of generative software, although the word "becomes dubious (...) since no video is being mixed".
History
Antecedents
Historically, VJing takes its references from art forms that deal with the synesthetic experience of vision and sound. These historical references are shared with other live audiovisual art forms, such as Live Cinema, and include the camera obscura, the panorama and diorama, the magic lantern, the color organ, and liquid light shows.
The color organ is a mechanism to make colors correspond to sound through mechanical and electromechanical means. Bainbridge Bishop, who contributed to the development of the color organ, was "dominated with the idea of painting music". In a book from 1893 that documents his work, Bishop states: "I procured an organ, and experimented by building an attachment to the keys, which would play with different colored lights to correspond with the music of the instrument."
Between 1919 and 1927, Mary Hallock-Greenewalt, a piano soloist, created a new technological art form called Nourathar, which means "essence of light" in Arabic. Her light music consisted of environmental color fields that produced a scale of light intensities and color. "In place of a keyboard, the Sarabet had a console with graduated sliders and other controls, more like a modern mixing board. Lights could be adjusted directly via the sliders, through the use of a pedal, and with toggle switches that worked like individual keys."
In clubs and private events in the 1960s, "people used liquid-slides, disco balls and light projections on smoke to give the audience new sensations. Some of these experiments were linked to the music, but most of the time they functioned as decorations." These came to be known as liquid light shows. From 1965 to 1966 in San Francisco, visual shows by artist collectives such as The Joshua Light Show and the Brotherhood of Light accompanied Grateful Dead concerts, which were inspired by the Beat generation (in particular the Merry Pranksters) and fueled by the "expansion of consciousness" from the Acid Tests. The Exploding Plastic Inevitable, organized by Andy Warhol between 1966 and 1967, contributed to the fusion of music and visuals in a party context.
"The Exploding Party project examined the history of the party as an experimental artistic format, focusing in particular on music visualization - also in live contexts" 1970s Important events During the late 1970s video and music performance became more tightly integrated. At concerts, a few bands started to have regular film/video along with their music. Experimental film maker Tony Potts was considered an unofficial member of The Monochrome Set for his work on lighting design and film making for projections for live shows. Test Department initially worked with "Bert" Turnball as their resident visual artist, creating slideshows and film for live performances. The organization, Ministry of Power included collaborations with performance groups, traditional choirs and various political activists. Industrial bands would perform in art contexts, as well as in concert halls, and often with video projections. Groups like Cabaret Voltaire started to use low cost video editing equipment to create their own time-based collages for their sound works. In their words, "before [the use of video], you had to do collages on paper, but now you present them in rhythm—living time—in video." The film collages made by and for groups such as the Test Dept, Throbbing Gristle and San Francisco's Tuxedomoon became part of their live shows. An example of mixing film with live performance is that of Public Image Ltd. at the Ritz Riot in 1981. This club, located on the East 9th St in New York, had a state of the art video projection system. It was used to show a combination of prerecorded and live video on the club's screen. PiL played behind this screen with lights rear projecting their shadows on to the screen. Expecting a more traditional rock show, the audience reacted by pelting the projection screen with beer bottles and eventually pulling down the screen. Technological developments An artist retreat in Owego New York called Experimental Television Center, founded in 1971, made contributions to the development of many artists by gathering the experimental hardware created by video art pioneers: Nam June Paik, Steve Rutt and Bill Etra, and made the equipment available to artists in an inviting setting for free experimentation. Many of the outcomes debuted at the nightclub Hurrah which quickly became a new alternative for video artists who could not get their avant garde productions aired on regular broadcast outlets. Similarly, music video development was happening in other major cities around the world, providing an alternative to mainstream television. A notable image processor is the Sandin Image Processor (1971), primarily as it describes what is now commonly referred to as open source. The Dan Sandin Image Processor, or "IP," is an analog video processor with video signals sent through processing modules that route to an output color encoder. The IP's most unique attribute is its non-commercial philosophy, emphasizing a public access to processing methods and the machines that assist in generating the images. The IP was Sandin's electronic expression for a culture that would "learn to use High-Tech machines for personal, aesthetic, religious, intuitive, comprehensive, and exploratory growth." This educational goal was supplemented with a "distribution religion" that enabled video artists, and not-for-profit groups, to "roll-your-own" video synthesizer for only the cost of parts and the sweat and labor it took to build it. 
It was the "Heathkit" of video art tools, with a full building plan spelled out, including electronic schematics and mechanical assembly information. Tips on soldering and on procuring electronic parts and printed circuit boards were also included in the documentation, increasing the chances of successfully building a working version of the video synthesizer.
1980s
Important events
In May 1980, multimedia artist and filmmaker Merrill Aldighieri was invited to screen a film at the nightclub Hurrah. At this time, music video clips did not exist in large quantity, and the video installation was used to present only an occasional film. To bring the role of visuals to an equal level with the DJ's music, Merrill made a large body of ambient visuals; working alongside the DJ, she mixed this collection of raw visuals in real time to create a non-stop visual interpretation of the music. Merrill became the world's first full-time VJ. MTV's founders came to this club, and Merrill introduced them to the term and the role of the "VJ", inspiring them to have VJ hosts on their channel the following year. Merrill collaborated with many musicians at the club, notably with electronic musician Richard Bone, to make the first ambient music video album, titled "Emerging Video". Thanks to a grant from the Experimental Television Center, her blend of video and 16 mm film bore the influential mark of the unique Rutt/Etra and Paik synthesizers. This film was offered on VHS through High Times magazine and was featured in the club programming. Her next foray into the home video audience was in collaboration with the newly formed arm of Sony, Sony Home Video, where she introduced the concept of "breaking music on video" with her series DANSPAK. With a few exceptions, like the Jim Carroll Band with Lou Reed and Man Parrish, this series featured unknown bands, many of them unsigned.
The rise of electronic music (especially the house and techno genres) and DJ club culture provided more opportunities for artists to create live visuals at events. The popularity of MTV led to greater and better production of music videos for both broadcast and VHS, and many clubs began to show music videos as part of the entertainment and atmosphere. Joe Shanahan (owner of Metro in 1989-1990) was paying artists for video content on VHS. Part of the evening they would play MTV music videos, and part of the evening they would run mixes from local artists Shanahan had commissioned. Medusa's (an all-ages club in Chicago) incorporated visuals as part of its nightly art performances throughout the early-to-mid 80s (1983–85). Also in Chicago during the mid-80s was Smart Bar, where Metro held "Video Metro" every Saturday night.
Technological developments
In the 1980s, the development of relatively cheap transistor and integrated circuit technology allowed the development of digital video effects hardware at a price within reach of individual VJs and nightclub owners. One of the first commercially distributed video synthesizers, available in 1981, was the CEL Electronics Chromascope, sold for use in the developing nightclub scene. The Fairlight Computer Video Instrument (CVI), first produced in 1983, was revolutionary in this area, allowing complex digital effects to be applied in real time to video sources. The CVI became popular amongst television and music video producers and features in a number of music videos from the period.
The Commodore Amiga, introduced in 1985, made a breakthrough in the accessibility of home computers, and the first 2D and 3D computer animation programs that could produce broadcast-quality results on a desktop computer were developed for it.
1990s
Important events
A number of recorded works began to be published in the 1990s to further distribute the work of VJs, such as the Xmix compilations (beginning in 1993), Future Sound of London's "Lifeforms" (VHS, 1994), Emergency Broadcast Network's "Telecommunication Breakdown" (VHS, 1995), Coldcut and Hexstatic's "Timber" (VHS, 1997, and later on CD-ROM including a copy of the VJamm VJ software), the "Mego Videos" compilation of works from 1996-1998 (VHS/PAL, 1999) and Addictive TV's 1998 television series "Transambient" for the UK's Channel 4 (and DVD release).
In the United States, the emergence of the rave scene is perhaps to be credited for the shift of the VJ scene from nightclubs into underground parties. From around 1991 until 1994, Mark Zero would do film loops at Chicago raves and house parties. One of the earliest large-scale Chicago raves was "Massive New Years Eve Revolution" in 1993, produced by Milwaukee's Drop Bass Network. It was a notable event in that it featured the Optique Vid Tek (OVT) VJs on the bill. This event was followed by Psychosis, held on 3 April 1993, and headlined by Psychic TV, with visuals by OVT Visuals. In San Francisco, Dimension 7 was a VJ collective working in the early West Coast rave scene beginning in 1993. Between 1996 and 1998, Dimension 7 took projectors and lasers to the Burning Man festival, creating immersive video installations in the Black Rock Desert.
In the UK, groups such as The Light Surgeons and Eikon were transforming clubs and rave events by combining the old techniques of liquid light shows with layers of slide, film and video projections. In Bristol, Children of Technology emerged, pioneering interactive immersive environments stemming from co-founder Mike Godfrey's architectural thesis at university during the 1980s. Children of Technology integrated their homegrown CGI animation and video texture library with output from the interactive Virtual Light Machine (VLM), the brainchild of Jeff Minter and Dave Japp, with output onto over 500 sq m of layered screens using high-power video and laser projection within a dedicated lightshow. Their "Ambient Theatre Lightshow" first emerged at Glastonbury 93, where they also provided VJ visuals for the Shamen, who had just released their No. 1 hit "Ebeneezer Goode". Invited musicians jammed in the Ambient Theatre Lightshow, using the VLM, within a prototype immersive environment. Children of Technology took interactive video concepts into a wide range of projects, including show production for "Obsession" raves between 1993 and 1995, theatre, clubs, advertising, major stage shows and TV events. This included pioneering projects with 3D video/sound recording and performance, and major architectural projects in the late 1990s, where many media technology ideas were now taking hold.
Another collective, "Hex", worked across a wide range of media, from computer games to art exhibitions; the group pioneered many new media hybrids, including live audiovisual jamming, computer-generated audio performances, and interactive collaborative instruments. This was the start of a trend which continues today, with many VJs working beyond the club and dance party scene in areas such as installation art.
The Japanese book "VJ2000" (Daizaburo Harada, 1999) marked one of the earliest publications dedicated to discussing the practices of VJs.
Technological developments
The combination of the emerging rave scene with slightly more affordable video technology for home-entertainment systems brought consumer video products into wider use in artistic production. However, the cost of this new video equipment was still high enough to be prohibitive for many artists. Three main factors led to the proliferation of the VJ scene in the 2000s:
affordable and faster laptops;
the drop in prices of video projectors (especially after the dot-com bust, when companies were offloading their goods on craigslist);
the emergence of strong rave scenes and the growth of club culture internationally.
As a result of these, the VJ scene saw an explosion of new artists and styles. These conditions also facilitated the sudden emergence of a less visible (but nonetheless strong) movement of artists creating algorithmic, generative visuals.
This decade saw video technology shift from being strictly for professional film and television studios to being accessible to the prosumer market (e.g. the wedding industry, church presentations, low-budget films, and community television productions). Video mixers from this market were quickly adopted by VJs as the core component of their performance setups. This is similar to the release of the Technics 1200 turntables, which were marketed towards homeowners desiring a more advanced home entertainment system, but were then appropriated by musicians and music enthusiasts for experimentation. Initially, video mixers were used to mix pre-prepared video material from VHS players and live camera sources, and later to add the new computer software outputs into the mix. The 90s saw the development of a number of digital video mixers such as Panasonic's WJ-MX50 and WJ-MX12 and the Videonics MX-1. Early desktop editing systems such as the NewTek Video Toaster for the Amiga computer were quickly put to use by VJs seeking to create visuals for the emerging rave scene, whilst software developers began to develop systems specifically designed for live visuals, such as O'Wonder's "Bitbopper".
The first known software for VJs was Vujak, created in 1992 and written for the Mac by artist Brian Kane for use by the video art group he was part of, Emergency Broadcast Network, though it was not used in live performances. EBN instead used the EBN VideoSampler v2.3, developed by Mark Marinello and Greg Deocampo.
In the UK, Bristol's Children of Technology developed a dedicated immersive video lightshow using the Virtual Light Machine (VLM), called AVLS or Audio-Visual-Live-System, during 1992 and 1993. The VLM was a custom-built PC made by video engineer Dave Japp using super-rare transputer chips and modified motherboards, and programmed by Jeff Minter (Llamasoft & Virtual Light Co.). The VLM developed out of Minter's earlier Llamasoft Light Synthesiser programme. With the VLM, DI feeds from live musicians or DJs activated Minter's algorithmic real-time video patterns, and this output was mixed in real time on Panasonic video mixers with a custom CGI animation/VHS texture library and live camera video feedback. Children of Technology also developed their own "Video Light" system, using high-power and low-power video projection to generate real-time 3D beam effects simultaneously with enormous surface and mapped projection.
The VLM was used by the Shamen, The Orb, Primal Scream, Obsession, Peter Gabriel, Prince and many others between 1993 and 1996. A software version of the VLM was integrated into Atari's Jaguar console in response to growing VJ interest. In the mid-90s, audio-reactive pure-synthesis (as opposed to clip-based) software such as Cthugha and Bomb was influential. By the late 90s there were several PC-based VJing programs available, including generative visuals programs such as MooNSTER, Aestesis, and Advanced Visualization Studio, as well as video clip players such as FLxER, created by Gianluca Del Gobbo, and VJamm. Programming environments such as Max/MSP, Macromedia Director and, later, Quartz Composer came to be used both on their own and to create VJing programs like VDMX or pixmix. These new software products, and the dramatic increases in computer processing power over the decade, meant that VJs were now regularly taking computers to gigs.
2000s
Important events
The new century brought new dynamics to the practice of visual performance. To be a VJ had previously largely meant a process of self-invention in isolation from others, as the term was not widely known. With the rise of internet adoption, having access to other practitioners became the norm, and virtual communities quickly formed. The sense of the collective then translated from the virtual world into physical spaces. This became apparent through the numerous festivals that emerged all over Europe with a strong focus on VJing.
VJ events in Europe
The VideA festival in Barcelona ran from 2000 to 2005. AVIT, clear in its inception as the online community of VJCentral.com self-organising a physical presence, had its first festival in Leeds (2002), followed by Chicago (2003), Brighton (2003), San Francisco (2004), and Birmingham (2005); other early festivals include 320x240 in Croatia (2003) and Contact Europe in Berlin (2003). The Cimatics festival in Brussels should also be credited as a pioneering event, with a first festival edition in 2002 completely dedicated to VJing. In 2003, the Finnish media arts festival PixelAche was dedicated to the topic of VJing, and in the same year Berlin's Chaos Computer Club started a collaboration with AVIT organisers that featured VJ camps and congress strands. LPM - Live Performers Meeting was born in Rome in 2004, with the aim of offering a real meeting space for often individually working artists: a place to meet fellow VJ artists, spin off new projects, and share all VJing-related experiences, software, questions and insights. LPM has since become one of the leading international meetings dedicated to artists, professionals and enthusiasts of VJing, visual and live video performance, counting its 20th edition in 2019. Also around this time (in 2005 and 2007), UK artists Addictive TV teamed up with the British Film Institute to produce Optronica, a crossover event showcasing audiovisual performances at the London IMAX cinema and BFI Southbank. Two festivals entirely dedicated to VJing, the Mapping Festival in Geneva and Vision'R in Paris, held their first editions in 2005. As festivals prominently featuring VJs as headline acts (or as the entire focus of the event) emerged, the rave festival scene also began to regularly include VJs in its main stage lineups, with varying degrees of prominence.
VJ events beyond Europe
The MUTEK festival (2000–present) in Montréal regularly featured VJs alongside experimental sound art performances, and later the Elektra Festival (2008–present) also emerged in Montréal and featured many VJ performances. In Perth, Australia, the Byte Me! festival (2007) showed the work of many VJs from the Pacific Rim area alongside new media theorists and design practitioners.
With less funding, the US scene has been host to more workshops and salons than festivals. Between 2000 and 2006, Grant Davis (VJ Culture) and Jon Schwark of Dimension 7 produced "Video Salon", a regular monthly gathering significant in helping establish and educate a strong VJ community in San Francisco, attended by VJs from across California and the United States. In addition, they produced an annual "Video RIOT!" (2003–2005): as a political statement following the R.A.V.E. Act (Reducing Americans' Vulnerability to Ecstasy Act) of 2003; as a display of dissatisfaction with the re-election of George W. Bush in 2004; and in defiance of a San Francisco city ordinance limiting public gatherings in 2005.
Several VJ battles and competitions began to emerge during this period, such as Video Salon's "SIGGRAPH VJ Battle" in San Diego (2003), Videocake's "AV Deathmatch" series in Toronto (2006) and the "VJ Contests" at the Mapping Festival in Geneva (2009). These worked much like a traditional DJ battle, where VJs would be given a set amount of time to show off their best mixes and were judged according to several criteria by a panel of judges.
Publications
Databases of visual content and promotional documentation became available on DVD and online, through personal websites and through large databases such as the "Prelinger Archives" on Archive.org. Many VJs began releasing digital video loop sets on various websites under Public Domain or Creative Commons licensing for other VJs to use in their mixes, such as Tom Bassford's "Design of Signage" collection (2006), Analog Recycling's "79 VJ Loops" (2006), VJzoo's "Vintage Fairlight Clips" (2007) and Mo Selle's "57 V.2" (2007).
Promotional and content-based DVDs began to emerge, such as the works from the UK's ITV1 television series Mixmasters (2000–2005) produced by Addictive TV, Lightrhythm Visuals (2003), Visomat Inc. (2002), and Pixdisc, all of which focused on visual creators, VJ styles and techniques. These were later followed by NOTV, Atmospherix, and other labels. Mia Makela curated a DVD collection for the Mediateca of Caixa Forum called "LIVE CINEMA" in 2007, focusing on the emerging sister practice of "live cinema". Individual VJs and collectives also published DVDs and CD-ROMs of their work, including Eclectic Method's bootleg video mix (2002) and their "We're Not VJs" (2005), as well as eyewash's "DVD2" (2004) and "DVD3" (2008).
Books reflecting on the history, technical aspects, and theoretical issues of the practice began to appear, such as "The VJ Book: Inspirations and Practical Advice for Live Visuals Performance" (Paul Spinrad, 2005), "VJ: Audio-Visual Art and VJ Culture" (Michael Faulkner and D-Fuse, 2006), "vE-jA: Art + Technology of Live Audio-Video" (Xárene Eskandar [ed], 2006), and "VJ: Live Cinema Unraveled" (Tim Jaeger, 2006). VJ–DJ collaboration also began to attract interest from researchers in the field of human–computer interaction (HCI).
Technological developments
The availability and affordability of new consumer-level technology allowed many more people to get involved in VJing. The dramatic increase in available computer processing power facilitated more compact, yet often more complex, setups, sometimes allowing VJs to bypass the video mixer altogether and mix instead with VJ software running on a powerful computer. However, many VJs continue to use video mixers with multiple sources, which allows flexibility for a wide range of input devices and provides a level of security against computer crashes or slowdowns in video playback caused by overloading the computer's CPU with demanding realtime video processing.
Today's VJs have a wide choice of off-the-shelf hardware products, covering every aspect of visuals performance, including video sample playback (Korg Kaptivator), real-time video effects (Korg Entrancer) and 3D visual generation. The widespread use of DVDs spurred the development of scratchable DVD players. Many new models of MIDI controllers became available during the 2000s, which allow VJs to use controllers based on physical knobs, dials, and sliders, rather than interacting primarily through the mouse/keyboard computer interface.
There are also many VJs working with experimental approaches to live video. Open-source graphical programming environments (such as Pure Data) are often used to create custom software interfaces for performances, or to connect experimental devices to the computer for processing live data (for example, the IBVA EEG-reading brainwave unit, the Arduino microcontroller, or circuit-bent children's toys).
The second half of this decade also saw a dramatic increase in the display configurations being deployed, including widescreen canvases, multiple projections and video mapped onto architectural forms. This shift has been underlined by the transition from broadcast-based technology, fixed until this decade firmly in the 4:3 aspect ratio of the NTSC and PAL specifications, to computer-industry technology, where the varied needs of office presentation, immersive gaming and corporate video presentation have led to diversity and abundance in methods of output. Compared to the ~640x480i fixed format of NTSC/PAL, a contemporary laptop using DVI can output a great variety of resolutions up to ~2500px wide and, in conjunction with the Matrox TripleHead2Go, can feed three different displays with an image coordinated across them all.
Common technical setups
A significant aspect of VJing is the use of technology, be it the re-appropriation of existing technologies meant for other fields, or the creation of new and specific ones for the purpose of live performance. The advent of video was a defining moment for the formation of the VJ (video jockey). Often using a video mixer, VJs blend and superimpose various video sources into a live motion composition. In recent years, electronic musical instrument makers have begun to make specialty equipment for VJing.
VJing developed initially through performers using video hardware such as video cameras, video decks and monitors to transmit improvised performances with live input from cameras and even broadcast TV, mixed with pre-recorded elements. This tradition lives on, with many VJs using a wide range of hardware and software, available commercially or custom made for and by the VJs. VJ hardware can be split into the following categories:
Source hardware generates a video picture which can be manipulated by the VJ, e.g. video cameras and video synthesizers.
Playback hardware plays back an existing video stream from disk- or tape-based storage media, e.g. VHS tape players and DVD players.
Mixing hardware allows the combining of multiple streams of video, e.g. a video mixer or a computer utilizing VJ software.
Effects hardware allows the adding of special effects to the video stream, e.g. colour correction units.
Output hardware is for displaying the video signal, e.g. a video projector, LED display, or plasma screen.
There are many types of software a VJ may use in their work. Traditional NLE production tools such as Adobe Premiere, After Effects, and Apple's Final Cut Pro are used to create content for VJ shows. Specialist performance software is used by VJs to play back and manipulate video in real time. VJ performance software is highly diverse, and includes software which allows a computer to replace the role of an analog video mixer and output video across extended canvases composed of multiple screens or projectors. Small companies producing dedicated VJ software, such as Modul8 and Magic, give VJs a sophisticated interface for real-time processing of multiple layers of video clips combined with live camera inputs, providing a complete off-the-shelf solution so that VJs can simply load in the content and perform. Some popular titles which emerged during the 2000s include Resolume and NuVJ. Some VJs prefer to develop software themselves, specifically to suit their own performance style. Graphical programming environments such as Max/MSP/Jitter, Isadora, and Pure Data have been developed to facilitate rapid development of such custom software without needing years of coding experience.
Sample workflows
There are many types of configurations of hardware and software that a VJ may use to perform.
Research and reflective thinking
Several research projects have been dedicated to the documentation and study of VJing from a reflective and theoretical point of view. Round tables, talks, presentations and discussions are part of festivals and conferences related to new media art, such as ISEA and Ars Electronica, as well as of events specifically related to VJing, as is the case with the Mapping Festival. The exchange of ideas through dialogue has contributed to shifting the discussion from issues related to the practicalities of production to more complex ideas about process and concept. Subjects related to VJing include, but are not limited to: identity and persona (individual and collective), the moment as art, audience participation, authorship, networks, collaboration and narrative. Through collaborative projects, visual live performance has shifted toward a field of interdisciplinary practices.
Periodical publications, online and printed, have launched special issues on VJing. This is the case with the printed magazine AMinima, with a special issue on Live Cinema (which features works by VJs), and Vague Terrain (an online new media journal), with the issue The Rise of the VJ.
See also
Notes
References
Amerika, Mark (2007). Meta/Data: A Digital Poetics. Cambridge: MIT Press.
Bargsten, Joey (2011). Hybrid Forms and Syncretic Horizons. Saarbrücken: Lambert Academic Publishing.
Eskandar, Xárene (ed) (2006). vE-jA: Art + Technology of Live Audio-Video. San Francisco: h4SF.
Faulkner, Michael / D-Fuse (ed) (2006). VJ: Audio-Visual Art and VJ Culture. London: Laurence King.
Lehrer, Jonah (2008). Proust Was a Neuroscientist. New York: Mariner.
Lund, Cornelia & Lund, Holger (eds) (2009). Audio.Visual - On Visual Music and Related Media. Stuttgart: Arnoldsche Art Publishers.
Manu; Ideacritik & Velez, Paula; et al. (2009). Aether9 Communications: Proceedings. TK: Greyscale Editions. Vol. 3, Issue 1.
Spinrad, Paul (2005). The VJ Book: Inspirations and Practical Advice for Live Visuals Performance. Los Angeles: Feral House.
VJ Theory (ed) (2008). VJam Theory: Collective Writings on Realtime Visual Performance. Falmouth: Realtime Books.
External links
Visual Music Archive by Prof. Dr. Heike Sperling
VJ booking - Free VJ Portfolio Network
VJ Union Australia – Australia Wide VJ Network
VJs Mag – Audio Visual Performers Source
VJs TV by VJs Magazine – Audio Visual Performers Network
Visual music
Mass media occupations
Video art
13087
https://en.wikipedia.org/wiki/GW-BASIC
GW-BASIC
GW-BASIC is a dialect of the BASIC programming language developed by Microsoft from IBM BASICA. Functionally identical to BASICA, its BASIC interpreter is a fully self-contained executable that does not need the Cassette BASIC ROM found in the original IBM PC. It was bundled with MS-DOS operating systems on IBM PC compatibles by Microsoft. The language is suitable for simple games, business programs and the like. Since it was included with most versions of MS-DOS, it was also a low-cost way for many aspiring programmers to learn the fundamentals of computer programming. Microsoft also sold a BASIC compiler, BASCOM, compatible with GW-BASIC, for programs needing more speed. According to Mark Jones Lorenzo, given the scope of the language, "GW-BASIC is arguably the ne plus ultra of Microsoft's family of line-numbered BASICs stretching back to the Altair--and perhaps even of line-numbered BASIC in general." With the release of MS-DOS 5.0, GW-BASIC's place was taken by QBasic, a slightly abridged version of the interpreter part of the separately available QuickBASIC interpreter and compiler package.
On May 21, 2020, Microsoft released the 8088 assembler source code for GW-BASIC 1.0 on GitHub under the MIT License.
Features
IBM BASICA and GW-BASIC are largely ports of MBASIC version 5.x, but with added features specifically for the IBM PC hardware. Common features of BASIC-80 5.x and BASICA/GW-BASIC include:
WHILE...WEND loops
Variable names of up to 40 characters
OPTION BASE statement to set the starting index of array variables as either 0 or 1
Dynamic string space allocation
LINE INPUT, which allows field separator characters like the comma to be ignored
CALL statement for executing machine language routines
CHAIN and MERGE commands
Ability to save programs in either tokenized binary format or ASCII text
The ability to "crunch" program lines by omitting spaces, a common feature of earlier Microsoft BASIC implementations, was removed from BASIC-80 5.x and BASICA/GW-BASIC. BASIC-80 programs not using PEEK/POKE statements run under GW-BASIC. BASICA adds many features for the IBM PC such as sound, graphics, and memory commands. Features not present in BASIC-80 include the ability to execute the RND function with no parameters and the ability to save programs in a "protected" format, preventing them from being LISTed. BASICA also allows double-precision numbers to be used with mathematical and trigonometric functions such as COS, SIN, and ATN, which was not allowed in 8-bit versions of BASIC. This feature was normally not enabled and required the optional parameter /D at startup, i.e., GWBASIC /D. BASIC's memory footprint was slightly increased if it was used.
Microsoft did not offer a generic version of MS-DOS until v3.20 in 1986; before then, all variants of the operating system were OEM versions. Depending on the OEM, BASIC was distributed as either BASICA.EXE or GWBASIC.EXE. The former should not be confused with IBM BASICA, which always came as a .COM file. Some variants of BASIC have extra features to support a particular machine. For example, the AT&T and Tandy versions of DOS include a special GW-BASIC that supports their enhanced sound and graphics capabilities.
The initial version of GW-BASIC was the one included with Compaq DOS 1.13, released with the Compaq Portable in 1983, and was analogous to IBM BASICA 1.10. It uses CP/M-derived file control blocks for disk access and does not support subdirectories.
Later versions support subdirectories, improved graphics, and other capabilities. GW-BASIC 3.20 (1986) adds EGA graphics support (no version of BASICA or GW-BASIC had VGA support) and is the last major new version released before the product was superseded by QBasic. Buyers of Hercules Graphics Cards received a special version of GW-BASIC on the card's utility disk, called HBASIC, which adds support for its 720×348 monochrome graphics. Other versions of BASICA/GW-BASIC do not support Hercules graphics and can only display graphics on that card through the use of third-party CGA emulation, such as SIMCGA.
GW-BASIC has a command line-based integrated development environment (IDE) based on Dartmouth BASIC. Using the cursor movement keys, any line displayed on the screen can be edited. It also includes function key shortcuts at the bottom of the screen. Like other early microcomputer versions of BASIC, GW-BASIC lacks many of the structures needed for structured programming, such as local variables, and GW-BASIC programs execute relatively slowly because the language is interpreted. All program lines must be numbered; any non-numbered line is considered to be a command in direct mode, to be executed immediately. Program source files are normally saved in a binary compressed format with tokens replacing keywords, with an option to save in ASCII text form.
The GW-BASIC command-line environment has commands to RUN, LOAD, SAVE, or LIST the current program, or quit to the operating SYSTEM; these commands can also be used as program statements. There is little support for structured programming in GW-BASIC. All IF/THEN/ELSE conditional statements must be written on one line, although WHILE/WEND statements may group multiple lines. Functions can only be defined using the single-line DEF FNf(x)=<mathematical function of x> statement (e.g., DEF FNLOG(base,number)=LOG(number)/LOG(base)). The data type of a variable can be specified with a character at the end of the variable name: A$ is a string of characters, A% is an integer, etc. Groups of variables can also be set to default types based on the initial letter of their name by use of the DEFINT, DEFSTR, etc., statements. The default type for undeclared variables not identified by such typing statements is single-precision floating point (32-bit MBF).
GW-BASIC allows the use of joystick and light pen input devices. GW-BASIC can read from and write to files and COM ports; it can also do event trapping for ports. Since the cassette tape port interface of the original IBM PC was never implemented on compatibles, cassette operations are not supported. GW-BASIC can play simple music using the PLAY statement, which takes a string of notes represented in a music macro language, e.g., PLAY "edcdeeL2edfedL4c". Lower-level control is possible with the SOUND statement, which takes as arguments a frequency in hertz and a length in clock ticks, for the standard internal PC speaker in IBM machines. Consequently, sound is limited to single-channel beeps and whistles, as befits a 'business' machine. Home-based PCs like the Tandy 1000 allow up to three channels of sound for the SOUND and PLAY commands.
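A short illustrative program (a sketch written for this description, not taken from the original text) that exercises line numbering, a single-line DEF FN function, type-suffixed variables, and the PLAY music macro language:

    10 REM ILLUSTRATIVE GW-BASIC PROGRAM
    20 DEF FNLOG(B,N)=LOG(N)/LOG(B)       ' single-line user-defined function
    30 A$="GW-BASIC": A%=2                ' string ($) and integer (%) variables
    40 PRINT A$;" log base";A%;" of 8 is";FNLOG(A%,8)
    50 PLAY "edcdee"                      ' notes in the music macro language
    60 END

Typing RUN at the GW-BASIC prompt prints the computed logarithm (3) and plays a short sequence of notes on the PC speaker.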
Name
There are several theories on what the initials "GW" stand for. Greg Whitten, an early Microsoft employee who developed the standards in the company's BASIC compiler line, says Bill Gates picked the name GW-BASIC. Whitten refers to it as Gee-Whiz BASIC and is unsure if Gates named the program after him. The Microsoft User Manual from Microsoft Press also refers to it by this name. It may also have been nicknamed Gee-Whiz because of its numerous graphics commands. Other common theories as to the initials' origins include "Graphics and Windows", "Gates, William" (Microsoft's president at the time), or "Gates-Whitten" (the two main designers of the program).
See also
Microsoft Binary Format (MBF)
References
External links
GW-BASIC source code on GitHub
Classic Basic Games Page, a resource for BASIC games and other programs
Back to BASICs, another BASIC resource site
GW-BASIC User's Manual
Gary Beene's Information Center regarding BASIC, with timeline dates for DOS, Windows and BASIC dialects
GW-BASIC - Gee Whiz! Cory Smith's site devoted to GW-BASIC.
PC-BASIC - a GW-BASIC emulator for modern operating systems.
GW-BASIC – A resource for GW-BASIC, gathered from various sources.
Discontinued Microsoft BASICs
Programming languages created in 1983
BASIC interpreters
BASIC programming language family
Software using the MIT license
Assembly language software
Formerly proprietary software
Microsoft free software
Microsoft programming languages
15912060
https://en.wikipedia.org/wiki/Libusb
Libusb
libusb is a library that provides applications with access to USB devices, allowing them to control data transfer to and from USB hardware on Unix and non-Unix systems without the need for kernel-mode drivers.
Rationale
Because the Linux kernel is a monolithic type of kernel, device drivers are normally part of it. libusb instead allows an application to talk to a USB device from ordinary user space, so that no device-specific kernel-mode driver has to be written and loaded.
Availability
libusb is currently available for Linux, the BSDs, Solaris, OS X, Windows, Android, and Haiku. It is written in C.
Amongst other applications, the library is used by SANE, the Linux scanner project, in preference to the kernel scanner module, which is restricted to Linux kernel 2.4.
See also
Linux API
udev
Video4Linux
References
External links
USB Disk Security
USB
C (programming language) libraries
Free computer libraries
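As a usage sketch for the libusb article above (an illustrative example, not from the original text), the following C program uses the libusb-1.0 API to enumerate the USB devices attached to the host and print their vendor and product IDs, entirely from user space:

    #include <stdio.h>
    #include <libusb-1.0/libusb.h>

    int main(void) {
        libusb_context *ctx = NULL;
        if (libusb_init(&ctx) != 0)            /* set up a library session */
            return 1;

        libusb_device **list;
        ssize_t n = libusb_get_device_list(ctx, &list);
        for (ssize_t i = 0; i < n; i++) {
            struct libusb_device_descriptor desc;
            if (libusb_get_device_descriptor(list[i], &desc) == 0)
                printf("Bus %03d Device %03d: ID %04x:%04x\n",
                       libusb_get_bus_number(list[i]),
                       libusb_get_device_address(list[i]),
                       desc.idVendor, desc.idProduct);
        }
        libusb_free_device_list(list, 1);      /* unreference and free the list */
        libusb_exit(ctx);
        return 0;
    }

On a typical Linux system this compiles with the libusb-1.0 development package installed, e.g. cc list.c $(pkg-config --cflags --libs libusb-1.0).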
45178
https://en.wikipedia.org/wiki/Process%20%28computing%29
Process (computing)
In computing, a process is the instance of a computer program that is being executed by one or many threads. It contains the program code and its activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently.
While a computer program is a passive collection of instructions typically stored in a file on disk, a process is the execution of those instructions after being loaded from the disk into memory. Several processes may be associated with the same program; for example, opening up several instances of the same program often results in more than one process being executed.
Multitasking is a method to allow multiple processes to share processors (CPUs) and other system resources. Each CPU (core) executes a single task at a time. However, multitasking allows each processor to switch between tasks that are being executed without having to wait for each task to finish (preemption). Depending on the operating system implementation, switches could be performed when tasks initiate and wait for completion of input/output operations, when a task voluntarily yields the CPU, on hardware interrupts, and when the operating system scheduler decides that a process has expired its fair share of CPU time (e.g., by the Completely Fair Scheduler of the Linux kernel).
A common form of multitasking is provided by CPU time-sharing, which is a method for interleaving the execution of users' processes and threads, and even of independent kernel tasks – although the latter feature is feasible only in preemptive kernels such as Linux. Preemption has an important side effect for interactive processes, which are given higher priority with respect to CPU-bound processes: users are immediately assigned computing resources at the simple pressing of a key or when moving a mouse. Furthermore, applications like video and music reproduction are given some kind of real-time priority, preempting any other lower-priority process. In time-sharing systems, context switches are performed rapidly, which makes it seem like multiple processes are being executed simultaneously on the same processor. This simultaneous execution of multiple processes is called concurrency.
For security and reliability, most modern operating systems prevent direct communication between independent processes, providing strictly mediated and controlled inter-process communication functionality.
Representation
In general, a computer system process consists of (or is said to own) the following resources:
An image of the executable machine code associated with a program.
Memory (typically some region of virtual memory), which includes the executable code, process-specific data (input and output), a call stack (to keep track of active subroutines and/or other events), and a heap to hold intermediate computation data generated during run time.
Operating system descriptors of resources that are allocated to the process, such as file descriptors (Unix terminology) or handles (Windows), and data sources and sinks.
Security attributes, such as the process owner and the process' set of permissions (allowable operations).
Processor state (context), such as the content of registers and physical memory addressing. The state is typically stored in computer registers when the process is executing, and in memory otherwise.
The operating system holds most of this information about active processes in data structures called process control blocks.
Any subset of the resources, typically at least the processor state, may be associated with each of the process' threads in operating systems that support threads or child processes.
The operating system keeps its processes separate and allocates the resources they need, so that they are less likely to interfere with each other and cause system failures (e.g., deadlock or thrashing). The operating system may also provide mechanisms for inter-process communication to enable processes to interact in safe and predictable ways.
Multitasking and process management
A multitasking operating system may just switch between processes to give the appearance of many processes executing simultaneously (that is, in parallel), though in fact only one process can be executing at any one time on a single CPU (unless the CPU has multiple cores, in which case multithreading or other similar technologies can be used).
It is usual to associate a single process with a main program, and child processes with any spin-off, parallel processes, which behave like asynchronous subroutines. A process is said to own resources, of which an image of its program (in memory) is one such resource. However, in multiprocessing systems many processes may run off of, or share, the same reentrant program at the same location in memory, but each process is said to own its own image of the program.
Processes are often called "tasks" in embedded operating systems. The sense of "process" (or task) is "something that takes up time", as opposed to "memory", which is "something that takes up space". The above description applies both to processes managed by an operating system and to processes as defined by process calculi.
If a process requests something for which it must wait, it will be blocked. When the process is in the blocked state, it is eligible for swapping to disk, but this is transparent in a virtual memory system, where regions of a process's memory may really be on disk and not in main memory at any time. Note that even portions of active processes/tasks (executing programs) are eligible for swapping to disk, if the portions have not been used recently. Not all parts of an executing program and its data have to be in physical memory for the associated process to be active.
Process states
An operating system kernel that allows multitasking needs processes to have certain states. Names for these states are not standardised, but they have similar functionality. First, the process is "created" by being loaded from a secondary storage device (hard disk drive, CD-ROM, etc.) into main memory. After that the process scheduler assigns it the "waiting" state. While the process is "waiting", it waits for the scheduler to do a so-called context switch. The context switch loads the process into the processor and changes the state to "running", while the previously "running" process is stored in a "waiting" state. If a process in the "running" state needs to wait for a resource (for example, for user input or for a file to open), it is assigned the "blocked" state. The process state is changed back to "waiting" when the process no longer needs to wait (in a blocked state). Once the process finishes execution, or is terminated by the operating system, it is no longer needed: it is either removed immediately or moved to the "terminated" state, where it waits to be removed from main memory.
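As a concrete illustration of process creation and of a parent "waiting" on a child, the following C sketch (an illustrative example, not from the original article) uses the POSIX fork, exec, and waitpid calls:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();              /* create a child process */
        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        } else if (pid == 0) {
            /* child: replace its image with a new program */
            execlp("echo", "echo", "hello from the child process", (char *)NULL);
            perror("execlp");            /* reached only if exec failed */
            _exit(EXIT_FAILURE);
        } else {
            /* parent: block until the child terminates */
            int status;
            waitpid(pid, &status, 0);
            printf("child %d exited with status %d\n",
                   (int)pid, WEXITSTATUS(status));
        }
        return 0;
    }

The fork call gives the child its own copy of the parent's resources described above (memory image, descriptors, processor state), after which the two processes are scheduled independently.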
Inter-process communication

When processes need to communicate with each other they must share parts of their address spaces or use other forms of inter-process communication (IPC). For instance, in a shell pipeline the output of the first process needs to be passed to the second one, and so on; another example is a task that can be decomposed into cooperating but partially independent processes which can run at once (i.e., using concurrency, or true parallelism – the latter model is a particular case of concurrent execution and is feasible whenever enough CPU cores are available for all the processes that are ready to run).

It is even possible for two or more processes to be running on different machines that may run different operating systems (OS), in which case some mechanisms for communication and synchronization (called communications protocols for distributed computing) are needed (e.g., the Message Passing Interface, often simply called MPI).

History

By the early 1960s, computer control software had evolved from monitor control software, for example IBSYS, to executive control software. Over time, computers got faster while computer time was still neither cheap nor fully utilized; such an environment made multiprogramming possible and necessary. Multiprogramming means that several programs run concurrently. At first, more than one program ran on a single processor, as a result of the underlying uniprocessor computer architecture, and they shared scarce and limited hardware resources; consequently, the concurrency was of a serial nature. On later systems with multiple processors, multiple programs may run concurrently in parallel.

Programs consist of sequences of instructions for processors. A single processor can run only one instruction at a time: it is impossible to run more than one program at the same time. A program might need some resource, such as an input device, which has a large delay, or a program might start some slow operation, such as sending output to a printer. This would lead to the processor being "idle" (unused). To keep the processor busy at all times, the execution of such a program is halted and the operating system switches the processor to run another program. To the user, it will appear that the programs run at the same time (hence the term "parallel").

Shortly thereafter, the notion of a "program" was expanded to the notion of an "executing program and its context". The concept of a process was born, which also became necessary with the invention of re-entrant code. Threads came somewhat later. However, with the advent of concepts such as time-sharing, computer networks, and multiple-CPU shared memory computers, the old "multiprogramming" gave way to true multitasking, multiprocessing and, later, multithreading.

See also

Child process
Exit
Fork
Light-weight process
Orphan process
Parent process
Process group
Wait
Working directory
Zombie process

Notes

References

Further reading

Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau (2014). "Operating Systems: Three Easy Pieces". Arpaci-Dusseau Books. Relevant chapters: Abstraction: The Process, The Process API.
Gary D. Knott (1974). "A proposal for certain process management and intercommunication primitives". ACM SIGOPS Operating Systems Review, Volume 8, Issue 4 (October 1974), pp. 7–44.

External links

Online Resources For Process Information
Computer Process Information Database and Forum
Process Models with Process Creation & Termination Methods

Concurrent computing
Operating system technology
1519160
https://en.wikipedia.org/wiki/BioShock
BioShock
BioShock is a 2007 first-person shooter game developed by 2K Boston (later Irrational Games) and 2K Australia, and published by 2K Games. It is the first game in the BioShock series. The game's concept was developed by Irrational's creative lead, Ken Levine, and incorporates ideas from 20th-century dystopian and utopian thinkers such as Ayn Rand, George Orwell, and Aldous Huxley, as well as historical figures such as John D. Rockefeller, Jr. and Walt Disney. The game is considered a spiritual successor to the System Shock series, on which many of Irrational's team, including Levine, had worked previously.

BioShock is set in 1960. The player guides the protagonist, Jack, after his airplane crashes in the ocean near the bathysphere terminus that leads to the underwater city of Rapture. Built by the business magnate Andrew Ryan, the city was intended to be an isolated utopia, but the discovery of ADAM, a genetic material which can be used to grant superhuman powers, initiated the city's turbulent decline. Jack tries to find a way to escape, fighting through hordes of ADAM-obsessed enemies and the iconic, deadly Big Daddies, while engaging with the few sane humans that remain and eventually learning of Rapture's past. The player, as Jack, can defeat foes in several ways: by using weapons, by utilizing plasmids that give unique powers, and by turning Rapture's defenses against them.

The game was released for Microsoft Windows and Xbox 360 in August 2007; a PlayStation 3 port by Irrational, 2K Marin, 2K Australia and Digital Extremes was released in October 2008, and an OS X port by Feral Interactive in October 2009. A scaled-down mobile version, containing the first few levels of the game, was developed by IG Fun. BioShock includes elements of role-playing games, giving the player different approaches to engaging enemies, such as stealth, as well as moral choices of saving or killing characters. Additionally, the game and its biopunk theme borrow concepts from the survival horror genre, notably the Resident Evil series.

BioShock received critical acclaim and was particularly praised by critics for its morality-based storyline, immersive environments, and unique setting. It is considered to be one of the greatest video games ever made and a demonstration of video games as an art form. It received several Game of the Year awards from different media outlets, including from BAFTA, Game Informer, Spike TV, and X-Play. Since its release, a direct sequel, BioShock 2 by 2K Marin, has been released, as well as a third game, BioShock Infinite by Irrational Games. A remastered version of the original game was released on Microsoft Windows, PlayStation 4 and Xbox One in September 2016 as part of BioShock: The Collection, along with BioShock 2 and Infinite. A standalone version of BioShock Remastered was released for macOS by Feral Interactive in August 2017, and both the standalone remastered version and The Collection were released for the Nintendo Switch in May 2020.

Synopsis

Setting

BioShock takes place in Rapture, a large underwater city planned and constructed in the 1940s by individualist business magnate Andrew Ryan, who wanted to create a utopia for society's elite to flourish outside of government control and "petty morality". This philosophy resulted in remarkable advances in the arts and sciences, which included the discovery of "ADAM": a potent gene-altering substance created by a species of sea slug on the ocean floor.
ADAM soon led to the creation of "Plasmids", mutagenic serums that grant users superhuman powers like telekinesis and pyrokinesis. To protect and isolate Rapture, Ryan outlawed any contact with the surface world. As Rapture flourished, wealth disparities also grew, and the conman-turned-businessman Frank Fontaine used his influence over the disenfranchised working class to establish illegal enterprises and attain power—enough to rival even Ryan himself. Together with doctors Brigid Tenenbaum and Yi Suchong, Fontaine created his own company dedicated to researching plasmids and gene tonics. As ADAM became addictive and demand skyrocketed, Fontaine secretly mass-produced ADAM through slugs implanted in the stomachs of orphaned girls, nicknamed "Little Sisters". Fontaine was then killed in a shootout with police, and Ryan took the opportunity to seize his assets, including control of the Little Sisters.

In the months that followed, a man among the poor named Atlas rose up and began a violent revolution against Ryan, with both sides using plasmid-enhanced humans (known as "Splicers") to wage war on one another. To protect the Little Sisters, Ryan created the "Big Daddies": genetically enhanced humans surgically grafted into gigantic lumbering diving suits, designed to escort the Sisters as they scavenged ADAM from dead bodies. Tensions came to a head on New Year's Eve of 1958, when Atlas ordered an all-out assault on Ryan and his supporters. The conflict turned Rapture into a war-torn, crumbling dystopia, resulting in societal collapse, countless deaths, many Splicers becoming disfigured and insane from ADAM abuse, and the few sane survivors barricading themselves away from the chaos.

Plot

In 1960, the protagonist, Jack, is a passenger on a plane that goes down in the Atlantic Ocean. As the only survivor, Jack makes his way to a nearby lighthouse that houses a bathysphere terminal, which takes him to Rapture. Jack is contacted by Atlas via radio and is guided through the perils of the ruined city. Atlas requests Jack's help in stopping Ryan, directing him to a docked bathysphere where he says Ryan has trapped his family. When Jack first encounters the Little Sisters, Atlas urges him to kill them to harvest their ADAM, but Dr. Tenenbaum intervenes and insists Jack should spare them, providing him with a plasmid that can remove the sea slug from their bodies and free them of their brainwashing. Jack eventually works his way to the bathysphere, but Ryan destroys it before Jack can reach it. Infuriated, Atlas has Jack fight his way through various districts towards Ryan's lair, forcing Jack to contend with Ryan's deranged allies along the way, such as the mad surgical doctor J.S. Steinman and the insane former musician and art collector Sander Cohen.

Eventually, Jack enters Ryan's office, where Ryan is casually playing golf and explains Jack's true origins. Through his dialogue and the evidence gathered up to this point, it is revealed that Jack is actually Ryan's illegitimate son, sold by Ryan's mistress as an embryo to Fontaine, who then had Tenenbaum and Suchong rapidly age Jack into adulthood and turn him into an obedient assassin, capable of accessing any of Rapture's systems locked to Ryan's genetic code and thus ensuring Fontaine's victory in the war. Jack was then smuggled to the surface with false memories of a normal life, waiting to be called back to Rapture when needed.
Ryan suddenly takes control of Jack's actions by asking "Would you kindly?"; Jack realizes this phrase has preceded many of Atlas' commands as a hypnotic trigger, forcing him to follow any order without question. Jack also realizes he was responsible for the plane crash, having read a letter onboard containing the same trigger phrase. Ryan chooses to die by his own will, and compels Jack to beat him to death with a golf club. Atlas then reveals himself to be Fontaine, having faked his death and used "Atlas" as an alias to hide his identity while providing a heroic figure for the poor to rally behind for his own ends. With Ryan finally dead, Fontaine takes control of Ryan's systems and leaves Jack to be killed by hostile security drones.

Jack is saved by Dr. Tenenbaum and the Little Sisters he had cured, who help him remove Fontaine's mental conditioning, including a trigger that would have stopped Jack's heart. Jack pursues Fontaine to his lair, where Fontaine transforms himself into a blue-skinned humanoid creature by injecting himself with a large supply of ADAM. The Little Sisters aid Jack in draining the ADAM from Fontaine's body, and eventually kill him.

The ending depends on how the player interacted with the Little Sisters:
If the player rescues all of the Little Sisters, Jack takes them back to the surface with him and adopts five of them as his daughters, and Tenenbaum happily narrates how they go on to live full lives under his care, eventually surrounding him on his deathbed. This ending is considered canon in BioShock Infinite: Burial at Sea.
If the player harvests one or more Little Sisters, Jack turns on the Little Sisters to harvest their ADAM. Tenenbaum sadly narrates what occurs, condemning Jack and his actions. A US Navy submarine then comes across the wreckage of the plane and finds itself suddenly surrounded by bathyspheres containing Splicers, who attack the crew and take control of it. The submarine is revealed to be carrying nuclear missiles, with Tenenbaum claiming that Jack has now "stolen the terrible secrets of the world". The more Little Sisters Jack harvests throughout the game, the harsher and more furious Tenenbaum's narration becomes.

Gameplay

BioShock is a first-person shooter with role-playing game customization and stealth elements, and is similar to System Shock 2. The player takes the role of Jack as he is guided through Rapture towards various objectives. The player collects multiple weapons and plasmids as they work their way through enemy forces. The player can switch between one active weapon and one active plasmid at any time, allowing them to find combination attacks that can be effective against certain enemies, such as first shocking a Splicer and then striking them down with a wrench. Weapons are limited by ammunition that the player collects; many weapons have secondary ammo types that can be used instead for additional benefits, such as bullets that inflict fire damage. Plasmid use consumes a serum called EVE, which can be restored using EVE syringes collected by the player. The player has a health meter that decreases when they take damage. The player can restore their health with medical packs found throughout Rapture. If the player's health drops to zero, they will be regenerated at the last Vita-Chamber that they passed, with limited amounts of health and EVE. A patch for the game allows players to disable these Vita-Chambers, requiring players to restart a saved game if the character dies.
The game provides several options for players to face challenges. In addition to direct combat, the player can use plasmids to lure enemies into traps or to turn enemies against each other, or employ stealth tactics to avoid detection by hostiles, including the security systems and turrets. The player can hack into any of Rapture's automated systems; the hacking process is done via a mini-game similar to Pipe Mania, in which the player must connect two points on opposite sides of a grid with a limited set of piping within a fixed amount of time, with failure to complete in time costing health and potentially sounding alarms. Early in the game, the player is given a research camera; by taking photographs of enemies, the player cumulatively gains knowledge about individual foes, which translates into attack boosts and other benefits when facing that enemy type in the future.

The player collects money by exploring Rapture and from the bodies of defeated foes; this money can be used at vending machines to restock on ammunition, health, EVE, and other items. Like security cameras, vending machines can also be hacked, reducing the costs of items bought from them. The player also receives rewards in the form of ADAM from completing some tasks, as well as from either saving or killing the Little Sisters after defeating their Big Daddy guardian. ADAM is used to purchase new plasmids from Gatherer's Garden machines scattered around Rapture. In addition to plasmids, the player also collects and buys tonics that provide passive bonuses, such as increasing Jack's strength, using EVE more efficiently, or making Jack more resistant to damage. The player can only have a limited number of plasmids and tonics active at any time, and can swap between the various plasmids and tonics at certain stations located throughout Rapture.

Development

Game design

Lead developer Ken Levine had created Irrational Games in 1997 out of former members of Looking Glass Studios. Their first game was System Shock 2, a sequel to Looking Glass's System Shock; it was met with critical success, though it did not prove a financial one. Levine attempted to pitch a sequel to System Shock 2 to Electronic Arts, but the publisher rejected the idea based on the poor performance of the earlier game. Irrational proceeded to develop other games, including Freedom Force, Tribes: Vengeance, the canceled title Deep Cover, and the completed The Lost, which was never released due to legal complications.

At this point, Levine wanted to return to a game in the same style as System Shock 2: a more free-form game with a strong narrative. In 2002, the team came up with a core gameplay mechanic based on three groups of forces: drones that would carry a desirable resource, protectors that would guard the drones, and harvesters that would attempt to take the resource from the drones. These would eventually become the Little Sisters, Big Daddies, and Splicers in the final game, but at the time of the concept there was no set theme. They began working on creating a setting for the game so they could pitch the idea to game publishers. A 2002 demonstration version was based on the Unreal Engine 2 for the first Xbox. This demonstration was primarily set aboard a space station overtaken by genetically mutated monsters; the main character was Carlos Cuello, a "cult deprogrammer"—a person charged with rescuing someone from a cult, and mentally and psychologically readjusting that person to a normal life.
Ken Levine cites an example of what a cult deprogrammer does: "[There are] people who hired people to [for example] deprogram their daughter who had been in a lesbian relationship. They kidnap her and reprogram her, and it was a really dark person, and that was the [kind of] character that you were." This story would have been more political in nature, with the character hired by a senator. The team collectively agreed that this game was not what they had set out to make, and they were having trouble finding a publisher. They considered ending development, but as news about their efforts to create a spiritual successor to System Shock 2 began to appear in gaming magazines and websites, the team opted to continue, performing a full revamp of the game.

By 2004, 2K Games, a subsidiary of Take-Two Interactive, offered to publish the game primarily on the basis of the drone/protector/harvester concept, giving Irrational the freedom to develop the story and setting. By this point, the story and setting had changed significantly, taking place in an abandoned World War II-era Nazi laboratory that had been recently unearthed by 21st-century scientists. Over the decades, the genetic experiments within the labs had gradually formed themselves into an ecosystem centered on the three groups. This version of the game included many of the gameplay elements that would remain in the final BioShock, themselves influenced by concepts from System Shock 2. These elements included the use of plasmids and EVE, the need to use stealth or other options to deal with automated security systems, direction through the environment from a non-player character relayed over a radio, and story elements delivered through audio recordings and "ghosts" of deceased characters.

While the gameplay of the 2004 reveal was similar to what resulted in the released version of BioShock, both design and story changed, consistent with what Levine says was then-Irrational Games' guiding principle of putting game design first. These changes were also partly the result of internal strife and a lack of communication between the various teams within Irrational, a consequence of having to expand the team from six to sixty members for the scope of the project. The environment was considered bland, and the team's artists had difficulty coming up with a consistent vision that met the level designers' goals. A critical juncture was a short experiment performed by level designer Jean Paul LeBreton and artist Hoagy de la Plante, who set themselves aside to co-develop a level that would later become part of the "Tea Garden" area in the released game, which Levine would later use as a prime example of a "great BioShock space", emphasizing the need for departments to work together. Levine also felt that the cyberpunk theme had been overplayed, considering their earlier rejection from Electronic Arts for System Shock 3, which led towards the underwater setting of Rapture.

The game's lead level designer was Bill Gardner. He cited Capcom's survival horror series Resident Evil as a significant influence on BioShock, stating there are "all these nods and all these little elements that I think you can see where Resident Evil inspired us".
The team was particularly influenced by Resident Evil 4, including its approach to the environments, combat, and tools, its game design and tactical elements, its "gameplay fuelled storytelling" and inventory system, and its opening village level in terms of how it "handled the sandbox nature of the combat" and in terms of "the environment".

Story and theme development

The thematic core of BioShock was born when Levine was walking at Rockefeller Center near the GE Building in New York City. He saw the uniqueness of the art deco styling of the building, along with imagery around it such as the nearby statue of Atlas, and recognized that these were spaces that had not been experienced in the first-person shooter genre. The history of the Rockefeller Center also fed into the story concept; Levine noted how the Center had started construction prior to the Great Depression, and how, when the primary financiers had pulled out, John D. Rockefeller, Jr. backed the remaining construction to complete the project himself; as stated by Edge magazine, "a great man building an architectural triumph against all the odds". The history of Rapture and the character of Andrew Ryan are loosely based on Rockefeller's story. Levine also considered that many of the characters of Rapture were people who had been oppressed earlier in their lives and, now free of that oppression, had turned around to become the oppressors, a pattern he felt resonated throughout human history.

At this point in the development, the backdrop of Rapture had been fleshed out, but the team had yet to decide how to represent the drones, protectors, and harvesters from their original game idea. The Big Daddy concept as the protector class was developed early in the process, but the team had yet to reach a satisfying design for the drones, having tried several possible designs, including bugs and dogs in wheelchairs. The team wanted the player to care for the drones in some way and create pathos for these characters. The idea of using little girls came out of brainstorming, but was controversial and shocking within the team at first, as members recognized that the girls could easily be killed and make the game more horrific in the style of Night Trap.

However, as Levine worked on the story, he started to incorporate the ideas of dystopian and utopian thinkers of the twentieth century, including Ayn Rand, Aldous Huxley, and George Orwell, and considered their ideas "fascinating". He brought in the ideas of Objectivism that Rand primarily outlined in the book Atlas Shrugged, that man should be driven by selfishness and not altruism, and used this to inform the philosophy behind the city of Rapture and Andrew Ryan's work, viewing these ideas as quite ludicrous and primed to be applied to an antagonist, tying in with his previous observations on Rockefeller and his writings. This was extended to the use of the little girls as drones (now Little Sisters), particularly the question of whether the player should try to save the girls or harvest the ADAM for their own benefit. 2K Games expressed concern about the initial mechanic of the Little Sisters, where the player would actively prey on a Little Sister, which would have alerted a Big Daddy and set up the fight with the player. This approach did not sit well with Levine, and 2K Games asserted that they would not ship a game "where the player gets punished for doing the right thing".
They altered this approach so that the Little Sisters would be invulnerable until the player had dealt with their Big Daddy, though LeBreton considered this "a massive kludge" in the game's fiction. The idea of creating the Little Sisters and presenting the player with this choice became a critical part of the game's appeal to the broader gaming market, although it was met with criticism from some outlets. Levine had desired to have only one ending to the game, something that would have left the fate of the characters "much more ambiguous", but publisher pressure directed them to craft multiple endings depending on the choice of harvesting Little Sisters. Levine noted that "it was never my intention to do two endings for the game. It sort of came very late and it was something that was requested by somebody up the food chain from me."

Other elements came into the story design. Levine had an interest in "stem cell research and the moral issues that go around [it]". Regarding artistic influences, Levine cited the books Nineteen Eighty-Four and Logan's Run, which represent societies that have "really interesting ideas screwed up by the fact that we're people". The idea of the mind control used on Jack was offered by LeBreton, inspired by films like The Manchurian Candidate, as a means to provide a better reason to limit the player's actions, as opposed to the traditional use of locked doors to prevent players from exploring areas they should not. The team had agreed that Jack's actions would be controlled by a key phrase, but struggled to come up with one that would not reveal Atlas' true nature. Levine happened upon "Would you kindly" after working on marketing materials for the game that asked the reader hypothetical questions such as "Would you kill people, even innocent people, to survive?", later working the phrase into the first script for the game.

Numerous tensions within the team and with publisher 2K Games continued during the development process. According to LeBreton, Levine was distrustful of some of the more egotistical newer hires and often argued with them to enforce his vision of BioShock. 2K Games was concerned with the growing budget for the title, and told Levine to market it more as a first-person shooter rather than the first-person shooter/role-playing game hybrid they had set out to make. Near the targeted release date, Levine ordered the team into round-the-clock development, creating more strife within the team. Paul Hellquist, the game's lead designer, was often omitted from key design meetings, which he later recounted was due to his contrary nature toward Levine, questioning several of his choices; he channeled his frustration into the design of the Medical Pavilion level, which he was in charge of at the time. Near the anticipated completion date, 2K decided to give Irrational another three months to polish up the game, extending the crunch time the studio was already under. Even so, some hard-to-discover bugs and issues remained in the game. One such case was an apparent Easter egg found in the remastered version in 2018, where, under certain conditions, the player can end up looking at an object with the description "Paul Hellquist did not do his Job".
Both Levine and Chris Kline, the game's lead programmer, confirmed the message was a cheeky jab at Hellquist left as a debugging message; Kline and Hellquist were developing the systems to show descriptions of objects to players, and Hellquist offered to complete all the necessary in-game descriptions. To jokingly prod Hellquist along, Kline put "Paul Hellquist did not do his Job" as the default message within the executable code. While the message was changed for the original release, the remastered version likely used a pre-final version of the BioShock code, according to Kline. Hellquist took the revelation in good humor and tweeted that other Easter eggs should have been added to the game to display, "If you are enjoying this, Paul Hellquist did his job."

A critical playtest of the game occurred in January 2007, where initial feedback from the players was mostly negative, including complaints that the setting was too dark, that players had no idea where to go, and that they distrusted Atlas, who at the time was voiced in a southern drawl described as a "lecherous Colonel Sanders". The team took this criticism to heart, revamping several elements during those extra months, such as improving the lighting, implementing a quest marker, and using an Irish voice for Atlas to make him sound more trustworthy. During another late-stage playtest, with the title "ninety-nine percent" complete according to Levine, the playtesters did not like the game at all, as they felt no connection to the player-character Jack, and the person overseeing the tests told Levine that the game was likely to be a failure. At this point, BioShock did not have many cutscenes, as Levine was ideologically opposed to them. However, the following day, Levine and the lead group came up with a "cheap" way to correct this by adding the initial cutscene within the plane and the subsequent plane crash, as this helped to set the time frame, place the player in the role of the character, and allude to the "would you kindly" line later in the game. Levine likened this approach to the initial aircraft crash at the onset of the television show Lost, which quickly establishes character and setting.

The game was successfully released in August 2007 with a final budget of about $25 million. In a 2016 interview, Levine said that the game could have used about six more months of development to improve the gun combat system and fix lag issues that occurred during the final boss fight. Despite the critical success of the title, many of those on the team left Irrational to pursue other projects, owing to the strife that had occurred late in development.

In a 2018 interview, Levine said he had come to recognize that BioShock reflected several Jewish themes, though this was not intentional. Levine, who considers himself culturally Jewish but does not follow Judaism, grew up in New Jersey but spent much of his childhood with his father, who worked in Manhattan's Diamond District, and visiting his grandparents in Queens, a neighborhood with a large proportion of Eastern European immigrants. Thus, Levine was exposed to much of the Jewish culture that flourished in the area following World War II and understood some of the anxiety Jewish people faced.
In the 2018 interview, Levine recognized that several of the characters, including Andrew Ryan (who was inspired by Ayn Rand, herself Jewish), Sander Cohen, and Brigid Tenenbaum, were all written as Jewish, and all seeking to escape a world they felt they did not fit into by going to Rapture. Levine said: "There's literal displacement and then there's a feeling of not fitting in, of 'I don't really belong here'. I think Jews are always going to feel a little bit like they don't belong wherever they are. There's always that 'what if we have to flee' mentality."

Game engine

BioShock uses a heavily modified Unreal Engine 2.5 with some of the advanced technologies from Unreal Engine 3. Irrational had previous experience with modifying and expanding the Unreal Engine in SWAT 4, and continued this advancement of the engine within BioShock. One significant improvement they added was improved water effects, given the nature of the game's setting; the studio hired a programmer and an artist to focus on the water effects. This graphical enhancement was lauded by critics, with GameSpot saying, "Whether it's standing water on the floor or sea water rushing in after an explosion, it will blow you away every time you see it." BioShock also uses the Havok physics engine, which allows for enhanced in-game physics, the integration of ragdoll physics, and more lifelike movement by elements of the environment. The Windows version was built to work in both Direct3D 10 (DirectX 10) and DirectX 9, with the DirectX 10 version supporting additional water and particle effects.

Soundtrack

BioShock contains both licensed music and an original score. The licensed music, from the 1930s, 1940s, and 1950s, can be heard playing on phonographs throughout Rapture; in total, 30 licensed songs can be heard throughout the game. The original score was composed by Garry Schyman, who wrote his pieces to blend with the chosen licensed music and keep the same feel, while also trying to bring out something "eerie, frightening and at times beautiful" to mesh well with Rapture's environments.

2K Games released an orchestral score soundtrack on its official homepage on August 24, 2007. Available in MP3 format, the score contains 12 of the 22 tracks from the game. The Limited Edition version of the game came with The Rapture EP, remixes by Moby and Oscar The Punk. The three remixed tracks on the CD are "Beyond the Sea", "God Bless the Child" and "Wild Little Sisters"; the original recordings of these songs appear in the game. The BioShock score was released on a vinyl LP with the BioShock 2 Special Edition.

Release and promotion

An initial demo of the game was made available in August 2007 for Xbox 360 and Microsoft Windows. This demo included cutscenes to introduce the player to Rapture, the game's tutorial section, and its first levels; the demo also included weapons, plasmids, and tonics that would otherwise be introduced later in the full title, so as to give the player more of the features that would be found in the published game. The Xbox 360 demo was the fastest demo at that time to reach one million downloads on the Xbox Live service. The full game was released for these platforms on August 21, 2007. The first patch for the Xbox 360 version was released about two weeks after release to fix some of the game stability issues players had reported.
The patch was found to introduce more problems to the game for some users, including occasional freezes, bad framerates, and audio-related issues, though methods to resolve these issues through the console's cache system were outlined by Irrational Games. In December 2007, a common patch was released for both the Xbox 360 and Windows versions. The patch included additional content such as new plasmids, new achievements for the Xbox 360 version, and additional graphics settings to address some of the field-of-view issues identified by players (see below). The patch also added an option to disable the use of Vita-Chambers, a feature requested by players to make the game more challenging, as well as an achievement for completing the game at its hardest setting without using a Vita-Chamber.

Ports

In an August 2007 interview, when asked about the possibility of a PlayStation 3 version of BioShock, Ken Levine stated only that there was "no PS3 development going on" at the time; however, on May 28, 2008, 2K Games confirmed that a PlayStation 3 version of the game was in development by 2K Marin, and it was released on October 17, 2008. On July 3, 2008, 2K Games announced a partnership with Digital Extremes and said that the PlayStation 3 version was being developed by 2K Marin, 2K Boston, 2K Australia, and Digital Extremes. Jordan Thomas was the director for the PlayStation 3 version. While there were no graphical improvements over the original Xbox 360 version, the PlayStation 3 version offered the widescreen option called "horizontal plus", introduced via a patch on the 360 version, and its cutscene videos were of a much higher resolution than in the DVD version. Additional add-on content was also released exclusively for the PlayStation 3 version. One addition was "Survivor Mode", in which the enemies were made tougher and Vita-Chambers provided less of a health boost when used, forcing the player to be more creative in approaching foes and to rely more on the less-used plasmids in the game. BioShock also supports Trophies and PlayStation Home. A demo version was released on the PlayStation Store on October 2, 2008. An update for the PlayStation 3 version was released on November 13, 2008, to fix some graphical problems and occasions where users experienced a hang and were forced to reset the console. This update also incorporated the "Challenge Room" and "New Game Plus" features.

A port to OS X systems was made by Feral Interactive and released in October 2009.

In early 2008, IG Fun secured the rights to develop and publish a mobile phone version of BioShock. This version was developed as a top-down, two-dimensional platformer that attempted to recreate most of the plot and game elements of the original title; IG Fun worked with Irrational to determine the critical story elements they wanted to keep in the game. IG Fun recognized they would not be able to include the full storyline within a single mobile title, and so planned to split the title into three "episodes"; only the first episode was released. Another mobile port, known as BioShock 3D, was developed by Tridev and released in 2010. Several parts of the game were reduced to single-image graphics, and the main gameplay engine had to use low-resolution, low-polygon models due to the limitations of mobile phones at the time of its release. A port to iOS devices, done by the 2K China studio, was released on August 27, 2014.
The iOS version is content complete and functionally equivalent to the original Xbox 360 and Windows versions, featuring either touch-screen virtual gamepad controls or the use of a Bluetooth-enabled controller, and a graphics engine optimized for iOS devices. The game was later delisted from the App Store in September 2015; it had become unplayable for many who upgraded to iOS 8.4 on their devices, and while a patch had been discussed, a 2K representative stated that the decision to remove the game came from the developer. 2K later clarified that it was working on resolving the game's compatibility issues with the new firmware and would re-release the title once that was completed. However, by January 2017, 2K officially stated that it would no longer work on supporting the game's compatibility with newer iOS systems.

Reception

Critical response

BioShock has received "universal acclaim", according to review aggregator Metacritic, with the game receiving an average review score of 96/100 for Xbox 360 and Microsoft Windows, and 94/100 for PlayStation 3. It is one of the highest-rated games on Metacritic, tied with several other games for the fourth-highest aggregate score.

Mainstream press reviews have praised the immersive qualities of the game and its political dimension. The Boston Globe described it as "a beautiful, brutal, and disquieting computer game ... one of the best in years", and compared the game to Whittaker Chambers' 1957 riposte to Atlas Shrugged, "Big Sister Is Watching You". Wired also mentioned the Ayn Rand connection (Andrew Ryan being a partial anagram of her name) in a report on the game which featured a brief interview with Levine. The Chicago Sun-Times review said "I never once thought anyone would be able to create an engaging and entertaining video game around the fiction and philosophy of Ayn Rand, but that is essentially what 2K Games has done ... the rare, mature video game that succeeds in making you think while you play". The Los Angeles Times review concluded, "Sure, it's fun to play, looks spectacular and is easy to control. But it also does something no other game has done to date: It really makes you feel." The New York Times reviewer described it as "intelligent, gorgeous, occasionally frightening" and added, "Anchored by its provocative, morality-based story line, sumptuous art direction and superb voice acting, BioShock can also hold its head high among the best games ever made."

GameSpy praised BioShock's "inescapable atmosphere", and Official Xbox Magazine lauded its "inconceivably great plot" and "stunning soundtrack and audio effects". The gameplay and combat system have been praised for being smooth and open-ended, and elements of the graphics, such as the water, were commended for their quality. It has been noted that the combination of the game's elements "straddles so many entertainment art forms so expertly that it's the best demonstration yet how flexible this medium can be. It's no longer just another shooter wrapped up in a pretty game engine, but a story that exists and unfolds inside the most convincing and elaborate and artistic game world ever conceived."

Reviewers did highlight a few negative issues in BioShock, however. The recovery system involving "Vita-Chambers", which revive a defeated player at half health but do not alter the enemies' health, makes it possible to wear down enemies through sheer perseverance, and was criticized as one of the most significant flaws in the gameplay.
IGN noted that both the controls and graphics of the Xbox 360 version are inferior to those of the PC version: switching between weapons or plasmids is easier using the PC's mouse than the 360's radial menu, and the PC graphics are slightly better thanks to higher resolutions. The game has been touted as a hybrid first-person shooter, but two reviewers found advances over comparable games lacking, both in the protagonist and in the challenges he faces. Some reviewers also found the combat behavior of the Splicers lacking in diversity (and their AI behavior poorly executed), and the moral choice too "black and white" to be interesting. Some reviewers and essayists, such as Jonathan Blow, also claimed that the "moral choice" the game offered to the player (saving or harvesting the Little Sisters) was flawed because, to them, it had no real impact on the game, which ultimately led them to think that the Sisters were just mechanics of no real importance. Daniel Friedman of Polygon concurred with Blow, noting that the player loses only 10% of the possible ADAM rewards for saving the Little Sisters rather than killing them, and felt that this would have been better instituted as part of the game's difficulty mechanics. Former LucasArts developer Clint Hocking wrote a noted essay claiming that BioShock exhibited "ludonarrative dissonance" between its story and mechanics: while he saw the story as advocating selflessness in helping others, its gameplay encourages what he views as selfishness by preying on Little Sisters.

Awards

At E3 2006, BioShock was given several "Games of the Show" awards from various online gaming sites, including GameSpot, IGN, GameSpy, and GameTrailers' Trailer of the Year. BioShock received an award for Best Xbox 360 Game at the 2007 Leipzig Games Convention. After the game's release, the 2007 Spike TV Video Game Awards selected BioShock as Game of the Year, Best Xbox 360 Game, and Best Original Score, and nominated it for four further awards: Best Shooter, Best Graphics, Best PC Game, and Best Soundtrack. The game also won the 2007 BAFTA "Best Game" award. X-Play selected it as "Game of the Year", "Best Original Soundtrack", "Best Writing/Story", and "Best Art Direction". Game Informer named BioShock its Game of the Year for 2007. At IGN's "Best of 2007", BioShock was nominated for Game of the Year 2007 and won the awards for PC Game of the Year, Best Artistic Design, and Best Use of Sound. GameSpy chose it as the third-best game of the year and gave BioShock the awards for Best Sound, Story, and Art Direction. GameSpot awarded the game Best Story, while GamePro gave BioShock the Best Story, Best Xbox 360 Game, and Best Single-Player Shooter awards. BioShock won the "Best Visual Art", "Best Writing", and "Best Audio" awards at the 2008 Game Developers Choice Awards. Guinness World Records awarded the game a record for "Most Popular Xbox Live Demo" in the Guinness World Records: Gamer's Edition 2008.

BioShock is ranked first on Game Informer's list of The Top 10 Video Game Openings. GamesRadar placed BioShock as the 12th-best game of all time. In 2011, BioShock was awarded the number 1 spot in GameTrailers' "Top 100 Video Game Trailers of All Time", for submerging the viewer into the BioShock universe and for its enduring impact. In August 2012, IGN gave it the top spot on their list of the Top 25 Modern PC Games, a ranking of the best PC games released since 2006. In November 2012, Time named it one of the 100 greatest video games of all time.
In July 2015, the game placed 9th on USgamer's The 15 Best Games Since 2000 list.

Sales

The Xbox 360 version was the third-best-selling game of August 2007, with 490,900 copies. The Wall Street Journal reported that shares in Take-Two Interactive "soared nearly 20%" in the week following overwhelmingly favorable early reviews of the game. Take-Two Interactive announced that by June 5, 2008, over 2.2 million copies of BioShock had been shipped. In a June 10, 2008 interview, Roy Taylor, Nvidia's VP of Content Business Development, stated that the PC version had sold over one million copies. According to Take-Two Interactive's chairman Strauss Zelnick, the game had sold around 3 million copies by June 2009. By March 2010, BioShock had sold 4 million copies.

PC technical issues and DRM

The initial Windows release was criticized by players for several perceived shortcomings. The game shipped with SecuROM copy protection that required activation from 2K Games' servers over the Internet; the unavailability of these servers was reported as the reason for the cancellation of the game's midnight release in Australia. Players found that SecuROM limited the number of times the game could be activated to two; user feedback led 2K Games to increase the activation count to five, and later to offer a tool that allowed users to revoke previous activations on their own. Ultimately, 2K Games removed the activation limit, though retail versions of the game still required the activation process. Levine admitted that their initial approach to the activation process was flawed, harming their reputation during the launch period.

The SecuROM software also caused some virus scanners and malware detectors to believe the software was malicious. 2K Games assured players that the software installation process did not install any malicious code or rootkit. However, players observed that some of the SecuROM software was not entirely removed when the game was uninstalled.

Some of the graphics capabilities of BioShock were also criticized by players. The initial release of the game was found to cut off the top and bottom of the field of view in order to fit widescreen monitors, resulting in less vertical view rather than more horizontal view compared to 4:3 monitors, conflicting with original reports from a developer on how widescreen would be handled. 2K Games later stated that the choice of the field of view was a design decision made during development. Irrational included an option for "Horizontal FOV Lock" in the December 2007 patch that allows widescreen users a wider field of view, without cutting anything off the image vertically. BioShock was also criticized for not supporting pixel shader 2.0b video cards (such as the Radeon X800/X850), which were considered high-end graphics cards in 2004–2005 and accounted for about 24% of surveyed hardware collected through Valve's Steam platform at the time of BioShock's release.

On July 8, 2014, 2K Games released a DRM-free version of BioShock in the Humble 2K Bundle, which was then re-released on the Humble Store. On December 17, 2018, the BioShock and BioShock 2 remasters were released DRM-free on GOG.

Legacy

BioShock has received praise for its artistic style and compelling storytelling. In their book Digital Culture: Understanding New Media, Glen Creeber and Royston Martin perform a case study of BioShock as a critical analysis of video games as an artistic medium. They praised the game for its visuals, sound, and ability to engage the player in the story.
They viewed BioShock as a sign of the "coming of age" of video games as an artistic medium. John Lanchester of the London Review of Books recognized BioShock as one of the first video games to break into mainstream media coverage as a work of art on the strength of its narrative aspects, whereas previously video games had failed to enter the "cultural discourse" or were covered only for the moral controversies they created. Peter Suderman, writing for Vox in 2016, said that BioShock was the first game to demonstrate that video games could be works of art, particularly highlighting how the game plays on the theme of giving the illusion of individual control. In February 2011, the Smithsonian Institution announced it would hold an exhibit dedicated to the art of video games. Several games were chosen by the Smithsonian's curators; when the public voted for additional games they felt deserved to be included in the exhibition, BioShock was among the winners.

The game's plot twist, in which the player discovers that the player-character Jack has been coerced into events by the trigger phrase "Would you kindly...", is considered one of the strongest narrative elements of recent games, in part because it subverted the expectation that the player has control and influence on the game. In homage to BioShock, Black Mirror's video game-centric episode "Playtest" includes the phrase.

Related media

Sequels

In response to the game's high sales and critical acclaim, Take-Two Interactive chairman Strauss Zelnick revealed in a post-earnings report call that the company now considered the game part of a franchise. He also speculated that any follow-ups would mimic the release cadence of Grand Theft Auto, with a new release expected every two years. 2K's president Christoph Hartmann stated that BioShock could have five sequels, comparing the franchise to the Star Wars movies.

BioShock 2 was announced in 2008, with its development led by 2K Marin. Levine stated that Irrational (then 2K Boston) was not involved in the game's sequel because they wanted to "swing for the fences" and try to come up with something "very, very different", which was later revealed as BioShock Infinite. BioShock 3 was also announced, with its release assumed likely to coincide with the BioShock film. BioShock 2 takes place about ten years after the events of the first game. The player assumes the role of Subject Delta, a precursor of the Big Daddies, who must search the fallen city of Rapture for his former Little Sister, Eleanor. BioShock 2 was released for Windows PC, Mac, Xbox 360, and PlayStation 3 worldwide on February 9, 2010.

While BioShock Infinite, developed by Irrational Games and released in 2013, shares the name and many gameplay concepts with BioShock, it is not a sequel or prequel in story, instead taking place aboard the collapsing air-city of Columbia in the year 1912, and following former Pinkerton agent Booker DeWitt as he tries to rescue a woman named Elizabeth from the dystopia it has become. Infinite involves the possibilities of multiple universes, and one scene during the game takes place at the lighthouse and bathysphere terminus of Rapture as part of this exploration, though no direct canonical connection is given in the main game. The episodic expansion, Burial at Sea, takes place in Rapture in 1959, before the war between Atlas and Ryan, while continuing the story of Booker and Elizabeth.
This content links the two stories while expanding on the causes and behind-the-scenes events alluded to by the in-game background from BioShock. After completing BioShock Infinite and its expansion, Levine announced that he was restructuring Irrational Games to focus on smaller, narrative-driven titles. 2K Games continues to hold the BioShock intellectual property and plans to continue to develop games in the series, considering the framework set by Levine and his team a "rich creative canvas" for more stories.

Limited edition

Following the creation of a fan petition for a special edition, Take-Two Interactive stated that it would publish a special edition of BioShock only if the petition received 5,000 signatures; this number was reached after just five hours. Subsequently, a poll was posted on the 2K Games-operated Cult of Rapture community website in which visitors could vote on the features they would most like to see in a special edition; the company stated that developers would take this poll into serious consideration. To determine what artwork would be used for the Limited Edition cover, 2K Games ran a contest, with the winning entry provided by Crystal Clear Art's owner and graphic designer Adam Meyer. On April 23, 2007, the Cult of Rapture website confirmed that the Limited Collector's Edition would include a Big Daddy figurine (many of which were damaged due to a dropped shipping container; a replacement initiative was put in place), a "Making Of" DVD, and a soundtrack CD. Before the special edition was released, the proposed soundtrack CD was replaced with The Rapture EP.

Remastered edition

BioShock was remastered to support 1080p and higher framerates as part of the 2016 BioShock: The Collection release for Windows, PlayStation 4, and Xbox One systems. The remastering was performed by Blind Squirrel Games and published by 2K Games. A standalone version of BioShock Remastered was released for macOS by Feral Interactive on August 22, 2017. The standalone remastered version, along with The Collection, was released for the Nintendo Switch on May 29, 2020.

Printed media

BioShock: Breaking the Mold, a book containing artwork from the game, was released by 2K Games on August 13, 2007. It is available in both low and high resolution, in PDF format, from 2K Games' official website. Until October 1, 2007, 2K Games was sending a printed version of the book to the owners of the collector's edition whose Big Daddy figurines had been broken, as compensation for the time it took to replace them. On October 31, 2008, the winners of the "Breaking the Mold: Developers Edition Artbook Cover Contest" were announced on cultofrapture.com.

A prequel novel, titled BioShock: Rapture, written by John Shirley, was published on July 19, 2011. The book details the construction of Rapture and the events leading to its demise, following multiple BioShock characters.

Canceled Universal film adaptation

Industry rumors after the game's release suggested a film adaptation of the game would be made, utilizing green screen filming techniques similar to those in the movie 300 to recreate the environments of Rapture. On May 9, 2008, Take-Two Interactive announced a deal with Universal Studios to produce a BioShock movie, to be directed by Gore Verbinski and written by John Logan. The film was expected to be released in 2010, but was put on hold due to budget concerns.
On August 24, 2009, it was revealed that Verbinski had dropped out of the project due to the studio's decision to film overseas to keep the budget under control; Verbinski reportedly felt this would have hindered his work on Rango. Juan Carlos Fresnadillo was then in talks to direct, with Verbinski as producer. In January 2010, the project was in the pre-production stage, with Fresnadillo and Braden Lynch, a voice artist from BioShock 2, both working on the film. By July, the film was facing budget issues, but producer Verbinski said they were working them out. He also said the film would be a hard R.

Ken Levine, during an interview on August 30, 2010, said: "I will say that it is still an active thing and it's something we are actively talking about and actively working on." Verbinski later said that by trying to maintain the "R" rating, they were unable to find any studios that would back the effort, putting the film's future in jeopardy. Levine confirmed in March 2013 that the film had been officially canceled.

Levine stated that after the Watchmen film in 2009 did not do as well as the studio had expected, Universal had concerns with the $200 million budget that Verbinski had for the BioShock film. They asked him to consider doing the film on a smaller $80 million budget, but Verbinski did not want to accept this. In February 2017, Verbinski said that his crew had been about eight weeks from starting filming, with plans for many elaborate sets, given that the setting of Rapture could not be easily shot on existing locations, requiring the $200 million budget. Verbinski anticipated releasing the film with an R rating when Universal approached him about changing the film's direction. Universal requested that he tone down the film and aim instead for a PG-13 movie, which would be able to draw more audiences and recoup the larger budget he had asked for. Verbinski insisted on keeping the R rating and refused the smaller budget Universal offered to make the R-rated version. Universal felt that so expensive a film with a limiting R rating would be too much of a risk, and pulled him from the film. Universal subsequently brought in a new director to work with the smaller budget, but Levine and 2K Games did not feel that the new director was a good fit for the material. Universal then let Levine decide whether to end the project, which he did, believing that the film would not work with the set of compromises they would have had to make.

In January 2014, artwork from the canceled film surfaced online, showing what an adaptation could have looked like. Verbinski said that "all kinds of crazy stuff" from the pre-production stage still exists, such as screen tests for characters. He noted in the 2017 discussion that, with the success of the 2016 film Deadpool, he believes there is now justification for his vision of the BioShock film.

Netflix film adaptation

In February 2022, Netflix announced it would be adapting the BioShock franchise into a movie, with Vertigo Entertainment and 2K, a subsidiary of Take-Two Interactive Software, Inc., producing.

See also

Ludonarrative dissonance — a term coined to describe conflict between the narrative aspects of BioShock's story and its gameplay.
References

Notes

Footnotes

Further reading

BioShock: Rapture, by John Shirley (2011)

External links

The Cult of Rapture

2007 video games
Alternate history video games
BioShock (series) games
Criticism of capitalism
Dystopian video games
Feral Interactive games
First-person shooters
Games for Windows certified games
Horror video games
Human experimentation in fiction
Interactive Achievement Award winners
IOS games
Irrational Games
Fiction about mind control
Objectivism (Ayn Rand)
MacOS games
PlayStation 3 games
Propaganda in fiction
Science fiction video games
Single-player video games
Take-Two Interactive games
Unreal Engine games
Video games developed in Australia
Video games developed in Canada
Video games developed in the United States
Video games scored by Garry Schyman
Video games set in 1960
Video games using Havok
Video games with alternate endings
Video games with underwater settings
Weird fiction video games
Windows games
Xbox 360 games
48570159
https://en.wikipedia.org/wiki/Open-channel%20SSD
Open-channel SSD
An open-channel solid state drive is a solid-state drive which does not have a firmware flash translation layer (FTL) implemented on the device, but instead leaves the management of the physical solid-state storage to the computer's operating system. The Linux 4.4 kernel is an example of an operating system kernel that supports open-channel SSDs which follow the NVM Express specification. The interface used by the operating system to access open-channel solid state drives is called LightNVM.

NAND Flash Characteristics

Since SSDs use NAND flash memory for storing data, it is important to understand the characteristics of this medium. NAND flash provides a read/write/erase interface. A NAND package is organized into a hierarchy of dies, planes, blocks and pages. There may be one or several dies within a single physical package. A die allows a single I/O command to be executed at a time. A plane allows similar flash commands to be executed in parallel within a die.

There are three fundamental programming constraints that apply to NAND: (i) a write command must always contain enough data to program one (or several) full flash page(s), (ii) writes must be sequential within a block, and (iii) an erase must be performed before a page within a block can be (re)written. The number of program/erase (PE) cycles is limited. Because of these constraints, SSD controllers write data to NAND flash memory in a different order than the logical block order. This implies that the SSD controller must maintain a mapping table from host (logical) to NAND (physical) addresses, usually called the L2P table. The layer that performs the translation from logical to physical addresses is called the flash translation layer, or FTL.

Comparison with Traditional SSDs

Open-channel SSDs provide more flexibility with regard to data placement decisions, overprovisioning, scheduling, garbage collection and wear leveling. However, open-channel SSDs cannot be considered a uniform class of devices, as critical device characteristics such as the minimum unit of read and the minimum unit of write vary from device to device. One can therefore not design an FTL that automatically works on all open-channel SSDs. Traditional SSDs maintain the L2P table in DRAM on the SSD and use their own CPU to maintain it. With open-channel SSDs, the L2P table is stored in host memory and maintained by the host CPU. While the open-channel approach is more flexible, a significant amount of host memory and host CPU cycles is required for L2P management: with an average write size of 4 KB, almost 3 GB of RAM is required for an SSD with a size of 1 TB.

File Systems for Open-Channel SSDs

With open-channel SSDs, the L2P mapping can be directly integrated or merged with storage management in file systems. This avoids the redundancy between system software and SSD firmware, and thus improves performance and endurance. Further, open-channel SSDs enable more flexible control over flash memory. The internal parallelism is exploited by coordinating the data layout, garbage collection and request scheduling of both system software and SSD firmware to remove conflicts, thus improving and smoothing performance.

References

Solid-state computer storage
Computer storage devices
Computer storage media
Non-volatile memory
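The host-resident L2P table described above can be sketched in a few lines of code. The following Python fragment is an illustrative model only: it shows a page-level mapping plus the "sequential writes within a block" and "erase before rewrite" constraints, and works through the host-memory estimate quoted above. All constants and names (PAGE_SIZE, OpenChannelModel, and so on) are invented for the illustration and do not correspond to the LightNVM interface.

# Minimal host-side FTL sketch for an open-channel SSD (illustrative only).
# Models a page-level L2P table and two NAND constraints: writes must be
# sequential within a block, and a block must be erased before rewrite.

PAGE_SIZE = 4096          # bytes per flash page (assumed)
PAGES_PER_BLOCK = 256     # pages per erase block (assumed)
NUM_BLOCKS = 1024         # erase blocks exposed by the device (assumed)

class OpenChannelModel:
    def __init__(self):
        self.l2p = {}                          # logical page -> (block, page); lives in HOST memory
        self.next_page = [0] * NUM_BLOCKS      # next writable page per block (sequential-write rule)
        self.erase_counts = [0] * NUM_BLOCKS   # input for wear-leveling decisions

    def write(self, logical_page, block):
        page = self.next_page[block]
        if page >= PAGES_PER_BLOCK:
            raise RuntimeError("block full: erase before rewriting")
        self.next_page[block] += 1
        # An overwrite just remaps; the stale physical page becomes garbage
        # that host-driven garbage collection must later reclaim.
        self.l2p[logical_page] = (block, page)

    def erase(self, block):
        self.next_page[block] = 0
        self.erase_counts[block] += 1          # PE cycles are limited

    def read(self, logical_page):
        return self.l2p[logical_page]          # physical address to read from

# Host-memory cost of the L2P table for a 1 TB drive mapped at 4 KB granularity:
entries = (1 << 40) // PAGE_SIZE               # 268,435,456 entries
print(entries * 10 / 2**30, "GiB at ~10 bytes/entry")  # roughly 2.5 GiB, i.e. "almost 3 GB"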
4961951
https://en.wikipedia.org/wiki/Transcytosis
Transcytosis
Transcytosis (also known as cytopempsis) is a type of transcellular transport in which various macromolecules are transported across the interior of a cell. Macromolecules are captured in vesicles on one side of the cell, drawn across the cell, and ejected on the other side. Examples of transported macromolecules include IgA, transferrin, and insulin. While transcytosis is most commonly observed in epithelial cells, the process is also present elsewhere. Blood capillaries are a well-known site for transcytosis, though it also occurs in other cells, including neurons, osteoclasts and M cells of the intestine.

Regulation

The regulation of transcytosis varies greatly due to the many different tissues in which this process is observed, and various tissue-specific mechanisms of transcytosis have been identified. Brefeldin A, a commonly used inhibitor of ER-to-Golgi apparatus transport, has been shown to inhibit transcytosis in dog kidney cells, which provided the first clues as to the nature of transcytosis regulation. Transcytosis in dog kidney cells has also been shown to be regulated at the apical membrane by Rab17, as well as Rab11a and Rab25. Further work on dog kidney cells has shown that a signaling cascade, involving the phosphorylation of EGFR by Yes leading to the activation of Rab11FIP5 by MAPK1, upregulates transcytosis. In the rabbit mammary gland during pregnancy, transcytosis has been shown to be inhibited by the combination of progesterone and estradiol, followed by activation mediated by prolactin. In the thyroid, follicular cell transcytosis is positively regulated by TSH. The phosphorylation of caveolin 1, induced by hydrogen peroxide, has been shown to be critical to the activation of transcytosis in pulmonary vascular tissue. It can therefore be concluded that the regulation of transcytosis is a complex process that varies between tissues.

Role in pathogenesis

Because transcytosis transports macromolecules across cells, it can be a convenient mechanism by which pathogens invade a tissue. Transcytosis has been shown to be critical to the entry of Cronobacter sakazakii across the intestinal epithelium as well as the blood–brain barrier. Listeria monocytogenes has been shown to cross the intestinal epithelium via transcytosis through goblet cells. Shiga toxin secreted by enterohemorrhagic E. coli has been shown to be transcytosed across the intestinal epithelium. From these examples, it can be said that transcytosis is vital to the process of pathogenesis for a variety of infectious agents.

Clinical applications

Pharmaceutical companies, such as Lundbeck, are exploring the use of transcytosis as a mechanism for transporting therapeutic drugs across the human blood–brain barrier (BBB). Exploiting the body's own transport mechanism can help to overcome the high selectivity of the BBB, which typically blocks the uptake of most therapeutic antibodies into the brain and central nervous system (CNS). The pharmaceutical company Genentech, after having synthesized a therapeutic antibody that effectively inhibited BACE1 enzymatic function, experienced problems transferring adequate, efficient levels of the antibody into the brain. BACE1 is the enzyme that processes amyloid precursor proteins into amyloid-β peptides, including the species that aggregate to form the amyloid plaques associated with Alzheimer's disease.
Molecules are transported across an epithelial or endothelial barrier by one of two routes: (1) a transcellular route through the intracellular compartment of the cell, or (2) a paracellular route through the extracellular space between adjacent cells. The transcellular route is also called transcytosis. Transcytosis can be receptor-mediated and consists of three steps: (1) receptor-mediated endocytosis of the molecule on one side of the cell, e.g. the luminal side; (2) movement of the molecule through the intracellular compartment, typically within the endosomal system; and (3) exocytosis of the molecule to the extracellular space on the other side of the cell, e.g. the abluminal side. Transcytosis may be either unidirectional or bidirectional. Unidirectional transcytosis may occur selectively in the luminal-to-abluminal direction or in the reverse, abluminal-to-luminal direction.

Transcytosis is prominent in brain microvascular peptide and protein transport because the brain microvascular endothelium, which forms the blood–brain barrier (BBB) in vivo, expresses unique, epithelial-like, high-resistance tight junctions. The brain endothelial tight junctions virtually eliminate the paracellular pathway of solute transport across the microvascular endothelial wall in the brain. In contrast, the endothelial barrier in peripheral organs does not express such high-resistance tight junctions, and solute movement through the paracellular pathway is prominent at the endothelial barrier in organs other than the brain or spinal cord.

Receptor-mediated transcytosis (RMT) across the BBB is a potential pathway for drug delivery to the brain, particularly for biologic drugs such as recombinant proteins. The non-transportable drug, or therapeutic protein, is genetically fused to a transporter protein. The transporter protein may be an endogenous peptide or a peptidomimetic monoclonal antibody that undergoes RMT across the BBB via transport on brain endothelial receptors such as the insulin receptor or transferrin receptor. The transporter protein acts as a molecular Trojan horse, ferrying into the brain the therapeutic protein that is genetically fused to the receptor-specific Trojan horse protein.

Monoclonal antibody Trojan horses that target the BBB insulin or transferrin receptor have been in drug development for over 10 years at ArmaGen, Inc., a biotechnology company in Los Angeles. ArmaGen has developed genetically engineered antibodies against both the insulin and transferrin receptors, and has fused different therapeutic proteins to these antibodies, including lysosomal enzymes, therapeutic antibodies, decoy receptors, and neurotrophins. These therapeutic proteins alone do not cross the BBB, but following genetic fusion to the Trojan horse antibody, the therapeutic protein penetrates the BBB at a rate comparable to small molecules. In 2015, ArmaGen was expected to be the first to enter human clinical trials with BBB Trojan horse fusion proteins that deliver protein drugs to the brain via the transcytosis pathway.

The human diseases initially targeted by ArmaGen are lysosomal storage diseases that adversely affect the brain. In these inherited diseases, a specific lysosomal enzyme is not produced, leading to serious brain conditions including mental retardation, behavioral problems, and dementia. Although the missing enzyme can be manufactured by drug companies, the enzyme drug alone does not treat the brain, because the enzyme alone does not cross the BBB.
ArmaGen has re-engineered the missing lysosomal enzyme as a Trojan horse-enzyme fusion protein that crosses the BBB. The first clinical trials of the new Trojan horse fusion protein technology will treat the brain in lysosomal storage disorders, including one of the mucopolysaccharidosis type I diseases (MPS I H), also called Hurler syndrome, and MPS type II, also called Hunter syndrome.

Researchers at Genentech proposed the creation of a bispecific antibody that could bind the BBB membrane, induce receptor-mediated transcytosis, and release itself on the other side into the brain and CNS. They utilized a mouse bispecific antibody with two active sites performing different functions. One arm had a low-affinity anti-transferrin receptor binding site that induces transcytosis; a high-affinity binding site would result in the antibody being unable to release from the BBB membrane after transcytosis. This way, the amount of transported antibody is based on the concentration of antibody on either side of the barrier. The other arm had the previously developed high-affinity anti-BACE1 binding site that would inhibit BACE1 function and prevent amyloid plaque formation. Genentech was able to demonstrate in mouse models that the new bispecific antibody reached therapeutic levels in the brain. Genentech's method of disguising and transporting the therapeutic antibody by attaching it to a receptor-mediated transcytosis activator has been referred to as the "Trojan horse" method.

References

External links

Macromolecules Can Be Transferred Across Epithelial Cell Sheets by Transcytosis
Transcytosis of IgA
Transcytosis of bacteria

Cellular processes
18521165
https://en.wikipedia.org/wiki/HP%20Application%20Security%20Center
HP Application Security Center
HP Application Security Center (ASC) was a set of technology solutions by HP Software Division. Much of the portfolio for this solution suite came from HP's acquisition of SPI Dynamics. The software solutions enabled developers, quality assurance (QA) teams and security experts to conduct web application security testing and remediation. The security products have since been repackaged as enterprise security products from the HP Enterprise Security Products business in the HP Software Division.

Products

HP Application Security Center consisted of the following products:
HP Assessment Management Platform software, for managing a web application security testing program across the application lifecycle
HP WebInspect software, for web application security testing and assessment
HP QAInspect software, for standardized web application security testing during quality assurance (QA) testing

In May 2008, HP Software announced the availability of HP Application Security Center through HP Software as a Service, along with new releases of the HP Application Security Center products.

In September 2009, HP announced that it was discontinuing the HP DevInspect software products, formerly part of HP Application Security Center, stating that it had switched its focus to solutions for entire development groups rather than tools for individual developers. HP DevInspect was software for individual developers to use in creating secure web applications and services, and it integrated with specific IDEs (integrated development environments): HP DevInspect for .NET operated with Microsoft Visual Studio, and HP DevInspect for Java operated with Eclipse or IBM Rational Application Developer.

Benefits

HP Application Security Center solutions helped find and fix security vulnerabilities in web applications throughout the software development lifecycle (SDLC). By catching security vulnerabilities early in the application development lifecycle, organizations could reduce web attacks and vulnerabilities in their web applications. While some security vulnerabilities may exist in the web server or application infrastructure, at least 80 percent of vulnerabilities existed in the web application itself. HP Application Security Center also created compliance reports for more than 20 laws, regulations and best practices, including PCI DSS (Payment Card Industry Data Security Standard), a worldwide information security standard defined by the Payment Card Industry Security Standards Council.

More Information on Application Security

Application security
SQL injection
Cross-site scripting
PCI DSS
Payment Card Industry Data Security Standard

External links

HP Software
HP Enterprise Security

References

Application Security Center
48161380
https://en.wikipedia.org/wiki/Besiege%20%28video%20game%29
Besiege (video game)
Besiege is a strategy sandbox video game developed and published by Spiderling Studios. The game was released for Windows, macOS and Linux in February 2020, following a five-year-long early access phase. A console version for Xbox One and Xbox Series X/S was released in February 2022.

Overview

The game allows players to build outlandish medieval siege engines to pit against castles or armies. Players select from a collection of mechanical parts that can be connected together to build a machine. Each level has a goal, such as "destroy the windmill" or "kill 100 soldiers". Although the goals are relatively simple, the wide variety of possible approaches allows for experimentation. Despite the game's medieval theme, players are able to build intricate working models of four-stroke and two-stroke engines and vehicle systems, including computer systems, as well as modern vehicles such as tanks, automobiles, bomber planes, propeller planes, helicopters, airships, and battleships.

An update in December 2017 added a level editor and multiplayer capabilities, such as pitting vehicle creations against each other, or players attempting to knock down a castle created by another. A later update added an advanced build mode, which lets players construct more complicated machines. With these additions, players developed systems to run tournaments similar to the television show BattleBots, pitting their Besiege creations against one another in one-on-one matches.

The game was first released for Linux, OS X and Windows via early access on 28 January 2015, before officially releasing on 18 February 2020. The console version was released for Xbox One and Xbox Series X/S on 10 February 2022. It features a reworked user interface, a photo mode, and a different Workshop for sharing user creations; however, it does not have the multiplayer or level editor functionality of the PC version.

Reception

Marsh Davies of Rock, Paper, Shotgun praised an early version of the game, comparing its "bouncily caricatured" science to a 12th-century version of Kerbal Space Program. Davies also praised the game's stylized graphics and sound. PC Gamer gave the game 85 out of 100.

References

External links

2020 video games
Early access video games
Indie video games
Linux games
MacOS games
Multiplayer and single-player video games
Simulation video games
Video games with Steam Workshop support
Strategy video games
Video games developed in the United Kingdom
Windows games
Xbox One games
Xbox Series X and Series S games
1256762
https://en.wikipedia.org/wiki/Mangled%20packet
Mangled packet
In computer networking, a mangled or invalid packet is a packet (especially an IP packet) that either lacks order or self-coherence, or contains code designed to confuse or disrupt computers, firewalls, routers, or any service present on the network. Their usage is associated with a type of network attack called a denial-of-service (DoS) attack. They aim to destabilize the network and sometimes to reveal its available services, which can become apparent when network operators must restart the disabled ones. Mangled packets can be generated by dedicated software such as nmap. Most invalid packets are easily filtered by modern stateful firewalls.

References

Packets (information technology)
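To make "lacking self-coherence" concrete, the sketch below uses Scapy, a third-party Python packet-crafting library (not mentioned in the source), to build two classic kinds of invalid packet: a TCP segment with a deliberately wrong checksum, and one carrying the contradictory SYN+FIN flag combination. The target address is an RFC 5737 documentation placeholder; sending raw packets requires administrative privileges and should only be done on networks you control.

# Illustrative only: two classic "mangled" packets built with Scapy.
from scapy.all import IP, TCP, send

target = "192.0.2.1"  # documentation address (placeholder)

# 1) Deliberately bogus TCP checksum: setting chksum by hand stops Scapy
#    from computing the correct value, so a receiving stack or stateful
#    firewall should discard the segment as invalid.
bad_checksum = IP(dst=target) / TCP(dport=80, flags="S", chksum=0xDEAD)

# 2) SYN and FIN set together: a self-contradictory flag combination
#    ("open and close at once") that no well-behaved stack emits, used
#    to probe or confuse naive packet filters.
syn_fin = IP(dst=target) / TCP(dport=80, flags="SF")

send([bad_checksum, syn_fin], verbose=False)  # requires root privileges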
46328497
https://en.wikipedia.org/wiki/Home%20health%20care%20software
Home health care software
Home health care software, sometimes referred to as home care software or home health software, falls under the broad category of health care information technology (HIT). HIT is "the application of information processing involving both computer hardware and software that deals with the storage, retrieval, sharing, and use of health care information, data, and knowledge for communication and decision making". Home health software is designed specifically for companies employing home health providers, as well as government entities that track payments to home health care providers.

History

The first use of home health care software was in the 1990s, with companies making software based on the Omaha System. The first software was made available for public health departments, nurse-managed centers, and community clinics. The use of home health care software increased with technologies such as cloud computing, telehealth and business analytics tools. The integration of these technologies with home health software has been credited with enhancing the quality of care at home and reducing fraud. During the early 2000s, the home health care software industry expanded from simple databases to agencies being able to transmit electronic health records. There was also an increase in the available types of software as a result of software vendors working directly with health care providers.

Types of software

There are clinical and non-clinical applications of home health care software, including agency software, hospice solutions, clinical management systems, telehealth solutions, and electronic visit verification. Depending on the type of software used, companies can track health care employee visits to patients, verify payroll, and document patient care. Governments can also use home health care software to verify visits from providers who bill them for services. Use of some software is mandated by government agencies, such as the OASIS assessment information that must be transmitted electronically by home health care providers.

Agency software

Agency software is used by home health care providers for office use and is a subset of the medical practice management software used by inpatient clinics and doctors' offices. Agency software is used for billing, paying vendors, staff scheduling, and maintaining records associated with the business. It can be standalone or part of a software package that includes electronic visit verification to track employees' hours and time spent on home visits and patient care. Agency software can be purchased or leased through various vendors, and ranges from medical home health EMRs and hospice EMRs to non-medical home care software. There are additional sub-categories of home health and related software solutions.

Electronic visit verification

Electronic visit verification (often referred to as EVV) is a method used to verify home healthcare visits, to ensure patients are not neglected and to cut down on fraudulently documented home visits. EVV monitors the locations of caregivers and is mandated by certain states, including Texas and Illinois. Other states do not mandate it, but use it as part of their Medicaid fraud oversight, created by the passing of the Affordable Care Act in 2010. It is also widely used by employers of home healthcare providers to verify employees' locations and hours of work, as well as to document patient care.
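To make the verification idea concrete, the sketch below shows, in Python, how an EVV-style check might pair check-in and check-out events with GPS fixes and billed hours. The record fields, the half-kilometre radius, and the quarter-hour tolerance are all invented for illustration; they do not reflect any particular EVV product or state specification.

# Hypothetical EVV check: was the caregiver near the patient's home at
# check-in and check-out, and do the billed hours match the interval?
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two GPS fixes.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

@dataclass
class VisitRecord:                 # field names are invented
    check_in: datetime
    check_out: datetime
    checkin_gps: tuple             # (lat, lon) captured at check-in
    checkout_gps: tuple            # (lat, lon) captured at check-out
    billed_hours: float

def verify(visit, home_gps, radius_km=0.5, tolerance_h=0.25):
    near_in = distance_km(*visit.checkin_gps, *home_gps) <= radius_km
    near_out = distance_km(*visit.checkout_gps, *home_gps) <= radius_km
    actual_h = (visit.check_out - visit.check_in).total_seconds() / 3600
    return near_in and near_out and abs(actual_h - visit.billed_hours) <= tolerance_h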
Outcome and assessment information set (OASIS)

Home health care providers that participate in Medicaid are required to report specific data about patient care known as the Outcome and Assessment Information Set-C (OASIS-C). The data include health status, functional status, and support system information, and are used to establish a measurement of patient home health care outcomes. Home health care software allows health care providers to obtain and transmit such data while on location with a patient. Data collection is mandated by the Centers for Medicare and Medicaid Services (CMS), a division of the United States Department of Health and Human Services. Software for collecting and transmitting the data is available free through CMS, and can also be purchased through private vendors as an add-on to other home health care software.

Software Delivery Platforms

On-Site Server Based Model

The software is hosted on servers located and maintained on-site at the agency. The localization of the data gives the agency direct access to the computers hosting its software, but may also require the agency to maintain the technology that hosts the software.

Cloud Based Software Model

The software is deployed on a cloud system maintained by the software vendor. The off-site maintenance of the cloud system results in a lower cost to the home health agency and provides access to the agency's data from off-site locations.

See also

Health informatics
List of open-source health software
Medical record

References

Healthcare in the United States
Nursing informatics
65782
https://en.wikipedia.org/wiki/Teichoscopy
Teichoscopy
Teichoscopy or teichoscopia, meaning "viewing from the walls", is a recurring narrative strategy in ancient Greek literature. One famous instance of teichoscopy occurs in Homer's Iliad, Book 3, lines 121–244. The passage begins with Helen being approached in her chamber by Iris, disguised as her sister-in-law Laodice, the daughter of Priam. Helen is then led to the walls at the Skaian gates, where she is summoned by Priam, who asks her to point out the Achaean heroes she sees on the Trojan plain. Below her, the two armies are preparing for the duel between Menelaus and Paris. Helen identifies Agamemnon, Odysseus, Telamonian (Greater) Ajax, and Idomeneus. She also mentions that she does not see her brothers Castor and Pollux, who, unbeknownst to her, are already dead back in Greece. After this scene, the duel commences, with both armies praying to Zeus and the rest of the gods on Olympus to open the action.

Analysis

According to Maria C. Pantelia, Helen becomes the 'author' of a catalog when she describes for Priam the qualities of the most important Greek warriors. It has been suggested that the teichoscopy, as well as the duel between Paris and Menelaus, would more likely have occurred at the beginning of the war rather than during its tenth year. However, although Homer is not at the beginning of the Trojan War, he is at the beginning of the poem, and he therefore uses the teichoscopia as a poetic structure that provides information and suspense important for the remainder of the poem and the duel to come.

Although teichoscopy can be viewed as simply a vignette that surveys the major Greek warriors, it has been suggested that Homer is also trying to reveal something about Helen. Helen's open admiration of both the Greek and Trojan warriors is viewed as ironic, as it seems odd that the major cause of a war that has brought devastation to the Trojans should praise the enemy. However, by doing this, Homer is, according to Frederic Will, "insisting on the importance, and centrality, of Helen's viewpoint. He is integrating a traditional form artistically."

The main object of teichoscopy is the synchronous discussion of events, as opposed to events being reported later by messengers or other eyewitnesses. It is a well-established technique in dramaturgy. Natural phenomena, too, may be conveyed by this device: far-off drama such as the sun rising, or a description of stars across the firmament, lends itself to this treatment.

References

Drama
Homer
Iliad
Trojan War literature
816162
https://en.wikipedia.org/wiki/Jeskola%20Buzz
Jeskola Buzz
Jeskola Buzz is a freeware modular software music studio environment designed to run on Microsoft Windows using MFC. It is centered on a modular, plugin-based machine view and a multiple-pattern sequencer tracker.

Buzz consists of a plugin architecture that allows audio to be routed from one plugin to another in many ways, similar to how cables carry an audio signal between physical pieces of hardware. All aspects of signal synthesis and manipulation are handled entirely by the plugin system. Signal synthesis is performed by "generators" such as synthesizers, noise generator functions, samplers, and trackers. The signal can then be manipulated further by "effects" such as distortions, filters, delays, and mastering plugins. Buzz also provides support, through adapters, for using VST/VSTi, DirectX/DXi, and DirectX Media Objects as generators and effects. A few newer classes of plugins do not fall under the normal generator and effect types; these include peer machines (signal and event automated controllers), recorders, wavetable editors, scripting engines, etc. Buzz signal output also uses a plugin system; the most practical drivers include ASIO, DirectSound, and MME. Buzz supports MIDI both internally and through several enhancements, though some MIDI features, such as MIDI clock sync, are limited or hacked together.

Development

Buzz was created by Oskari Tammelin, who named the software after his demogroup, Jeskola. In 1997–98 Buzz was a "3rd generation tracker", and it has since evolved beyond the traditional tracker model. Development of the core program, buzz.exe, was halted on October 5, 2000, when the developer lost the source code to the program. Development was restarted in June 2008, eventually regaining much of the lost functionality.

Plugin system

Buzz's plugin system is intended to operate according to a freeware model. The header files used to compile new plugins (known as the Buzzlib) contain a small notice that they are only to be used for making freeware plugins and Buzz file music players. The restriction requires that developers who wish to use the Buzz plugin system in their own sequencers pay a fee to the author.

Notable users

Some notable electronic musicians who use Jeskola Buzz include:
Andrew Sega
Hunz
Andreas Tilliander
James Holden, whose early work was produced entirely within Buzz
Lackluster
Oliver Lieb
The Field

See also

Buzztrax, an effort to recreate a Buzz-like environment under a free software license, which runs under Linux.
Visual programming language

References

External links

BuzzMachines.com - The central Buzz website in recent years, since Oskari's own web site ceased to host Buzz distributions; several distributions of Buzz, which include the core and selected plugins, are distributed through this website.
Jeskola Buzz
Latest beta versions of Buzz
Andrew Sega: Taking Tracking Mainstream Part 1, Part 2, Part 3, Part 4, Part 5
Tracking history with Buzz presentation, Notacon conference, April 27, 2007

Audio trackers
Demoscene software
Music software plugin architectures
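The generator/effect routing described above can be modeled abstractly. The sketch below is a conceptual Python model of a Buzz-style machine graph, in which generators produce blocks of samples, effects transform whatever flows into them, and a master machine sums its inputs. It mirrors the architecture only; it is not the restricted Buzzlib C++ plugin API, and all names in it are invented.

# Conceptual model of Buzz-style modular routing (not the Buzzlib API).
import math

BLOCK = 256  # samples per processing block (assumed)

class Machine:
    def __init__(self):
        self.inputs = []                 # upstream machines: the "cables"

    def connect(self, upstream):
        self.inputs.append(upstream)

    def mix_inputs(self):
        mixed = [0.0] * BLOCK
        for m in self.inputs:
            for i, s in enumerate(m.work()):
                mixed[i] += s
        return mixed

class SineGenerator(Machine):            # a "generator" machine
    def __init__(self, freq=440.0, rate=44100):
        super().__init__()
        self.freq, self.rate, self.t = freq, rate, 0

    def work(self):
        out = [math.sin(2 * math.pi * self.freq * (self.t + i) / self.rate)
               for i in range(BLOCK)]
        self.t += BLOCK
        return out

class Gain(Machine):                     # an "effect" machine
    def __init__(self, amount=0.5):
        super().__init__()
        self.amount = amount

    def work(self):
        return [s * self.amount for s in self.mix_inputs()]

class Master(Machine):                   # the output end of the graph
    def work(self):
        return self.mix_inputs()

# Patch cables: generator -> effect -> master, as in Buzz's machine view.
gen, fx, master = SineGenerator(220.0), Gain(0.3), Master()
fx.connect(gen)
master.connect(fx)
block = master.work()                    # one processed block of samples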
16988558
https://en.wikipedia.org/wiki/Windows%20Phone
Windows Phone
Windows Phone (WP) is a discontinued family of mobile operating systems developed by Microsoft for smartphones as the replacement for Windows Mobile and Zune. Windows Phone featured a new user interface derived from the Metro design language. Unlike Windows Mobile, it was primarily aimed at the consumer market rather than the enterprise market. It was first launched in October 2010 with Windows Phone 7. Windows Phone 8 succeeded it in 2012, replacing the Windows CE-based kernel of Windows Phone 7 with the Windows NT kernel used by the PC versions of Windows (and, in particular, a large number of internal components from Windows 8). Due to these changes, the OS was incompatible with all existing Windows Phone 7 devices, although it still supported apps originally developed for Windows Phone 7. In 2014, Microsoft released the Windows Phone 8.1 update, which introduced the Cortana virtual assistant and Windows Runtime platform support for creating cross-platform apps between Windows PCs and Windows Phone. In 2015, Microsoft released Windows 10 Mobile, which promoted increased integration and unification with its PC counterpart, including the ability to connect devices to an external display or docking station to display a PC-like interface. Although Microsoft dropped the Windows Phone brand at this time in order to focus on synergies with Windows 10 for PCs, it was still a continuation of the Windows Phone line from a technical standpoint, and updates were issued for selected Windows Phone 8.1 devices.

While Microsoft's investments in the platform were headlined by a major partnership with Nokia (whose Lumia series of smartphones, the Lumia 520 in particular, would represent the majority of Windows Phone devices sold by 2013) and Microsoft's eventual acquisition of the company's mobile device business for just over US$7 billion (which included Nokia's then-CEO Stephen Elop joining Microsoft to lead its in-house mobile division), the duopoly of Android and the iPhone remained the dominant platforms for smartphones, and interest in Windows Phone from app developers began to diminish by mid-decade. Microsoft laid off the Microsoft Mobile staff in 2016, after having taken a write-off of $7.6 billion on the acquired Nokia hardware assets, while market share sank to 1% that year. Microsoft began to prioritize software development and integrations with Android and iOS instead, and ceased active development of Windows 10 Mobile in 2017.

History

Development

Work on a major Windows Mobile update may have begun as early as 2004 under the codename "Photon", but progress was slow and the project was ultimately cancelled. In 2008, Microsoft reorganized the Windows Mobile group and started work on a new mobile operating system. The product was to be released in 2009 as Windows Phone, but several delays prompted Microsoft to develop Windows Mobile 6.5 as an interim release. Following this, Windows Phone was developed quickly. One result was that the new OS would not be compatible with Windows Mobile applications. Larry Lieberman, senior product manager for Microsoft's Mobile Developer Experience, told eWeek: "If we'd had more time and resources, we may have been able to do something in terms of backward compatibility." Lieberman said that Microsoft was attempting to look at the mobile phone market in a new way, with the end user in mind as well as the enterprise network.
Terry Myerson, corporate VP of Windows Phone engineering, said, "With the move to capacitive touch screens, away from the stylus, and the moves to some of the hardware choices we made for the Windows Phone 7 experience, we had to break application compatibility with Windows Mobile 6.5." From the beginning of Windows Phone until at least 2015, Joe Belfiore was the head of development and the face of the platform's initiatives.

Partnership with Nokia

On February 11, 2011, at a press event in London, Microsoft CEO Steve Ballmer and Nokia CEO Stephen Elop announced a partnership between their companies in which Windows Phone would become the primary smartphone operating system for Nokia, replacing Symbian. The event focused largely on setting up "a new global mobile ecosystem", suggesting competition with Android and iOS with the words "It is now a three horse race". Elop stated the reason for choosing Windows Phone over Android: "the single most important word is 'differentiation'. Entering the Android environment late, we knew we would have a hard time differentiating." While Nokia would have had more long-term creative control with Android (MeeGo, as used by Nokia, resembled Android more than Windows Phone 7, as both Android and MeeGo are based on the Linux kernel), Elop enjoyed familiarity with his past company, where he had been a top executive.

The pair announced the integration of Microsoft services with Nokia's own services; specifically:
Bing would power search across Nokia devices
integration of Nokia Maps with Bing Maps
integration of Nokia's Ovi store with the Windows Phone Store

The partnership involved "funds changing hands for royalties, marketing and ad-revenue sharing", which Microsoft later announced as "measured in billions of dollars". Jo Harlow, whom Elop tapped to run Nokia's smartphone business, rearranged her team to match the structure led by Microsoft's VP of Windows Phone, Terry Myerson. Myerson was quoted as saying, "I can trust her with what she tells me. She uses that same direct and genuine communication to motivate her team."

The first Nokia Lumia Windows Phones, the Lumia 800 and Lumia 710, were announced in October 2011 at Nokia World 2011. At the Consumer Electronics Show in 2012, Nokia announced the Lumia 900, featuring a 4.3-inch AMOLED ClearBlack display, a 1.4 GHz processor and 16 GB of storage. The Lumia 900 was one of the first Windows Phones to support LTE and was released on AT&T on April 8. An international version launched in Q2 2012, with a UK launch in May 2012. The Lumia 610 was the first Nokia Windows Phone to run the "Tango" variant (Windows Phone 7.5 Refresh) and was aimed at emerging markets.

On September 2, 2013, Microsoft announced a deal to acquire Nokia's mobile phone division outright, retaining former CEO Stephen Elop as the head of Microsoft's devices operation. The merger was completed in April 2014, after regulatory approval in all major markets. As a result, Nokia's hardware division became a subsidiary of Microsoft operating under the name Microsoft Mobile. In February 2014, Nokia released the Nokia X series of smartphones (later discontinued), using a version of Android forked from the Android Open Source Project. The operating system was modified: Google's software was not included, in favour of competing applications and services from Microsoft and Nokia, and the user interface was highly modified to resemble Windows Phone.
Versions

Windows Phone 7

Windows Phone 7 was announced at Mobile World Congress in Barcelona, Catalonia, Spain, on February 15, 2010, and released publicly on November 8, 2010 in the United States. In 2011, Microsoft released Windows Phone 7.5 Mango. The update included a mobile version of Internet Explorer 9 that supports the same web standards and graphical capabilities as the desktop version, multi-tasking of third-party apps, Twitter integration for the People Hub, and Windows Live SkyDrive access. A minor update released in 2012, known as "Tango", along with other bug fixes, lowered the hardware requirements to allow devices with 800 MHz CPUs and 256 MB of RAM to run Windows Phone. Windows Phone 7 devices cannot be upgraded to Windows Phone 8 due to hardware limitations. Windows Phone 7.8 was released as a stopgap update in 2013 to include some of the user interface features from Windows Phone 8.

Windows Phone 8

On October 29, 2012, Microsoft released Windows Phone 8, a new generation of the operating system. Windows Phone 8 replaced the previously Windows CE-based architecture with one based on the Windows NT kernel, with many components shared with Windows 8.

Windows Phone 8.1

Windows Phone 8.1 was announced on April 2, 2014, and released in preview form to developers on April 10, 2014. New features include a notification center, support for the Internet Explorer 11 web browser with tab syncing among Windows 8.1 devices, separate volume controls, and the option to skin and add a third column of live tiles to the Start screen. Starting with this release, Microsoft dropped the requirement that all Windows Phone OEMs include a camera button and physical buttons for back, Start, and Search. Windows Phone 8.1 introduced Cortana, a voice assistant similar to Siri and Google Now. Cortana replaced the previous Bing search feature, and was released as a beta in the United States in the first half of 2014, before expanding to other countries in early 2015.

Windows 10 Mobile

Windows 10 Mobile was announced on January 21, 2015, as a mobile operating system for smartphones and tablets running on the ARM architecture. Its primary focus was unification with Windows 10, its PC counterpart, in software and services; in accordance with this strategy, the Windows Phone name was phased out in favor of branding the platform as an edition of Windows 10, although it was still a continuation of Windows Phone, and most Windows Phone 8.1 devices could be upgraded to the platform. Windows 10 Mobile emphasized software using the Universal Windows Platform (UWP), which allowed apps to be designed for use across multiple Windows 10-based product families with nearly identical code, functionality, and adaptations for available input methods. When connected to an external display, devices could also render a stripped-down desktop interface similar to Windows on PCs, with support for keyboard and mouse input. Windows 10 Mobile featured Skype message integration, updated Office Mobile apps, notification syncing with other Windows 10 devices, support for the Microsoft Edge web browser, and other user interface improvements. Microsoft developed a middleware known as Windows Bridge to allow iOS Objective-C and Android C++ or Java software to be ported to run on Windows 10 Mobile with limited changes to code.
With diminishing interest in and application development for the platform, Microsoft discontinued active development of Windows 10 Mobile in 2017, and the platform was declared end-of-life on January 14, 2020.

Features

User interface

Windows Phone features a user interface based on Microsoft's "Metro" design language, inspired by the user interface of the Zune HD. The home screen, called the "Start screen", is made up of "Live Tiles", which were the inspiration for the Windows 8 live tiles. Tiles are links to applications, features, functions and individual items (such as contacts, web pages, applications or media items). Users can add, rearrange, or remove tiles. Tiles are dynamic and update in real time; for example, the tile for an email account may display the number of unread messages, and a tile may display a live update of the weather. Since Windows Phone 8, live tiles can also be resized to a small, medium, or large appearance.

Several features of Windows Phone are organized into "hubs", which combine local and online content via Windows Phone's integration with popular social networks such as Facebook, Windows Live, and Twitter. For example, the Pictures hub shows photos captured with the device's camera and the user's Facebook photo albums, and the People hub shows contacts aggregated from multiple sources including Windows Live, Facebook, and Gmail. From the hub, users can directly comment on and 'like' social network updates. The other built-in hubs are Xbox Music and Video, Xbox Live Games, Windows Phone Store, and Microsoft Office.

Windows Phone uses multi-touch technology. The default Windows Phone user interface has a dark theme that prolongs battery life on OLED screens, as fully black pixels do not emit light. Alternatively, users may choose a light theme in their phone's settings menu. The user may also choose from several accent colors. User interface elements such as links, buttons and tiles are shown in the user's chosen accent color, and third-party applications can be automatically themed with these colors. Windows Phone 8.1 introduces transparent tiles and a customizable background image for the Start screen. The image is visible through the transparent areas of the tiles and features a parallax effect when scrolling, which gives an illusion of depth. If the user does not pick a background image, the tiles render with the accent color of the theme.

Text input

Users input text by using an on-screen virtual keyboard, which has a dedicated key for inserting emoticons and features spell checking and word prediction. App developers (both in-house and ISV) may specify different versions of the virtual keyboard in order to limit users to certain character sets, such as numeric characters alone. Users may change a word after it has been typed by tapping the word, which will invoke a list of similar words. Pressing and holding certain keys will reveal similar characters. The keys are somewhat larger and spaced farther apart when in landscape mode. Phones may also be made with a hardware keyboard for text input. Users can also add accents to letters by holding on an individual letter. Windows Phone 8.1 introduces a new method of typing by swiping through the keyboard without lifting the finger, in a manner similar to Swype and SwiftKey.

Web browser

Internet Explorer on Windows Phone allows the user to maintain a list of favorite web pages and tiles linking to web pages on the Start screen. The browser supports up to six tabs, which can all load in parallel.
Other features include multi-touch gestures, smooth zoom in/out animations, the ability to save pictures that are on web pages, sharing web pages via email, and support for inline search, which allows the user to search for a word or phrase in a web page by typing it. Tabs are synced with Windows 8.1 devices using Internet Explorer 11.

Contacts

Contacts are organized via the "People hub", and can be entered manually or imported from Facebook, Windows Live Contacts, Twitter, LinkedIn, Google, and Outlook. A "What's New" section shows a user's Facebook news feed, and a "Pictures" section shows pictures from those social networks, while a "Me" section within the "People" hub shows the user's own social network status and wall and allows them to view social network updates. Contacts can also be pinned to the Start screen; the contact's "Live Tile" displays their social network status and profile picture on the home screen. Clicking on a contact's tile or accessing their card within the "People" hub will reveal their recent social network activity as well as the rest of their contact information. If a contact has information stored on multiple networks, users can link the separate contact accounts, allowing the information to be viewed and accessed from a single card. As of Windows Phone 7.5, contacts can also be sorted into "Groups", where information from each of the contacts is combined into a single page which can be accessed directly from the hub or pinned to the Start screen.

Email

Windows Phone supports Outlook.com, Exchange, Yahoo! Mail and Gmail natively, and supports many other services via the POP and IMAP protocols. Updates added support for more services, such as iCloud and IBM Notes Traveler. Contacts and calendars may be synced from these services as well. Users can also search through their email by subject, body, senders, and receivers. Emails are shown with threads, and multiple email inboxes can be combined into a single view (a feature commonly referred to as "combined inbox") or viewed separately.

Multimedia

Xbox Music and Xbox Video are built-in multimedia hubs providing entertainment and synchronization capabilities between PC, Windows Phone, and other Microsoft products. The two hubs were previously combined until standalone apps were released in late 2013, shortly before Windows Phone 8.1 debuted. The hubs allow users to access music, videos, and podcasts stored on the device, and link directly to the "Xbox Music Store" to buy or rent music and the "Xbox Video Store" to purchase movies and TV episodes. Xbox Music also allows the user to stream music with an Xbox Music Pass. When browsing the music by a particular artist, users are able to view artist biographies and photos. The Xbox Music hub also integrates with many other apps that provide video and music services, including, but not limited to, iHeartRadio, YouTube, and Vevo. The hub also includes Smart DJ, which compiles a playlist of songs stored on the phone similar to the selected song or artist.

The Pictures hub displays the user's Facebook and OneDrive photo albums, as well as photos taken with the phone's built-in camera. Users can also upload photos to social networks, comment on photos uploaded by other people, and tag photos posted to social networks. Multi-touch gestures permit zooming in and out of photos.
An official file manager app called Files, available for download from the Windows Phone Store, enables users to move and rearrange documents, videos, music and other files within their device's internal storage or on an external SD card.

Media support

Windows Phone supports the WAV, MP3, WMA, AMR, AAC/MP4/M4A/M4B and 3GP/3G2 audio formats. Supported video file formats include WMV, AVI, MP4/M4V, 3GP/3G2 and MOV (QuickTime). Support for these audio and video formats depends on the codecs contained inside them. It has also been reported that the DivX and Xvid codecs within the AVI file format are playable on Windows Phone devices. Windows Phone does not support DRM-protected media files obtained from services other than Xbox Music Pass. Supported image file formats include JPG/JPEG, PNG, GIF, TIF and Bitmap (BMP). Users can also add custom ringtones that are less than 1 MB in size and less than 40 seconds long. DLNA streaming and stereoscopic 3D are also supported.

Games

The "Games hub" provides access to games on the phone along with Xbox Live functionality, including the ability for a user to interact with their avatar, view and edit their profile, see their achievements, view leaderboards, and send messages to friends on Xbox Live. The hub also features an area for managing invitations and turn notifications in turn-based multiplayer games. Games are downloaded from the Windows Phone Store.

Search

Bing is the default search engine on Windows Phone handsets because its functions are deeply integrated into the OS, including the use of its map service for location-based searches and queries. However, Microsoft has stated that other search engine applications can be used. For location-based searches, Bing Maps (powered by Nokia's location services) provides turn-by-turn navigation to Windows Phone users, and Local Scout shows points of interest such as attractions and restaurants in the nearby area. On Nokia devices, Nokia's Here Maps is preinstalled in place of Bing Maps. Furthermore, Bing Audio allows the user to match a song with its name, and Bing Vision allows the user to scan barcodes, QR codes, and other types of tags.

Cortana

Every Windows Phone has either a dedicated physical Search button or an on-screen Search button, which was previously reserved for a Bing Search app, but has been replaced on Windows Phone 8.1 devices and later in the United Kingdom and United States by Cortana, a digital personal assistant which can also double as an app for basic searches. Cortana allows users to perform tasks such as setting calendar reminders and alarms, recognizes a user's natural voice, and can be used to answer questions (like current weather conditions, sports scores, and biographies). The app also keeps a "Notebook" to learn a user's behavior over time and tailor reminders for them. Users can edit the "Notebook" to keep information from Cortana or reveal more about themselves.

Office suite

All Windows Phones come preinstalled with Microsoft Office Mobile, which provides interoperability between Windows Phone and the desktop version of Microsoft Office. Word Mobile, Excel Mobile, PowerPoint Mobile, and SharePoint Workspace Mobile apps are accessible through a single "Office Hub", and allow most Microsoft Office file formats to be viewed and edited directly on a Windows Phone device.
The "Office Hub" can access files from OneDrive and Office 365, as well as files which are stored locally on the device's hard drive. Although they are not preinstalled in Windows Phone's "Office Hub," OneNote Mobile, Lync Mobile, and OneDrive for Business can be downloaded separately as standalone applications from the Windows Phone Store. Multitasking Multitasking in Windows Phone is invoked through long pressing the "back" arrow, which is present on all Windows Phones. Windows Phone 7 uses a card-based task switcher, whereas later versions of Windows Phone utilize true background multitasking. Sync Windows Phone 7 Zune Software manages the contents on Windows Phone 7 devices and Windows Phone can wirelessly sync with Zune Software. Later versions Syncing content between Windows Phone 8 and 8.1 and Windows PCs or Macs is provided through the Windows Phone App, which is available for both Windows and Mac OS X. It is the official successor to Zune software only for Windows Phone 8 and Windows Phone 8.1, and allows users to transfer content such as music, videos, and documents. Users also have the ability to use a "Tap and Send" feature that allows for file transfer between Windows phones, and NFC-compatible devices through NFC. Updates Software updates are delivered to Windows Phone users via Microsoft Update, as is the case with other Windows operating systems. Microsoft initially had the intention to directly update any phone running Windows Phone instead of relying on OEMs or wireless carriers, but on January 6, 2012, Microsoft changed their policy to let carriers decide if an update will be delivered. While Windows Phone 7 users were required to attach their phones to a PC to install updates, starting with Windows Phone 8, all updates are done via over-the-air downloads. Since Windows Phone 8, Microsoft has also begun releasing minor updates that add features to a current OS release throughout the year. These updates were first labeled "General Distribution releases" (or GDRs), but were later rebranded simply as "Updates". All third-party applications can be updated automatically from the Windows Phone Store. Advertising platform Microsoft has also launched an advertising platform for the Windows Phone platform. Microsoft's General Manager for Strategy and Business Development, Kostas Mallios, said that Windows Phone will be an "ad-serving machine", pushing advertising and brand-related content to the user. The platform will feature advertising tiles near applications and toast notifications, which will bring updating advertising notifications. Mallios said that Windows Phone will be able to "preserve the brand experience by going directly from the web site right to the application", and that Windows Phone "enables advertisers to connect with consumers over time". Mallios continued: "you're now able to push information as an advertiser, and stay in touch with your customer. It's a dynamic relationship that is created and provides for an ongoing dialog with the consumer." Bluetooth Windows Phone supports the following Bluetooth profiles: Advanced Audio Distribution Profile (A2DP 1.2) Audio/Video Remote Control Profile (AVRCP 1.3) Hands Free Profile (HFP 1.5) Headset Profile (HSP 1.1) Phone Book Access Profile (PBAP 1.1) Bluetooth File Transfer (OBEX) (from Windows Phone 7.8) Windows Phone BTF support is available from Windows Phone 7.8, but is limited to the transferring of pictures, music and videos via a 'Bluetooth Share' app. 
Feature additions

Microsoft keeps a site where people can submit and vote on features they would like to see added to Windows Phone.

Store

The Windows Phone Store was used to digitally distribute music, video content, podcasts, and third-party applications to Windows Phone handsets. The store was accessible using the Zune Software client or the Windows Phone Store hub on devices (though videos were not downloadable through the store hub and had to be downloaded and synced through the Zune software). The store was managed by Microsoft, which included an approval process. As of March 2012, the Windows Phone Store was available in 54 countries.

Music and videos

Xbox Music offered approximately 50 million songs at up to 320 kbit/s in DRM-free MP3 format from the big four music groups (EMI, Warner Music Group, Sony BMG and Universal Music Group), as well as smaller music labels. Xbox Video offered HD movies from Paramount, Universal, Warner Brothers, and other studios, plus television shows from popular television networks. Microsoft offered the Xbox Music Pass music subscription service, which allowed subscribers to download an unlimited number of songs for as long as their subscription was active and play them on current Microsoft devices.

Applications and games

Development

Third-party applications and games for Windows Phone can be based on XNA, a Windows Phone-specific version of Silverlight, the GUI-based Windows Phone App Studio, or the Windows Runtime, which allows developers to develop an app for both the Windows Store and the Windows Phone Store simultaneously. App developers can develop apps using C# / Visual Basic .NET (.NET), C++ (CX) or HTML5/JavaScript. For Windows Phone apps to be designed and tested within Visual Studio or Visual Studio Express, Microsoft offers Windows Phone Developer Tools, which run only on Windows Vista SP2 and later. Microsoft also offers Expression Blend for Windows Phone for free. On November 29, 2009, Microsoft announced the release-to-web (RTW) version of its Visual Basic .NET developer tool, to aid development of Windows Phone apps in Visual Basic. Later versions of Windows Phone support the running of managed code through a Common Language Runtime similar to that of the Windows operating system itself, as opposed to the .NET Compact Framework. This, along with support for native C and C++ libraries, allows some traditional Windows desktop programs to be easily ported to Windows Phone.

Submission

Registered Windows Phone and Xbox Live developers can submit and manage their third-party applications for the platforms through the App Hub web applications. The App Hub provides development tools and support for third-party application developers. Submitted applications undergo an approval process of verifications and validations to check whether they meet the application standardization criteria set by Microsoft. The cost of approved applications is up to the developer, but Microsoft will take 20% of the revenue (the other 80% goes to the developer). Microsoft will only pay developers once they reach a set sales figure, and will withhold 30% tax from non-US developers unless they first register with the United States Government's Internal Revenue Service. Microsoft only pays developers from a list of thirty countries. A yearly fee is also payable for developers wishing to submit apps. In order for an application to appear in the Windows Phone Store, it must be submitted to Microsoft for approval.
Microsoft has outlined the content that it will not allow in applications, which includes content that, among other things, advocates discrimination or hate, promotes usage of drugs, alcohol or tobacco, or includes sexually suggestive material.

Hardware
Windows Phone 7 devices were first produced by HTC, LG and Samsung. These hardware partners were later joined by Acer, Alcatel, Fujitsu, Toshiba, Nokia, and Chinese OEM ZTE. Windows Phone 8 devices were produced by HTC, Huawei, Nokia, and Samsung. At the 2014 Mobile World Congress, Microsoft announced that upcoming Windows Phone 8.1 devices would be manufactured by Celkon, Gionee, HTC, Huawei, JSR, Karbonn, LG, Lenovo, Longcheer, Micromax, Microsoft Mobile, Samsung, Xolo, and ZTE, among others. Sony (under the Xperia or Vaio brand) had also stated its intention to produce Windows Phone devices in the near future. Yezz announced two smartphones in May, and at Computex 2014 BYD, Compal, Pegatron, Quanta and Wistron were also named as new Windows Phone OEMs. In August 2014, Huawei said it was dropping support for Windows Phone due to low sales.

Reception
User interface
The Metro UI and overall interface of the OS were highly praised for their style, with ZDNet noting its originality and fresh clean look. Engadget and ZDNet applauded the integration of Facebook into the People Hub as well as other built-in capabilities, such as Windows Live. However, in version 8.1 the once tight Facebook and Twitter integration was removed, so that updates from those social media sites had to be accessed via their respective apps.

Market share
Windows Phone 7 (2010–2012)
For the first months, market specialists were optimistic about its adoption, with IDC forecasting that Windows Phone would surpass the iPhone by 2015. According to Gartner, 1.6 million devices running a Microsoft mobile OS were sold to customers worldwide in Q1 2011. 1.7 million smartphones using a Microsoft mobile OS were sold in Q2 2011, for a 1.6% market share. In Q3 2011, Microsoft's worldwide market share dropped slightly to 1.5%. In Q4 2011 market share increased to 1.9%, and it stayed at 1.9% for Q1 2012. Reports for Q2, Q3 and Q4 of 2011 include both Windows Phone and a small part of Windows Mobile market share under the same "Microsoft mobile OS" banner, and do not separate the market share values of the two. According to Nielsen, Windows Phone had a 1.7% market share in Q1 2012, which dropped back to 1.3% in Q2 2012.

Windows Phone 8 (2012–2015)
After the release of Windows Phone 8, Gartner reported that Windows Phone's market share jumped to 3% in Q4 2012, a 124% increase over the same period in 2011. In mid-2012, IDC had suggested that Windows Phone might surpass the faltering BlackBerry platform and potentially even Apple's iOS, because of Nokia's dominance in emerging markets such as Asia, Latin America, and Africa, as the iPhone was considered too expensive for most of these regions and BlackBerry OS seemed likely to meet a fate similar to Symbian's. IDC's projections were partially correct, as in Q1 2013 Windows Phone shipments surpassed BlackBerry shipment volume for the first time. IDC later had to slash its Windows Phone predictions once again, to 7 percent of the total market in 2018, because of the slow growth. As of the third quarter of 2013, Gartner reported that Windows Phone held a worldwide market share of 3.6%, up 123% from the same period in 2012 and outpacing Android's rate of growth.
According to Kantar's October 2013 report, Windows Phone accounted for 10.2% of all smartphone sales in Europe and 4.8% of all sales in the United States. Some analysts attributed this spike in sales to both Windows Phone 8 and Nokia's successful push to market low- and mid-range Windows Phones such as the Lumia 520 and Lumia 620 to a younger audience. Gartner reported that Windows Phone market share finished 2013 at 3.2%, which, while down from the third quarter of 2013, was still a 46.7% improvement over the same period in 2012. IDC reported that Windows Phone market share, having peaked in 2013 at 3.4%, had dropped to 2.5% by the second quarter of 2014. In August 2017, the New York Police Department ordered Apple iPhone products to replace its deployment of 36,000 Lumia 830 and Lumia 640 XL Windows Phone devices, partly citing Microsoft's end of support for Windows Phone 8.1 on July 11, 2017 and its minuscule market share.

Developer interest
Microsoft's developer initiative programs and marketing gained attention from application developers. As of Q3 2013, an average of 21% of mobile developers used the Windows Phone platform, with another 35% stating they were interested in adopting it. Some reports indicated that developers may have been less interested in developing for Windows Phone because of lower ad revenue compared to competing platforms. The main criticism of Windows Phone was the lack of applications compared to iOS and Android. This also affected Microsoft's largest partner in the platform, Nokia, whose vice president voiced his frustration at the lack of apps for the platform. A few developers refused to develop apps for the platform while also blocking third-party alternatives. A well-known example was Snapchat, which announced a crackdown on third-party apps of its service and their users in November 2014. Microsoft was forced to remove third-party Snapchat apps (including the popular 6snap) from the Windows Phone Store a month later, while Snapchat never developed an official app for those users. A petition from users requesting an official Snapchat app reached 43,000 signatures in 2015, but the company still decided not to build one. In addition, Google twice blocked Microsoft's own YouTube app for violating its terms of service, objecting to the app's ability to download videos and block ads. The app returned in October 2013, but stripped of many features. By 2014, Windows Phone was losing share and relevance; between that year and 2015 it was reported that developers were backing out of the platform and retiring apps because of the low market share. Many high-profile apps were discontinued by 2015, such as those of American Airlines, NBC, and Pinterest, among others. In addition, Microsoft itself retired some of its own first-party apps.

See also
Comparison of mobile operating systems
Microsoft Surface

References

External links
Official website (Archive)

Mobile phones introduced in 2010
Products and services discontinued in 2020
Discontinued Microsoft products
Smartphones
Microsoft franchises
Cloud clients
Mobile operating systems
ARM operating systems
C (programming language) software
C++ software
Discontinued Microsoft operating systems
Discontinued versions of Microsoft Windows
Defunct consumer brands
15954826
https://en.wikipedia.org/wiki/Sawtooth%20Software
Sawtooth Software
Sawtooth Software, Inc. is a computer software company based in Provo, Utah, United States. The company provides survey software tools, and specializes in conjoint analysis. According to the American Marketing Association, Sawtooth Software was ranked fourth in 2005 among software used in market research (after SPSS, Microsoft Excel, and SAS System).

History
In the late 1960s, Rich Johnson, then an employee of Market Facts, Inc., developed a technique he called "Tradeoff Analysis". It consisted of questioning respondents about various concepts composed of multiple attributes and comparing the responses to the different concepts. In 1971, Paul Green published the article "Conjoint Measurement for Quantifying Judgmental Data". When Johnson became aware of Green's research, he identified his technique as a variety of conjoint analysis. In the mid-1970s, Johnson co-founded the John Morton Company in part to apply computer technology to Tradeoff Analysis. Johnson purchased an Apple II and began to program interviews for use on client projects; each project was programmed custom for the client. During this time Johnson was fascinated by the ability of the computer to facilitate the collection of respondent data. In 1982, Johnson left the John Morton Company and founded Sawtooth Software to pursue creating generalized software for use in marketing research.

Products
Sawtooth Software has created products for traditional conjoint analysis, as well as discrete choice analysis and other forms of conjoint. In addition, there are non-conjoint products involving interviewing, perceptual mapping, and cluster analysis.

Conjoint Techniques
Adaptive Conjoint Analysis (ACA)
Adaptive Choice Based Conjoint (ACBC)
Choice Based Conjoint (CBC)
Conjoint Value Analysis (CVA)
Maximum Difference Scaling (MaxDiff)
Menu-Based Choice (MBC)

Analysis Techniques
Hierarchical Bayes (HB)
Latent Class (LC)
Logit
Ordinary Least Squares (OLS)
Convergent Cluster & Ensemble Analysis (CCEA)
Composite Product Mapping (CPM) (retired)

General Interviewing Packages
Lighthouse Studio
Discover
Ci3 (retired)

Relationships
Sawtooth Software maintains relationships with academics and market research organizations such as the American Marketing Association and ESOMAR. Researchers developing new interviewing techniques such as Maximum Difference (Best-Worst) scaling have used Sawtooth Software to assist in constructing their projects. In addition to producing products, Sawtooth Software has hosted a research conference every 18 months in the United States since 1987. The conference is not a sales event, and speakers with contrary opinions or competing products attend and present.

References

External links
Sawtooth Software Homepage

Companies based in Utah
1983 establishments in Idaho
Market research organizations
Market research companies of the United States
Software companies based in Utah
Statistical survey software
Software companies of the United States
Decision-making software
32715276
https://en.wikipedia.org/wiki/Organization%20of%20United%20States%20Air%20Force%20Units%20in%20the%20Gulf%20War
Organization of United States Air Force Units in the Gulf War
The 1990–1991 Gulf War was the last major United States Air Force combat operation of the 20th century. Command and control of the allied air forces deployed to the Middle East, initially as part of Operation Desert Shield and later engaging in combat operations during Operation Desert Storm, was assigned to United States Central Command Air Forces (USCENTAF), the USAF component of the joint United States Central Command.

United States Air Force units were initially deployed to Saudi Arabia in August 1990, being assigned directly to CENTAF with a mission to defend the kingdom. In November 1990, the decision was made to enhance the force into an offensive-capable one, and additional units were ordered deployed to CENTAF. As a result, CENTAF set up a table of organization which established provisional air divisions to prevent too many units reporting directly to CENTAF headquarters. These were as follows:

The 14th Air Division (Provisional) commanded deployed units, primarily from Tactical Air Command and United States Air Forces in Europe, with the mission of destroying enemy air, missile and ground forces, as well as enemy infrastructure targets. To accomplish this mission, the 14th controlled A-10 Thunderbolt II ground-attack aircraft; F-15C Eagle and F-16 Fighting Falcon fighters; F-111 tactical bombers; EF-111 Raven electronic combat aircraft and F-117 stealth attack aircraft. The division also provided electronic warfare, reconnaissance, and in-theater attached Strategic Air Command refueling support.

The 15th Air Division (Provisional) commanded deployed Tactical Air Command units with a reconnaissance and electronic warfare mission focused on defeating enemy ground-based air defenses and increasing the effectiveness of friendly formations. Aircraft deployed included RF-4C Phantom II tactical reconnaissance aircraft; F-4G Phantom II anti-radar aircraft; EC-130H Compass Call electronic warfare aircraft and two prototype E-8A Joint STARS battle management and command and control aircraft.

The 1610th Air Division (Provisional) controlled Military Airlift Command C-130E/H Hercules theater airlift, aeromedical evacuation and Air Force Special Operations Command forces. Deployed Strategic Air Command strategic electronic warfare and reconnaissance units were also attached.

The 17th Air Division (Provisional) commanded primarily provisional air refueling wings created from active-duty KC-135/KC-10 units of Strategic Air Command's Fifteenth Air Force and SAC Air National Guard KC-135 units deployed within the CENTAF AOR.

The SAC 7th Air Division commanded deployed air refueling and B-52 Stratofortress bombardment wings located outside of the CENTAF AOR.

The 7440th Composite Wing was a United States Air Forces in Europe (USAFE) provisional wing under Joint Task Force Proven Force that flew combat missions over northern and central Iraq from Incirlik Air Base, Turkey.

Aftermath
After the end of combat operations, most of the combat forces of CENTAF returned to their home stations. The provisional organizations established were inactivated, their temporary nature meaning that no official lineage or history was retained by the USAF. On 13 March 1991, Headquarters Tactical Air Command activated the 4404th Tactical Fighter Wing (Provisional) at Prince Sultan Air Base, Al Kharj, to replace the provisional air divisions. The original assets of the 4404th TFW came from the 4th TFW (Provisional), which had operated during the Gulf War.
The long-term effect of the deployment and organization of Air Force wings and groups to CENTAF for the Gulf War eventually led to an Air Force-wide reorganization of its Cold War command structure, the result being the modern Air Force organization structure which exists today. Air Force expeditionary units, which are activated and inactivated as needed to support deployments, were developed, replacing the "provisional" units of the Gulf War.

14th Air Division (Provisional)
Brigadier General Buster Glosson served as commander, 14th Air Division (Provisional), and director of campaign plans for U.S. Central Command Air Forces, Riyadh, Saudi Arabia.

1st Tactical Fighter Wing (Provisional)
Deployed from Langley Air Force Base, Virginia
Headquarters: King Abdul Aziz Air Base, Dhahran, Saudi Arabia

4th Tactical Fighter Wing (Provisional)
Deployed from Seymour Johnson Air Force Base, North Carolina
Headquarters: Prince Sultan Air Base, Al Kharj, Saudi Arabia

33d Tactical Fighter Wing (Provisional)
Deployed from Eglin Air Force Base, Florida
Headquarters: King Faisal Air Base, Tabuk, Saudi Arabia

37th Tactical Fighter Wing (Provisional)
Deployed from Tonopah Test Range Airport, Nevada
Headquarters: King Khalid Air Base, Khamis Mushait, Saudi Arabia

48th Tactical Fighter Wing (Provisional)
Deployed from RAF Lakenheath, England
Headquarters: Taif Air Base, Taif, Saudi Arabia

354th Tactical Fighter Wing (Provisional)
Deployed from Myrtle Beach Air Force Base, South Carolina
Headquarters: King Fahd International Airport, Dammam, Saudi Arabia

363d Tactical Fighter Wing (Provisional)
Deployed from Shaw Air Force Base, South Carolina
Headquarters: Al Dhafra Air Base, Abu Dhabi, United Arab Emirates

388th Tactical Fighter Wing (Provisional)
Deployed from Hill Air Force Base, Utah
Headquarters: Al Minhad Air Base, Dubai, United Arab Emirates

401st Tactical Fighter Wing (Provisional)
Deployed from Torrejón Air Base, Spain
Headquarters: Doha International Airport, Qatar

4410th Operational Support Wing (Provisional)
Deployed from Eglin AFB, Eglin Aux Fld #3, Florida
Headquarters: King Khalid Military City, Saudi Arabia

15th Air Division (Provisional)
35th Tactical Fighter Wing (Provisional)
Deployed from George Air Force Base, California
Headquarters: Doha International Airport, Qatar

552d Airborne Warning and Control Wing (Provisional)
Deployed from Tinker Air Force Base, Oklahoma
Headquarters: King Khalid Air Base, Khamis Mushait, Saudi Arabia

41st Electronic Combat Squadron (Provisional)
Deployed from Davis-Monthan Air Force Base, Arizona
Headquarters: Al Bateen Air Base, Abu Dhabi, United Arab Emirates
EC-130H Compass Call (Tail Code: DM), 27 August 1990 – 17 April 1991

4411th Joint STARS Squadron
Headquarters: King Khalid Air Base, Khamis Mushait, Saudi Arabia
E-8A Joint STARS, December 1990 – March 1991 (2 aircraft)

1610th Air Division (Provisional)
314th Tactical Airlift Wing (Provisional)
Deployed from Little Rock Air Force Base, Arkansas
Headquarters: Al Bateen Air Base, Abu Dhabi, United Arab Emirates

317th Tactical Airlift Wing (Provisional)
Deployed from Pope Air Force Base, North Carolina
Headquarters: RAFO Thumrait, Oman

435th Tactical Airlift Wing (Provisional)
Deployed from Rhein-Main Air Base, Germany
Headquarters: Al Ain International Airport, Al Ain, United Arab Emirates

1640th Tactical Airlift Wing (Provisional)
Detached from Headquarters Military Airlift Command, Scott AFB, Illinois
Headquarters: Masirah Air Base, Oman

1650th Tactical Airlift Wing (Provisional)
Detached from Headquarters Military Airlift Command, Scott AFB, Illinois
Headquarters: Sharjah International Airport, Sharjah, United Arab Emirates

1670th Tactical Airlift Group (Provisional)
Detached from Headquarters Military Airlift Command, Scott AFB, Illinois
Headquarters: Prince Sultan Air Base, Al Kharj, Saudi Arabia

Air Force Special Operations Command (Provisional)
Detached from Headquarters AFSOC, Hurlburt Field, Florida
Headquarters: King Fahd International Airport, Dammam, Saudi Arabia

1612th Military Airlift Squadron (Provisional)
Detached from Headquarters Military Airlift Command, Scott AFB, Illinois
Headquarters: King Khalid Air Base, Khamis Mushait, Saudi Arabia
C-21 Learjet (8 aircraft)
C-12 Huron (7 aircraft)
RU-21 (7 aircraft)
Provided Special Air Mission transport for CENTAF/CENTCOM leadership and civilian VIPs from coalition nations and the United States.

17th Air Division (Provisional)
1700th Strategic Wing (Provisional)
Detached from Headquarters 7th Air Division, Strategic Air Command, Ramstein AB, Germany
Headquarters: King Khalid Air Base, Khamis Mushait, Saudi Arabia

1701st Air Refueling Wing (Provisional)
Headquarters: King Abdul Aziz Air Base, Jeddah, Saudi Arabia

1701st Strategic Wing (Provisional)
Headquarters: King Abdul Aziz Air Base, Jeddah, Saudi Arabia

1702d Air Refueling Wing (Provisional)
Headquarters: Seeb International Airport, Muscat, Oman

1706th Air Refueling Wing (Provisional)
Headquarters: Cairo West Airport, Egypt

1708th Bombardment Wing (Provisional)
Headquarters: King Abdul Aziz Air Base, Jeddah, Saudi Arabia
20 B-52G Stratofortresses. The lead unit within the 1708th BW (P) was the 524th BS/379th BW from Wurtsmith AFB, Michigan. Aircraft and crews were also drawn from the 62d and 596th BS/2d BW at Barksdale AFB, Louisiana; the 69th BS/42d BW at Loring AFB, Maine; the 328th BS/93d BW at Castle AFB, California; and the 668th BS/416th BW at Griffiss AFB, New York. B-52 operations at Jeddah were not possible prior to the initiation of combat, so the wing gained its aircraft when the conflict began. Six aircraft from the 42d BW were moved to Jeddah from Diego Garcia on 17 January, and 10 more flew in from Wurtsmith, attacking targets en route. Although launched from Wurtsmith and flown by 379th BW crews, three of the aircraft came from the 93d BW at Castle and two from the 42d BW at Loring.

1709th Air Refueling Wing (Provisional)
Headquarters: King Abdul Aziz Air Base, Jeddah, Saudi Arabia

1712th Air Refueling Wing (Provisional)
Headquarters: Al Bateen Air Base, Abu Dhabi, United Arab Emirates

1713th Air Refueling Wing (Provisional)
Headquarters: Al Bateen Air Base, Abu Dhabi, United Arab Emirates

7th Air Division
801st Air Refueling Wing (Provisional)
Headquarters: Morón Air Base, Spain

801st Bombardment Wing (Provisional)
Headquarters: Morón Air Base, Spain
The 801st BW (P) consisted of 28 B-52G Stratofortresses and was formed around a nucleus provided by the 2d Bombardment Wing at Barksdale AFB, Louisiana, drawing aircraft and crews from the 524th BS/379th BW at Wurtsmith AFB, Michigan; the 668th BS/416th BW at Griffiss AFB, New York; and the 69th BS/42d BW at Loring AFB, Maine. One B-52G (52-6503) was sent from the 340th BS/97th BW at Eaker AFB, Arkansas.
802d Air Refueling Wing (Provisional)
Headquarters: Lajes Field, Azores, Portugal

804th Air Refueling Wing (Provisional)
Headquarters: Incirlik Air Base, Turkey

806th Bombardment Wing (Provisional)
Headquarters: RAF Fairford, England
The 806th BW (P) was formed around a cadre of air and ground crews provided by the 97th Bombardment Wing, Eaker AFB, Arkansas. It consisted of a total of 11 B-52G Stratofortresses, also drawn from the 668th BS/416th BW at Griffiss AFB, New York; the 596th BS/2d BW at Barksdale AFB, Louisiana; and the 328th BS/93d BW at Castle AFB, California.

807th Air Refueling Wing (Provisional)
Headquarters: Incirlik Air Base, Turkey

810th Air Refueling Wing (Provisional)
Headquarters: Incirlik Air Base, Turkey

4300th Bombardment Wing (Provisional)
Headquarters: Diego Garcia, British Indian Ocean Territory (BIOT)
The lead unit for the 4300th BW (P) was the 69th BS/42d BW from Loring AFB, Maine. Aircraft were also drawn from the 328th BS/93d BW at Castle AFB, California. Six aircraft were transferred to Jeddah, Saudi Arabia on 17 January 1991, and they were replaced by six B-52Gs from the 1500th SW (P) at Andersen AFB, Guam.

See also
List of MAJCOM wings of the United States Air Force
List of USAF Provisional Wings assigned to Strategic Air Command

References
Department of Defense Final Report to Congress, "Conduct of the Persian Gulf War", April 1992
Smallwood, 2005, Warthog: Flying the A-10 in the Gulf War, Potomac Books Inc.
Mixer, Ronald E., Genealogy of the Strategic Air Command, Battermix Publishing Company, 1999
Mixer, Ronald E., Strategic Air Command, An Organizational History, Battermix Publishing Company, 2006
Steijger, Cees, 1991, A History of USAFE, Airlife Publishing Limited
Baugher, Joe, 1999, McDonnell RF-4C Phantom II
Baugher, Joe, 2003, McDonnell F-4G Phantom II
Baugher, Joe, 2000, F-15 Eagle in Desert Storm
Baugher, Joe, 2000, Service of General Dynamics F-16 Fighting Falcon with USAF
Baugher, Joe, 2005, Developmental and Operational History of the F-117 Nighthawk, F-117A: Desert Storm
Baugher, Joe, 1999, General Dynamics F-111F
Baugher, Joe, 1999, Grumman EF-111A Raven
Globalsecurity.org, Operation Desert Storm Air Forces Table of Organization
Gulf War Air Power Survey Series, Volume I, Part 2, Command and Control, pp. 475–476

20th-century history of the United States Air Force
Orders of battle
United States Air Force units and formations by war
55251966
https://en.wikipedia.org/wiki/DNA%20encryption
DNA encryption
DNA encryption is the process of hiding or obfuscating genetic information by a computational method in order to improve genetic privacy in DNA sequencing processes. The human genome is complex and long, but it is very possible to interpret important, identifying information from small variabilities rather than reading the entire genome. A whole human genome is a string of 3.2 billion base-paired nucleotides, the building blocks of life, but between individuals genetic variation differs by only 0.5%, an important 0.5% that accounts for all of human diversity, the pathology of different diseases, and ancestral history. Emerging strategies incorporate different methods, such as randomization algorithms and cryptographic approaches, to de-identify the genetic sequence from the individual, and fundamentally, to isolate only the necessary information while protecting the rest of the genome from unnecessary inquiry. The priority now is to ascertain which methods are robust, and how policy should ensure the ongoing protection of genetic privacy.

History
In 2003, the National Human Genome Research Institute and its affiliated partners successfully sequenced the first whole human genome, a project that cost just under $3 billion to complete. Four years later, James Watson – one of the co-discoverers of the structure of DNA – was able to sequence his genome for less than $1.5 million. As genetic sequencing technologies have proliferated, streamlined and become adapted to clinical use, they can now provide remarkable insight into individual genetic identities at a much lower cost, with biotech competitors vying for the title of the $1,000 genome. Genetic material can now be extracted from a person's saliva, hair, skin, blood, or other sources, sequenced, digitized, stored, and used for numerous purposes. Whenever data is digitized and stored, there is the possibility of privacy breaches. While modern whole genome sequencing technology has allowed for unprecedented access to and understanding of the human genome, and excitement about the potential of personalized medicine, it has also generated serious conversation about the ethics and privacy risks that accompany this process of uncovering an individual's essential instructions of being: their DNA sequence.

Research
Genetic sequencing is a pivotal component of producing scientific knowledge about disease origins, disease prevention, and developing meaningful therapeutic interventions. Much of research utilizes large-group DNA samples or aggregate genome-wide datasets to compare and identify genes associated with particular diseases or phenotypes; therefore, there is much opposition to restricting genome database accessibility and much support for fortifying such wide-scale research. For example, if an informed consent clause were to be enforced for all genetics research, existing genetic databases could not be reused for new studies: all datasets would either need to be destroyed at the end of every study or all participants would need to re-authorize permissions with each new study. As genetic datasets can be extrapolated to closely related family members, this adds another dimension of required consent to the research process. This fundamentally raises the question of whether these restrictions are necessary privacy protections or a hindrance to scientific progress.
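To put the scale of that 0.5% variation in perspective, here is a rough back-of-the-envelope sketch in Python; the 3.2 billion and 0.5% figures come from the text above, while the computed count is simple arithmetic rather than a value from any cited source.

```python
# Rough estimate of how many base positions differ between two individuals,
# using the figures quoted in the text above.
GENOME_LENGTH = 3_200_000_000   # ~3.2 billion base pairs
VARIATION_RATE = 0.005          # ~0.5% inter-individual variation

differing_positions = int(GENOME_LENGTH * VARIATION_RATE)
print(f"~{differing_positions:,} differing positions")  # ~16,000,000
```

Even this small fraction amounts to roughly 16 million informative positions, which is why partial genetic data can still identify an individual.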
Clinical Use
In medicine, genetic sequencing is not only important for traditional uses, such as paternity tests, but also for facilitating ease of diagnosis and treatment. Personalized medicine has been heralded as the future of healthcare, as whole genome sequencing has provided the possibility of personalizing treatment to an individual's expression and experience of disease. As pharmacology and drug development are based on population studies, current treatments are normalized to whole-population statistics, which might reduce treatment efficacy for individuals, as everyone's response to a disease and to drug therapy is uniquely bound to their genetic predispositions. Already, genetic sequencing has expedited prognostic counseling in monogenic diseases that require rapid, differential diagnosis in neonatal care. However, the often blurred distinction between medical usage and research usage can complicate how privacy between these two realms is handled, as they often require different levels of consent and leverage different policy.

Commercial Use
Even in the consumer market, people have flocked to Ancestry.com and 23andMe to discover their heritage and elucidate their genotypes. As the nature of consumer transactions allows these electronic click-wrap models to bypass traditional forms of consent in research and healthcare, consumers may not completely comprehend the implications of having their genetic sequence digitized and stored. Furthermore, corporate privacy policies often operate outside the realm of federal jurisdiction, exposing consumers to informational risks, both in terms of their genetic privacy and their self-disclosed consumer profile, including self-disclosed family history, health status, race, ethnicity, social networks, and much more. Simply having databases invites potential privacy risks, as data storage inherently entails the possibility of data breaches and governmental solicitation of datasets. 23andMe has already received four requests from the Federal Bureau of Investigation (FBI) to access consumer datasets, and although those requests were denied, this reveals a conundrum similar to the FBI–Apple encryption dispute.

Forensic Use
DNA information can be used to solve criminal cases by establishing a match between a known suspect of a particular crime and an unknown suspect of an unsolved crime. However, DNA information on its own is subject to expected errors of a certain probability and should not be treated as entirely reliable evidence on its own.

Policy
As an individual's genomic sequence can reveal telling medical information about themselves and their family members, privacy proponents believe that there should be certain protections in place to protect the privacy and identity of the user from possible discrimination by insurance companies or employers, the major concern voiced. There have been instances in which genetic discrimination has occurred, often revealing how science can be misinterpreted by non-experts. In the 1970s, African-Americans were denied insurance coverage or charged higher premiums because they were known carriers of sickle-cell anemia, even though carriers do not have any medical problems themselves, and the carrier trait actually confers resistance against malaria. The legitimacy of these policies has been challenged by scientists who condemn this attitude of genetic determinism: that genotype wholly determines phenotype.
Environmental factors, differential development patterns, and the field of epigenetics all suggest that gene expression is much more complex, and that genes are neither a diagnosis nor a reliable prediction of an individual's medical future. Legislation has emerged in response to genetic exceptionalism, the heightened scrutiny expected of genomics research, such as the 2008 Genetic Information Nondiscrimination Act (GINA) in the United States; however, in many cases, the scope and accountability of formal legislation is rather uncertain, as the science seems to be proceeding at a much more rapid pace than the law, and specialized ethics committees have had to fill this necessary niche. Much of the criticism targets how policy fundamentally lacks an understanding of the technical issues involved in genome sequencing and fails to address the fact that, in the event of a data breach, an individual's personal genome cannot be replaced, complicating privacy protection even further. As computational genomics is such a technical field, the translation of expert language into policy is difficult, let alone translation into lay language, presenting a barrier to public understanding of the capabilities of current genomic sequencing technologies, which ultimately makes the discourse about protecting genetic privacy without impeding scientific advancement an even more difficult one to have. Across the world, each country has unique healthcare and research frameworks that produce different policy needs; genetic privacy policy is further complicated when considering international collaborations on genetic research or international biobanks, databases that store biological samples and DNA information. Furthermore, research and healthcare are not the only fields that require formal jurisdiction; other areas of concern include the genetic privacy of those in the criminal justice system and those who engage with private consumer-based genomic sequencing.

Forensic Science
England and Wales
91% of the profiles in the largest forensic DNA database in the world, the National Criminal Intelligence DNA Database (NDNAD), come from residents of England and Wales. The NDNAD stores genetic information of criminally convicted individuals, those who were charged but acquitted of a recordable offence, those who were arrested but never charged with a recordable offence, and those who are under counterterrorism control. Of the 5.5 million people in the database, which represents 10% of the total population, 1.2 million have never been convicted of a crime. The European Court of Human Rights decided, in the case of S and Marper v United Kingdom (2008), that the government must present sufficient justification for differential treatment of DNA profiles of those in the criminal justice system compared with those of non-convicted individuals; essentially, there must be no abuse of retained biological materials and DNA information. The decision highlighted several existing issues with the current system that pose privacy risks for the individuals involved: the storage of personal information alongside genetic information, the storage of DNA profiles with the inherent capacity to reveal genetic relationships, and, fundamentally, the fact that storing cellular samples and DNA profiles at all creates opportunities for privacy risks. As a result, the Protection of Freedoms Act 2012 was created to ensure proper use of collected DNA materials and to regulate their storage and destruction.
However, many problems still persist, as samples can be retained indefinitely in databases regardless of whether the affected individual was convicted, including the samples of juvenile offenders. Critics have argued that this long-term retention could lead to stigmatization of affected individuals and inhibit their re-integration into society, and that the samples are subject to misuse through discriminatory behavior within the criminal justice system.

Germany
In 1990, the Federal Supreme Court of Germany and the Federal Constitutional Court of Germany decided that sections of the German Code of Criminal Procedure provided a justifiable legal basis for the use of genetic fingerprinting in identifying criminals and absolving innocents. The decisions, however, lacked specific details on how biological materials could be obtained and how genetic fingerprinting could be utilized; only regulations on blood tests and physical examinations were explicitly outlined. In 1998, the German Parliament authorized the establishment of a national DNA database, under mounting pressure to prevent cases of sexual abuse and homicides involving children. This decision was ruled constitutional, as supported by a compelling public interest, by the Federal Constitutional Court in 2001, despite some criticism that the right of informational self-determination was violated. The court did mandate that retention of DNA information and samples must be supported by evidence that the individual might commit a similar crime in the future. To address the legal uncertainty, the Act on Forensic DNA Analysis of 2005 introduced provisions that set out exact and limited legal grounds for the use of DNA-based information in criminal proceedings. Some sections provide that DNA samples may only be used where necessary to accelerate the investigation or eliminate suspects, and that genetic fingerprinting must be ordered by a court. Since its implementation, around 8,000 new data sets have been added to the database every month, bringing into question the necessity of such wide-scale data collection and whether the wording of the provisions provides effective privacy protection. A recent controversial decision by the German government expanded the range of familial searching by DNA dragnet to identify genetic relatives of sexual and violent perpetrators – an action previously deemed to have no legal basis by the Federal Supreme Court of Germany in 2012.

South Korea
The National Forensic Service of South Korea and the Public Prosecution Authority of South Korea established separate DNA analysis departments in 1991, despite initial public criticism that the data collection was enacted without considering the informational privacy of the subjects involved, criticism that turned to support after a series of high-profile cases. In 2006, a bill proposed in the National Assembly on the collection and use of DNA information outlined crime categories for the storage, control, and destruction of DNA samples and DNA information. However, the bill failed to pass, as it would not have translated into any significant change in actual practice: the crime categories included were not comprehensive and were only applicable to obtaining biological information without an individual's consent, and the protocol for destroying collected samples was unclear, exposing them to misuse.
The DNA Information Act of 2009 attempted to resolve these weaknesses, including provisions stating that biologically sensitive information may only be collected from convicted individuals, confined suspects, and crime scenes. Genetic fingerprinting was made permissible for specific crimes, including arson, murder, kidnapping, rape or sexual molestation, trespass upon a residence at night for stealing, larceny, burglary, and numerous other violent crimes. The act also required a written warrant for acquiring samples from convicted criminals or suspects if the individuals concerned do not give written consent. All samples must be destroyed in a timely manner if the individual concerned is proclaimed innocent or acquitted, if their prosecution is dismissed, or upon their death. Importantly, if collected samples are used to identify individuals at a crime scene, the DNA information must be destroyed upon successful identification. However, there are still several flaws in and criticisms of this legislation, in terms of clarifying the presumption of innocence, the rather trivial enforcement of sample destruction (only 2.03% of samples are deleted annually) and of the written warrant requirement (99.6% of samples are obtained without a warrant), and there is still much debate about whether this legislation violates the right of informational self-determination.

Biobanks
United States
In the United States, biobanks fall primarily under the jurisdiction of the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule and the Federal Policy for Protection of Human Subjects (Common Rule). As neither of these rules was conceived with the intention of regulating biobanks, and regulation is decentralized, there have been many challenges in their application and enforcement, and federal law fails to directly address international policy and how data can be shared outside of the EU–US Safe Harbor Agreement. An area that needs clarification is how federal and state laws apply differentially and specifically to different biobanks, researchers, or projects, a situation further complicated by the fact that most biobanks are part of larger entities, or in collaboration with other institutions, blurring the line between public and private interests. About 80% of all biobanks have internal oversight boards that regulate data collection, usage, and distribution. There are three basic models for access to biobank samples and data: open access (unrestricted to anyone), tiered access (some restrictions dependent on the nature of the project), and controlled access (tightly controlled access). GINA's provisions prohibit health insurers from requiring genetic testing or requesting genetic information for enrollment purposes, and prohibit employers from requesting genetic testing or genetic information for any type of employment assessment (hiring, promotion, termination). However, insurers can request genetic information to determine coverage of a specific procedure. Some groups are also excluded from GINA's provisions, including insurers and employers of federal government employees and the military, and employers with fewer than 15 employees.

China
China has a widespread network of hospitals and research institutes. It is currently undergoing a plan to create a more cohesive framework for data sharing among existing biobanks, which were previously under the jurisdiction of overlapping and confusing regulatory laws.
Many biobanks operate independently or within a network of other biobanks, the most prominent being the Shanghai Biobank Network. Under this main network, guidelines detail specific de-identification policies and explicitly endorse broad consent. Recently, the Chinese Constitution has formally recognized individual privacy as a distinct and independent constitutional right, and legislators have therefore begun developing a Draft Ordinance on Human Genetics Resources to organize national laws on biobanking management measures, legal liability, and punishment for violations. International data sharing will be even more strictly regulated under these federal laws.

Australia
Biobanks in Australia are mainly under the regulation of healthcare privacy guidelines and human research ethics committees; no formal biobank legislation exists, but international data sharing is widely permitted. The National Health and Medical Research Council (NHMRC) develops guidelines for, and funds, many of these institutions. There is discussion towards broad consent for biobanking.

Consumer Genetic Testing
The Electronic Frontier Foundation, a privacy advocate, found that existing legislation does not have formal jurisdiction in ensuring consumer privacy where DNA information is concerned. Genetic information stored by consumer businesses is not protected by HIPAA; therefore, these companies can share genetic information with third parties, subject to their own privacy statements. Most genetic testing companies only share anonymized, aggregated data with users' consent. Ancestry.com and 23andMe do sell such data to research institutions and other organizations, and can ask for case-by-case consent to release non-anonymized data to other parties, including employers or insurers. 23andMe even warns users that re-identification is possible. If a consumer explicitly refuses research use or requests that their data be destroyed, 23andMe is still allowed to use their identifying and behavioral consumer information, such as browsing patterns and geographical location, for other marketing services.

Areas of Concern
Many computational experts have developed, and are developing, more secure systems of genomic sequencing to protect the future of this field from misguided jurisdiction and wrongful application of genetic data, and above all, to protect the genetic privacy of individuals. There are currently four major areas of genetics research for which privacy-preserving technologies are being developed:

String searching and comparison
Paternity tests, genetic compatibility tests, and ancestry testing are all types of medical tools that rely on string searching and comparison algorithms. Simply put, this is a needle-in-a-haystack approach, in which a dataset is searched for a matching "string", the sequence or pattern of interest. As these types of testing have become more common, and adapted to consumer genomic models such as smartphone apps and trendy DNA tests, current privacy-securing methods are focused on fortifying this process and protecting both healthcare and private usage.

Aggregate data release
The modern age of big data and large-scale genomic testing necessitates processing systems that minimize privacy risks when releasing aggregate genomic data, which essentially means ensuring that individual data cannot be discerned within a genomic database. This differential privacy approach offers a simple evaluation of the security of a genomic database, and many researchers provide "checks" on the stringency of existing infrastructures; a sketch of the core idea appears below.
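The differential privacy idea described above can be illustrated with a minimal sketch: an aggregate allele count is released only after adding Laplace noise calibrated to how much any one person could change the count. This is a toy illustration of the general mechanism, not the algorithm of any specific system cited here; the cohort data and the epsilon value are invented for the example.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponentials is Laplace(0, scale) distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_allele_count(genotypes: list[int], epsilon: float) -> float:
    # Each participant carries 0, 1 or 2 copies of the variant allele, so
    # removing one person changes the count by at most 2 (the sensitivity).
    sensitivity = 2.0
    true_count = sum(genotypes)
    return true_count + laplace_noise(sensitivity / epsilon)

# Hypothetical cohort: allele copies carried by each of 10 participants.
cohort = [0, 1, 2, 0, 1, 1, 0, 2, 1, 0]
print("true count:", sum(cohort))
print("released count:", round(private_allele_count(cohort, epsilon=0.5), 2))
```

Smaller epsilon values add more noise and hence give stronger privacy, which is exactly the accuracy trade-off the critics quoted later in this section point to.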
Alignment of raw genomic data
One of the most important developments in the field of genomics is the capacity for read mapping, in which millions of short sequences can be aligned to a reference DNA sequence in order to process large datasets efficiently. As this high-capacity process is often divided between public and private computing environments, there is considerable associated risk, and stages where genetic privacy is particularly vulnerable; therefore, current studies focus on how to provide secure operations across two different data domains without sacrificing efficiency and accuracy.

Clinical use
With the advent of high-throughput genomic technology allowing unprecedented access to genetic information, personalized medicine is gaining momentum as the promised future of healthcare, rendering secure genomic testing models imperative for the progress of medicine. In particular, concerns center on how this process will involve multiparty engagement with, and access to, data. The distinction between genetic sequencing for medical and for research purposes is a contentious one, and furthermore, any time healthcare is involved in a discussion, the dimension of patient privacy must be considered, as it may conflict with or complement genetic privacy.

Encryption Methods
Secure read mapping
Secure read mapping is essential to genomics research, as read mapping is important not only for DNA sequencing but also for identifying target regulatory molecules in RNA-Seq. One solution proposes splitting read mapping into two tasks in a hybrid computing operation: the exact matching of reads using keyed hash values can be conducted on a public cloud, while the alignment of reads is conducted on a private cloud. As only keyed hash values are exposed to public scrutiny, the privacy of the original sequence is preserved (a toy version of this keyed-hash matching is sketched after this section). However, as alignment processes tend to be high-volume and work-intensive, most sequencing schemes still functionally require third-party computing operations, which reintroduce privacy risks in the public cloud domain.

Secure string searching
Numerous genetic screening tests rely on string searching and have become commonplace in healthcare; therefore, the privacy of such methodologies has been an important area of development. One protocol hides the position and size of partial substrings, allowing one party (the researcher or physician) with the digitized genome and a second party (the research subject or patient) with sole ownership of his or her DNA marker to conduct secure genetic tests. Only the researcher or the physician learns the conclusion of the string searching and comparison scheme, and neither party can access other information, ensuring privacy preservation.

Secure genome query
The basis of personalized medicine and preventative healthcare is establishing genetic compatibility by comparing an individual's genome against known variations to estimate susceptibility to diseases, such as breast cancer or diabetes, to evaluate pharmacogenomics, and to query biological relationships among individuals. For disease risk tests, studies have proposed a privacy-preserving technique that utilizes homomorphic encryption and secure integer comparison, and suggest storing and processing sensitive data in encrypted form. To ensure privacy, the storage and processing unit (SPU) stores all the single-nucleotide polymorphisms (SNPs) - the real SNPs observed in the patient - together with redundant content from a set of potential SNPs. Another solution developed three protocols for securely calculating edit distance, combining Yao's garbled circuits with a banded alignment algorithm. The major drawback of this solution is its inability to perform large-scale computations while retaining accuracy.
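The keyed-hash matching step mentioned under "Secure read mapping" can be sketched in a few lines of Python. This is an illustrative toy rather than the published protocol: the data owner HMACs every k-mer of the reference with a secret key, the public cloud stores and compares only the resulting digests, and exact matches are found without the cloud ever seeing the underlying bases. The sequences, key, and k-mer length are invented for the example.

```python
import hmac
import hashlib

KEY = b"owner-secret-key"   # held only by the data owner, never by the cloud
K = 4                       # k-mer length (tiny, for illustration)

def keyed_digest(kmer: str) -> str:
    # HMAC-SHA256 of a k-mer; the cloud can compare digests but not invert them.
    return hmac.new(KEY, kmer.encode(), hashlib.sha256).hexdigest()

def kmer_digests(seq: str) -> dict[str, int]:
    # Map each k-mer's digest to the first position where it occurs.
    out = {}
    for i in range(len(seq) - K + 1):
        out.setdefault(keyed_digest(seq[i:i + K]), i)
    return out

# The owner pre-computes digests of the reference and uploads ONLY these.
reference = "ACGTACGGATTACA"
public_index = kmer_digests(reference)

# Later, a read's digest is matched on the public cloud side.
read = "CGGA"
hit = public_index.get(keyed_digest(read))
print("match at reference position:", hit)  # 5
```

As the surrounding text notes, exact matching is the easy half; the expensive approximate-alignment step still has to run in a trusted or private environment.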
Secure genome-wide association studies
Genome-wide association studies (GWAS) are important in locating specific variations in genome sequences that lead to disease. Privacy-preserving algorithms that identify SNPs significantly associated with diseases are based on introducing random noise into aggregate statistics to protect individual privacy. Another study utilizes the nature of linkage disequilibrium to select the most useful datasets while maximizing protection of patient privacy with injected noise; however, it may lack effective disease-association capabilities. Critics of these methods note that a substantial amount of noise is required to satisfy differential privacy for even a small ratio of SNPs, an impracticality for conducting efficient research.

Authenticated encryption storage
The nature of genomic sequences requires a specific encryption tool to protect against low-complexity (repetitive content) attacks and known-plaintext attacks (KPA), given the small set of expected symbols. Cryfa uses packing (reducing the storage size), a shuffling mechanism (randomizing the symbol positions), and the AES cipher (Advanced Encryption Standard) to securely store FASTA, FASTQ, VCF, SAM and BAM files with authenticated encryption.

References

Applied genetics
15300413
https://en.wikipedia.org/wiki/Embedded%20database
Embedded database
An embedded database system is a database management system (DBMS) which is tightly integrated with an application software; it is "embedded in the application". It is actually a broad technology category that includes database systems with differing application programming interfaces (SQL as well as proprietary, native APIs), database architectures (client-server and in-process), storage modes (on-disk, in-memory, and combined), database models (relational, object-oriented, entity–attribute–value model, network/CODASYL), and target markets. The term embedded database can be confusing because only a small subset of embedded database products are used in real-time embedded systems such as telecommunications switches and consumer electronics devices. (See mobile database for small-footprint databases that could be used on embedded devices.)

Implementations
Major embedded database products include, in alphabetical order:
Advantage Database Server from Sybase Inc.
Berkeley DB from Oracle Corporation
CSQL from csqlcache.com
Extensible Storage Engine from Microsoft
eXtremeDB from McObject
FileMaker from Claris
Firebird Embedded
HSQLDB from HSQLDB.ORG
Informix Dynamic Server (IDS) from IBM
InfinityDB from Boiler Bay Inc.
InnoDB from Oracle Corporation
InterBase (both server and mobile-friendly deeply embedded versions) from Embarcadero Technologies
Lightning Memory-Mapped Database (LMDB) from Symas Corp.
Raima Database Manager from Raima
solidDB
SQLite
SQL Server Compact from Microsoft Corporation
Sophia embeddable key-value storage

Comparisons of database storage engines
Advantage Database Server
Sybase's Advantage Database Server (ADS) is a full-featured embedded database management system. It provides both ISAM and relational data access and is compatible with multiple platforms including Windows, Linux, and NetWare. It is available as a royalty-free local file-server database or a full client-server version. ADS has been around for many years, is highly scalable, requires no administration, and has support for a variety of IDEs including .NET Framework (.NET), Object Pascal (Delphi), Visual FoxPro (FoxPro), PHP, Visual Basic (VB), Visual Objects (VO), Vulcan, Clipper, Perl, Java, xHarbour, etc.

Apache Derby
Derby is an embeddable SQL engine written entirely in Java. Fully transactional and multi-user with a decent SQL subset, Derby is a mature engine, freely available under the Apache license, and actively maintained. It is also distributed as part of Oracle's Java SE Development Kit (JDK) under the name of Java DB.

Empress Embedded Database
Empress Software, Inc., developer of the Empress Embedded Database, is a privately held company founded in 1979. Empress Embedded Database is a full-function, relational database that has been embedded into applications by organizations small to large, with deployment environments including medical systems, network routers, nuclear power plant monitors, satellite management systems, and other embedded system applications that require reliability and power. Empress is an ACID-compliant, SQL database engine with C, C++, Java, JDBC, ODBC, SQL, ADO.NET and kernel-level APIs. Applications developed using these APIs may be run in standalone and/or server modes. Empress Embedded Database runs on Linux, Unix, Microsoft Windows and real-time operating systems.

Extensible Storage Engine
ESE is an Indexed Sequential Access Method (ISAM) data storage technology from Microsoft.
ESE is notably a core component of Microsoft Exchange Server and Active Directory. Its purpose is to allow applications to store and retrieve data via indexed and sequential access. Windows Mail and Desktop Search in the Windows Vista operating system also make use of ESE to store indexes and property information, respectively.

eXtremeDB
McObject LLC launched eXtremeDB as the first in-memory embedded database designed from scratch for real-time embedded systems. The initial product was soon joined by eXtremeDB High Availability (HA) for fault-tolerant applications. The product family now includes 64-bit and transaction logging editions, and the hybrid eXtremeDB Fusion, which combines in-memory and on-disk data storage. In 2008, McObject introduced eXtremeDB Kernel Mode, the first embedded DBMS designed to run in an operating system kernel. Today, eXtremeDB is used in millions of real-time and embedded systems worldwide. McObject also offers Perst, an open source, object-oriented embedded database for Java, Java ME, .NET, .NET Compact Framework and Silverlight.

Firebird Embedded
Firebird Embedded is a relational database engine. An open source fork of InterBase, it is ACID compliant, supports triggers and stored procedures, and is available on Linux, OS X and Windows systems. It has the same features as the classic and superserver versions of Firebird, and starting with Firebird 2.5, two or more threads (and applications) can access the same database at the same time. Firebird Embedded thus acts as a local server for threaded clients accessing its databases; this means it works properly for ASP.NET web applications, since ASP.NET serves each user on its own thread, so two users can access the same database simultaneously from different threads. It exports the standard Firebird API entry points. The main advantage of Firebird Embedded databases is that, unlike SQLite or Access databases, they can be plugged into a full Firebird server without any modifications at all, and the engine is multiplatform (running on Linux and OS X with full ASP.NET Mono support).

H2
H2 is an open source, very fast database engine written in Java. It offers embedded and server modes and clustering support, and can run inside the Google App Engine. It supports encrypted database files (AES or XTEA). The development of H2 was started in May 2004, but it was first published on December 14, 2005. H2 is dual licensed and available under a modified version of the MPL 1.1 (Mozilla Public License) or under the (unmodified) EPL 1.0 (Eclipse Public License).

HailDB, formerly Embedded InnoDB
HailDB is a standalone, embeddable form of the InnoDB storage engine. Given that HailDB is based on the same code base as the InnoDB storage engine, it contains many of the same features: high performance and scalability, multiversion concurrency control (MVCC), row-level locking, deadlock detection, fault tolerance, automatic crash recovery, etc. However, because the embedded engine is completely independent from MySQL, it lacks server components such as networking, object-level permissions, etc. By eliminating the MySQL server overhead, InnoDB has a small footprint and is well-suited for embedding in applications which require high performance and concurrency. As with most embedded database systems, HailDB is designed to be accessed primarily with an ISAM-like C API rather than SQL (though an extremely rudimentary SQL variant is supported). The project is no longer maintained.
HSQLDB
HSQLDB is an open source relational database management system with a BSD-like license that runs in the same Java virtual machine as the embedded application. HSQLDB supports a variety of in-memory and disk-based table modes, Unicode and SQL:2016.

InfinityDB
InfinityDB Embedded Java DBMS is a sorted hierarchical key/value store. It now has an Encrypted edition and a Client/Server edition, and its multi-core concurrency mechanism is patent-pending. InfinityDB is secure, transactional, compressing, and robust, in a single file for instant installation and zero administration. APIs include the simple, fast 'ItemSpace', a ConcurrentNavigableMap view, and JSON. A RemoteItemSpace can transparently redirect the embedded APIs to other database instances. The Client/Server edition includes a lightweight servlet server, web-based administration and database browsing, and REST access for Python.

Informix Dynamic Server
Informix Dynamic Server (IDS) is characterized as an enterprise-class embeddable database server, combining embeddable features such as a low footprint and programmable and autonomic capabilities with enterprise-class database features such as high availability and flexible replication. IDS is used in deeply embedded scenarios such as IP telephony call-processing systems, point-of-sale applications and financial transaction processing systems.

InterBase
InterBase is an IoT award-winning, cross-platform, Unicode-enabled SQL database platform able to be embedded within turn-key applications. It offers out-of-the-box SMP support, on-disk 256-bit AES-strength encryption, SQL-92 and ACID compliance, and support for Windows, Macintosh, Linux, Solaris, iOS and Android platforms. It is aimed at both small-to-medium and large enterprises supporting hundreds of users, as well as mobile application development. InterBase Light is a free version that can be used on any mobile device and is suited to mobile applications; enterprises can switch to a paid version as requirements for change management and security increase. InterBase has high adoption in the defense, aerospace, oil and gas, and manufacturing industries.

LevelDB
LevelDB is an ordered key/value store created by Google as a lightweight implementation of the Bigtable storage design. As a library (which is the only way to use LevelDB), its native API is C++. It also includes official C wrappers for most functionality. Third-party API wrappers exist for Python, PHP, Go (a pure-Go LevelDB implementation exists but is still in progress), Node.js and Objective-C. Google distributes LevelDB under the New BSD License.

LMDB
Lightning Memory-Mapped Database (LMDB) is a memory-mapped key-value database developed for the OpenLDAP Project. It is written in C and the API is modeled after the Berkeley DB API, though much simplified. The library is extremely compact, compiling down to under 40 KB of x86 object code, and is usually faster than similar libraries such as Berkeley DB and LevelDB. The library implements B+ trees with multiversion concurrency control (MVCC), single-level store and copy-on-write, and provides full ACID transactions with no deadlocks. It is optimized for high read concurrency: readers need no locks at all, readers don't block writers, and writers don't block readers, so read performance scales linearly across arbitrarily many threads and CPUs. Third-party wrappers exist for C++, Erlang and Python. LMDB is distributed by the OpenLDAP Project under the OpenLDAP Public License. As of 2013, the OpenLDAP Project is deprecating the use of Berkeley DB in favor of LMDB.
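As an illustration of the simplified, transaction-oriented API described above, here is a minimal sketch using the third-party py-lmdb Python bindings mentioned among the wrappers; the package name, file path, and keys are assumptions for the example, and exact call signatures may vary between versions.

```python
import lmdb  # third-party py-lmdb bindings (pip install lmdb); assumed API

# Open (or create) a memory-mapped environment; map_size caps the DB size.
env = lmdb.open("/tmp/example-lmdb", map_size=10 * 1024 * 1024)

# Writes happen inside a single write transaction (writers never block readers).
with env.begin(write=True) as txn:
    txn.put(b"greeting", b"hello")
    txn.put(b"answer", b"42")

# Any number of read transactions can run concurrently, lock-free.
with env.begin() as txn:
    print(txn.get(b"greeting"))          # b'hello'
    for key, value in txn.cursor():      # keys come back in sorted order
        print(key, value)

env.close()
```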
Mimer SQL
An embedded zero-maintenance version of the proprietary Mimer SQL database is available.

MonetDB/e
MonetDB/e is the embedded version of the open source MonetDB SQL column-store engine. It is available for C, C++, Java (JDBC) and Python, under the MonetDB License, based on MPL 2.0. Its predecessor MonetDBLite (for R, Python and Java) is no longer maintained and has been replaced by MonetDB/e.

MySQL Embedded Server Library
The Embedded MySQL Server Library provides most of the features of regular MySQL as a linkable library that can be run in the context of a client process. After initialization, clients can use the same C API calls as when talking to a separate MySQL server, but with less communication overhead and with no need for a separate database process.

NexusDB
NexusDB is the commercial successor to the FlashFiler database, which is now open source. They can both be embedded in Delphi applications to create stand-alone executables with full database functionality.

Oracle Berkeley DB
As the name implies, Oracle's embedded database is actually Berkeley DB, which Oracle acquired from Sleepycat Software. It was originally developed at the University of California. Berkeley DB is a fast, open-source embedded database and is used in several well-known open-source products, including the Linux and BSD Unix operating systems, the Apache web server, and the OpenOffice productivity suite. Nonetheless, in recent years many well-known projects have switched to LMDB, because it outperforms Berkeley DB in key scenarios thanks to its "less is more" design, as well as because of Berkeley DB's license change.

Raima Database Manager
Raima Database Manager is produced by Raima. According to Raima's definition, the product is embedded in two senses: first, it is embedded within an application, becoming an extension to the application; second, it can be used in embedded computer/OS or real-time environments because of its small footprint and efficient operation. Its APIs (for C/C++, SQL, JDBC, ODBC, ADO.NET, and RESTful) have been designed to support the limited resources of embedded environments.

RocksDB
RocksDB, created at Facebook, began as a fork of LevelDB. It focuses on performance, especially on SSDs. It adds many features, including transactions, backups, snapshots, bloom filters, column families, expiry, custom merge operators, more tunable compaction, statistics collection, and geospatial indexing. It is used as a storage engine inside several other databases, including ArangoDB, Ceph, CockroachDB, MongoRocks, MyRocks, Rocksandra, TiKV, and YugabyteDB.

solidDB
solidDB is a hybrid on-disk/in-memory relational database and is often used as an embedded system database in telecommunications equipment, network software, and similar systems. In-memory database technology is used to achieve a throughput of tens of thousands of transactions per second, with response times measured in microseconds. A high-availability option maintains two copies of the data, synchronized at all times; in case of system failure, applications can recover access to solidDB in less than a second without loss of data.

SQLite
SQLite is a software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. SQLite is the most widely deployed SQL database engine in the world. The source code, chiefly C, for SQLite is in the public domain. It includes both a native C library and a simple command-line client for its database. It is included in several operating systems, among them Android, FreeBSD, iOS, OS X and Windows 10.
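Because the engine is self-contained and zero-configuration, using the native C library amounts to linking against it (e.g. -lsqlite3) and opening a file. Below is a minimal sketch; the file name test.db is a placeholder. sqlite3_exec runs SQL directly and invokes a callback once per result row.

/* Open (or create) a SQLite database file, insert a row, and query it back. */
#include <sqlite3.h>
#include <stdio.h>

static int print_row(void *unused, int ncols, char **vals, char **names) {
    (void)unused;
    for (int i = 0; i < ncols; i++)
        printf("%s = %s\n", names[i], vals[i] ? vals[i] : "NULL");
    return 0;   /* returning nonzero would abort the query */
}

int main(void) {
    sqlite3 *db;
    char *err = NULL;

    if (sqlite3_open("test.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }
    sqlite3_exec(db,
                 "CREATE TABLE IF NOT EXISTS t(id INTEGER PRIMARY KEY, name TEXT);"
                 "INSERT INTO t(name) VALUES('embedded');",
                 NULL, NULL, &err);
    if (err) { fprintf(stderr, "exec failed: %s\n", err); sqlite3_free(err); err = NULL; }

    sqlite3_exec(db, "SELECT * FROM t;", print_row, NULL, &err);
    if (err) { fprintf(stderr, "query failed: %s\n", err); sqlite3_free(err); }

    sqlite3_close(db);
    return 0;
}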
SQL Server Compact
Microsoft's SQL Server Compact is an embedded database with a wide variety of features: multi-process connections, T-SQL, ADO.NET Sync Services for synchronizing with any back-end database, merge replication with SQL Server, and programming APIs including LINQ to SQL, LINQ to Entities, and ADO.NET. The product runs on both desktop and mobile Windows platforms. It has been on the market for a long time and is used by many enterprises in production software. The product went through multiple rebrandings and was known by several names, including SQL CE, SQL Server CE, SQL Server Mobile, and SQL Mobile.

See also
In-memory database, main memory database
Mobile database

References
19389307
https://en.wikipedia.org/wiki/TP-Link
TP-Link
TP-Link Technologies Co., Ltd. (Chinese: 普联技术; Pinyin: pǔ lián jìshù) is a global manufacturer of computer networking products based in Hong Kong and Shenzhen. TP-Link has consistently been ranked by the analyst firm IDC as the world's No. 1 provider of Wi-Fi products, supplying more than 170 countries and serving billions of people worldwide.

History
TP-Link was founded in 1996 by two brothers, Zhao Jianjun and Zhao Jiaxing, to produce and market a network card they had developed. The company name was based on the concept of "twisted pair link" invented by Alexander Graham Bell, a kind of cabling that reduces electromagnetic interference, hence the "TP" in the company name. TP-Link began its first international expansion in 2005. In 2007, the company moved into its new 100,000-square-meter headquarters and facilities at Shenzhen's Hi-Tech Industry Park. TP-Link USA was established in 2008. In September 2016, TP-Link unveiled a new logo and slogan, "Reliably Smart"; the new logo is meant to portray the company as a "lifestyle"-oriented brand as it expands into smart home products.

Product ranges
TP-Link products include high-speed cable modems, wireless routers, mobile phones, ADSL, range extenders, routers, switches, IP cameras, powerline adapters, print servers, media converters, wireless adapters, power banks, USB hubs and smart home technology devices. TP-Link also manufactured the OnHub router for Google. In 2016 the company launched the new brand Neffos for smartphones. TP-Link manufactures smart home devices under its Kasa Smart and Tapo product lines. TP-Link sells through multiple sales channels globally, including traditional retailers, online retailers, wholesale distributors, direct market resellers ("DMRs"), value-added resellers ("VARs") and broadband service providers. Its main competition includes companies such as Netgear, Buffalo, Belkin, Linksys, D-Link and Asus.

Brands
Tapo
On September 30, 2019, TP-Link launched Tapo, with one of its initial offerings being the mini smart Wi-Fi plug, the Tapo P100. The smart plug works over a 2.4 GHz wireless connection and integrates with Amazon Alexa and the Google Assistant. Other offerings from Tapo include a line of home security Wi-Fi cameras and an upcoming line of smart lighting appliances.

Deco
Deco is a family of mesh-network products. The first in this category was the TP-Link M5, followed by the M9 Plus, whose backhaul compatibility improves usable bandwidth in certain cases compared to the M5. At the same time, TP-Link also introduced the Deco P7, a powerline-connected mesh-network system, meaning nodes communicate through the home's electrical wiring rather than the wireless transmissions used by the other Deco products. The Deco P7 has since been replaced by the newer Deco P9, which has a different aesthetic but the same wireless performance. Recent products in the series include the Deco M4 and S4, which have the same wireless bandwidth, with only slight differences in design. Further products have been introduced, including the Deco X20 as a new base model and the Deco X60 as a mid-tier model with higher bandwidth but the same overall design. The Deco X90 is the most powerful of the current Deco family, with more than double the bandwidth of the X60 model and a larger design than the other models.

Manufacturing
TP-Link is one of the few major wireless networking companies to manufacture its products in-house, as opposed to outsourcing to original design manufacturers (ODMs).
The company says this control over components and the supply chain is a key competitive differentiator. References External links Tapo 1996 establishments in China Android (operating system) software Chinese brands Companies established in 1996 Home automation companies IOS software Manufacturing companies based in Shenzhen Networking companies Networking hardware Networking hardware companies Privately held companies of China Routers (computing) Telecommunication equipment companies of China Wireless networking
22338452
https://en.wikipedia.org/wiki/2009%20Troy%20Trojans%20football%20team
2009 Troy Trojans football team
The 2009 Troy Trojans football team represented Troy University in the 2009 NCAA Division I FBS football season. They played their home games at Movie Gallery Stadium in Troy, Alabama, and competed in the Sun Belt Conference. The Trojans won their fourth straight Sun Belt championship, going undefeated in conference play (8–0) with a regular season record of 9–3. They were invited to the GMAC Bowl, where they played Mid-American Conference champion Central Michigan and were defeated, 44–41, in two overtimes.

Schedule
Schedule Source:

Personnel
Coaching staff
Larry Blakeney – Head Coach
Neal Brown – Offensive Coordinator/Quarterbacks
Jeremy Rowell – Defensive Coordinator
Randy Butler – Defensive Ends/Recruiting Coordinator
Maurea Crain – Defensive Line
Kenny Edenfield – Inside Receivers
Benjy Parker – Linebackers
John Schlarman – Offensive Line
Chad Scott – Running Backs
Richard Shaughnessy – Strength and Conditioning

References

Troy
Troy Trojans football seasons
Sun Belt Conference football champion seasons
Troy Trojans football
9191733
https://en.wikipedia.org/wiki/GT%20Nexus
GT Nexus
Infor Nexus (formerly known as GT Nexus) is a privately owned cloud supply chain platform, founded in 1998 in Oakland, California. It runs an on-demand global supply chain management platform that organizations use to manage global logistics and trade processes. In September 2015, GT Nexus was acquired by Infor. Today, Infor Nexus is a business unit of Infor. The company operates in the Americas, Europe, and Asia with a focus on manufacturing and retail, including companies in pharmaceuticals, high tech, automotive, CPG, apparel and footwear. Logistics service providers, financial service providers, and suppliers are also part of the Infor Nexus network. Its customers include Brooks Brothers, Sears, Adidas, Procter & Gamble, Del Monte Foods, Caterpillar Inc., Koch Industries, Abercrombie & Fitch, and Home Depot.

History
1998 – Founded in Alameda, CA as Tradiant.
2001 – Renamed GT Nexus from Tradiant.
2008 – Acquired Metaship, a provider of logistics management technology.
2013 – Merged with TradeCard. The joint company employs about 1,000 people and serves about 20,000 businesses in manufacturing, retail, and logistics.
2014 – Acquired Clear Abacus, a cloud-based solution that optimizes multimodal transportation planning.
2015 – Acquired by Infor, a technology company delivering industry-specific cloud suites. The deal, valued at $675 million, closed on September 21, 2015.
2018 – GT Nexus launched a new global trade management platform.
2019 – GT Nexus relaunched as Infor Nexus.

Products
Infor Nexus products are used by importers, exporters, logistics providers, and financial institutions to manage the flow of inventory, transactions, and information related to global trade. All capabilities are delivered in the cloud with a subscription pricing model. The platform includes:
Supply Chain Visibility
Supply Chain Intelligence
Factory Management
Transportation Management
Inventory Management
Supply Collaboration
Procure-to-pay
Supply Chain Finance
Competitors include SAP, Descartes, Oracle, and IBM.

See also
Supply-chain management
Supply chain management software
Supply chain network
Transportation management system
Vendor relationship management

References

External links
Official Site

Supply chain software companies
Software companies based in California
Companies based in Oakland, California
Software companies established in 1998
ERP software companies
Service-oriented (business computing)
Cloud platforms
Business software companies
Software companies of the United States
4056793
https://en.wikipedia.org/wiki/Prince%20George%27s%20County%20Public%20Schools
Prince George's County Public Schools
Prince George's County Public Schools (PGCPS) is a large public school district administered by the government of Prince George's County, Maryland, United States, and is overseen by the Maryland State Department of Education. The school system is headquartered in Upper Marlboro, and the district serves Prince George's County. The district is headed by Dr. Monica Goldson and a 14-member Board of Education. With students enrolled for the 2017–2018 school year, the Prince George's County Public Schools system is the second largest school district in the state of Maryland; the third largest school district in both the Washington Metropolitan Area and the Baltimore-Washington Metropolitan Area, after Fairfax County Public Schools in Virginia and Montgomery County Public Schools in Maryland; and one of the top 25 largest school districts in the nation.

PGCPS operates 208 schools and special centers, which include 123 elementary schools (PreK-5), 24 middle schools (6-8), 23 high schools (9-12), and 12 academies (PreK-8). The school system also operates 9 special centers, 2 vocational centers, 3 alternative schools, 8 public charter schools, and the Howard B. Owens Science Center, serving students from pre-kindergarten through grade 12. PGCPS operates the two largest high schools in the state of Maryland: Dr. Henry A. Wise, Jr. High School and Northwestern High School. The school system transports over 90,536 students daily with its fleet of 1,335 GPS-equipped school buses on 5,616 bus routes. PGCPS employs approximately 23,785 staff members, including an estimated 9,197 teachers. The approved operating budget for FY2014-15 is approximately US$1.795 billion, with a per-pupil expenditure of US$11,753. Average teacher salary ranges from US$55,689 for teachers with a bachelor's degree to US$80,009 for teachers with a doctoral degree. In terms of racial demographics, African-Americans make up the majority of the system's students at 55.32%, followed by 36.46% Hispanic, 3.67% Caucasian, 2.76% Asian, and the remaining 1.79% comprising various other races.

In June 2009, PGCPS became one of the first school systems in America to name one of its schools after former President Barack Obama. Barack Obama Elementary School, in the Westphalia census-designated place near Upper Marlboro, opened in August 2010.

History
Early schools in Prince George's County
In 1899, the first high school was built in Prince George's County, at the northeast corner of Montgomery and Eighth Streets in Laurel, Maryland, and was named Laurel High School. In 1902, Frederick Sasscer, Jr. became Superintendent of Schools, a post he held until 1914.

Desegregation
In 1974, Prince George's County, Maryland, became the largest school district in the United States forced to adopt a busing plan. The county was over 80 percent white in population and in the public schools. In some county communities close to Washington, there was a higher concentration of black residents than in more outlying areas. The county had a neighborhood-based system of school boundaries. However, the NAACP argued that housing patterns in the county still reflected segregation. The federal court ordered that a school busing plan be set in place. A 1974 Gallup poll showed that 75 percent of county residents were against forced busing, and that only 32 percent of blacks supported it. The transition happened quickly, as the court ordered that the plan be administered with "all due haste".
This happened during the middle of the school term, and students, except those in their senior year of high school, were transferred to different schools to achieve racial balance. Many typical school activities, and life in general for families in the county, were disrupted by changes in daily schedules, transportation logistics, and extracurricular activities. The federal case and the school busing order officially ended in 2001, as segregation had been erased to the court's satisfaction, and neighborhood-based school boundaries were restored.

School consolidation (2009–2010s)
On March 26, 2009, the Prince George's County Public Schools Board of Education voted to consolidate eight under-enrolled schools in the county and expand magnet program offerings within the school system. This decision was made after a series of community briefings, public hearings, more than 2,500 survey responses, and additional public input. This process of expanding opportunities for students began in June 2008. The Board of Education directed the school district to conduct a comprehensive review of school enrollments in September 2008. Recognizing that some schools were significantly under-enrolled, the Board of Education sought to offer more educational opportunities in historically under-served areas of the county, relieve overcrowding where possible, and improve operating efficiencies. The Board of Education used constituent feedback to refine the proposal made by Interim Superintendent Dr. William R. Hite, Jr. earlier that year, and reduced the number of schools to be consolidated to eight instead of 12. The plan still relieved overcrowded schools, identified space for new academic choices, and expanded successful programs. The school district's next step was to solicit public input on what new or expanded programs communities would like to see in their schools.

In January 2009, the Superintendent presented the Board with the first of four phases in a proposal. Phase I was approved with the following components for the 2009–2010 school year:
No high schools were affected by Phase I.
Eight schools were consolidated and students were reassigned for the 2009–2010 school year. The following schools were closed (permanently) starting with the 2009–2010 school year: Berkshire Elementary, John Carroll Elementary, John E. Howard Elementary, Matthew Henson Elementary, Middleton Valley Elementary, Morningside Elementary, Owens Road Elementary, and G. Gardner Shugart Middle School.
Five schools were converted to kindergarten through grade 8 (K-8) programs: Andrew Jackson Middle School, Samuel P. Massie Elementary School, and William W. Hall Elementary School, which enroll students in grades K-8, while Henry G. Ferguson Elementary School and Eugene Burroughs Middle School were combined to create the Accokeek Academy PreK-8 school, with the Talented & Gifted Center (TAG) Magnet Program from Henry Ferguson carrying over to the newly combined school and expanding to include grades 7 and 8.
Benjamin D. Foulois Elementary School was converted to a K-8 Creative & Performing Arts magnet center for the southern end of the county, replicating the program at Thomas G. Pullen Arts Magnet School.
Concord, Dodge Park, District Heights, and Oakcrest elementary schools were removed from the list of potential schools to be closed/consolidated.
Communities would make recommendations on what new magnet programs they wanted for their schools (e.g.
Foreign Language Immersion, Montessori).

Additional schools were consolidated around 2016, as student populations in southern portions of the county and inside the Capital Beltway decreased; schools in northern parts of the county, on the other hand, became overcrowded, including schools serving Beltsville, Hyattsville, and Laurel.

Superintendent/CEO
As part of the 2013 reorganization of the PGCPS Board of Education and PGCPS governance spearheaded by county executive Rushern Baker, the position of PGCPS superintendent was renamed "CEO of PGCPS." The reorganization gave the CEO a greater degree of control over the operation of the school system and limited the powers of the school board. Dr. Kevin Maxwell served as the first CEO of PGCPS until his negotiated exit in 2018, at which time Monica Goldson was elevated from deputy superintendent to interim CEO.

List of superintendents:
Monica Goldson (interim 2018–2019; 2019–present)
Kevin M. Maxwell (2013–2018)
Alvin Crawley (interim 2012–2013)
William R. Hite, Jr. (interim 2008–2009; 2009–2012)
John E. Deasy (2006–2008)
Howard A. Burnett (interim 2005–2006)
André J. Hornsby (2003–2005)
Iris T. Metts (1999–2003)
Jerome Clark (1995–1999)
Edward M. Felegy (1991–1995)
John A. Murphy (1984–1991)
Edward J. Feeney (1976–1984)
Carl W. Hassel (1970–1976)
William S. Schmidt (1951–1970)
G. Gardner Shugart (1944–1951)
Nicholas Orem Sr. (1921–1943)
E.S. Burroughs (1915–1921)
Frederick Sasscer Jr. (1902–1914)

Transportation
Prince George's County Public Schools offers students transportation to and from school with its own bus system. The system runs a fleet of various school bus models by Blue Bird Corporation, IC Bus, and Thomas Built Buses. Models include rear-engined and front-engined types, all of which operate on diesel fuel. Special-needs children are provided with an accessible bus. All buses display "Prince George's County Public Schools" on both sides. The transportation department operates from 13 bus lots, which in total operate over 1,200 buses on over 5,000 routes. Ridership varies annually, although at least 93,000 students ride buses provided by the department. All routes are identified by a three-digit number, such as 001, 219 or 615, and a letter-digit route, such as B12 or D14. In addition to transportation to and from schools, the school district runs buses for school field trips, athletic events, and other approved uses of a bus in Maryland. Eligibility to ride is determined by the distance the student lives from their school, generally at least two miles for intermediate and secondary schools and one and a half miles for primary schools. Each route is determined through a Trapeze routing system, in which information regarding students is entered into a computer system, which produces their route number.

List of schools
High schools
All high schools in Prince George's County operate with a "comprehensive" model as their base, with the exception of the new Academy of Health Sciences at Prince George's Community College, which is a middle college program. All students are assigned to a high school based on an attendance area.
Magnet Programs operate as a "School-Within-A-School" model, where the magnet serves as an alternative program, in addition to the main comprehensive program. Students from outside the regular attendance area of the high school are enrolled and accepted into the magnet either through continuity (automatic continuation from a middle school magnet program to the high school level equivalent) or, more commonly, through a Magnet Lottery, in which students apply for a magnet program and are granted acceptance through a random drawing. Enrollment into the Center for the Visual and Performing Arts is through audition only.

Several high schools have also implemented a Smaller Learning Community model, offering two or more Academy Programs, which effectively break a school down into several smaller schools within the school by allowing students to essentially declare a major (as a student would at a college or university) through career academies such as "Arts, Media, and Communication" or the "National Academy of Finance."

All high schools within PGCPS operate on a staggered school day schedule: some high schools start as early as 7:45 am and end as early as 2:25 pm, while others start as late as 9:30 am and end as late as 4:10 pm. All high schools operate on an alternating A/B-day block scheduling system, where one group of classes is taken on "A-Days" and a different group of classes is taken on "B-Days," and the cycle repeats. Most high schools have between three and four lunch shifts, depending on enrollment and eating accommodations. The only exceptions are Eleanor Roosevelt High School, which has adopted a modified hybrid block schedule in which both traditional single-period courses and double-period (block schedule) courses are integrated, and the Academy of Health Sciences at Prince George's Community College.

{| class="toccolours" border="1" cellpadding="5" style="border-collapse:collapse"
|+ High schools
|- style="background:darkblue;"
!style="width:235px;"|School
!Website
!Location
!style="width:90px;"|Opening date (current facility)
!Grades
!Enrollment (2014–15)
!Square footage
!style="width:80px;"|Attendance hours (start/end)
!Specialized programs
|-
| rowspan="2"|Academy of Health Sciences at Prince George's Community College
| align="center"|Link
| align="center"|Largo
| align="center"|2011
| align="center"|9-12
| align="right"|397 students
| align="center"|N/A
| align="center"|9:30a – 4:40p
| Current program(s): Academy of Health Sciences
|-
| style="background:#fff;" colspan="8"|Notes & comments: This high school is run in conjunction with Prince George's Community College (PGCC), with classes held on the PGCC campus, and is the State of Maryland's first middle college. The school admitted its first class of 100 freshmen in the fall of 2011, with a new grade level to be added each year until a full, four-year, grades 9-12 high school of about 400 students is operational.
|-
| rowspan="2"|Bladensburg High School
| align="center"|Link
| align="center"|Bladensburg
| align="center"|1936
| align="center"|9–12
| align="right"|1,857 students
| align="right"|304,000
| align="center"|9:30a – 4:10p
| Current program(s): Biomedical Magnet Program; Career and Technical Education Magnet Program; Academy of Hospitality and Tourism; America's Choice School Design Signature Program
Future program(s): Academy of Health and Biosciences; Academy of Graphic Arts, Media and Communications
|-
| style="background:#fff;" colspan="8"|Notes & comments: Bladensburg received a state-of-the-art replacement facility in August 2004.
|-
| rowspan="2"|Bowie High School (included with Bowie High School Annex)
| align="center"|Link
| align="center"|Bowie
| align="center"|1965
| align="center"|10–12
| align="right"|2,442 students
| align="right"|280,306
| align="center"|7:45a – 2:25p
| Current program(s): SUMMIT Scholar Signature Program
Future program(s): Academy of Information Technology; Performing Arts Academy; Academy of Environmental Sciences
|-
| style="background:#fff;" colspan="8"|Notes & comments: Bowie High School has two physical campuses. Grades 10-12 attend classes at the main campus, and 9th graders attend classes at the Belair Annex (a former middle school) a half mile away. Bowie was ranked #1,173 on Newsweek's 2010 list of Top 1500 Public High Schools in America. The SUMMIT Scholar Program at Bowie is a four-year course of study through which a select group of students (60-65 students per grade level) follows a comprehensive curriculum combining accelerated honors-level and rigorous Advanced Placement course work. The program combines honors, SUMMIT, and Advanced Placement courses, yet remains an integral part of the high school community at Bowie; SUMMIT scholars do not comprise a school within a school.
|-
| rowspan="2"|Bowie High School Annex (included with Bowie High School)
| align="center"|Link
| align="center"|Bowie
| align="center"|1963
| align="center"|9
| align="right"|N/A
| align="right"|102,351
| align="center"|7:45a – 2:25p
| Current program(s): SUMMIT Scholar Signature Program
Future program(s): Academy of Information Technology; Performing Arts Academy; Academy of Environmental Sciences
|-
| style="background:#fff;" colspan="8"|Notes & comments: Bowie High School has two physical campuses. Grades 10-12 attend classes at the main campus, and 9th graders attend classes at the Belair Annex (a former middle school) a half mile away. Bowie was ranked #1,173 on Newsweek's 2010 list of Top 1500 Public High Schools in America. The SUMMIT Scholar Program at Bowie is a four-year course of study through which a select group of students (60-65 students per grade level) follows a comprehensive curriculum combining accelerated honors-level and rigorous Advanced Placement course work. The program combines honors, SUMMIT, and Advanced Placement courses, yet remains an integral part of the high school community at Bowie; SUMMIT scholars do not comprise a school within a school.
|-
| rowspan="2"|Central High School
| align="center"|Link
| align="center"|Walker Mill
| align="center"|1961
| align="center"|9–12
| align="right"|1,004 students
| align="right"|168,366
| align="center"|7:45a – 2:25p
| Current program(s): French Immersion Magnet Program; International Baccalaureate (IB) Magnet Program; Law, Education and Public Service Academy; AVID Signature Program; America's Choice School Design Signature Program
Future program(s): Global Studies Academy; Academy of Graphic Arts, Media and Communications
|-
| style="background:#fff;" colspan="8"|Notes & comments: Central was ranked #1,429 on Newsweek's Top 1500 Public High Schools in America for 2010. It is an IB World School. Programs offered include Architecture and Design; Global Studies; Graphic Arts, Media and Communications; Health and Biosciences; Consumer Services, Hospitality and Tourism; Law, Education and Public Service; Cosmetology (CAPS); Culinary (CAPS); Electrical (CAPS); Carpentry (CAPS); French Immersion; and Nursing (CAPS).
|-
| rowspan="2"|Crossland High School
| align="center"|Link
| align="center"|Camp Springs
| align="center"|1963
| align="center"|9–12
| align="right"|1,081 students
| align="right"|313,276
| align="center"|7:45a – 2:25p
| Current program(s): Technical Academy Magnet Program; International Baccalaureate (IB) Program (non-magnet); Global Studies Academy; America's Choice School Design Signature Program; Crossland Evening High School
Future program(s): Academy of Architecture and Design; Academy of Transportation Technologies; Performing Arts Academy
|-
| style="background:#fff;" colspan="8"|Notes & comments: Crossland was named an IB World School in 2009.
|-
| rowspan="2"|Frederick Douglass High School
| align="center"|Link
| align="center"|Upper Marlboro
| align="center"|1965
| align="center"|9–12
| align="right"|940 students
| align="right"|184,417
| align="center"|7:45a – 2:25p
| Current program(s): International Baccalaureate (IB) Middle Years Programme; America's Choice School Design Signature Program
Future program(s): Academy of Global Studies; Academy of Business and Finance; Academy of Information Technology
|-
| style="background:#fff;" colspan="8"|Notes & comments: Frederick Douglass is an IB World School.
|-
| rowspan="2"|DuVal High School
| align="center"|Link
| align="center"|Lanham
| align="center"|1960
| align="center"|9–12
| align="right"|1,697 students
| align="right"|281,281
| align="center"|8:30a – 3:10p
| Current program(s): Aerospace Engineering and Aviation Technology Program; Project Lead The Way Pre-Engineering Academy; America's Choice School Design Signature Program; Academy of Consumer Services, Hospitality & Tourism; Academy of Humanities, Leadership & Public Service; Academy of Engineering and Science; Academy of Graphic Arts, Media and Communications
Future program(s): Academy of Transportation Technologies
|-
| style="background:#fff;" colspan="8"|Notes & comments: DuVal received a state-of-the-art, $13.4 million USD, 65,995 sq. ft., 600-student classroom addition in 2007. This added a music wing and a two-story academic wing. Starting in 2014, DuVal housed a new specialized Aerospace Engineering and Aviation Technology Program. Admission is based on competitive examination only, and prospective students take the same specialized examination currently used for entrance into the Science and Technology Center. DuVal is currently constructing a new Aerospace building that will be placed next to the cafeteria.
|-
| rowspan="2"|Fairmont Heights High School
| align="center"|Link
| align="center"|Chapel Oaks
| align="center"|1950
| align="center"|9–12
| align="right"|788 students
| align="right"|174,128
| align="center"|8:30a – 3:10p
| Current program(s): Biotechnology Magnet Program; National Academy of Finance; Information Technology; America's Choice School Design Signature Program
Future program(s): Academy of Environmental Studies; Performing Arts Academy
|-
| style="background:#fff;" colspan="8"|Notes & comments: Fairmont Heights is one of three PGCPS high schools which house a special Health and Wellness Center, an on-site medical facility operated under the auspices of the county's Health Department.
|-
| rowspan="2"|Charles Herbert Flowers High School
| align="center"|Link
| align="center"|Springdale
| align="center"|2001
| align="center"|9–12
| align="right"|2,032 students
| align="right"|332,500
| align="center"|7:45a – 2:25p
| Current program(s): Science and Technology Center Magnet Program; National Academy of Finance; Project Lead The Way Pre-Engineering Academy; ProStart: Hospitality and Restaurant Management Program
Future program(s): Academy of Engineering and Science; Academy of Information Technology
|-
| style="background:#fff;" colspan="8"|Notes & comments: Flowers was ranked #1,445 on Newsweek's Top 1500 Public High Schools in America for 2009.
|-
| rowspan="2"|Friendly High School
| align="center"|Link
| align="center"|Friendly
| align="center"|1970
| align="center"|9–12
| align="right"|979 students
| align="right"|236,861
| align="center"|7:45a – 2:25p
| Current program(s): Academy of Health and Biosciences; America's Choice School Design Signature Program
Future program(s): Academy of Engineering and Science; Academy of Information Technology
|-
| style="background:#fff;" colspan="8"|Notes & comments:
|-
| rowspan="2"|Gwynn Park High School
| align="center"|Link
| align="center"|Brandywine
| align="center"|1956
| align="center"|9–12
| align="right"|1,064 students
| align="right"|194,845
| align="center"|7:45a – 2:25p
| Current program(s): Technical Academy Magnet Program; America's Choice School Design Signature Program; Academy of Consumer Services, Hospitality and Tourism; Academy of Environmental Studies; Academy of Information Technology
Future program(s): Academy of Transportation Technologies
|-
| style="background:#fff;" colspan="8"|Notes & comments:
|-
| rowspan="2"|High Point High School
| align="center"|Link
| align="center"|Beltsville
| align="center"|1954
| align="center"|9–12
| align="right"|2,426 students
| align="right"|318,376
| align="center"|8:45a – 3:25p
| Current program(s): AVID Signature Program; Academy of Engineering and Science
Future program(s): Academy of Environmental Studies; Academy of Military Science
|-
| style="background:#fff;" colspan="8"|Notes & comments: High Point received the Siemens Award for Advanced Placement in 2004. High Point was ranked #1,361 on Newsweek's Top 1500 Public High Schools in America for 2010. U.S. News & World Report named High Point a Silver Medal School in 2010.
|-
| rowspan="2"|Largo High School
| align="center"|Link
| align="center"|Largo
| align="center"|1970
| align="center"|9–12
| align="right"|1,026 students
| align="right"|243,581
| align="center"|7:45a – 2:25p
| Current program(s): Biotechnology Magnet Program; AVID Signature Program; America's Choice School Design Signature Program; Largo Evening High School
Future program(s): Academy of Health and Biosciences; Academy of Hospitality and Tourism
|-
| style="background:#fff;" colspan="8"|Notes & comments:
|-
| rowspan="2"|Laurel High School
| align="center"|Link
| align="center"|Laurel
| align="center"|1961
| align="center"|9–12
| align="right"|1,814 students
| align="right"|371,531
| align="center"|7:45a – 2:25p
| Current program(s): Technical Academy Magnet Program; International Baccalaureate (IB) Program (Non-Magnet); Academy of Global Studies; America's Choice School Design Signature Program
Future program(s): Academy of Transportation Technologies; Academy of Information Technology; Academy of Architecture and Design
|-
| style="background:#fff;" colspan="8"|Notes & comments: Laurel completed a 600-student classroom addition and a new auditorium in the spring of 2010. Laurel was ranked #1,343 on Newsweek's Top 1500 Public High Schools in America for 2010. It is an IB World School.
|-
| rowspan="2"|Northwestern High School
| align="center"|Link
| align="center"|Hyattsville
| align="center"|1951
| align="center"|9–12
| align="right"|2,262 students
| align="right"|386,000
| align="center"|Comprehensive: 9:30a – 4:10p
CVPA Magnet: 8:15a – 4:10p
| Current program(s): The Jim Henson Center for the Visual and Performing Arts Program; America's Choice School Design Signature Program; School of Business Management and Finance (National Academy of Finance, Academy of Business Management); School of Human Resource Services (The International Studies Academy, NJROTC Academy of Military Science); School of Manufacturing, Engineering and Technology (Project Lead The Way Pre-Engineering Academy); Colours Performing Arts Program; Northwestern Evening High School; Northwestern Adult Evening High School; Northwestern Saturday Academy
Future program(s): Academy of Law, Education and Public Service; Performing Arts Academy
|-
| style="background:#fff;" colspan="8"|Notes & comments: Northwestern received a state-of-the-art, $45 million replacement facility, which opened in August 2000. At 386,000 sq. ft., it was then the largest high school in the state of Maryland in terms of total square footage. It was surpassed in physical size by the new Dr. Henry Wise, Jr. HS (also in Prince George's County) in 2006. Northwestern is the second largest high school in Maryland. U.S. News & World Report named Northwestern a Silver Medal School in 2010. Northwestern became the county's second location for the Center for the Visual and Performing Arts program in the fall of 2013. The program is in-boundary only, and draws students from the Hyattsville Middle School for the Creative and Performing Arts. Entrance into the program is through competitive audition only. Northwestern is one of three PGCPS high schools which house a special Health and Wellness Center, an on-site medical facility operated under the auspices of the county's Health Department.
|-
| rowspan="2"|Oxon Hill High School
| align="center"|Link
| align="center"|Oxon Hill
| align="center"|1948
| align="center"|9–12
| align="right"|1,456 students
| align="right"|243,048
| align="center"|9:30a – 4:10p
| Current program(s): Science and Technology Center Magnet Program; AVID Signature Program; America's Choice School Design Signature Program; Academy of Business and Finance (Academy of Accounting and Finance, Academy of Business Administrative Services, Academy of Business Management); Academy of Engineering; Academy of Graphic Arts and Media; Academy of Consumer Sciences, Hospitality and Tourism (Academy of Hospitality and Restaurant Management); Academy of Military Sciences
Future program(s): Academy of Health and Biosciences
|-
| style="background:#fff;" colspan="8"|Notes & comments: Oxon Hill was ranked #957 on Newsweek's Top 1500 Public High Schools in America for 2010. In August 2013, Oxon Hill relocated into a brand new LEED-certified building that replaced the decades-old former facility. The new school was constructed adjacent to the former building. Oxon Hill is one of three PGCPS high schools which house a special Health and Wellness Center, an on-site medical facility operated under the auspices of the county's Health Department.
|-
| rowspan="2"|Parkdale High School
| align="center"|Link
| align="center"|Riverdale
| align="center"|1968
| align="center"|9–12
| align="right"|2,148 students
| align="right"|265,201
| align="center"|7:45a – 2:25p
| Current program(s): International Baccalaureate (IB) Magnet Program; America's Choice School Design Signature Program; Academy of Global Studies; Capital One Student Banking Program
Future program(s): Academy of Architecture and Design; Academy of Law, Education and Public Service; Academy of Military Science
|-
| style="background:#fff;" colspan="8"|Notes & comments: Parkdale received a state-of-the-art, 400-seat classroom addition in November 2007. Parkdale was ranked #1,481 on Newsweek's Top 1500 Public High Schools in America for 2010. Parkdale is an IB World School.
|-
| rowspan="2"|Potomac High School
| align="center"|Link
| align="center"|Oxon Hill
| align="center"|1965
| align="center"|9–12
| align="right"|1,145 students
| align="right"|218,083
| align="center"|7:45a – 2:25p
| Current program(s): America's Choice School Design Signature Program; National Academy of Finance; School of Arts, Media and Communications (Academy of the Arts-Dance, Academy of the Arts-Music, Academy of the Arts-Visual); School of Business Management and Finance (Academy of Finance, Academy of Business Management); School of Consumer Services, Hospitality and Tourism (Academy of Hospitality and Restaurant Management); School of Human Resource Services (Academy of Homeland Security and Military Science, Academy of Law, Education and Public Service, Teacher Academy of Maryland); School of Manufacturing, Engineering and Technology (Project Lead the Way Pre-Engineering Academy, Information Technology)
Future program(s): Academy of Environmental Studies; Academy of Graphic Arts, Media and Communications
|-
| style="background:#fff;" colspan="8"|Notes & comments: Potomac received a state-of-the-art, 600-seat classroom addition in January 2008.
|-
| rowspan="2"|Eleanor Roosevelt High School
| align="center"|Link
| align="center"|Greenbelt
| align="center"|1974
| align="center"|9–12
| align="right"|2,504 students
| align="right"|327,458
| align="center"|8:30a – 3:10p
| Current program(s): Science and Technology Center Magnet Program; Capstone Program; Gilder-Lehrman American History Program; National Academy of Finance; Quality Education in Science and Technology (QUEST) Program/Academy of Information Technology (AOIT)
|-
| style="background:#fff;" colspan="8"|Notes & comments: Eleanor Roosevelt has been twice recognized as a National Blue Ribbon School of Excellence, in 1991 and 1998, as well as a Maryland Blue Ribbon School of Excellence in 1991 and 1998. It was named a New American High School in 1999, and it received the Siemens Award for Advanced Placement in 2002. Roosevelt was named a National School of Character in 2002. It was ranked #409 on Newsweek's 2010 list of Top 1500 Public High Schools in America. U.S. News & World Report named Roosevelt a Silver Medal School in 2008.
|-
| rowspan="2"|Suitland High School (included with Suitland High School CVPA Annex)
| align="center"|Link
| align="center"|Suitland
| align="center"|1951
| align="center"|9–12
| align="right"|1,806 students
| align="right"|324,046
| align="center"|Comprehensive: 8:40a – 3:25p
CVPA Magnet: 8:30a – 4:40p
| Current program(s): Center for the Visual and Performing Arts Magnet Program; International Baccalaureate (IB) Magnet Program; Technical Academy Magnet Program (the Jesse J. Warr Vocational Center); America's Choice School Design Signature Program; Navy Junior ROTC (NJROTC) Academy; School of Business and Finance (National Academy of Finance; Academy of Homeland Security and Military Science)
Future program(s): Academy of Architecture and Design; Academy of Transportation Technologies
|-
| style="background:#fff;" colspan="8"|Notes & comments: Suitland High School has two physical campuses: the main campus and the "annex" (a former elementary school) located directly behind the main campus, which houses the majority of the school's Center for the Visual and Performing Arts magnet program. Suitland was named a 1989 National Blue Ribbon School of Excellence and a 1989 Maryland Blue Ribbon School. It is an IB World School.
|-
| rowspan="2"|Suitland High School CVPA Annex (included with Suitland High School)
| align="center"|Link
| align="center"|Suitland
| align="center"|1963
| align="center"|9-12
| align="right"|N/A
| align="right"|70,933
| align="center"|Comprehensive: 8:30a – 3:10p
CVPA Magnet: 8:30a – 4:40p
| Current program(s): Center for the Visual and Performing Arts Magnet Program; International Baccalaureate (IB) Magnet Program; Technical Academy Magnet Program (the Jesse J. Warr Vocational Center); America's Choice School Design Signature Program; Navy Junior ROTC (NJROTC) Academy; School of Business and Finance (National Academy of Finance; Academy of Homeland Security and Military Science)
Future program(s): Academy of Architecture and Design; Academy of Transportation Technologies
|-
| style="background:#fff;" colspan="8"|Notes & comments: Suitland High School has two physical campuses: the main campus and the "annex" (a former elementary school) located directly behind the main campus, which houses the majority of the school's Center for the Visual and Performing Arts magnet program. It was named a 1989 National Blue Ribbon School of Excellence.
|-
| rowspan="2"|Dr. Henry A. Wise, Jr. High School
| align="center"|Link
| align="center"|Upper Marlboro
| align="center"|2006
| align="center"|9–12
| align="right"|2,255 students
| align="right"|434,600
| align="center"|9:00a – 3:40p
| Current program(s): Technical Academy Magnet Program; Academy of Health and Biosciences; Academy of Computer Networking
Future program(s): Performing Arts Academy
|-
| style="background:#fff;" colspan="8"|Notes & comments: At 434,600 sq. ft. and with a capacity of 2,600 students, Wise is the largest high school in the state of Maryland when measured by total square footage. It was completed in August 2006 and features a 5,000-seat professional gymnasium, the largest of any school in the Washington metropolitan area.
|}

Middle schools
Intermediate schools are referred to as "middle schools" in the PGCPS system, and operate as either grades 6–8 or grades 7–8 middle schools. Grades 7–9 junior high schools were phased out in the mid-1980s. Recent efforts have been made to convert most middle schools to the more popular grades 6–8 model. Issues in the past, such as over-enrollment, lack of classroom space, and funding, had made it hard to convert all middle schools to a grades 6–8 configuration, but with increased funding and the addition of new middle schools, the transition is slowly being made. As of 2014–2015, only four of the 24 middle schools in the school district retain the old grades 7-8 configuration.

Most middle schools in Prince George's County operate with a "comprehensive" model as their base. Most students are assigned to a middle school based on an "attendance area." Most magnet programs operate as a "School-Within-A-School" model, where the magnet serves as an alternative program, in addition to the main comprehensive program, and students from outside the regular attendance area of the middle school are enrolled and accepted into the magnet either through "continuity" (automatic continuation from an elementary school magnet program to the middle school level equivalent) or, more commonly, through a magnet lottery, where students apply for a magnet program and are granted acceptance through a random drawing. Almost all middle schools have a whole-school "Signature Program" that includes a specialized program of instruction which is the foundation of the school's comprehensive program.

All middle schools in the PGCPS operate on a staggered school day schedule: some middle schools start as early as 7:30 am and end as early as 2:50 pm, while others start as late as 9:00 am and end as late as 4:20 pm. All middle schools operate on a modified block scheduling system, where some classes meet for as long as 70 minutes daily. For the 2012-13 school year and beyond, an additional 40 minutes of instruction time was added to the school day for all middle schools and their students within the school district.

In a cooperative effort of the county government, Board of Education, and the Maryland-National Capital Park & Planning Commission (M-NCPPC), some M-NCPPC community centers are physically connected to middle schools throughout the district. These unique community park/school centers feature shared-use areas which include a gymnasium, multi-purpose room, exercise/fitness room, dance room, arts and crafts room, computer lab, offices, storage areas, patio area, and restrooms. There are tennis courts and unlighted fields located on-site at select centers.
Dedicated magnet schools
Dedicated magnet schools are offered in the PGCPS system at the PreK-8, elementary, and middle school levels only. As of 2012-13, Glenarden Woods and Heather Hills are the only full elementary-level dedicated magnet schools in the system. Dedicated magnet schools are "whole school" programs and differ from traditional comprehensive schools in that (1) all students at the school are enrolled and receive instruction in the magnet program, and (2) traditional attendance areas for assigning students to a school are replaced by much larger geographical attendance zones, usually split between north county (areas north of Central Avenue) and south county (areas south of Central Avenue). Whole-school, dedicated magnet programs are offered through the Creative and Performing Arts, French Immersion, Montessori, and Talented & Gifted Center magnet programs. Students receive specialized instruction that varies from the typical comprehensive program offered at most other schools. Students are selected through a magnet lottery for the French Immersion and Montessori programs, and also for the Creative and Performing Arts program at the elementary school level; acceptance into the Creative and Performing Arts program at the middle school level is through audition only. Acceptance into the TAG Centers at Glenarden Woods and Heather Hills Elementary Schools is through specialized TAG testing only.

Combined elementary and middle schools
Pre-kindergarten through grade 8 schools are essentially combined elementary and middle schools housed in one building. Most of these schools are referred to as "academies" in the school district. The elementary school usually starts at pre-kindergarten and ends at grade 5, and the middle school starts at grade 6 and ends at grade 8. These schools usually offer a slightly enhanced standard of learning, and studies have suggested that students benefit from being in one continuous facility from kindergarten through 8th grade, without the disruption of having to attend a brand-new school for the middle school years. Cora L. Rice Elementary School and G. James Gholson Middle School are not true academies: both schools are housed in one facility, but they operate as two completely separate schools for all intents and purposes.

Elementary schools
Elementary schools in Prince George's County operate in several configurations, ranging from Pre-K (Head Start) through grade 6. Most elementary schools operate under a kindergarten through grade 6 configuration and lack a pre-kindergarten/Head Start program. More recently, with boundary realignments to ease overcrowding, the opening of newer and larger schools, and increased funding, several schools have changed to a PreK-6 configuration, while others have added a pre-kindergarten but dropped the sixth grade to become PreK through grade 5 schools; the sixth grades from those schools were added to the elementary schools' feeder middle schools. In a cooperative effort of the county government, Board of Education, and the Maryland-National Capital Park & Planning Commission (M-NCPPC), several M-NCPPC community centers are physically connected to elementary schools throughout the district. These unique community park/school centers feature shared-use areas which include a gymnasium, multi-purpose room, exercise/fitness room, dance room, arts and crafts room, computer lab, offices, storage areas, patio area, and restrooms.
Tennis courts and unlighted fields are located on-site at select centers.

Former schools
High schools
Forestville High School (Forestville Military Academy)
Frederick Sasscer Junior-Senior High School (Upper Marlboro) - Established in 1948 to relieve crowding at the former Marlboro High School. The school was named after the first Prince George's County Superintendent of Schools, Frederick Sasscer Jr. (1902–1914). The last graduating class was in 1971. The school became a junior high school only beginning in the 1971-72 school year and, in the late 1970s, school operations ceased and the facility was transformed into the Prince George's County Board of Education's Sasscer Administration Building.
Lakeland High School (College Park) - A segregated school for black children, it operated from 1928 to 1950, when it was replaced by Fairmont Heights High School near Fairmount Heights.
Marlboro High School (Upper Marlboro, Maryland) - Marlboro High School began as Upper Marlboro Academy in c. 1860. The building was replaced in 1934, and the school became Marlboro High School. It operated until 1948, when junior and senior high school operations were moved to the then-new Frederick Sasscer High School to address overcrowding. Primary school operations continued at the former high school until 1974.

Middle schools
Eugene Burroughs Middle School (Accokeek) - Merged into Accokeek Academy in 2009.
Frederick Sasscer Junior High School (Upper Marlboro) - Began operations in the 1971-72 school year as a junior high school only, after students in grades 10 through 12 were moved to other schools, operating until the late 1970s, when the facility was transformed into the Prince George's County Board of Education's Sasscer Administration Building.
G. Gardner Shugart Middle School (Hillcrest Heights CDP) - Shugart was scheduled to close in 2009. According to a Washington Post article written by Nelson Hernandez, Shugart, in which 35% of its students passed a State of Maryland mathematics proficiency test and which underwent a restructuring required by State of Maryland authorities, "is among the schools with long-standing academic problems".

Elementary schools
Berkshire Elementary School (Suitland CDP) - It could hold up to 550 students. Berkshire Elementary closed in 2009. Its final enrollment was 281.
John Carroll Elementary School (current Summerfield CDP) - Scheduled to close in 2009.
Mullikin Elementary (Mitchellville, Maryland) - Operated from at least the 1940s until the mid-1960s, when it burned and was razed. Today, the land serves as a Prince George's County school bus lot.
Thomas Claggett Elementary School (Walker Mill CDP) - Its official capacity was 464. In 2005 it had 236 students, filling 49% of the official capacity; this was the lowest percentage of any PGCPS school. At one point the capacity percentage was 38%. In 2010 it had 290 students, but after that year the student count declined: it had 216, and later 223 in the 2013-2014 school year, and the projected 2014-2015 enrollment was 187.
In addition, in state tests circa 2014, about 56% of the students were proficient in reading, while 36.7% were proficient in mathematics. In May 2014, PGCPS applied for a grant from the state of Maryland that would permit it to close Claggett.
College Park Elementary School (College Park) - For a period, Friends Community School occupied the building, but it moved out in 2007. The nascent College Park Academy attempted to lease the previous College Park Elementary building, but there was community opposition. The grade 6-12 charter school is currently located in Riverdale Park.
Henry G. Ferguson Elementary School (Accokeek) - Merged into Accokeek Academy in 2009.
Matthew Henson Elementary School (Landover CDP) - Scheduled to close in 2009. In 2012, EXCEL Academy agreed to open a charter school in the former Henson space, and it moved from its previous campus in Riverdale.
John Edgar Howard Elementary School (Coral Hills CDP) - Scheduled to close in 2009. The facility is now used as the John E. Howard Community Center, operated by the Prince George's County Department of Parks and Recreation.
Lakeland Elementary School (College Park) - A segregated school for black children, it opened in 1925.
Middleton Valley Elementary School (Camp Springs CDP) - It was scheduled to close on June 18, 2009.
Morningside Elementary School (Morningside) - It opened in 1956. The school, which had a capacity of 300 students, closed in 2009. At the end of its life it was one of the few PGCPS schools in which significant numbers of students traveled to school on foot. A report made by a non-PGCPS authority around 2009 stated that the condition of Morningside Elementary's building was one of the poorest of any school in Prince George's County. By 2011, Imagine Schools was scheduled to open a campus in the former Morningside Elementary, now known as Imagine Foundations at Morningside Public Charter School, which serves grades PK-8.
Owens Road Elementary School (Glassmanor CDP) - It was scheduled to close on June 18, 2009.
Skyline Elementary School (Camp Springs) - It closed in 2016. Post-closure, its students were to be sent to Beanes Elementary. Until its closing, it had a program for autistic students.

Accolades and achievements
Newsweek's America's Best High Schools
In June 2010, seven PGCPS high schools were listed in Newsweek's annual list of the top 1600 high schools in the nation. This was up from five county high schools which made the list the previous year.
The 2010 list included Eleanor Roosevelt High School in Greenbelt (#409), Oxon Hill High School in Oxon Hill (#957), Bowie High School in Bowie (#1,173), Laurel High School in Laurel (#1,343), High Point High School in Beltsville (#1,361), Central High School in Capitol Heights (#1,429), and Parkdale High School in Riverdale (#1,481). The schools are ranked on the number of Advanced Placement, International Baccalaureate, and/or Cambridge tests taken by all students in a school in 2009, divided by the number of graduating seniors, a measure called the "Challenge Index". The listed schools represent the top six percent of all public high schools in America.
In June 2009, five PGCPS high schools were named in the best high schools list: Bowie High School in Bowie, Charles Herbert Flowers High School in Springdale, High Point High School in Beltsville, Oxon Hill High School in Oxon Hill, and Eleanor Roosevelt High School in Greenbelt. Eleanor Roosevelt ranked the highest of the county schools at 372nd on the nationwide list, Oxon Hill ranked 918th, High Point ranked 961st, Bowie ranked 1,370th, and Charles Herbert Flowers ranked 1,445th.
U.S. News & World Report's Best High Schools
Since 2007, U.S. News & World Report has ranked high schools in PGCPS among the Best High Schools in America. High Point High School, Northwestern High School, and Eleanor Roosevelt High School have been recognized as Silver Medal Schools.
State and national Blue Ribbon Schools
PGCPS has 16 state Blue Ribbon Schools, 13 of which are USDE National Blue Ribbon Schools of Excellence.
National Blue Ribbon Schools of Excellence
Beacon Heights Elementary School, Riverdale, 2003–04
Columbia Park Elementary School, Landover, 1987–88
Fort Foote Elementary School, Fort Washington, 2000–01
Glenarden Woods Elementary School, Glenarden, 2005–06
Greenbelt Center Elementary School, Greenbelt, 1991–92
Heather Hills Elementary School, Bowie, 1989–90
Templeton Elementary School, Riverdale, 1998–99
Whitehall Elementary School, Bowie, 2011–12
Kenmoor Middle School, Landover, 1988–89
Dora Kennedy French Immersion, Greenbelt, 2013–14
Kettering Middle School, Upper Marlboro, 1992–93
Martin Luther King, Jr. Middle School, Beltsville, 1992–93
Eleanor Roosevelt High School, Greenbelt, 1990-91 & 1997-98
Suitland High School, Forestville, 1988–89
Maryland Blue Ribbon Schools
Beacon Heights Elementary School, Riverdale, 2003–04
Bond Mill Elementary School, Laurel (year N/A)
Columbia Park Elementary School, Landover, 1987–88
Fort Foote Elementary School, Fort Washington, 2000–01
Glenarden Woods Elementary School, Glenarden, 2005–06
Greenbelt Center Elementary School, Greenbelt, 1991–92
Heather Hills Elementary School, Bowie, 1989-90 & 2006-07
Rockledge Elementary School, Bowie, 1997–98
Whitehall Elementary School, Bowie, 2011–12
Templeton Elementary School, Riverdale, 1998–99
Kenmoor Middle School, Landover, 1988–89
Dora Kennedy French Immersion, Greenbelt, 2013–14
Kettering Middle School, Upper Marlboro, 1992–93
Martin Luther King, Jr. Middle School, Beltsville, 1992–93
Eleanor Roosevelt High School, Greenbelt, 1990-91 & 1997-98
Suitland High School, Forestville, 1988–89
Magnet programs and centers
Magnet programs were first implemented in PGCPS in 1985 to fulfill a court-ordered desegregation mandate. Until the late 1980s, Prince George's County had been predominantly white.
To desegregate the school system's mostly all-white schools, PGCPS created several magnet programs that eventually were instituted in over fifty schools spread throughout the county. By the late 1990s, the population demographics of the county had shifted to a mostly African American majority. As they were set up, the magnet programs cost PGCPS approximately $14 million per year to operate. This expense was exacerbated by the fact that the school system's operating budget routinely exceeded the final budget it was traditionally allotted, an issue that had plagued the school system for years. Because the county's population now primarily consisted of African Americans, and because of the expense of operating the Magnet Schools Program, the courts began to investigate the justification for PGCPS's magnet program. In 2004, a court ruled to discontinue the court-ordered busing that had existed in the county for over 30 years, based primarily on the finding that desegregation was no longer an issue in the predominantly Black Prince George's County.
With the end of court-ordered busing came changes to the school system's Magnet Schools Program. The program had gained national attention as one of the largest in the country, and it served as a model for school systems across the nation. Dr. Iris T. Metts, the superintendent of schools at the time, formulated an ambitious plan to expand the magnet programs in PGCPS, as well as to reassign magnet programs that were not performing well at one location to other schools. Due to long and highly publicized in-house issues between Metts and the Board of Education, Metts was replaced by Dr. Andre Hornsby at the end of her contract with PGCPS. When Hornsby arrived, he essentially reversed Metts's decision regarding the future of the county's magnet programs and decided instead to eliminate most of the school system's magnet programs, most of which had been identified as under-performing for several years. Ten magnet programs were identified for elimination, which proved extremely controversial because some of the programs proposed for elimination were located at sites where the program in question had been extremely successful, such as the Academic Center magnet program at Martin Luther King, Jr. Academic Center, which had been the highest-performing middle school in the system for several years and was also a Blue Ribbon school. Despite opposition by parents, in 2006 the magnet programs in PGCPS underwent an overhaul, and most of the magnets were eliminated. A few programs that were determined to be "successful" were either expanded and replicated at other locations, or consolidated and relocated to a dedicated magnet school serving large geographic areas of the county.
Current magnet programs
ES = elementary school; MS = middle school; HS = high school
Aerospace Engineering and Aviation Technology Program (HS)
Biomedical (HS)
Biotechnology (HS)
Career and Technical Education (HS)
Centers for Visual and Performing Arts (HS)
Chinese Immersion (ES, MS)
Creative and Performing Arts (ES, MS)
French Immersion (ES, MS, HS)
International Baccalaureate (HS)
Montessori (ES, MS)
Science and Technology Center (HS)
Spanish Dual Language Program (ES)
Spanish Immersion (ES, MS)
Talented and Gifted Center (ES, MS)
Magnet program descriptions
Aerospace Engineering and Aviation Technology
The Aerospace Engineering and Aviation Technology program is a college and career preparatory program offering areas of study in Aerospace Engineering and Aviation Technology. It is supported by partnerships with the College Park Aviation Museum, NASA, local colleges and universities, and private industry. The program is designed to prepare students for college and high-demand careers. Each student receives a laptop upon entry into the program, and transportation is provided. Admission to the program is based on the same criteria and examination used for the Science and Technology Center.
Locations:
DuVal High School
Biomedical
The Biomedical Program at Bladensburg High School is a high school curriculum that focuses on medical and health careers, such as physicians and research doctors. Students who have a strong interest in pursuing a career in health-related fields have an opportunity to engage in biomedical research, internships, and practicums, and to enroll in medical-related science courses and other advanced placement courses. The curriculum introduces students to a wide variety of medical careers through field trips, speakers in the medical field, internships, accelerated courses, a wide variety of electives related to the biological and social sciences, and independent research.
Locations:
Bladensburg High School
Biotechnology
The Biotechnology Program offers a four-year, college-preparatory program of study in molecular biology, biochemistry, and technical career training that includes scanning electron microscopy. Students have first-hand experience with the advanced technologies used in biotechnology research, academia, and industry. Courses are taught in modern laboratory classrooms equipped with the latest biotechnology instrumentation. The facilities include gel electrophoresis, refrigerated centrifugation, scanning spectrophotometry, high-pressure liquid chromatography, gas chromatography, and access to scanning electron microscopy. Computers support classroom instruction as well as student-initiated research projects. Students study biotechnology theory and technique in a cyclic fashion, in which concepts introduced in beginning courses are emphasized in depth during upper-level classes. Mini-research projects are conducted by science students to demonstrate their understanding of course content and laboratory procedures. Complementing the specific science offerings of the Biotechnology Program is a full selection of courses, including Advanced Placement level courses in English, social studies, and mathematics.
Eligibility Requirements: Students who express interest are eligible to apply. No pre-testing is required.
Admission to the program is through a race-neutral random magnet lottery application process, on a space-available basis.
Locations:
Fairmont Heights High School
Largo High School
Career and Technical Education (CTE) Program
The Technical Academy is a program that provides students with technical skills and knowledge. Benefits to students include gaining a foundation for a college major in a technical field, having access to a technical career after high school if college is postponed, and having access to a part-time technical job to help with college expenses.
Locations:
Bladensburg High School
Crossland High School
Gwynn Park High School
Laurel High School
Suitland High School
Centers for the Visual and Performing Arts
The Centers for the Visual and Performing Arts (CVPA) program has been in existence since 1986, originally at Suitland High School; it was expanded to Northwestern High School in the fall of 2013. The CVPA is a rigorous four-year arts program that offers artistically talented high school students educational opportunities designed to prepare them artistically for college, professional study, or career options in the arts. The program's strong association with the arts community in the Washington, DC area offers distinct advantages: students study with professional artists, dancers, actors, musicians, singers, directors/producers, and radio/television personalities. Students explore, and eventually major in, one of six principal concentrations: vocal music, instrumental music, dance, theatre, visual arts, and interactive media production. Suitland High School offers a 1000-seat auditorium and experimental theatre, a fully equipped dance studio, and a television and recording studio. Northwestern High School offers an 1100-seat auditorium, a fully equipped dance studio, state-of-the-art music rooms, several music practice rooms, a piano lab, and a television and recording studio. Admission into the CVPA magnet program is through audition only.
Locations:
Northwestern High School
Suitland High School
Creative and Performing Arts
The Creative and Performing Arts Magnet Program is located at three sites. The programs at Thomas G. Pullen and Benjamin D. Foulois are open to students in kindergarten through eighth grade; the program at Hyattsville Middle School is open to students in seventh and eighth grade (Hyattsville Middle School has a limited program boundary). The program is designed to develop students' interests and talents in the arts, and features an enhanced interdisciplinary academic program that encourages creative and artistic expression. Experiences and training are designed to challenge and develop the skills of all students, as well as to provide exceptional opportunities for artistically talented students. The curriculum provides in-depth experiences in each art discipline, plus related arts experiences and an infusion of the arts in the overall curriculum. The arts are provided as an integral part of a strong academic program. The Creative Arts Schools follow the general curriculum guidelines used for all Prince George's County public elementary and middle schools. Basic instruction is provided in reading, mathematics, English, science, and social studies, as well as specialized instruction in the arts - art, drama, music, dance, physical education, creative writing, media production, literary arts, and related computer lab experiences.
Locations:
Thomas G. Pullen Creative and Performing Arts Academy
Hyattsville Middle School for the Creative and Performing Arts
The Benjamin D. Foulois Creative and Performing Arts Academy
French Immersion
The French Immersion Magnet Program is designed for kindergarten through twelfth grade. It is referred to as a "full immersion program" because in grades K-5 all academic subjects are taught in French. In grades 6-8, students have two periods per day of French: one period of French Language Arts and one period of world studies in French. In high school, students take two courses in grades 9 and 10 with a focus on literature and the francophone world, which are part of the Pre-International Baccalaureate (IB) Program. At the elementary level, students are totally immersed in French by their bilingual teachers as they learn math, science, social studies, and language arts. At the middle school level, students also study Italian. In addition, Algebra and Geometry are possible options in mathematics. The interdisciplinary approach for English, Art, and World Studies includes special themes, seminars, field trips, and a strong focus on essay writing. International travel is an enrichment component of the French Immersion Program. At the high school level, students may take one of the immersion courses and continue the second foreign language started at the middle school level. Other options are IB preparation courses for English, history, and science, and access to Chemistry and Calculus. Higher-level IB or Advanced Placement (AP) courses are available. There is an exchange program with a school in France, and other exchanges are being explored for high school students. In addition to the immersion continuity, students may continue the study of the second foreign language they began in middle school — either Russian, Italian, Latin, or German.
Locations:
Maya Angelou French Immersion
Dora Kennedy French Immersion
Central High School
International Baccalaureate
The International Baccalaureate (IB) Diploma Magnet Program is an academically challenging and balanced course of study that prepares students for success in college and life beyond. The mission of the program is to develop inquiring, knowledgeable, and caring young people who help to create a better, more peaceful world through intercultural understanding and respect. The IB program offers many benefits to its participants, such as higher university and college acceptance rates for IB graduates; increased scholarship and grant opportunities; a college-level academic program that transitions students to university and college standards; and teacher development using IB strategies.
Locations:
Central High School
Crossland High School
Laurel High School
Parkdale High School
Suitland High School
Montessori
Prince George's County Public Schools has implemented two facilities dedicated to the Montessori instructional program — the Robert Goddard Montessori School and the John Hanson Montessori School. As dedicated facilities, these schools do not have a neighborhood attendance area. Entry into the program is through the random lottery application process only. The Montessori Primary Program, for children ages 3 to 6 years old, is based on the Montessori educational philosophy. Taught by Montessori-accredited teachers, young children are guided in developing inner discipline, strengthening their coordination, and extending their concentration span. These accomplishments result in their readily learning to read, write, and grasp mathematics.
The program consists of a half-day morning session for preschoolers (ages 3). Children older than four must be enrolled in a certified Montessori program to be accepted into the program. The Montessori Lower Elementary Program is designed for students ages 6 to 9 years old with prior Montessori experience; rapid growth and learning are observed in classrooms filled with appropriate educational materials. The Montessori Upper Elementary Program continues for the next age grouping, students ages 9–12 with prior Montessori experience. Taught by Montessori-accredited teachers, these elementary program students study an integrated curriculum that includes mathematics, geometry, language, cultural studies, astronomy, biology, chemistry, geography, history, geology, philosophy, art, music, and physical education. The Montessori Middle School Program completes the Montessori studies for students progressing to the seventh and eighth grades. An interdisciplinary teaching team provides the Montessori Program for multidisciplinary learning, including English Language Arts, mathematics, science, and social studies. At the high school level, students can apply for entry to the Biotechnology, Biomedical, Military Academy, Center for the Visual & Performing Arts, and/or Science & Technology Center programs.
Locations:
Robert Goddard Montessori School
John Hanson Montessori School
Judith P. Hoyer Montessori School
Science and Technology Center
The Science and Technology Center (S/T) is a highly challenging four-year curriculum which provides college-level academic experiences in science, mathematics, and technology. The program is not a true magnet program, as students are admitted into the S/T program based on competitive examination only, as opposed to the standard magnet lottery process. Of twenty-eight possible credits, a student is required to obtain a minimum of thirteen credits in specific mathematics, pre-engineering technology, research, and science courses. In grades nine and ten, the program consists of common-experience courses for all students. In grades eleven and twelve, each student must choose coursework from at least one of four major study areas. Students are expected to be enrolled in a full schedule of classes during the entire four-year program. External experiences are possible and encouraged, but must be a direct extension or enrichment of the Science and Technology Program, and must have the recommendation of the Science and Technology Center Coordinator prior to approval by the principal. The program is offered at three centers — Eleanor Roosevelt High School in northern Prince George's County, Oxon Hill High School in southern Prince George's County, and Charles Herbert Flowers High School in central Prince George's County. Students attend the center that serves their legal residence. Transportation is provided for all students. Each school is a four-year comprehensive high school as well as a Science and Technology Center, and each is an active member of the National Consortium for Specialized Secondary Schools of Mathematics, Science and Technology (NCSSSMST). Admission into the Science and Technology Center is highly competitive and contingent upon three criteria, all weighed equally.
The criteria are:
Grades from four quarters of 7th grade and the first quarter of 8th grade (or four quarters of 8th grade and the first quarter of 9th grade) in math, science, English, and social studies
A standardized reading comprehension test
A standardized numerical test
All of these are factored into a final score. The number of students admitted into the S/T program varies from school to school; as an example, the 225-250 students with the top scores are admitted to Roosevelt's Science and Technology Program, and the next 60 students are placed on a waiting list. All interested 8th and 9th grade students who are residents of Prince George's County are eligible to apply for admission to the Science and Technology Center.
Locations:
Charles Herbert Flowers High School
Oxon Hill High School
Eleanor Roosevelt High School
Spanish Dual Language Program
The Spanish Dual Language Program gives equal emphasis to English and non-English language speakers. Students learn Spanish and English through content-based instruction in selected core subjects, with a cross-cultural understanding for both native and non-native speakers. Students read, write, listen, and speak in both languages, becoming bilingual, biliterate, and bicultural.
Locations:
Cesar Chavez Elementary School
Spanish Immersion
Language immersion is an educational approach in which students are taught the curriculum content through the medium of a second language, in this case Spanish. Children learn all of their core subjects (reading, writing, mathematics, social studies, and science) in Spanish. Spanish-speaking teachers immerse students completely in Spanish as they learn. In this way, immersion students not only learn the content, but also gain knowledge of the language in which it is taught.
Locations:
Overlook Elementary School
Phyllis E. Williams Elementary School
Talented and Gifted Center (TAG)
Talented and Gifted Center (TAG) Magnet Schools provide a full-day intensive educational program appropriate for identified talented and gifted students in grades 2-8. Each school offers a full day of enriched and accelerated educational experiences in the four major content areas. Special offerings include elementary foreign language programs, computer laboratories, laboratory-based science programs, and fine arts programs.
Locations:
The Accokeek Academy
Capitol Heights Elementary School
Glenarden Woods Elementary School
Heather Hills Elementary School
Highland Park Elementary School
Longfields Elementary School
Valley View Elementary School
Greenbelt Middle School
Kenmoor Middle School
Walker Mill Middle School
See also
List of Prince George's County Public Schools Middle Schools
Prince George's County Public Schools Magnet Programs
List of schools in Prince George's County, Maryland
References
External links
Public Schools
School districts in Maryland
School districts established in 1899
17064122
https://en.wikipedia.org/wiki/Ver%20%28command%29
Ver (command)
In computing, ver (short for version) is a command in various command-line interpreters (shells) such as COMMAND.COM, cmd.exe, and 4DOS/4NT. It prints the name and version of the operating system, the command shell, or in some implementations the versions of other commands. It is roughly equivalent to the Unix command uname.
Implementations
The command is available in FLEX, HDOS, DOS, FlexOS, SpartaDOS X, 4690 OS, OS/2, Windows, and ReactOS. It is also available in the open-source MS-DOS emulator DOSBox, in the KolibriOS Shell, and in the EFI shell.
TSC FLEX
In TSC's FLEX operating system, the VER command is used to display the version number of a utility or program. In some versions the command is called VERSION.
DOS
The command is available in MS-DOS versions 2 and later. MS-DOS versions up to 6.22 typically derive the DOS version from the DOS kernel. This may be different from the string printed on start-up. The argument "/r" can be added to give more information and to list whether DOS is running in the HMA (high memory area).
PC DOS typically derives the version from an internal string in command.com (so PC DOS 6.1 command.com reports the version as 6.10, although the kernel version is 6.00).
DR DOS 6.0 also includes an implementation of the command. DR-DOS reports whatever value the environment variable OSVER holds.
PTS-DOS includes an implementation of this command that can display, modify, and restore the DOS version number.
IBM OS/2
OS/2 command.com reports an internal string containing the OS/2 version. The underlying kernel here is 5.00, but it is modified to report x0.xx (where x.xx is the OS/2 version).
Microsoft Windows
Windows 9x command.com reports a string from inside command.com. The build version (e.g. 2222) is also derived from there.
Windows NT command.com reports either the 32-bit processor string (4nt, cmd) or, under some loads, MS-DOS 5.00.500 (for all builds). The underlying kernel reports 5.00 or 5.50 depending on the interrupt. MS-DOS 5.00 commands run unmodified on NT.
Microsoft Windows also includes a GUI (Windows dialog) variant of the command called winver, which shows the Service Pack or Windows Update installed (if any) as well as the version. In Windows before Windows for Workgroups 3.11, running winver from DOS reported an embedded string in winver.exe.
Windows also includes the setver command, which is used to set the version number that the MS-DOS subsystem (NTVDM) reports to a DOS program. This command is not available on Windows XP 64-Bit Edition.
DOSBox
In DOSBox, the command is used to view and set the reported DOS version. It also displays the running DOSBox version. The syntax to set the reported DOS version is the following:
VER SET <MAJOR> [MINOR]
The parameter MAJOR is the number before the period, and MINOR is what comes after. Versions can range from 0.0 to 255.255. Any values over 255 will loop back from zero. (That is, 256=0, 257=1, 258=2, etc.)
Others
AmigaDOS provides a version command. It displays the current version numbers of the Kickstart and Workbench. The DEC OS/8 CCL ver command prints the version numbers of both the OS/8 Keyboard Monitor and CCL.
Syntax
C:\WINDOWS\system32>ver
Microsoft Windows [Version 10.0.10586]
Some versions of MS-DOS support an undocumented /r switch, which will show the revision as well as the version.
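For illustration, the following minimal Python sketch shows one way a script might read the version string that ver reports on a modern Windows host, and models the DOSBox wraparound rule described above. The helper names are invented for this example; because ver is internal to the shell, it must be invoked through cmd /c, and the parsing pattern assumes output in the "Microsoft Windows [Version 10.0.10586]" style, so older shells with differently formatted strings will not match.

import re
import subprocess

def reported_windows_version():
    # 'ver' is a shell-internal command, so run it via 'cmd /c'.
    out = subprocess.run(["cmd", "/c", "ver"],
                         capture_output=True, text=True, check=True).stdout
    # Matches output of the form 'Microsoft Windows [Version 10.0.10586]'.
    match = re.search(r"\[Version (\d+)\.(\d+)\.(\d+)", out)
    return None if match is None else tuple(int(p) for p in match.groups())

def dosbox_ver_component(value):
    # DOSBox's VER SET accepts 0-255 per component; values over 255
    # "loop from zero", i.e. the stored value is taken modulo 256.
    return value % 256

if __name__ == "__main__":
    print(reported_windows_version())   # e.g. (10, 0, 10586)
    print(dosbox_ver_component(257))    # 1, matching the 257=1 example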
Version list
The following table lists version numbers from various Microsoft operating systems:
See also
Comparison of Microsoft Windows versions
List of DOS commands
uname
References
Further reading
External links
ver | Microsoft Docs
How to find Windows version, service pack number and edition from CMD
How to determine what version of Windows you are running in a batch file
Internal DOS commands
MSX-DOS commands
OS/2 commands
ReactOS commands
Windows commands
Microcomputer software
Windows administration
2750352
https://en.wikipedia.org/wiki/Internet%20leak
Internet leak
An Internet leak occurs when a party's confidential information is released to the public on the Internet. Various types of information and data can be, and have been, "leaked" to the Internet, the most common being personal information, computer software and source code, and artistic works such as books or albums. For example, a musical album is leaked if it has been made available to the public on the Internet before its official release date.
Album leaks
Songs of Faith and Devotion (1993) by Depeche Mode
Pop (1997) by U2
I Am... (1999) by Nas
Vol. 3... Life and Times of S. Carter (1999) by Jay-Z
Kid A (2000) by Radiohead
No Strings Attached (2000) by NSYNC
Yankee Hotel Foxtrot (2001) by Wilco
All Eyez on Me (2002) by Monica
The Eminem Show (2002) by Eminem
Kamaal/The Abstract (2002) by Q-Tip
Steal This Album! (2002) by System of a Down
Dangerously in Love (2003) by Beyoncé
Hail to the Thief (2003) by Radiohead
Transatlanticism (2003) by Death Cab for Cutie
The College Dropout (2004) by Kanye West
Encore (2004) by Eminem
How to Dismantle an Atomic Bomb (2004) by U2
Jeanius (2004) by Jean Grae
Extraordinary Machine (2005) by Fiona Apple
Jacksonville City Nights (2005) by Ryan Adams
Back to Basics (2006) by Christina Aguilera
King (2006) by T.I.
Love for Sale (unreleased; leaked in 2006) by Bilal
Lupe Fiasco's Food & Liquor (2006) by Lupe Fiasco
Return to Cookie Mountain (2006) by TV on the Radio
Ys (2006) by Joanna Newsom
Blackout (2007) by Britney Spears
The Flying Club Cup (2007) by Beirut
Graduation (2007) by Kanye West
Icky Thump (2007) by The White Stripes
Strawberry Jam (2007) by Animal Collective
Chinese Democracy (2008) by Guns N' Roses
Microcastle and Weird Era Cont. (2008) by Deerhunter
Tha Carter III (2008) by Lil Wayne
Before I Self Destruct (2009) by 50 Cent
The Blueprint 3 (2009) by Jay-Z
No Line on the Horizon (2009) by U2
Veckatimest (2009) by Grizzly Bear
Sir Lucious Left Foot: The Son of Chico Dusty (2010) by Big Boi
Thank Me Later (2010) and Take Care (2011) by Drake
4 (2011) by Beyoncé
Toy (2011) by David Bowie
Dreams and Nightmares (2012) by Meek Mill
Good Kid, M.A.A.D City (2012) by Kendrick Lamar
Indicud (2013) by Kid Cudi
La Familia 013 (2013) by Charlie Brown Jr.
Rebel Heart (2015) by Madonna
Froot (2015) by Marina and the Diamonds
To Pimp a Butterfly (2015) by Kendrick Lamar
Vulnicura (2015) by Björk
Yandhi (2018) by Kanye West
Fear Inoculum (2019) by Tool
Future Nostalgia (2020) by Dua Lipa
Act II: Patents of Nobility (The Turn) (2020) by Jay Electronica
Positions (2020) by Ariana Grande
Pegasus (2020) by Trippie Redd
Whole Lotta Red (2020) by Playboi Carti
Sour (2021) by Olivia Rodrigo
Montero (2021) by Lil Nas X
Red (Taylor's Version) (2021) by Taylor Swift
30 (2021) by Adele
2 Alivë (2022) by Yeat
The rise in leaks during the 2000s led some popular recording artists to surprise-release their albums.
Source code leaks
Source code leaks are usually caused by misconfiguration of software like CVS or FTP servers that allows people to get source files, by exploits or software bugs, or by employees who have access to the sources or part of them and reveal the code in order to harm the company. There have been many cases of source code leaks in the history of software development.
In 1994, Fraunhofer IIS released only a low-quality version of its MP3 encoding software (l3enc). A hacker named SoloH gathered the source code from the unprotected servers of the University of Erlangen and developed a higher-quality version, which started the MP3 revolution on the internet.
Around 1996, Electronic Arts accidentally put the source code of the video game FIFA 97 on a demo disc.
In 2003, Axel Gembe, a German hacker who had infiltrated Valve's internal network months earlier, exploited a security hole in Microsoft's Outlook to get the complete source of the video game Half-Life 2. The source code was leaked online a week later, and a playable version of Half-Life 2 was compiled from it, revealing how unfinished the game was. The leak damaged morale at Valve and slowed development. In March 2004, Gembe contacted Gabe Newell, CEO of Valve, and identified himself, saying he was a fan and had not acted maliciously. Newell worked with the FBI to invite Gembe to a fake job interview, planning to have him arrested in the USA; however, police arrested him in Germany. The complete source was soon available in various file-sharing networks.
Also in 2003, source code to Diebold Election Systems Inc. voting machines was leaked. Researchers at Johns Hopkins University and Rice University published a critique of Diebold's products based on an analysis of the software. They found, for example, that it would be easy to program a counterfeit voting card to work with the machines and then use it to cast multiple votes inside the voting booth.
In 2003, a Chinese hacker acquired the source code for Lineage II and sold it to someone in California, who then used it to create a bootleg version of the game, powered by his own servers. Despite warnings from NCSoft that pirating an online game was considered illegal, he continued doing so for a few years, until the Federal Bureau of Investigation finally raided his home in 2007, seized the servers, and permanently disabled the website that fronted his bootleg version of Lineage II.
In 2003, one year after 3dfx was bought by Nvidia and support ended, the source code for their drivers leaked, resulting in fan-made, updated drivers.
In 2004, a large portion of Windows NT 4.0's source code and a small percentage (reportedly about 15%) of Windows 2000's were leaked online. The Windows 2000 source code leak was analysed by a writer for the (now defunct) website Kuro5hin, who noted that while the code was generally well written, it allegedly contained about "a dozen" instances of profanity and the milder euphemism "crap". The writer also noted that there were a lot of code hacks, with the "uglier" ones mostly being for compatibility with older programs and some hardware. It was feared that the leak would increase the number of security exploits due to wider scrutiny of the source code. It was later discovered that the leak originated from Mainsoft.
Also in 2004, partial (800 MB) proprietary source code that drives Cisco Systems' networking hardware was made available on the internet. The site posted two files of source code written in the C programming language, which apparently enable some next-generation IPv6 functionality. News of this source code leak appeared on a Russian security site.
In 2006, Anonymous hackers stole source code (about 1 GiB) for Symantec's pcAnywhere from the company's network. While the theft was confirmed in January 2012, it is still unclear how the hackers accessed the network.
In late 2007, the source code of Norton Ghost 12 and a Norton Anti-Spyware version became available via BitTorrent.
In December 2007 and January 2008, a Pirate Bay user published the sources of five Idera SQL products via BitTorrent.
In January 2011, the "stolen source code" of Kaspersky Anti-Virus 2008 was published on the Pirate Bay.
On May 20, 2011, EVE Online's source code was published by someone on a GitHub repository. After it had been online for four days, CCP Games issued a DMCA take-down request, which was followed by GitHub.
In 2011, the source code of GunZ: The Duel v1.5 became available online.
In December 2011, the source code of the Solaris 11 operating system's kernel was leaked via BitTorrent.
In August 2014, the X-Ray Engine source code of S.T.A.L.K.E.R.: Clear Sky (and its successor) became available on GitHub under a non-open-source license.
On December 29, 2015, the AmigaOS 3.1 source code leaked to the web, as confirmed by the rights holder Hyperion Entertainment.
In January 2017, the source code of Opera's Presto browser engine was leaked to GitHub. The source code was taken down shortly afterwards with a DMCA notice.
In June 2017, a small part of Microsoft's Windows 10 source code leaked to the public. The leak was of the Shared Source Kit, a small portion of the source code given to OEMs to help with writing drivers.
In February 2018, the source code of iBoot, the bootloader for Apple operating systems, was leaked onto GitHub by an Apple engineer. The code was from 2016, and by the time it was leaked, iBoot had been restructured, making it obsolete.
On April 22, 2020, Counter-Strike: Global Offensive and Team Fortress 2 code was leaked.
Some time during March 2018, Nintendo suffered a significant leak when a hacker obtained an alleged 2 TB of confidential material containing source code for game consoles and games as well as internal documentation. Starting in 2018, the contents of this breach slowly made their way onto the Internet, starting with iQue Player ROMs and various Pokémon games. Later, in 2020, the leaks gained more attention and grew in size, culminating in the release of Wii and Nintendo 64 source code and the so-called "Gigaleak", a massive release containing multiple N64 games' source code and SNES prototypes. After December 2020, no more releases have occurred.
On August 7, 2020, 20 GB of Intel intellectual property leaked, including source code (in SystemVerilog and otherwise) of their systems on chips, with the git structure preserved. The leak included Intel ME, Intel microcode, and software simulators of their hardware. Source code for various Intel BIOSes was also leaked, as was firmware for the SpaceX cameras that Intel worked on. The data was distributed through a torrent.
On September 23, 2020, complete source code depots for Windows XP SP1 and Windows Server 2003 were leaked. The archives included all the source code from the time it was used at Microsoft, including documentation and build tools. The leak was at first dismissed as illegitimate, but it soon became clear that it was genuine: the source code contained Windows XP-specific code and resources, and later one user managed to compile the OS and pack it into an ISO image.
On January 4, 2021, Nissan North America source code was leaked online due to misconfiguration of a company Git server, which was left exposed online with a default username and password of admin/admin. Software engineer Tillie Kottmann learned of the leak and analyzed the data, which they shared with ZDNet.
The repository reportedly contained Nissan NA mobile apps, parts of the Nissan ASIST diagnostics tool, Nissan's internal core mobile library, the Dealer Business Systems and Dealer Portal, client acquisition and retention tools, market research tools and data, a vehicle logistics portal, vehicle connected services, and various other back ends and internal tools.
On February 10, 2021, Cyberpunk 2077 and Witcher 3 developer CD Projekt Red (CDPR) announced that hackers had targeted the company and attempted to hold it to ransom. On June 6, 2021, someone in possession of the data publicly leaked the complete Cyberpunk 2077 source code (96.02 GB of data in a 7z archive) online; previously it had been available only in encrypted form.
On October 6, 2021, streaming site Twitch had its source code, along with earnings reports of top streamers, leaked by hackers on 4chan, who cited the streaming site's negative community and a desire to foster competition and disruption in the online video streaming space. The breach was confirmed by Twitch on Twitter. The leak was distributed freely via a torrent file and was 135.17 GB in size. As a precaution, all stream keys were reset by Twitch.
End-of-life leaks by developers
Sometimes software developers themselves will intentionally leak their source code in an effort to prevent a software product from becoming abandonware after it has reached its end-of-life, allowing the community to continue development and support. Reasons for leaking instead of a proper release to the public domain or as open source can include scattered or lost intellectual property rights. An example is the video game Falcon 4.0, which became available in 2000; another is Dark Reign 2.
Other leaks
In late 1998, a number of confidential Microsoft documents, later dubbed the Halloween documents, were leaked to Eric S. Raymond, an activist in the open-source software movement, who published and commented on them online. The documents revealed that internally Microsoft viewed free and open-source software such as Linux as technologically competitive and a major threat to Microsoft's dominance in the market, and they discussed strategies to combat them. The discovery caused a public controversy. The documents were also used as evidence in several court cases.
Nintendo's crossover fighting video game series Super Smash Bros. has a history of having unconfirmed content leaked. Every game in the series since, including 2008's Super Smash Bros. Brawl, has been affected by leaks in some form:
Super Smash Bros. Brawl for the Wii was leaked by a video on the Japanese-language wii.com website, revealing unconfirmed playable characters on January 28, 2008 (three days before the game's Japanese release).
Super Smash Bros. for Nintendo 3DS and Wii U was afflicted in August 2014 by the "ESRB leak", in which many screenshots and limited video footage of the 3DS version were leaked by a supposed member of the ESRB. The leak gained traction very quickly because the screenshots mostly contained elements that the game ratings board would be interested in, such as trophies of suggestively-dressed female characters (some of which were later found to be edited or cut altogether in the final game).
Super Smash Bros. Ultimate was leaked in its entirety two weeks before its release, allowing many to play and datamine it in advance. While the entire roster of characters and stages had already been officially revealed, many unrevealed collectibles, music tracks, and story elements were discovered and distributed.
This prompted Nintendo to issue copyright strikes against many YouTube and Twitch channels.
Several high-profile books have been leaked on the Internet before their official release dates, including If I Did It, Harry Potter and the Deathly Hallows, and an early draft of the first twelve chapters of Midnight Sun. The leak of the latter prompted the author, Stephenie Meyer, to suspend work on the novel.
On January 31, 2014, the original uncensored version of the South Park episode "201" was leaked, when it was illegally pulled from the South Park Studios servers and posted online in its entirety without any approval by Comedy Central. The episode was heavily censored by the network when it aired in 2010, against the will of series creators Trey Parker and Matt Stone, and was never formally released uncensored to the public. The episode was the second of a two-parter and was censored after the airing of the first part as a result of death threats from Islamic extremists who were angered by the episode's storyline satirizing censorship of depictions of Muhammad.
In 2015, the unaired Aqua Teen Hunger Force episode "Boston" was leaked online. The episode was set to air during the fifth season as a response to a controversial publicity stunt for Aqua Teen Hunger Force Colon Movie Film for Theaters that occurred in the titular city, but Adult Swim was forced to pull it to avoid further controversy.
On March 13, 2016, the full list of qualifying teams and first-round match-ups for the 2016 NCAA Men's Division I Basketball Tournament leaked on Twitter in the midst of a television special being broadcast by CBS to officially unveil them. The leak exacerbated criticism of a new, two-hour format for the selection broadcast, which was criticized for revealing the full tournament bracket at a slower pace than in previous years.
Throughout 2020 and 2021, Genshin Impact developer miHoYo has dealt with the leaking of character information and skills online through Twitter and other social media. Moderators of the official Genshin Impact forum and Discord are instructed to delete unreleased content and to ban posters as necessary.
On April 20, 2021, Apple supplier Quanta Computer was hit by a ransomware attack. The attackers began posting documents and schematics of MacBook computer designs as recent as March 2021, and threatened to release everything they had obtained by May 1, 2021, unless a ransom was paid; however, nothing further came of the breach.
The teaser trailer for Spider-Man: No Way Home (2021) was leaked a day before it officially came out.
High-profile Internet leaks
February 13, 2004: portions of Windows NT 4.0 and Windows 2000 source code leaked from Microsoft
November 2009: Climatic Research Unit email leak, aka Climategate
See also
Distributed Denial of Secrets
GlobaLeaks
News leak
Nikki Catsouras photographs controversy, a 2006 California case in which police photographs of a fatal automobile accident were leaked online
royaldutchshellplc.com
Software release life cycle
WikiLeaks
References
Internet terminology
Intellectual property law
Internet trolling
Internet leaks
46286798
https://en.wikipedia.org/wiki/Cheon%2C%20Jung%20Hee
Cheon, Jung Hee
Cheon, Jung Hee is a South Korean mathematician and cryptographer whose research interests include computational number theory, cryptography, and information security. He is one of the inventors of braid cryptography, a form of group-based cryptography, and of the approximate homomorphic encryption scheme HEAAN. As a co-inventor of HEAAN, he is actively working on homomorphic encryption and its applications, including machine learning, homomorphic control systems, and DNA computation on encrypted data. He is particularly known for his work on an efficient algorithm for the strong DH problem. He received the best paper award at Asiacrypt 2008 for improving the Pollard rho algorithm, and the best paper award at Eurocrypt 2015 for an attack on multilinear maps. He was also selected as Scientist of the Month by the Korean government in 2018 and won the POSCO Science Prize in 2019.
He is a professor of mathematical sciences at Seoul National University (SNU) and the director of IMDARC (the center for industrial mathematics) at Seoul National University. He received his Ph.D. in mathematics from KAIST in 1997. Before joining SNU, he was at ETRI, Brown University, and ICU. He was a program co-chair of ICISC 2008, MathCrypt 2013, ANTS-XI, Asiacrypt 2015, MathCrypt 2018/2019/2021, and PQC 2021. He was one of two invited speakers at Asiacrypt 2020. He also contributes to the academic community as an associate editor of Designs, Codes and Cryptography, the Journal of Communications and Networks, and the Journal of Cryptology.
Awards
The best paper award at Asiacrypt 2008
The best paper award at Eurocrypt 2015
Scientist of the Month, Korean government, December 2018
POSCO Science Prize, 2019
PKC Test-of-Time Award, 2021
References
External links
Faculty page at Seoul National University Department of Computer Science and Engineering
Number theorists
Living people
South Korean mathematicians
KAIST alumni
Seoul National University faculty
1969 births
40076829
https://en.wikipedia.org/wiki/SIGCSE%20Technical%20Symposium%20on%20Computer%20Science%20Education
SIGCSE Technical Symposium on Computer Science Education
The Association for Computing Machinery's Special Interest Group on Computer Science Education (SIGCSE) Technical Symposium is the main ACM conference for computer science educators. It has been held annually in February or March in the United States since 1970, with the exception of 2020, when it was cancelled due to COVID-19. In 2019, there were 1,809 attendees and 994 total submissions from over 50 countries, with a total of 2,668 unique authors representing over 800 institutions and organizations. There were 526 paper submissions (up 15% on 2018), with 169 papers accepted across the three paper tracks (CS Education Research, Experience Reports & Tools, and Curricula Initiatives), up 5% over 2018. It is a CORE A conference.
SIGCSE members often refer to the Symposium as "SIGCSE" (pronounced SIG-see), as in "Are you going to SIGCSE this year?" or "I attended her talk at last year's SIGCSE". Thus, while "SIGCSE" refers to the ACM Special Interest Group (SIG) that is SIGCSE, it also refers to the SIGCSE Technical Symposium.
Conferences
Susan Rodger maintains a page with the history of the SIGCSE Technical Symposium and other SIGCSE conferences.
SIGCSE 2022 - Providence, Rhode Island - March 2–5, 2022 - 53rd conference
SIGCSE 2021 - Toronto, Canada (virtual due to COVID-19 pandemic) - March 13–20, 2021
SIGCSE 2020 - Portland, Oregon - March 11–14, 2020 - 51st conference
SIGCSE 2019 - Minneapolis, Minnesota - February 27 - March 2, 2019 - 50th conference
SIGCSE 2018 - Baltimore, Maryland - February 21–24, 2018 - 49th conference
SIGCSE 2017 - Seattle, Washington - March 8–11, 2017 - 48th conference
SIGCSE 2016 - Memphis, Tennessee - March 2–5, 2016 - 47th conference
SIGCSE 2015 - Kansas City, Missouri - March 4–7, 2015 - 46th conference
SIGCSE 2014 - Atlanta, Georgia - March 5–8, 2014 - 45th conference
SIGCSE 2013 - Denver, Colorado - 44th conference
SIGCSE 2012 - Raleigh, NC - 43rd conference
SIGCSE 2011 - Dallas, Texas - 42nd conference
SIGCSE 2010 - Milwaukee, Wisconsin - 41st conference
SIGCSE 2009 - Chattanooga, Tennessee - 40th conference
SIGCSE 2008 - Portland, Oregon - 39th conference
SIGCSE 2007 - Covington, Kentucky - 38th conference
SIGCSE 2006 - Houston, Texas - 37th conference
Nifty Assignments
The Nifty Assignments session is one of the most popular sessions at the conference. Started by Nick Parlante in 1999, the session serves as a place for educators to share ideas and materials for successful computer science assignments. Nifty assignments are shared publicly for general reference and usage. Presenters have included Owen Astrachan, Allison Obourne, Richard E. Pattis, Suzanne Matthews, Joseph Zachary, Eric S. Roberts, Cay Horstmann, Michelle Craig, Mehran Sahami, David Malan, and Mark Guzdial.
References
External links
Nifty Assignments
SIGCSE Technical Symposium
Computer conferences
Computer science education
Association for Computing Machinery conferences
1707039
https://en.wikipedia.org/wiki/UNIVAC%20418
UNIVAC 418
The UNIVAC 418 was a transistorized, 18-bit word core memory machine made by Sperry Univac. The name came from its 4-microsecond memory cycle time and 18-bit word. The assembly languages for this class of computers were TRIM III and ART418. Over the three different models, more than 392 systems were manufactured. It evolved from the Control Unit Tester (CUT), a device used in the factory to test peripherals for larger systems.
Architecture
The instruction word had three formats:
Format I - common Load, Store, and Arithmetic operations
f - Function code (6 bits)
u - Operand address (12 bits)
Format II - Constant arithmetic and Boolean functions
f - Function code (6 bits)
z - Operand address or value (12 bits)
Format III - Input/Output
f - Function code (6 bits)
m - Minor function code (6 bits)
k - Designator (6 bits) used for channel number, shift count, etc.
Numbers were represented in ones' complement, in single and double precision. TRIM assembly source code used octal numbers rather than the more common hexadecimal, because an 18-bit word divides evenly into 3-bit octal digits but not into 4-bit hexadecimal digits.
The machine had the following addressable registers:
A - Register (Double precision Accumulator, 36 bits) composed of:
AU - Register (Upper Accumulator, 18 bits)
AL - Register (Lower Accumulator, 18 bits)
ICR - Register (Index Control Register, 3 bits), also designated the "B-register"
SR - Register ("Special Register", 4 bits), a paging register allowing direct access to memory banks other than the executing (P register) bank
P - Register (Program address, 15 bits)
All register values were displayed in real time on the front panel of the computer in binary, and the user could enter new values via push buttons (a function that was safe to perform only when the computer was not in run mode).
UNIVAC 418-I
The first UNIVAC 418-I was delivered in June 1963. It was available with 4,096 to 16,384 words of memory.
UNIVAC 1218 Military Computer
The 418-I was also available in a militarized version as the UNIVAC 1218. It was almost 6 feet tall. It required both 115VAC, 1-phase, 60 Hz and 115VAC, 3-phase, 400 Hz power.
UNIVAC 418-II
The first UNIVAC 418-II was delivered in November 1964. It was available with 4,096 to 65,536 (18-bit) words of memory. Memory cycle time was reduced to 2 microseconds.
The militarized version was called the UNIVAC 1219 (known as the "Mk 152 Fire Control Computer"). It was part of the Navy's Mk 76 missile fire control system, used to control the AN/SPG-55 radar system.
UNIVAC 418-III
The first UNIVAC 418-III was delivered in 1969. It was available with 32,768 to 131,072 words of memory. Memory cycle time was reduced to 750 nanoseconds. New instructions were added for floating-point arithmetic, binary-to-decimal and decimal-to-binary conversions, and block transfers of up to 64 words. The SR register was expanded to 6 bits.
The 418-III had two unique hardware features which enabled it to handle continuous high-speed serial character streams: one was called the buffer overflow interrupt, and the other was hardware buffer chaining.
By the 1990s, all the 418 hardware was gone, but the California Department of Water Resources was still running 418 emulation on a UNIVAC 1100/60.
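To make the word layout concrete, here is a minimal Python sketch of the Format I field split and the ones' complement interpretation described above. The function names are invented for this illustration and are not from any UNIVAC software or documentation.

WORD_BITS = 18
WORD_MASK = (1 << WORD_BITS) - 1

def decode_format_i(word):
    """Split an 18-bit word into the 6-bit f and 12-bit u fields."""
    word &= WORD_MASK
    f = word >> 12          # top 6 bits: function code
    u = word & 0o7777       # low 12 bits: operand address
    return f, u

def from_ones_complement(word):
    """Interpret an 18-bit pattern as a ones' complement integer."""
    word &= WORD_MASK
    if word >> (WORD_BITS - 1):          # sign bit set: negative value
        return -((~word) & WORD_MASK)    # complementing gives the magnitude
    return word

# Example: function code 0o12 with operand address 0o4321, shown in
# octal because 18 bits split evenly into six 3-bit octal digits.
word = (0o12 << 12) | 0o4321
print([oct(x) for x in decode_format_i(word)])   # ['0o12', '0o4321']
print(from_ones_complement(0o777776))            # -1 in ones' complement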
References
External links
UNIVAC 418 documents on bitsavers.org
UNIVAC 1218 Military Computer
1964 BRL report from Aberdeen Proving Grounds
The Automated Weather Network - The USAF creates a real-time network of UNIVAC 418s
18-bit Computers - Computer Unit Tester, 1218 (CP-789), AN/UYK-5 Moonbeam, 1219B-CP-848/UYK, CP-914, ILAAS, 1819, AN/UYK-11(V)
Article about the Univac 1219 and its use in the Navy's Tartar Missile System
Design of the real-time executive for the Univac 418 system
The Univac 1218 at the System Source Computer Museum
Military electronics of the United States
Transistorized computers
18-bit computers
12671580
https://en.wikipedia.org/wiki/YNAB
YNAB
You Need a Budget (YNAB) (pronounced Why-nab) is an American multi-platform personal budgeting program based on the envelope method. In 2013 it was the most popular personal finance software among Lifehacker readers. It is also listed by Wirecutter for 2021 as a "great pick for hard-core budgeters".
Overview
YNAB is a personal budgeting software platform that can be used on desktop computers, iPhone and Android devices, iPads, Apple Watches, and the Amazon Echo system. The general theory of YNAB is to "give every dollar a job": each dollar is allocated to a specific purpose, such as an annual car insurance payment, a long-term housing repair fund, or college savings. The app encourages users to consider recurring expenses every month to prevent spending "surprises" and break the paycheck-to-paycheck cycle. Overspending is strongly discouraged; the app encourages users to move money between categories to "roll with the punches" if more funds than allocated are spent in a category. Over time, users are encouraged to "age their money", accumulating savings and watching their money grow.
Users can either import transactions automatically from their financial institutions or input them manually. The software also displays financial reports to keep users informed about their finances at a glance. The platform also has several open-source add-ons that expand on YNAB's features.
YNAB is a paid software program. After the 34-day free trial ends, users pay $98.99 per year or $14.99 per month. Students who verify their status by providing a school document receive their first year free.
Versions
The latest version, dubbed "The New YNAB" or "nYNAB", was launched December 30, 2015 as a web-based application, with apps for iPhone, iPad, and Android devices. The software is updated multiple times a month to add new features, tweak existing ones, and improve security and back-end functioning.
The previous version, YNAB4, was released in June 2012. Version 4 was a desktop-based application available for Windows and Mac OS, with apps for iPhone and Android devices. Storing the budget file in Dropbox allowed synchronization between the desktop and mobile applications. Version 4 was maintained through 2016, and the company ended support for it in October 2019.
YNAB 3 (released December 2009) ran on multiple platforms using the Adobe AIR runtime. Earlier versions included a Microsoft Excel/OpenOffice.org Calc spreadsheet implementation (dubbed YNAB Basic and discontinued in July 2009) and a Windows-only executable under the name YNAB Pro (discontinued in December 2009).
YNAB for iPhone was released in 2010 and runs on the iPhone, iPod touch, and iPad. It is not a standalone budgeting application but is instead designed to complement the YNAB desktop application. A version tailored for the iPad and including budgeting support was released in 2014.
YNAB for Android was released in September 2011.
See also
List of personal finance software
Software as a service
Web application
References
External links
2004 software
Accounting software
Cloud applications
Shareware
Web applications
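The envelope-method bookkeeping described in the overview above can be illustrated with a small, self-contained Python sketch. The class, category names, and amounts are all invented for this example; this is not YNAB's actual code or data model, just a toy model of "giving every dollar a job" and "rolling with the punches".

class Budget:
    """Toy envelope budget: every dollar is assigned to a category."""

    def __init__(self):
        self.categories = {}

    def assign(self, category, amount):
        # Give dollars a job by allocating them to a category.
        self.categories[category] = self.categories.get(category, 0) + amount

    def spend(self, category, amount):
        # Spending draws the envelope down; a negative balance means overspent.
        self.categories[category] -= amount

    def move(self, source, target, amount):
        # "Roll with the punches": cover overspending from another envelope.
        self.categories[source] -= amount
        self.categories[target] += amount

b = Budget()
b.assign("groceries", 400)
b.assign("car insurance", 90)
b.spend("groceries", 425)                 # groceries is now overspent by 25
b.move("car insurance", "groceries", 25)  # rebalance instead of overspending
print(b.categories)                       # {'groceries': 0, 'car insurance': 65}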
2471540
https://en.wikipedia.org/wiki/Security%20hacker
Security hacker
A security hacker is someone who explores methods for breaching defenses and exploiting weaknesses in a computer system or network. Hackers may be motivated by a multitude of reasons, such as profit, protest, information gathering, challenge, recreation, or evaluation of a system's weaknesses to assist in formulating defenses against potential hackers. The subculture that has evolved around hackers is often referred to as the "computer underground".
Longstanding controversy surrounds the meaning of the term "hacker". In this controversy, computer programmers reclaim the term hacker, arguing that it refers simply to someone with an advanced understanding of computers and computer networks, and that cracker is the more appropriate term for those who break into computers, whether computer criminals (black hats) or computer security experts (white hats). A 2014 article noted that "the black-hat meaning still prevails among the general public".
History
Birth of subculture and entering the mainstream: 1960s-1980s
The subculture around such hackers is termed the network hacker subculture, hacker scene, or computer underground. It initially developed in the context of phreaking during the 1960s and the microcomputer BBS scene of the 1980s. It is associated with 2600: The Hacker Quarterly and the alt.2600 newsgroup.
In 1980, an article in the August issue of Psychology Today (with commentary by Philip Zimbardo) used the term "hacker" in its title: "The Hacker Papers". It was an excerpt from a Stanford Bulletin Board discussion on the addictive nature of computer use. In the 1982 film Tron, Kevin Flynn (Jeff Bridges) describes his intentions to break into ENCOM's computer system, saying "I've been doing a little hacking here". CLU is the software he uses for this.
By 1983, hacking in the sense of breaking computer security had already been in use as computer jargon, but there was no public awareness of such activities. However, the release of the film WarGames that year, featuring a computer intrusion into NORAD, raised the public belief that computer security hackers (especially teenagers) could be a threat to national security. This concern became real when, in the same year, a gang of teenage hackers in Milwaukee, Wisconsin, known as The 414s, broke into computer systems throughout the United States and Canada, including those of Los Alamos National Laboratory, Sloan-Kettering Cancer Center, and Security Pacific Bank. The case quickly drew media attention, and 17-year-old Neal Patrick emerged as the spokesman for the gang, including a cover story in Newsweek entitled "Beware: Hackers at play", with Patrick's photograph on the cover. The Newsweek article appears to be the first use of the word hacker by the mainstream media in the pejorative sense.
Pressured by media coverage, congressman Dan Glickman called for an investigation and began work on new laws against computer hacking. Neal Patrick testified before the U.S. House of Representatives on September 26, 1983, about the dangers of computer hacking, and six bills concerning computer crime were introduced in the House that year. As a result of these laws against computer criminality, white hat, grey hat, and black hat hackers try to distinguish themselves from each other, depending on the legality of their activities. These moral conflicts are expressed in The Mentor's "The Hacker Manifesto", published in 1986 in Phrack.
Use of the term hacker meaning computer criminal was also advanced by the title "Stalking the Wily Hacker", an article by Clifford Stoll in the May 1988 issue of the Communications of the ACM. Later that year, the release by Robert Tappan Morris, Jr. of the so-called Morris worm provoked the popular media to spread this usage. The popularity of Stoll's book The Cuckoo's Egg, published one year later, further entrenched the term in the public's consciousness.

Classifications
In computer security, a hacker is someone who focuses on the security mechanisms of computer and network systems. Hackers include those who endeavor to strengthen security mechanisms by exploring their weaknesses, as well as those who seek unauthorized access to secure information despite security measures. Nevertheless, parts of the subculture see their aim in correcting security problems and use the word in a positive sense. White hat is the name given to ethical computer hackers, who utilize hacking in a helpful way. White hats are becoming a necessary part of the information security field. They operate under a code which acknowledges that breaking into other people's computers is bad, but that discovering and exploiting security mechanisms and breaking into computers is still an interesting activity that can be done ethically and legally. Accordingly, the term bears strong connotations that are favorable or pejorative, depending on the context.

Subgroups of the computer underground with different attitudes and motives use different terms to demarcate themselves from each other. These classifications are also used to exclude specific groups with whom they do not agree.

Cracker
Eric S. Raymond, author of The New Hacker's Dictionary, advocates that members of the computer underground should be called crackers. Yet, those people see themselves as hackers and even try to include the views of Raymond in what they see as a wider hacker culture, a view that Raymond has harshly rejected. Instead of a hacker/cracker dichotomy, they emphasize a spectrum of different categories, such as white hat, grey hat, black hat and script kiddie. In contrast to Raymond, they usually reserve the term cracker for more malicious activity. According to Ralph D. Clifford, cracking is to "gain unauthorized access to a computer in order to commit another crime such as destroying information contained in that system". These subgroups may also be defined by the legal status of their activities.

White hat
A white hat hacker breaks security for non-malicious reasons, either to test their own security system, perform penetration tests or vulnerability assessments for a client, or while working for a security company which makes security software. The term is generally synonymous with ethical hacker, and the EC-Council, among others, have developed certifications, courseware, classes, and online training covering the diverse arena of ethical hacking.

Black hat
A black hat hacker is a hacker who "violates computer security for little reason beyond maliciousness or for personal gain" (Moore, 2005). The term was coined by Richard Stallman to contrast the maliciousness of a criminal hacker with the spirit of playfulness and exploration in hacker culture, or the ethos of the white hat hacker who performs hacking duties to identify places to repair or as a means of legitimate employment.
Black hat hackers form the stereotypical, illegal hacking groups often portrayed in popular culture, and are "the epitome of all that the public fears in a computer criminal".

Grey hat
A grey hat hacker lies between a black hat and a white hat hacker. A grey hat hacker may surf the Internet and hack into a computer system for the sole purpose of notifying the administrator that their system has a security defect, for example. They may then offer to correct the defect for a fee. Grey hat hackers sometimes find a defect in a system and publish the facts to the world rather than to a select group of people. Even though grey hat hackers may not necessarily perform hacking for their personal gain, unauthorized access to a system can be considered illegal and unethical.

Elite hacker
A social status among hackers, elite is used to describe the most skilled. Newly discovered exploits circulate among these hackers. Elite groups such as Masters of Deception conferred a kind of credibility on their members.

Script kiddie
A script kiddie (also known as a skid or skiddie) is an unskilled hacker who breaks into computer systems by using automated tools written by others (usually by other black hat hackers), hence the term script (i.e. a computer script that automates the hacking) kiddie (i.e. kid or child, an individual lacking knowledge and experience), usually with little understanding of the underlying concept.

Neophyte
A neophyte ("newbie", or "noob") is someone who is new to hacking or phreaking and has almost no knowledge or experience of the workings of technology and hacking.

Blue hat
A blue hat hacker is someone outside computer security consulting firms who is used to bug-test a system prior to its launch, looking for exploits so they can be closed. Microsoft also uses the term BlueHat to represent a series of security briefing events.

Hacktivist
A hacktivist is a hacker who utilizes technology to publicize a social, ideological, religious or political message. Hacktivism can be divided into two main groups:
Cyberterrorism – Activities involving website defacement or denial-of-service attacks
Freedom of information – Making information that is not public, or is public in non-machine-readable formats, accessible to the public

Nation state
Intelligence agencies and cyberwarfare operatives of nation states.

Organized criminal gangs
Groups of hackers that carry out organized criminal activities for profit. Modern-day computer hackers have been compared to the privateers of bygone days. These criminals hold computer systems hostage, demanding large payments from victims to restore access to their own computer systems and data. Furthermore, recent ransomware attacks on industries, including energy, food, and transportation, have been blamed on criminal organizations based in or near a state actor, possibly with the country's knowledge and approval. Cyber theft and ransomware attacks are now the fastest-growing crimes in the United States. Bitcoin and other cryptocurrencies facilitate the extortion of huge ransoms from large companies, hospitals and city governments with little or no chance of being caught.

Attacks
Attacks can usually be sorted into two types: mass attacks and targeted attacks. They are distinguished by how the attacker chooses victims and how the attack is carried out. A typical approach in an attack on an Internet-connected system is:
Network enumeration: Discovering information about the intended target.
Vulnerability analysis: Identifying potential ways of attack.
Exploitation: Attempting to compromise the system by employing the vulnerabilities found through the vulnerability analysis.
In order to do so, there are several recurring tools of the trade and techniques used by computer criminals and security experts.

Security exploits
A security exploit is a prepared application that takes advantage of a known weakness. Common examples of security exploits are SQL injection, cross-site scripting and cross-site request forgery, which abuse security holes that may result from substandard programming practice. Other exploits can be used through File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), PHP, SSH, Telnet and some Web pages. These are very common in Web site and Web domain hacking.

Techniques

Vulnerability scanner
A vulnerability scanner is a tool used to quickly check computers on a network for known weaknesses. Hackers also commonly use port scanners. These check to see which ports on a specified computer are "open" or available to access the computer, and sometimes will detect what program or service is listening on that port, and its version number. (Firewalls defend computers from intruders by limiting access to ports and machines, but they can still be circumvented.)

Finding vulnerabilities
Hackers may also attempt to find vulnerabilities manually. A common approach is to search for possible vulnerabilities in the code of the computer system, then test them, sometimes reverse engineering the software if the code is not provided. Experienced hackers can easily find patterns in code to find common vulnerabilities.

Brute-force attack
Password guessing. This method is very fast when used to check all short passwords, but for longer passwords other methods such as the dictionary attack are used, because of the time a brute-force search takes.

Password cracking
Password cracking is the process of recovering passwords from data that has been stored in or transmitted by a computer system. Common approaches include repeatedly trying guesses for the password, trying the most common passwords by hand, and repeatedly trying passwords from a "dictionary", or a text file with many passwords.
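As an illustration of how the dictionary and brute-force approaches relate, the short Python sketch below tries a wordlist first and then falls back to exhaustive search over short lowercase strings. It is a minimal, hypothetical example rather than a description of any real cracking tool: the unsalted SHA-256 target, the wordlist and the four-character search space are all assumptions chosen to keep the sketch small.

import hashlib
import itertools
import string

def crack(target_hash, wordlist, max_len=4):
    # Dictionary phase: hash each candidate word and compare with the target.
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    # Brute-force phase: enumerate every lowercase string up to max_len.
    # The keyspace grows as 26**n, which is why brute force is only practical
    # for very short passwords, as noted above.
    for n in range(1, max_len + 1):
        for combo in itertools.product(string.ascii_lowercase, repeat=n):
            guess = "".join(combo)
            if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
                return guess
    return None

# Hypothetical demo: recover the word "magic" from its hash.
target = hashlib.sha256(b"magic").hexdigest()
print(crack(target, ["password", "letmein", "magic"]))  # prints: magic

Real tools apply the same two phases at far larger scale, typically against salted hashes and with word-mangling rules and GPU acceleration.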
Packet analyzer
A packet analyzer ("packet sniffer") is an application that captures data packets, which can be used to capture passwords and other data in transit over the network.

Spoofing attack (phishing)
A spoofing attack involves one program, system or website that successfully masquerades as another by falsifying data and is thereby treated as a trusted system by a user or another program – usually to fool programs, systems or users into revealing confidential information, such as user names and passwords.

Rootkit
A rootkit is a program that uses low-level, hard-to-detect methods to subvert control of an operating system from its legitimate operators. Rootkits usually obscure their installation and attempt to prevent their removal through a subversion of standard system security. They may include replacements for system binaries, making it virtually impossible for them to be detected by checking process tables.

Social engineering
In the second stage of the targeting process, hackers often use social engineering tactics to get enough information to access the network. They may contact the system administrator and pose as a user who cannot get access to his or her system. This technique is portrayed in the 1995 film Hackers, when protagonist Dade "Zero Cool" Murphy calls a somewhat clueless employee in charge of security at a television network. Posing as an accountant working for the same company, Dade tricks the employee into giving him the phone number of a modem so he can gain access to the company's computer system.

Hackers who use this technique must be familiar with their target's security practices in order to trick the system administrator into giving them information. In some cases, a help-desk employee with limited security experience will answer the phone and be relatively easy to trick. Another approach is for the hacker to pose as an angry supervisor who, when his or her authority is questioned, threatens to fire the help-desk worker. Social engineering is very effective, because users are the most vulnerable part of an organization. No security devices or programs can keep an organization safe if an employee reveals a password to an unauthorized person.

Social engineering can be broken down into four sub-groups:
Intimidation As in the "angry supervisor" technique above, the hacker convinces the person who answers the phone that their job is in danger unless they help them. At this point, many people accept that the hacker is a supervisor and give them the information they seek.
Helpfulness The opposite of intimidation, helpfulness exploits many people's natural instinct to help others solve problems. Rather than acting angry, the hacker acts distressed and concerned. The help desk is the most vulnerable to this type of social engineering, as (a.) its general purpose is to help people; and (b.) it usually has the authority to change or reset passwords, which is exactly what the hacker wants.
Name-dropping The hacker uses names of authorized users to convince the person who answers the phone that the hacker is a legitimate user themselves. Some of these names, such as those of webpage owners or company officers, can easily be obtained online. Hackers have also been known to obtain names by examining discarded documents ("dumpster diving").
Technical Using technology is also a way to get information. A hacker can send a fax or email to a legitimate user, seeking a response that contains vital information. The hacker may claim that he or she is involved in law enforcement and needs certain data for an investigation, or for record-keeping purposes.

Trojan horses
A Trojan horse is a program that seems to be doing one thing but is actually doing another. It can be used to set up a back door in a computer system, enabling the intruder to gain access later. (The name refers to the horse from the Trojan War, with the conceptually similar function of deceiving defenders into bringing an intruder into a protected area.)

Computer virus
A virus is a self-replicating program that spreads by inserting copies of itself into other executable code or documents. By doing this, it behaves similarly to a biological virus, which spreads by inserting itself into living cells. While some viruses are harmless or mere hoaxes, most are considered malicious.

Computer worm
Like a virus, a worm is also a self-replicating program. It differs from a virus in that (a.) it propagates through computer networks without user intervention; and (b.) does not need to attach itself to an existing program. Nonetheless, many people use the terms "virus" and "worm" interchangeably to describe any self-propagating program.
Keystroke logging
A keylogger is a tool designed to record ("log") every keystroke on an affected machine for later retrieval, usually to allow the user of this tool to gain access to confidential information typed on the affected machine. Some keyloggers use virus-, trojan-, and rootkit-like methods to conceal themselves. However, some of them are used for legitimate purposes, even to enhance computer security. For example, a business may maintain a keylogger on a computer used at a point of sale to detect evidence of employee fraud.

Attack patterns
Attack patterns are defined as series of repeatable steps that can be applied to simulate an attack against the security of a system. They can be used for testing purposes or locating potential vulnerabilities. They also provide, either physically or in reference, a common solution pattern for preventing a given attack.

Tools and procedures
A thorough examination of hacker tools and procedures may be found in Cengage Learning's E|CSA certification workbook.

Notable intruders and criminal hackers

Notable security hackers
Andrew Auernheimer, sentenced to 3 years in prison, is a grey hat hacker whose security group Goatse Security exposed a flaw in AT&T's iPad security.
Dan Kaminsky was a DNS expert who exposed multiple flaws in the protocol and investigated Sony's rootkit security issues in 2005. He spoke in front of the United States Senate on technology issues.
Ed Cummings (also known as Bernie S) is a longstanding writer for 2600: The Hacker Quarterly. In 1995, he was arrested and charged with possession of technology that could be used for fraudulent purposes, and set legal precedents after being denied both a bail hearing and a speedy trial.
Eric Corley (also known as Emmanuel Goldstein) is the longstanding publisher of 2600: The Hacker Quarterly. He is also the founder of the Hackers on Planet Earth (HOPE) conferences. He has been part of the hacker community since the late 1970s.
Susan Headley (also known as Susan Thunder) was an American hacker active during the late 1970s and early 1980s, widely respected for her expertise in social engineering, pretexting, and psychological subversion. She became heavily involved in phreaking with Kevin Mitnick and Lewis de Payne in Los Angeles, but later framed them for erasing the system files at US Leasing after a falling out, leading to Mitnick's first conviction.
Gary McKinnon is a Scottish hacker who was facing extradition to the United States to face criminal charges. Many people in the UK called on the authorities to be lenient with McKinnon, who has Asperger syndrome. The extradition has now been dropped.
Gordon Lyon, known by the handle Fyodor, authored the Nmap Security Scanner as well as many network security books and web sites. He is a founding member of the Honeynet Project and Vice President of Computer Professionals for Social Responsibility.
Guccifer 2.0, who claimed that he hacked into the Democratic National Committee (DNC) computer network.
Jacob Appelbaum is an advocate, security researcher, and developer for the Tor project. He speaks internationally for usage of Tor by human rights groups and others concerned about Internet anonymity and censorship.
Joanna Rutkowska is a Polish computer security researcher who developed the Blue Pill rootkit and Qubes OS.
Jude Milhon (known as St. Jude) was an American hacker and activist, a founding member of the cypherpunk movement, and one of the creators of Community Memory, the first public computerized bulletin board system.
Kevin Mitnick is a computer security consultant and author, formerly the most wanted computer criminal in United States history.
Len Sassaman was a Belgian computer programmer and technologist who was also a privacy advocate.
Meredith L. Patterson is a well-known technologist and biohacker who has presented research with Dan Kaminsky and Len Sassaman at many international security and hacker conferences.
Kimberley Vanvaeck (known as Gigabyte) is a Belgian hacker recognized for writing the first virus in C#.
Michał Zalewski (lcamtuf) is a prominent security researcher.
Solar Designer is the pseudonym of the founder of the Openwall Project.
Kane Gamble, sentenced to 2 years in youth detention, who is autistic, gained access to highly sensitive information and "cyber-terrorised" high-profile U.S. intelligence officials such as then CIA chief John Brennan and Director of National Intelligence James Clapper.

Customs
The computer underground has produced its own specialized slang, such as 1337speak. Writing software and performing other activities in support of the subculture's views is referred to as hacktivism. Some consider illegal cracking ethically justified for these goals; a common form is website defacement. The computer underground is frequently compared to the Wild West. It is common for hackers to use aliases to conceal their identities.

Hacker groups and conventions
The computer underground is supported by regular real-world gatherings called hacker conventions or "hacker cons". These events include SummerCon (Summer), DEF CON, HoHoCon (Christmas), ShmooCon (February), Black Hat, Chaos Communication Congress, AthCon, Hacker Halted, and HOPE. Local Hackfest groups organize and compete to develop their skills, sending teams to prominent conventions to compete in group pentesting, exploitation and forensics on a larger scale. Hacker groups became popular in the early 1980s, providing access to hacking information and resources and a place to learn from other members. Computer bulletin board systems (BBSs), such as the Utopias, provided platforms for information-sharing via dial-up modem. Hackers could also gain credibility by being affiliated with elite groups.

Consequences for malicious hacking

India

Netherlands
Article 138ab of the Wetboek van Strafrecht (the Dutch Criminal Code) prohibits computervredebreuk ("computer trespass"), which is defined as intruding into an automated work, or a part thereof, intentionally and against the law. Intrusion is defined as access by means of:
Defeating security measures
By technical means
By false signals or a false cryptographic key
By the use of stolen usernames and passwords.
Maximum imprisonment is one year or a fine of the fourth category.

United States
18 U.S.C. § 1030, more commonly known as the Computer Fraud and Abuse Act, prohibits unauthorized access or damage of "protected computers". "Protected computers" are defined in 18 U.S.C. § 1030(e)(2) as:
A computer exclusively for the use of a financial institution or the United States Government, or, in the case of a computer not exclusively for such use, used by or for a financial institution or the United States Government and the conduct constituting the offense affects that use by or for the financial institution or the Government.
A computer which is used in or affecting interstate or foreign commerce or communication, including a computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States.
The maximum imprisonment or fine for violations of the Computer Fraud and Abuse Act depends on the severity of the violation and the offender's history of violations under the Act. The FBI has demonstrated its ability to recover ransoms paid in cryptocurrency by victims of cybertheft.

Hacking and the media

Hacker magazines
The most notable hacker-oriented print publications are Phrack, Hakin9 and 2600: The Hacker Quarterly. While the information contained in hacker magazines and ezines was often outdated by the time it was published, the magazines enhanced their contributors' reputations by documenting their successes.

Hackers in fiction
Hackers often show an interest in fictional cyberpunk and cyberculture literature and movies. The adoption of fictional pseudonyms, symbols, values and metaphors from these works is very common.

Books
The cyberpunk novels of William Gibson, especially the Sprawl trilogy, are very popular with hackers.
Helba from the .hack manga and anime series
Merlin of Amber, the protagonist of the second series in The Chronicles of Amber by Roger Zelazny, is a young immortal hacker-mage prince who has the ability to traverse shadow dimensions.
Lisbeth Salander in The Girl with the Dragon Tattoo by Stieg Larsson
Alice from Heaven's Memo Pad
Ender's Game by Orson Scott Card
Evil Genius by Catherine Jinks
Hackers (anthology) by Jack Dann and Gardner Dozois
Little Brother by Cory Doctorow
Neuromancer by William Gibson
Snow Crash by Neal Stephenson

Films
Antitrust
Blackhat
Cypher
Eagle Eye
Enemy of the State
Firewall
Girl With The Dragon Tattoo
Hackers
Live Free or Die Hard
The Matrix series
The Net
The Net 2.0
Pirates of Silicon Valley
Skyfall
Sneakers
Swordfish
Terminator 2: Judgment Day
Terminator Salvation
Take Down
Tron
Tron: Legacy
Untraceable
WarGames
Weird Science
The Fifth Estate
Who Am I – No System Is Safe (film)

Non-fiction books
The Art of Deception by Kevin Mitnick
The Art of Intrusion by Kevin Mitnick
The Cuckoo's Egg by Clifford Stoll
Ghost in the Wires: My Adventures as the World's Most Wanted Hacker by Kevin Mitnick
The Hacker Crackdown by Bruce Sterling
The Hacker's Handbook by Hugo Cornwall (Peter Sommer)
Hacking: The Art of Exploitation, Second Edition by Jon Erickson
Out of the Inner Circle by Bill Landreth and Howard Rheingold
Underground by Suelette Dreyfus

See also
Cracking of wireless networks
Cyber spying
Cyber Storm Exercise
Cybercrime
Hacker culture
Hacker (expert)
Hacker Manifesto
IT risk
Mathematical beauty
Metasploit Project
Penetration test
Technology assessment
Vulnerability (computing)

References

Further reading

External links
CNN Tech PCWorld Staff (November 2001). Timeline: A 40-year history of hacking from 1960 to 2001
Can Hackers Be Heroes? Video produced by Off Book (web series)

Computer occupations
Identity theft
Illegal occupations
Computer security
Security breaches
37534
https://en.wikipedia.org/wiki/Hyderabad
Hyderabad
Hyderabad is the capital and largest city of the Indian state of Telangana and the de jure capital of Andhra Pradesh. It is situated on the Deccan Plateau along the banks of the Musi River, in the northern part of South India. Much of Hyderabad lies on hilly terrain around artificial lakes, including the Hussain Sagar lake, which predates the city's founding, north of the city centre. According to the 2011 Census of India, Hyderabad is the fourth-most populous city in India, and its metropolitan region is the sixth-most populous metropolitan area in the country. With an output of US$74 billion, Hyderabad has the fifth-largest urban economy in India.

Muhammad Quli Qutb Shah established Hyderabad in 1591 to extend the capital beyond the fortified Golconda. In 1687, the city was annexed by the Mughals. In 1724, Mughal viceroy Nizam Asaf Jah I declared his sovereignty and founded the Asaf Jahi dynasty, also known as the Nizams. Hyderabad served as the imperial capital of the Asaf Jahis from 1769 to 1948. As capital of the princely state of Hyderabad, the city housed the British Residency and cantonment until Indian independence in 1947. Hyderabad was annexed by the Indian Union in 1948 and continued as the capital of Hyderabad State (1948–56). After the introduction of the States Reorganisation Act of 1956, Hyderabad was made the capital of the newly formed Andhra Pradesh. In 2014, Andhra Pradesh was bifurcated to form Telangana, and Hyderabad became the joint capital of the two states, with a transitional arrangement scheduled to end in 2024. Since 1956, the city has housed the winter office of the President of India.

Relics of Qutb Shahi and Nizam rule remain visible today; the Charminar has come to symbolise the city. By the end of the early modern era, the Mughal Empire had declined in the Deccan, and the Nizams' patronage attracted men of letters from various parts of the world. A distinctive culture arose from the amalgamation of local and migrant artisans. Painting, handicraft, jewellery, literature, dialect and clothing remain prominent today. Through its cuisine, the city is listed as a UNESCO creative city of gastronomy. The Telugu film industry based in the city was the country's second-largest producer of motion pictures. Hyderabad was historically known for its pearl industry, earning it the nickname "City of Pearls", and was at one time the world's only trading centre for Golconda diamonds. Many of the city's historical and traditional bazaars remain open.

Hyderabad's central location between the Deccan Plateau and the Western Ghats, and industrialisation throughout the 20th century, attracted major Indian research, manufacturing, educational and financial institutions. Since the 1990s, the city has emerged as an Indian hub of pharmaceuticals and biotechnology. The formation of special economic zones and HITEC City, dedicated to information technology, has encouraged leading multinationals to set up operations in Hyderabad.

History

Toponymy
The name Hyderabad means "Haydar's city" or "lion city", from haydar 'lion' and ābād 'city', after Caliph Ali Ibn Abi Talib, also known as Haydar because of his lion-like valour in battle. The city was originally called Baghnagar ("city of gardens") and later acquired the name Hyderabad. The European travellers von Poser and Thévenot found both names in use in the 17th century.
One popular legend suggests that the founder of the city, Muhammad Quli Qutb Shah, named it Bhagya-nagar after Bhagmati, a local nautch (dancing) girl whom he married. She converted to Islam and adopted the title Hyder Mahal, and the city would then have been named Hyderabad in her honour.

Early and medieval history
The discovery in 1851 of Megalithic burial sites and cairn circles in the suburbs of Hyderabad by Philip Meadows Taylor, a polymath in the service of the Nizam, provided evidence that the region in which the city stands has been inhabited since the Stone Age. Archaeologists excavating near the city have unearthed Iron Age sites that may date from 500 BCE.

The region comprising modern Hyderabad and its surroundings was ruled by the Chalukya dynasty from 624 CE to 1075 CE. Following the dissolution of the Chalukya empire into four parts in the 11th century, Golconda came under the control of the Kakatiya dynasty from 1158, whose seat of power was at Warangal, northeast of modern Hyderabad. The Kakatiya ruler Ganapatideva (r. 1199–1262) built a hilltop outpost, later known as Golconda Fort, to defend the dynasty's western region. The Kakatiya dynasty was reduced to a vassal of the Khalji dynasty in 1310 after its defeat by Sultan Alauddin Khalji of the Delhi Sultanate. This lasted until 1321, when the Kakatiya dynasty was annexed by Malik Kafur, Alauddin Khalji's general. During this period, Alauddin Khalji took the Koh-i-Noor diamond, which is said to have been mined from the Kollur Mines of Golconda, to Delhi. Muhammad bin Tughluq succeeded to the Delhi sultanate in 1325, bringing Warangal under the rule of the Tughlaq dynasty; Malik Maqbul Tilangani was appointed its governor. In 1336 the regional chieftains, the Musunuri Nayakas, who had revolted against the Delhi sultanate in 1333, took Warangal under their direct control and declared it their capital. In 1347, Ala-ud-Din Bahman Shah, a governor under bin Tughluq, rebelled against Delhi and established the Bahmani Sultanate on the Deccan Plateau, with Gulbarga, west of Hyderabad, as its capital. The neighbouring Musunuri Nayakas of Warangal and the Bahmani Sultans of Gulbarga then engaged in many wars until 1364–65, when a peace treaty was signed and the Musunuri Nayakas ceded Golconda Fort to the Bahmani Sultan. The Bahmani Sultans ruled the region until 1518 and were the first independent Muslim rulers of the Deccan.

In 1496 Sultan Quli was appointed as a Bahmani governor of Telangana; he rebuilt, expanded and fortified the old mud fort of Golconda and named the city "Muhammad nagar". In 1518, he revolted against the Bahmani Sultanate and established the Qutb Shahi dynasty. The fifth Qutb Shahi sultan, Muhammad Quli Qutb Shah, established Hyderabad on the banks of the Musi River in 1591, to avoid the water shortages experienced at Golconda. During his rule, he had the Charminar and Mecca Masjid built in the city. On 21 September 1687, the Golconda Sultanate came under the rule of the Mughal emperor Aurangzeb after a year-long siege of the Golconda Fort. The annexed city of Hyderabad was renamed Darul Jihad (House of War), its state of Golconda was renamed Deccan Suba (Deccan province), and the capital was moved from Golconda to Aurangabad, northwest of Hyderabad.

Modern history
In 1713, Mughal emperor Farrukhsiyar appointed Mubariz Khan as Governor of Hyderabad. During his tenure, he fortified the city and controlled the internal and neighbouring threats.
In 1714 Farrukhsiyar appointed Asaf Jah I as Viceroy of the Deccan (administrator of six Mughal governorates) with the title Nizam-ul-Mulk (Administrator of the Realm). In 1721, he was appointed Prime Minister of the Mughal Empire. His differences with the court nobles led him to resign all his imperial responsibilities in 1723 and leave for the Deccan. Under the influence of Asaf Jah I's opponents, Mughal emperor Muhammad Shah issued a decree ordering Mubariz Khan to stop Asaf Jah I, which resulted in the Battle of Shakar Kheda. In 1724, Asaf Jah I defeated Mubariz Khan to establish autonomy over the Deccan, named the region Hyderabad Deccan, and started what came to be known as the Asaf Jahi dynasty. Subsequent rulers retained the title Nizam ul-Mulk and were referred to as Asaf Jahi Nizams, or Nizams of Hyderabad. The death of Asaf Jah I in 1748 resulted in a period of political unrest as his sons and grandson—Nasir Jung (1748–1750), Muzaffar Jang (1750–1751) and Salabat Jung (1751–1762)—contended for the throne, backed by opportunistic neighbouring states and colonial foreign forces. The accession of Asaf Jah II, who reigned from 1762 to 1803, ended the instability. In 1768 he signed the Treaty of Masulipatam, surrendering the coastal region to the East India Company in return for a fixed annual rent. In 1769 Hyderabad city became the formal capital of the Asaf Jahi Nizams. In response to regular threats from Hyder Ali (Dalwai of Mysore), Baji Rao I (Peshwa of the Maratha Empire), and Basalath Jung (Asaf Jah II's elder brother, who was supported by the French general the Marquis de Bussy-Castelnau), the Nizam signed a subsidiary alliance with the East India Company in 1798, allowing the British Indian Army to be stationed at Bolarum (modern Secunderabad) to protect the state's capital, for which the Nizams paid an annual maintenance fee to the British.

Until 1874 there were no modern industries in Hyderabad. With the introduction of railways in the 1880s, four factories were built to the south and east of Hussain Sagar lake, and during the early 20th century Hyderabad was transformed into a modern city with the establishment of transport services, underground drainage, running water, electricity, telecommunications, universities, industries, and Begumpet Airport. The Nizams ruled the princely state of Hyderabad during the British Raj. After India gained independence, the Nizam declared his intention to remain independent rather than become part of the Indian Union or the newly formed Dominion of Pakistan. The Hyderabad State Congress, with the support of the Indian National Congress and the Communist Party of India, began agitating against Nizam VII in 1948. On 17 September that year, the Indian Army took control of Hyderabad State after an invasion codenamed Operation Polo. With the defeat of his forces, Nizam VII capitulated to the Indian Union by signing an Instrument of Accession, which made him the Rajpramukh (Princely Governor) of the state until it was abolished on 31 October 1956.

Post-Independence
Between 1946 and 1951, the Communist Party of India fomented the Telangana uprising against the feudal lords of the Telangana region. The Constitution of India, which became effective on 26 January 1950, made Hyderabad State one of the Part B states of India, with Hyderabad city continuing to be the capital. In his 1955 report Thoughts on Linguistic States, B. R.
Ambedkar, then chairman of the Drafting Committee of the Indian Constitution, proposed designating the city of Hyderabad as the second capital of India because of its amenities and strategic central location.

On 1 November 1956 the states of India were reorganised by language. Hyderabad State was split into three parts, which were merged with neighbouring states to form Maharashtra, Karnataka and Andhra Pradesh. The nine Telugu- and Urdu-speaking districts of Hyderabad State in the Telangana region were merged with the Telugu-speaking Andhra State to create Andhra Pradesh, with Hyderabad as its capital. Several protests, known collectively as the Telangana movement, attempted to invalidate the merger and demanded the creation of a new Telangana state. Major actions took place in 1969 and 1972, and a third began in 2010. The city suffered several explosions: one at Dilsukhnagar in 2002 claimed two lives; terrorist bombs in May and August 2007 caused communal tension and riots; and two bombs exploded in February 2013. On 30 July 2013 the government of India declared that part of Andhra Pradesh would be split off to form a new Telangana state, and that Hyderabad city would be the capital city and part of Telangana, while the city would also remain the capital of Andhra Pradesh for no more than ten years. On 3 October 2013 the Union Cabinet approved the proposal, and in February 2014 both houses of Parliament passed the Telangana Bill. With the final assent of the President of India, Telangana state was formed on 2 June 2014.

Geography
Hyderabad is south of Delhi, southeast of Mumbai, and north of Bangalore by road. It is situated in the southern part of Telangana in southeastern India, along the banks of the Musi River, a tributary of the Krishna River, on the Deccan Plateau in the northern part of South India. Greater Hyderabad is one of the largest metropolitan areas in India. The city lies on predominantly sloping terrain of grey and pink granite, dotted with small hills, the highest being Banjara Hills. The city has numerous lakes, sometimes referred to as sagar, meaning "sea". Examples include artificial lakes created by dams on the Musi, such as Hussain Sagar (built in 1562 near the city centre), Osman Sagar and Himayat Sagar. At a recent count, the city had 140 lakes and 834 water tanks (ponds).

Climate
Hyderabad has a tropical wet and dry climate (Köppen Aw) bordering on a hot semi-arid climate (Köppen BSh). Summers (March–June) are hot and humid, with average highs in the mid-to-high 30s Celsius; maximum temperatures peak between April and June. The coolest temperatures occur in December and January. May is the hottest month and December the coldest. Heavy rain from the south-west summer monsoon falls between June and October, supplying Hyderabad with most of its mean annual rainfall. Since records began in November 1891, the heaviest 24-hour rainfall on record fell on 24 August 2000. The highest temperature ever recorded occurred on 2 June 1966, and the lowest on 8 January 1946. The city receives 2,731 hours of sunshine per year; maximum daily sunlight exposure occurs in February.

Conservation
Hyderabad's lakes and the sloping terrain of its low-lying hills provide habitat for an assortment of flora and fauna.
Tree cover amounts to 1.7% of the total city area, a decrease from 2.7% in 1996. The forest region in and around the city encompasses areas of ecological and biological importance, which are preserved in the form of national parks, zoos, mini-zoos and a wildlife sanctuary. Nehru Zoological Park, the city's one large zoo, is the first in India to have a lion and tiger safari park. Hyderabad has three national parks (Mrugavani National Park, Mahavir Harina Vanasthali National Park and Kasu Brahmananda Reddy National Park), and the Manjira Wildlife Sanctuary lies outside the city. Hyderabad's other environmental reserves are Kotla Vijayabhaskara Reddy Botanical Gardens, Ameenpur Lake, Shamirpet Lake, Hussain Sagar, Fox Sagar Lake, Mir Alam Tank and Patancheru Lake, which is home to regional birds and attracts seasonal migratory birds from different parts of the world. Organisations engaged in environmental and wildlife preservation include the Telangana Forest Department, the Indian Council of Forestry Research and Education, the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), the Animal Welfare Board of India, the Blue Cross of Hyderabad and the University of Hyderabad.

Administration

Common capital status
According to the Andhra Pradesh Reorganisation Act, 2014, Part 2, Section 5: "(1) On and from the appointed day, Hyderabad in the existing State of Andhra Pradesh, shall be the common capital of the State of Telangana and the State of Andhra Pradesh for such period not exceeding ten years. (2) After expiry of the period referred to in sub-section (1), Hyderabad shall be the capital of the State of Telangana and there shall be a new capital for the State of Andhra Pradesh." The same sections also define the common capital as including the existing area designated as the Greater Hyderabad Municipal Corporation under the Hyderabad Municipal Corporation Act, 1955. As stipulated in sections 3 and 18(1) of the Reorganisation Act, the city's MLAs are members of the Telangana state assembly.

Local government
The Greater Hyderabad Municipal Corporation (GHMC) oversees the civic infrastructure of the city. There are six administrative zones of the GHMC: South Zone (Charminar), East Zone (L. B. Nagar), West Zone (Serilingampally), North Zone (Kukatpally), Northeast Zone (Secunderabad) and Central Zone (Khairatabad); these zones consist of 30 "circles", which together encompass 150 municipal wards. Each ward is represented by a corporator, elected by popular vote. The city has 7,400,000 voters, of which 3,850,000 are male and 3,500,000 are female. The corporators elect the Mayor, who is the titular head of the GHMC; executive powers rest with the Municipal Commissioner, appointed by the state government. The GHMC carries out the city's infrastructural work such as building and maintenance of roads and drains, town planning including construction regulation, maintenance of municipal markets and parks, solid waste management, the issuing of birth and death certificates, the issuing of trade licences, collection of property tax, and community welfare services such as mother and child healthcare, and pre-school and non-formal education. The GHMC was formed in April 2007 by merging the Municipal Corporation of Hyderabad (MCH) with 12 municipalities of the Hyderabad, Ranga Reddy and Medak districts. The Secunderabad Cantonment Board is a civic administration agency overseeing an area that contains several military camps.
The Osmania University campus is administered independently by the university authority. Appointed in February 2021, Gadwal Vijayalakshmi of the Telangana Rashtra Samithi (TRS) is serving as the Mayor of the GHMC.

Law and order in Hyderabad city is supervised by the Governor of Telangana. The jurisdiction is divided into three police commissionerates: Hyderabad, Cyberabad, and Rachakonda, with each zone headed by a deputy commissioner of police. The jurisdictions of the city's administrative agencies are, in ascending order of size: the Hyderabad Police area, Hyderabad district, the GHMC area ("Hyderabad city") and the area under the Hyderabad Metropolitan Development Authority (HMDA). The HMDA is an apolitical urban planning agency that covers the GHMC and its suburbs, extending to 54 mandals in five districts encircling the city. It coordinates the development activities of the GHMC and suburban municipalities and manages the administration of bodies such as the Hyderabad Metropolitan Water Supply and Sewerage Board (HMWSSB).

Hyderabad is the seat of the Government of Telangana, the Government of Andhra Pradesh and the President of India's winter retreat Rashtrapati Nilayam, as well as the Telangana High Court and various local government agencies. The Lower City Civil Court and the Metropolitan Criminal Court are under the jurisdiction of the High Court. The GHMC area contains 24 State Legislative Assembly constituencies, which form five constituencies of the Lok Sabha (the lower house of the Parliament of India).

Utility services
The HMWSSB regulates rainwater harvesting, sewerage services and water supply. In 2005, the HMWSSB started operating a water supply pipeline from Nagarjuna Sagar Dam to meet increasing demand. The Telangana Southern Power Distribution Company Limited (TSPDCL) manages electricity supply. The city has 15 fire stations, operated by the Telangana State Disaster and Fire Response Department. The government-owned India Post has five head post offices and many sub-post offices in Hyderabad, which are complemented by private courier services.

Pollution control
Hyderabad produces around 4,500 tonnes of solid waste daily, which is transported from collection units in Imlibun, Yousufguda and Lower Tank Bund to the dumpsite in Jawaharnagar. Disposal is managed by the Integrated Solid Waste Management project, which was started by the GHMC in 2010. Rapid urbanisation and increased economic activity have led to increased industrial waste and air, noise and water pollution, which is regulated by the Telangana Pollution Control Board (TPCB). The contribution of different sources to air pollution in 2006 was: 20–50% from vehicles, 40–70% from a combination of vehicle discharge and road dust, 10–30% from industrial discharges and 3–10% from the burning of household rubbish. Deaths resulting from atmospheric particulate matter are estimated at 1,700–3,000 each year. The city's "VIP areas", the Assembly building, Secretariat, and Telangana chief minister's office, have particularly poor air quality, suffering from high levels of PM2.5. Ground water around Hyderabad, which has a hardness of up to 1000 ppm, around three times higher than is desirable, is the main source of drinking water, but the increasing population and consequent increase in demand have led to a decline in not only ground water but also river and lake levels.
This shortage is further exacerbated by inadequately treated effluent discharged from industrial treatment plants polluting the water sources of the city. Healthcare The Commissionerate of Health and Family Welfare is responsible for planning, implementation and monitoring of all facilities related to health and preventive services. –11, the city had 50 government hospitals, 300 private and charity hospitals and 194 nursing homes providing around 12,000 hospital beds, fewer than half the required 25,000. For every 10,000 people in the city, there are 17.6 hospital beds, 9 specialist doctors, 14 nurses and 6 physicians. The city has about 4,000 individual clinics. Private clinics are preferred by many residents because of the distance to, poor quality of care at and long waiting times in government facilities, despite the high proportion of the city's residents being covered by government health insurance: 24% according to a National Family Health Survey in 2005. , many new private hospitals of various sizes were opened or being built. Hyderabad has outpatient and inpatient facilities that use Unani, homoeopathic and Ayurvedic treatments. In the 2005 National Family Health Survey, it was reported that the city's total fertility rate is 1.8, which is below the replacement rate. Only 61% of children had been provided with all basic vaccines (BCG, measles and full courses of polio and DPT), fewer than in all other surveyed cities except Meerut. The infant mortality rate was 35 per 1,000 live births, and the mortality rate for children under five was 41 per 1,000 live births. The survey also reported that a third of women and a quarter of men are overweight or obese, 49% of children below 5 years are anaemic, and up to 20% of children are underweight, while more than 2% of women and 3% of men suffer from diabetes. Demographics When the GHMC was created in 2007, the area occupied by the municipality increased from to . Consequently, the population increased by 87%, from 3,637,483 census to 6,809,970 census, 24% of which are migrants from elsewhere in India, making Hyderabad the nation's fourth most populous city. , the population density is and the Hyderabad urban agglomeration had a population of 7,749,334 making it the sixth most populous urban agglomeration in the country. census, there are 3,500,802 male and 3,309,168 female citizens—a sex ratio of 945 females per 1000 males, higher than the national average of 926 per 1000. Among children aged years, 373,794 are boys and 352,022 are girls—a ratio of 942 per 1000. Literacy stands at 83% (male 86%; female 80%), higher than the national average of 74.04%. The socio-economic strata consist of 20% upper class, 50% middle class and 30% working class. Ethnicity Referred to as "Hyderabadi", the residents of Hyderabad are predominantly Telugu and Urdu speaking people, with minority Bengali, Sindhi, Kannada, Memon, Nawayathi, Malayalam, Marathi, Gujarati, Marwari, Odia, Punjabi, Tamil and Uttar Pradeshi communities. Hyderabadi Muslims are a unique community who owe much of their history, language, cuisine, and culture to Hyderabad, and the various dynasties who previously ruled. Hadhrami Arabs, African Arabs, Armenians, Abyssinians, Iranians, Pathans and Turkish people are also present; these communities, of which the Hadhrami Arabs are the largest, declined after Hyderabad State became part of the Indian Union, as they lost the patronage of the Asaf Jahi Nizams. Religion Hindus are in the majority. 
Muslims form a very large minority, and are present throughout the city and predominate in and around old Hyderabad. There are also Christian, Sikh, Jain, Buddhist and Parsi communities and iconic churches, mosques and temples. In the 2011 census, the religious make-up of Greater Hyderabad was: Hindus (64.9%), Muslims (30.1%), Christians (2.8%), Jains (0.3%), Sikhs (0.3%) and Buddhists (0.1%); 1.5% did not state any religion.

Language
Telugu and Urdu are both official languages of the city, and most Hyderabadis are bilingual. The Telugu dialect spoken in Hyderabad is called Telangana Mandalika, and the Urdu spoken is called Deccani. English is a secondary official language, used widely in business and administration, and it is an important medium of instruction in education and publications. A significant minority speak other languages, including Hindi, Bengali, Kannada, Marathi, Punjabi, Marwari, Odia and Tamil.

Slums
In the greater metropolitan area, 13% of the population live below the poverty line. According to a 2012 report submitted by the GHMC to the World Bank, Hyderabad has 1,476 slums with a total population of 1.7 million, of whom 66% live in 985 slums in the "core" of the city (the part that formed Hyderabad before the April 2007 expansion) and the remaining 34% live in 491 suburban tenements. About 22% of the slum-dwelling households had migrated from different parts of India in the last decade of the 20th century, and 63% claimed to have lived in the slums for more than 10 years. A third of the slums have basic service connections, and the remainder depend on general public services provided by the government. There are 405 government schools, 267 government-aided schools, 175 private schools and 528 community halls in the slum areas. According to a 2008 survey by the Centre for Good Governance, 87.6% of the slum-dwelling households are nuclear families, 18% are very poor, 73% live below the poverty line recognised by the Andhra Pradesh Government, 27% of the chief wage earners (CWE) are casual labour and 38% of the CWE are illiterate. About 3.7% of the slum children aged 5–14 do not go to school and 3.2% work as child labour, of whom 64% are boys and 36% are girls. The largest employers of child labour are street shops and construction sites. Among the working children, 35% are engaged in hazardous jobs.

Cityscape

Neighbourhoods
The historic city established by Muhammad Quli Qutb Shah on the southern side of the Musi River forms the heritage region of Hyderabad, called the Purana Shahar (Old City), while the "New City" encompasses the urbanised area on the northern banks. The two are connected by many bridges across the river, the oldest of which is Purana Pul ("old bridge"), built in 1578 AD. Hyderabad is twinned with neighbouring Secunderabad, to which it is connected by Hussain Sagar.

Many historic and heritage sites lie in south central Hyderabad, such as the Charminar, Mecca Masjid, Salar Jung Museum, Nizam Museum, Telangana High Court, Falaknuma Palace, Chowmahalla Palace and the traditional retail corridor comprising the Pearl Market, Laad Bazaar and Madina Circle.
North of the river are hospitals, colleges, major railway stations and business areas such as Begum Bazaar, Koti, Abids, Sultan Bazar and Moazzam Jahi Market, along with administrative and recreational establishments such as the Reserve Bank of India, the Telangana Secretariat, the India Government Mint, the Telangana Legislature, the Public Gardens, the Nizam Club, the Ravindra Bharathi, the State Museum, the Birla Temple and the Birla Planetarium. North of central Hyderabad lie Hussain Sagar, Tank Bund Road, Rani Gunj and the Secunderabad railway station. Most of the city's parks and recreational centres, such as Sanjeevaiah Park, Indira Park, Lumbini Park, NTR Gardens, the Buddha statue and Tankbund Park, are located here. In the northwest part of the city there are upscale residential and commercial areas such as Banjara Hills, Jubilee Hills, Begumpet, Khairtabad, Tolichowki, Jagannath Temple and Miyapur. The northern end contains industrial areas such as Kukatpally, Sanathnagar, Moosapet, Balanagar, Patancheru and Chanda Nagar. The northeast end is dotted with residential areas such as Malkajgiri, Neredmet, A. S. Rao Nagar and Uppal. In the eastern part of the city lie many defence research centres and Ramoji Film City. The "Cyberabad" area in the southwest and west of the city, consisting of Madhapur and Gachibowli, has grown rapidly since the 1990s. It is home to information technology and bio-pharmaceutical companies and to landmarks such as Hyderabad Airport, Osman Sagar, Himayath Sagar, Kasu Brahmananda Reddy National Park and Durgam Cheruvu.

Landmarks
Heritage buildings constructed during the Qutb Shahi and Nizam eras showcase Indo-Islamic architecture influenced by Medieval, Mughal and European styles. After the 1908 flooding of the Musi River, the city was expanded and civic monuments constructed, particularly during the rule of Mir Osman Ali Khan (the seventh Nizam), whose patronage of architecture led to him being referred to as the maker of modern Hyderabad. In 2012, the government of India declared Hyderabad the first "Best heritage city of India".

Qutb Shahi architecture of the 16th and early 17th centuries followed classical Persian architecture featuring domes and colossal arches. The oldest surviving Qutb Shahi structure in Hyderabad is the ruins of the Golconda Fort, built in the 16th century. Most of the historical bazaars that still exist were constructed on the street north of Charminar towards the fort. The Charminar has become an icon of the city; located in the centre of old Hyderabad, it is a square structure with four grand arches, each facing a road, and a tall minaret at each corner. The Charminar, Golconda Fort and the Qutb Shahi tombs are considered to be monuments of national importance in India; in 2010 the Indian government proposed that the sites be listed for UNESCO World Heritage status.

Among the oldest surviving examples of Nizam architecture in Hyderabad is the Chowmahalla Palace, which was the seat of royal power. It showcases a diverse array of architectural styles, from the Baroque Harem to its Neoclassical royal court. The other palaces include Falaknuma Palace (inspired by the style of Andrea Palladio), Purani Haveli, King Kothi Palace and Bella Vista Palace, all of which were built at the peak of Nizam rule in the 19th century. During Mir Osman Ali Khan's rule, European styles, along with Indo-Islamic, became prominent.
These styles are reflected in the Indo-Saracenic style of architecture seen in many civic monuments such as the Hyderabad High Court, Osmania Hospital, City College and the Kacheguda railway station, all designed by Vincent Esch. Other landmark structures of the city constructed during his reign are the State Central Library, the Telangana Legislature, the State Archaeology Museum, Jubilee Hall, and Hyderabad railway station. Other landmarks of note are Paigah Palace, Asman Garh Palace, Basheer Bagh Palace, Errum Manzil and the Spanish Mosque, all constructed by the Paigah family. A 216-foot statue of Ramanuja stands on the city's outskirts.

Economy
Recent estimates of the economy of Hyderabad's metropolitan area have ranged from US$40 billion to US$74 billion (PPP GDP), and have ranked it either the fifth- or sixth-most productive metro area of India. Hyderabad is the largest contributor to the gross domestic product (GDP), tax and other revenues of Telangana, and the sixth largest deposit centre and fourth largest credit centre nationwide, as ranked by the Reserve Bank of India (RBI) in June 2012. The largest employers in the city were the state government (113,098 employees) and the central government (85,155). According to a 2005 survey, 77% of males and 19% of females in the city were employed. The service industry remains dominant, and 90% of the employed workforce is engaged in this sector.

Hyderabad's role in the pearl trade has given it the name "City of Pearls", and up until the 18th century the city was the only global trading centre for the diamonds known as Golconda diamonds. Industrialisation began under the Nizams in the late 19th century, helped by railway expansion that connected the city with major ports. From the 1950s to the 1970s, Indian enterprises such as Bharat Heavy Electricals Limited (BHEL), Nuclear Fuel Complex (NFC), National Mineral Development Corporation (NMDC), Bharat Electronics (BEL), Electronics Corporation of India Limited (ECIL), Defence Research and Development Organisation (DRDO), Hindustan Aeronautics Limited (HAL), Centre for Cellular and Molecular Biology (CCMB), Centre for DNA Fingerprinting and Diagnostics (CDFD), State Bank of Hyderabad (SBH) and Andhra Bank (AB) were established in the city.

The city is home to Hyderabad Securities, formerly known as the Hyderabad Stock Exchange (HSE), and houses the regional office of the Securities and Exchange Board of India (SEBI). In 2013, the Bombay Stock Exchange (BSE) facility in Hyderabad was forecast to provide operations and transactions services to BSE-Mumbai by the end of 2014. The growth of the financial services sector has helped Hyderabad evolve from a traditional manufacturing city to a cosmopolitan industrial service centre. Since the 1990s, the growth of information technology (IT), IT-enabled services (ITES), insurance and financial institutions has expanded the service sector, and these primary economic activities have boosted the ancillary sectors of trade and commerce, transport, storage, communication, real estate and retail.

IT exports from Hyderabad have stood at ₹128,807 crore (US$15 billion), and the city houses 1,500 IT and ITES companies that provide employment to 582,126 people. Hyderabad's commercial markets are divided into four sectors: central business districts, sub-central business centres, neighbourhood business centres and local business centres.
Many traditional and historic bazaars are located throughout the city; the most prominent among them, Laad Bazaar, is popular for selling a variety of traditional and cultural antique wares, along with gems and pearls. The establishment of Indian Drugs and Pharmaceuticals Limited (IDPL), a public sector undertaking, in 1961 was followed over the decades by many national and global companies opening manufacturing and research facilities in the city. , the city manufactured one third of India's bulk drugs and 16% of biotechnology products, contributing to its reputation as "India's pharmaceutical capital" and the "Genome Valley of India". Bharat Biotech, based in the city, developed Covaxin, a COVID-19 vaccine. Hyderabad is a global centre of information technology, for which it is known as Cyberabad (Cyber City). , it contributed 15% of India's and 98% of Andhra Pradesh's exports in the IT and ITES sectors, and 22% of NASSCOM's total membership was from the city. The development of HITEC City, a township with extensive technological infrastructure, prompted multinational companies to establish facilities in Hyderabad. The city is home to more than 1,300 IT and ITES firms that provide employment for 407,000 individuals; the global conglomerates include Microsoft, Apple, Amazon, Google, IBM, Yahoo!, Oracle Corporation, Dell, Facebook and Cisco, and major Indian firms include Tech Mahindra, Infosys, Tata Consultancy Services (TCS), Polaris, Cyient and Wipro. In 2009 the World Bank Group ranked the city as the second-best Indian city for doing business. The city and its suburbs contain the highest number of special economic zones of any Indian city. The automotive industry in Hyderabad is also emerging, making the city an automobile hub. Automobile companies including Hyundai, Hyderabad Allwyn, Praga Tools, HMT Bearings, Ordnance Factory Medak, Deccan Auto and Mahindra & Mahindra have units in the Hyderabad economic zone, and Fiat Chrysler Automobiles, Maruti Suzuki and Triton Energy have announced plans to invest in Hyderabad. Like the rest of India, Hyderabad has a large informal economy that employs 30% of the labour force. According to a survey published in 2007, it had 40,000–50,000 street vendors, and their numbers were increasing. Among the street vendors, 84% are male and 16% female, and four-fifths are "stationary vendors" operating from a fixed pitch, often with their own stall. Most are financed through personal savings; only 8% borrow from moneylenders. Earnings vary from vendor to vendor. Other unorganised economic sectors include dairy, poultry farming, brick manufacturing, casual labour and domestic help. Those involved in the informal economy constitute a major portion of the urban poor. Culture Hyderabad emerged as the foremost centre of culture in India with the decline of the Mughal Empire. After the fall of Delhi in 1857, the migration of performing artists to the city, particularly from the north and west of the Indian subcontinent, under the patronage of the Nizam, enriched the cultural milieu. This migration resulted in a mingling of North and South Indian languages, cultures and religions, which has since led to a co-existence of Hindu and Muslim traditions, for which the city has become noted. A further consequence of this north–south mix is that both Telugu and Urdu are official languages of Telangana. The mixing of religions has resulted in many festivals being celebrated in Hyderabad, such as Ganesh Chaturthi, Diwali and Bonalu in the Hindu tradition and Eid ul-Fitr and Eid al-Adha among Muslims.
Traditional Hyderabadi garb reveals a mix of Muslim and Hindu influences, with men wearing the sherwani and kurta–paijama and women wearing the khara dupatta and salwar kameez. Most Muslim women wear the burqa and hijab outdoors. In addition to traditional Hindu and Muslim garments, increasing exposure to western cultures has led to a rise in the wearing of western-style clothing among youths. Literature In the past, the Qutb Shahi rulers and Asaf Jahi Nizams attracted artists, architects and men of letters from different parts of the world through patronage. The resulting ethnic mix popularised cultural events such as mushairas (poetic symposia) and qawwali (devotional songs). The Qutb Shahi dynasty particularly encouraged the growth of Deccani Urdu literature, leading to works such as the Deccani Masnavi and Diwan poetry, which are among the earliest available manuscripts in Urdu. Lazzat Un Nisa, a book compiled in the 15th century at Qutb Shahi courts, contains erotic paintings with diagrams for secret medicines and stimulants in the eastern form of ancient sexual arts. The reign of the Asaf Jahi Nizams saw many literary reforms and the introduction of Urdu as a language of court, administration and education. In 1824, a collection of Urdu ghazal poetry named Gulzar-e-Mahlaqa, authored by Mah Laqa Bai, the first female Urdu poet to produce a Diwan, was published in Hyderabad. Hyderabad has continued these traditions in its annual Hyderabad Literary Festival, held since 2010, which showcases the city's literary and cultural creativity. Organisations engaged in the advancement of literature include the Sahitya Akademi, the Urdu Academy, the Telugu Academy, the National Council for Promotion of Urdu Language, the Comparative Literature Association of India, and the Andhra Saraswata Parishad. Literary development is further aided by state institutions such as the State Central Library, the largest public library in the state, which was established in 1891, and other major libraries including the Sri Krishna Devaraya Andhra Bhasha Nilayam, the British Library and the Sundarayya Vignana Kendram. Music and films South Indian music and dances such as the Kuchipudi and Bharatanatyam styles are popular in the Deccan region. As a result of their cultural policies, North Indian music and dance gained popularity during the rule of the Mughals and Nizams, and it was also during their reign that it became a tradition among the nobility to associate themselves with tawaif (courtesans). These courtesans were revered as the epitome of etiquette and culture, and were appointed to teach singing, poetry and classical dance to many children of the aristocracy. This gave rise to certain styles of court music, dance and poetry. Besides western and Indian popular music genres such as filmi music, the residents of Hyderabad play city-based marfa music, Dholak ke Geet (household songs based on local folklore), and qawwali, especially at weddings, festivals and other celebratory events. The state government organises the Golconda Music and Dance Festival, the Taramati Music Festival and the Premavathi Dance Festival to further encourage the development of music. Although the city is not particularly noted for theatre and drama, the state government promotes theatre with multiple programmes and festivals in such venues as the Ravindra Bharati, Shilpakala Vedika, Lalithakala Thoranam and Lamakaan.
Although not a purely music-oriented event, Numaish, a popular annual exhibition of local and national consumer products, does feature some musical performances. The city is home to the Telugu film industry, popularly known as Tollywood. In the 1970s, Deccani-language realist films by the globally acclaimed director Shyam Benegal started a movement of coming-of-age art films in India, which came to be known as parallel cinema. The Deccani film industry ("Dollywood") produces films in the local Hyderabadi dialect, which have gained regional popularity since 2005. The city has hosted international film festivals such as the International Children's Film Festival and the Hyderabad International Film Festival. In 2005, Guinness World Records declared Ramoji Film City to be the world's largest film studio. Art and handicrafts The region is well known for its Golconda and Hyderabad painting styles, which are branches of Deccan painting. Developed during the 16th century, the Golconda style is a native style blending foreign techniques and bears some similarity to the Vijayanagara paintings of neighbouring Mysore. A significant use of luminous gold and white colours is generally found in the Golconda style. The Hyderabad style originated in the 17th century under the Nizams. Highly influenced by Mughal painting, this style makes use of bright colours and mostly depicts regional landscapes, culture, costumes and jewellery. Although not a centre for handicrafts itself, the patronage of the arts by the Mughals and Nizams attracted artisans from the region to Hyderabad. Such crafts include: Wootz steel; filigree work; Bidriware, a metalwork handicraft from neighbouring Karnataka, which was popularised during the 18th century and has since been granted a Geographical Indication (GI) tag under the auspices of the WTO act; and Zari and Zardozi, embroidery works on textile that involve making elaborate designs using gold, silver and other metal threads. Chintz, a glazed calico textile, originated in Golconda in the 16th century. Another example of a handicraft drawn to Hyderabad is Kalamkari, a hand-painted or block-printed cotton textile that comes from cities in Andhra Pradesh. This craft is distinguished in having both a Hindu style, known as Srikalahasti, which is done entirely by hand, and an Islamic style, known as Machilipatnam, which uses both hand and block techniques. Examples of Hyderabad's arts and crafts are housed in various museums including the Salar Jung Museum (housing "one of the largest one-man collections in the world"), the Telangana State Archaeology Museum, the Nizam Museum, the City Museum and the Birla Science Museum. Cuisine Hyderabadi cuisine comprises a broad repertoire of rice, wheat and meat dishes and the skilled use of various spices. Hyderabad is listed by UNESCO as a Creative City of Gastronomy. The Hyderabadi biryani and Hyderabadi haleem, with their blend of Mughlai and Arab cuisines, carry the national Geographical Indication tag. Hyderabadi cuisine is influenced to some extent by French cuisine, but more by Arabic, Turkish, Iranian and native Telugu and Marathwada cuisines. Popular native dishes include nihari, chakna, baghara baingan and the desserts qubani ka meetha, double ka meetha and kaddu ki kheer (a sweet porridge made with sweet gourd). Media One of Hyderabad's earliest newspapers, The Deccan Times, was established in the 1780s.
Major Telugu dailies published in Hyderabad are Eenadu, Sakshi and Namasthe Telangana, while major English papers are The Times of India, The Hindu and Deccan Chronicle. The major Urdu papers include The Siasat Daily, The Munsif Daily and Etemaad. The Secunderabad Cantonment Board established the first radio station in Hyderabad State around 1919. Deccan Radio, the first public radio broadcast station in the city, began broadcasting on 3 February 1935, with FM broadcasting beginning in 2000. The available channels in Hyderabad include All India Radio, Radio Mirchi, Radio City, Red FM, Big FM and Fever FM. Television broadcasting in Hyderabad began in 1974 with the launch of Doordarshan, the government of India's public service broadcaster, which transmits two free-to-air terrestrial television channels and one satellite channel. Private satellite channels started in July 1992 with the launch of Star TV. Satellite TV channels are accessible via cable subscription, direct-broadcast satellite services or internet-based television. Hyderabad's first dial-up internet access became available in the early 1990s and was limited to software development companies. The first public internet access service began in 1995, with the first private sector internet service provider (ISP) starting operations in 1998. In 2015, high-speed public Wi-Fi was introduced in parts of the city. Education Public and private schools in Hyderabad are governed by the Central Board of Secondary Education and follow a "10+2+3" plan. About two-thirds of pupils attend privately run institutions. Languages of instruction include English, Hindi, Telugu and Urdu. Depending on the institution, students are required to sit the Secondary School Certificate or the Indian Certificate of Secondary Education examinations. After completing secondary education, students enroll in schools or junior colleges with a higher secondary facility. Admission to professional graduation colleges in Hyderabad, many of which are affiliated with either Jawaharlal Nehru Technological University Hyderabad (JNTUH) or Osmania University (OU), is through the Engineering, Agricultural and Medical Common Entrance Test (EAMCET). There are 13 universities in Hyderabad: two private universities, two deemed universities, six state universities and three central universities. The central universities are the University of Hyderabad (Hyderabad Central University, HCU), Maulana Azad National Urdu University and the English and Foreign Languages University. Osmania University, established in 1918, was the first university in Hyderabad and is India's second-most popular institution for international students. The Dr. B. R. Ambedkar Open University, established in 1982, is the first distance-learning open university in India. Hyderabad is home to a number of centres specialising in fields such as biomedical sciences, biotechnology and pharmaceuticals, including the National Institute of Pharmaceutical Education and Research (NIPER) and the National Institute of Nutrition (NIN). Hyderabad has five major medical schools (Osmania Medical College, Gandhi Medical College, Nizam's Institute of Medical Sciences, Deccan College of Medical Sciences and Shadan Institute of Medical Sciences) and many affiliated teaching hospitals. An All India Institute of Medical Sciences has been sanctioned on the outskirts of Hyderabad. The Government Nizamia Tibbi College is a college of Unani medicine.
Hyderabad is also the headquarters of the Indian Heart Association, a non-profit foundation for cardiovascular education. Institutes in Hyderabad include the National Institute of Rural Development, NALSAR University of Law, Hyderabad (NLU), the Indian School of Business, the National Geophysical Research Institute, the Institute of Public Enterprise, the Administrative Staff College of India and the Sardar Vallabhbhai Patel National Police Academy. Technical and engineering schools include the International Institute of Information Technology, Hyderabad (IIITH), Birla Institute of Technology and Science, Pilani – Hyderabad (BITS Hyderabad), Gandhi Institute of Technology and Management Hyderabad Campus (GITAM Hyderabad Campus), and the Indian Institute of Technology, Hyderabad (IIT-H), as well as agricultural institutes such as the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) and the Acharya N. G. Ranga Agricultural University. Hyderabad also has schools of fashion design, including Raffles Millennium International, NIFT Hyderabad and Wigan and Leigh College. The National Institute of Design, Hyderabad (NID-H) was scheduled to offer undergraduate and postgraduate courses from 2015. Sports At the professional level, the city has hosted national and international sports events such as the 2002 National Games of India, the 2003 Afro-Asian Games, the 2004 AP Tourism Hyderabad Open women's tennis tournament, the 2007 Military World Games, the 2009 World Badminton Championships and the 2009 IBSF World Snooker Championship. The city has a number of venues suitable for professional competition, such as the Swarnandhra Pradesh Sports Complex for field hockey, the G. M. C. Balayogi Stadium in Gachibowli for athletics and football, and, for cricket, the Lal Bahadur Shastri Stadium and the Rajiv Gandhi International Cricket Stadium, home ground of the Hyderabad Cricket Association. Hyderabad has hosted many international cricket matches, including matches in the 1987 and 1996 ICC Cricket World Cups. The Hyderabad cricket team represents the city in the Ranji Trophy, a first-class cricket tournament among India's states and cities. Hyderabad is home to the Indian Premier League franchise Sunrisers Hyderabad, champions of the 2016 Indian Premier League. A previous franchise was the Deccan Chargers, which won the 2009 Indian Premier League held in South Africa. The city's professional football club, Hyderabad FC, competes in the Indian Super League. During British rule, Secunderabad became a well-known sporting centre, and many race courses, parade grounds and polo fields were built. Many elite clubs formed by the Nizams and the British, such as the Secunderabad Club, the Nizam Club and the Hyderabad Race Club (known for its horse racing, especially the annual Deccan Derby), still exist. In more recent times, motorsport has become popular, with the Andhra Pradesh Motor Sports Club organising events such as the Deccan Mile Drag, TSD Rallies and 4x4 off-road rallying. International-level sportspeople from Hyderabad include: cricketers Ghulam Ahmed, M. L. Jaisimha, Mohammed Azharuddin, V. V. S. Laxman, Pragyan Ojha, Venkatapathy Raju, Shivlal Yadav, Arshad Ayub, Syed Abid Ali, Mithali Raj and Noel David; football players Syed Abdul Rahim, Syed Nayeemuddin and Shabbir Ali; tennis player Sania Mirza; badminton players S. M. Arif, Pullela Gopichand, Saina Nehwal, P. V.
Sindhu, Jwala Gutta and Chetan Anand; hockey players Syed Mohammad Hadi and Mukesh Kumar; rifle shooters Gagan Narang and Asher Noria; and bodybuilder Mir Mohtesham Ali Khan. Transport , the most commonly used forms of medium-distance transport in Hyderabad include government-owned services such as light railways and buses, as well as privately operated taxis and auto rickshaws. These altogether serve 3.5 million passengers daily. Bus services operate from the Mahatma Gandhi Bus Station in the city centre, with a fleet of 3,800 buses serving 3.3 million passengers. The Hyderabad Metro, a light-rail rapid transit system, was inaugurated in November 2017. A three-line network with 57 stations, it is the second-largest metro rail network in India. Hyderabad's Multi-Modal Transport System (MMTS) is a three-line suburban rail service with 121 services carrying 180,000 passengers daily. Complementing these government services are minibus routes operated by Setwin (Society for Employment Promotion & Training in Twin Cities). Intercity rail services operate from Hyderabad; the main and largest station is Secunderabad railway station, which serves as Indian Railways' South Central Railway zone headquarters and as a hub for both buses and MMTS light rail services connecting Secunderabad and Hyderabad. Several other major railway stations also serve the city. , there are over 5.3 million vehicles operating in the city, of which 4.3 million are two-wheelers and 1.04 million four-wheelers. The large number of vehicles, coupled with relatively low road coverage (roads occupy only 9.5% of the total city area), has led to widespread traffic congestion, especially since 80% of passengers and 60% of freight are transported by road. The Inner Ring Road, the Outer Ring Road, the Hyderabad Elevated Expressway (the longest flyover in India) and various interchanges, overpasses and underpasses were built to ease congestion. Maximum speed limits within the city differ by vehicle class, with separate limits for two-wheelers and cars, for auto rickshaws, and for light commercial vehicles and buses. Hyderabad sits at the junction of four National Highways linking it to six other states: NH-44 runs from Srinagar, Jammu and Kashmir, in the north to Kanyakumari, Tamil Nadu, in the south; NH-65 runs east–west from Machilipatnam, Andhra Pradesh, connecting Hyderabad and Suryapet with Pune, Maharashtra; NH-163 links Hyderabad and Bhopalpatnam, Chhattisgarh; and NH-765 links Hyderabad to Srisailam, Andhra Pradesh. Five state highways, including SH-1, which links Hyderabad to Ramagundam, and SH-2, SH-4 and SH-6, either start from or pass through Hyderabad. Air traffic was previously handled via Begumpet Airport, established in 1930, but it was replaced in 2008 by Rajiv Gandhi International Airport (RGIA), capable of handling 25 million passengers and 150,000 metric tonnes of cargo per annum. In 2020, Airports Council International, an autonomous body representing the world's airports, judged RGIA the Best Airport in Environment and Ambience and the Best Airport by Size and Region in the passenger category.
See also List of flyovers and under-passes in Hyderabad List of people from Hyderabad List of tallest buildings in Hyderabad List of tourist attractions in Hyderabad Notes References Bibliography Further reading External links A guide to Hyderabad Cities and towns in Hyderabad district, India Cities in Telangana Indian capital cities High-technology business districts in India Metropolitan cities in India Historic districts Capitals of former nations Former national capitals Former capital cities in India Populated places established in 1591 1591 establishments in Asia 1590s establishments in India
43047532
https://en.wikipedia.org/wiki/Technology%20support%20net
Technology support net
A Technology Support Net (TSN) comprises the physical, energy, information, legal and cultural structures required to support the development of a technology core. In order to function effectively, the technology core (hardware, software and brainware) needs to be embedded in its support structure (TSN). Changes in the core then trigger requisite changes in the TSN. A core and its TSN co-evolve symbiotically, mutually strengthening each other. At a certain stage, the TSN starts dictating the acceptable changes in the core and ultimately becomes an effective barrier to further innovation. At such a point, the time for a new, disruptive technology emerges. The entire structure of the technology core and its support network of requisite flows is sketched in Figure 1. It is clear that the architecture of the Technology Support Net functions as the main determinant of technology use, change and the rate of innovation. Milan Zeleny, in his book Human Systems Management, laid down the foundation of modern technology management, innovation and change. A technology support network (TSN) is the necessary condition for continued technology core innovation. Without a matching support network, any new technology has little chance of succeeding. The infrastructure of a technology support net, when fully established, can present significant barriers to further innovation. The process of innovation is no longer open and autonomous, but often technically and politically subservient to the "holders and owners" of the support net. Technology, through its requisite support net, limits and predetermines the flows and types of innovation. Nowadays the processes of invention and innovation are limited not only by lack of knowledge or overly narrow business criteria, but also by the defenders of the existing support network (including infrastructure). The focus is not so much on hardware (which is becoming commoditized), nor on software or brainware, but on the boundaries and architecture of the support net itself. Components of technology Any technology can be divided into four key components: Hardware The physical structure or logical layout, the plant or equipment of a machine or contrivance. This is the means to carry out the required tasks of transformation to achieve a purpose or goals. Hardware therefore refers not only to the particular physical structure of components, but also to their logical layout. Software The set of rules, guidelines, and algorithms necessary for using the hardware (programs, covenants, standards, rules of usage) to carry out the tasks. This is the know-how: how to carry out tasks to achieve purposes or goals. Brainware The purpose (objectives and goals), reason and justification for using or deploying the hardware/software in a particular way. This is the know-what and the know-why of technology: the determination of what to use or deploy, when, where and why. These three components form the technology core. The components of the technology core are co-determinant; their relations are circular and mutually enhancing. The interdependence among the three components is well illustrated by the example of the automobile as a technology: A car consists of its own physical structure and logical layout, its own hardware. Its software consists of the operating rules (push, turn, press, and so on) described in manuals or acquired through learning. The brainware is supplied by the driver and includes decisions about where to go, when, how fast, which way and why to use a car at all.
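The four-component decomposition can be made concrete in code. The following C sketch is illustrative only; the type name, fields and example strings are hypothetical and not drawn from Zeleny's text. It models a technology as a core of hardware, software and brainware embedded in a support net of requisite flows, using the car example above:

#include <stdio.h>

/* A minimal sketch (hypothetical names): a technology as a core of
 * hardware, software and brainware, embedded in a support net of
 * requisite flows that are initiated and maintained by people. */
typedef struct {
    const char *hardware;       /* physical structure or logical layout      */
    const char *software;       /* rules and know-how for using the hardware */
    const char *brainware;      /* purpose: what to deploy, when, where, why */
    const char *support_net[4]; /* requisite flows enabling the core         */
} Technology;

int main(void)
{
    Technology car = {
        .hardware  = "vehicle: engine, chassis, controls",
        .software  = "operating rules: push, turn, press",
        .brainware = "driver's decisions: where, when, how fast, why",
        .support_net = { "roads and bridges", "fuel and maintenance services",
                         "traffic laws and enforcement", "driving culture" }
    };

    /* The core is inseparable from its support net: print both. */
    printf("core: %s | %s | %s\n", car.hardware, car.software, car.brainware);
    for (int i = 0; i < 4; i++)
        printf("TSN flow %d: %s\n", i, car.support_net[i]);
    return 0;
}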
Computers, satellites or the Internet can be defined in terms of these three dimensions. Any information technology or system should also be clearly identifiable through its hardware, software and brainware. Technology Support Net A TSN is a network of flows: the materials, information, energy, skills, laws and rules of conduct that circulate to, through and from the network in order to enable the proper functioning of the technology core and the achievement of given purposes or goals. Ultimately, all the requisite network flows are initiated, maintained and consumed by the people participating in the use, and the support of the use, of a given technology. They might similarly and simultaneously participate in supporting many different technologies through many different TSNs. Sameer Kumar wrote that TSNs can be intermeshed into larger hypernetworks, thereby revealing important complementary, competing and collaborating technologies. The relationship between the technology core and its requisite TSN is one of mutual enhancement and codetermination. Every unique technology core gives rise to a specific and requisite TSN and thus to a specific set of relationships among people. Ultimately, the TSN can be traced to and translated into the relationships among human participants: the initiators, providers and maintainers of the requisite flows in cooperative social settings. In this sense, every technology is a form of social relationship brought forth from the background environment. The following example describes the various social relationships among people created by a technology support net. In the case of automobile technology, the TSN consists of an infrastructure of roads, bridges, facilities and traffic signals, but also of maintenance and emergency services, rules and laws of conduct, institutions for their enforcement, the style and culture of driving behavior, and so on. A large number of people have to be organized in a specific and requisite pattern in order to enable cars to function as a technology. Moreover, technology and its four components can also be defined from the vantage point of a particular user or observer, not in a context-free or absolute sense. In other words, roads, bridges and traffic signals can be technologies themselves, with their own hardware, software, brainware and support nets. For example, traffic lights are part of the TSN of an automobile, but their own hardware is driven by their own software (a computer-controlled switching program or schedule) and brainware (the purposes of safety, volume and flow control, and interaction with pedestrians). This technology core has its own support net of electricity, signal interpretations and car traffic. So the traffic light is a technology of its own. Similarly, a piece of software from some technology can itself be viewed as a technology for achieving specific business purposes or goals, with its own hardware, software, brainware and TSN. Functions of technology At its most fundamental, technology is a tool used in transforming inputs into outputs (products) or, more generally, towards achieving purposes or goals. For example, the inputs can be materials, information, skills or services. The product can be goods, services or information. Such a tool can be both physical (a machine, a computer) and logical (a methodology or technique). Technology as a tool does not have to be made of steel, wood or silica; it can also be a recipe, process or algorithm.
The nature of technology has changed over the course of human history and continues to change in the global era: it is becoming more integrative and more knowledge-oriented, it is available globally, and it includes logical schemes, procedures and software, not just tools and machinery. It tends to complement or extend the user, not to reduce the user to a simple appendage. In order to utilize technology efficiently and effectively, it should be viewed as a form of social relationship, with hardware and software being enabled by brainware and the requisite support network. Joseph Stiglitz, recipient of the Nobel Memorial Prize in Economic Sciences (2001), took a similar view of technology transfer. Stiglitz's emphasis is on the insufficiency of information (or codified "knowledge") and of the hardware-software mindset. Information can always be "downloaded"; knowledge cannot. Knowledge has to be produced within the local circumstances and structural support. In recent times, technology refers to a package of hardware, software and brainware and, primarily, the requisite support net, which fixates, limits and predetermines the flows and types of innovation. In many modern technologies, the hardware is becoming a commodity, the least decisive component, a mere physical casing for the real power of effective knowledge contents. The enabling technology support network is often becoming the most important component of a technology. Charu Chandra described how technology organizations aim at achieving total system productivity, not task productivity. In order to achieve this, workflow within and between departments must be integrated. This is achieved by organizing a project as interconnected tasks and activities that use inputs to produce outputs according to common objectives and goals. One of the ways to do this is by managing the technology's effects on the support net of requisite relationships. In the near future it will not be the number of computers per capita, but the density and capacity of their network interconnectedness, that will determine their effective usage. References Networks
61619
https://en.wikipedia.org/wiki/Ghostscript
Ghostscript
Ghostscript is a suite of software based on an interpreter for Adobe Systems' PostScript and Portable Document Format (PDF) page description languages. Its main purposes are the rasterization or rendering of such page description language files, for the display or printing of document pages, and conversion between PostScript and PDF files. Features Ghostscript can be used as a raster image processor (RIP) for raster computer printers (for instance, as an input filter of a line printer daemon) or as the RIP engine behind PostScript and PDF viewers. It can also be used as a file format converter, such as a PostScript to PDF converter; the ps2pdf conversion program comes with the Ghostscript distribution. Ghostscript can also serve as the back end of a PDF to raster image (PNG, TIFF, JPEG, etc.) converter; this is often combined with a PostScript printer driver in "virtual printer" PDF creators. As it takes the form of a language interpreter, Ghostscript can also be used as a general-purpose programming environment. Ghostscript has been ported to many operating systems, including Unix-like systems, classic Mac OS, OpenVMS, Microsoft Windows, Plan 9, MS-DOS, FreeDOS, OS/2, ArcaOS, Atari TOS, RISC OS and AmigaOS. History Ghostscript was originally written by L. Peter Deutsch for the GNU Project, and released under the GNU General Public License in 1986. Later, Deutsch formed Aladdin Enterprises to dual-license Ghostscript under a proprietary license as well, with its own development fork: "Aladdin Ghostscript" under the Aladdin Free Public License (which, despite the name, is not a free software license, as it forbids commercial distribution) and "GNU Ghostscript" distributed under the GNU General Public License. With version 8.54 in 2006, the two development branches were merged again, and dual-licensed releases were still provided. Ghostscript is currently owned by Artifex Software and maintained by Artifex Software employees and the worldwide user community. According to Artifex, as of version 9.03, the commercial version of Ghostscript can no longer be freely distributed for commercial purposes without purchasing a license, though the (A)GPL variant allows commercial distribution provided all code using it is released under the (A)GPL. In February 2013, with version 9.07, Ghostscript changed its license from GPLv3 to the GNU AGPL, which raised license-compatibility questions, for example from Debian. Front ends Graphical user interfaces (GUIs) built on Ghostscript display PostScript or PDF files on screen and allow users to scroll, page forward and backward, zoom, and print pages. Such GUIs include Evince, IrfanView, Inkscape and PDF24 Creator. Virtual printers can also create PDF files. Free fonts There are several sets of free fonts supplied for Ghostscript, intended to be metrically compatible with common fonts attached to the PostScript standard. These include: 35 basic PostScript fonts contributed by URW++ Design and Development Incorporated, of Hamburg, Germany, in 1996 under the GPL and AFPL. It is a full set of fonts similar to the classic Adobe set: Bookman L (Bookman), Century Schoolbook L (New Century Schoolbook), Chancery L (Zapf Chancery), Dingbats (Zapf Dingbats), Gothic L (Avant Garde), Nimbus Mono L (Courier), Nimbus Roman No9 L (Times), Nimbus Sans L (Helvetica), Palladio L (Palatino), Standard Symbols L (Symbol), in Type1, TrueType, and OpenType formats.
The GhostPDL package (including Ghostscript as well as companion implementations of HP PCL and Microsoft XPS) includes additional fonts under the AFPL, which bars commercial use. It includes URW++ versions of Garamond (Garamond No. 8), Optima (URW Classico), Arial (A030), Antique Olive and Univers (U001), as well as Clarendon, Coronet, Letter Gothic, URW Mauritius and a modified form of Albertus known as A028. Combined with the base set, they represent a little more than half of the standard PostScript 3 font complement. A miscellaneous set including Cyrillic, kana, and fonts derived from the free Hershey fonts, with improvements by Thomas Wolff (such as added accented characters). The Ghostscript fonts were developed in the PostScript Type 1 format but have been converted into the TrueType format, usable by most current software, and are often used within the open-source community. The Garamond font has additionally been improved upon. URW's core 35 fonts have subsequently been incorporated into GNU FreeFont and TeX Gyre. See also Common Unix Printing System Foomatic PostScript Printer Description Printer driver pstoedit References External links Ghostscript version 8.56 and earlier Ghostscript/GhostPDL binaries download page at GitHub (cross-platform, this site is actively maintained) GPL Ghostscript binaries download page at SourceForge (cross-platform, this site is no longer actively maintained) Computer-related introductions in 1988 Cross-platform software Digital press Free PDF readers PostScript Software using the GNU AGPL license
7336577
https://en.wikipedia.org/wiki/GNATS
GNATS
GNATS is the GNU project's issue-tracking software. GNATS is a set of tools for tracking bugs reported by users to a central site. It allows problem report management and communication with users via various means. GNATS stores all the information about problem reports in its databases and provides tools for querying, editing, and maintaining the databases. GNATS is free software, distributed under the terms of the GNU General Public License. Usage GNATS is used by GNU packages and NetBSD. The Apache Software Foundation used the software from 1996 to 2002, and the Mutt project until 2006. It is also used, or was used in the past, by the FreeBSD Project, OpenBSD, Juniper Networks, the Nordic Optical Telescope, CERN, the Green Bank Telescope, NRAO AIPS++, the European Software Institute, and the BaBar Project at SLAC. In early June 2014, FreeBSD announced concrete plans to migrate from GNATS to Bugzilla, claiming that Bugzilla supports finer granularity for categories and keywords. Furthermore, the announcement stated that GNATS was missing many features that people expect from a modern bug tracker. It has been described as having been "the cornerstone" of free software bug-tracking systems. History GNATS was written by Heinz G. Seidl of Cygnus Solutions, inspired by BSD Unix's sendbug and filebug programs, and had its first stable release in 1992. Initially, its only interface was via email, but multiple web and graphical interfaces were later added. During the 1990s, other Cygnus employees rewrote it, and a further major rewrite was done for release 4, with other features contributed by users. Although GNATS is still in use, development has slowed since the 4.1 release in 2005. Several changes lingered in the developers' source code repository, and a 4.2 release was discussed in 2012, but no official release was made until further development led to release 4.2.0 on 28 February 2015. Features Built on a client-server architecture, GNATS works with many interfaces (described below), including email, command-line, and web interfaces. All GNATS databases and configuration can be stored in plain text files, which contributes to the modularity of GNATS. Categorisation and recategorisation of bug reports is particularly simple. Interfaces Four official interfaces exist for GNATS: Gnatsweb: a web interface to query and open tickets, with GNATS running as a background process (a "daemon"). Emacs GNATS mode: an extension (a "major mode") for GNU Emacs and XEmacs allowing direct access to GNATS issue trackers. send-pr / edit-pr / query-pr: the traditional command-line interface to create, edit, and query Problem Reports. TkGnats: a cross-platform application, written in the Tcl/Tk language. Apart from these, custom interfaces can be developed, such as OpenBSD's sendbug interface, which collects system information and submits Problem Reports via email. See also Comparison of issue-tracking systems References External links Dan Kegel's GNATS-related links - many are broken but available via archive.org Free project management software GNU Project software
17901978
https://en.wikipedia.org/wiki/Grand%20Central%20Dispatch
Grand Central Dispatch
Grand Central Dispatch (GCD or libdispatch) is a technology developed by Apple Inc. to optimize application support for systems with multi-core processors and other symmetric multiprocessing systems. It is an implementation of task parallelism based on the thread pool pattern. The fundamental idea is to move the management of the thread pool out of the hands of the developer and closer to the operating system. The developer injects "work packages" into the pool, oblivious to the pool's architecture. This model improves simplicity, portability and performance. GCD was first released with Mac OS X 10.6, and is also available with iOS 4 and above. The name "Grand Central Dispatch" is a reference to Grand Central Terminal. The source code for the library that provides the implementation of GCD's services, libdispatch, was released by Apple under the Apache License on September 10, 2009. It has been ported to FreeBSD 8.1+, MidnightBSD 0.3+, Linux, and Solaris. Attempts in 2011 to make libdispatch work on Windows were not merged into upstream. Apple has its own port of libdispatch.dll for Windows, shipped with Safari and iTunes, but no SDK is provided. Around 2017, the original libdispatch repository hosted by Nick Hutchinson was deprecated in favor of a version that is part of the Swift core library, created in June 2016. The new version supports more platforms, notably including Windows. Design GCD works by allowing specific tasks in a program that can be run in parallel to be queued up for execution and, depending on the availability of processing resources, scheduling them to execute on any of the available processor cores (referred to as "routing" by Apple). A task can be expressed either as a function or as a "block". Blocks are an extension to the syntax of the C, C++, and Objective-C programming languages that encapsulate code and data into a single object in a way similar to a closure. GCD can still be used in environments where blocks are not available. Grand Central Dispatch still uses threads at the low level but abstracts them away from the programmer, who will not need to be concerned with as many details. Tasks in GCD are lightweight to create and queue; Apple states that 15 instructions are required to queue up a work unit in GCD, while creating a traditional thread could easily require several hundred instructions. A task in Grand Central Dispatch can be used either to create a work item that is placed in a queue or to assign it to an event source. If a task is assigned to an event source, then a work unit is made from the block or function when the event triggers, and the work unit is placed in an appropriate queue. This is described by Apple as more efficient than creating a thread whose sole purpose is to wait on a single event triggering. Features The dispatch framework declares several data types and functions to create and manipulate them: Dispatch Queues are objects that maintain a queue of tasks, either anonymous code blocks or functions, and execute these tasks in their turn. The library automatically creates several queues with different priority levels that execute several tasks concurrently, selecting the optimal number of tasks to run based on the operating environment. A client to the library may also create any number of serial queues, which execute tasks in the order they are submitted, one at a time.
Because a serial queue can only run one task at a time, each task submitted to the queue runs exclusively with regard to the other tasks on the queue, and thus a serial queue can be used instead of a lock on a contended resource. Dispatch Sources are objects that allow the client to register blocks or functions to execute asynchronously upon system events, such as a socket or file descriptor being ready for reading or writing, or a POSIX signal. Dispatch Groups are objects that allow several tasks to be grouped for later joining. Tasks can be added to a queue as a member of a group, and then the client can use the group object to wait until all of the tasks in that group have completed. Dispatch Semaphores are objects that allow a client to permit only a certain number of tasks to execute concurrently. Libdispatch comes with its own object model, OS Object, that is partially compatible with the Objective-C model. As a result, its objects can be bridged toll-free to ObjC objects. Examples Two examples that demonstrate the use of Grand Central Dispatch can be found in John Siracusa's Ars Technica Snow Leopard review. Initially, a document-based application has a method called analyzeDocument which may do something like count the number of words and paragraphs in the document. Normally, this would be a quick process, and may be executed in the main thread without the user noticing a delay between pressing a button and the results showing.

- (IBAction)analyzeDocument:(NSButton *)sender
{
    NSDictionary *stats = [myDoc analyze];
    [myModel setDict:stats];
    [myStatsView setNeedsDisplay:YES];
}

If the document is large and analysis takes a long time to execute, then the main thread will wait for the function to finish. If it takes long enough, the user will notice, and the application may even "beachball". The solution can be seen here:

- (IBAction)analyzeDocument:(NSButton *)sender
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSDictionary *stats = [myDoc analyze];
        dispatch_async(dispatch_get_main_queue(), ^{
            [myModel setDict:stats];
            [myStatsView setNeedsDisplay:YES];
        });
    });
}

Here, the call to [myDoc analyze] is placed inside a block, which is then placed on one of the global concurrent queues. After [myDoc analyze] has finished running, a new block is placed on the main queue (on which the main thread of the application runs), which updates the GUI (this is necessary because the GUI can only be updated by the main thread). By making these two small changes, the developer has avoided a potential stall of the application as seen by the user, and allowed their application to make better use of hardware resources. The second example is that of parallelising a for loop:

for (i = 0; i < count; i++) {
    results[i] = do_work(data, i);
}
total = summarize(results, count);

This code runs the do_work function count times, assigning the ith result to the ith element in the array results, and then calls summarize on the array once the loop has ended. Unfortunately the work is computed sequentially, where it may not need to be. Assuming that do_work doesn't rely on the results of any of the other calls made to it, there is no reason why these calls cannot be made concurrently.
This is how this would be done in GCD:

dispatch_apply(count, dispatch_get_global_queue(0, 0), ^(size_t i){
    results[i] = do_work(data, i);
});
total = summarize(results, count);

Here, dispatch_apply runs the block passed to it count times, placing each invocation on a global queue and passing each block invocation a different number from 0 to count-1. This allows the OS to spread out the work as it sees fit, choosing the optimal number of threads to run on for the current hardware and system load. dispatch_apply does not return until all the blocks it places on the given queue have completed execution, so that it can be guaranteed that all the work inside the original loop has completed before calling summarize. Programmers can create their own serial queues for tasks which they know must run serially but which may be executed on a separate thread. A new queue would be created like so:

dispatch_queue_t exampleQueue;
exampleQueue = dispatch_queue_create( "com.example.unique.identifier", NULL );

// exampleQueue may be used here.

dispatch_release( exampleQueue );

Care must be taken to avoid a dispatched block on a queue synchronously placing another block on the same queue, as this is guaranteed to deadlock. Such code might do the following:

dispatch_queue_t exampleQueue = dispatch_queue_create( "com.example.unique.identifier", NULL );

dispatch_sync( exampleQueue, ^{
    dispatch_sync( exampleQueue, ^{
        printf( "I am now deadlocked...\n" );
    });
});

dispatch_release( exampleQueue );

Applications GCD is used throughout macOS (beginning with 10.6 Snow Leopard), and Apple has encouraged its adoption by macOS application developers. FreeBSD developer Robert Watson announced the first adaptation of a major open-source application, the Apache HTTP Server, to use GCD via the Apache GCD MPM (Multi-Processing Module) on May 11, 2010, in order to illustrate the programming model and how to integrate GCD into existing, large-scale, multi-threaded applications. His announcement observed that the GCD MPM had one-third to half the number of lines of other threaded MPMs. Internals GCD is implemented by libdispatch, with support from pthreads non-POSIX extensions developed by Apple. Apple has changed the interface since its inception (in OS X 10.5) through the official launch of GCD (10.6), Mountain Lion (10.8) and more recently Mavericks (10.9). The latest changes involve making the code supporting pthreads, both in user mode and kernel, private (with kernel pthread support reduced to shims only, and the actual workqueue implementation moved to a separate kernel extension). On other systems, libdispatch implements its own workqueue using the system's own event facilities (epoll, kevent, or Windows NT). On macOS, kevent is used with the kernel workqueue. See also Task Parallel Library Java Concurrency OpenMP Threading Building Blocks (TBB) References External links GCD project on GitHub GCD Reference from the Mac Developer Library The Introducing Blocks and Grand Central Dispatch article from the Mac Developer Library MacOS Parallel computing
17183849
https://en.wikipedia.org/wiki/One-Net
One-Net
ONE-NET is an open-source standard for wireless networking. ONE-NET was designed for low-cost, low-power (battery-operated) control networks, for applications such as home automation, security and monitoring, device control, and sensor networks. ONE-NET is not tied to any proprietary hardware or software, and can be implemented with a variety of low-cost off-the-shelf radio transceivers and microcontrollers from a number of different manufacturers. Wireless Transmission ONE-NET uses UHF ISM radio transceivers and currently operates in the 868 MHz and 915 MHz frequency bands, with 25 channels available for use in the United States. The ONE-NET standard allows for implementation on other frequencies, and some work is being done to implement it in the 433 MHz and 2.4 GHz frequency ranges. ONE-NET utilizes wideband FSK (frequency-shift keying) to encode data for transmission. ONE-NET features a dynamic data rate protocol with a base data rate of 38.4 kbit/s. The specification allows per-node dynamic data rate configuration for data rates up to 230 kbit/s. Network Characteristics ONE-NET supports star, peer-to-peer and multi-hop topologies. Star network topology can be used to lower the complexity and cost of peripherals, and also simplifies encryption key management. In peer-to-peer mode, a master device configures and authorizes peer-to-peer transactions. By employing repeaters and a configurable repetition radius, multi-hop mode makes it possible to cover larger areas or route around dead areas. Mesh routing is not supported. Outdoor peer-to-peer range has been measured at over 500 m, indoor peer-to-peer range has been demonstrated from 60 m to over 100 m, and multi-hop mode can extend operational range to several kilometers. Simple, block, and streaming transactions are supported. Simple transactions typically use message types as defined by the ONE-NET protocol to exchange sensor data, such as temperature or energy consumption, and control data, such as on/off messages. Simple transactions use encryption techniques to avoid susceptibility to replay attacks. Block transactions can be used to transmit larger blocks of data than simple messages. Block transactions consist of multiple packets containing up to 58 bytes per packet, and can transfer up to 65,535 bytes per block. Streaming transactions are similar in format to block transactions but do not require retransmission of lost data packets. Power Management ONE-NET is optimized for low power consumption, such as in battery-powered peripherals. Low-duty-cycle battery-powered ONE-NET devices such as window sensors and moisture detectors can achieve a three- to five-year battery life with "AA" or "AAA" alkaline cells. Dynamic power adjustment allows signal strength information to be used to scale back transmit power and conserve battery power. High data rates and short packet sizes minimize transceiver on-time. Further power efficiency can be gained by utilizing deterministic sleep periods for client devices. Security By default, ONE-NET uses the Extended Tiny Encryption Algorithm (XTEA) version 2 with 32 iterations (XTEA2-32). The ONE-NET protocol provides extensions to even higher levels of encryption. Encryption is integral to the ONE-NET protocol; there are no unencrypted modes. An alternate encryption ID tag allows extension to stronger algorithms. ONE-NET helps resist spoofing and replay attacks by using embedded nonces to ensure unique packets. Cryptographic nonce tracking allows source verification.
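To make the cipher primitive concrete, the following C sketch shows the block encipher routine of standard XTEA, the public-domain cipher by David Wheeler and Roger Needham on which ONE-NET's algorithm is based, run here with 32 rounds. This is illustrative only: the XTEA2 variant and the key handling actually mandated by the ONE-NET specification differ in detail and are not reproduced here, and the payload and key values below are hypothetical.

#include <stdint.h>
#include <stdio.h>

/* Standard XTEA block encipher (64-bit block, 128-bit key).
 * Illustrative only: ONE-NET specifies a variant (XTEA2-32) whose
 * exact details are defined by the ONE-NET specification. */
void xtea_encipher(unsigned num_rounds, uint32_t v[2], const uint32_t key[4])
{
    uint32_t v0 = v[0], v1 = v[1], sum = 0;
    const uint32_t delta = 0x9E3779B9;   /* key-schedule constant */

    for (unsigned i = 0; i < num_rounds; i++) {
        v0  += (((v1 << 4) ^ (v1 >> 5)) + v1) ^ (sum + key[sum & 3]);
        sum += delta;
        v1  += (((v0 << 4) ^ (v0 >> 5)) + v0) ^ (sum + key[(sum >> 11) & 3]);
    }
    v[0] = v0;
    v[1] = v1;
}

int main(void)
{
    uint32_t block[2]     = { 0x12345678, 0x9ABCDEF0 };  /* hypothetical payload */
    const uint32_t key[4] = { 1, 2, 3, 4 };              /* hypothetical key     */

    xtea_encipher(32, block, key);  /* 32 rounds, as the XTEA2-32 name suggests */
    printf("ciphertext: %08X %08X\n", (unsigned)block[0], (unsigned)block[1]);
    return 0;
}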
The security key update rate can be set on a per-system basis to allow greater control of the security level; faster key updates increase network security. Programmable "still operational" messages can be used to detect sensor tampering or device failure. Hardware ONE-NET works on a number of transceivers from manufacturers such as TI, Analog Devices, Semtech, RFM, Integration and Micrel. Transceivers that have been tested as working with ONE-NET include: TRC102 XE1203F XE1205 ADF7025 IA4421 CC1100 MICRF505 AX5051 SX1211 Simple ONE-NET devices such as motion sensors have modest host processor requirements: 16 KB of ROM, 1 KB of RAM, and 128 bytes of user non-volatile memory. ONE-NET is well suited to low-cost 8-bit and 16-bit processors and has been tested with the TI MSP430, Renesas R8C, C8051, and Freescale 68HC08 (HC08). Open Source License ONE-NET is available to use free of charge under an open-source license. ONE-NET uses the OSI-approved "Simplified BSD License", which is one of the so-called permissive free software licenses. The ONE-NET website provides a variety of open-source, community-supported resources, including: Schematics Bill of Materials Printed Circuit Board layouts Antenna designs Implementation examples Source Code Documentation User forums Supporting Companies A number of companies have announced support for the ONE-NET open-source initiative, including: Analog Devices Freescale Integration Associates IQD Frequency Products Micrel Renesas RF Monolithics Semtech Silicon Labs Texas Instruments Threshold References External links ONE-NET website "Open standard for One-Net wireless network and hardware based on it", an article in a Russian magazine ONE-NET on SourceForge Wireless networking standards Home automation Building automation Personal area networks
51478670
https://en.wikipedia.org/wiki/TI-DNOS
TI-DNOS
Distributed Network Operating System (DNOS) The Distributed Network Operating System (DNOS) is a general-purpose, multitasking operating system designed to operate with the Texas Instruments 990/10, 990/10A and 990/12 minicomputers. DNOS includes a sophisticated file management package which provides support for key indexed files, sequential files, and relative record files. DNOS is a multiterminal system that is capable of making each of several users appear to have exclusive control of the system. DNOS supports output spooling and program-accessible accounting data. Job-level and task-level operations enable more efficient use of system resources. In addition to multiterminal applications, DNOS provides support for advanced program development. Users communicate with DNOS by entering commands at a terminal or by providing a file of commands. The System Command Interpreter (SCI) processes those commands and directs the operating system to initiate the action specified by a command. A text editor allows the user to enter source programs or data into the system. A Macro Assembler is provided for assembly language programs. Several high-level languages, including Fortran, COBOL, BASIC, RPG II, and Pascal, are supported. A link editor and extended debugging facilities are also provided. A variety of utility programs and productivity tools support access to and management of information contained in a database, design of specific forms on the screen of a video display terminal (VDT), and word processing. The system supports a wide range of user environments. DNOS can support as few as one or two terminals, thus allowing the user of a smaller system to perform tasks efficiently and yet inexpensively. Larger configurations with a wide variety of peripherals are also supported. The maximum configuration size varies with the user's environment. Almost every minicomputer system requirement or application need can be met with DNOS. DNOS provides the base for a variety of communications products. Standard protocols for IBM 2780/3780 and for IBM 3270 communications are supported. Local area network software is supported for network input/output (I/O) and logon. In addition, sophisticated networking software is available with the Distributed Network Communications System (DNCS) and Distributed Network I/O (DNIO) packages. DNCS includes networking capabilities for X.25 and IBM's Systems Network Architecture (SNA) protocols. DNIO provides users with transparent access to other TI 990s running DNOS and can be used to connect to local or wide-area networks. DNOS is an international operating system designed to meet the commercial requirements of the United States, most European countries, and Japan. DNOS supports a complete range of international data terminals that permit users to enter, view, and process data in their own languages. The system includes error text files that can be edited so that error messages can be easily translated into languages other than English. DNOS Features DNOS supports features that incorporate the computing power of the larger computers, and it is upwardly compatible with other Texas Instruments operating systems. DNOS features include: Multiple Terminals - The number of online terminals is limited only by the available computing power and memory for system structures. File Security - The system can optionally include a file security system that allows system managers and other users to determine which user groups can access data files and execute specific programs.
Output Spooling - Output spooling is the process of queueing files for printing. Files are scheduled for printing based on job priority and availability of the printing device(s). You can specify special printing forms and formats. Accounting Function - The system can optionally include an accounting function that allows you to maintain accounting information on the use of system resources. Job Structure - The system incorporates a job structure that assists program management and enables efficient use of resources. A job is a sequence of cooperating tasks. I/O Resource Management - Resource-specific and resource-independent I/O operations allow flexibility in the selection of devices and file types. Program Segmentation - Program segmentation is the process of separating a program into segments. A program can consist of up to three segments at any one time. Additional segments can be accessed as necessary during program execution. Segments can be shared by programs. Interprocess Communication - The system provides the capability, through interprocess communication (IPC), for two programs (tasks) to exchange information. Power Failure Recovery - Should a power failure occur, DNOS maintains the state of the system at the time of the failure, if the optional software and backup power supply have been added to your system. When full power resumes, the operation will continue from the point at which the power failure occurred. Synchronization Methods - Event and semaphore synchronization methods are included to assist interaction between programs, either within a job or across job boundaries. Event synchronization allows the program to wait for one or more events to be completed before processing is continued. Semaphore synchronization uses variables to exchange signal information between programs. Concatenated Files - The system supports file concatenation, in which two or more physical files are recognized as a logically contiguous set of data. These files can exist on one or more volumes. Temporary Files A temporary file is one that exists only during the life of the created job, or during the extent of a program within that job. A job temporary file can be accessed by any program in a job, and is deleted when the job terminates. Other temporary files are created for use by a single program and are deleted when the program terminates. Diagnostic Support - The system supports online diagnostics that operate concurrently with program execution and system log analysis tasks. Batch Jobs - A batch job is a job that executes in the background, independent of a terminal. A user at a terminal can be executing multiple batch jobs, and at the same time, be performing foreground and/or background operations in an interactive job. Dynamic Configuration Changes - Table size, system characteristics, and device configuration changes can be enabled and take effect after the next Initial Program Load (IPL) sequence, rather than during system generation. Compatibility - DNOS design enables compatibility with the DX10 operating system. Many of the familiar operating concepts of the DX10 operating system are integrated within the design of DNOS. DNOS includes disk and other media formats, a Supervisor Call (SVC) interface, and SCI user commands that are all upwardly compatible with DX10. System Generation - The system generation utility allows a user to interactively specify all necessary features, available devices, and optional functions when creating an operating system. 
This data is used to construct a file that defines the configuration of the operating system. Message Facilities - The system provides a comprehensive set of codes and messages describing errors and special conditions that occur when using the operating system. The system handles messages in a uniform manner to ensure that they are easy to use. Two system directories maintain message files that contain text describing errors, information, and completion messages generated by the system. The directories are expandable to include message files written by users. System Log - The system log stores information about errors and messages generated by hardware, input/output operations, tasks, and user programs on two files and an optional specified device. Systems Problem Analysis - If problems occur during system operation, they can be diagnosed by using a system utility that can analyze the system whether the system is operating or has experienced a failure. In a failure situation, an image of memory can be copied to a file. When the system is operating again, an analysis utility can be used with a variety of commands entered by the user. System Configuration History - Information about all supplied software products installed on a system is maintained on a system disk file. Users can also send entries to the file for application products they develop. DNOS International Features - Error message files can be text edited to translate into a language other than English. It is also necessary to change the collating sequence of key indexed files (KIF) according to the translation. DNOS provides methods to change the required collating sequence. DNOS Performance Package - An optional add-on package is available for DNOS on the larger 990 series computers. This package enhances DNOS performance by using several system routines implemented in microcode in the writable control storage. References External links Dave Pitts' TI 990 page — Includes a simulator and DNOS Operating System images. Proprietary operating systems Texas Instruments
55806272
https://en.wikipedia.org/wiki/Uptane
Uptane
Uptane is a Linux Foundation / Joint Development Foundation hosted software framework designed to ensure that valid, current software updates are installed in adversarial environments. It establishes a process of checks and balances on a vehicle's electronic control units (ECUs) that can verify the authenticity of incoming software updates. Uptane is designed for "compromise resilience": it limits the impact of a compromised repository, an insider attack, a leaked signing key, or similar attacks. It can be incorporated into most existing software update technologies, but offers particular support for over-the-air (OTA) programming strategies.

History
Uptane was developed by a team of engineers at the New York University Tandon School of Engineering in Brooklyn, NY, the University of Michigan Transportation Research Institute in Ann Arbor, MI, and the Southwest Research Institute in San Antonio, TX. It was developed as open source software under a grant from the U.S. Department of Homeland Security. In 2018, the Uptane Alliance, a non-profit organization, was formed under the aegis of IEEE-ISTO to oversee the first formal release of a standard for Uptane. The first standard volume, entitled IEEE-ISTO 6100.1.0.0 Uptane Standard for Design and Implementation, was released on July 31, 2019. Uptane was recognized in 2017 by Popular Science as one of that year's top security innovations. As of 2020, multiple implementations of Uptane are available, both through open source projects such as the Linux Foundation's Automotive Grade Linux, and through third-party commercial suppliers such as Advanced Telematic Systems (ATS), now part of Here Technologies, and Airbiquity. There is also a reference implementation meant to aid adopters implementing Uptane.

References

External links
Uptane website

Further reading
Kuppusamy, T.K., Brown, A., Awwad, S., McCoy, D., Bielawski, R., Mott, C., Lauzon, S., Weimerskirch, A., and Cappos, J. "Uptane: Securing Software Updates for Automobiles". Proceedings of the 14th Embedded Security in Cars Conference (16–17 November 2016).
Kuppusamy, T.K., DeLong, L.A., and Cappos, J. "Securing Software Updates for Automotives Using Uptane". ;login: (Summer 2017).
Kuppusamy, T.K., DeLong, L.A., and Cappos, J. "Uptane: Security and Customizability of Software Updates for Vehicles". IEEE Vehicular Technology Magazine (March 2018).
Moore, M., McDonald, I., Weimerskirch, A., Awwad, S., DeLong, L.A., and Cappos, J. ESCAR USA 2020 Special Issue (24 August 2020).

Software frameworks Linux software
29303158
https://en.wikipedia.org/wiki/VMware%20Carbon%20Black
VMware Carbon Black
VMware Carbon Black (formerly Bit9, Bit9 + Carbon Black, and Carbon Black) is a cybersecurity company based in Waltham, Massachusetts. The company develops cloud-native endpoint security software designed to detect malicious behavior and to help prevent malicious files from attacking an organization. The company leverages a technology known as the Predictive Security Cloud (PSC), a big data and analytics cloud platform that analyzes customers' unfiltered data for threats. The company has approximately 100 partners and over 5,600 customers, including approximately one-third of the Fortune 100. In October 2019, the company was acquired by VMware.

History
Carbon Black was founded as Bit9 in 2002 by Todd Brennan, Allen Hillery, and John Hanratty. The company's first CEO was George Kassabgi. Patrick Morley, formerly the chief operating officer of Corel, took over the position in 2007. In 2013, the company's network was broken into by malicious actors who copied the private key of one of its code-signing certificates and used it to sign malware. In February 2014, Bit9 acquired start-up security firm Carbon Black. At the time of the acquisition, the company also raised $38.25 million in Series E funding, bringing Bit9's total venture capital raised to approximately $120 million. The company acquired Objective Logistics in June 2015. In August 2015, the company announced that it had acquired data analytics firm Visitrend and would open a technology development center in downtown Boston. A month later, the company announced it would partner with SecureWorks, Ernst & Young, Kroll, Trustwave, and Rapid7 to provide managed security and incident response services. The company changed its name to Carbon Black on February 1, 2016, after being known as "Bit9 + Carbon Black" for approximately two years. In July 2016, Carbon Black announced it had acquired next-generation antivirus software provider Confer for an undisclosed sum. Prior to the deal, Confer had raised $25 million in venture funding and had more than 50 employees. According to The Wall Street Journal, the deal was valued at $100 million. On May 4, 2018, the company joined the public markets, listing as "CBLK" on the Nasdaq exchange. As part of its initial public offering (IPO), Carbon Black raised approximately $152 million at a valuation of $1.25 billion. Prior to its IPO, the firm had raised $190 million from investors including Kleiner Perkins, Highland Capital, Sequoia, Accomplice, and Blackstone. In October 2019, the company was acquired by VMware.

References

External links

Software companies based in Massachusetts Companies based in Waltham, Massachusetts Software companies established in 2002 2002 establishments in Massachusetts Computer security companies Computer security software companies Companies formerly listed on the Nasdaq 2018 initial public offerings 2019 mergers and acquisitions VMware Software companies of the United States
646717
https://en.wikipedia.org/wiki/BZFlag
BZFlag
BZFlag (an abbreviation for Battle Zone capture the Flag) is a free and open-source, multiplayer online tank game.

Development
Inspired by Battlezone, BZFlag was first written in C by Chris Schoeneman in 1992 as part of his studies at Cornell University. BZFlag was initially called "bz", and despite its similarity to the SGI game of the same title by Chris Fouts, the two games are completely independent of each other. In 1993, BZFlag was released to the public for the first time. This release took a new turn compared to older versions after a cheater, who had edited the source code of his client to give himself powers not available in official releases, inspired Schoeneman and Pasetto to add "super flags". Super flags affect a tank's performance by adding abilities or weapons to its arsenal. The first four flags were High Speed (boosted tank speed), Quick Turn (the tank turned faster), Rapid Fire (shots moved faster), and Oscillation Overthruster (the tank could go through objects). There was only one of each flag, and all flags had a marker on them so tanks knew what type each was. Soon after, bad and good flags were added, and the idea remains part of gameplay today; however, flags no longer have markers, and the flag type is unknown to the player until it is picked up (unless the player's tank is carrying an Identify flag). In 1997, the release of version 1.7d came with a groundbreaking new feature: an in-game public server list. Previously, players had to either set up their own servers, know of servers, or read a list published and maintained by a third party. Now the server list is hosted on the official BZFlag website and allows anybody to play games on servers that choose to be public. Schoeneman eventually rewrote BZFlag in C++ for SGI's third IndiZone competition, which it won in the "Reality Engine" category. Tim Riker was later given the project, prior to version 1.7e, to maintain and evolve. BZFlag is written in C++ and uses OpenGL for rendering. Its audio and several other subsystems were originally written using OS-specific methods, although newer releases use SDL to perform low-level operations on all platforms. Textures for in-game objects are loaded from PNG files; audio, from WAVs. Zlib, a library written in C, is used to decompress data files.

Developers
The number of contributors to the project has steadily increased over time. The project invites all sufficiently experienced developers to contribute. Though there are 64 listed developers, a much smaller number of those are active contributors. Developers are able to edit any of the project's files to make changes at any time. However, when a developer has made an edit of which other developers do not approve, or which is inappropriate for the game, they are requested to revert to the previous version of the file; most developers monitor source edits on IRC. The copyright holder for the game is Tim Riker, but maintenance is guided by Scott Wichser and Jeff Makey as project managers. The game's original author, Chris Schoeneman, is no longer involved in development.

Gameplay
In a game of BZFlag, players drive tanks, viewed from a first-person perspective, in a server-defined world (also known as a "map"), which can be modified. Tanks have the ability to drive through other tanks, but cannot travel through buildings or other world objects. The basic objective is to destroy opponents' tanks, which are tanks of another team's color.
Since all players can see the position of all the tanks on their radar, it is a game of outmaneuvering rather than sneaking. There are styles of game play that modify the objective. Styles are server-based, as the server operator chooses what style to host. If no special style is indicated by the server owner, the only objective is the one above (to simply destroy opponent tanks); this is called a "free for all", or "FFA" for short. There are three other objectives and corresponding styles (four in total): a style called "capture the flag" (or "CTF" for short), in which tanks try to pick up an opponent's flag and bring it to their own home base; a style called "rabbit chase", in which the objective is for every hunter (orange) tank to try to destroy a particular white tank, called the "rabbit"; and a style called "King of the Hill", in which a team attempts to stay in a certain area for 30–60 seconds without being killed. If they succeed, that team becomes "King of the Hill". Servers can change the game mode and have custom maps made to fit the properties of the game. Certain thresholds are used to catch malicious players and kick them off the server, along with message filters and an entire collection of other anti-cheating features. There are around 250 servers active at any given time (although only about 10–20% have active players most of the time).

Teams
Tanks can join as one of the four team colors, as a rogue, or as an observer. Observers cannot play, but can move anywhere in the world and watch what the tank they are linked to is doing. Observers do not have a tank and are therefore not visible to players, but are shown on the scoreboard. The colored teams are Red, Green, Blue, and Purple. Rogue players are teamless players: they are allowed to kill colored team players and other rogues. Rogue tanks are colored dark grey out the window, and yellow on the radar. In rabbit-hunt games there is a white tank, known as the "rabbit", against the orange-brown "hunters", that is, every other player. The hunters are considered a team, so a rabbit with the Genocide, Shockwave, Guided Missile, or Laser flag is dangerous, and team kills often occur due to a group assault on the rabbit. Teams are necessary in capture-the-flag games, in which they have to protect their team flag from capture. Because rogues are occasionally allowed on servers, a rogue tank does not have any flag to defend and in turn cannot capture flags. However, rogues usually tend to aid other teams of choice, or merely enjoy adding a distraction to all teams. There is a plugin to prevent this; however, it is only used on servers with two large teams and one or two rogue players.

Maps
A BZFlag server can be configured to create a basic, random map for play, or users can load custom map files. BZFlag uses a customized text-based map format to define the placement of objects. While writing a map is fairly simple in this format, most map-makers use a 3D modeling program such as Wings 3D or Blender. Graphical map editors such as BZEdit or iBZEdit have also been used. Note that BZEdit is not distributed with the game and is no longer under active development (versions of it are available at the BZFlag SourceForge.net site). However, using Blender in combination with a BZFlag map plug-in is currently the most popular mapping method. Maps are built from a number of basic object types: boxes, pyramids, teleporters, cones, arcs, cylinders, spheres, team bases, and meshes.
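To illustrate the general shape of this format, a minimal hand-written map might look like the following sketch. The values and the front/back link syntax are indicative only and vary between BZFlag versions:

box
  position 10 20 0
  size 30 30 10
  rotation 45
end

pyramid
  position -20 0 0
  size 8 8 12
end

teleporter tele0
  position 0 40 0
  size 0.5 5 20
end

link
  from tele0:f
  to tele0:b
end

Each object block starts with its type and ends with "end"; the link block here would send a tank entering the teleporter's front face out of its back face.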
Teleporters are rectangular, yellow-bordered objects that teleport a tank to another teleporter. A mapmaker may choose to have a teleporter send tanks nowhere by leaving out links, the simple definitions of which pair of teleporter faces to connect. Teleporters are also capable of linking to themselves, reflecting bullets and tanks that enter. Team bases are used for CTF-style games. Full three-dimensional meshes have been available in maps since the 2.0 release.

Flags
BZFlag has three types of flags: team flags, bad flags, and super flags. Team flags are only placed in a world during a capture-the-flag game, and each represents the team whose color it bears. Super flags can appear in both free-for-all games and capture-the-flag games, but are strongly controlled by the server operator: the number and types of super flags, as well as where they are placed, can all be configured. Super flags come in both bad and good forms, and affect a tank accordingly. A bad flag may take away a certain sense or ability of the tank, such as its sight or speed, while a good flag does the opposite and actually helps the tank. Good super flags are usually held until the tank is killed and explodes, or until the player driving the tank chooses to drop the flag. Bad flags are dropped after a short amount of time, after a certain number of "wins", or when the tank dies; the rules for dropping bad flags are set by the operator. All super flags have a one- or two-letter code that is displayed next to a player's name on the scoreboard when that player holds the flag. Once in a while a new flag is introduced, and anyone can contribute one via developer contact or the wiki.

Server environment
Servers have environments that simulate the real world. A server's environment consists of three things: the map in play, the time of day being simulated, and weather conditions, which introduce elements over which players have no control, such as rain, snow, icy and/or slippery ground, modified friction and gravity, and fog. BZFlag takes the local time from the geographical location of the server and creates a night-time or daytime atmosphere in the background. Servers may synchronize to the local server time or allow players to change the time to any time they desire.

Critical reception
BZFlag was selected in Summer 2015 as a "HotPick" by Linux Format. BZFlag was selected as the SourceForge.net Project of the Month for April 2004. Both Free Software Magazine and Linux Magazine noted that BZFlag was fun to play and suitable for younger gamers.

References

External links
Official website
BZFlag on GitHub

1993 video games First-person shooters Free software programmed in C++ Open-source video games Shooter video games IRIX games Linux games Lua (programming language)-scripted video games MacOS games Multiplayer online games Video games developed in the United States Windows games
44358781
https://en.wikipedia.org/wiki/Pointr
Pointr
Pointr is a startup company based in London specializing in indoor positioning and navigation using iBeacons, the Bluetooth Low Energy devices formalised by Apple Inc. Pointr has created a GPS-like experience, with true position and turn-by-turn navigation, that is supported by most modern smartphones running either Android or iOS. Analytics and messaging modules can be added on to help venue operators understand venue usage and communicate with users. The features are provided through a software development kit (SDK) which aims to improve user experience whilst connecting the online and offline worlds. Many of the features are available without an internet connection, including sending messages between users via a form of mesh networking; an internet connection is, however, required for intelligent offers and live analytics. The markets where the technology is most frequently used are retail, exhibition centres, airports, and museums, but there are a number of uses in hospitals, warehouses, offices, and entertainment venues as well. The majority of software development is done in the company's office in Istanbul, with specialist modules created in London. The technology is commonly used in permanent installations, where the SDK is offered with a licence-fee model; however, some installations have been temporary, and hence one-off payments have been used.

History
Pointr was founded in November 2013 by Ege Akpinar under the name Indoorz; he was then joined by co-founders Axel Katalan, Chris Charles, and Can Akpinar in early 2014. The software was developed for seven months before launching, allowing time to build and test the product. In November 2014, the company adopted its current name of Pointr after receiving a client question about whether it could work outdoors as well. Pointr raised its first round of angel funding in January 2015 and has grown steadily, with its first customers in retail, warehouses, offices, and libraries. In February 2015, Pointr was accepted onto the Microsoft Ventures accelerator program based in Liverpool Street, London. Pointr is also supported by Level 39 (the fintech accelerator programme for Canary Wharf Group) and has installed its technology there to locate colleagues and assist new users navigating the venue.

References

External links

2014 software Android (operating system) software IOS software Indoor positioning system
1310632
https://en.wikipedia.org/wiki/Nintendo%20Entertainment%20Analysis%20%26%20Development
Nintendo Entertainment Analysis & Development
Nintendo Entertainment Analysis & Development, commonly abbreviated as Nintendo EAD and formerly known as the Nintendo Research & Development No. 4 Department (abbreviated as Nintendo R&D4), was the largest software development division within the Japanese video game company Nintendo. It was preceded by the Creative Department, a team of designers with backgrounds in art responsible for many different tasks, to which Shigeru Miyamoto and Takashi Tezuka originally belonged. Both served as managers of the EAD studios and were credited in every game developed by the division, with varying degrees of involvement. Nintendo EAD was best known for its work on games in the Donkey Kong, Mario, The Legend of Zelda, F-Zero, Star Fox, Animal Crossing, Pikmin and Wii series. Following a large company restructuring after the death of company president Satoru Iwata, the division merged with Nintendo's Software Planning & Development division in September 2015, becoming Nintendo Entertainment Planning & Development.

History

Background
During the 1970s, when Nintendo was still predominantly a toy company, it decided to expand into interactive entertainment and the video game industry. Several designers were hired to work under the Creative Department, which, at the time, was the only game development department within Nintendo. Among these new designers were Makoto Kano, who went on to design various Game & Watch games, and Shigeru Miyamoto, who would create various Nintendo franchises. In 1972, the department was renamed the Research & Development Department; it had about 20 employees. The department was later consolidated into a division and separated into three groups: Nintendo R&D1, R&D2 and R&D3.

1980–1989: Creation as Research & Development 4
Around 1983/1984, in the wake of the commercial success of Donkey Kong, a game designed by Shigeru Miyamoto, Hiroshi Imanishi oversaw the creation of the Research & Development No. 4 Department (commonly abbreviated to Nintendo R&D4), a new development department dedicated to developing video game titles for dedicated consoles, complementing the three existing departments in the Nintendo Manufacturing Division; the department was green-lit by then-Nintendo president Hiroshi Yamauchi. Imanishi appointed Hiroshi Ikeda, a former anime director at Toei Animation, as general manager of the newly created department, and Miyamoto as its chief producer; Miyamoto would later become one of the most recognized video game developers in the world. Nintendo also drafted a couple of key graphic designers into the department, including Takashi Tezuka and Kenji Miki. With the arcade market, its former focus, dwindling, Nintendo R&D1 concentrated most of its software development resources on the emerging handheld video game console market, primarily thanks to the worldwide success of Nintendo's Game Boy. This catapulted the R&D4 department to become the lead software developer for Nintendo's home video game consoles, developing a myriad of games for the Family Computer home console (abbreviated to Famicom, and known as the Nintendo Entertainment System in North America, Europe, and Australia). Hiroshi Ikeda's creative team had many video game design ideas but lacked the necessary programming power to make them all happen. Toshihiko Nakago and his small company, Systems Research & Development (SRD), had expertise in computer-aided design (CAD) tools, were very familiar with the Famicom chipset, and had originally been hired to work with Masayuki Uemura's Nintendo R&D2 to internally develop software development kits.
When Nintendo R&D2 and SRD jointly began porting R&D1 arcade games to the Famicom, Shigeru Miyamoto took the opportunity to lure Nakago away from R&D2 to help him create his first Nintendo R&D4 video game, Excitebike. And so the original R&D4 department came to be composed of Miyamoto, Takashi Tezuka, Kenji Miki, and Minoru Maeda handling design; Koji Kondo, Akito Nakatsuka, and Hirokazu Tanaka handling sound design; and Toshihiko Nakago and SRD as the technology and programming core. The same Miyamoto-led team that developed Excitebike went on to develop Kung Fu, a 1985 NES port of the scrolling beat 'em up arcade game Kung-Fu Master (1984). Miyamoto's team used the technical knowledge gained from working on both side-scrollers to further advance the platforming "athletic game" genre they had created with Donkey Kong; these games were key steps towards Miyamoto's vision of an expansive side-scrolling platformer. One of the first games developed by the R&D4 department was Mario Bros. in 1983, designed and directed by Miyamoto. The department was, however, unable to program the game with such an inexperienced team, and so counted on programming assistance from Gunpei Yokoi and the R&D1 department. One of the first completely self-developed games was Super Mario Bros., the sequel to Mario Bros. The game set standards for the platform genre and went on to be both a critical and commercial success. In 1986, R&D4 developed The Legend of Zelda, for which Miyamoto again served as director. The phenomenal sales of Super Mario Bros. and The Legend of Zelda fueled the expansion of the department with young game designers such as Hideki Konno, Katsuya Eguchi, Kensuke Tanabe, and Takao Shimizu, who would later become producers themselves.

1989–2002: Renamed to Entertainment Analysis & Development
In 1989, one year before the Super Famicom was released in Japan, the R&D4 department was spun off and made its own division, named Nintendo Entertainment Analysis & Development (commonly abbreviated as Nintendo EAD). The division comprised two departments: the Software Development Department, which focused on video game development and was led by Miyamoto, and the Technology Development Department, which focused on programming and developing tools and was led by Takao Sawano. The technology department was born out of several R&D2 engineers who had been assisting SRD with software libraries. The department later collaborated with Argonaut Games to develop the Super FX chip technology for the SNES, first used in Star Fox in 1993. This venture allowed the Technology Development Department to become more prominent in the 3D era, when it programmed several of Nintendo EAD's 3D games with SRD. F-Zero, released in 1990, was the first video game fully programmed at the division; prior to that, most programming was outsourced to SRD Co., Ltd. In 1997, Miyamoto explained that about twenty to thirty employees were devoted to each Nintendo EAD title during the course of its development. It was then that he also disclosed the existence of the SRD programming company within the division, formerly Nintendo R&D2's software unit, which was composed of about 200 employees with proficiency in software programming. In the run-up to launching both the GameCube and the Game Boy Advance, Nintendo sought to change the structure of its corporate management.
In June 2000, in an attempt to include both software and hardware experts on the board of directors, the general managers of EAD and Integrated Research & Development, Shigeru Miyamoto and Genyo Takeda respectively, joined the board. In addition, former HAL Laboratory president and future Nintendo president Satoru Iwata also joined. With Miyamoto promoted to the board of directors, he was now in charge of overseeing all of Nintendo's software development. To fill the void Miyamoto left as a producer, there was a series of promotions in the division: long-time Miyamoto colleague Takashi Tezuka became deputy general manager, and several senior directors, including Eiji Aonuma, Hideki Konno, Takao Shimizu, Tadashi Sugiyama and Katsuya Eguchi, were promoted to producers overseeing their own development teams in the division. Nevertheless, after the promotion, Miyamoto still went on to produce some games. On November 24, 2000, Nintendo moved its Japanese headquarters, along with its internal teams, into a newly built facility, primarily intended to provide a more expansive workplace for Nintendo's growing development teams. In 2002, Nintendo opened a Nintendo EAD studio in Tokyo, appointing Takao Shimizu as manager of the branch. The studio was created with the goal of bringing in fresh new talent from the capital of Japan who would not be willing or able to travel to Kyoto. Its first project was Donkey Kong Jungle Beat for the GameCube, which made use of the DK Bongos, initially created for Donkey Konga.

2003–2015: Restructure, new managers, and merger with SPD
On September 30, 2003, as a result of a corporate restructuring Nintendo was undergoing, in which several members of Nintendo R&D1 and R&D2 were reassigned to Nintendo EAD, the department was consolidated into a division and began welcoming a new class of managers and producers. Hideki Konno, Katsuya Eguchi, Eiji Aonuma, Hiroyuki Kimura, and Tadashi Sugiyama were appointed project managers of their own groups within the Software Development Department; Shimizu was appointed project manager of the Tokyo Software Development Department; and Keizo Ota and Yasunari Nishida were appointed project managers of their own groups in the Technology Development Department. In 2013, Katsuya Eguchi was promoted to department manager of both Software Development Departments, in Kyoto and Tokyo. As such, he left his role as group manager of Software Development Group No. 2 and was replaced by Hisashi Nogami. On June 18, 2014, the EAD Kyoto branch was moved from the Nintendo Central Office to the Nintendo Development Center in Kyoto. The building housed more than 1,100 developers from all of Nintendo's internal research and development divisions, which included the Nintendo EAD, SPD, IRD and SDD divisions. On September 16, 2015, EAD merged with Nintendo Software Planning & Development into a single game development division, Entertainment Planning & Development (EPD). The move followed an internal restructuring of Nintendo executives and departments after the death of president Satoru Iwata in July 2015.

Structure
The Nintendo Entertainment Analysis & Development division was headed by Nintendo veteran Takashi Tezuka, who acted as general manager. The division was divided into two development departments: one in Kyoto, with Katsuya Eguchi acting as its deputy general manager, and one in Tokyo, with Yoshiaki Koizumi acting as its deputy general manager.
Kyoto Software Development Department
The Nintendo EAD Kyoto Software Development Department was the largest and one of the oldest research and development departments within Nintendo, housing more than 700 video game developers. It was located in Kyoto, Japan, formerly in the Nintendo Central Office; on June 28, 2014, it was relocated to the new Nintendo Development Center, which housed all of Nintendo's internal research and development divisions. The department included Nintendo's most notable producers: Hideki Konno, producer of the Nintendogs and Mario Kart series; Katsuya Eguchi, producer of the Wii and Animal Crossing series; Eiji Aonuma, producer of The Legend of Zelda series; Hiroyuki Kimura, producer of the Big Brain Academy, Super Mario Bros., and Pikmin series; and Tadashi Sugiyama, producer of the Wii Fit, Steel Diver and Star Fox series. The department was managed by veteran Nintendo game designer Katsuya Eguchi. Hisashi Nogami later succeeded Eguchi as the producer of the Animal Crossing franchise and was responsible for the creation of the Splatoon series.

Technology Development Department

Tokyo Software Development Department
The Nintendo EAD Tokyo Software Development Department was created in 2002 with the goal of bringing in fresh new talent from the capital of Japan who would not be willing to travel hundreds of miles away to Kyoto. It was located in Tokyo, Japan, in the Nintendo Tokyo Office. In 2003, twenty members of the Entertainment Analysis & Development Division in Kyoto volunteered to relocate to Nintendo's Tokyo Office to expand development resources. These twenty volunteers were primarily from the Super Mario Sunshine team. Management saw it as a good opportunity to expand and recruit several developers who were more comfortable living in Tokyo than relocating to Kyoto. Takao Shimizu (the studio's original manager and producer) and Yoshiaki Koizumi (director) began hiring recruits in Tokyo from established companies such as Sega, Koei, and Square Enix. Shimizu and Koizumi jointly spearheaded the studio's first project, Donkey Kong Jungle Beat. This was followed in 2007 by the release of the critically and commercially acclaimed Super Mario Galaxy. After the release of Super Mario Galaxy, Koizumi was promoted to manager and producer and officially opened Tokyo Software Development Group No. 2. The Tokyo group had veteran game developer Katsuya Eguchi as its general manager, who also oversaw development operations for the Kyoto Software Development Department.

List of software developed
The following is a list of software developed by the Nintendo Entertainment Analysis & Development Division.

Notes

References

Nintendo divisions and subsidiaries Video game companies established in 1983 Video game companies disestablished in 2015 Defunct video game companies of Japan Japanese companies disestablished in 2015 Japanese companies established in 1983
4397102
https://en.wikipedia.org/wiki/Chicken%20%28Scheme%20implementation%29
Chicken (Scheme implementation)
Chicken (stylized as CHICKEN) is a programming language, specifically a compiler and interpreter which implement a dialect of the programming language Scheme and compile Scheme source code to standard C. It is mostly R5RS compliant and offers many extensions to the standard. The newer R7RS standard is supported through an extension library. Chicken is free and open-source software available under a BSD license. It is implemented mostly in Scheme, with some parts in C for performance or to make embedding into C programs easier.

Focus
Chicken's focus is quickly clear from its slogan: "A practical and portable Scheme system". Chicken's main focus is the practical application of Scheme for writing real-world software. Scheme is well known for its use in computer science curricula and programming language experimentation, but it has seen little use in business and industry. Chicken's community has produced a large set of libraries to perform a variety of tasks. The Chicken wiki (the software running it is also a Chicken program) also contains a list of software that has been written in Chicken. Chicken's other goal is to be portable. By compiling to an intermediate representation, in this case portable C (as do Gambit and Bigloo), programs written in Chicken can be compiled for common popular operating systems such as Linux, macOS, other Unix-like systems, Windows, Haiku, and the mobile platforms iOS and Android. It also has built-in support for cross-compiling programs and extensions, which allows it to be used on various embedded system platforms.

Design
Like many Scheme compilers, Chicken uses standard C as an intermediate representation. A Scheme program is translated into C by the Chicken compiler, and then a C compiler translates the C program into machine code for the target computer architecture, producing an executable program. The universal availability of C makes it useful for this purpose. Chicken's design was inspired by a 1994 paper by Henry Baker that outlined an innovative strategy for compiling Scheme into C. A Scheme program is compiled into C functions. These C functions never reach the return statement; instead, they call a new continuation when complete. These continuations are C functions, passed as extra arguments to other C functions, and they are calculated by the compiler. So far, this is the essence of continuation-passing style. Baker's novel idea is to use the C call stack as the Scheme heap's nursery. Hence, normal C stack operations, such as automatic variable creation and variable-sized array allocation, can be used. When the stack fills up (that is, the stack pointer reaches the top of the stack), a garbage collection is initiated. The collector is a copying garbage collector originally devised by C. J. Cheney, which copies all live continuations and other live objects to the heap. Despite this, the C code does not copy C stack frames, only Scheme objects, so it does not require knowledge of the C implementation. In full, the Scheme heap consists of the C stack as the nursery together with the two heaps required by the generational garbage collector. This approach gives the speed of the C stack for many operations, and it allows the use of continuations as simple calls to C functions. Further, Baker's solution guarantees asymptotic tail-recursive behavior, as required by the Scheme language standard. The implementation in the Chicken Scheme compiler is even asymptotically safe for space.
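The following C program is a deliberately simplified sketch of this "Cheney on the MTA" control flow, not Chicken's actual generated code: each function never returns normally, the pending call is saved before the exhausted stack slice is abandoned with longjmp, and a small trampoline re-invokes it. A real implementation would also copy live stack-allocated Scheme objects to the heap at that point; the constants and names here are illustrative assumptions.

#include <setjmp.h>
#include <stdio.h>
#include <stdlib.h>

static jmp_buf trampoline;      /* re-entry point after a stack reset */
static char *stack_base;        /* where the current stack slice began */

/* the pending call, "evacuated" to static storage before longjmp */
static void (*saved_fn)(long);
static long saved_arg;

#define STACK_SLICE (64 * 1024) /* pretend nursery size, in bytes */

static void count_down(long n) {
    char here;                                  /* probe of current stack depth */
    if (stack_base - &here > STACK_SLICE) {     /* assumes a downward-growing stack */
        saved_fn = count_down;                  /* save the live continuation ...   */
        saved_arg = n;
        longjmp(trampoline, 1);                 /* ... and discard the C stack      */
    }
    if (n == 0) {                               /* final continuation: like compiled */
        printf("done\n");                       /* Scheme code, it never returns     */
        exit(0);
    }
    count_down(n - 1);                          /* "tail call" as a plain C call */
}

int main(void) {
    char base;
    stack_base = &base;
    saved_fn = count_down;
    saved_arg = 10000000;       /* recursion far deeper than any real C stack */
    setjmp(trampoline);         /* every longjmp lands back here */
    saved_fn(saved_arg);        /* (re)start the pending call */
}

Here the saved function pointer and argument stand in for the continuation closure that the real collector would copy to the heap before unwinding.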
Limitations and deviations from the standard
Chicken Scheme is mostly R5RS-compliant, with a few notable limitations and deviations. R7RS compatibility is supplied as an extension library. The core system has basic support for UTF-8 characters; however, the string indexing and manipulation procedures are not UTF-8 aware. An extension library exists which adds support for full UTF-8 awareness.

Add-on software
Chicken has a large software repository of added libraries and programs, termed eggs. This system is very similar to RubyGems. Initially, these eggs were developed in one central Subversion repository, in which creating a tag would automatically cause a new version of the extension to become available for download. Currently, eggs can be developed anywhere and under any version control system, while still maintaining semi-automatic release management when using most of the popular code-hosting sites. This release method is VCS-agnostic in the sense that the user does not need to have the relevant VCSes installed. The developer is free to host anywhere they choose, and can even choose to avoid public version control and distribute only plain tarballs. For all released eggs, the latest version is tested automatically as part of a continuous integration process. A canonical test server exists, where the core system and all eggs are tested daily against the most recent development version (to catch regressive bugs) and the most recent stable version (to ensure that everything works for users of the stable system). Also, anyone can volunteer to supply further testing capacity on different hardware, operating systems, or core releases.

Features
Chicken supports most of the R5RS Scheme standard, but it also adds a few nonstandard features which are not available in all Scheme implementations.

Foreign function interface
Because Chicken compiles to C, it is possible to inject custom C code into the compiled result, which eases integration with C libraries. Its foreign function interface supports converting back and forth between most built-in C types and corresponding Scheme objects. Also, extension libraries exist for interfacing to Python, Lua, and Java, via the Java Native Interface (JNI) or a bridge.

Cross-compiling
It is relatively easy to cross-compile Scheme code for another platform (for example, for embedded use on a device). To make cross-compiling possible for Scheme code, Chicken imposes a model of separate compilation: a compiled module consists of two shared libraries. One library contains the actual code which will be used at runtime (compiled for the target platform), and the other is an import module, which is used to load the code which runs at compile time (on the host platform), such as procedural macro code. The Chicken compiler itself can also be easily cross-compiled; after translation to C has been achieved, one can simply use a C compiler which is set up to build for another platform.

Modules and macros
Since version 4, Chicken has had a built-in module system and support for low-level hygienic macros through explicit-renaming macros (before version 4, this was available through an add-on library). Standard syntax-rules macros are also supported, as are implicit-renaming macros, which are basically a reversed version of explicit renaming. This mechanism trades performance for convenience: each identifier not explicitly injected as unhygienic is automatically renamed to avoid name capture.
The performance cost arises because implicit renaming requires the macro-expander to retraverse the expressions two more times. This cost is paid at expansion time, so a macro author must consider whether longer compile times are acceptable.

Remote debugger
Since version 4.11, Chicken has shipped with a debugger named Feathers. When Scheme code is compiled with the needed debugging option, debugging events are injected at specific points in the code. These are implemented as calls to a C function, which has relatively low overhead when the code is not actually being debugged. When debugging, the program tries to make a TCP connection to a Feathers server process, possibly on a different machine. The process is halted, and the user may set breakpoints and start the program. When a breakpoint is hit, the client (the process being debugged) enters a command loop, which allows interrogation of the client, for example to read out variables or mutate them.

Limited static type analysis
Chicken supports local flow analysis. This allows the compiler to catch variable type errors at compile time and to perform type specialisation. Specialisation makes it possible to remove several runtime safety checks for type detection when the type can be deduced at compile time, resulting in improved run-time performance. This "scrutinizer" does not perform cross-module flow analysis, so it can only be used to optimize code that is part of one compilation unit (or module).

See also
Tail recursion
Cheney's algorithm
"M.T.A. (song)", a song referenced in Baker's 1994 paper
Gambit (Scheme implementation)
Stalin (Scheme implementation)

References

External links

Scheme (programming language) compilers Scheme (programming language) interpreters Scheme (programming language) implementations Free compilers and interpreters Software using the BSD license
50160571
https://en.wikipedia.org/wiki/Meson%20%28software%29
Meson (software)
Meson is a software tool for automating the building (compiling) of software. The overall goal of Meson is to promote programmer productivity. Meson is free and open-source software written in Python, under the Apache License 2.0.

Interoperability
Being written in Python, Meson runs natively on Unix-like operating systems, including macOS, as well as on Microsoft Windows and other operating systems. Meson supports the C, C++, CUDA, D, Objective-C, Fortran, Java, C#, Rust, and Vala languages, and has a mechanism for handling dependencies called Wrap. Meson supports the GNU Compiler Collection, Clang, Microsoft Visual C++ and other compilers, including non-traditional compilers such as Emscripten and Cython. The project uses Ninja as the primary backend buildsystem, but can also use Microsoft Visual Studio or Xcode backends.

Language
The syntax of Meson's build description files (the Meson language) borrows from Python, but is not Python: it is designed such that it can be reimplemented in any other language; for example, Meson++ is a C++ implementation. The dependency on Python is an implementation detail. The Meson language is intentionally not Turing-complete and therefore cannot express an arbitrary program. Instead, arbitrary build steps beyond compiling supported languages can be represented as custom targets. The Meson language is strongly typed, such that built-in types like library, executable, string, and lists thereof are not interchangeable. In particular, unlike Make, the list type does not split strings on whitespace. Thus, whitespace and other characters in filenames and program arguments are handled cleanly.

Speed and correctness
As with any typical buildsystem, correct incremental builds are the most significant speed feature (because all incremental progress is discarded whenever the user is forced to do a clean build). Unlike bare Make, the separate configure step ensures that changes to arguments, environment variables, and command output are not partially applied in subsequent builds, which would lead to a stale build. Like Ninja, Meson does not support globbing of source files. By requiring all source files to be listed in the build definition files, the build definition file timestamps are sufficient to determine whether the set of source files has changed, thereby ensuring that removed source files are detected. CMake supports globbing, but recommends against it for the same reason. Meson uses ccache automatically if it is installed. It also detects changes to the symbol tables of shared libraries in order to skip relinking executables against a library when there are no ABI changes. Precompiled headers are supported, but require configuration. Debug builds are unoptimized by default.

Features
A stated goal of Meson is to facilitate modern development practices. As such, Meson knows how to do unity builds, builds with test coverage, link-time optimization, and so on, without the programmer having to write support for them.

Subprojects
Meson can automatically find and use external dependencies via pkg-config, CMake, and project-specific lookups, but this only finds installed dependencies, which Meson can do nothing about. Alternatively, or as a fallback, a dependency can be provided as a subproject: a Meson project within another, either contained or as a download link, possibly with patches.
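For illustration, a dependency lookup with a subproject fallback might be declared in a meson.build file like this. This is a minimal sketch: the project name, the source file, and the assumption that a zlib subproject exists under subprojects/ and exports a zlib_dep variable are all hypothetical:

project('demo', 'c')

# Use the system zlib if pkg-config or CMake can find it; otherwise
# build the bundled subprojects/zlib and use its exported dependency.
zlib_dep = dependency('zlib', fallback : ['zlib', 'zlib_dep'])

executable('demo', 'main.c', dependencies : zlib_dep)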
This lets Meson resolve dependency hell for the convenience of casual users who want to compile the project, but it may contribute to software bloat if a commonly installed dependency could have been used instead. The mode favored by Linux packagers is therefore fallback. Meson supports both Meson and CMake subprojects. A Meson build file may also refer to the WrapDB service.

Cross compilation
Cross compilation requires extra configuration, which Meson supports in the form of a separate cross file, which can be external to the Meson project.

Adopters
GNOME has made it a goal to port its projects to Meson. As of late 2017, GNOME Shell itself exclusively requires Meson after abandoning Autotools, and central components like GTK+, Clutter-GTK, GLib, and GStreamer can be built with Meson. Systemd has relied on Meson since dropping Autotools in version 234. X.Org and Mesa have also been ported to Meson. The Meson homepage lists further projects using Meson.

See also

References

External links

Build automation Compiling tools Free software programmed in Python Meson build system Software using the Apache license
45330308
https://en.wikipedia.org/wiki/Biometric%20device
Biometric device
A biometric device is a security identification and authentication device. Such devices use automated methods of verifying or recognising the identity of a living person based on a physiological or behavioral characteristic. These characteristics include fingerprints, facial images, iris patterns, and voice.

History
Biometric identification has been in use for thousands of years. Non-automated biometric devices have been in use since 500 BC, when ancient Babylonians would sign their business transactions by pressing their fingertips into clay tablets. Automation in biometric devices was first seen in the 1960s, when the Federal Bureau of Investigation (FBI) introduced the Identimat, which checked fingerprints to maintain criminal records. The first systems measured the shape of the hand and the length of the fingers. Although discontinued in the 1980s, these systems set a precedent for future biometric devices.

Subgroups
Biometric devices can be grouped according to the human characteristic they analyse to grant access:
Chemical biometric devices: analyse segments of DNA.
Visual biometric devices: analyse visual features; these include iris recognition, face recognition, finger recognition, and retina recognition.
Behavioral biometric devices: analyse characteristics such as a person's gait and signature (signing speed, stroke width, pen pressure), which are distinct to every human.
Olfactory biometric devices: analyse odor to distinguish between users.
Auditory biometric devices: analyse a speaker's voice to determine identity for access control.

Uses
Workplace
Biometrics are used in the workplace to establish better, more accessible records of the hours employees work. With the increase in "buddy punching" (cases where employees clocked in or out for coworkers and fraudulently inflated their work hours), employers have looked towards new technology like fingerprint recognition to reduce such fraud. Additionally, employers are faced with the task of properly collecting data such as entry and exit times. Biometric devices provide a largely foolproof and reliable way of collecting such data, as employees have to be present to enter biometric details which are unique to them.

Immigration
As the demand for air travel grows and more people travel, modern airports have to implement technology in such a way that there are no long queues. Biometrics are being implemented in more and more airports, as they enable quick recognition of passengers and hence lead to a lower volume of people standing in queues. One such example is Dubai International Airport, which plans to make immigration counters a relic of the past by implementing iris-on-the-move (IOM) technology, which should enable the seamless departure and arrival of passengers at the airport.

Handheld and personal devices
Fingerprint sensors can be found on mobile devices. The fingerprint sensor is used to unlock the device and authorize actions, such as money and file transfers. It can be used to prevent a device from being used by an unauthorized person.

Present day biometric devices
Personal signature verification systems
This is one of the most highly recognised and accepted biometrics in corporate surroundings.
Signature verification has been taken a step further by capturing the signature while taking into account many surrounding parameters, such as the pressure applied while signing, the speed of the hand movement, and the angle made between the surface and the pen used to make the signature. This system also has the ability to learn from users, as signature styles vary for the same user; by taking samples of data over time, the system is able to increase its own accuracy.

Iris recognition system
Iris recognition involves the device scanning the iris of the subject and then cross-referencing it against data stored in a database. It is one of the most secure forms of authentication: while fingerprints can be left behind on surfaces, iris prints are extremely hard to steal. Iris recognition is widely applied by organisations dealing with the masses, one example being the Aadhaar identification carried out by the Government of India to keep records of its population. The reason for this is that iris prints hardly evolve during a person's lifetime and are extremely stable.

Problems with present day biometric devices
Biometric spoofing
Biometric spoofing is a method of fooling a biometric identification management system in which a counterfeit mold is presented in front of the biometric scanner. The counterfeit mold emulates the unique biometric attributes of an individual so as to make the system mistake the artifact for the real biological target, gaining access to sensitive data or materials. One high-profile case of biometric spoofing came to light when it was found that the fingerprint of German defence minister Ursula von der Leyen had been successfully replicated by the Chaos Computer Club. The group used high-quality camera lenses and shot images from six feet away, then used professional fingerprint software to map the contours of the minister's thumbprint. Progress has been made to stop spoofing: using the principle of pulse oximetry, the liveness of the test subject is taken into account by measuring blood oxygenation and heart rate. This reduces attacks like the one mentioned above, although these methods are not yet commercially applicable, as implementation costs are high. This limits their real-world application and hence leaves biometrics insecure until such methods become commercially viable.

Accuracy
Accuracy is a major issue with biometric recognition. Passwords are still extremely popular, because a password is static in nature, while biometric data can be subject to change (for example, one's voice becoming deeper due to puberty, or an injury to the face, either of which could lead to improper reading of scan data). When testing voice recognition as a substitute for PIN-based systems, Barclays reported that its voice recognition system is 95 percent accurate. This statistic means that some of its customers' voices might still not be recognised even when they speak correctly. This uncertainty could lead to slower adoption of biometric devices and continued reliance on traditional password-based methods.

Benefits of biometric devices over traditional methods of authentication
Biometric data cannot be lent, and hacking biometric data is complicated, which makes it safer to use than traditional methods of authentication like passwords, which can be lent and shared.
Passwords do not have the ability to judge the user and rely only on the data provided, which can easily be stolen, while biometrics work on the uniqueness of each individual.

Passwords can be forgotten, and recovering them can take time, whereas biometric devices rely on biometric data that tends to be unique to a person, so there is no risk of forgetting the authentication data. A study conducted among Yahoo! users found that at least 1.5 percent of Yahoo users forgot their passwords every month; since the process of recovering passwords is lengthy, this makes accessing services slower for consumers. These shortcomings of passwords make biometric devices comparatively more efficient and reduce effort for the end user.

Future

Researchers are targeting the drawbacks of present-day biometric devices and developing ways to reduce problems such as biometric spoofing and inaccurate intake of data. Technologies being developed include the following:

The United States Military Academy is developing an algorithm that allows identification through the way each individual interacts with their own computer; the algorithm considers unique traits like typing speed, rhythm of writing, and common spelling mistakes. This data allows the algorithm to create a unique profile for each user by combining their multiple behavioural and stylometric signals, which can be very difficult to replicate collectively.

A recent innovation by Kenneth Okereafor presented an optimised and secure design for applying a biometric liveness detection technique using a trait randomisation approach. This novel concept potentially opens up new ways of mitigating biometric spoofing more accurately and of making impostor predictions intractable or very difficult in future biometric devices. A simulation of Okereafor's biometric liveness detection algorithm using a 3D multi-biometric framework consisting of 15 liveness parameters from facial print, fingerprint and iris pattern traits resulted in a system efficiency of 99.2% over a cardinality of 125 distinct randomisation combinations. The uniqueness of Okereafor's innovation lies in the application of uncorrelated biometric trait parameters, including intrinsic and involuntary biomedical properties from eye blinking pattern, pulse oximetry, finger spectroscopy, electrocardiogram, perspiration, etc.

A group of Japanese researchers has created a system which uses 400 sensors in a chair to identify the contours and unique pressure points of a person. This derrière authenticator, still undergoing substantial improvement and modification, is claimed to be 98% accurate and is seen as having application in anti-theft mechanisms in cars.

Inventor Lawrence F. Glaser has developed and patented technology which appears at first to be a high-definition display. However, unlike displays with two-dimensional pixel arrays, this technology incorporates pixel stacks, accomplishing a series of goals leading to the capture of a multi-biometric. It is believed to be the first man-made device which can capture two or more distinct biometrics from the same region of pixel stacks (forming a surface) at the same instant, allowing the data to form a third biometric: a more complex pattern that includes how the data align. An example would be capturing the fingerprint and the capillary pattern at precisely the same moment.
Other opportunities exist with this technology, such as capturing Kirlian data, which assures that the finger was alive during an event, or capturing bone detail, forming another biometric to be used with those previously mentioned. The concept of stacking pixels to achieve increased functionality from less surface area is combined with the ability to emit any colour from a single pixel, eliminating the need for red-green-blue (RGB) surface emission. Lastly, the technology was tested with high-power cadmium magnets to check for distortion or other anomalies, as the inventor wanted to also embed magnetic emission and magnetic collection in this same surface technology, but without exhibiting any magnetic stripes on the surface. Devices such as smart cards can pass magnetic data from any orientation by automatically sensing what the user has done, using data about where the card is when it is "swiped" or inserted into a reader. This technology can detect touch or read gestures at a distance, without a user-side camera and with no active electronics on its surface. The use of multi-biometrics hardens automated identity acquisition by a factor of 800,000,000 and will prove very difficult to hack or emulate.

References

Computer security
Security
Perimeter security
27994372
https://en.wikipedia.org/wiki/Internet%20kill%20switch
Internet kill switch
An Internet kill switch is a countermeasure concept of activating a single shut-off mechanism for all Internet traffic. The concept behind having a kill switch is based on creating a single point of control (i.e. a switch) for a single authority to control or shut down the Internet in order to protect it or its users. In the United States, groups such as the American Civil Liberties Union and the Nominet Trust have criticized proposals for implementing the idea so far.

China

China completely shut down Internet service in the autonomous region of Xinjiang for almost a year after the July 2009 Ürümqi riots.

Egypt

On January 27, 2011, during the Egyptian Revolution of 2011, the government of President Hosni Mubarak cut off access to the Internet by all four national ISPs, and all mobile phone networks. This version of a kill switch was effected by a government-ordered shutdown of the Egyptian-run portion of the Domain Name System and of Border Gateway Protocol (BGP) routing, making transmission of Internet traffic impossible for Egyptian ISPs. All network traffic ceased within two hours, according to Arbor Networks.

India

India sometimes terminates Internet connections in Kashmir and the north-eastern states.

Iran

The Iranian government activated the Internet kill switch during the 2019–2020 Iranian protests to prevent the organization of new protests and riots.

Turkey

In June 2016, Turkey introduced an Internet kill switch law permitting authorities to "partially or entirely" suspend Internet access due to wartime measures, national security or public order. The mechanism came to attention when Internet monitoring group Turkey Blocks detected a nationwide slowdown affecting several social network services on the eve of a major offensive during the 2016 Turkish military intervention in Syria. Similar Internet restrictions had previously been implemented during national emergencies to control the flow of information in the aftermath of terrorist attacks, originally without any clear legal grounding.

United Kingdom

In the United Kingdom, the Communications Act 2003 and the Civil Contingencies Act 2004 allow the Secretary of State for Culture, Media and Sport to suspend Internet services, either by ordering Internet service providers to shut down operations or by closing Internet exchange points. A representative of the Department for Culture, Media and Sport commented on these powers in 2011, while Dr. Peter Gradwell, a trustee of the Nominet Trust, criticized the provisions in the Communications Act.

United States

History

The prospect of cyberwarfare during the 2000s has prompted the drafting of legislation by US officials, but worldwide the implications of actually "killing" the Internet have prompted criticism of the idea in the United States. During the Arab Spring in Tunisia, Egypt, and Libya, access to the Internet was denied in an effort to limit peer networking and so hinder organization. While the effects of shutting off information access are controversial, the topic of a kill switch remains unresolved.

Communications Act of 1934

The Communications Act of 1934 established United States federal regulation of electronic communications by the Federal Communications Commission (FCC). This act, created by the Franklin D. Roosevelt administration, gave the president powers of control over the media under certain circumstances. This act was the basis of regulatory power for the executive branch of the government to control electronic communications in the United States.
Telecommunications Act of 1996

Presidential Decision Directive 63 (PDD-63), signed in May 1998, established a structure under White House leadership to coordinate the activities of designated lead departments and agencies, in partnership with their counterparts from the private sector, to "eliminate any significant vulnerability to both physical and cyber attacks on our critical infrastructures, including especially our cyber systems".

Proposed Protecting Cyberspace as a National Asset Act of 2010

On June 19, 2010, Senator Joe Lieberman (I-CT) introduced the Protecting Cyberspace as a National Asset Act, which he co-wrote with Senator Susan Collins (R-ME) and Senator Thomas Carper (D-DE). If signed into law, this controversial bill, which the American media dubbed the kill switch bill, would have granted the President emergency powers over the Internet. Other parts of the bill focused on the establishment of an Office of Cyberspace Policy and on its missions, as well as on the coordination of cyberspace policy at the federal level.

The American Civil Liberties Union (ACLU) criticized the scope of the legislation in a letter to Senator Lieberman signed by several other civil liberty groups. In particular, they asked how the authorities would classify what is critical communications infrastructure (CCI) and what is not, and how the government would preserve the right of free speech in cybersecurity emergencies. An automatic renewal provision within the proposed legislation would have kept it in force beyond thirty days. The group recommended that the legislation follow a strict First Amendment scrutiny test.

All three co-authors of the bill subsequently issued a statement claiming that the bill "[narrowed] existing broad Presidential authority to take over telecommunications networks", and Senator Lieberman contended that the bill did not seek to make a 'kill switch' option available ("the President will never take over – the government should never take over the Internet"), but instead insisted that serious steps had to be taken in order to counter a potential mass-scale cyber attack. The Protecting Cyberspace as a National Asset Act of 2010 expired at the end of the 2009–2010 Congress without receiving a vote from either chamber.

Implementation issues

There are several issues that may prevent such a system from being established in the United States. The Telecommunications Act of 1996 deregulated the telecommunications market and allowed for the growth of data carrier services. Since the Federal Communications Commission (FCC) does not require a company to register as an Internet Service Provider (ISP), only estimates based on publicly available data exist. The FCC estimated in April 2011 that there were over 7,800 ISPs operating in the United States. This makes implementation of a kill switch that much more difficult: each company would have to comply voluntarily. There is no law that gives the United States authority over an ISP without a court order. Nor is a court order necessarily the solution: even if an ISP were compelled by court order, the attack may already have taken place, leaving prophylactic measures implemented too late. And since there are thousands of ISPs that do not have to register, there is no known way of contacting them in time and forcing them to comply.

The regulations that the United States uses to govern the information and data industry may have inadvertently made a true "Internet kill switch" impossible.
The lack of regulation allowed the building of a patchwork system (ISPs, Internet backbone) that is extremely complex and not fully mapped. In the United States, there are also strong citizen and business protections: grievances may be redressed through the courts or administrative authorities, and a court order is needed for the government to shut off services. In addition to these fairly large roadblocks, there are human rights groups such as the ACLU, Amnesty International, and others. All of these reasons make implementing an Internet kill switch difficult.

Policy issues

The key policy issue is whether the United States has the constitutional right to restrict or cut off access to the Internet. The powers granted to the presidency starting with the Communications Act of 1934 seem adequate to deal with this threat, which is one of the major criticisms of legislation intended to regulate this question. The next most important question is whether the United States even needs this legislation, or whether it would chip away at individual liberties. The trade-offs are apparent: if the government can control information online, then it can limit access to information online. One of the biggest problems with the theory is what to classify as critical communications infrastructure and what to leave out.

Legislators also have to take into account the cost of shutting down the Internet, if doing so is even possible. The loss of the network for even a day could cost billions of dollars in lost revenue. The National Cybersecurity Center was set up to deal with these questions, to research threats, and to design and recommend prophylactic methods. In many ways, the integration of networked computer-mediated communication systems into users' business and personal lives means that potential cybersecurity threats are increasing, along with the problem of protecting a wide class of products, such as the Internet of things. Utility systems can be monitored and controlled remotely and no longer require the physical presence of a maintainer, so the scope of what an Internet kill switch could affect is growing steadily. The 2009 White House assessment stated that more work needed to be done on this issue, and the National Cybersecurity Center was created to handle security issues. It is not publicly known at this point whether the Center has a policy regarding asserting control of the national networks.

Zimbabwe

On January 15, 2019, Internet monitoring group NetBlocks reported the blocking of over a dozen social media platforms in Zimbabwe, followed by a wider Internet blackout amid protests over the price of fuel. The first three days of the disruption cost Zimbabwe's economy an estimated $17 million, as the government extended the disruption to a full shutdown to prevent the use of VPN circumvention tools by demonstrators.

See also

References

Safety switches
Internet censorship
Internet outages
42155
https://en.wikipedia.org/wiki/Amiga%20demos
Amiga demos
Amiga demos are demos created for the Commodore Amiga home computer. A "demo" is a demonstration of the multimedia capabilities of a computer (or, more to the point, a demonstration of the skill of the demo's constructors). There was intense rivalry during the 1990s among the best programmers, graphic artists and computer musicians to continually outdo each other's demos. Since the Amiga's hardware was more or less fixed (unlike today's PC industry, where arbitrary combinations of hardware can be put together), there was competition to test the limits of that hardware and perform theoretically "impossible" feats by refactoring the problem at hand.

In Europe the Amiga was the undisputed leader of mainstream multimedia computing in the late 1980s and early 1990s, though it was eventually overtaken by the PC architecture. Some Amiga demos, such as the RSI Megademo, Kefrens Megademo VIII or Crionics & The Silents' "Hardwired", are considered seminal works in the demo field. New Amiga demos are released even today, although the demo scene has firmly moved onto PC hardware.

Many Amiga game developers were active in the demo scene. The demo scene spearheaded development in multimedia programming techniques for the Amiga, such that it was de rigueur for the latest visual tricks, soundtrackers and 3D algorithms from the demo scene to end up being used in computer game development.

Demo software

Most demos were written in 68000 assembly language, although a few were written in C and other languages. To utilize the full performance of the hardware, Amiga demos were optimized and written entirely for one purpose in assembly, avoiding generic and portable code. Additional performance was achieved by using several co-processors in parallel with the 68000. These co-processors include the Copper (a co-processor for synchronizing custom chipset writes to the video display sync) and the Blitter (a chip capable of quickly moving blocks of graphical data from one position on the screen to another). To achieve the best speed, most demos disabled the operating system and addressed the hardware directly.

The first larger demos were released in 1987; one of them, "Tech Tech" by Sodan & Magician 42, released in November 1987, is considered a classic by many.

Eric Schwartz produced a series of animated demos that ran with MoviePlayer, an animation software package similar to Toon Boom. The animated demos drew heavily on the whimsy and graphic style of comic strips. Red Sector Incorporated (RSI) produced a piece of software called the RSI Demomaker, which allowed users to script their own demos, replete with scrolltext, vectorballs, plasma screens, etc.

Full demos range from under 128 KB to several megabytes. Several thousand demos have been produced in many countries. Among the most active demo countries were Denmark, Finland, Germany, Italy, the Netherlands, Norway, Sweden, the UK and Poland.

Intros

Smaller demos are often known as intros. They are typically limited to between 4 and 64 KB in size. Intros were originally used as tags by cracking groups on computer games and other software. The purpose of the intro was to advertise the cracking and distribution skill of a particular group. Later it developed into a stand-alone art form. Many demo and intro groups disassociated themselves from the cracking and copying scene, although the same people could still be involved in both.

Ripping

The Amiga thrived on public domain, freeware and other not-for-profit development.
The architecture provided no substantial mechanism for protecting software from inspection. In order to read the memory one simply performed a hot reset (which preserved the contents of RAM) and then booted to a dedicated floppy disk that could inspect and dump the memory's contents. It was therefore common for developers and hackers to "rip" music, graphics and code and then reuse it in their own productions. This led to intense competition in certain fields, for example in the development of sound tracking software and Tetris clones, with each group of developers trying to outdo the current state of the art. In fact, some demos even featured their source code as part of the executable to save hackers the trouble of disassembly, though it came strewn with incendiary comments for those who would seek to improve on it.

Amiga demo groups

Equinox
Fairlight / Virtual Dreams
Melon Dezign
Phenomena
Spaceballs
Tristar and Red Sector Incorporated

External links

Amiga Demoscene Archive
Amigascne FTP site
Kestra BitWorld Amiga Demoscene Database v2, database of over 31000 demos for the Amiga.
Scene.org
Classicamiga.com - Amiga Demoscene directory
Amigademos.org, an archive of Amiga demos

Demoscene
Amiga software
Assembly language software
9773
https://en.wikipedia.org/wiki/EBCDIC
EBCDIC
Extended Binary Coded Decimal Interchange Code (EBCDIC) is an eight-bit character encoding used mainly on IBM mainframe and IBM midrange computer operating systems. It descended from the code used with punched cards and the corresponding six-bit binary-coded decimal code used with most of IBM's computer peripherals of the late 1950s and early 1960s. It is supported by various non-IBM platforms, such as Fujitsu-Siemens' BS2000/OSD, OS-IV, MSP, and MSP-EX, the SDS Sigma series, Unisys VS/9, Unisys MCP and ICL VME.

History

EBCDIC was devised in 1963 and 1964 by IBM and was announced with the release of the IBM System/360 line of mainframe computers. It is an eight-bit character encoding, developed separately from the seven-bit ASCII encoding scheme. It was created to extend the existing Binary-Coded Decimal (BCD) Interchange Code, or BCDIC, which itself was devised as an efficient means of encoding the two zone and number punches on punched cards into six bits. The distinct encoding of 's' and 'S' (using position 2 instead of 1) was maintained from punched cards, where it was desirable not to have hole punches too close to each other, to ensure the integrity of the physical card.

While IBM was a chief proponent of the ASCII standardization committee, the company did not have time to prepare ASCII peripherals (such as card punch machines) to ship with its System/360 computers, so the company settled on EBCDIC. The System/360 became wildly successful, together with clones such as the RCA Spectra 70, ICL System 4, and Fujitsu FACOM, and thus so did EBCDIC.

All IBM mainframe and midrange peripherals and operating systems use EBCDIC as their inherent encoding (with toleration for ASCII; for example, ISPF in z/OS can browse and edit both EBCDIC- and ASCII-encoded files). Software and many hardware peripherals can translate to and from encodings, and modern mainframes (such as IBM Z) include processor instructions, at the hardware level, to accelerate translation between character sets.

There is an EBCDIC-oriented Unicode Transformation Format called UTF-EBCDIC proposed by the Unicode consortium, designed to allow easy updating of EBCDIC software to handle Unicode, but not intended to be used in open interchange environments. Even on systems with extensive EBCDIC support, it has not been popular. For example, z/OS supports Unicode (preferring UTF-16 specifically), but z/OS has only limited support for UTF-EBCDIC.

Not all IBM products use EBCDIC; IBM AIX, Linux on IBM Z, and Linux on Power all use ASCII.

Compatibility with ASCII

There were numerous difficulties in writing software that would work in both ASCII and EBCDIC. The gaps between letters made simple code that worked in ASCII fail on EBCDIC. For example, a loop such as for (c = 'A'; c <= 'Z'; ++c) putchar(c); would print the alphabet from A to Z if ASCII is used, but print 41 characters (including a number of unassigned ones) in EBCDIC. Fixing this required complicating the code with function calls, which programmers greatly resisted.

Sorting was also affected: EBCDIC put lowercase letters before uppercase letters and letters before numbers, exactly the opposite of ASCII.

Programming languages, file formats and network protocols designed for ASCII quickly made use of available punctuation marks (such as the curly braces { and }) that did not exist in EBCDIC, making translation to EBCDIC systems difficult. Conversely, EBCDIC had a few characters, such as the US cent sign (¢), that were used on IBM systems and could not be translated to ASCII.
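The usual portable fix for the alphabet loop above is to stop relying on contiguous letter code points altogether. A minimal C sketch of that approach, indexing an explicit alphabet string rather than incrementing a character, prints exactly 26 letters under either encoding:

    #include <stdio.h>

    int main(void)
    {
        /* 'A'..'Z' are contiguous in ASCII but span three separate
           runs in EBCDIC (A-I, J-R, S-Z), so the string itself, not
           character arithmetic, supplies the 26 letters. */
        static const char upper[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
        for (int i = 0; upper[i] != '\0'; i++)
            putchar(upper[i]);
        putchar('\n');
        return 0;
    }

When compiled on an EBCDIC system, the string literal is encoded in the execution character set, so the same source behaves identically on both families of systems.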
The most common newline convention used with EBCDIC is to use a NEL (NEXT LINE) code between lines. Converters to other encodings often replace NEL with LF or CR/LF, even if there is a NEL in the target encoding. This causes LF and NEL to translate to the same character, making them impossible to distinguish afterwards.

If seven-bit ASCII was used, there was an "unused" high bit in 8-bit bytes, and many pieces of software stored other information there. Software would also pack the seven bits and discard the eighth, for example packing five seven-bit ASCII characters into a 36-bit word. On the PDP-11, bytes with the high bit set were treated as negative numbers, behavior that was copied to C, causing unexpected problems if the high bit was set. These practices all made it difficult to switch from ASCII to the eight-bit EBCDIC (and also made it difficult to switch to eight-bit extended ASCII encodings).

Code page layout

There are hundreds of EBCDIC code pages based on the original EBCDIC character encoding; there are a variety of EBCDIC code pages intended for use in different parts of the world, including code pages for non-Latin scripts such as Chinese, Japanese (e.g., EBCDIC 930, JEF, and KEIS), Korean, and Greek (EBCDIC 875). There is also a huge number of variations with the letters swapped around for no discernible reason.

The table below shows the "invariant subset" of EBCDIC, which are characters that should have the same assignments on all EBCDIC code pages that use the Latin alphabet. (This includes most of the ISO/IEC 646 invariant repertoire, except the exclamation mark.) It also shows (in gray) missing ASCII and EBCDIC punctuation, located where they are in Code Page 37 (one of the code page variants of EBCDIC). The blank cells are filled with region-specific characters in the variants, but the characters in gray are often swapped around or replaced as well.

Definitions of non-ASCII EBCDIC controls

Following are the definitions of EBCDIC control characters which either do not map onto the ASCII control characters, or have additional uses. When mapped to Unicode, these are mostly mapped to C1 control character codepoints in a manner specified by IBM's Character Data Representation Architecture (CDRA).

Although the default mapping of New Line (NL) corresponds to the ISO/IEC 6429 Next Line (NEL) character (the behaviour of which is also specified, but not required, in Unicode Annex 14), most of these C1-mapped controls match neither those in the ISO/IEC 6429 C1 set, nor those in other registered C1 control sets such as ISO 6630. Although this effectively makes the non-ASCII EBCDIC controls a unique C1 control set, they are not among the C1 control sets registered in the ISO-IR registry, meaning that they do not have an assigned control set designation sequence (as specified by ISO/IEC 2022, and optionally permitted in ISO/IEC 10646 (Unicode)). Besides U+0085 (Next Line), the Unicode Standard does not prescribe an interpretation of C1 control characters, leaving their interpretation to higher-level protocols (it suggests, but does not require, their ISO/IEC 6429 interpretations in the absence of use for other purposes), so this mapping is permissible in, but not specified by, Unicode.

Code pages with Latin-1 character sets

The following code pages have the full Latin-1 character set (ISO/IEC 8859-1). The first column gives the original code page number.
The second column gives the number of the code page updated with the euro sign (€) replacing the universal currency sign (¤) (or, in the case of EBCDIC 924, with the set changed to match ISO 8859-15).

Criticism and humor

Open-source software advocate and software developer Eric S. Raymond writes in his Jargon File that EBCDIC was loathed by hackers, by which he meant members of a subculture of enthusiastic programmers. The Jargon File 4.4.7 gives the following definition:

EBCDIC's design was also the source of many jokes. One such joke, found in the Unix fortune file of 4.3BSD Tahoe (1990), went:

References to the EBCDIC character set are made in the 1979 computer game series Zork. In the "Machine Room" in Zork II, EBCDIC is used to imply an incomprehensible language:

In 2021, it became public that a Belgian bank was still using EBCDIC internally in 2019. This came to attention because a customer insisted that the correct spelling of his surname included an umlaut, which the bank omitted. The customer filed a complaint citing the guarantee in the General Data Protection Regulation of the right to timely "rectification of inaccurate personal data." The bank argued in part that it could not comply because its computer system was only compatible with EBCDIC, which does not support umlauted letters. The appeals court ruled in favor of the customer.

See also

UTF-EBCDIC

References

External links

Contains IBM's official information on code pages and character sets.
Code page 37
Code page 1047
Host Code Page Reference from IBM, shows code charts for several single-byte EBCDIC pages.
From ICU Converter Explorer; contains more information about EBCDIC derived from IBM's CDRA, including DBCS EBCDIC (Double Byte Character Set EBCDIC).
ICU Charset Mapping Tables; contains computer-readable Unicode mapping tables for EBCDIC and many other character sets.
EBCDIC character list, including decimal and hex values, symbolic name, and character/function.
EBCDIC code pages with Latin-1 charset (JavaScript).

IBM mainframe operating systems
285645
https://en.wikipedia.org/wiki/Cyberport
Cyberport
Cyberport is a business park in Southern District, Hong Kong, consisting of four office buildings, a hotel, and a retail entertainment complex. It describes itself as a digital technology community with over 1,650 digital and technology companies, including established enterprises such as Microsoft, Lenovo, and ZhongAn, and homegrown companies such as Gogovan, Klook, SleekFlow, GRWTH and Bowtie. Cyberport is managed by Hong Kong Cyberport Management Company Limited, which is wholly owned by the Hong Kong SAR Government.

In operation since 2004, Cyberport focuses on fintech, smart living, digital entertainment and esports, AI and big data, blockchain, and cybersecurity. Cyberport is currently home to the largest fintech community in Hong Kong, with more than 350 fintech companies. Bowtie, a member of the Cyberport community, was authorised by the Hong Kong Insurance Authority to become Hong Kong's first virtual insurer under the Fast Track system in November 2018. ZhongAn and WeLab, also members of Cyberport, were two of the initial eight financial institutions and firms to be granted a virtual bank licence.

To promote the development of esports in Hong Kong, Cyberport opened Hong Kong's largest professional-grade esports venue on 16 July 2019, located in the Cyberport Arcade. It also launched the Esports Industry Facilitation Scheme and the Esports Internship Scheme: the former offers cash grants to support industry activities, while the latter provides a cash subsidy for internships in the esports industry.

As of September 2019, the Cyberport Incubation Programme had incubated and funded over 600 technology start-up companies since its inception in 2005. The HK$200 million Cyberport Macro Fund, announced in 2016 to support local start-ups after the seed stage but generally before or around the Series A stage of funding, had, as of October 2019, invested about HK$106 million in 14 start-ups.

Cyberport currently comprises four phases, providing a total of 119,000 square metres of office space. The occupancy rate of Cyberport's offices reached 97% in the fourth quarter of 2018. The number of applications for the Cyberport Incubation Programme has increased substantially, from fewer than 100 per year before 2011 to over 600 per year in recent years.

The Cyberport project has courted controversy since its inception, both because the government bypassed the open-tender process in awarding the project to real estate developer Richard Li Tzar-Kai and because of its reliance on "ancillary residential" revenue.

Project background

In March 1999, the Hong Kong government announced its intention to develop a "Cyberport" to help local businesses capitalise on the rapid growth of the Internet. The government called it a development where information technology and multimedia would be nurtured so that the future demands of these industries could be met. According to a press release by the Commerce and Economic Development Bureau, only one-third of the site would be residential, the sale of which would help finance the Cyberport development. Cyberport is billed as home to an incubator for ICT start-ups, providing office space, financial aid, training, a micro fund and network access to the investment community.

The Hong Kong government inked a partnership deal with the Pacific Century Group (PCG) to develop a site with an open sea view at Telegraph Bay in Pok Fu Lam, Hong Kong Island, at a total cost of HK$13 billion.
It was announced as part of the 1999 budget by then-Financial Secretary Donald Tsang. It was also hoped that this development would help the HKSAR's economy rebound after the 1997 Asian financial crisis, and bring a "strategic cluster of information technology and services companies situated in a world-class setting". The "strategic telecommunication node" was to be formed thanks to its close proximity to the proposed "Teleport" in Chung Hom Kok. Touted benefits included "a range of shared facilities for tenants, including a multimedia-based network, telecommunication links, media laboratory, cyber library and other information technology and services support facilities. There will also be educational, entertainment and recreational facilities related to information technology and services for local visitors and tourists".

As part of the deal, PCG would construct an office complex with a shopping mall and a 173-room hotel that would be put out to management. Title to these properties would be transferred to the government at zero cost, while PCG received land for residential housing in exchange and would reap 64.5 percent of the profits from its sale. Construction of the Cyberport portion, consisting of four office buildings, The Arcade and Le Meridien Cyberport Hotel, was completed in phases between 2002 and 2004. The residential developments, consisting of approximately 2,800 units and houses, were completed in phases between 2004 and 2008.

Controversy

Awarding the project without formal open tender

The government's decision to grant the project to Pacific Century Group (PCG), controlled by Richard Li, son of Hong Kong's wealthiest man Li Ka-shing, generated much controversy. Awarding the project to PCG without a formal open tender attracted criticism for lack of transparency; other interested developers complained of being sidelined. Three private and wholly owned companies, namely Hong Kong Cyberport Development Holdings Limited, Hong Kong Cyberport Management Company Limited and Hong Kong Cyberport (Ancillary Development) Limited (collectively referred to as the "Cyberport companies"), were set up under the Financial Secretary Incorporated (FSI) to oversee implementation of the project. The project was criticised as unnecessary government intervention in the real estate sector.

Residential project disguised as tech hub

According to critics, Cyberport was a residential project in disguise, as it arguably failed in its mission to become a high-technology hub for the city. Eurasia Review suggested that the government land was injected into the project below value. The overall rationale of the project has been questioned by its critics, as details have emerged from the planning and budgeting for the project indicating that 75% of the area developed is residential, and that office space for the technology companies was to be only about 17% of the total. Also, the "shared facilities" made up only part of a small block which includes houses and apartments.

Low occupancy of rental office towers

The project had the reputation of a "ghost town", as the government-owned portion suffered low occupancy. Fifteen companies signed letters of intent with the developer, including Hewlett-Packard, IBM, Microsoft and Yahoo, but only three moved in at the initial opening, due to a technology slump. The government rejected accusations of favouritism, arguing that PCG's presence as an anchor tenant would be a marketing plus to prestigious international technology companies.
In addition, tendering was bypassed ostensibly to shorten the sensitive time frame and bring forward the economic benefits of the project. PCG later hived off the residential property interests into a shell company separate from the telecoms operation, so that the shell company would receive the residential housing sales revenues; it was also accorded the right of first refusal to redevelop the sites of 60 existing telephone exchanges of PCCW, the telecoms operator. In October 2004, David Webb cited a lack of transparency in the government's business dealings and demanded that audited financial accounts and directors' reports for three companies related to the project be released under the non-statutory Code on Access to Information.

Housing

Bel-Air is a luxury residential development in Cyberport. The development is split into six phases; phases 1 and 2 are referred to as Residence Bel-Air, and phase 3 is referred to as Bel-Air on the Peak. Phases 1 and 2 each have a clubhouse and seven blocks that are about 48 floors tall. Floors 40 and above hold flats that combine the area of two flats into one, all of which feature sky gardens with ocean views. Enumeration of "Block 4" and of all 4th floors in each block has been avoided for superstitious reasons; however, the management company neglected to omit 4 when naming the construction phases.

Each floor of Bel-Air on the Peak has two or three apartments: two larger units, 'A' and 'C', and a smaller two-bedroom unit, 'B'. The clubhouse of Bel-Air on the Peak is significantly newer, with more artistic features; it maintains an indoor pool, gym, restaurant, snooker room and children's game room. Bel-Air has two clubhouses, the Bay Wing and the Peak Wing, which feature a spa, indoor and outdoor swimming pools, a games room, gym, children's playroom, restaurant and personal cinema. Each floor of Residence Bel-Air has two flats with three bedrooms, one kitchen and a balcony. Additionally, there are single-family homes near Residence Bel-Air.

Gallery

See also

Hong Kong Free Press – a digital news outlet operating from Cyberport
Jolla – R&D centre located in Cyberport
Sailfish Alliance – Cyberport Hong Kong is one of the partners in the Sailfish Alliance.

References

External links

The Arcade official site
Bel-Air Residence official website
ReUbird official website
Residence Bel-Air property
Void official website
Hong Kong Florist

Internet in Hong Kong
Office buildings in Hong Kong
Science and technology in Hong Kong
Business parks
Telegraph Bay
Hongkong Land
Bel-Air
Restricted areas of Hong Kong
red public minibus
Intelligent Community Forum
Esports venues in China
4904928
https://en.wikipedia.org/wiki/Intel%20vPro
Intel vPro
Intel vPro technology is an umbrella marketing term used by Intel for a large collection of computer hardware technologies, including VT-x, VT-d, Trusted Execution Technology (TXT), and Intel Active Management Technology (AMT). When the vPro brand was launched (circa 2007), it was identified primarily with AMT, and thus some journalists still consider AMT to be the essence of vPro.

vPro features

Intel vPro is a brand name for a set of PC hardware features. PCs that support vPro have a vPro-enabled processor, a vPro-enabled chipset, and a vPro-enabled BIOS as their main elements. A vPro PC includes:

Multi-core, multi-threaded Xeon or Core processors.
Intel Active Management Technology (Intel AMT), a set of hardware-based features targeted at businesses, which allows remote access to the PC for management and security tasks even when the OS is down or PC power is off. Note that AMT is not the same as Intel vPro; AMT is only one element of a vPro PC.
Remote configuration technology for AMT, with certificate-based security. Remote configuration can be performed on "bare-bones" systems, before the OS and/or software management agents are installed.
Wired and wireless (laptop) network connection.
Intel Trusted Execution Technology (Intel TXT), which verifies a launch environment and establishes the root of trust, which in turn allows software to build a chain of trust for virtualized environments. Intel TXT also protects secrets during power transitions for both orderly and disorderly shutdowns (a traditionally vulnerable period for security credentials).
Support for IEEE 802.1X, Cisco Self Defending Network (SDN), and Microsoft Network Access Protection (NAP) in laptops, and support for 802.1x and Cisco SDN in desktop PCs. Support for these security technologies allows Intel vPro to store the security posture of a PC so that the network can authenticate the system before the OS and applications load, and before the PC is allowed access to the network.
Intel Virtualization Technology, including Intel VT-x for CPU and memory, and Intel VT-d for I/O, to support virtualized environments. Intel VT-x accelerates hardware virtualization, which enables isolated memory regions to be created for running critical applications in hardware virtual machines in order to enhance the integrity of the running application and the confidentiality of sensitive data. Intel VT-d exposes protected virtual memory address spaces to DMA peripherals attached to the computer via DMA buses, mitigating the threat posed by malicious peripherals.
An execute-disable bit that, when supported by the OS, can help prevent some types of buffer overflow attacks.

Remote management

Intel AMT is the set of management and security features built into vPro PCs that makes it easier for a sys-admin to monitor, maintain, secure, and service PCs. Intel AMT (the management technology) is sometimes mistaken for Intel vPro (the PC "platform"), because AMT is one of the most visible technologies of an Intel vPro-based PC.
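One of the AMT capabilities enumerated below is encrypted remote power-up via wake-on-LAN (WOL). As a rough illustration of the standard WOL mechanism that such features build on (and not of Intel's encrypted AMT variant), the following C sketch constructs a conventional "magic packet", six 0xFF bytes followed by sixteen copies of the target MAC address, and broadcasts it over UDP. The MAC address shown is hypothetical:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Broadcast a standard wake-on-LAN magic packet:
       6 bytes of 0xFF followed by 16 repetitions of the target MAC. */
    static int send_wol(const unsigned char mac[6])
    {
        unsigned char pkt[6 + 16 * 6];
        memset(pkt, 0xFF, 6);
        for (int i = 0; i < 16; i++)
            memcpy(pkt + 6 + i * 6, mac, 6);

        int s = socket(AF_INET, SOCK_DGRAM, 0);
        if (s < 0) return -1;

        int on = 1;  /* sending to a broadcast address requires SO_BROADCAST */
        setsockopt(s, SOL_SOCKET, SO_BROADCAST, &on, sizeof on);

        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9);  /* "discard" port, used by convention */
        dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);

        ssize_t n = sendto(s, pkt, sizeof pkt, 0,
                           (struct sockaddr *)&dst, sizeof dst);
        close(s);
        return n == (ssize_t)sizeof pkt ? 0 : -1;
    }

    int main(void)
    {
        /* Hypothetical MAC address of the machine to wake. */
        const unsigned char mac[6] = {0x00, 0x1B, 0x21, 0xAA, 0xBB, 0xCC};
        return send_wol(mac) == 0 ? 0 : 1;
    }

Per the feature list below, AMT layers encryption and authentication over this style of power-on signalling; the sketch shows only the underlying unauthenticated packet format that plain WOL uses.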
Intel AMT includes:

Encrypted remote power up/down/reset (via wake-on-LAN, or WOL)
Remote/redirected boot (via integrated device electronics redirect, or IDE-R)
Console redirection (via serial over LAN, or SOL)
Preboot access to BIOS settings
Programmable filtering for inbound and outbound network traffic
Agent presence checking
Out-of-band policy-based alerting
Access to system information, such as the PC's universally unique identifier (UUID), hardware asset information, persistent event logs, and other information that is stored in dedicated memory (not on the hard drive), where it is accessible even if the OS is down or the PC is powered off.

Hardware-based management has been available in the past, but it has been limited to auto-configuration (of computers that request it) using DHCP or BOOTP for dynamic IP address allocation and diskless workstations, as well as wake-on-LAN for remotely powering on systems.

VNC-based KVM remote control

Starting with vPro with AMT 6.0, PCs with i5 or i7 processors and embedded Intel graphics contain an Intel proprietary embedded VNC server. An administrator can connect out-of-band using dedicated VNC-compatible viewer technology and have full KVM (keyboard, video, mouse) capability throughout the power cycle, including uninterrupted control of the desktop while an operating system loads. Clients such as VNC Viewer Plus from RealVNC also provide additional functionality that might make it easier to perform (and watch) certain Intel AMT operations, such as powering the computer off and on, configuring the BIOS, and mounting a remote image (IDE-R). Not all i5 and i7 processors with vPro support the KVM capability; this depends on the OEM's BIOS settings as well as on whether a discrete graphics card is present, since only Intel integrated HD graphics support the KVM capability.

Wireless communication

Intel vPro supports encrypted communication over wired and wireless LANs for all remote management features for PCs inside the corporate firewall. Intel vPro supports encrypted communication for some remote management features for wired and wireless LAN PCs outside the corporate firewall.

vPro laptop wireless communication

Laptops with vPro include a gigabit network connection and support IEEE 802.11 a/g/n wireless protocols.

AMT wireless communication

Intel vPro PCs support wireless communication to the AMT features. For wireless laptops on battery power, communication with AMT features can occur when the system is awake and connected to the corporate network. This communication is available even if the OS is down or management agents are missing. AMT out-of-band communication and some AMT features are available for wireless or wired laptops connected to the corporate network over a host OS-based virtual private network (VPN) when the laptops are awake and working properly.

A wireless connection operates at two levels: the wireless network interface (WLAN) and the interface driver executing on the platform host. The network interface manages the RF communications connection. If the user turns off the wireless transmitter/receiver using either a hardware or software switch, Intel AMT cannot use the wireless interface under any conditions until the user turns the wireless transmitter/receiver back on.

Intel AMT Release 2.5/2.6 can send and receive management traffic via the WLAN only when the platform is in the S0 power state (the computer is on and running). It does not receive wireless traffic when the host is asleep or off.
If the power state permits it, Intel AMT Release 2.5/2.6 can continue to send and receive out-of-band traffic when the platform is in an Sx state, but only via a wired LAN connection, if one exists. Release 4.0 and later releases support wireless out-of-band manageability in Sx states, depending on the power setting and other configuration parameters. Release 7.0 supports wireless manageability on desktop platforms.

When a wireless connection is established on a host platform, it is based on a wireless profile that sets up names, passwords and other security elements used to authenticate the platform to the wireless access point. The user or the IT organization defines one or more profiles using a tool such as the Intel PROSet/Wireless Software. In releases 2.5/2.6, Intel AMT must have a corresponding wireless profile to receive out-of-band traffic over the same wireless link. The network interface API allows one or more wireless profiles to be defined using the same parameters as the Intel PROSet/Wireless Software (see Wireless Profile Parameters). On power-up of the host, Intel AMT communicates with the wireless LAN driver on the host. When the driver and Intel AMT find matching profiles, the driver routes traffic addressed to the Intel AMT device for manageability processing. With certain limitations, Intel AMT Releases 4.0/4.1 can send and receive out-of-band traffic without an Intel AMT-configured wireless profile, as long as the host driver is active and the platform is inside the enterprise.

In release 4.2, and on release 6.0 wireless platforms, the WLAN is enabled by default both before and after configuration. That means that it is possible to configure Intel AMT over the WLAN, as long as the host WLAN driver has an active connection. Intel AMT synchronizes to the active host profile; it assumes that a configuration server configures a wireless profile that Intel AMT uses in power states other than S0. When there is a problem with the wireless driver and the host is still powered up (in an S0 power state only), Intel AMT can continue to receive out-of-band manageability traffic directly from the wireless network interface.

For Intel AMT to work with a wireless LAN, it must share IP addresses with the host. This requires the presence of a DHCP server to allocate IP addresses, and Intel AMT must be configured to use DHCP.

Encrypted communication while roaming

Intel vPro PCs support encrypted communication while roaming. vPro PCs of version 4.0 or higher support secure mobile communications by establishing a secure tunnel for encrypted AMT communication with the managed service provider when roaming (operating on an open, wired LAN outside the corporate firewall). Secure communication with AMT can be established even if the laptop is powered down or the OS is disabled. The AMT encrypted communication tunnel is designed to allow sys-admins to access a laptop or desktop PC at satellite offices where there is no on-site proxy server or management server appliance.

Secure communications outside the corporate firewall depend on adding a new element, a management presence server (Intel calls this a "vPro-enabled gateway"), to the network infrastructure. This requires integration with network switch manufacturers, firewall vendors, and vendors who design management consoles to create infrastructure that supports encrypted roaming communication.
So although encrypted roaming communication is enabled as a feature in vPro PCs of version 4.0 and higher, the feature will not be fully usable until the infrastructure is in place and functional.

vPro security

vPro security technologies and methodologies are designed into the PC's chipset and other system hardware. During deployment of vPro PCs, security credentials, keys, and other critical information are stored in protected memory (not on the hard disk drive) and erased when no longer needed.

Security and privacy concerns

According to Intel, it is possible to disable AMT through the BIOS settings; however, there is apparently no way for most users to detect outside access to their PC via the vPro hardware-based technology. Moreover, Sandy Bridge and future chips will have "... the ability to remotely kill and restore a lost or stolen PC via 3G ... if that laptop has a 3G connection".

Many vPro features, including AMT, are implemented in the Intel Management Engine (ME), a distinct processor in the chipset running MINIX 3, which has been found to have numerous security vulnerabilities. Unlike AMT, the Management Engine generally has no official, documented way to be disabled; it is always on unless it was not enabled at all by the OEM.

Security features

Intel vPro supports industry-standard methodologies and protocols, as well as other vendors' security features:

Intel Trusted Execution Technology (Intel TXT)
Industry-standard Trusted Platform Module (TPM)
Intel Platform Trust Technology (Intel PTT), a TPM 2.0 firmware TPM (fTPM) introduced with Skylake
Support for IEEE 802.1x, Preboot Execution Environment (PXE), and Cisco Self Defending Network (SDN) in desktop PCs, and additionally Microsoft Network Access Protection (NAP) in laptops
Execute Disable Bit
Intel Virtualization Technology (Intel VT-x and VT-d)
Intel VMCS (Virtual Machine Control Structure) Shadowing
Intel Data Protection Technology
Intel Identity Protection Technology
Intel Secure Key (RDRAND)
Intel Anti-Theft Technology
Intel Boot Guard
Intel OS Guard
Intel Active Management Technology (Intel AMT)
Intel Stable Image Platform Program (SIPP)
Intel Small Business Advantage (Intel SBA)

Intel Boot Guard

Intel Boot Guard is a processor feature that prevents the computer from running firmware (UEFI) images not released by the system manufacturer (OEM or ODM). When turned on, the processor verifies a digital signature contained in the firmware image before executing it, using the public key of a keypair whose public half is fused into the system's Platform Controller Hub (PCH) by the system manufacturer (not by Intel). As a result, Intel Boot Guard, when activated, makes it impossible for end users to install replacement firmware (such as Coreboot) or a modified BIOS.

Technologies and methodologies

Intel vPro uses several industry-standard security technologies and methodologies to secure the remote vPro communication channel. These technologies and methodologies also improve security for accessing the PC's critical system data, BIOS settings, Intel AMT management features, and other sensitive features or data, and they protect security credentials and other critical information during deployment (setup and configuration of Intel AMT) and vPro use:

Transport Layer Security (TLS) protocol, including pre-shared key TLS (TLS-PSK), to secure communications over the out-of-band network interface. The TLS implementation uses AES 128-bit encryption and RSA keys with modulus lengths of 2048 bits.
HTTP digest authentication protocol as defined in RFC 2617. The management console authenticates IT administrators who manage PCs with Intel AMT.
Single sign-on to Intel AMT with Microsoft Windows domain authentication, based on the Microsoft Active Directory and Kerberos protocols.
A pseudorandom number generator (PRNG) in the firmware of the AMT PC, which generates high-quality session keys for secure communication.
Only digitally signed firmware images (signed by Intel) are permitted to load and execute.
Tamper-resistant and access-controlled storage of critical management data, via a protected, persistent (nonvolatile) data store (a memory area not on the hard drive) in the Intel AMT hardware.
Access control lists for Intel AMT realms and other management functions.

vPro hardware requirements

The first release of Intel vPro was built with an Intel Core 2 Duo processor. The current versions of Intel vPro are built into systems with 10 nm Intel 10th Generation Core i5 and i7 processors. PCs with Intel vPro require specific chipsets. Intel vPro releases are usually identified by their AMT version.

Laptop PC requirements

Laptops with Intel vPro require:

For Intel AMT release 9.0 (4th Generation Intel Core i5 and Core i7):
22 nm Intel 4th Generation Core i7 Mobile processors
22 nm Intel 4th Generation Core i5 Mobile processors
Mobile QM87 chipsets

For Intel AMT release 8.0 (3rd Generation Intel Core i5 and Core i7):
32 & 45 nm Intel 3rd Generation Core i7 Mobile processors
32 & 45 nm Intel 3rd Generation Core i5 Mobile processors
Mobile QM77 & Q77 chipsets

For Intel AMT release 4.1 (Intel Centrino 2 with vPro technology):
45 nm Intel Core2 Duo processor T, P sequence 8400, 8600, 9400, 9500, 9600; small form factor P, L, U sequence 9300 and 9400, and Quad processor Q9100
Mobile 45 nm Intel GS45, GM47, GM45 and PM45 Express chipsets (Montevina with Intel Anti-Theft Technology) with 1066 FSB, 6 MB L2 cache, ICH10M-enhanced

For Intel AMT release 4.0 (Intel Centrino 2 with vPro technology):
45 nm Intel Core2 Duo processor T, P sequence 8400, 8600, 9400, 9500, 9600; small form factor P, L, U sequence 9300 and 9400, and Quad processor Q9100
Mobile 45 nm Intel GS45, GM47, GM45 and PM45 Express chipsets (Montevina) with 1066 FSB, 6 MB L2 cache, ICH9M-enhanced

For Intel AMT release 2.5 and 2.6 (Intel Centrino with vPro technology):
Intel Core2 Duo processor T, L, and U 7000 sequence3, 45 nm Intel Core2 Duo processor T8000 and T9000
Mobile Intel 965 (Broadwater-Q) Express chipset with ICH8M-enhanced

Note that AMT release 2.5 for wired/wireless laptops and AMT release 3.0 for desktop PCs are concurrent releases.

Desktop PC requirements

Desktop PCs with vPro (called "Intel Core 2 with vPro technology") require:

For AMT release 5.0:
Intel Core2 Duo processor E8600, E8500, and E8400; 45 nm Intel Core2 Quad processor Q9650, Q9550, and Q9400
Intel Q45 (Eaglelake-Q) Express chipset with ICH10DO

For AMT release 3.0, 3.1, and 3.2:
Intel Core2 Duo processor E6550, E6750, and E6850; 45 nm Intel Core2 Duo processor E8500, E8400, E8300 and E8200; 45 nm Intel Core2 Quad processor Q9550, Q9450 and Q9300
Intel Q35 (Bearlake-Q) Express chipset with ICH9DO
Note that AMT release 2.5 for wired/wireless laptops and AMT release 3.0 for desktop PCs are concurrent releases.

For AMT release 2.0, 2.1 and 2.2:
Intel Core 2 Duo processor E6300, E6400, E6600, and E6700
Intel Q965 (Averill) Express chipset with ICH8DO

vPro, AMT, Core i relationships

There are numerous Intel brands.
However, the key differences between vPro (an umbrella marketing term), AMT (a technology under the vPro brand), Intel Core i5 and Intel Core i7 (a branding of a package of technologies), and Core i5 and Core i7 (processors) are as follows:

The Core i7, the first model of the i series, was launched in 2008, and the less powerful i5 and i3 models were introduced in 2009 and 2010, respectively. The microarchitecture of the Core i series was code-named Nehalem, and the second generation of the line was code-named Sandy Bridge.

Intel Centrino 2 was a branding of a package of technologies that included Wi-Fi and, originally, the Intel Core 2 Duo. The Intel Centrino 2 brand was applied to mobile PCs, such as laptops and other small devices. Core 2 and Centrino 2 have evolved to use Intel's latest 45 nm manufacturing processes, have multi-core processing, and are designed for multithreading.

Intel vPro is a brand name for a set of Intel technology features that can be built into the hardware of a laptop or desktop PC. The set of technologies is targeted at businesses, not consumers. A PC with the vPro brand often includes Intel AMT, Intel Virtualization Technology (Intel VT), Intel Trusted Execution Technology (Intel TXT), a gigabit network connection, and so on. There may be a PC with a Core 2 processor without vPro features built in; however, vPro features require a PC with at least a Core 2 processor. The technologies of current versions of vPro are built into PCs with some versions of Core 2 Duo or Core 2 Quad processors (45 nm), and more recently with some versions of Core i5 and Core i7 processors.

Intel AMT is part of the Intel Management Engine that is built into PCs with the Intel vPro brand. Intel AMT is a set of remote management and security hardware features that let a sys-admin with AMT security privileges access system information and perform specific remote operations on the PC. These operations include remote power up/down (via wake on LAN), remote/redirected boot (via integrated device electronics redirect, or IDE-R), console redirection (via serial over LAN), and other remote management and security features.

See also

Intel Management Engine
Desktop and mobile Architecture for System Hardware (DASH)
Active Management Technology (AMT)
Intel AMT versions
Trusted Execution Technology (TXT)
TrustZone
Trusted Platform Module (TPM)
Threat model
Intel Core 2
Centrino 2
Centrino
Intel Viiv
Intel CIRA (Client-Initiated Remote Access)

Notes

References

External links

Intel ARK
Intel Business Client Developer's Zone
Intel AMT SDK 8.1 Reference Guide
Blog: Intel Manageability Firmware Recovery Agent
Forum Support: Intel Business Client Software Development Forum
Resource to help install (activate) vPro systems
Intel Centrino 2 Explained (CNET)
vPro on Intel.com
Intel vPro is everything we said it would be
Intel vPro to Boost Security – Energy Efficiency – Cost Reduction
Blogcast of the vPro Launch
Intel(r) vPro(TM) Expert Center
PRO TOOL WIKI
ROI PODcast

Vpro
9987415
https://en.wikipedia.org/wiki/Cyme%20%28Aeolis%29
Cyme (Aeolis)
Cyme (Cyme of Aeolis; modern Turkish Nemrut Limani) or Cumae was an Aeolian city in Aeolis (Asia Minor) close to the kingdom of Lydia. The Aeolians regarded Cyme as the largest and most important of their twelve cities, which were located on the coastline of Asia Minor (modern-day Turkey). As a result of their direct access to the sea, unlike most non-landlocked settlements of the ancient world, trade is believed to have prospered.

Location

Both the author of the 'Life of Homer' and Strabo, the ancient geographer, locate Cyme north of the Hermus river on the Asia Minor coastline, at modern-day "Nemrut Limanı" (in Turkish). After crossing the Hyllus, the distance from Larissa to Cyme was 70 stadia, and from Cyme to Myrina was 40 stadia. (Strabo: 622) Archaeological finds such as coins also refer to a river, believed to be the Hyllus.

History

Early history

Little is known about the foundation of the city to supplement the traditional founding legend. Kyme was the largest of the Aiolian cities. According to legend, it was founded by the Amazon Kyme. The Amazons were a mythical tribe of warlike women from Pontos (or variously from Kolchis, Thrace or Skythia), who fought against Greek heroes. Ancient coins from Cyme often depict the head of Kyme wearing a taenia, with the reverse featuring a prancing horse - probably in allusion to the prosperous equine industry of the region.

Alternatively, settlers from mainland Greece (most likely Euboea) migrated across the Aegean Sea during the Late Bronze Age as waves of Dorian-speaking invaders brought an end to the once mighty Mycenaean civilization some time around 1050 BCE. During the Late Bronze Age and early Greek Dark Ages, the dialect of Cyme and the surrounding region of Aeolis, like that of the neighboring island of Lesbos, closely resembled the local dialect of Thessaly and Boeotia in continental Greece. The city was founded after the Trojan War by Greeks from Locris, central Greece, after they had first captured the Pelasgian citadel of Larisa near the river Hermus. Cyme prospered and developed into a regional metropolis and founded about thirty towns and settlements in Aeolis. The Cymeans were later ridiculed as a people who had for three hundred years lived on the coast and not once exacted harbor taxes on ships making port. Hesiod's father is said to have started his journey across the Aegean from Cyme.

The cities of southern Aeolis in the region surrounding Cyme occupied a good belt of land with rough mountains in the background, yet Cyme, like other colonies along the coast, did not trade with the native Anatolians further inland, who had occupied Asia Minor for thousands of years. Cyme consequently played no significant role in the history of western Asia Minor, prompting the historian Ephorus (400-330 BCE), himself a native of the city, to comment repeatedly in his narrative of Greek history that while the events he wrote about were taking place, his fellow Cymeans had for centuries sat idly by and kept the peace. He may, however, have been unaware of the significance of the city's links to Phrygia and Lydia through two Greek princesses, Hermodike I and Hermodike II, and their role in popularising the written Greek alphabet and coined money, respectively. Tradition recounts that a daughter of a certain Agamemnon, king of Aeolian Cyme, married a Phrygian king called Midas.
This link may have facilitated the Greeks "borrowing" their alphabet from the Phrygians, because the Phrygian letter shapes are closest to the inscriptions from Aeolis. A passage in Pollux speaks about those who invented the process of coining money, mentioning Pheidon and Demodike from Cyme, wife of the Phrygian king, Midas, and daughter of King Agamemnon of Cyme.

Politically, Cyme is assumed to have started as a settler democracy, following in the tradition of other established colonies in the region, although Aristotle concluded that by the 7th and 6th centuries BCE the once great democracies in the Greek world (including Cyme) evolved not into oligarchies, as was the usual pattern, but into tyrannies.

5th century BC

By the 5th century BC, Cyme was one of the 12 established Ionian colonies in Aeolis. Herodotus (4.138) mentions that one of the esteemed voters deciding whether or not to support Miltiades the Athenian in his plan to liberate the Ionian Coast from Persian rule was Aristagoras of Cyme. Aristagoras campaigned on the side of Histiaeus the Milesian, with the tyrants Strattis of Chios, Aeaces of Samos and Laodamas of Phocaea, in opposing such an initiative, arguing instead that each tyrant along the Ionian Coast owed his position to Darius, King of Persia, and that liberating their own cities would encourage democracy over tyranny.

Cyme eventually came under the control of the Persian Empire following the collapse of the Lydian Kingdom at the hands of Cyrus the Great. Herodotus is the principal source for this period in Greek history and paid a great deal of attention to events taking place in Ionia and Aeolis. When Pactyes, the Lydian general, sought refuge in Cyme from the Persians, the citizens faced a dilemma. As Herodotus records, they consulted the Greek god Apollo (supporting the claim that they were of Ionic rather than eastern culture), whose oracle, after much confusion, answered that Pactyes should be handed over. However, a native of Cyme questioned Apollo's word and went back to the oracle himself to confirm whether Apollo indeed wanted the Cymeans to surrender Pactyes. Not wanting to come to grief over the surrender of Pactyes, nor wanting the ill effects of a Persian siege (which suggests that Cyme was a fortified city capable of self-defence), they avoided dealing with the Persians by simply sending him off to Mytilene on the island of Lesbos, not far from their city.

In his Histories, Herodotus makes reference to Cyme (or Phriconis) as being one of the cities in which the rebel Lydian governor Pactyes sought refuge, following his attempted rebellion against the Persian King Cyrus the Great, c. 546 BC:

Pactyes, when he learnt that an army was already on his tracks and near, took fright and fled to Cyme, and Mazares the Mede marched to Sardis with a detachment of Cyrus' troops. Finding Pactyes and his supporters gone, the first thing he did was to compel the Lydians to carry out Cyrus' orders — as a result of which they altered from that moment their whole way of life; he then sent a demand to Cyme that Pactyes should be surrendered, and the men of the town decided to consult the oracle at Branchidae as to whether they should obey ... The messengers returned home to report, and the citizens of Cyme were prepared in consequence to give up the wanted man.

After the Persian naval defeat at Salamis, Xerxes moored the surviving ships at Cyme. Before 480 BC, Cyme had been the principal naval base for the Royal Fleet.
Later accounts of Cyme's involvement in the Ionian Revolt, which triggered the Persian Wars, confirm the city's allegiance to the Ionian Greek cause. During this time, Herodotus states that due to the size of the Persian army, Darius the Great was able to launch a devastating three-pronged attack on the Ionian cities. The third army, which he sent north to take Sardis, was under the command of his son-in-law Otanes, who promptly captured Cyme and Clazomenae in the process. However, later accounts reveal how Sandoces, the supposed Ionian governor of Cyme, helped draft a fleet of fifteen ships for Xerxes I's great expedition against mainland Greece c. 480 BC. Cyme is also believed to have been the port in which the Persian survivors of the Battle of Salamis wintered, which lends considerable weight to the argument that Cyme was not only well served by defensive walls but also enjoyed the benefits of a large port capable of wintering and supplying a large wartime fleet. As a result, Cyme, like most Ionian cities at the time, was a maritime power and a valuable asset to the Persian Empire.

Once Aristagoras of Miletus roused the Ionians to rebel against Darius, Cyme joined the insurrection. However, the revolts at Cyme were quelled once the city was recovered by the Persians. Sandoces, the governor of Cyme at the time of Xerxes, commanded fifteen ships in the Persian military expedition against Greece (480 BC). Herodotus believes that Sandoces may have been a Greek. After the Battle of Salamis, the remnants of Xerxes's fleet wintered at Cyme. Cyme is hardly more than mentioned in the history of Thucydides.

Roman and Byzantine era

Polybius records that Cyme obtained freedom from taxation following the defeat of Antiochus III, later being incorporated into the Roman province of Asia. During the reign of Tiberius, the city suffered from a great earthquake, common in the Aegean. Other Roman sources such as Pliny the Elder mention Cyme as one of the cities of Aeolia, which supports Herodotus' similar claim:

The above-mentioned, then, are the twelve towns of the Ionians. The Aeolic cities are the following:- Cyme, called also Phriconis, Larissa, Neonteichus, Temnus, Cilla, Notium, Aegiroessa, Pitane, Aegaeae, Myrina, and Gryneia. These are the eleven ancient cities of the Aeolians. Originally, indeed, they had twelve cities upon the mainland, like the Ionians, but the Ionians deprived them of Smyrna, one of the number. The soil of Aeolis is better than that of Ionia, but the climate is less agreeable.

It was assigned to the Roman province of Asia Prima.

Ecclesiastical history

During the Eastern Roman Empire, Cyme became a bishopric, which was a suffragan of the Metropolitan of Ephesus.

Titular see

The diocese was nominally restored in 1894 as a Latin titular see. It is vacant, having had the following (non-consecutive) incumbents, all of the lowest (episcopal) rank:

Carlo Quaroni (1894.10.08 – 1896.01.20)
Orazio Mazzella (1896.02.11 – 1898.03.24) (later Archbishop)
Jeno Kránitz (1907.04.15 – 1935.07.12)
Peter Leo Ireton (1935.08.03 – 1945.04.14)
James Donald Scanlan (1946.04.27 – 1949.05.31) (later Archbishop)
Urbain-Marie Person, Capuchin Friars (O.F.M. Cap.) (1955.07.03 – 1994.02.09)

Archaeology

Archaeologists first started taking an interest in the site in the middle of the 19th century, as the wealthy landowner D. Baltazzi and later S. Reinach began excavation on the southern necropolis.
In 1925, A. Salaç, working out of the Bohemian Mission, uncovered many interesting finds, including a small temple to Isis, a Roman porticus and what is believed to be a 'potter's house'. Encouraged by these successes, the Turkish archaeologist E. Akurgal began his own project in 1955, which uncovered Orientalising ceramics on the southern hill. Between 1979 and 1984, the Izmir Museum carried out similar excavations at various locations around the site, uncovering further inscriptions and structures on the southern hill.

Geophysical studies at Cyme in more recent years have given archaeologists a much greater knowledge of the site without being as intrusive. Geomagnetic surveys of the terrain reveal additional structures beneath the soil, as yet untouched by excavations.

The northwest side of the southern hill was utilized as a residential neighborhood during the entire existence of the city. Only a limited area of the hill has been investigated. It has been verified that there were at least five successive phases of building:

1. A long and straight wall going from north to southeast represented the most ancient building phase. In the wall there are visible traces of a threshold linking two rooms. There is uncertainty as to the chronology of the wall, but it is certain that it was built before the end of the 5th century BC.

2. Two rooms (A and B), which were part of a building dating back to the end of the 5th century BC, belong to the second phase. The building appears to be complete on the northern side, but could have also had other rooms on the southern side, where the entrance to room A opened up. The western wall of room A was constructed with squared limestone blocks and also acted as a terracing wall compensating for the sharp natural slope of the hillside. At the foot of this wall there was a cistern excavated in the rock that gathered water coming from the roof of the house. The cistern was filled with debris and great amounts of black and plain pottery dating back to the late Hellenistic Age.

3. Some walls that belonged to the Imperial Roman Period were constructed using white mortar and bricks. During this phase a service room east of room A, with a floor made of leveled rock, was built. In the area of the cistern, by now filled, a new room decorated with wall paintings was also built.

4. A large house occupied the area during the Late Roman Period. The rooms were constructed using reused materials, but without the use of mortar, and were often enriched by polychrome mosaics. Access was gained by a ramp placed at the edge of the southwestern part of the excavation. What remains to be clarified is the extent of the building, whose destruction is placed between the end of the 6th century and the beginning of the 7th century AD.

5. The final phase is represented by some superficial structures found at the northern part of the excavation. There is a long wall going from the northwest to the southeast and a ramp built with reused blocks, with the same orientation as the wall. The wall and the ramp could be proof that this area was utilized during the Byzantine Age.

Numismatics

Although historians have dated the Trojan war to 1178 BC by calculating Homer's solar eclipse, it was not immortalised in the Iliad until about 750 BC. Around the same period, the Mykonos pithamphora - which shows the wooden horse the Greeks used to infiltrate Troy - was manufactured on the island of Tinos.
Referenced in both literature and art, that cunning end to the war - the Trojan Horse - had become synonymous with the name of Agamemnon. The house of Agamemnon claimed continuity at Cyme in Aeolia, associating itself with the legends of the exploits of the Pelopids and "particularly the taking of Troy", and the symbolism of the horse was stamped on the coins from this area, presumably in reference to the power of the Agamemnon lineage. Indeed, the daughter of Agamemnon of Cyme, Damodice, is credited with inventing coined money by Julius Pollux after she married King Midas - famed for turning everything he touched into gold.

The most rational explanation of this fable seems to be, that he encouraged his subjects to convert the produce of their agriculture, and other branches of industry, into money, by commerce, whence considerable wealth flowed into his own treasury... though it is more likely, that what the Greeks called invention, was rather the introduction of the knowledge of them [coins] from countries more advanced in civilization.

It is possible that the mythical figure of Midas was based on a real king of Phrygia in the 8th century BC known as Mita. However, as with all fables, there is a problem with the dates. Coins were not invented until 610 BC by King Alyattes in Lydia, whose kingdom arose well after the Phrygian kingdom had collapsed. His Lydian Lion was most likely the oldest coin type circulated. There were some pre-coin types, with no recognisable image, used in the Ionian city of Miletus and the island of Samos, but it is noteworthy that the coins from Cyme, when first circulated around 600-550 BC, utilised the symbol of the horse - tying them to the house of Agamemnon and the glory of the Greek victory over Troy. Cyme, being geographically and politically close to Lydia, took the Lydian invention of 'nobleman's tax-tokens' to the citizens - thus making Cyme's rough incuse horse-head silver fractions, hemiobols, a candidate for the title of the second oldest coins - and the first used for retailing on a large-scale basis by the Ionian Greeks, quickly spreading market economics through the rest of the world.

Damodice may still have been instrumental in striking the coinage of Cyme, as both Aristotle and Pollux attribute this to her, though they may have been confused as to whether she married a later, 7th- or even 6th-century, Midas. The river god Hermos, horses with a forefoot raised, and victorious athletes are typical symbols commonly found on period coinage minted at Cyme. Ancient coins from Cyme often depict the head of the Amazon Kyme wearing a taenia, with the reverse featuring a prancing horse - probably in allusion to the prosperous equine industry of the region.

Notable people

Hermodike I, credited with bringing the Phrygian written script to Greece.
Agamemnon of Cyme, who associated himself with "the taking of Troy".
Hermodike II, credited with introducing coinage for common use and spreading it throughout Greece.
Ephorus (c. 400 – 330 BC), ancient Greek historian.
Hesiod's father, who according to the poet (Op. et D. 636) sailed from Cyme to settle at Ascra in Boeotia; which does not prove, as such compilers as Stephanus and Suidas suppose, that Hesiod was a native of Cyme.
Antigonus of Cyme, ancient Greek prose writer.
Teuthras of Cyme, ancient Greek musician.
Heracleides of Cyme, ancient Greek historian.
Rhodon of Cyme, winner of the stadion race at the 213th Ancient Olympic Games, 73 AD.
Gnostor of Cyme; the Suda writes that Homer married Aresiphone, the daughter of Gnostor of Cyme.

See also

List of ancient Greek cities

References

Sources

Herodotus, The Histories 1954-1972, trans. Aubrey de Selincourt, edit. John Marincola, Penguin Classics
GigaCatholic with titular incumbent biography links

Archeology

Missioni Archeologiche Italiane in Turchia, modern-day archaeological survey
Archaeological Atlas of the Aegean, 163. Aliağa / Cyme (Kyme)
Non-Destructive Geophysical surveys: Archaeological feedback paper, M. Ciminale and D. Gallo (Department of Geology and Geophysics, University of Bari)
Current Archaeology in Turkey, last updated: 2007-01-30

External links

Overview of Herodotus, Book One
Detailed article on Cyme, sourced from Mythography.com forums
History of Greece, at the University of Lund, Sweden
Catalogue of Greek Coinage (Wildwinds): Cyme Mint
Forvm Ancient Coins, The Collaborative Numismatics project: Aeolis Catalogue

Catholic titular sees in Asia
States and territories established in the 8th century BC
3rd-century BC disestablishments
Aeolian dodecapolis
Ancient Greek archaeological sites in Turkey
Iron Age Greek colonies
Former populated places in Turkey
Geography of İzmir Province
History of İzmir Province
241 BC
Members of the Delian League
Achaemenid ports
Populated places in ancient Aeolis
Greek city-states
Aliağa District
Populated places of the Byzantine Empire
21846922
https://en.wikipedia.org/wiki/ESlick
ESlick
The eSlick is a discontinued electronic book (e-book) reading device developed by Foxit Software. It has a 6-inch E Ink screen with a 600 x 800 pixel resolution and 4-level gray scale, and a mass of 180 g. The device supports text and PDF formats for reading and includes Foxit's PDF Creator and Reader Pro Pack software.

In August 2010, Foxit announced that it would stop further development of the eSlick and focus on licensing PDF software to the makers of other e-book hardware. Wired attributed the move to a price war between Amazon.com's Kindle and Barnes & Noble's Nook, which undermined Foxit's claim to offer the cheapest e-book reader on the market. Foxit abruptly dropped its support in 2010, deleting all references to the eSlick from its site, including numerous forum threads and all firmware updates. This action alienated and angered many users, as the solutions to many problems were readily available in these threads.

The device is notable in that it has minimal features (no wireless, no subscription). It can read books in the secure eReader format but does not support any other DRM formats.

Specifications

Size: 188 x 118 x 9.2 mm (7.4" x 4.7" x 0.4")
Weight: 180 g (6.4 oz)
Display size: 15.5 cm (6 in) diagonal (approx 1/4 area of letter-sized page)
Display resolution: 600 x 800 pixels, 4-level gray scale
Memory: 512 MB standard (100 eBooks at 1.2 MB each average), SD card expansion up to 4 GB
Includes 2 GB SD card
Rechargeable lithium-ion battery
PC interface: USB port
OS: Embedded Linux

Formats supported

As of Firmware 2.0 Build 1130:
Documents: PDF, TXT, ePUB, eReader format, non-encrypted and secure PDB
Images: GIF, BMP, JPEG, and PNG
DRM-free audio: MP3

Operating systems

Windows

The eSlick comes bundled with the proprietary Foxit Reader Pro Pack, PDF Creator, PDF Editor (trial) and PDF Page Organizer Pro (trial). Updating the reader's firmware is done via a proprietary program (eSlick Update Setup Package).

Mac OS X and Linux

As of firmware 2.0, Linux is supported for flash upgrades. Mac OS X is not supported for flashing the firmware, but the device supports file transfers via USB from any device that can mount a USB drive. The SD card and the internal memory show up as separate mountable drives.

See also

List of e-book readers

References

External links

Official product webpage
Foxit kills off eSlick

Electronic paper technology
Dedicated e-book devices
Linux-based devices
7233526
https://en.wikipedia.org/wiki/University%20College%20of%20Engineering%2C%20Kakatiya%20University
University College of Engineering, Kakatiya University
The University College of Engineering, Kakatiya University is a public engineering institute located in Kothagudem, India, formerly known as the Kothagudem School of Mines (KSM). It is the first mining college in Telangana and the second in India.

History

The University College of Engineering, Kakatiya University (UCE(KU)), erstwhile KSM, started functioning in 1956 with a single department, Mining Engineering. Established in 1976, the University College of Engineering, Kakatiya University, is the oldest institution for mining in Telangana. It was formerly known as the Kothagudem School of Mines, originally established under Osmania University. It was later made a college under Kakatiya University and, after several changes of name, the institute settled on the name University College of Engineering. The college moved to its present permanent building in 1996 (except the mining department). Today it is the biggest among the campus colleges of Kakatiya University. There are 103 non-teaching staff members and around 20 regular professors. The college offers undergraduate B.Tech courses in CSE, EEE, Mining, ECE, and IT.

Academics

The college admits undergraduate students through the statewide EAMCET exam conducted every year. It offers Bachelor of Engineering (BE) courses in multiple disciplines.

Departments

Computer Science and Engineering
Electrical and Electronics Engineering
Mining Engineering
Information Technology
Electronics and Communication Engineering

Computer Science and Engineering

Kakatiya University embarked in 1996 on the Computer Science and Engineering Programme, when the UGC identified the University for its manpower development Programme. The Department offers B.E. and MCA programs.

Placement Cell for Computer Science and Engineering

Students are placed in companies such as CA Inc., Teradata, Infosys, Syntel, Mahindra Satyam, and Accenture.

Electrical and Electronics Engineering

The department started in 1996 as a part of the Electrical Engineering Department. The department has the following labs:
1. Basic Electrical Lab
2. Electrical Machines
3. Power Systems
4. Digital Electronics
5. Control Systems

Mining Engineering

The Mining Engineering Department was established in 1957 at Osmania University, Hyderabad, to offer a four-year degree course in Mining Engineering, with an intake of 30 students. This is the only mining department in Telangana. Students of mining are taken on mine surveys and mine visits. The department has established the following laboratories for practical training:
Rock Mechanics Lab
Mine Ventilation Lab
Mine Surveying Lab
Mine Environment Hazards
Strength of Materials
Mechanical Technology
CAD/CAM lab with 30 computers
Rock Excavation Lab

The department faculty members serve on national committees as well as in mining projects in the region. The department has organized eleven national seminars/conferences/workshops/short-term programs in areas such as CAD/CAM, stress analysis using FEM, optimization of energy recovery systems, and metal spinning.

References

Engineering colleges in Telangana
Schools of mines in India
Educational institutions established in 1957
1957 establishments in Andhra Pradesh
25563341
https://en.wikipedia.org/wiki/Bach%20Mai%20Airfield
Bach Mai Airfield
Bach Mai Airfield () is a disused military airport in Thanh Xuan District, Hanoi, Vietnam, located along modern-day Le Trong Tan street. It was constructed by the French in 1917 and used by French forces until 1954; along with Gia Lam Airbase, it was one of the major logistics bases supporting French operations at Dien Bien Phu. After 1954, it was used by the Vietnamese People's Air Force and served as their air defense command and control center during the Second Indochina War, playing a part in the Cambodian–Vietnamese War as well. It is now the site of the Vietnam People's Air Force Museum, where a number of period military aircraft are on display.

1918-1940

In 1918 the 1st Escadrille d'Indochine moved to Bach Mai. In 1939 the unit was redesignated le Groupe Mixte Aérien 595, formed of l'escadrille d'observation 1/595 with the Potez 25 and an escadrille de chasse 2/595 with the Morane-Saulnier M.S.406.

Vichy French units at Bach Mai, 1940-1945

These units saw action in the Franco-Thai War of 1940–41. Following the Japanese invasion of French Indochina in September 1940, the remaining aircraft were formed into the Groupement Aérien Nord Indochine, but this was dissolved in March 1943 due to wear and tear and a lack of spare parts. During the Japanese coup d'état in French Indochina all remaining French aircraft were destroyed.

First Indochina War, 1945-1954

The following units were based at Bach Mai between 1945 and 1954:
Air Force
Naval Air Arm
Bach Mai and Gia Lam Airbase were the major logistics bases supporting French operations at Dien Bien Phu.

Second Indochina War, 1964-1973

Bach Mai Airfield was used as the air defense command and control center for the Vietnamese People's Air Force during the Second Indochina War. Due to its location within a restricted area with a radius of 30 nautical miles (reduced to 10 in 1967) around the center of Hanoi, and its proximity to the Bach Mai Hospital 1 km away, Bach Mai Airfield was off limits to US bombers during the early years of Operation Rolling Thunder, with the result that North Vietnamese command and control was generally unmolested. Presidential approval was required for an attack on the airfield, and it was attacked for the first time by F-105 Thunderchiefs of the 388th Fighter Wing on 17 November 1967. Bombing restrictions were lifted during Operation Linebacker I and the airfield was attacked on 16 May 1972. On 21 December 1972, in Operation Linebacker II, two B-52Ds were lost to SA-2 missiles while attacking the airfield. On 22 December 1972 a string of bombs missed Bach Mai Airfield and instead hit Bach Mai Hospital, killing 28 hospital staff.

See also

Vietnamese People's Air Force Museum, Hanoi

References

Airports established in 1917
Airports in Vietnam
Vietnam War military installations
Installations of the Vietnam People's Air Force
Buildings and structures in Hanoi
47764175
https://en.wikipedia.org/wiki/GoGuardian
GoGuardian
GoGuardian is an educational technology company founded in 2014 and based in Los Angeles, California. The company has five core products: GoGuardian Admin (web filtering), GoGuardian Teacher (classroom management), GoGuardian Fleet (Chrome device management), GoGuardian Beacon (suicide and self-harm alerting), and GoGuardian DNS (network management). These services monitor student activity online, filter content, and alert school officials to possible suicidal or self-harm ideation. As of June 2018, GoGuardian reported that its services were used in 10,150 schools for at least five million students. In 2018, Inc 500 named GoGuardian the fastest-growing education company. GoGuardian has raised concerns in relation to invasion of privacy.

Product history

GoGuardian was founded in 2014 and is based in Los Angeles, CA. Its feature set includes Chromebook filtering, monitoring, and management, as well as usage analytics, activity flagging, and theft recovery for any device running the Chrome Operating System (Chrome OS). GoGuardian also offers filtering functionality for third-party tools such as YouTube. These services enable school administrators to monitor student activity online, filter potentially harmful or distracting content, and recover lost or stolen devices. As of June 2015, GoGuardian reported it was active in over 1,600 of the estimated 15,000 school districts in the United States.

In January 2015, Los Angeles Unified School District (LAUSD) chose GoGuardian to support their 1:1 device rollout program to 661,000 students. Chromebooks are one of the device options in LAUSD's 1:1 rollout. This partnership provides LAUSD with device tracking and grade-level-specific filtering capabilities, and facilitates compliance with the Children's Internet Protection Act (CIPA).

In September 2015, the company unveiled GoGuardian for Teachers. The tool is designed to help teachers manage Chromebook usage in their classrooms and monitor student activity on the device. The goal of the tool is to help keep students on-task and away from inappropriate content. In January 2016, GoGuardian announced the launch of Google Classroom Integration for GoGuardian for Teachers.

In January 2016, two of the company's co-founders, Aza Steel and Advait Shinde, were named to Forbes magazine's annual "30 Under 30" list in the Education category.

In 2018 GoGuardian launched Beacon, a software system installed on school computers that analyzes students' browsing behavior to alert school counselors or psychologists of students at risk of suicide or self-harm. In 2018 GoGuardian was acquired by private equity firm Sumeru Equity Partners and appointed Tony Miller to its board of directors.

Awards and recognition

Inc 500: In 2018, Inc 500 named GoGuardian the fastest-growing education company.
Deloitte's Fast 500: In 2018, GoGuardian was named to Deloitte's Fast 500 list as the 27th fastest-growing technology company in North America.
Forbes 30 Under 30: In January 2016, two of the company's co-founders, Aza Steel and Advait Shinde, were named to Forbes magazine's annual "30 Under 30" list in the Education category.
International Design Awards: Gold, GoGuardian Teacher (2016)
Awards of Excellence: Tech and Learning

Student privacy

GoGuardian products allow teachers and administrators to view and snapshot students' computer screens, close and open browser tabs, and see running applications.
GoGuardian can collect information about any activity when users are logged onto their accounts, including data originating from a student's webcam, microphone, keyboard, and screen, along with historical data such as browsing history. This collection can be performed whether students connect from school-provided or personally-owned devices. Some parents have raised privacy concerns over this data collection, stating that "This is essentially spyware."

In 2016, researcher Elana Zeide raised the concern that the use of GoGuardian software for suicide prevention, though "well-meaning", could result in "overreach". Zeide further noted that legitimate personal reasons could motivate a student to wish to search for sensitive information in private. According to Zeide, this concern is compounded by the fact that school devices may be the only devices available to lower-income students. American School Counselor Association ethics chair Carolyn Stone raised the concern that GoGuardian's ability to track web searches conducted at home is "intrusive" and could be "conditioning children to accept constant monitoring" as normal.

As of October 2015, GoGuardian software could track keystrokes and remotely activate student webcams; GoGuardian states that these features were removed as part of its "ongoing commitment to student privacy." GoGuardian technical product manager Cody Rice stated in 2016 that schools had control over GoGuardian's collection and management of data, and that no client had complained about privacy.

For students, the effects of invasion of privacy and website access restrictions can also be felt at home, outside of school hours.

References

Further reading

Spying on Students: School-Issued Devices and Student Privacy, a study by the Electronic Frontier Foundation
GoGuardian Beacon listed as a 2018 "Silver in Education / Behavioral correction tools" by International Design Awards
School Software Walks The Line Between Safety Monitor And 'Parent Over Shoulder' by Larry Magid, writing in Forbes

Internet safety
Child safety
Companies based in Los Angeles
Software companies based in California
Security companies of the United States
Computer security software
Software companies of the United States
1983210
https://en.wikipedia.org/wiki/Microsoft%20Active%20Accessibility
Microsoft Active Accessibility
Microsoft Active Accessibility (MSAA) is an application programming interface (API) for user interface accessibility. MSAA was introduced as a platform add-on to Microsoft Windows 95 in 1997. MSAA is designed to help Assistive Technology (AT) products interact with standard and custom user interface (UI) elements of an application (or the operating system), as well as to access, identify, and manipulate an application's UI elements. AT products work with MSAA-enabled applications in order to provide better access for individuals who have physical or cognitive difficulties, impairments, or disabilities. Some examples of AT products are screen readers for users with limited sight, on-screen keyboards for users with limited physical access, or narrators for users with limited hearing. MSAA can also be used by automated testing tools and computer-based training applications.

The current and latest specification of MSAA is found as part of the Microsoft UI Automation Community Promise Specification.

History

Active Accessibility was initially referred to as OLE Accessibility, and this heritage is reflected in the naming of its binary components, such as oleacc.dll and the header file oleacc.h, which contains definitions and declarations. As part of Microsoft's ActiveX branding push in March 1996, OLE Accessibility was renamed ActiveX Accessibility (sometimes referred to as AXA) and presented as such at the Microsoft Professional Developers Conference in San Francisco, March 1996. Later, the ActiveX branding was reserved for internet-specific technologies, and ActiveX Accessibility became Active Accessibility, frequently shortened to MSAA.

MSAA was originally made available in April 1997 as part of the Microsoft Active Accessibility Software Developers Kit (SDK) version 1.0. The SDK package included documentation, programming libraries, sample source code, and a Re-Distributable Kit (RDK) for accessible technology vendors to include with their products. The RDK included updated operating system components for Microsoft Windows 95. Since Windows 98 and Windows NT 4.0 Service Pack 4, MSAA has been built into all versions of the Windows platform, and has received periodic upgrades and patches over time.

Programmatic exposure for assistive technology applications on Windows has historically been provided through MSAA. However, newer applications are now using Microsoft UI Automation (UIA), which was introduced in Windows Vista and the .NET Framework 3.0.

Version history

The following Active Accessibility versions have been released:

Motivation and goals

The motivating factor behind the development of MSAA was to allow an available and seamless communication mechanism between the underlying operating system or applications and assistive technology products. The programmatic goal of MSAA is to allow Windows controls to expose basic information, such as name, location on screen, or type of control, and state information such as visibility, enabled, or selected.

Technical overview

MSAA is based on the Component Object Model (COM). COM defines a mechanism for applications and operating systems to communicate. Figure 1 shows a high-level architecture of MSAA. Applications (e.g., a word processor) are called Servers in MSAA because they provide, or serve, information about their user interfaces (UI). Accessibility tools (e.g., screen readers) are called Clients in MSAA because they consume and interact with UI information from an application.
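As a concrete illustration of the client side, the sketch below (Windows only) uses the third-party comtypes package to obtain the accessible object of the foreground window and read its name and role. The helper name accessible_from_window is ours, and the exact property-call syntax can vary between comtypes versions; AccessibleObjectFromWindow, OBJID_CLIENT, and CHILDID_SELF are the real Win32/MSAA names.

```python
# Hedged sketch of an MSAA client (Windows only; requires comtypes).
import ctypes
import comtypes.client

# Generate Python wrappers for the Accessibility type library in oleacc.dll.
comtypes.client.GetModule("oleacc.dll")
from comtypes.gen.Accessibility import IAccessible

OBJID_CLIENT = -4  # 0xFFFFFFFC as a signed LONG: the window's client area
CHILDID_SELF = 0   # refers to the object itself rather than a child

def accessible_from_window(hwnd):
    """Ask oleacc.dll for the accessible object backing a window handle."""
    acc = ctypes.POINTER(IAccessible)()
    ctypes.oledll.oleacc.AccessibleObjectFromWindow(
        hwnd, OBJID_CLIENT, ctypes.byref(IAccessible._iid_), ctypes.byref(acc))
    return acc

hwnd = ctypes.windll.user32.GetForegroundWindow()
acc = accessible_from_window(hwnd)
print(acc.accName(CHILDID_SELF), acc.accRole(CHILDID_SELF))
```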
The system component of the MSAA framework, Oleacc.dll, aids in the communication between accessibility tools (clients) and applications (servers). The code boundary indicates the programmatic boundaries between applications that provide UI accessibility information and accessibility tools that interact with the UI on behalf of users. The boundary can also be a process boundary when MSAA clients have their own process. The UI is represented as a hierarchy of accessible objects; changes and actions are represented as WinEvents.

Accessible objects

The accessible object is the central interface of MSAA, and is represented by an IAccessible COM interface and an integer ChildId. It allows applications to expose a tree structure that represents the structure of the UI. Each element of this tree exposes a set of properties and methods that allow the corresponding UI element to be manipulated. MSAA clients can access the programmatic UI information through a standard API.

Roles, names, values, states

MSAA communicates information by sending small chunks of information about elements of a program to the assistive technology object (AT). The four critical pieces of information on which the AT relies to help users interact with applications are an element's role, name, value, and state:

Role: Conveys to users via AT what type of object a control is, such as a button or a table. The method for this is get_accRole.
Name: Provides a label for an element, such as Next on a button that moves users to the next page, or First Name for an edit box. The method for this is get_accName.
Value: Provides the value of the specified object such as the value on a slider bar, or the information in an editable text box. Not all objects have a value. The method for this is get_accValue.
State: Identifies the current condition of the control, such as checked for a checkbox. State advises whether a control can be selected, focused, and/or other types of changeable functionality. The method for this is get_accState.

Microsoft provides a complete list of controls and their functions.

Role

Role information is based on the type of UI control with which a developer wants to interact. For example, if a developer is implementing a button that is clickable, the developer would select ROLE_SYSTEM_PUSHBUTTON as the Role to implement. The following table shows an example list of MSAA Roles and their related descriptions.

Name

The Names for elements in an application are assigned in the code by the developer. Many objects such as icons, menus, check boxes, combo boxes, and other controls have labels that are displayed to users. Any label that is displayed to users on a control (e.g., a button) is the default for the object's Name property. Ensure the Name of the object makes sense to a user and describes the control properly. The Name property must not include the control role or type information, such as button or list, or it will conflict with the text from the role property (acquired from the GetRoleText function of the MSAA API).

Value

Value is used when a developer wants to return information from objects in the form of a string. Value may be returned for objects where percentages, integers, textual or visual information is contained in the object. For example, the property values returned from scroll bar and trackbar accessible objects can indicate percentages in strings. Not all objects have a Value assigned to them.
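Since role constants are plain integers defined in oleacc.h, the localized role text mentioned above can be retrieved with the flat GetRoleText API exported by oleacc.dll. A minimal sketch follows, assuming an English-language system for the printed output; only standard-library ctypes is used.

```python
# Minimal sketch: mapping an MSAA role constant to its localized text
# via GetRoleTextW, a flat export of oleacc.dll (Windows only).
import ctypes

oleacc = ctypes.windll.oleacc
ROLE_SYSTEM_PUSHBUTTON = 0x2B  # role constant from oleacc.h

buf = ctypes.create_unicode_buffer(64)
if oleacc.GetRoleTextW(ROLE_SYSTEM_PUSHBUTTON, buf, len(buf)):
    print(buf.value)  # prints "push button" on an English system
```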
State

The State property describes an object's status at a moment in time. Microsoft Active Accessibility provides object state constants, defined in oleacc.h, that are combined to identify an object's state. If predefined state values are returned, clients use GetStateText to retrieve a localized string that describes the state. All objects support the State property.
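Because the state constants are combined as bit flags, a client typically walks the bitmask and asks GetStateText for each set bit. A minimal sketch; the two constants shown are taken from oleacc.h, and the localized output assumes an English system.

```python
# Minimal sketch: decoding an MSAA state bitmask via GetStateTextW
# (a flat export of oleacc.dll; Windows only).
import ctypes

oleacc = ctypes.windll.oleacc

def state_names(state_bits):
    """Return the localized name of every state bit set in the mask."""
    names = []
    buf = ctypes.create_unicode_buffer(64)
    for bit in range(32):
        flag = 1 << bit
        if state_bits & flag and oleacc.GetStateTextW(flag, buf, len(buf)):
            names.append(buf.value)
    return names

STATE_SYSTEM_FOCUSED = 0x4   # from oleacc.h
STATE_SYSTEM_CHECKED = 0x10  # from oleacc.h
print(state_names(STATE_SYSTEM_FOCUSED | STATE_SYSTEM_CHECKED))
```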
Challenges and limitations

Microsoft designed the Active Accessibility object model during and after the release of Windows 95. The model is based on roles, each role representing a type of user interface element. These roles are limited to user interface elements in common use at the time. For example, there is no text object model to help assistive technologies deal with split buttons, which combine multiple UI elements into one. MSAA does not attempt to represent styled text such as markup text or rich text documents. While MSAA still has the Value property, it can host only simple, non-styled text in its value. At the time, it was felt that the Microsoft Text Object Model (MS-TOM) would be more appropriate for expressing the attributes of formatted text. However, MS-TOM's complexity and limited initial adoption outside of Microsoft hampered access to rich text.

Another limitation involves navigating the object model. MSAA represents the UI as a hierarchy of accessible objects in a manner similar to Windows' Window Manager. Clients navigate from one accessible object to another using the IAccessible::accNavigate method. However, servers implemented accNavigate in unpredictable ways and often not at all. Clients, however, must be able to deal with all approaches for any MSAA server. This ambiguity means extra work for client implementers, and the complexity can contribute to problems depending on the server implementations.

Being a COM-based binary interface, IAccessible is immutable and cannot be changed without creating another interface. The result is that new roles, behavior, or properties cannot be exposed through the existing IAccessible-based object model. While intended to be a common subset of information about base UI elements, it was found to be difficult to extend to include information about new interaction methods.

Availability

MSAA was initially available as an add-on to Windows 95. It has been integrated with all subsequent Windows versions.

Related technology

Microsoft UI Automation (UIA): The successor to MSAA was User Interface Automation (UIA). However, since there are still MSAA-based applications in existence, bridges are used to allow communication between UI Automation and MSAA applications. So that information can be shared between the two APIs, an MSAA-to-UI Automation Proxy and a UI Automation-to-MSAA Bridge were developed. The former is a component that consumes MSAA information and makes it available through the UI Automation client API. The latter enables client applications using MSAA to access applications that implement UI Automation.

Accessible Rich Internet Applications (WAI-ARIA): There is a general mapping from ARIA attributes to MSAA properties.

IAccessible2: MSAA provides the roots of IAccessible2. IAccessible2 leverages the work done on MSAA and adds additional functionality.

Windows Automation API: Starting with Windows 7, Microsoft is packaging its accessibility technologies under a framework called Windows Automation API. MSAA will be part of this framework.

Implementations of Microsoft Active Accessibility

Active Accessibility is available for developers in all versions of Windows since Windows 95. Since its original introduction, MSAA has been used as a way to add support for programmatic access to the UI for many business and consumer applications, including Microsoft Internet Explorer, Mozilla Firefox, Microsoft Office, etc. In addition to accessibility aids such as screen readers, screen magnifiers, and Augmentative and Alternative Communication (AAC) devices, the technology has been used by test automation software such as QuickTest Pro, Functional Tester, and SilkTest.

More implementations of MSAA in applications and AT products can be found by searching on the Microsoft Accessibility sites or on the AT Information website.

References

External links

History of Microsoft's Commitment to Accessibility
UI Accessibility Checker
UIA Verify
Profiles of Accessibility in Action
Accessibility Development Center

Accessibility API
Windows administration
Windows APIs
24423543
https://en.wikipedia.org/wiki/National%20broadband%20plan
National broadband plan
Broadband is a term normally considered to be synonymous with a high-speed connection to the internet. Suitability for certain applications, or technically a certain quality of service, is often assumed. For instance, low round-trip delay (or "latency", in milliseconds) would normally be assumed to be well under 150 ms and suitable for Voice over IP, online gaming, financial trading (especially arbitrage), virtual private networks and other latency-sensitive applications. This would rule out satellite Internet as inherently high-latency. In some applications, utility-grade reliability (measured, for example, in seconds of outage time per 30 years, as in the PSTN network) or security (say AES-128, as required for smart grid applications in the US) are often also assumed or defined as requirements.

There is no single definition of broadband, and official plans may refer to any or none of these criteria. Beyond broad latency and reliability expectations, the term itself is technology neutral; broadband can be delivered by a range of technologies including DSL, fiber optic cable, powerline networking, LTE, Ethernet, Wi-Fi or next generation access. Several operators have started to combine two of these technologies to create Hybrid Access Networks.

This article presents an overview of official government plans to promote broadband – based on official sources that may be biased due to their promotion of the government plan as effective and positive. Such plans are recommended by the OECD and other development agencies. All G7 countries except Canada currently have such a national broadband plan in place.

Comparisons

Most countries considering such plans conduct their own comparative evaluations of existing national plans. The US, for instance, in September 2010 published a comparison of seven other countries' plans. The OECD tracks policy in this area closely and publishes links to relevant policy documents from its member (developed) countries. Developing countries' plans are studied most closely by the World Bank as part of its e-Development program. It has released the World Bank Broadband Strategy Toolkit to assist in policy development.

Furthermore, the close relationship of universal wired broadband and smart grid plans is the subject of much study, particularly in the US and Europe. The US plan has ambitious energy demand management goals (see National Broadband Plan (United States) for more details on these and their relationship to other US national goals) and its broadband plan is generally considered to be a pre-requisite to its communications-intensive energy strategy. This is also true to some degree of other countries' broadband plans.

Americas

Northern America

Canada

As of early March 2009, Industry Canada confirmed that the then-current national broadband strategy consisted of a short statement in the 2009 budget:

"Canada was one of the first countries to implement a connectivity agenda geared toward facilitating Internet access to all of its citizens. To this day, Canada remains one of the most connected nations in the world, with the highest broadband connection rate among the G7 countries. However, gaps in access to broadband remain, particularly in rural and remote communities.

"The Government is committed to closing the broadband gap in Canada by encouraging the private development of rural broadband infrastructure.
Budget 2009 provides $225 million over three years to Industry Canada to develop and implement a strategy on extending broadband coverage to all currently unserved communities beginning in 2009–10."

The budget specifically includes: "Providing $225 million over three years to develop and implement a strategy on extending broadband coverage to unserved communities."

On 6 March 2009, Mr. John Duncan, Parliamentary Secretary to the Minister of Indian Affairs and Northern Development and Federal Interlocutor for Métis and Non-status Indians, announced that the Government of Canada will contribute $7.86 million to the First Nations Emergency Services Society (FNESS) and their partner, the First Nations Technology Council (FNTC) in British Columbia, for the construction and provision of satellite broadband network capacity connecting 21 remote First Nations communities in British Columbia.

In a 4 June 2009 news release, the CRTC endorsed the National Film Board's call for a national digital strategy. No substantial action, follow-up or funding was announced in the 2010 or 2011 budgets. CANARIE remains Canada's only publicly funded backbone network.

Several provinces, especially Nova Scotia, have their own plans for broadband universal service (see the Broadband for Rural Nova Scotia initiative), but these are generally last-mile services using fixed wireless technologies (Motorola Canopy in the case of Nova Scotia). No province is covered with public backhaul available to the general public and businesses, though some provinces, regions and municipalities own some fibre.

United States

In the American Recovery and Reinvestment Act of 2009 – popularly referred to as a post-recession stimulus package – Congress charged the US Federal Communications Commission with creating a national broadband plan. The Recovery Act required the plan to explore several key elements of broadband deployment and use, and the commission sought comment on these elements, including:

The most effective and efficient ways to ensure broadband access for all Americans
Strategies for achieving affordability and maximum utilization of broadband infrastructure and services
Evaluation of the status of broadband deployment, including the progress of related grant programs
How to use broadband to advance consumer welfare, civic participation, public safety and homeland security, community development, health care delivery, energy independence and efficiency, education, worker training, private sector investment, entrepreneurial activity, job creation, and economic growth, and other national purposes.

The plan was published in March 2010, on the same website used to gather public comment during its preparation. The plan is integrated with the US smart grid policies; a major purpose of the broadband plan is to enable the M2M communications that those policies require.

Latin America and the Caribbean

Argentina

The number of Internet users in the country has been estimated at 26 million (2010), of which 5 million, by late 2010, were broadband users (82% of which were residential and 81% of which connected at a speed of at least 512 kbit/s), and over 1.3 million were wireless and satellite users. Among residential users, 38.3% were located in Buenos Aires Province (including Greater Buenos Aires), 26.0% in the city of Buenos Aires, 8.2% in Córdoba and 7.4% in Santa Fe Province. According to a 2010 report by IDC Consulting, Argentina has a rate of 9.3 broadband accounts per 100 inhabitants, only surpassed in the region by Chile, which registered 9.7.
Despite this relatively good national indicator, the penetration of the Internet is not the same in all provinces, and some provinces, like Jujuy, have only 0.2 accounts per 100 inhabitants; Formosa Province 0.3; Corrientes Province 0.4; and Tucumán Province 0.7. To reduce this disparity between provinces, in October 2010 the government presented a five-year plan, with an initial investment of 8,000 million pesos, called Plan Nacional de Telecomunicación "Argentina Conectada" ("Connected Argentina" – National Telecommunications Plan), under the direction of the state-owned enterprise ARSAT. Most of the financing is intended for the acquisition of the high-tech material required. The main goal is to expand broadband to the whole national territory, and to cover more than 10 million homes with a connection by the year 2015. This is expected to double the present number of residences which have access to these services, and to quintuple the penetration of optical fiber in the country.

Brazil

In October 2009, the Brazilian Agency of Telecommunications (Anatel) sought to tighten rules on domestic broadband service providers which, if implemented, could force firms such as Telefónica of Spain and Mexico's América Móvil to increase their investments in the country, according to reports in the local daily O Estado de S. Paulo. It is understood the new rules are designed to improve customer service with a specific focus on the delivery of stated broadband connection speeds; around 50% of users are currently thought to receive less than half the speed promised by their Internet service provider (ISP). New legislation could enter into force in 2010 requiring domestic broadband providers to comply with stricter standards on service quality. Anatel's move comes in the wake of an explosion in demand for broadband internet access which has seen user numbers swell to around 18 million, but has also resulted in major disruptions to services – such as the recent high-profile outage of Telefónica's Speedy service which left millions of customers offline.

Chile

In October 2008, Chile's president, Michelle Bachelet, announced the country's most ambitious telecoms subsidy plan to date, in terms of public investment and area of coverage. The project, which is aimed at boosting SMEs' competitiveness in rural areas, will provide connectivity to more than three million people in 1,474 rural communities. The development is expected to cost US$100 million, 70% of which will be provided by the government through the Telecoms Development Fund. As of 30 March 2008, only 0.8% of households in rural areas of Chile had internet access. Upon completion of the project, internet network coverage will rise from 71.6% of the population to 92.2%, according to the country's telecoms regulator.

In January 2008, Chile announced its 2007–2012 Strategy for Digital Development, which will articulate the efforts of the public and private spheres as to new technologies during the coming years. The project was prepared by the Committee of Ministers for Digital Development and seeks to provide further impetus to ICTs in the Trans-Andean country. The goals on which the Chilean government will place more emphasis are to double broadband connections to reach 2 million by 2010, to increase SMEs' competitiveness, to advance the digitalization of the public health system, and to implement new technologies in areas deemed key, such as the pension reform and education.
In June 2009, Chile launched a new portal in which consumers may compare the offers of all internet providers.

Colombia

As of April 2009, Internet usage had grown to 40 percent of the Colombian population over the preceding ten years, with internet subscriptions rising at an annual rate of 75 percent. Over 73 percent of internet subscribers use broadband. Despite the growth, Colombia's subscription penetration average remains sixth in Latin America, with a majority of internet subscriptions concentrated in Colombia's three largest cities. To promote information technologies (IT) and telecommunications services in rural areas, Colombia's Ministry of Information Technologies and Communications developed a comprehensive ten-year National IT plan. A US$750 million public-private Communications Fund administers plan implementation, with 60 percent of funds targeted to the Compartel rural and community development program. The United States Agency for International Development (USAID) also supports the development of telecommunications networks in rural areas, as well as providing technical assistance to GOC telecommunications authorities.

The percentage of internet users has grown from 1 to 40 percent of the population—or approximately 17 million people—within the last decade. Permanent internet subscribers have also grown at an annual rate of 75 percent in the last five years, although the actual number of subscriptions remains low at 2 million. Colombia's penetration average (the internet subscription to population ratio) is 4.3, ranking it sixth in Latin America behind Chile, Argentina, Uruguay, Mexico and Brazil. Likewise, 55 percent of subscriptions remain concentrated in the cities of Bogotá, Medellín and Cali.

Colombia has 1.45 million internet subscribers with broadband access—approximately 73 percent of total subscriptions. DSL (63 percent) and cable (32 percent) dominate the broadband market share, with WiMAX (5 percent) a distant third. The main providers according to market share are: Empresa de Telefonos de Bogota (25 percent); EPM Telecommunications (24 percent); Colombia Telecommunications (20 percent); Telmex Hogar (19 percent); and independent providers (12 percent). These providers focus on triple-play packages combining internet, television and telephone services, which has contributed to the rapid expansion of internet usage. Carlos Forero, the vice-president of the CRT, said that broadband and associated value-added services are now seen as the market differentiator between telecommunications providers.

In 2008, the Ministry of Communications (MOC) announced a National IT Plan, establishing three main goals to be achieved before 2019: 70 percent of Colombians with internet subscriptions, 100 percent of health and education establishments with internet access, and 100 percent of rural areas with internet access. The MOC plans to achieve these objectives through its flagship community and rural development program Compartel, which is funded by a US$750 million public-private National Communications Fund. Compartel provides subsidies or investment incentives to establish internet networks and telephony services in Colombia's most rural and impoverished areas. Since 2008, the program has invested US$421 million in rural networks, benefiting 16,000 rural educational, health and government institutions. In addition to Compartel, the GOC also supports additional programs in the educational, health, entrepreneurial, competitiveness, online-government and research sectors.
In addition to Compartel, the GOC also supports programs in the educational, health, entrepreneurial, competitiveness, online-government and research sectors. Activities in 2008 included the distribution of refurbished computers to educational institutions (US$86 million), connectivity financing for small and medium enterprises (US$15 million), conversion of all public institutions to online institutions (US$70 million), and e-medicine (US$5 million). USAID also promotes telecommunications connectivity for underserved and rural populations, as well as education and content to support economic and social development, through its Last Mile Initiative. Major contributors to this public-private alliance are Avantel, Intel, Cisco, Microsoft, Google, Polyvision, regional and local governments, and the MOC. Through the program, USG-provided equipment and training will connect 50 municipalities in the departments of Meta, Huila and Magdalena, including 21,000 small businesses and 325,000 institutions such as schools, hospitals, justice houses and local government offices. On the technical side, USAID assisted the MOC with the development of its National IT Plan and presently advises the CRT on "unbundling the local loop" to increase competition in broadband provision.

Colombia remains behind Latin American neighbors such as Mexico, Brazil and Argentina in most IT indicators, but since the GOC privatized its state-owned National Telecommunications Company in 2003, the IT sector has expanded rapidly. The sector contributed a record 3 percent of total GDP in 2008. Local experts agree that the IT sector will continue to experience accelerated growth as Colombia's domestic security situation improves and the legal economy strengthens. However, they also emphasize that continued private investment is key to the GOC achieving its ambitious goals by 2019.

Dominican Republic

In October 2009, the Dominican Republic's telecoms regulator, the Instituto Dominicano de las Telecomunicaciones (INDOTEL), said it plans to roll out fixed-line telecoms services to an additional 1,000 rural communities as part of an initiative aimed at providing broadband and home voice services to all towns with more than 300 inhabitants. According to TeleGeography's GlobalComms database, the announcement came just over a year after fixed-line incumbent CODETEL inked a deal with INDOTEL to undertake a rural connectivity project involving investment of US$100 million.

As of April 2009, there was one ongoing government initiative to provide broadband access to 508 rural communities, scheduled to finish by September. While future incentives were being considered by the regulatory agency, no others existed at the time, and broadband expansion was further hampered by the 28 percent in taxes levied on all telecommunications sales. A Senate committee announced on 30 March that it would review and update the 1998 telecommunications law.

The Dominican Telecommunications Institute (INDOTEL), the GoDR regulatory agency, launched a tender in 2007 for a Rural Broadband Connectivity Program. At that time, only 30 percent of the country's 383 municipalities had broadband capacity. The tender offered a subsidy of up to US$5 million. The winning bidder was Codetel (Mexican-owned), the largest company in the market, which offered to connect the 508 communities with no cash subsidy, but rather in exchange for the rights to a WiMAX frequency in the country. INDOTEL Executive Director Joelle Exarhakos told EconOff that the program has proceeded successfully and more than 100 rural communities have already been connected.
She said Codetel would complete the broadband deployment plan by September 2009, at which point every municipality in the country would have broadband access. Under the program, Codetel provides 256 kB/second or faster service to rural communities at prices that match those charged in urban centers where Codetel competes with other providers.

Exarhakos told EconOff that INDOTEL has no current plans for a second stage of the rural connectivity program. She said that in many of these communities, local entrepreneurs have built connections to the networks serving even smaller communities nearby; nonetheless, INDOTEL does not foresee a second stage of the rural program to reach even smaller villages. Exarhakos told EconOff that she believes such incentives might not be necessary; part of the goal of the Rural Broadband Connectivity Program was to demonstrate rural residents' capacity to pay, and it has. In Monte Plata, a national provider, Dijitec, is developing infrastructure without any government incentive in order to compete with Codetel.

In many of these communities, INDOTEL has set up Informatics Training Centers (CCIs), where schoolchildren and residents can access the Internet and learn how to use computers. These centers are among the 846 that INDOTEL has established around the country as part of an information technology promotion program. INDOTEL provides the hardware and software for the centers, while community groups, schools, churches or town governments maintain and operate the facilities. EconOff visited one such site in October 2008, at a church in Samaná, which was inoperable because there were no funds to pay the electricity bill. Asked about these issues, Exarhakos candidly acknowledged that some of the committees have not succeeded in maintaining the facilities. Sur Futuro is one of the non-governmental organizations that has taken on the operation of CCIs, and it runs three centers in communities where the organization is also otherwise involved. The group's education director told EconOff that while INDOTEL's CCI program provides an excellent service to communities, the lack of long-term funding limits its impact. She said it costs between US$500 and US$850 monthly to operate a CCI, funds that are difficult to come by in poor communities. Sur Futuro's president noted that she is aware that the Catholic Church struggles to maintain the CCIs it runs.

Codetel's participation in the rural broadband program has been directed by Ahmed Awad, who said the company's total cost for the program is about US$50 million. He said that while Codetel views it as a social investment, it has also proven relatively successful commercially. In addition to installing and maintaining the infrastructure for broadband connectivity in the 508 communities, Codetel is responsible for setting up an entrepreneur program, establishing an Internet portal for the program, and providing training in each participating community. Through the entrepreneur program, Codetel has helped small businesspeople in many of the communities invest an average of US$1,000 to start up internet cafes or international call centers. The Internet portal, which Codetel hired an NGO to construct, features geographic, demographic and other facts about each of the 508 communities. Awad told EconOff he believes it is the only database of information about these forgotten locales.
Codetel's training is limited to a one-hour workshop given at the highest level of school taught in each municipality. Awad said that while the schools have received the trainers positively, one hour was insufficient to provide much training to the students. Awad told EconOff that the installed connections are 80 percent wireless, and that although this creates the opportunity for cellular-only service in these areas, many customers want wired hardware in their homes despite the higher costs. Because the service is wireless, many locales contiguous to the participating communities have gained broadband access, Awad said. "In addition to the 508 municipalities, another 150 or so villages will receive service because of the wireless reach," he told EconOff. Awad said he hoped that INDOTEL would launch a sequel to this program, noting that there are another 1,500 communities that lack broadband access. However, he lamented that the sector does not have an ongoing targeted subsidy that would reduce costs to rural users, which would make these consumers a more attractive target for private investment. He also commented that the country needs more investment in information technology (IT) education in order to take advantage of the growing broadband penetration and to stimulate demand for these services. Perhaps most importantly, he cited the lack of reliable electricity as one of the highest hurdles impeding broadband growth, both in rural communities and nationwide.

Instead of providing incentives for growth, the GoDR has a policy of discouraging it with high taxes. In a 2 April interview with the newspaper Hoy, Codetel President Oscar Peña complained that the Dominican Republic has the fourth highest taxes on telecommunications of any country in the world, at 28 percent, and that a 3 percent municipal tax appeared likely to increase this burden even further. Peña said that the implementation of the 3 percent tax would send a strong negative signal to investors.

Ecuador

In August 2009, BNamericas reported that Ecuador's telecoms watchdog Senatel aimed to end 2010 with 9,000 schools connected to the internet via broadband networks under a national scheme, compared with 1,900 at the time, with 4,000 of the new connections to be made that year. A further 11,000 schools are to be covered by other publicly funded social programmes, universal access fund Fodetel told BNamericas, adding that state-run telco Corporación Nacional de Telecomunicaciones (CNT) was handling all rollouts, and that as yet no plans had been announced to open up tenders to private-sector broadband operators. The news site wrote that telecoms regulator Conatel lists ongoing projects to connect 759 schools at a cost of US$4.56 million, or an average of roughly US$6,000 per school.

Mexico

According to CEO Telecom Briefings, Latin America 2009, Telmex considered the administration's reported goal of connecting 15 million people to broadband by 2012 to be impossible. The Government also hoped that by that date 70 million people would be able to connect to the Internet. According to data from INEGI, 20% of the population had access to the Internet. Mexico presented the AgendaDigital.mx in 2012, establishing goals to increase fixed and mobile broadband penetration to 38% by 2015. In June 2013 a telecommunications reform was adopted, establishing access to broadband and the Internet as a constitutional right.
The reform also includes connectivity goals, such as connecting at least 70% of Mexican households and 85% of SMEs at the average connection speed of OECD countries, at competitive prices. As part of this reform, the previous regulator, the Comisión Federal de Telecomunicaciones (COFETEL), created by presidential decree in 1996, was replaced by a newly created body, the Instituto Federal de Telecomunicaciones (IFT). COFETEL, now the IFT, went from being a division of the Ministry of Communications and Transport (SCT) to being an independent constitutional body; the number of commissioners increased from five to seven, and the selection process for commissioners changed. Previously, commissioners were appointed by the president with Senate approval. Under the reform, the commissioners are chosen by an Evaluation Committee, which proposes to the Executive a list of candidates meeting specific requirements established by law; the Executive then selects candidates to be ratified by the Senate. In terms of financing, COFETEL depended on the allocation of resources by the Ministry of Finance, which limited its independent operation. The new IFT exercises its budget autonomously; according to the law, "the House will ensure the adequacy of the budget to enable the effective and timely exercise of its powers" (Article 28 LFT, 2013).

Europe

European Union

The Digital Agenda for Europe is one of the seven flagship initiatives of the Europe 2020 strategy. The objective is to bring "basic broadband" to all Europeans by 2013 and also to ensure that, by 2020, all Europeans have access to much higher internet speeds of above 30 Mbit/s and that 50% or more of European households subscribe to internet connections above 100 Mbit/s. On 20 September 2010 the European Commission published a Broadband Communication, which describes measures the Commission will take to achieve the targets of the Digital Agenda.

Austria

In October 2009, the European Commission called on the Austrian telecommunications regulator, RTR, to suspend the adoption of regulatory measures governing the broadband access market, finding that RTR had provided "insufficient evidence" that mobile broadband connections can be considered substitutes for fixed-line DSL (digital subscriber line) and cable modem connections. RTR had proposed to define the broadband access market for residential customers as including mobile, DSL and cable modem connections, and to consider that market competitive. RTR had found that the retail broadband market for business customers was not competitive and that wholesale regulation of the market, including a requirement for "bitstream access", remained necessary. The EC disputed the conclusion that mobile connections are substitutes for fixed-line broadband connections, which would require that all three types of connections can be used equally for downloading music or films or for providing sufficiently secure connections for Internet banking. The EC also questioned the definition of the relevant wholesale product market, "as a sufficiently detailed forward-looking analysis of the different wholesale inputs is missing."

Previously, in 2003, the Campaign for Broadband Internet Connection initiative was launched, seeking to achieve blanket broadband coverage by 2007. Information is scarce on whether the campaign continued; 2007 came and went without its goals being met.

Belgium

The Belgian government owns over 50% of the incumbent telecoms provider Belgacom.
As of April 2009, at the request of BIPT (the Belgian Institute for Postal services and Telecommunications), the consultancy firms Analysys Mason and Hogan & Hartson had drawn up a report on the development of the broadband market in Belgium and suggested a number of possible actions to promote competition in this market. At the request of Mr Vincent Van Quickenborne, Minister of Enterprise and Simplification, the suggested action items were submitted to the sector for consultation. According to 2017 statistics, 12% of houses in Belgium still have broadband service with a bandwidth of less than 30 Mbit/s. Proximus has started to deploy hybrid access networks, combining xDSL and LTE, to address this problem.

Bulgaria

In March 2009, the "National Program for Development of Broadband Access in the Republic of Bulgaria", issued by the State Agency for Information Technologies and Communication, set the following targets for the year 2013:

100% coverage of the population at 10 Mbit/s in large cities
90% coverage of the population at 6 Mbit/s in medium cities
30% coverage of the population at 1 Mbit/s in rural areas (90% mobile broadband)

Croatia

The Government of the Republic of Croatia adopted, on 13 October 2006, the Strategy for the Development of Broadband Internet Access in the Republic of Croatia until 2008. The Strategy aimed to reduce the gap between Croatia and European Union member states in the density level (penetration) of broadband Internet connections. An ambitious goal was therefore set to achieve a density level of at least 12 percent, i.e. at least 500,000 broadband connections, by the end of 2008. In January 2009, the government declared success, while accepting that new challenges in the development of broadband Internet still stand before the Republic of Croatia. The Ministry of the Sea, Transport and Infrastructure and the Central State Administrative Office for e-Croatia have initiated the drafting of a new Strategy for the development of broadband Internet access, which would define strategic goals for the forthcoming period.

Czech Republic

To speed up broadband network development and stimulate its use, mainly by households and individuals, the Government adopted the "National Broadband Access Policy" in January 2005. The Policy is based on the OECD Council recommendations on promoting broadband development. Its main goal is for about 50% of the population of the Czech Republic to use broadband by 2010 at the latest.

Denmark

The existing strategy for the rollout and use of broadband in Denmark is based on the 2001 broadband plan "From hardware to content". The Danish Government follows up on this strategy annually. According to a hearing in 2005, Denmark will continue to follow the main principles of the strategy. The political objective of the Danish Government is high transmission capacity for all, and strategies such as a national infrastructure that is rapid, inexpensive and secure are needed to achieve this objective. The rollout of the IT infrastructure is to be developed by the private market, with the Danish public sector serving as the driving force. For example, the public sector's own IT investments are intended to boost demand for a digital infrastructure.
Estonia

The Estonian Information Society Strategy 2013 and the Estonian Electronic Communications Act reflect the basic principles of encouraging infrastructure investment, practising technologically neutral policy and regulation, and the primary role of the private sector in the expansion of broadband.

Finland

On 8 May 2008, Ms Suvi Lindén, Minister of Communications, appointed Mr Harri Pursiainen, permanent secretary, to study the means of ensuring comprehensive broadband supply throughout the country and of organising its funding, especially in non-built-up areas. The first part of the study includes a proposal for a government resolution, and the second part examines the reasoning behind the proposal topic by topic. The report proposes that the public sector introduce subsidies to enterprises that upgrade the public telecommunications network so that an optical fibre or cable network supporting 100 Mbit/s connections is available to virtually all citizens by 2015. Prior to this goal, the speed of the broadband connection included in the universal service obligation must be raised to an average of 1 Mbit/s by 31 December 2010 at the latest. In order to finance the State contribution required for the 2015 target, it is proposed that certain radio frequencies coming up for allocation be auctioned. In the event that auction revenues are insufficient to cover the State's public aid for telecommunications infrastructure construction, the shortfall would be made up with a telecommunications network compensatory payment collected from telecommunications operators. The auction revenues and the compensatory payments could be entered as income, and decisions on their use made either through the Budget or by means of a fund outside the Budget.

Finland has passed a law making access to broadband a legal right for Finnish citizens. When the law went into effect in July 2010, every person in Finland, which has a population of around 5.3 million, gained the guaranteed right to a one-megabit broadband connection, according to the Ministry of Transport and Communications. The project was on track thanks especially to a huge uptake of mobile broadband. The guarantee is to increase to 10 Mbit/s, 100 Mbit/s and 1 Gbit/s connections on fixed dates, a guarantee unique in the world.

Major goals of public research projects related to the broadband initiatives included energy-efficient datacentres, accounting for "0.5 to 1.5 per cent of total electricity consumption in Finland", which capture "heat generated by datacentres [to feed] it into district heating networks", and "a research programme on environmental monitoring and services" to "create new tools, standards and methods for environmental measurement, monitoring and decision-making... based on environmental data to improve the energy and material efficiency of infrastructures and industrial processes".

France

On 20 October 2008 the French Government announced sweeping measures intended to make France a leading digital economy by 2012. The plan outlines the strategy that the government will follow in the coming years. While many of the measures are aimed at helping French companies, foreign companies will be able to benefit from the plan simply by creating a French subsidiary or by entering into strategic agreements with French companies or universities.
The new plan will create opportunities for telecommunications operators, equipment manufacturers, content providers, web-based delivery platforms, game and software publishers, and universities in France.

Germany

In February 2009, the second Merkel cabinet approved a "broadband strategy" with the stated aims of accelerating telecommunication and internet connectivity, closing gaps in underserved areas by the end of 2010, and ensuring nationwide access to high-speed internet by 2014. Policy actions include upgrades to existing broadband infrastructure over the short term by deploying the entire range of feasible technologies – whether cable, fiber optics, satellite, or wireless – and utilizing the digital dividend resulting from frequencies no longer needed for broadcasting following digitalization. Accordingly, the regulatory agency BNetzA held a digital dividend spectrum auction in April and May 2010. The auctioned spectrum in the 800 MHz frequency band was sold to three of the four national mobile operators (Deutsche Telekom, Vodafone Germany, and O2) and is used to provide LTE service. After the 2013 federal election, the coalition agreement which led to the third Merkel cabinet included a goal of providing "nationwide coverage" of at least 50 Mbit/s internet access by 2018 by encouraging investment, reducing investment barriers and setting appropriate regulatory frameworks.

Greece

The government's FTTH plan to pass 2 million homes was considered entirely reasonable, despite what has been said so far by many about the necessity of such an investment in terms of scale and scope.

Ireland

The National Broadband Scheme (NBS) aims to encourage and secure the provision of broadband services to targeted areas in Ireland in which broadband services are not currently available and are unlikely to be available in the near future. Following a broadband coverage mapping exercise to identify underserved areas and a competitive tendering process, a contract was awarded to "3" (a Hutchison Whampoa company) in December 2008 to implement and operate the NBS. Under the contract, 3 is required to provide services to all premises in the NBS area that want service. In order to facilitate competition, 3 is also required to provide wholesale access to any other authorized operator who wishes to serve the NBS area.

Latvia

In 2006 and 2007, Latvia carried out several small projects to increase internet access in Latvian provinces and a single major-scale project to increase broadband access in rural areas. The projects were implemented and co-financed by local governments and the central government, with the majority of the funding provided by EU structural funds. Latvia has listed internet access and availability promotion as an activity eligible for EU structural funds for the 2007–2013 EU budgetary period, but this activity is not designed to target rural areas or specifically increase broadband deployment. However, given Latvia's severe economic decline and the associated budgetary problems, the outlook for additional projects is grim. Falling budget revenues have significantly limited Latvia's ability to co-finance any EU projects. There are bureaucratic obstacles as well, since Latvia has not yet adopted the guidelines and evaluation criteria necessary to launch this program. One reason behind this delay is the Government of Latvia's plan to reprioritize the national list of activities eligible for EU money in response to the crisis, increasing funding to the export sector.
Internet access is not viewed by Latvian authorities as a significant problem; therefore, an increase in funding for communication infrastructure projects is unlikely. Furthermore, Latvia has decided to divert the funds the European Commission allocated to the expansion of broadband networks, as part of the European Economic Recovery Plan, to projects in the dairy sector.

Lithuania

Rural Area Information Technology (RAIN) project

The Development Strategy of the Broadband Infrastructure of Lithuania for 2005–2010 was published in the official gazette on 31 December 2002. The Strategy's goals are as follows:

to create conditions for public administration institutions, bodies and individuals to obtain broadband access;
to promote competition in the provision of Internet access on the market, using public and private capital investments;
to influence national social and economic growth;
to reduce the exclusion of the population across the territory of the country.

When connecting public administration institutions and bodies to broadband networks and creating an opportunity for small and medium-sized enterprises as well as the population to use the broadband infrastructure and e-services all over the country's territory (especially in peripheral or uncompetitive locations where the level of use and development of broadband services is low), the following assessment criteria are important:

By 1 January 2007, in 50% of the country's territory, to create an opportunity to connect to the available broadband networks for all small and medium-sized enterprises willing to do so, as well as the population, and to connect at least 40% of public administration institutions and bodies (i.e. educational institutions, libraries, health care institutions and bodies, etc.) to the broadband networks.
By 1 January 2008, in 50% of the country's territory, to create an opportunity to connect to the available broadband networks for all small and medium-sized enterprises willing to do so, as well as the population, and to connect at least 60% of public administration institutions and bodies to the broadband networks.
By 1 January 2009, to connect 100% of public administration institutions and bodies (except for some diplomatic representations of the Republic of Lithuania abroad) to the broadband networks.
By 1 January 2010, in 98% of the country's territory, to create an opportunity to connect to the available broadband networks for all small and medium-sized enterprises willing to do so, as well as the population.

Netherlands

Nederland BreedbandLand (NBL) is the independent national platform for the provision of aid and incentives to the social sectors for the "better and smarter" use of broadband.

Poland

The newspaper Gazeta Wyborcza writes that the Polish government has drafted a new law regulating broadband network deployments and will put it to a parliamentary vote in late October or early November. The law would, for example, require every multi-dwelling building in the country to be connected with fibre from 2010, define rules for local governments to invest in broadband in areas that are not viable for commercial roll-outs, and set a framework for using utility infrastructure to accommodate network equipment. If approved as planned, the legislation could become effective as of January 2010.
Significance: The new regulations are part of the government's broadband strategy, which aims to bring 100% of Poland's households and businesses within the coverage of broadband infrastructure by 2013 or 2014, partially using European Union (EU) funding. In the meantime, telecoms regulator UKE is also in discussions with the local incumbent, TP, over the company's mid-term capex strategy, trying to agree on an investment level that would contribute to the goals of the national broadband strategy. In order to boost investment, UKE has softened some of its policies toward the incumbent, most significantly with regard to vertical separation, which it has put on hold for the time being, and wholesale access fees, which it has reportedly offered to freeze for the next couple of years.

Portugal

In January 2009, Portugal's government announced an 800-million-euro credit line for the roll-out of next-generation broadband networks in the country. Prime Minister José Sócrates announced the funding, saying he hoped the country's main telecoms operators would invest one billion euros to build next-generation networks (NGNs) during 2009. The credit line forms part of an agreement between the government and the operators Portugal Telecom, ZON Multimédia, Sonaecom and ONI on the roll-out of fibre networks, and is the first step in a 2.18-billion-euro plan announced in December 2008 to boost the country's economy. Prime Minister Sócrates said the credit line would pave the way for improvements in high-speed internet, television and voice services, adding: "This is the launch of the first measure of the stimulus plan to combat the economic crisis."

Development through fibre: Portugal's Prime Minister said he hoped the investment would allow up to 1.5 million homes and businesses to be connected to the new fibre networks. He added that the government has no preference regarding how the networks are rolled out by the operators, leaving them to decide among themselves whether single or multiple networks are constructed. Although the terms of the credit line have not been disclosed, they are likely to be highly favourable to the operators, and may represent a timely cash injection as the global economic crisis bites, operator spending is reined in, and private sources of investment dry up. Portugal's broadband market has shown strong growth, not least due to widespread cable and DSL networks. ADSL2+ services are also available from alternative operators such as Vodafone, and cable data speeds of up to 100 Mbit/s were trialled in 2007. The Portuguese government had set a goal of 50% home broadband penetration by 2010, and this latest investment should allow the operators to surpass this target significantly.

Romania

As of May 2009, although broadband Internet access had increased by 30 percent in the previous year, Romania's penetration rate was still about half the EU average. Both the newly reorganized National Authority for Administration and Regulation in Communications (ANCOM) and the renamed Ministry of Communications and Information Society (MCIS) have repeatedly expressed interest in further expanding broadband access. To this end, the MCIS issued a new broadband strategy for 2009 to 2015, but has yet to identify how to implement or fund it. In his first press conference on 9 April, new ANCOM President Catalin Marinescu stated that his main goal would be to increase the number of broadband Internet connections.
Due to the expansion of the 3G network, mobile Internet access connections reached 2.7 million users in 2008, almost double the number in 2007. Despite a 30 percent increase in access in 2008, Romania's broadband penetration is still only about 11.7 percent, roughly half the EU average of 22.9 percent. ANCOM's suggestions for increasing broadband deployment in Romania include:

reviewing local loop access conditions, which were unsuccessfully regulated in 2003;
stimulating operators with existing 3G licenses to expand services and/or reorganizing the 3G band in order to grant one or two additional licenses (there are currently four licensed 3G operators in Romania);
reissuing a WiMAX tender for the 3.5–3.7 GHz band. A 2008 attempt to grant WiMAX licenses failed due to the high cost; ANCOM hopes the Government will lower the cost in order to increase commercial interest in the coming year.

A precondition for accessing EU structural funds in this area is the adoption of a national broadband strategy. The MCIS's 2009–2015 strategy for broadband wireless access establishes an inter-ministerial working group responsible for the implementation of infrastructure projects for broadband service expansion. Additionally, Minister of Communications Gabriel Sandu claims he will identify other financing sources, such as crisis funds, governmental funds and private funds, to increase broadband deployment in rural areas.

Slovenia

In 2004, Slovenia issued the Strategy for the Development of Broadband Networks, effective 2004–2006. Its main principles included:

the primary role of the market and competition in broadband development;
formulating measures to activate the public sector, especially where private-sector interest is lacking;
expanding broadband connections in public administration and stimulating e-government services;
stimulating competition between different types of infrastructure and services.

Slovenia is preparing a new strategy for the development of broadband networks, which will focus on stimulating private-sector development of rural and sparsely populated areas.

Spain

Since 2005, the Ministry of Industry, Tourism and Trade has granted financial aid to operators in order to encourage investment in areas where there would otherwise have been little or no broadband deployment. Two main programmes make up the Spanish strategy to provide broadband Internet access to rural and isolated areas:

National Programme for Broadband roll-out in rural and remote areas: PEBA (2005–2008)
Avanza Infrastructures Programme (2008–2012)

In order to ensure achievement of the programme objectives, avoid duplication of investments and avoid distortion of competition, specific service and operative requirements were imposed on both the projects and the beneficiary operators. The service requirements consisted of providing broadband access with a minimum download speed and a maximum monthly fee. Additionally, the operative requirements of the programme consisted of:

technological neutrality, so that any type of broadband technology could be deployed and technology obsolescence avoided;
a duty on the beneficiaries to open up the financed networks to competition;
infrastructure investments in well-defined, unserved areas in order to avoid duplicative investments.

Since wired technologies such as DSL are distance-sensitive, and generally only feasible within a few miles of the nearest central office switch, PEBA projects were not limited to a single technology.
At present, several technological solutions (ADSL, WiMAX, satellite and HFC) are being used to provide broadband access to the PEBA population centres, depending on their geographic features, roll-out dates and available technology. Following PEBA's achievements, and within the Avanza Infrastructures Programme, the Ministry has continued working to increase broadband coverage in very small population centres. Additionally, taking into account advances in technology and the need not only to provide broadband access but also to improve service quality and speed, the objective was also to improve the bandwidth and network capacity provided by telcos in rural areas. Two action lines make up the broadband strategy under this funding programme:

F1: projects intended to deploy access infrastructure in order to satisfy the demand for broadband connections from the population in isolated and rural areas.
F2: projects intended to improve the speed and capacity of rural backbone networks.

F1 projects will be based mainly on wireless broadband access technologies such as HSDPA, WiMAX and satellite, although some of the beneficiary operators are already planning to provide ADSL connections in some population centres. F2 projects will mostly improve transport networks by means of optical fibre and WiMAX radio links.

On 15 October 2009, Spain's Ministry of Industry opened a public consultation on extending the concept of universal service to cover broadband access. The consultation concerns topics such as the minimum speed, the use of wireless technologies for broadband provision, related pricing models, and the schedule for service implementation.

Significance: At the moment, the universal service mandate covers narrowband internet access, but as the speeds of such services have become increasingly inadequate for typical user needs, the government is now assessing whether a broadband connection should be defined as a legal right. As of the end of June 2009, according to regulator CMT, accesses defined as narrowband accounted for 2.4% of all 9.547 million internet subscriptions, against 5.2% a year earlier; meanwhile, broadband accesses at speeds below 1 Mbit/s represented 0.4% of the total. IHS Global Insight's view is that the allocation of lower frequency bands such as 800 and 900 MHz to data services will play the main role if the Spanish government wants to bring broadband to every citizen. Thus far, incumbent Telefónica has used WiMAX and satellite connections to live up to its universal service mandate in some of the more remote areas of the country, yet by expanding its mobile broadband network using the lower frequencies it could achieve the same more cost-efficiently.

Sweden

The goal of Sweden's information technology policy is that Sweden should be a sustainable information society for all. This implies an accessible information society with a modern infrastructure and IT services of public benefit, so as to simplify everyday life and give people in every part of the country a better quality of life. IT should contribute to a better quality of life and help improve and simplify everyday life for people and companies, and it should also be used to promote sustainable growth. An effective and secure physical IT infrastructure with high transmission capacity should be available in all parts of the country, so as to give people access to, among other things, interactive public e-services.
For broadband, Swedish and European IT policy aims to increase the accessibility of an infrastructure with capacity for broadband transmission. Broadband, among other things, promotes economic growth by creating new services and opening up new investment and employment opportunities. The objective is broadband for all households (permanent housing), businesses and public operations. According to the Swedish Post and Telecom Authority, "broadband" in this objective refers to connections that can be upgraded to a downstream transmission rate of at least 2 Mbit/s. Between 2001 and 2007, Sweden's broadband support programme included:

Total state funding of US$817 million (SEK 5.25 billion)
Total investment shares: Government 51%; municipalities 11%; operators 30%; EU structural funds 7%; regional policy funds 1%
Concentration on rural and other areas where the market will not supply infrastructure
An open procurement process
A requirement that networks be operator-neutral
85% of investments used for new infrastructure

Other European countries

Norway

According to the statement of the political platform of the parties in Government dated 13 October 2005, the level of ambition for broadband rollout is to rise. The rollout of broadband throughout the entire country offers great potential to the business sector, in the form of development and the establishment of more businesses, while reducing the difficulties of great distances. The tangible objectives of Government policy include:

Broadband should be available throughout Norway by the end of 2007.
Unreasonable geographical price differences for broadband connections should not exist.
Government funding will contribute to broadband rollout in those areas where rollout is not ensured by market players.

Russia

On 17 September 2009, Deputy Prime Minister Sergey Sobyanin indicated that a special government commission would meet in October to discuss the development of broadband services in Russia, reported Prime-Tass. Issues to be considered include the enhancement of broadband access quality and the increase of data-transfer speeds. Sobyanin also stated that the government would allocate 10 billion roubles (US$326.7 million) towards the development of various high-tech projects in 2010, aimed at carrying out technological upgrades in various sectors of the Russian economy. Earlier in the week, Communications and Mass Media Minister Igor Shchyogolev stated that the government viewed the construction of main communications lines and the development of broadband and digital TV services as the top priorities of the telecoms industry.

Significance: The words of the deputy prime minister and the communications minister highlight the importance attached to the development of broadband services in the country. Broadband has taken over from mobile as the sector with the highest growth potential in Russia, with the Communications Ministry reporting a fourfold increase in internet traffic in 2008. The market is heterogeneous, with no single dominant operator but instead a number of players employing a variety of technologies. Although uptake has typically been centred on the economic hubs of Moscow and St Petersburg, operators are increasingly expanding into the regions for further opportunities.
With operators continuing to invest in the broadband sector, subscriber uptake growing, and now increasing signals of interest from the government, IHS Global Insight expects the sector to continue to grow healthily in the short and medium terms.

Switzerland

On 7 October 2009, the Swiss Federal Communications Commission (ComCom) revealed that round-table discussions on the deployment of fibre-to-the-home (FTTH) networks were producing concrete results. According to the regulator, the major players are now in agreement on uniform technical standards, meaning that there are no technical barriers to the rapid expansion of the fibre network. A consensus has also been reached on coordination, which will prevent the parallel construction of new networks by laying multiple fibres in every building (known as the multiple-fibre model). At the same time, the participants at the round table have agreed that all providers must have access to the fibre-optic network under the same conditions, so as to protect end-users' freedom of choice. The participants drew up further recommendations for standardised network access by services. Thanks to an open interface, service providers will enjoy network access to customers at all times via network operators. If, at a later date, a customer opts for a different service provider on the same fibre-optic network, the switch will be possible without any technical complications. The round-table discussions involve cable network operators, telecoms companies and electricity utilities. Further round tables and working groups will be held to clarify open points. ComCom will also examine whether new regulatory measures are needed to govern FTTH deployment, with the aim of reporting to parliament by mid-2010 at the latest.

United Kingdom

The United Kingdom issued its "Digital Britain" report in June 2009.

The Universal Service Commitment. More than one in ten households today cannot enjoy a 2 Mbit/s connection. We will correct this by providing universal service by 2012. It has a measure of future-proofing so that, as the market deploys next-generation broadband, we do not immediately face another problem of exclusion. The USC is also a necessary step if we are to move towards digital switchover in the delivery of more and more of our public services. The Universal Service Commitment will be delivered by a mix of technologies: DSL, fibre to the street cabinet, wireless and possibly satellite infill. It will be funded from £200m of direct public funding, enhanced by five other sources: commercial gain through tender contract and design; contributions in kind from private partners; contributions from other public sector organisations in the nations and regions which benefit from the increased connectivity; the consumer directly, for in-home upgrading; and the value of wider coverage obligations on mobile operators arising from the wider mobile spectrum package. The Commitment will be delivered through the Network Design and Procurement Group, with a CEO appointed in the autumn. We will also discuss with the BBC Trust a structure that gives them appropriate visibility, in the delivery process, of the use being made of the Digital Switchover Help Scheme underspend, which will be realised in full by 2012.

The Next Generation Final Third project.
Next-generation broadband networks offer not just conventional high-definition video entertainment and games (which, because of this country's successful satellite platform, are less significant drivers here than in some other markets) but also more revolutionary applications. These will include telepresence, allowing for much more flexible working patterns; e-healthcare in the home; and, for small businesses, the increasing benefits of access to cloud computing, which substantially cuts costs and allows much more rapid product and service innovation. Next-generation broadband will enable innovation and economic benefits we cannot today predict. First-generation broadband provided a boost to GDP of some 0.5%–1.0% a year. In recent months the UK has seen an energetic, market-led roll-out of next-generation fixed broadband. By this summer, speeds of 50 Mbit/s and above will be available to all households covered by Virgin Media Ltd's national cable network: some 50% of UK homes. Following decisions by the regulator, Ofcom, which have enhanced regulatory certainty, BT Group plc has been encouraged by the first-year capital allowance measures in Budget 2009, and by the need to respond competitively, to accelerate its plans for a mix of fibre to the cabinet and fibre to the home. BT's enhanced network will cover the first 1,000,000 homes in its network. The £100m Yorkshire Digital Region programme approved in Budget 2009 will also provide a useful regional testbed for next-generation digital networks.

In August 2009 the UK Government published its Digital Britain Implementation Plan, setting out the government's roadmap for the rollout of the plans mentioned above.

Africa

Botswana

Following the further liberalization of the Botswana telecommunications sector in 2004, the Government embarked on a new licensing structure in 2006. That action was designed to move the country from the pre-existing licensing framework, which distinguished between the various telecoms services, to a service-neutral structure, with a view to accommodating technological convergence. Currently, the only operator offering fixed-line broadband services is the state-owned Botswana Telecommunications Corporation (BTC), which has launched ADSL services. However, the larger ISPs are also rolling out broadband wireless networks, mainly in Gaborone (the capital), to serve corporate customers in particular. Mobile operator Orange launched its broadband wireless service in June 2008 using a WiMAX network in Gaborone. The total number of broadband subscribers increased to an estimated 3,500 in 2007, up from 1,800 in 2006 and 1,600 in 2005, with broadband comprising a growing proportion of total internet accounts. BTC offers a range of data services including Frame Relay, ISDN, ADSL, MPLS and a broadband wireless service known as Wireless FastConnect. The data market has been liberalized, with ISPs now holding value-added network service (VANS) provider licenses. BTC enjoyed a monopoly over international bandwidth until February 2001, when the regulator began issuing international data gateway licenses. BTC's international bandwidth reached the 200 Mbit/s mark during 2008, some 90% of which (180 Mbit/s) was supplied via cross-border fiber networks to neighboring countries and 10% (20 Mbit/s) by satellite. During 2004, BTC began the deployment of ADSL and a domestic two-way VSAT network for areas beyond the reach of terrestrial infrastructure.
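As a quick check on the Botswana figures above, the following sketch computes year-on-year broadband subscriber growth and the fiber/satellite split of BTC's international bandwidth; all inputs are the estimates quoted in the preceding paragraph:

    # Year-on-year growth of Botswana's broadband subscriber base
    # (subscriber estimates taken from the text above).
    subscribers = {2005: 1_600, 2006: 1_800, 2007: 3_500}
    for year in (2006, 2007):
        growth = (subscribers[year] - subscribers[year - 1]) / subscribers[year - 1] * 100
        print(f"{year}: {growth:.1f}% growth")  # 2006: 12.5%, 2007: 94.4%

    # Split of BTC's roughly 200 Mbit/s of international bandwidth in 2008.
    total_mbps = 200
    fiber_mbps = total_mbps * 0.90      # 180 Mbit/s via cross-border fiber
    satellite_mbps = total_mbps * 0.10  # 20 Mbit/s via satellite
    print(f"Fiber: {fiber_mbps:.0f} Mbit/s, satellite: {satellite_mbps:.0f} Mbit/s")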
In September 2008, BTC completed the roll-out of the Trans-Kalahari fiber-optic project, connecting Botswana to the neighboring countries of Namibia and Zambia. The 2,000-kilometre system was built in three parts: phase 1 runs from Jwaneng through Ghanzi to Mamuno (on the border with Namibia); phase 2 runs from Ghanzi via Maun to Orapa; and phase 3 runs from Sebina via Nata and Kasane to Ngoma (on the border with Zambia). The network is designed to provide onward connectivity to submarine cables, removing the dependence on transiting through South Africa to reach the Sat-3/WASC and SAFE (South Africa Far East) submarine cable systems. BTC is a signatory to three submarine cable projects (EASSy, WAFS and AWCC), and the government is in tripartite discussions with Angola and Namibia to assist each other in realizing the most effective means of achieving connectivity to these cable systems:

East Africa Submarine System (EASSy), which will run along the eastern coast of Africa from Port Sudan (Sudan) to Mtunzini (South Africa) via Mombasa (Kenya), Dar es Salaam (Tanzania) and Maputo (Mozambique).
West Africa Festoon System (WAFS), a planned cable that will run along the western coast of Africa from Nigeria through Gabon, the Democratic Republic of the Congo, Angola and possibly Namibia.
Africa West Coast Cable (AWCC), planned to run along the western coast of Africa from South Africa and Namibia to the United Kingdom. The proposed AWCC was replaced by the West Africa Submarine Cable (WASC), which awarded a supply contract to Alcatel-Lucent in April 2009.

Egypt

Broadband access, mainly of the DSL variety, is still in its infancy. Local loop unbundling for DSL access was introduced in April 2002 to kick-start broadband uptake, but real growth occurred only after a government initiative in May 2004 that mandated a 50% tariff cut for unbundling. As part of the government's e-Readiness initiative, a strategy of public-private partnership is being aggressively pursued in the Internet sector to accelerate Internet and broadband uptake.

Ghana

On 23 July 2009, the government of Ghana signed a US$150 million contract with the Chinese equipment manufacturer Huawei Technologies for the supply of advanced telecoms infrastructure to ensure countrywide broadband internet access within the following two years. The Minister of Communications, Haruna Iddrisu, told delegates at a conference on business process outsourcing (BPO) that the infrastructure would link internet points of presence (POPs) in all district capitals under the government's ICT Backbone Development Programme. The minister added that the government was committed to developing the human resources needed to promote the country as a prospect for BPO companies, and said Ghana was working hard to ensure the legislative regime was right to encourage inward investment under the e-legislation programme. 'During the year the Ministry of Communications will also facilitate the development of additional legislation in the areas of data protection and intellectual property, for investors in data capturing and management to operate within the confines of international guidelines and rules,' he said. His words were echoed by Vice President John Dramani Mahama, who stressed Ghana's commitment to developing the nation's ICT backbone capabilities.
'In addition to the SAT-3 connectivity, Glo-1 and Main One will commence the construction of two additional landing stations by the end of this year to take care of the issues of bandwidth redundancy,' he said.

In May 2007, the government of Ghana launched the "Wiring Ghana" project, a US$250 million nationwide 4,000-kilometer fiber-optic backbone project that promises to dramatically increase Ghana's broadband bandwidth supply, reportedly to a capacity of STM-16 nationwide. The government's goal is to provide open-access, nationwide broadband connectivity to boost economic development. As of May 2008, Phase I of the project, covering the south and mid-country, had been completed. Rollout of the second and final phase is underway and is slated to be completed by December 2009. The project will also provide fiber-optic connections with the neighboring countries of Burkina Faso to the north and Togo to the east. Ghana's broadband market, divided between ADSL and wireless broadband services, was small at the end of 2008, with a total of about 26,500 subscribers: 53% ADSL and 47% wireless (WiFi/WiMAX).

Kenya

As of May 2009, Kenya was preparing to transform its telecoms industry by initiating its first fiber-optic internet connection on 27 June. This broadband connection will vastly improve the quality of internet access in Kenya and in neighboring landlocked countries. With increased internet capacity, the fiber will improve local bandwidth quality and potentially decrease communication costs, as it complements the existing and widely used satellite communication networks. The increased bandwidth will improve the competitiveness of existing businesses, create growth in new industries such as knowledge-based businesses and business process outsourcing, and significantly increase access to information for end-users, schools and universities. The Government of Kenya (GOK) expects foreign investment in the sector to reach US$10 billion in 2009.

From Patrick Boateng's June 2009 report: on broadband, the Kenyan government has recently taken several steps to boost the country's future international bandwidth by committing to several planned submarine cable projects. For example, in September 2006, Kenya's cabinet decided, after further delays to the proposed East African Submarine Cable System (EASSy), to proceed with its plan to build its own submarine cable, the East African Marine System (TEAMS). In November 2006, the government signed a memorandum of understanding (MoU) with the UAE's fixed-line incumbent, Etisalat, to build a submarine cable from Mombasa to Al Fujairah in the UAE. Under a February 2007 Kenyan government contract awarded to the U.S. company Tyco Telecommunications, the company conducted an undersea survey for the project. In October 2007, after the completion of the marine survey, Alcatel-Lucent won the bid to build the cable, with service expected to begin at the end of the third quarter of 2009. In 2008, Kenya adopted a National ICT Policy and enacted the Kenya Communications Amendment Act. Backed by a new ICT Sector Master Plan (2008–2012) and a projected budget of US$812.5 million, the main goals of this move are to develop regulations that will provide an enabling environment for leveraging the new broadband capacity, and to improve the ICT sector in general.
Meanwhile, a third submarine cable funded by South African and other investors, the Sea Cable System (SEACOM), landed in Mombasa in May 2009 and was expected to be operational by July 2009. EASSy is slowly progressing and is expected to land in Mombasa in 2010. Also on the horizon are two additional submarine cables: Orange's Lion, which will run from Mombasa to Madagascar, Mauritius and Réunion; and the FLAG Telecom NGN System II cable.

Nigeria

On 6 March 2009, the Nigerian Communications Commission (NCC) partnered with Nigerian WiMAX operator ipNX to bring broadband access to all 36 states in the country through the 'State Accelerated Bandwidth Initiative' (SABI). Ifeanyi Amah, executive director of ipNX, commented: 'The move to empower Nigerians with broadband internet access started almost two years ago. Our intention is to take our products and services to other regions. Our commitment in that direction can be seen through our partnership with NCC on SABI'. The project has been long delayed, having been postponed because of government red tape and a lack of budget. According to TeleGeography's GlobalComms database, at the start of 2009 three companies were given letters of intent by the NCC: ipNX, MTN and a Wi-Fi alliance of several ISPs. The first phase of the project will cover the 36 state capitals before being extended to government buildings. Nigeria as a whole can be classified as unserved.

The project arm of SABI consisted of a reverse-bid process in which broadband providers were invited to state the minimum subsidy each required to deploy broadband coverage to all 37 state capital cities in the country, with the three lowest bidders to be appointed. The subsidies were limited to:

CPE for the first 3,000 subscribers per city (to enable a critical mass for a sustainable network in each city)
Bandwidth supply for the first year

The process is complete, and three providers have been selected and approved by the Federal Government: ipNX, MTN and NAIJA-WIFI (the latter being a consortium of about 15–20 ISPs). One of the providers has started deployment and has completed Kano (capital of Kano State in northern Nigeria), which was launched in May; it is scheduled to launch in Ibadan, capital of Oyo State, next month.

Currently, the broadband market in Nigeria is in its infancy. It is predominantly wireless rather than wireline, and is dominated by fixed wireless access operators. Consequently, the market structure has been determined more by the licensing and regulation of radio spectrum than by the unbundling of the local loop. VSAT remains the predominant form of broadband Internet access. It is estimated that 51.1% of Internet users are connected by VSAT, 24% by broadband wireless, 3.4% by DSL, 9.3% via dial-up, 8.7% by cable/satellite, and the remainder by Wi-Fi and leased lines. The introduction of a unified licensing regime from February 2006 is having far-reaching consequences by increasing the scope of the operating licenses of different fixed-line and mobile operators. At least two Nigerian ISPs have expanded services into other countries in West Africa. Intercellular, one of the PTOs, has been granted a license to operate in Sierra Leone, and has also applied for licenses in Benin, Chad, Guinea, Liberia and Mali. Meanwhile, Hyperia, a leading Nigerian ISP, has announced the launch of two new wireless broadband services: a WiMAX service in Port Harcourt and a two-way broadband VSAT service.
Hyperia awarded a contract to Navini Networks, and has now launched the 3.5 GHz WiMAX service in Port Harcourt. The ISP also awarded a contract to Gilat Satellite Networks to provide a SkyEdge broadband satellite hub and several hundred VSAT terminals. The VSAT network uses multiple frequency bands, including C band, and its hub will be deployed at Hyperia's network operations center in London (U.K.). The VSAT network will enable Hyperia to expand its services in West Africa and to provide multiple services including broadband IP, telephony, and videoconferencing. Transmission networks are the crux of the whole Nigerian telecommunications sector. New mobile operators, fixed-wireless access (FWA) operators, and ISPs require a robust national transmission backbone to link base stations, mobile switches, and POPs together with long-haul upstream bandwidth. In the absence of a reliable backbone, satellite has provided transmission and backhaul capacity, as well as international connectivity. NITEL has access to Sat-3/WASC and is gradually upgrading its national transmission capacity. Globacom launched its "Glo Xpress" long-distance transmission service in July 2005, and private operators continue to invest heavily in the roll-out of their own microwave and wireless networks. The regulator, the Nigerian Communications Commission (NCC), has liberalized the long-distance market, and in addition to the two national carriers, there are now seven national long-distance operators (NLDOs) that are investing in the roll-out of fiber backbones in different regions of the country. The mobile operators have also deployed their own independent national fiber and microwave backbones, and the unified access license granted to the mobile operators would allow them to commercialize these backbones. A key factor has been the entry of foreign operators into this market, through the acquisition of Nigerian operators. Nigeria has several Gbit/s of international bandwidth, of which NITEL was serving only some 310 Mbit/s using the Sat-3/WASC cable, even though this represents only a fraction of the capacity that it owns on the cable. The balance of Nigeria's international bandwidth is provided entirely by satellite. Of up to six new submarine cables that would land in Nigeria, contracts have been awarded for two: Glo-1 and Main-1 (the others include AWCC, WAFS, Infinity, and Uhurunet). Glo-1 is expected to enter service in 2009 and Main-1 from May 2010. In the medium term, the introduction of new submarine cables and national backbones linking Lagos to inland cities and towns will squeeze satellite operators, which have met the demand for services and filled the vacuum for data infrastructure. Following is a list of key broadband networks underway: National Carriers: Both national fixed-line operators, NITEL and Globacom, and the leading national mobile operators, MTN and Zain, are rolling out their own fiber-optic network infrastructure. Alheri Engineering Co Ltd: Alheri Engineering, which was awarded one of the 3G licenses, has a two-pronged strategy. First, it will provide a carriers' carrier service to other operators, and plans to roll out a fiber transmission network in three phases. The first phase runs from Benin to Warri, the second from Benin to Port Harcourt, and the third from Aba to Jos. The second prong is to offer 3G services, and in May 2007 the company submitted a bid for the fourth mobile operator, M-Tel.
Backbone Connectivity Network Ltd (BCN): BCN began the roll-out of a fiber backbone in the northern states of Nigeria in July 2006. The first phase involves the roll-out of 700 kilometers of fiber from Abuja (the federal capital) to Kano via Kaduna and Zaria. The second phase will form a ring running from Kano via Katsina, Malumfashi, Funtua, Zaria, Abuja, Akwanga, Jos, Bauchi, Gombe, Biu, Yola, Bamboa, Maiduguri, Damaturu, Potaskum, Azare, and Dutse back to Kano. The third phase will run from Sokoto to Abuja via Birnin Kebbi, Kamba, Yelwa Yauri, Kontagora, Bida, and Minna. Multi-Links Telecommunications Co Ltd (MLTC): Multi-Links, which was acquired by South African incumbent Telkom SA in March 2007, is deploying a fiber network in the south-west of the country, and has a fiber backbone running from Lagos to Abuja. Multi-Links had deployed some 2,500 kilometers of fiber-optic network by mid-2008, completed a metro Ethernet ring in Lagos, and will roll out Ethernet rings in Kano, Kaduna and the Niger Delta region. Phase3 Telecom: Phase3 Telecom won a 15-year concession in March 2006 to operate the fiber-optic cables strung over the high-voltage electricity transmission network of the Power Holding Company of Nigeria (PHCN) in the west of the country. The current fiber network included in the concession is some 1,500 kilometers in length, including a 900-kilometre stretch from Lagos to the federal capital, Abuja (via Ibadan, Oshogbo, Jebba, Shiriro, and Minna), and the operator plans to expand this to 3,000 kilometers over an 18-month period. Phase3 Telecom plans to offer wholesale and long-distance carrier services to other licensed operators, including mobile operators, private telecommunication operators, ISPs, and others. In December 2008, Phase 3 entered into a deal facilitated by the Economic Community of West African States (ECOWAS) to extend its carrier network to Benin, Togo, Ghana, Côte d'Ivoire, and Senegal using the high-voltage power transmission lines in these ECOWAS states. According to Phase 3 Telecom's CEO Stanley Jegede, the project is part of ECOWAS' Intelcom II program to interconnect West African countries by fiber-optic networks. Suburban Telecoms: Suburban Telecoms has formed a joint venture with Ocean and Oil Holdings (OOH) Ltd to deploy a fiber MAN along a gas pipeline currently being laid in Lagos by Gaslink. Suburban awarded a contract to Huawei and ATC that covers the deployment of 3,500 kilometers of fiber-optic cable. Suburban Telecoms has ambitions to operate MANs not just in Lagos but also in other major Nigerian cities. The operator has a three-phased approach: in the first phase, it will connect major cities including Lagos, Ibadan, Abuja, Kano, Kaduna, Benin, Makurdi, Enugu, Onitsha, and Port Harcourt; in the second and third phases it will connect secondary population centers between these cities, with extensions to other cities after that. Victoria Garden City Communications Ltd (VGC Communications): VGC had a customer base of 20,000 in 2007, and operates a fiber MAN in Lagos, where it has laid some 120 kilometers of fiber-optic cables connecting the suburb of Victoria Garden City with Victoria Island, Ikoyi, Ikeja, Marina, Costain, and Mayfair Gardens. Submarine Cables: In addition to the Sat-3/WASC submarine cable that lands at Lagos, contracts have been awarded for at least two more cables.
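As a sanity check on the capacity figures quoted for these cables below, standard SDH arithmetic can be applied. An STM-64 circuit has a line rate of about 9.953 Gbit/s (this rate is the standard SDH figure, not a number taken from the text), so the 32 STM-64 circuits cited for Glo-1 imply

$$32 \times 9.953\ \text{Gbit/s} \approx 318.5\ \text{Gbit/s}$$

which is consistent with the roughly 318.4 Gbit/s total quoted for that cable in the next paragraph.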
NITEL retains exclusivity on access to Sat-3/WASC, so the introduction of competition is expected to increase capacity dramatically and drive down prices. Glo-1: Globacom, the SNO and mobile operator, announced in August 2008 that its Glo-1 submarine cable, which will run from the United Kingdom to Lagos and Port Harcourt (Nigeria), was "nearing completion" and would be finished by May 2009. Alcatel-Lucent began laying the Glo-1 cable from Bude in May 2007 and by October 2007 it had been deployed as far as Senegal. In August 2008, Globacom said that the final phase was due to begin shortly and would see the remaining section of cable laid from Lagos northwards to reach the other half of the cable in Senegal. The 9,330 km submarine cable will have a capacity of 32 STM-64 circuits (totaling 318.4 Gbit/s). A dedicated link will also be leased through the trans-Atlantic Apollo 2 cable from Cornwall (U.K.) to the United States. Globacom had initially secured access to Sat-3/WASC through a leasing agreement with NITEL, but abandoned this and in August 2004 unveiled a plan to build its own private submarine cable, called Glo-1, from Lagos to the United Kingdom. Main-1: Main Street Technologies, a Nigerian company, announced in April 2008 that it had awarded a supply contract to Tyco Telecommunications to build its Main One submarine cable. The 14,000-kilometre cable has a capacity of 1.28 Tbit/s and will be built in two phases that are both scheduled to be completed in May 2010: phase 1 will connect Nigeria, Ghana, and Portugal, and phase 2 will connect to Angola and South Africa. The Main One cable is a private system that will provide open access to regional telecom operators and ISPs. Other submarine cables: At least five other cables are planned to land in Nigeria. The Africa West Coast Cable (AWCC) would run from South Africa to Europe via Nigeria; the West African Festoon System (WAFS) would run from Angola to Nigeria, connecting those countries that do not have Sat-3 landing points; the Infinity Worldwide Group of Companies plans a West African cable that would include Nigeria; NEPAD plans a cable, called Uhurunet, which would encircle the whole continent and have a landing point in Nigeria; and the proposed ACE (Africa Coast to Europe) cable would run from Gabon to France via Nigeria and 19 other African countries.

South Africa

The government passed the Electronic Communications Act in 2006 and is dramatically restructuring the sector towards a converged framework, converting vertically integrated licenses previously granted to PSTN, mobile, USAL, PTN and VANS operators into new Electronic Communications Network Services (ECNS), Electronic Communications Services (ECS), or broadcasting licenses. In January 2009, the Independent Communications Authority of South Africa (ICASA) granted ECS and ECNS licenses to over 500 VANS operators. The market is thus moving away from the old-style, vertically integrated segments of the 1996 Telecommunications Act and 2001 Telecommunications Amendment Act towards horizontal service layers, with pre-existing licenses converted into new Individual or Class ECNS, ECS, or broadcasting licenses. Licenses are also required for radio frequency spectrum, except for very low power devices.
ICASA granted ECNS licenses in December 2007 to seven new under-serviced area license (USAL) operators. The new licensees include PlatiTel, Ilembe Communications, Metsweding Telex, Dinaka Telecoms, Mitjodi Telecoms, and Nyakatho Telecoms. South Africa had an estimated 6 million internet users in 2008, and the number of fixed (wireless and wireline) broadband subscribers is estimated at 750,000, split between Telkom's ADSL service (491,774 subscribers by 30 September 2008) and broadband wireless services provided by WBS, Sentech and others. South Africa's total international bandwidth reached the 10 Gbit/s mark during 2008, and its continued increase is being driven primarily by the uptake of broadband and the lowering of tariffs. Three new submarine cable projects will bring more capacity to South Africa from 2009—the SEACOM cable is expected to enter service in June 2009, and supply contracts have been awarded for both the EASSy and WACS cables. While the internet user base has grown, the growth curve of paying internet accounts is recognized to be flattening; dial-up subscribers are migrating to broadband, and then escalating to higher-bandwidth packages as they become available. The South African market is split into two main tiers: top-tier internet access providers, and downstream retail ISPs. ISPs were licensed as value-added network service (VANS) providers, although under the Electronic Communications Act of 2006, these licenses were converted in January 2009 to individual or class electronic communication service (ECS) licenses. All domestic ISPs gain international connectivity through one of the internet access providers: SAIX (Telkom), Neotel, Verizon Business, The Internet Solution, MTN Network Solutions, DataPro and Posix Systems. Following the deregulation of the VANS industry in South Africa, a number of leading operators have diversified from being top-tier ISPs to becoming converged communications service providers offering a range of voice and data services, particularly voice over IP, through the conversion of VANS licenses into ECS licenses. The total number of wireless broadband subscribers overtook that of fixed broadband subscribers in South Africa during 2007; the total number of broadband subscribers was estimated at 750,000 by late 2007, of which Telkom reported 335,112 ADSL subscribers. Wireless Business Solutions (WBS) launched the iBurst system in South Africa in late 2004, and grew its subscriber base from 2,500 in February 2005 to 45,000 by July 2007. Sentech had about 4,000 MyWireless subscribers in 2007, up from 2,500 in 2006. With delays to local loop unbundling (LLU), which would give ISPs access to exchanges, operators are deploying a range of broadband wireless networks. While the mobile operators are deploying HSDPA, W-CDMA and EDGE networks and entering the broadband space, operators are also deploying WiMAX, iBurst, and CDMA systems. Telkom, Sentech, Neotel, WBS and the under-serviced area licensees (USALs) have been given commercial WiMAX licenses. Telkom launched full commercial WiMAX services in June 2007, first at 14 sites in Pretoria, Cape Town and Durban, with a further 57 sites planned for 2007/8.
Another 10 operators, including M-Web and Vodacom, were granted temporary test licenses and are awaiting spectrum to be allocated by ICASA. In May 2008, WBS partnered with Vodacom and Intel Corporation to roll out an 802.16e WiMAX network. The key upcoming development is that supply contracts have been awarded for three submarine cables that will land in South Africa—SEACOM, EASSy, and WACS. The data sector is a key area for growth in both the corporate and residential data markets. South Africa is currently served by two submarine cables: SAT-2 and the SAT-3/WASC/SAFE system. Contracts have been awarded for the following three additional submarine cables that will land in South Africa from June 2009: SEACOM: The SEACOM submarine cable, landing at Mombasa, is due to enter commercial service in June 2009. The cable runs from South Africa to Egypt via Mozambique, Madagascar, Tanzania, Kenya, Djibouti and Saudi Arabia, connecting eastwards through to India and westwards through the Mediterranean. East African Submarine Cable System (EASSy): The planned EASSy cable will run from South Africa (Mtunzini) to Egypt via Mombasa (Kenya) and other East African countries. The cable will run as far north as Djibouti and Port Sudan, with onward connectivity to Europe provided by the Europe India Gateway (EIG) cable. In March 2007, the 23-member consortium behind EASSy signed a supply contract with Alcatel-Lucent. West African Cable System (WACS): In April 2009, the WACS consortium signed a construction and maintenance agreement, and awarded a supply contract to Alcatel-Lucent for a 14,000-kilometre cable to provide connectivity between South Africa, Portugal and the United Kingdom via 11 other African countries. With a minimum design capacity of 3.84 Tbit/s, WACS will connect South Africa to the United Kingdom with landings in 12 countries: Namibia, Angola, DRC, Congo, Cameroon, Nigeria, Togo, Ghana, Côte d'Ivoire, Cape Verde, the Canary Islands, and Portugal. The WACS consortium comprises 11 parties: Angola Telecom, Broadband Infraco, Cable & Wireless, MTN, Portugal Telecom, Sotelco, Tata Communications, Telecom Namibia, Telkom SA, Togo Telecom and Vodacom. The cable is expected to be ready for service during 2011.

Uganda

On 13 October 2009, Uganda's ICT Minister Aggrey Awori announced the establishment of the Uganda Broadband Infrastructure Strategy Team (UBIST). The team, he said, comprises representatives from government, the regulator, operators, internet service providers, other investors and consumers, civil society and academia. UBIST will provide the government with a well-informed position for enabling it to access international broadband infrastructure. The government is also studying a number of policies for the effective implementation of the National Broadband Strategy.

Asia

Hong Kong, S.A.R. of China

The Digital 21 Strategy was first published in 1998 by the Government of the Hong Kong Special Administrative Region to set out the government's vision of developing Hong Kong into a leading digital city. As a living document, updated in 2001 and 2004, it has taken into account the evolving needs of the community and technological advancements. The 2008 edition notes that Hong Kong offers the world's most affordable Internet connection and mobile phone services with penetration rates among the highest in the world.
Cyberport and Hong Kong Science Park have been developed as strategic hubs bringing together clusters of high-tech information and communications technology (ICT) companies and professional talent from all over the world. The Government is pursuing a vigorous e-government program that has achieved good progress over the years. The Digital 21 Strategy sets out a vision of building on Hong Kong's position as a world digital city by advancing its achievements and seizing new opportunities. The realization of the Digital 21 Strategy vision requires the participation of the entire community, including the ICT industry, business sectors, academia and the general public. As an integral part of the Strategy, key indicators of Hong Kong's ICT development will be measured and tracked over time for public reference. The Office of the Government Chief Information Officer (OGCIO) is the focal point in the Government for dialogue with the public on the Strategy, for coordinating with all parties within the Government on its implementation, and for tracking progress on an annual basis.

India

The typical speeds for consumer broadband connections in India vary from 512 kbit/s to 12 Mbit/s, with speeds of up to 100 Mbit/s available in a few areas (mostly on university campuses). Plans of 8–16 Mbit/s and above are becoming more common from BSNL, Airtel, Beam Telecom, Reliance and MTNL, but they are very expensive and out of reach of most Indians; the demand in India is for affordable 1 Mbit/s plans with no download limit. The price of broadband in India is very high compared to Europe or other parts of Asia, with a 1 Mbit/s connection costing between US$20 and $30 per month. Because of this, broadband has yet to filter down to the masses: there were 8.03 million broadband subscribers as of December 2009, or about 9.9% of the estimated 81 million total Internet subscribers. In addition to the high prices, many providers have introduced a Fair Usage Policy (FUP) on "unlimited" plans, while data plans still have low data transfer limits (typically 100 GB), after which speed is reduced. For example, Airtel offers 100 Mbit/s, 200 Mbit/s, and 300 Mbit/s plans, which fall to 512 kbit/s once the FUP limit is reached.

Domestic Fiber

The National Internet Backbone (NIB), owned and operated by the government, is the largest backbone in the country. Bharti (Airtel), GAILTel, Railtel, Reliance Communications and Tata Communications also own and operate large amounts of backbone fiber throughout the country.

International Connectivity

Indonesia

In April 2009, Indonesian telecommunication officials announced the start of an auction process for 2.3 GHz radio frequency spectrum for broadband wireless access. Only companies or their subsidiaries that are majority Indonesian-owned are able to compete, per the 2007 Investment Law and accompanying Negative List. The Department of Communications and Information (DEPKOMINFO) laid out the following process for interested bidders: 29 April – 5 May, pick up bidding documents; 6 May, submit written questions; 11–20 May, submit pre-qualification documents; 29 May, announce pre-qualification results; 1–5 June, pre-qualification appeal period; 3–9 June, pre-auction clarification; 10–12 June, implement three-round auction; 12 June, announce auction results; 15–16 June, auction appeal period; 17 June, confirmation of auction results. Indonesian officials plan to hold future auctions for the 3.3 and 3.5 GHz frequencies. No date is set for these future auctions.
Earlier this year, DEPKOMINFO issued a ministerial decree instructing operators using those frequencies to migrate to other frequencies. Because of the Indonesian Negative List on Foreign Investment, U.S. firms will not be able to participate in the auction unless they partner with local companies to create a subsidiary that is majority Indonesian-owned. DEPKOMINFO officials said Indosat may not be able to compete because it is majority owned by a Qatari firm; the official added that a majority Indonesian-owned subsidiary of Indosat could still participate in the auction.

Japan

In 2001, the Japanese Cabinet released the e-Japan Priority Policy Programme. It stated that the private sector is to play the leading role in information technology, and that the government's role is to implement an environment in which markets function smoothly through the promotion of fair competition and the removal of unnecessary regulations. It also stated that the government must play an active role in areas in which the private sector's activities do not fulfill the goals of facilitating e-government, closing the digital divide and promoting research and development. The e-Japan program extended tax incentives and budgetary support for carriers building fiber-optic networks. To implement this program, the Ministry of Public Management, Home Affairs, Posts and Telecommunications (MPHPT; it later changed its name to the Ministry of Internal Affairs and Communications, or MIC) pursued two policies: the National Broadband Initiative, which mandates that federal and local governments deploy fiber to underserved areas; and the e-Japan Strategy, which set forth the goal of providing access at affordable rates by 2005 to high-speed Internet networks for at least 30 million households, and to ultra-high-speed Internet networks for 10 million households. Japan reached this goal with a broadband household penetration rate of 41.7 percent in 2004. These policies provided $60 million to municipalities investing in local public broadband networks, as well as low-interest loans to carriers to encourage them to build other broadband networks, including DSL, wireless, and cable systems. The loans were made through the Development Bank of Japan and the Telecommunications Advancement Organisation, both of which were largely funded by the government of Japan. In 2004, MIC issued its "Ubiquitous Japan" (u-Japan) policy. Its goal is to achieve, by 2010, a ubiquitous network society in which anything and anyone can easily access networks and freely transmit information from anywhere at any time. MIC is reviewing its regulations regarding broadband as part of its work towards achieving that goal.

South Korea

In February 2009, the Korea Communications Commission (KCC) announced plans to upgrade the national network to offer 1 Gbit/s service by 2012, an upgrade from the prior 100 Mbit/s guarantee. The plan was expected to cost 34.1 trillion won (US$24.6 billion) over the following five years, with the central government putting up 1.3 trillion won and the remainder coming from private telecom operators. The project is also expected to create more than 120,000 jobs. In November 2006, the government had announced it would invest 26.6 trillion won (US$28.3 billion) to upgrade networks—including fiber-to-the-home (FTTH), optical LAN and hybrid fiber-coaxial cable—in the country over the following four years. The government aims to upgrade a total of 20 million subscriber lines—10 million lines each for fixed and wireless services.
The government is expecting industry to contribute funds toward the national upgrading project. The decision to focus on broadband began in the mid-1990s and intensified after South Korea's economy was crippled by the collapse of the Asian financial markets in 1997, when policy makers targeted technology as a key sector for restoring the country's economic health. Korean regulators set a path for the industry with well-publicized national goals. All big office and apartment buildings would be given a fiber connection by 1997. By 2000, 30 percent of households would have broadband access through DSL or cable lines. By 2005, more than 80 percent of households would have access to fast connections of 20 Mbit/s or more—about the rate needed for high-definition television. Most of the country's consumers were already served by the dominant carrier Korea Telecom, but the government encouraged competitors with a low-interest loan program for companies that built their own broadband facilities. The program offered $77 million in two years alone, with a particular focus on rural areas. The government offered other incentives for Korea Telecom. Once a state-owned monopoly, the company began the transition to private hands in 1993. But the government, which retained some shares until 2002, allowed the process to become final only on the condition that Korea Telecom bring broadband to all the villages in the country. The government also offered Internet training for the portion of the population deemed likely to be left behind in the digital age. About 10 million people fell into this category in the first round of the government's initiative, including stay-at-home wives, military personnel, disabled citizens, and even prison inmates. That program was ultimately expanded to anyone who wanted it. In 2004, a consortium that included the now-defunct Ministry of Information and Communication and private-sector telecommunication and cable firms, including KT, Hanaro Telecommunications, and others, started to build a major infrastructure project called the Broadband Convergence Network (BcN). This infrastructure has been launched as a three-phase project: the first phase extended from 2004 through 2005, the second from 2006 through 2007, and the third from 2008 through 2010. The timeline for the project has since been extended. A major focus of the South Korean program is WiBro, to offer seamless 100 Mbit/s hybrid networking. Its "Heterogeneous Network Integration Solution (HNIS) technology... weds 3G/4G service with any open Wi-Fi network to deliver speeds many times faster than North Americans can get from their wireless providers. The technology is designed to work without a lot of consumer intervention. For example, HNIS will automatically provision open Wi-Fi access wherever subscribers travel. The combination of mobile broadband with Wi-Fi works seamlessly as well. Currently, smartphones can use Wi-Fi or mobile data, but not both at the same time... While mobile operators cope with spectrum and capacity issues, HNIS can reduce the load on wireless networks, without creating a hassle for wireless customers who used to register with every Wi-Fi service they encountered. The theoretical speed of an HNIS-enhanced 3G and Wi-Fi connection in South Korea will be 60Mbps when SK Telecom fully deploys the technology this year. As SK expands the technology to its 4G networks, theoretical maximum speeds will increase to 100Mbps."
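HNIS itself is proprietary, and no implementation details appear in the passage above. The following is only a minimal illustrative sketch of the general idea it describes—splitting traffic across a cellular link and a Wi-Fi link in proportion to their capacity. The per-link rates are assumptions chosen so the sum matches the quoted 60 Mbit/s combined figure; they are not SK Telecom parameters.

# Illustrative sketch only (not SK Telecom's HNIS): divide offered load across
# two radio links in proportion to their nominal capacity.
def split_traffic(total_mbps, link_rates):
    """Return each link's share of the offered load, proportional to capacity."""
    total_capacity = sum(link_rates.values())
    return {name: total_mbps * rate / total_capacity
            for name, rate in link_rates.items()}

# Assumed nominal rates: a 20 Mbit/s cellular link plus a 40 Mbit/s Wi-Fi link,
# which together give the 60 Mbit/s theoretical figure quoted above.
links = {"cellular": 20.0, "wifi": 40.0}
print(split_traffic(45.0, links))  # {'cellular': 15.0, 'wifi': 30.0}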
SK plans to equip all of its smartphones with the new technology starting in 2013. See heterogeneous network for more on the technology.

Lebanon

The National Broadband Strategy is a project initiated by the Partnership for Lebanon. The broadband plan would bring larger, high-speed communication pipes that would allow Lebanese citizens to have faster access to information and change the way they live, work, play, and learn. Lebanon's outdated communications infrastructure puts Lebanese industry at a competitive disadvantage, costing jobs, decreasing revenue, and slowing economic growth. The Partnership for Lebanon is working with the Lebanese government and business leaders to modernize Lebanon's communications infrastructure through innovation and investment. In so doing, the Partnership is helping Lebanon update technology, reduce connectivity costs, and improve ICT quality across the board. The Partnership is working specifically with the Telecommunications Regulatory Authority (TRA) to develop a national broadband strategy designed to bring broadband to Lebanon's urban and rural communities. As part of this effort, the Partnership is conducting broadband business analysis, developing network architecture options, and crafting a regulatory framework to facilitate the successful implementation of a modern communications infrastructure. The Partnership is also working with the government-owned telecom operator, Ogero, to increase Lebanon's international bandwidth capacity. The Partnership has provided Ogero with an Internet Exchange Point and is assembling the equipment needed to install two state-of-the-art international Internet gateways. To help educate local stakeholders about the benefits of a modern communications infrastructure, the Partnership, in coordination with the LBSG Committee, which represents economic councils, private-sector leaders and professional associations, recently launched an advertising campaign. The objective of this campaign is to raise awareness and educate the Lebanese public on the need for a True Broadband Infrastructure in Lebanon, which will encourage economic growth and social development. The campaign's second objective is to drive people to sign the manifesto.

Malaysia

High Speed Broadband (HSBB) is a broadband service that offers bandwidth delivered at network speeds of 10 Mbit/s and above, compared with normal broadband (Broadband to the General Population, or BBGP), which delivers bandwidth through wired and wireless technologies at network speeds ranging between 256 kbit/s and 4 Mbit/s. Eventually Malaysia will see basic HSBB packages offering network speeds between 20 and 50 Mbit/s, and up to 100 Mbit/s, for consumers, while businesses will have maximum network speeds of up to 1 Gbit/s. HSBB is best understood in the context of broadband deployment in Malaysia as a whole, which is carried out using two approaches: normal broadband (BBGP), delivered via wired (DSL) and wireless technologies (WiMAX, WiFi, 3G/HSDPA), and HSBB. BBGP (via both wired and wireless modes) is deployed nationwide, while HSBB (available only through the wired mode) will initially be concentrated in the Klang Valley, Iskandar Malaysia and key industrial zones throughout the country. It is expected that 1.3 million premises will be able to access HSBB coverage by the end of 2012.
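To put network speeds like these in perspective, the time needed to move a given amount of data scales inversely with the line rate. The short calculation below uses illustrative figures only: the 700 MB file size is a hypothetical example, the rates are nominal tier speeds like those quoted above, and real-world throughput is lower.

# Illustrative only: minutes needed to transfer a 700 MB file at nominal rates.
def transfer_minutes(size_mb, rate_mbit_s):
    """Size in megabytes, rate in Mbit/s; returns minutes, ignoring overheads."""
    return size_mb * 8 / rate_mbit_s / 60

for rate in (1, 4, 20, 100):  # Mbit/s, spanning the BBGP and HSBB tiers
    print(f"{rate:>3} Mbit/s: {transfer_minutes(700, rate):5.1f} min")
# Output: 1 Mbit/s ~ 93.3 min; 4 ~ 23.3; 20 ~ 4.7; 100 ~ 0.9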
Singapore

The Singaporean government will provide up to S$750 million (US$520 million) in grants to build the Next Generation National Broadband Network. It will be wireline and wireless, and will have speeds ranging from 100 Mbit/s to 1 Gbit/s. The network will be open to all service providers, threatening SingTel's and StarHub's market dominance. The government will not, however, prevent companies from building their own networks. Bidding is taking place in two stages: first for the passive infrastructure, and then for the active infrastructure. In September 2008, IDA selected OpenNet, in which SingTel has a 30 percent stake, to design, build, and operate the passive infrastructure. Wireless@SG, launched on 1 December 2006, is a wireless broadband programme developed by IDA Singapore as part of its Next Generation National Infocomm Infrastructure initiative. Wireless@SG is powered by the networks of three wireless operators: iCell, M1 Limited and SingTel. It will be provided free to all Singapore residents and visitors until 31 March 2013. Users can enjoy free, seamless indoor and outdoor wireless broadband access at speeds of up to 1 Mbit/s in most public areas.

Taiwan

Over the past ten years, the Taiwan authorities have pursued a series of ICT infrastructure development projects, beginning with the "E-Government" initiative in 2000 that aimed to create more efficient, networked public services. The authorities expanded E-Government to include "E-Society," "E-Industry," and "E-Opportunity" initiatives under 2002's "E-Taiwan" plan. According to James Lo, Section Chief in the National Communications Commission (NCC) Department of Planning, the Taiwan authorities and private-sector partners spent over US$10 billion on broadband development from 2003 to 2007. By the end of 2007, there were six million broadband internet accounts in Taiwan.

Thailand

Some of the nation's most powerful telecommunications executives and the regulatory agency, the National Telecommunications Commission (NTC), met for the first time on 2 July 2009 to formulate a plan for Meaningful Broadband. The plan calls for engaging the prime minister and a spectrum of Thai ministries to establish the role of broadband in achieving public-policy reforms in the Abhisit government. The event, held at the Oriental Hotel, was the first meeting of the Meaningful Broadband Working Group, led by Craig Warren Smith, a visiting professor at Chulalongkorn University's Center for Ethics of Science and Technology. Sponsored by the NTC, the event saw the release of a white paper on Meaningful Broadband. The report rejects the path to broadband favored by Singapore and other advanced nations, which serves affluent citizens who can afford high-speed internet. Instead, it calls for a new "broadband ecosystem" for Thailand that is focused primarily on the Middle of the Pyramid (MOP), a middle-income group of Thais who make from $2 to $7 per day. By bringing 28 million of these MOP Thais into subsidized, meaningful mobile broadband applications, Smith predicts a "wealth effect" that could bring equity and sustainability to the Thai economy. Responding to the framework, Khun Supachai was one of several members of the group who advocated a follow-up study to prepare for a meeting with Prime Minister Abhisit along with the ministers of Finance, Education, ICT and other relevant parties.
"We need to figure out the roles of government, the regulator and the telecommunications operators in establishing broadband that brings optimal benefits to Thailand." Supachai, agreed to be host and sponsor of further research in preparation of the next meeting of the Working Group to be held in September. "Along with painting the big picture of how broadband could serve the nation, we should focus specifically how it can serve education and human resources development," said Montchai Noosong, executive vice president of TOT. "Central to the 'meaningful' idea is a new approach to Ethics, said Chulalongkorn University Soraj Hongladarom. "We want Thailand to develop a way to help users choose broadband applications that will lead them to happiness not addiction," he said. Oceania Australia On 15 September 2009, the Minister for Broadband, Communications and the Digital Economy, Senator Stephen Conroy, announced fundamental reforms to Australian's telecommunications landscape. In April 2009, the Australian Government announced that it would establish a new company that will invest up to $43 billion over 8 years to build and operate a wholesale-only, open access National Broadband Network. The new network will provide fiber optic to the home and workplace, supplemented with next generation wireless and satellite technologies to deliver superfast broadband services. The Government plans to sell down its ownership of the company – NBN Co. Ltd. – 5 years after the network is built. Again in April 2009, the Government released a discussion paper entitled "National Broadband Network: Regulatory Reform for 21st Century Broadband". The paper is based on public comments on the NBN. The paper appears to be similar to an FCC NPRM or NOI. It outlines the method of establishing the NBN and also sketches general regulatory reforms to assist the market in the future. To facilitate fiber build-out, the government will simplify land right of way procedures. The Australian Government had previously (in 2007) planned to subsidize a privately operated fiber-to-the-node project. The collapse of capital markets altered that plan. New Zealand The government has two plans to bring fast broadband to 97.8% of the population by 2019. The first is the NZ$1.35bn Ultra-Fast Broadband project where fibre to the home will be available to 75% of the population. The second is the NZ$300M Rural Broadband Initiative to upgrade rural telephone exchanges and to deploy fixed 3G networks. See also Digital divide Global Internet usage Internet access References Notes Sources External links World differences in broadband prices Broadband International telecommunications
Pine64
Pine Store Limited, known by its trade name Pine64 (styled as PINE64), is a Hong Kong-based organization which designs, manufactures and sells single-board computers, notebook computers and smartphones. While Pine Store Ltd. is a legal for-profit entity, it operates much like a non-profit organization in the sense that it does not draw profits from most device sales, operates with volunteers, and reinvests income from sales back into the company. Its name was inspired by the mathematical constants pi and e, with a reference to 64-bit computing power.

History

Pine64 initially operated as Pine Microsystems Inc. (Fremont, California), founded by TL Lim, the inventor of the PopBox and Popcorn Hour series of media players sold under the Syabas and Cloud Media brands. In 2015, Pine Microsystems offered its first product, the Pine A64, a single-board computer designed to compete with the popular Raspberry Pi in both power and price. The A64 was first funded through a Kickstarter crowdfunding drive in December 2015 which raised over $1.7 million. The Kickstarter project was overshadowed by delays and shipping problems. The original Kickstarter page referred to Pine64 Inc., based in Delaware, but all devices for the Kickstarter campaign were manufactured and sold by Pine Microsystems Inc., based in Fremont, California. Pine Microsystems Inc. was dissolved in January 2020, while Pine Store Limited had been incorporated on December 5, 2019 in Hong Kong. As of late 2020, the standard form contract of pine64.com binds all orders to the laws of Malaysia, while the products are shipped from warehouses in Hong Kong and Shenzhen, China.

Devices

After the initial Kickstarter orders for the Pine A64 single-board computers had been satisfied, the company went on to create several successors, and later also added notebooks and a smartphone to the "Pine" family.

Single-board computers

The original Pine A64 boards released in 2016 are powered by the Allwinner A64 system-on-chip. It features a 1.2 GHz quad-core ARM Cortex-A53 64-bit processor, an ARM Mali-400 MP2 graphics processing unit, one HDMI 1.4a port, one MicroSD slot, two USB 2.0 ports and a 100 Megabit Ethernet port. The A64 board has only 512 megabytes of RAM; the 1 GB and 2 GB versions are labeled "Pine A64+". While the 512 MB model only works with Arch Linux and Debian GNU/Linux distributions such as Armbian or DietPi, the A64+ with more memory can also run other operating systems including Android, Remix OS, Windows 10, FreeBSD, and Ubuntu. Optional eMMC storage modules can be plugged into special headers on the board. A compute module called SOPINE A64 was introduced in January 2017. It features the same system-on-chip as the Pine A64, but mounted on a DDR3 SODIMM form-factor board without the USB/HDMI/Ethernet connectors. It competes with the Raspberry Pi Compute Modules. Pine64 sells a "Clusterboard" with an inbuilt eight-port Gigabit Ethernet switch which can be used to build a cluster system out of up to seven SOPINE modules. A review by Hackaday noted problems with production quality, software, and user support. 2017 also saw the addition of a "Long Term Supply" (LTS) version of the Pine A64/A64+ boards called "Pine A64/A64(+)-LTS". The LTS versions are identical to the A64/A64+, but are guaranteed to be available until the year 2022 at a slightly higher cost. In July 2017, the company added a new line of single-board computers based on Rockchip SoCs.
The ROCK64 features a Rockchip RK3328 quad-core ARM Cortex-A53 64-bit processor, a Mali-450 MP2 GPU capable of playing 4K HDR videos, 1/2/4 gigabytes of RAM, two USB 2.0 and one USB 3.0 ports, one HDMI 2.0 port, a Gigabit Ethernet port, a MicroSD slot and several other peripheral ports. Its larger brother, the ROCKPro64, is instead based on a Rockchip RK3399 hexa-core (dual ARM Cortex-A72 and quad ARM Cortex-A53) 64-bit processor. It features a Mali-T860 quad-core GPU and, in addition to the standard USB/Ethernet/HDMI/MicroSD ports, also has an eDP interface and an open-ended PCI Express x4 slot. An optional PCI Express to dual SATA-II adapter and an optional Wi-Fi module are offered by Pine64. In 2019, a new Allwinner-based board was added as a direct competitor to the Raspberry Pi 3 Model B+. The Pine H64 is based on the Allwinner H6 quad-core ARM Cortex-A53 64-bit processor. It features a Mali-T720 GPU, two or three gigabytes of RAM, two USB 2.0 and one USB 3.0 ports, one HDMI 2.0 port, onboard 802.11n Wi-Fi, a Gigabit Ethernet port, a MicroSD slot and several other peripheral ports.

Notebook computers

In November 2016 the Pinebook, a netbook built around an Allwinner A64 SoC with 2 GB of RAM and a 16 GB eMMC module, was announced. Pre-release comments in Make noted that the A64's closest analog was two to three times the A64's price, and that the A64 continued the Raspberry Pi's trend of breaking barriers for engineers. Production started in April 2017. The Pinebook can only be obtained via a build-to-order system; potential buyers have to wait weeks or even months for an order code, which then has to be redeemed within 72 hours. The hardware is priced at US$99, but due to a US$30 shipping fee and country-dependent import duties and taxes, the final price is higher. The Pinebook was notably used by the KDE team to improve Plasma on ARM desktops. In a review of final hardware by Linux.com, the reviewer was surprised at being able to run the full, albeit slow, MATE desktop environment at the A64's price. Phoronix's benchmarks indicated CPU performance similar to a Raspberry Pi 3. In July 2019 the company announced the PineBook Pro, a netbook based around the Rockchip RK3399 SoC which is also used in the ROCKPro64. The preorder system went live on July 25, 2019. The device is priced at US$199, though the final price after shipping and import duties/taxes is higher. On March 15, 2020, it was announced that the PineBook Pro would ship with the Arch Linux-based Manjaro Linux as the default operating system.

Smartphone

Pine64 is working on a Linux smartphone, the PinePhone, using a quad-core ARM Cortex-A53 64-bit system-on-chip (SoC). The aim is for the phone to be compatible with any mainline Linux kernel and to "support existing and well established Linux-on-Phone projects", as a community-developed smartphone. After an initial BraveHeart release for early adopters in February 2020, the company continued releasing Community Editions that incrementally improve the design. Community support has been very good, with 17 different OSes already released for the device. In October 2021, the company announced the PinePhone Pro, based on a binned RK3399 SoC with additional RAM and eMMC storage, as well as higher-resolution cameras.

Smartwatch

In September 2019, Pine64 announced the PineTime smartwatch. It is meant as a community-driven software development smartwatch platform and is positioned as a complementary device to the PinePhone.
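Like most single-board computers, the Pine64 boards and laptops described above boot from an OS image written to a microSD card or eMMC module. The snippet below is a generic, minimal sketch of that step, not a Pine64-documented procedure; the image filename and the /dev/sdX device path are placeholders, and the copy requires root privileges.

# Generic sketch: raw-copy a downloaded OS image onto a microSD card on Linux.
# "pine-board.img" and "/dev/sdX" are placeholders -- writing to the wrong
# device will destroy its contents, so double-check the device path first.
CHUNK = 4 * 1024 * 1024  # copy in 4 MiB chunks

with open("pine-board.img", "rb") as src, open("/dev/sdX", "wb") as dst:
    while chunk := src.read(CHUNK):
        dst.write(chunk)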
Tablet

In May 2020, Pine64 announced the PineTab tablet, with an optional detachable backlit keyboard. It is a 10" tablet based on the same technology as the PinePhone, but without the modem and kill switches of that model. In August 2021, the company announced the PineNote, a 10" tablet with a Rockchip RK3566 and 4 GB of RAM, the same configuration used for the new Quartz64 SBCs. The tablet features a 227 DPI touchscreen E Ink display panel that also includes a Wacom digitizer layer for stylus support.

Soldering Iron

In July 2020, Pine64 announced the Pinecil soldering iron, based on the TS100 soldering iron.
Evesham Technology
Evesham Technology was a computer manufacturing and retail company based in Evesham, Worcestershire, England. It began operations in 1983 and closed in 2008 following financial difficulties. It was a significant contributor to the United Kingdom's domestic computer and digital television market. Its assets grew to include a factory and warehouse complex, and a chain of 19 retail stores in towns and cities throughout the UK, with around 300 employees. The company was founded in 1983 by Richard Austin and Robert (Bob) Hitchcock as Evesham Micros, later known briefly as Evesham.com and finally as Evesham Technology. Austin continued as chairman and controlling shareholder until the company and its much-reduced, short-lived successor, Geemore Technology Ltd, went into liquidation.

Company history

Initially specialising in Amstrad computers, Evesham began trading in 1983 as Evesham Micros, and for a short while as Evesham.com. It expanded its activities to include desktop and laptop computers, software, components and peripherals, servers, storage and networking, as well as gaming systems, LCD televisions, digital TV recorders, and satellite navigation. The sale of Amstrad computers at competitive prices contributed to the initial success of the company. It developed and licensed a number of peripherals and upgrades for the ZX Spectrum (48K memory upgrade, Interface III cheat/copy cartridge) and Commodore 64 (Freeze Frame cheat/copy cartridge, Dolphin DOS disk drive accelerator, Oceanic disk drive replacement) in the second half of the 1980s. The company continued to sell computers at discounted prices, retailing an end-of-line stock Olivetti PC that became the UK's first PC to be priced under £300. In December 1991, it cleared remaining stocks of Amstrad's PC 2000 range, including 286 and 386DX machines selling for well under £1,000. Evesham became the UK's largest supplier of Atari ST computers, also selling them at discounted prices, having purchased large stocks. It also sold a range of software and peripherals for Amiga and Atari ST computers. Many of the products it designed were innovative and received favourable reviews in the computer press. The company moved premises from Bridge Street to St Richards Road in 1989, opened a manufacturing plant, and began marketing under the brand name of Zydec. In early 1992 Evesham released, under the name Vale, its first home-produced PC, intended to be sold as part of the Zydec product range. The Zydec name was later revived for a range of low-cost PCs sold briefly alongside the Vale range. The company expanded, and acquired a number of industrial units on the Four Pools Industrial Estate. In 1998, Evesham relocated to purpose-built facilities at Vale Park. In 2002 it merged with Mertec, a company established for over 20 years in Swansea, South Wales, that held long-term supply and service contracts with some of the largest Welsh institutions. In mid-2004 Evesham and FalconStor Software, Inc., a leading developer of network storage infrastructure software solutions, announced an OEM agreement to supply the UK market with FalconStor's iSCSI Storage Server for Windows Storage Server 2003(R) (iSCSI Storage Server), with Evesham's SilverSTOR iSAN family of products. By 2007, Evesham Technology had more than 300 employees and offered a wide range of self-manufactured, imported, and assembled products that included desktops, laptops, servers, plasma and LCD displays, media PCs and NAS servers, peripherals, and networking equipment.
Its products were marketed through its own chain of 19 retail outlets in UK towns and cities including Altrincham, Birmingham, Bristol, Cambridge, Glasgow, Ipswich, London, Milton Keynes, Norwich, Nottingham, Peterborough, Reading, Southampton, Swansea, and Tunbridge Wells. It was a founding member of the PC Association and was one of the few major suppliers recommended by BECTA, the UK's educational ICT advisory organisation.

Financial problems and closure

Evesham had invested heavily in the UK government's Home Computing Initiative, and encountered financial problems when the Labour government announced the sudden withdrawal of the scheme in 2007, leading to the appointment of Leonard Curtis as administrators. In August 2007, the company received a help package in the form of an investment of $22 million from PCC Technology, a Dubai-based investment fund controlled by Time Group founder Tahir Mohsan. The chain of retail stores was closed and Evesham continued to trade as the newly formed Geemore Technology Ltd., a much smaller and restructured company under the directorship of Austin, with about one third of the Evesham workforce. Customers were not immediately affected by the change. Geemore operated from the former Evesham premises at Vale Park, and the Elite Logo Company, a dormant business owned by Tahir Mohsan and Tariq Mohammed, was registered as secretary of the new firm. In February 2008, PCC Technology issued a press release saying that the restructuring of the failed Evesham Technology was complete and had gone well under the circumstances. It also indicated that it was looking to resell the Evesham brand, which was trading under the new company, Geemore Technology Ltd. After a lack of interest from potential buyers, however, the company finally closed down towards the end of March, when the three-month contracts of all remaining Evesham staff who had stayed on at Geemore expired. All consumer orders were fulfilled or deposits refunded, and a large number of staff had been retained by Evesham while its operations were reduced in a controlled manner. Warranties, however, lost their validity, as Mohsan's PCC did not provide for continued customer support under the terms of the original product guarantees. A group of former staff from Evesham, headed by former Evesham employee Robin Daunter, started Tewktech, a company selling computers and offering repairs for a variety of brands including Evesham Technology.
Reply Corporation
Reply Corporation, often shortened to Reply Corp., was an American computer company based in San Jose, California. Founded in 1988 by Steve Petracca, the company licensed the Micro Channel architecture from IBM for its own computers released in 1989, competing against IBM's PS/2 line. The company later divested from offering complete systems in favor of marketing motherboard upgrades for older PS/2s. Reply enjoyed a close relationship with IBM, owing to many of its founding employees, including Petracca, having worked for IBM. The company was acquired by Radius in 1997.

History

Foundation

Reply Corp. was founded by Steve V. Petracca (born 1951 in Honolulu). Prior to founding Reply, Petracca worked at International Business Machines from 1976 to 1988, starting out at the company's Boulder, Colorado, office as an industrial engineer in various capacities. There he graduated from CU Boulder with a bachelor's degree in history. In 1980, Petracca moved to Boca Raton, Florida, to work at IBM's facility there, managing the start-up of the first production line for the IBM PC. Petracca graduated from Nova Southeastern University with an MBA and was promoted to manager of new product operations in the mid-1980s, handling the release strategy and ramp-up of the Personal System/2—IBM's intended successor to the PC—before being promoted to manager of business analysis for the Entry Systems Division sometime around 1987. In 1987, Petracca moved to White Plains, New York, where he worked as manager of systems technology, which encompassed IBM's RISC-based workstations, printers, displays and Personal Systems. Petracca quit IBM in 1988, dissatisfied with a culture he perceived as promoting the creation of needless business units and an obsession with data visualization. Interviewed by Inc., he said, "We would get into meetings and spend more time arguing over whether to use pie charts or bar charts than over the content of the data". That year, he moved to San Jose, California, and started Reply Corporation. Petracca obtained the funding to start his company from friends and family, as well as from his severance package from IBM. He poached several IBM employees for his startup, many of whom were on the development team of the PS/2. Petracca and his employees spent a year devising the company's first products, a line of desktop computers based on IBM's Micro Channel architecture: the Reply 286/16, the Reply 286/20, and the 386SX/16. According to InformationWeek, Reply was the first company to license Micro Channel—a bus architecture which IBM introduced with the PS/2 and which Petracca helped launch—for a PS/2 clone. However, it was beaten to market with a PS/2 clone by Tandy Corporation, which released the 5000 MC in 1988. The Reply computers were introduced ahead of the 1989 COMDEX/Fall in November, with the company releasing the 286/16 and 286/20 the following month. These two machines competed with the Models 50 and 60, mid-range computers in the PS/2 line which IBM introduced in 1987. Those PS/2s featured Intel 80286 microprocessors clocked at 10 MHz, while the Reply models had 80286 processors clocked at 16 MHz and 20 MHz respectively. The 386SX/16 had an 80386SX clocked at 16 MHz. Reply touted the modularity of these computers, arranging their cases in a so-called "5×5" design: five drive bays and five expansion slots. Additionally, the microprocessors were located on daughterboards connected to the motherboard.
This daughterboard approach, which Reply termed the TurboProcessor, meant that the computers could be upgraded with faster processors over the lifespans of their motherboards. Reply included this feature to address the concerns of existing PS/2 owners, who feared that their investments were fast becoming obsolete. Reply was the only maker of PS/2 clones with 286 processors; according to PCWeek, MCA-equipped IBM PS/2s with these processors were skipped over by all but the most budget-conscious corporate buyers, making these Reply models relatively unpopular compared to the 386SX/16, which Reply released in 1990.

IBM partnership

In October 1990, Reply joined thirteen other makers of MCA machines—including IBM—in an alliance to push Micro Channel as the sole standard for 32-bit computers. This alliance, named the Micro Channel Developers Association, was intended to compete with the so-called Gang of Nine, an aggregate of PC clone manufacturers unofficially led by Compaq and Hewlett-Packard, who were backing the Extended Industry Standard Architecture directly against IBM. Also in that month, Reply released its first Micro Channel expansion card, the Token Ring Adapter/A, as well as its next generation of 32-bit Micro Channel machines with TurboProcessor, the Model 32. The most expensive computer in this line, featuring an i486 clocked at 33 MHz, sold for $12,895. A comparably equipped PS/2, the Model 90 XP, sold for $16,695 at the same time. Like the Reply Model 32, the PS/2 Model 95 XP featured an upgradable processor slot, which IBM termed the "processor complex". Unlike with Reply's TurboProcessor, IBM forbade buyers from upgrading their own computers, requiring authorized service personnel to perform the upgrade for a fee. The Reply Model 32 also supported industry-standard SIMMs for RAM, whereas IBM used proprietary RAM modules for its PS/2 Model 95. Reply further strengthened its relationship with IBM upon signing a license in May 1991 that allowed it to equip its MCA machines with official IBM SCSI and ESDI hard drives, the Model M keyboard and PC DOS, as well as OS/2 Extended Edition. In August 1992, IBM signed an agreement with Reply that allowed the latter to offer computers with IBM's 386SLC and 486SLC processors, making Reply the first company to offer computers with IBM silicon based on Intel's x86 architecture. Because IBM's contract with Intel forbade IBM from selling its 386SLC on the open market as a standalone part, IBM had to manufacture the processor on a TurboProcessor board. The Model 16 386SLC, released in August 1992 and replacing Reply's earlier Model 16 386SX/20, featured this IBM-built TurboProcessor. The upgrade resulted in an over-twofold performance boost at the same price point as the 386SX, according to Petracca, who as a result shelved further TurboProcessor releases featuring Intel's 16-bit processors in favor of IBM's.

Restructuring

Reply laid off 40 of its 100 employees in October 1992, prompted by a $5 million loss amid the fierce price war in the PC industry ushered in by Compaq. Reply surveyed its customers as to how the company should reinvent itself and decided to phase out manufacturing complete machines, instead releasing upgrade motherboards for existing IBM PS/2s.
These motherboards, which Reply marketed as the TurboProcessor System Upgrade (later as the PowerBoard), were released starting in December 1992, allowing modern processors such as IBM's own 386SLC and Intel's i486DX2 to be installed in late-1980s-issue PS/2s, of which three to four million were estimated to be still in regular use in large companies. The upgrade motherboards also allowed these PS/2s, which normally supported ESDI drives only, to support PATA drives, which were plentiful compared to the former owing to their use in almost all IBM PC-compatible systems of the day. Also in December, Reply released their last complete computer, the Model 16 486SLC2, featuring IBM's 486SLC2 clocked at 40 MHz, upgradable to Intel's i486SX2 at 50 MHz. In April 1993, IBM signed a deal with Reply to resell the latter's TurboProcessor Upgrade cards under IBM's branding. Later that year, Reply ventured into designing computers for other companies, engineering the motherboard for IBM's PS/2 Model 53, as well as designing another MCA computer from the ground up for Olivetti. The company also announced several MCA expansion cards, including graphics and sound upgrades and an IDE drive adapter. In addition, they teased a PowerPC upgrade board for the PS/2, spurred by developments in the contemporaneous alliance between Apple, Motorola and IBM. This came to fruition in 1994 as the MPC105, a PowerPC 603-powered motherboard with PCI and ISA slots which Reply manufactured, albeit ultimately not as a PS/2 upgrade. By the end of 1993, Reply posted a profit of $3 million. Petracca had been given $14 million in venture capital by five backers some years earlier to fund Reply's growth, but by August 1994 the capital had run dry. Petracca turned to a private equity firm in an attempt to make the company more profitable. Reply branched out from releasing upgrade motherboards exclusively for IBM machines with the release of the Deskpro PowerBoard. Released in December 1994, these boards were designed for Compaq Deskpros with 286 and 386 processors, allowing them to be fitted with processors from the i486SX2 up to the DX4, as well as having Pentium OverDrive sockets. The company followed this up with upgrade boards for Compaq's ProLinea machines in 1995, as well as boards for IBM's PS/1s and PS/ValuePoints. According to Ward's, Reply had by 1995 recovered their original workforce of 100 employees. In December that year, Reply released the DOS on Mac expansion card for Apple's Power Macintosh 8100, allowing it to run MS-DOS and Windows applications off a Cyrix 5x86 or an Intel DX4. In 1996, they released an updated version of these cards, featuring Intel Pentium processors clocked at 166 MHz to 200 MHz, for Macintosh computers. Acquisition By 1997, Reply was down to 25 employees, for unclear reasons. In April that year, Reply sold their DOS on Mac technology to Radius, a hardware company based in Sunnyvale, California. This sale and the loss of Reply's employees put the company's longevity into question. Reply's remaining 25 employees moved into Radius' office in Sunnyvale, with ten leading engineers, including Petracca, securing permanent positions within Radius. Radius acquired Reply outright later that year.
Products Computers Motherboards Expansion cards Token Ring Adapter/A MicroChannel Video Adapter MicroChannel Sound Card with SCSI MicroChannel Sound Card without SCSI DOS on Mac Citations References External links 1997 mergers and acquisitions American companies established in 1988 American companies disestablished in 1997 Computer companies established in 1988 Computer companies disestablished in 1997 Defunct companies based in the San Francisco Bay Area Defunct computer companies based in California Defunct computer companies of the United States Defunct computer hardware companies
35630432
https://en.wikipedia.org/wiki/Solid%20PDF%20Tools
Solid PDF Tools
Solid PDF Tools is a document reconstruction software product which allows users to convert PDFs into editable documents and create PDFs from a variety of file sources. The same technology used in the software's Solid Framework SDK is licensed by Adobe for Acrobat X. Features Solid PDF Tools supports conversion from the following formats into PDF: Microsoft Word (.docx and .doc), Rich Text Format (.rtf), Microsoft Excel (.xlsx), XML (.xml), Microsoft PowerPoint (.pptx), HTML (.html), and plain text (.txt). Solid PDF Tools recognizes columns, can remove headers, footers and image graphics, and can extract flowing text content. Selective content extraction is supported, allowing the conversion of specific text, tables, or images from a PDF file while also providing for the combination of multiple PDF tables into a single Excel worksheet. Batching is supported for converting multiple PDF documents at the same time or for combining multiple PDF documents into a single file, and compression options allow users to reduce the size of a PDF, optimizing it for a variety of media. PDF Creation PDFs can be created either from an application's print function or through Solid PDF Tools' drag-and-drop WYSIWYG interface. Users are also able to set document properties, permissions, and passwords upon creation. Scanning, Archiving Solid PDF Tools users may scan a paper document into a PDF/A-1b file for archiving purposes; by relying upon the built-in Microsoft Office Document Imaging (MODI) Optical Character Recognition (OCR) software, the document becomes keyword-searchable. The software also converts existing PDF documents into PDF/A-1b files, making them searchable, consistent, and archivable, with PDF/A validation to help ensure they reproduce consistently in the future. Editing Solid PDF Tools allows users to edit PDFs by adding custom watermarks and reordering pages. It also detects hyperlinks and supports text mark-up recovery. History Originally launched in 2003 by Solid Documents, Solid PDF Tools has become known as an alternative to Adobe Acrobat and is used by businesses worldwide. In August 2008, version 2.0 added user interfaces in French, Chinese, and Spanish. In December 2010, version 7 was released, offering several feature enhancements including the ability to convert PDF files into .docx, .xlsx or .pptx formats without requiring the user to have Microsoft Office. Additional features include table formatting improvements, text mark-up recovery, and extraction from PDF to .csv files. Version 9.0 allows scanned PDF data recovery into Microsoft Excel and offers improved conversion technology and feature integration. See also List of PDF software References External links Solid PDF Tools website Creation Software Reviews Utah Computer Society Monthly Report June 2009 PDF/A-1 Validation Testing PDF software
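Conceptually, the batch combining described under Features amounts to concatenating the pages of several source files into one output document. A minimal sketch of that same operation using the unrelated open-source pypdf library follows; the file names are placeholders, and Solid PDF Tools' own proprietary engine works differently.

from pypdf import PdfWriter

# Concatenate several source PDFs into one output file.
writer = PdfWriter()
for path in ["part1.pdf", "part2.pdf", "part3.pdf"]:  # placeholder inputs
    writer.append(path)  # appends every page of the source document
writer.write("combined.pdf")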
51818
https://en.wikipedia.org/wiki/IBM%20Informix
IBM Informix
IBM Informix is a product family within IBM's Information Management division that is centered on several relational database management system (RDBMS) offerings. The Informix products were originally developed by Informix Corporation, whose Informix Software subsidiary was acquired by IBM in 2001. In April 2017, IBM and HCL Technologies (Products & Platforms Division) agreed to a long-term, 15-year partnership to co-develop, support, and market the product. IBM has delegated active development and support to HCL and shares marketing of the key Informix products with HCL. The current version of Informix is 14.10 and forms the basis of several product editions with variation in capacity and functionality. The Informix database has been used in many high-transaction-rate OLTP applications in the retail, finance, energy and utilities, manufacturing and transportation sectors. More recently the server has been enhanced to improve its support for data warehouse workloads. The Informix server supports the object–relational model, which has permitted IBM to offer extensions that support data types that are not a part of the SQL standard. The most widely used of these are the JSON, BSON, time series and spatial extensions, which provide both data type support and language extensions that permit high-performance, domain-specific queries and efficient storage for data sets based on semi-structured, time series, and spatial data. Key products The current version of IBM Informix is 14.10. The major enhancements over previous releases were built-in index compression, integration of JSON collections (with support for MongoDB JSON drivers) into the server, and the ability to partition database objects across multiple servers in a cluster or grid (also known as sharding). Queries can optionally return data from the locally connected server instance or from an entire grid with the same SQL. Informix version 14.10 introduced support for partial indexing, where only a subset of the rows in a table is indexed, and for multi-valued key indexes, which support indexing the elements within multi-valued data types such as LIST, SET, MULTISET, and BSON array fields. Heterogeneous clusters are fully supported, and several deployment options are available, including some that provide very high levels of data redundancy and fault tolerance. This feature is marketed by IBM as Informix Flexible Grid. Informix is offered in a number of editions, including free developer editions, editions for small and mid-sized business, and editions supporting the complete feature set and designed to be used in support of the largest enterprise applications. There is also an advanced data warehouse edition of Informix. This version includes the Informix Warehouse Accelerator, which uses a combination of newer technologies including in-memory data, tokenization, deep compression, and columnar database technology to provide extremely high performance on business intelligence and data warehouse style queries. Informix TimeSeries is a unique feature of the database system that allows for efficient and fast manipulation of time series data, such as that generated by devices like smart electric meters, or as found in financial trading systems with time-stamped stock 'ticks'. This type of data is not well suited to storage or use in the standard SQL style of data organization. Positioning IBM has several database products with capabilities which overlap in some areas.
Informix is often compared to IBM's other major database product, DB2, which is offered on the mainframe zSeries platform as well as on Windows, Unix and Linux. Speculation that IBM would combine Informix with DB2, or with other database products, has proven to be unfounded. IBM has instead continued to expand the variety of database products it offers, such as Netezza, a data warehouse appliance, and Cloudant, a NoSQL database. IBM has described its approach to the market as providing "workload optimized systems." Informix is generally considered to be optimized for environments with very low or no database administration, including use as an embedded database. It has a long track record of supporting very high transaction rates and providing the uptime characteristics needed for mission-critical applications such as manufacturing lines and reservation systems. Informix has been widely deployed in the retail sector, where the low administration overhead makes it useful for in-store deployments. With the ability to deeply embed Informix in gateways and routers, time series support, a small footprint, and low administration requirements, Informix is also targeted at Internet-of-Things solutions, where many of the data-handling requirements can be handled by gateways that embed Informix and connect sensors and devices to the internet. In April 2017 IBM announced that it was outsourcing development of Informix to the Indian IT specialists HCL, and that a number of IBM employees working on Informix would also move to HCL. As part of this arrangement IBM will continue to market and sell Informix to their customers. Other products In addition to the products based on the version 14.1 engine, the IBM Informix family also includes a number of legacy database products which are still supported in the market. These include Informix OnLine, Informix Standard Edition (SE) and Informix C-ISAM. These products are simpler and smaller-footprint database engines that are also frequently embedded in third-party applications. Collectively these products are often referred to as the "Informix Classics". The IBM Informix family also includes a client-side development environment, the Client-SDK, which supports a number of different environments including .NET for Windows developers and a variety of protocols for Unix and Linux environments. Obsolete and non-IBM Informix heritage products Plans IBM has long-term plans for both Informix and DB2, with both databases sharing technology with each other, although IBM has continually denied plans to merge the two products. Training and certification IBM Training includes a complete set of core Data Servers Training courses that apply to Informix. These courses delve into many essential Informix concepts, from fundamentals to advanced SQL topics. As part of IBM's Academic Initiative, IBM is offering Informix software, documentation and training to higher education institutions worldwide through its Informix on Campus program. IBM is offering an inclusive package of Informix materials to college faculty called "Informix In a Box", which offers hands-on labs and PowerPoints to use in lessons, recorded training for teachers, DVDs with class material and VMware virtual appliance images, as well as T-shirts for students. Users groups Users groups remain active in Belgium, Croatia, France, Germany, the United States, and many other countries. The IIUG (International Informix Users Group) acts as a federation of those user groups and provides numerous services to its members.
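As a rough illustration of the sharding capability described under Key products above, the sketch below models hash-based routing of rows across a grid of server instances, plus a query that can be answered either from one local instance or fanned out across the whole grid. This is a concept demonstration only: Informix configures sharding inside the server itself, not through application code like this, and all names here are illustrative.

from hashlib import sha256

SHARDS = 4
shards = {i: [] for i in range(SHARDS)}   # stand-ins for server instances

def shard_for(key: str) -> int:
    # Route a row to a shard by hashing its shard key.
    return int(sha256(key.encode()).hexdigest(), 16) % SHARDS

def insert(key: str, row: dict) -> None:
    shards[shard_for(key)].append(row)

def query_local(shard_id: int, pred) -> list:
    # Query answered only from one locally connected instance.
    return [r for r in shards[shard_id] if pred(r)]

def query_grid(pred) -> list:
    # The same logical query fanned out across the entire grid.
    return [r for s in shards.values() for r in s if pred(r)]

insert("meter-17", {"meter": "meter-17", "kwh": 3.2})
insert("meter-99", {"meter": "meter-99", "kwh": 1.1})
print(query_grid(lambda r: r["kwh"] > 2))   # finds the row wherever it lives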
See also List of relational database management systems Comparison of relational database management systems References External links Proprietary database management systems IBM subsidiaries Informix Relational database management systems Client-server database management systems RDBMS software for Linux Data companies
2279892
https://en.wikipedia.org/wiki/Wirth%27s%20law
Wirth's law
Wirth's law is an adage on computer performance which states that software is getting slower more rapidly than hardware is becoming faster. The adage is named after Niklaus Wirth, who discussed it in his 1995 article "A Plea for Lean Software". History Wirth attributed the saying to Martin Reiser, who in the preface to his book on the Oberon System wrote: "The hope is that the progress in hardware will cure all software ills. However, a critical observer may observe that software manages to outgrow hardware in size and sluggishness." Other observers had noted this for some time before; indeed, the trend was becoming obvious as early as 1987. Wirth cites two factors contributing to the acceptance of ever-growing software: "rapidly growing hardware performance" and "customers' ignorance of features that are essential versus nice-to-have". Enhanced user convenience and functionality supposedly justify the increased size of software, but Wirth argues that people are increasingly misinterpreting complexity as sophistication, that "these details are cute but not essential, and they have a hidden cost". As a result, he calls for the creation of "leaner" software, and he pioneered the development of Oberon, a software system developed between 1986 and 1989 and built from the bare hardware up. Its primary goal was to show that software can be developed with a fraction of the memory capacity and processor power usually required, without sacrificing flexibility, functionality, or user convenience. Other names The law was restated in 2009 and attributed to Larry Page, co-founder of Google. It has been referred to as Page's law. The first use of that name is attributed to Sergey Brin at the 2009 Google I/O Conference. Other common forms use the names of the leading hardware and software companies of the 1990s, Intel and Microsoft, or their CEOs, Andy Grove and Bill Gates, for example "What Intel giveth, Microsoft taketh away" and Andy and Bill's law: "What Andy giveth, Bill taketh away". Gates's law ("The speed of software halves every 18 months") is a variant on Wirth's law, borrowing its name from Bill Gates, co-founder of Microsoft. It is an observation that the speed of commercial software generally slows by 50% every 18 months, thereby negating all the benefits of Moore's law. This could occur for a variety of reasons: feature creep, code cruft, developer laziness, lack of funding, forced updates, forced porting (to a newer OS or to support a new technology), or a management turnover whose design philosophy does not coincide with that of the previous manager. May's law, named after David May, is a variant stating: "Software efficiency halves every 18 months, compensating Moore's law". See also Code bloat Feature creep Jevons paradox Minimalism (computing) No Silver Bullet Parkinson's law Software bloat Waste References Further reading Adages Computer architecture statements Computing culture Rules of thumb Technology strategy
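The cancellation claimed by Gates's law can be made concrete with a toy compounding model in Python. The 18-month doubling and halving rates are the adage's own stylized assumptions, not measurements.

# Hardware doubles every 18 months; software efficiency halves over the
# same period. Their product -- user-visible speed -- stays flat.
hardware = 1.0    # relative hardware throughput
efficiency = 1.0  # fraction of that throughput surviving software overhead

for period in range(1, 5):  # four 18-month periods = six years
    hardware *= 2.0
    efficiency /= 2.0
    print(f"after {18 * period} months: hardware x{hardware:.0f}, "
          f"effective speed x{hardware * efficiency:.1f}")
# Every line reports "effective speed x1.0": what Andy giveth, Bill taketh away.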
103586
https://en.wikipedia.org/wiki/Chosen-ciphertext%20attack
Chosen-ciphertext attack
A chosen-ciphertext attack (CCA) is an attack model for cryptanalysis where the cryptanalyst can gather information by obtaining the decryptions of chosen ciphertexts. From these pieces of information the adversary can attempt to recover the hidden secret key used for decryption. For formal definitions of security against chosen-ciphertext attacks, see, for example, Michael Luby and Mihir Bellare et al. Introduction A number of otherwise secure schemes can be defeated under chosen-ciphertext attack. For example, the ElGamal cryptosystem is semantically secure under chosen-plaintext attack, but this semantic security can be trivially defeated under a chosen-ciphertext attack. Early versions of RSA padding used in the SSL protocol were vulnerable to a sophisticated adaptive chosen-ciphertext attack which revealed SSL session keys. Chosen-ciphertext attacks have implications for some self-synchronizing stream ciphers as well. Designers of tamper-resistant cryptographic smart cards must be particularly cognizant of these attacks, as these devices may be completely under the control of an adversary, who can issue a large number of chosen ciphertexts in an attempt to recover the hidden secret key. It was not at all clear whether public-key cryptosystems could withstand the chosen-ciphertext attack until the initial breakthrough work of Moni Naor and Moti Yung in 1990, which suggested a mode of dual encryption with integrity proof (now known as the "Naor–Yung" encryption paradigm). This work made the understanding of the notion of security against chosen-ciphertext attack much clearer than before and opened the research direction of constructing systems with various protections against variants of the attack. When a cryptosystem is vulnerable to chosen-ciphertext attack, implementers must be careful to avoid situations in which an adversary might be able to decrypt chosen ciphertexts (i.e., avoid providing a decryption oracle). This can be more difficult than it appears, as even partially chosen ciphertexts can permit subtle attacks. Additionally, some cryptosystems (such as RSA) use the same mechanism to sign messages and to decrypt them. This permits attacks when hashing is not used on the message to be signed. A better approach is to use a cryptosystem which is provably secure under chosen-ciphertext attack, including (among others) RSA-OAEP, which is secure under the random oracle heuristic, and Cramer–Shoup, which was the first practical public-key system to be provably secure. For symmetric encryption schemes it is known that authenticated encryption, which is a primitive based on symmetric encryption, gives security against chosen-ciphertext attacks, as was first shown by Jonathan Katz and Moti Yung. Varieties Chosen-ciphertext attacks, like other attacks, may be adaptive or non-adaptive. In an adaptive chosen-ciphertext attack, the attacker can use the results from prior decryptions to inform their choices of which ciphertexts to have decrypted. In a non-adaptive attack, the attacker chooses the ciphertexts to have decrypted without seeing any of the resulting plaintexts. After seeing the plaintexts, the attacker can no longer obtain the decryption of additional ciphertexts. Lunchtime attacks A specially noted variant of the chosen-ciphertext attack is the "lunchtime", "midnight", or "indifferent" attack, in which an attacker may make adaptive chosen-ciphertext queries, but only up until a certain point, after which the attacker must demonstrate some improved ability to attack the system.
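The ElGamal vulnerability mentioned in the introduction shows concretely what access to such a decryption oracle buys an attacker. In the toy Python sketch below (deliberately insecure, illustrative parameters), the adversary never queries the challenge ciphertext itself, yet recovers its plaintext by exploiting the malleability of textbook ElGamal.

import random

p = 467                         # toy prime; real deployments use >= 2048-bit groups
g = 2                           # assumed generator, sufficient for this demonstration
x = random.randrange(2, p - 1)  # private key
h = pow(g, x, p)                # public key component h = g^x mod p

def encrypt(m):
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (m * pow(h, r, p)) % p

def decrypt(c1, c2):
    return (c2 * pow(c1, p - 1 - x, p)) % p  # c2 / c1^x mod p

challenge = encrypt(42)  # the ciphertext the adversary wants to break

def oracle(c1, c2):
    # Decryption oracle obeying the CCA rules: decrypts anything but the challenge.
    assert (c1, c2) != challenge, "oracle refuses the challenge itself"
    return decrypt(c1, c2)

c1, c2 = challenge
mauled = (c1, (2 * c2) % p)                        # a valid encryption of 2m
recovered = (oracle(*mauled) * pow(2, -1, p)) % p  # divide by 2 mod p (Python 3.8+)
print(recovered)                                   # prints 42: plaintext recovered

CCA-secure schemes rule this out by ensuring that any modified ciphertext is rejected or decrypts to something unrelated, which is exactly what the provably secure constructions mentioned above achieve.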
The term "lunchtime attack" refers to the idea that a user's computer, with the ability to decrypt, is available to an attacker while the user is out to lunch. This form of the attack was the first one commonly discussed: obviously, if the attacker has the ability to make adaptive chosen ciphertext queries, no encrypted message would be safe, at least until that ability is taken away. This attack is sometimes called the "non-adaptive chosen ciphertext attack"; here, "non-adaptive" refers to the fact that the attacker cannot adapt their queries in response to the challenge, which is given after the ability to make chosen ciphertext queries has expired. Adaptive chosen-ciphertext attack A (full) adaptive chosen-ciphertext attack is an attack in which ciphertexts may be chosen adaptively before and after a challenge ciphertext is given to the attacker, with only the stipulation that the challenge ciphertext may not itself be queried. This is a stronger attack notion than the lunchtime attack, and is commonly referred to as a CCA2 attack, as compared to a CCA1 (lunchtime) attack. Few practical attacks are of this form. Rather, this model is important for its use in proofs of security against chosen-ciphertext attacks. A proof that attacks in this model are impossible implies that any realistic chosen-ciphertext attack cannot be performed. A practical adaptive chosen-ciphertext attack is the Bleichenbacher attack against PKCS#1. Numerous cryptosystems are proven secure against adaptive chosen-ciphertext attacks, some proving this security property based only on algebraic assumptions, some additionally requiring an idealized random oracle assumption. For example, the Cramer-Shoup system is secure based on number theoretic assumptions and no idealization, and after a number of subtle investigations it was also established that the practical scheme RSA-OAEP is secure under the RSA assumption in the idealized random oracle model. See also Dancing on the Lip of the Volcano: Chosen Ciphertext Attacks on Apple iMessage (Usenix 2016) References Cryptographic attacks
6124942
https://en.wikipedia.org/wiki/Dada%20Kondke
Dada Kondke
Krishna "Dada" Kondke (Marathi pronunciation: [d̪aːd̪a koːɳɖke]; 8 August 1932 – 14 March 1998) was an Indian actor and film producer. He was one of the most renowned personalities in Marathi film industry, famous for his double entendre dialogues in movies. Kondke was born into a family owning a grocery shop and owners of chawls in Morbaug area of Mumbai which were let out. His family members were also foreman handling millworkers of Bombay Dyeing. Dada Kondke was entered in the Guinness Book of World Records for the highest number of films (nine) that achieved silver jubilee (running for 25 consecutive weeks). Kondke was called "Dada", an honorific Marathi term meaning "elder brother", which led to his popular name Dada Kondke. He was credited with introducing the genre of sex comedy to Marathi cinema and Indian cinema. Early life Kondke was a born to and raised in a Koli family of cotton-mill workers in a chawl in Naigaon, near Lalbaug, Mumbai. His family originally hailed from the village of Ingavali which was in the erstwhile Bhor State near Pune. Kondke and his migrant family retained close connections to their rural roots. As a youngster, Kondke was a rough kid who later on took up job in a local grocery retail chain called Apna Bazaar. He lost most of his immediate family to unfortunate events and the grieving process changed him profoundly. These events made him focus more on the lighter side of life and make people laugh. Kondke started his entertainment career with a band and then worked as a stage actor. While working for the drama companies, Kondke toured throughout Maharashtra which helped him understand the local population's taste in entertainment. Career Stage career Kondke was involved in cultural activities of Seva Dal, a Congress party volunteers organization, where he started working in dramas. During this period came in contact with various Marathi stage personalities including writer, Vasant Sabnis. Later, Kondke started his own theatre company, and approached Sabnis to compose a drama script for him. Sabnis appreciated Dada's performance in Khankhanpurcha Raja (Translation: Bankrupt King), and agreed to write a modern Marathi language Tamasha or Loknatya (folk play). The drama was named Vichha Majhi Puri Kara (Translation: Fulfill my Wish). The drama went on to play over 1500 shows all over Maharashtra and made Dada a star. Film career Vichha Majhi Puri Kara brought Kondke into spotlight and in 1969, he debuted in Marathi movies through a role in Bhalji Pendharkar's movie Tambdi Maati which won the National Film Award for Best Feature Film in Marathi. He then turned producer with Songadya in 1971. Songadya was based on a story written by Vasant Sabnis, and was directed by Govind Kulkarni. He cast himself as Namya, the simpleton who falls for the glamour of Kalavati (played by Usha Chavan) who is a dancer. Some of the other people who played major characters in this movie were Nilu Phule, Ganpat Patil, Sampat Nikam and Ratnamala. Kondke retained his team from Songadya and delivered his next hit Eakta Jeev Sadashiv. Kondke's story-lines were always based on the simpleton engaged in lower level occupations. For example, Kondke portrayed himself as a Dhobi (Laundry Man) in Aali Angavar, Poor Farmer in Songadya, and a Police Constable in Pandu Havaldar. Kondke is known for using the same team of actors, technicians and playback singers to repeat the formula for success that he believed he had got from his debut film. 
Many of his movies, produced under the "Kamakshi Pictures" banner, had Usha Chavan as the lead actress, Rajesh Mujumdar as screenplay writer (from Pandu Hawaldar onward), Raam Laxman as music director, Jayawant Kulkarni and later Mahendra Kapoor as the male playback singer, Usha Mangeshkar as the female playback singer, and Bal Mohite as the chief assistant. Kondke often employed the veteran actor-dancer Bhagwan Dada in dancing sequences in his films, such as Aali Angavar, Hyoch Navra Pahije, Bot Lavin Tithe Gudgulya, and Ram Ram Gangaram. Filmography Featured songs As a lyricist he wrote multiple songs about animals: "Manasa paras medhara bari" (meaning 'goats are much better than human beings') in the film Ekta Jeev Sadashiv "Labaad Landga Dhwang Kartay" (on the cunning of foxes) in Ekta Jeev Sadashiv "Chalara vaghya" (dog) in the film Tumcha Amcha Jamala "Jodi bailachi khillari" (bullocks) in the film Mala Gheun Chala "Bakricha samdyasni laglay lala" (goat) in the film Ram Ram Gangaram Bhajans "Anjanichyā Sutā Tulā Rāmācha Vardān" in the film Tumcha Amcha Jamala Political career Balasaheb Thackeray, leader of the Shiv Sena party, helped Kondke when Dev Anand's film Tere Mere Sapne was released and movie theatres replaced showings of Songadya with it. The move angered Marathi-speaking moviegoers, as many were eager to watch Kondke's film. The news of the replacement reached the Sena Bhavan, and after a meeting, party members and locals marched to the theatre to protest the move. Thackeray's justification for supporting Kondke was that he was a Marathi Māṇūs (Man). In return, Kondke, with Gajanan Shirke, helped found the Chitrapat Shakha. Kondke was impressed with Thackeray's charisma and toured Maharashtra to attract voters towards the Shiv Sena. Kondke was a very active member of the Shiv Sena and was able to influence many areas of rural Maharashtra due to his popularity and his fiery speeches, which impressed the masses. Personal life He was married to Nalini, but they later divorced; he did not remarry. On 14 March 1998, Kondke suffered a heart attack at his residence, Rama Niwas, in Dadar, Mumbai. He was rushed to Shushrusha Nursing Home, where he was declared dead on admission. At the time, Kondke was working on the film Jaraa Dheer Dhara with Usha Chavan. References External links The comic spirit State institutes Dada Kondke award Now even Kondke's book raises controversy Dada Kondke: End of an era Dada's Songs Direct Download Dada Kondke pictures More pictures of Dada Kondke from Marathi movies Koli people People from Pune district 1932 births 1998 deaths Male actors in Marathi cinema Marathi film directors 20th-century Indian male actors Male actors from Maharashtra Film producers from Maharashtra Maharashtra
19636
https://en.wikipedia.org/wiki/Mathematical%20logic
Mathematical logic
Mathematical logic is the study of formal logic within mathematics. Major subareas include model theory, proof theory, set theory, and recursion theory. Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic such as their expressive or deductive power. However, it can also include uses of logic to characterize correct mathematical reasoning or to establish foundations of mathematics. Since its inception, mathematical logic has both contributed to, and has been motivated by, the study of foundations of mathematics. This study began in the late 19th century with the development of axiomatic frameworks for geometry, arithmetic, and analysis. In the early 20th century it was shaped by David Hilbert's program to prove the consistency of foundational theories. Results of Kurt Gödel, Gerhard Gentzen, and others provided partial resolution to the program, and clarified the issues involved in proving consistency. Work in set theory showed that almost all ordinary mathematics can be formalized in terms of sets, although there are some theorems that cannot be proven in common axiom systems for set theory. Contemporary work in the foundations of mathematics often focuses on establishing which parts of mathematics can be formalized in particular formal systems (as in reverse mathematics) rather than trying to find theories in which all of mathematics can be developed. Subfields and scope The Handbook of Mathematical Logic in 1977 makes a rough division of contemporary mathematical logic into four areas: set theory model theory recursion theory, and proof theory and constructive mathematics (considered as parts of a single area). Additionally, sometimes the field of computational complexity theory is also included as part of mathematical logic. Each area has a distinct focus, although many techniques and results are shared among multiple areas. The borderlines amongst these fields, and the lines separating mathematical logic and other fields of mathematics, are not always sharp. Gödel's incompleteness theorem marks not only a milestone in recursion theory and proof theory, but has also led to Löb's theorem in modal logic. The method of forcing is employed in set theory, model theory, and recursion theory, as well as in the study of intuitionistic mathematics. The mathematical field of category theory uses many formal axiomatic methods, and includes the study of categorical logic, but category theory is not ordinarily considered a subfield of mathematical logic. Because of its applicability in diverse fields of mathematics, mathematicians including Saunders Mac Lane have proposed category theory as a foundational system for mathematics, independent of set theory. These foundations use toposes, which resemble generalized models of set theory that may employ classical or nonclassical logic. History Mathematical logic emerged in the mid-19th century as a subfield of mathematics, reflecting the confluence of two traditions: formal philosophical logic and mathematics. "Mathematical logic, also called 'logistic', 'symbolic logic', the 'algebra of logic', and, more recently, simply 'formal logic', is the set of logical theories elaborated in the course of the last [nineteenth] century with the aid of an artificial notation and a rigorously deductive method." Before this emergence, logic was studied with rhetoric, with calculationes, through the syllogism, and with philosophy. 
The first half of the 20th century saw an explosion of fundamental results, accompanied by vigorous debate over the foundations of mathematics. Early history Theories of logic were developed in many cultures in history, including China, India, Greece and the Islamic world. Greek methods, particularly Aristotelian logic (or term logic) as found in the Organon, found wide application and acceptance in Western science and mathematics for millennia. The Stoics, especially Chrysippus, began the development of predicate logic. In 18th-century Europe, attempts to treat the operations of formal logic in a symbolic or algebraic way had been made by philosophical mathematicians including Leibniz and Lambert, but their labors remained isolated and little known. 19th century In the middle of the nineteenth century, George Boole and then Augustus De Morgan presented systematic mathematical treatments of logic. Their work, building on work by algebraists such as George Peacock, extended the traditional Aristotelian doctrine of logic into a sufficient framework for the study of foundations of mathematics. Charles Sanders Peirce later built upon the work of Boole to develop a logical system for relations and quantifiers, which he published in several papers from 1870 to 1885. Gottlob Frege presented an independent development of logic with quantifiers in his Begriffsschrift, published in 1879, a work generally considered as marking a turning point in the history of logic. Frege's work remained obscure, however, until Bertrand Russell began to promote it near the turn of the century. The two-dimensional notation Frege developed was never widely adopted and is unused in contemporary texts. From 1890 to 1905, Ernst Schröder published Vorlesungen über die Algebra der Logik in three volumes. This work summarized and extended the work of Boole, De Morgan, and Peirce, and was a comprehensive reference to symbolic logic as it was understood at the end of the 19th century. Foundational theories Concerns that mathematics had not been built on a proper foundation led to the development of axiomatic systems for fundamental areas of mathematics such as arithmetic, analysis, and geometry. In logic, the term arithmetic refers to the theory of the natural numbers. Giuseppe Peano published a set of axioms for arithmetic that came to bear his name (Peano axioms), using a variation of the logical system of Boole and Schröder but adding quantifiers. Peano was unaware of Frege's work at the time. Around the same time Richard Dedekind showed that the natural numbers are uniquely characterized by their induction properties. Dedekind proposed a different characterization, which lacked the formal logical character of Peano's axioms. Dedekind's work, however, proved theorems inaccessible in Peano's system, including the uniqueness of the set of natural numbers (up to isomorphism) and the recursive definitions of addition and multiplication from the successor function and mathematical induction. In the mid-19th century, flaws in Euclid's axioms for geometry became known. In addition to the independence of the parallel postulate, established by Nikolai Lobachevsky in 1826, mathematicians discovered that certain theorems taken for granted by Euclid were not in fact provable from his axioms. Among these is the theorem that a line contains at least two points, or that circles of the same radius whose centers are separated by that radius must intersect. 
Hilbert developed a complete set of axioms for geometry, building on previous work by Pasch. The success in axiomatizing geometry motivated Hilbert to seek complete axiomatizations of other areas of mathematics, such as the natural numbers and the real line. This would prove to be a major area of research in the first half of the 20th century. The 19th century saw great advances in the theory of real analysis, including theories of convergence of functions and Fourier series. Mathematicians such as Karl Weierstrass began to construct functions that stretched intuition, such as nowhere-differentiable continuous functions. Previous conceptions of a function as a rule for computation, or a smooth graph, were no longer adequate. Weierstrass began to advocate the arithmetization of analysis, which sought to axiomatize analysis using properties of the natural numbers. The modern (ε, δ)-definition of limit and continuous functions was already developed by Bolzano in 1817, but remained relatively unknown. Cauchy in 1821 defined continuity in terms of infinitesimals (see Cours d'Analyse, page 34). In 1858, Dedekind proposed a definition of the real numbers in terms of Dedekind cuts of rational numbers, a definition still employed in contemporary texts. Georg Cantor developed the fundamental concepts of infinite set theory. His early results developed the theory of cardinality and proved that the reals and the natural numbers have different cardinalities. Over the next twenty years, Cantor developed a theory of transfinite numbers in a series of publications. In 1891, he published a new proof of the uncountability of the real numbers that introduced the diagonal argument, and used this method to prove Cantor's theorem that no set can have the same cardinality as its powerset. Cantor believed that every set could be well-ordered, but was unable to produce a proof for this result, leaving it as an open problem in 1895. 20th century In the early decades of the 20th century, the main areas of study were set theory and formal logic. The discovery of paradoxes in informal set theory caused some to wonder whether mathematics itself is inconsistent, and to look for proofs of consistency. In 1900, Hilbert posed a famous list of 23 problems for the next century. The first two of these were to resolve the continuum hypothesis and prove the consistency of elementary arithmetic, respectively; the tenth was to produce a method that could decide whether a multivariate polynomial equation over the integers has a solution. Subsequent work to resolve these problems shaped the direction of mathematical logic, as did the effort to resolve Hilbert's Entscheidungsproblem, posed in 1928. This problem asked for a procedure that would decide, given a formalized mathematical statement, whether the statement is true or false. Set theory and paradoxes Ernst Zermelo gave a proof that every set could be well-ordered, a result Georg Cantor had been unable to obtain. To achieve the proof, Zermelo introduced the axiom of choice, which drew heated debate and research among mathematicians and the pioneers of set theory. The immediate criticism of the method led Zermelo to publish a second exposition of his result, directly addressing criticisms of his proof. This paper led to the general acceptance of the axiom of choice in the mathematics community. Skepticism about the axiom of choice was reinforced by recently discovered paradoxes in naive set theory. 
Cesare Burali-Forti was the first to state a paradox: the Burali-Forti paradox shows that the collection of all ordinal numbers cannot form a set. Very soon thereafter, Bertrand Russell discovered Russell's paradox in 1901, and Jules Richard discovered Richard's paradox. Zermelo provided the first set of axioms for set theory. These axioms, together with the additional axiom of replacement proposed by Abraham Fraenkel, are now called Zermelo–Fraenkel set theory (ZF). Zermelo's axioms incorporated the principle of limitation of size to avoid Russell's paradox. In 1910, the first volume of Principia Mathematica by Russell and Alfred North Whitehead was published. This seminal work developed the theory of functions and cardinality in a completely formal framework of type theory, which Russell and Whitehead developed in an effort to avoid the paradoxes. Principia Mathematica is considered one of the most influential works of the 20th century, although the framework of type theory did not prove popular as a foundational theory for mathematics. Fraenkel proved that the axiom of choice cannot be proved from the axioms of Zermelo's set theory with urelements. Later work by Paul Cohen showed that the addition of urelements is not needed, and the axiom of choice is unprovable in ZF. Cohen's proof developed the method of forcing, which is now an important tool for establishing independence results in set theory. Symbolic logic Leopold Löwenheim and Thoralf Skolem obtained the Löwenheim–Skolem theorem, which says that first-order logic cannot control the cardinalities of infinite structures. Skolem realized that this theorem would apply to first-order formalizations of set theory, and that it implies any such formalization has a countable model. This counterintuitive fact became known as Skolem's paradox. In his doctoral thesis, Kurt Gödel proved the completeness theorem, which establishes a correspondence between syntax and semantics in first-order logic. Gödel used the completeness theorem to prove the compactness theorem, demonstrating the finitary nature of first-order logical consequence. These results helped establish first-order logic as the dominant logic used by mathematicians. In 1931, Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which proved the incompleteness (in a different meaning of the word) of all sufficiently strong, effective first-order theories. This result, known as Gödel's incompleteness theorem, establishes severe limitations on axiomatic foundations for mathematics, striking a strong blow to Hilbert's program. It showed the impossibility of providing a consistency proof of arithmetic within any formal theory of arithmetic. Hilbert, however, did not acknowledge the importance of the incompleteness theorem for some time. Gödel's theorem shows that a consistency proof of any sufficiently strong, effective axiom system cannot be obtained in the system itself, if the system is consistent, nor in any weaker system. This leaves open the possibility of consistency proofs that cannot be formalized within the system they consider. Gentzen proved the consistency of arithmetic using a finitistic system together with a principle of transfinite induction. Gentzen's result introduced the ideas of cut elimination and proof-theoretic ordinals, which became key tools in proof theory. Gödel gave a different consistency proof, which reduces the consistency of classical arithmetic to that of intuitionistic arithmetic in higher types. 
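For reference, the Russell's paradox construction mentioned at the start of this section fits in one line. In LaTeX notation, with R the collection of all sets that are not members of themselves:

% Unrestricted comprehension admits R, which is immediately contradictory:
R = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad \bigl( R \in R \iff R \notin R \bigr)

Zermelo's restricted separation axiom blocks the definition of such an R, which is the "limitation of size" idea referred to above.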
The first textbook on symbolic logic for the layman was written by Lewis Carroll, author of Alice in Wonderland, in 1896. Beginnings of the other branches Alfred Tarski developed the basics of model theory. Beginning in 1935, a group of prominent mathematicians collaborated under the pseudonym Nicolas Bourbaki to publish Éléments de mathématique, a series of encyclopedic mathematics texts. These texts, written in an austere and axiomatic style, emphasized rigorous presentation and set-theoretic foundations. Terminology coined by these texts, such as the words bijection, injection, and surjection, and the set-theoretic foundations the texts employed, were widely adopted throughout mathematics. The study of computability came to be known as recursion theory or computability theory, because early formalizations by Gödel and Kleene relied on recursive definitions of functions. When these definitions were shown equivalent to Turing's formalization involving Turing machines, it became clear that a new concept – the computable function – had been discovered, and that this definition was robust enough to admit numerous independent characterizations. In his work on the incompleteness theorems in 1931, Gödel lacked a rigorous concept of an effective formal system; he immediately realized that the new definitions of computability could be used for this purpose, allowing him to state the incompleteness theorems in generality that could only be implied in the original paper. Numerous results in recursion theory were obtained in the 1940s by Stephen Cole Kleene and Emil Leon Post. Kleene introduced the concepts of relative computability, foreshadowed by Turing, and the arithmetical hierarchy. Kleene later generalized recursion theory to higher-order functionals. Kleene and Georg Kreisel studied formal versions of intuitionistic mathematics, particularly in the context of proof theory. Formal logical systems At its core, mathematical logic deals with mathematical concepts expressed using formal logical systems. These systems, though they differ in many details, share the common property of considering only expressions in a fixed formal language. The systems of propositional logic and first-order logic are the most widely studied today, because of their applicability to foundations of mathematics and because of their desirable proof-theoretic properties. Stronger classical logics such as second-order logic or infinitary logic are also studied, along with non-classical logics such as intuitionistic logic. First-order logic First-order logic is a particular formal system of logic. Its syntax involves only finite expressions as well-formed formulas, while its semantics are characterized by the limitation of all quantifiers to a fixed domain of discourse. Early results from formal logic established limitations of first-order logic. The Löwenheim–Skolem theorem (1919) showed that if a set of sentences in a countable first-order language has an infinite model then it has at least one model of each infinite cardinality. This shows that it is impossible for a set of first-order axioms to characterize the natural numbers, the real numbers, or any other infinite structure up to isomorphism. As the goal of early foundational studies was to produce axiomatic theories for all parts of mathematics, this limitation was particularly stark. Gödel's completeness theorem established the equivalence between semantic and syntactic definitions of logical consequence in first-order logic.
It shows that if a particular sentence is true in every model that satisfies a particular set of axioms, then there must be a finite deduction of the sentence from the axioms. The compactness theorem first appeared as a lemma in Gödel's proof of the completeness theorem, and it took many years before logicians grasped its significance and began to apply it routinely. It says that a set of sentences has a model if and only if every finite subset has a model, or in other words that an inconsistent set of formulas must have a finite inconsistent subset. The completeness and compactness theorems allow for sophisticated analysis of logical consequence in first-order logic and the development of model theory, and they are a key reason for the prominence of first-order logic in mathematics. Gödel's incompleteness theorems establish additional limits on first-order axiomatizations. The first incompleteness theorem states that for any consistent, effectively given (defined below) logical system that is capable of interpreting arithmetic, there exists a statement that is true (in the sense that it holds for the natural numbers) but not provable within that logical system (and which indeed may fail in some non-standard models of arithmetic which may be consistent with the logical system). For example, in every logical system capable of expressing the Peano axioms, the Gödel sentence holds for the natural numbers but cannot be proved. Here a logical system is said to be effectively given if it is possible to decide, given any formula in the language of the system, whether the formula is an axiom, and one which can express the Peano axioms is called "sufficiently strong." When applied to first-order logic, the first incompleteness theorem implies that any sufficiently strong, consistent, effective first-order theory has models that are not elementarily equivalent, a stronger limitation than the one established by the Löwenheim–Skolem theorem. The second incompleteness theorem states that no sufficiently strong, consistent, effective axiom system for arithmetic can prove its own consistency, which has been interpreted to show that Hilbert's program cannot be reached. Other classical logics Many logics besides first-order logic are studied. These include infinitary logics, which allow for formulas to provide an infinite amount of information, and higher-order logics, which include a portion of set theory directly in their semantics. The most well studied infinitary logic is Lω1,ω. In this logic, quantifiers may only be nested to finite depths, as in first-order logic, but formulas may have finite or countably infinite conjunctions and disjunctions within them. Thus, for example, it is possible to say that an object is a whole number using a formula of Lω1,ω such as (x = 0) ∨ (x = 1) ∨ (x = 2) ∨ ⋯. Higher-order logics allow for quantification not only of elements of the domain of discourse, but also of subsets of the domain of discourse, sets of such subsets, and other objects of higher type. The semantics are defined so that, rather than having a separate domain for each higher-type quantifier to range over, the quantifiers instead range over all objects of the appropriate type. The logics studied before the development of first-order logic, for example Frege's logic, had similar set-theoretic aspects.
Although higher-order logics are more expressive, allowing complete axiomatizations of structures such as the natural numbers, they do not satisfy analogues of the completeness and compactness theorems from first-order logic, and are thus less amenable to proof-theoretic analysis. Another type of logics are fixed-point logics that allow inductive definitions, like one writes for primitive recursive functions. One can formally define an extension of first-order logic — a notion which encompasses all logics in this section because they behave like first-order logic in certain fundamental ways, but does not encompass all logics in general, e.g. it does not encompass intuitionistic, modal or fuzzy logic. Lindström's theorem implies that the only extension of first-order logic satisfying both the compactness theorem and the downward Löwenheim–Skolem theorem is first-order logic. Nonclassical and modal logic Modal logics include additional modal operators, such as an operator which states that a particular formula is not only true, but necessarily true. Although modal logic is not often used to axiomatize mathematics, it has been used to study the properties of first-order provability and set-theoretic forcing. Intuitionistic logic was developed by Heyting to study Brouwer's program of intuitionism, in which Brouwer himself avoided formalization. Intuitionistic logic specifically does not include the law of the excluded middle, which states that each sentence is either true or its negation is true. Kleene's work with the proof theory of intuitionistic logic showed that constructive information can be recovered from intuitionistic proofs. For example, any provably total function in intuitionistic arithmetic is computable; this is not true in classical theories of arithmetic such as Peano arithmetic. Algebraic logic Algebraic logic uses the methods of abstract algebra to study the semantics of formal logics. A fundamental example is the use of Boolean algebras to represent truth values in classical propositional logic, and the use of Heyting algebras to represent truth values in intuitionistic propositional logic. Stronger logics, such as first-order logic and higher-order logic, are studied using more complicated algebraic structures such as cylindric algebras. Set theory Set theory is the study of sets, which are abstract collections of objects. Many of the basic notions, such as ordinal and cardinal numbers, were developed informally by Cantor before formal axiomatizations of set theory were developed. The first such axiomatization, due to Zermelo, was extended slightly to become Zermelo–Fraenkel set theory (ZF), which is now the most widely used foundational theory for mathematics. Other formalizations of set theory have been proposed, including von Neumann–Bernays–Gödel set theory (NBG), Morse–Kelley set theory (MK), and New Foundations (NF). Of these, ZF, NBG, and MK are similar in describing a cumulative hierarchy of sets. New Foundations takes a different approach; it allows objects such as the set of all sets at the cost of restrictions on its set-existence axioms. The system of Kripke–Platek set theory is closely related to generalized recursion theory. Two famous statements in set theory are the axiom of choice and the continuum hypothesis. The axiom of choice, first stated by Zermelo, was proved independent of ZF by Fraenkel, but has come to be widely accepted by mathematicians.
It states that given a collection of nonempty sets there is a single set C that contains exactly one element from each set in the collection. The set C is said to "choose" one element from each set in the collection. While the ability to make such a choice is considered obvious by some, since each set in the collection is nonempty, the lack of a general, concrete rule by which the choice can be made renders the axiom nonconstructive. Stefan Banach and Alfred Tarski showed that the axiom of choice can be used to decompose a solid ball into a finite number of pieces which can then be rearranged, with no scaling, to make two solid balls of the original size. This theorem, known as the Banach–Tarski paradox, is one of many counterintuitive results of the axiom of choice. The continuum hypothesis, first proposed as a conjecture by Cantor, was listed by David Hilbert as one of his 23 problems in 1900. Gödel showed that the continuum hypothesis cannot be disproven from the axioms of Zermelo–Fraenkel set theory (with or without the axiom of choice), by developing the constructible universe of set theory in which the continuum hypothesis must hold. In 1963, Paul Cohen showed that the continuum hypothesis cannot be proven from the axioms of Zermelo–Fraenkel set theory. This independence result did not completely settle Hilbert's question, however, as it is possible that new axioms for set theory could resolve the hypothesis. Recent work along these lines has been conducted by W. Hugh Woodin, although its importance is not yet clear. Contemporary research in set theory includes the study of large cardinals and determinacy. Large cardinals are cardinal numbers with particular properties so strong that the existence of such cardinals cannot be proved in ZFC. The existence of the smallest large cardinal typically studied, an inaccessible cardinal, already implies the consistency of ZFC. Despite the fact that large cardinals have extremely high cardinality, their existence has many ramifications for the structure of the real line. Determinacy refers to the possible existence of winning strategies for certain two-player games (the games are said to be determined). The existence of these strategies implies structural properties of the real line and other Polish spaces. Model theory Model theory studies the models of various formal theories. Here a theory is a set of formulas in a particular formal logic and signature, while a model is a structure that gives a concrete interpretation of the theory. Model theory is closely related to universal algebra and algebraic geometry, although the methods of model theory focus more on logical considerations than those fields. The set of all models of a particular theory is called an elementary class; classical model theory seeks to determine the properties of models in a particular elementary class, or determine whether certain classes of structures form elementary classes. The method of quantifier elimination can be used to show that definable sets in particular theories cannot be too complicated. Tarski established quantifier elimination for real-closed fields, a result which also shows the theory of the field of real numbers is decidable. He also noted that his methods were equally applicable to algebraically closed fields of arbitrary characteristic. A modern subfield developing from this is concerned with o-minimal structures. Morley's categoricity theorem, proved by Michael D. 
Morley, states that if a first-order theory in a countable language is categorical in some uncountable cardinality, i.e. all models of this cardinality are isomorphic, then it is categorical in all uncountable cardinalities. A trivial consequence of the continuum hypothesis is that a complete theory with less than continuum many nonisomorphic countable models can have only countably many. Vaught's conjecture, named after Robert Lawson Vaught, says that this is true even independently of the continuum hypothesis. Many special cases of this conjecture have been established. Recursion theory Recursion theory, also called computability theory, studies the properties of computable functions and the Turing degrees, which divide the uncomputable functions into sets that have the same level of uncomputability. Recursion theory also includes the study of generalized computability and definability. Recursion theory grew from the work of Rózsa Péter, Alonzo Church and Alan Turing in the 1930s, which was greatly extended by Kleene and Post in the 1940s. Classical recursion theory focuses on the computability of functions from the natural numbers to the natural numbers. The fundamental results establish a robust, canonical class of computable functions with numerous independent, equivalent characterizations using Turing machines, λ calculus, and other systems. More advanced results concern the structure of the Turing degrees and the lattice of recursively enumerable sets. Generalized recursion theory extends the ideas of recursion theory to computations that are no longer necessarily finite. It includes the study of computability in higher types as well as areas such as hyperarithmetical theory and α-recursion theory. Contemporary research in recursion theory includes the study of applications such as algorithmic randomness, computable model theory, and reverse mathematics, as well as new results in pure recursion theory. Algorithmically unsolvable problems An important subfield of recursion theory studies algorithmic unsolvability; a decision problem or function problem is algorithmically unsolvable if there is no possible computable algorithm that returns the correct answer for all legal inputs to the problem. The first results about unsolvability, obtained independently by Church and Turing in 1936, showed that the Entscheidungsproblem is algorithmically unsolvable. Turing proved this by establishing the unsolvability of the halting problem, a result with far-ranging implications in both recursion theory and computer science. There are many known examples of undecidable problems from ordinary mathematics. The word problem for groups was proved algorithmically unsolvable by Pyotr Novikov in 1955 and independently by W. Boone in 1959. The busy beaver problem, developed by Tibor Radó in 1962, is another well-known example. Hilbert's tenth problem asked for an algorithm to determine whether a multivariate polynomial equation with integer coefficients has a solution in the integers. Partial progress was made by Julia Robinson, Martin Davis and Hilary Putnam. The algorithmic unsolvability of the problem was proved by Yuri Matiyasevich in 1970. Proof theory and constructive mathematics Proof theory is the study of formal proofs in various logical deduction systems. These proofs are represented as formal mathematical objects, facilitating their analysis by mathematical techniques. 
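The unsolvability of the halting problem mentioned above can be made concrete with a short diagonal argument. The following Python sketch is illustrative only: the function halts is a hypothetical total predicate which, as the argument shows, cannot actually exist.

def halts(f, x):
    # Hypothetical oracle: would return True iff f(x) eventually halts.
    raise NotImplementedError("no total, computable halts predicate exists")

def diagonal(f):
    # Do the opposite of whatever the oracle predicts f does on itself.
    if halts(f, f):
        while True:   # loop forever if f(f) would halt
            pass
    else:
        return        # halt immediately if f(f) would loop

# Feeding diagonal to itself is contradictory: if halts(diagonal, diagonal)
# is True then diagonal(diagonal) loops forever, and if it is False then
# diagonal(diagonal) halts. Either way the oracle is wrong, so no such
# computable predicate can exist.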
Several deduction systems are commonly considered, including Hilbert-style deduction systems, systems of natural deduction, and the sequent calculus developed by Gentzen. The study of constructive mathematics, in the context of mathematical logic, includes the study of systems in non-classical logic such as intuitionistic logic, as well as the study of predicative systems. An early proponent of predicativism was Hermann Weyl, who showed it is possible to develop a large part of real analysis using only predicative methods. Because proofs are entirely finitary, whereas truth in a structure is not, it is common for work in constructive mathematics to emphasize provability. The relationship between provability in classical (or nonconstructive) systems and provability in intuitionistic (or constructive, respectively) systems is of particular interest. Results such as the Gödel–Gentzen negative translation show that it is possible to embed (or translate) classical logic into intuitionistic logic, allowing some properties about intuitionistic proofs to be transferred back to classical proofs. Recent developments in proof theory include the study of proof mining by Ulrich Kohlenbach and the study of proof-theoretic ordinals by Michael Rathjen. Applications "Mathematical logic has been successfully applied not only to mathematics and its foundations (G. Frege, B. Russell, D. Hilbert, P. Bernays, H. Scholz, R. Carnap, S. Lesniewski, T. Skolem), but also to physics (R. Carnap, A. Dittrich, B. Russell, C. E. Shannon, A. N. Whitehead, H. Reichenbach, P. Fevrier), to biology (J. H. Woodger, A. Tarski), to psychology (F. B. Fitch, C. G. Hempel), to law and morals (K. Menger, U. Klug, P. Oppenheim), to economics (J. Neumann, O. Morgenstern), to practical questions (E. C. Berkeley, E. Stamm), and even to metaphysics (J. [Jan] Salamucha, H. Scholz, J. M. Bochenski). Its applications to the history of logic have proven extremely fruitful (J. Lukasiewicz, H. Scholz, B. Mates, A. Becker, E. Moody, J. Salamucha, K. Duerr, Z. Jordan, P. Boehner, J. M. Bochenski, S. [Stanislaw] T. Schayer, D. Ingalls)." "Applications have also been made to theology (F. Drewnowski, J. Salamucha, I. Thomas)." Connections with computer science The study of computability theory in computer science is closely related to the study of computability in mathematical logic. There is a difference of emphasis, however. Computer scientists often focus on concrete programming languages and feasible computability, while researchers in mathematical logic often focus on computability as a theoretical concept and on noncomputability. The theory of semantics of programming languages is related to model theory, as is program verification (in particular, model checking). The Curry–Howard correspondence between proofs and programs relates to proof theory, especially intuitionistic logic. Formal calculi such as the lambda calculus and combinatory logic are now studied as idealized programming languages. Computer science also contributes to mathematics by developing techniques for the automatic checking or even finding of proofs, such as automated theorem proving and logic programming. Descriptive complexity theory relates logics to computational complexity. The first significant result in this area, Fagin's theorem (1974), established that NP is precisely the set of languages expressible by sentences of existential second-order logic. 
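The remark above that the lambda calculus is studied as an idealized programming language can be made concrete in Python, whose lambda expressions suffice to encode Church numerals. This is a small illustrative sketch, not a definition from any particular text.

# Church numerals: the number n is the function applying f to x n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
times = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    # Read a Church numeral back by counting applications of +1 to 0.
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(plus(two)(three)))   # 5
print(to_int(times(two)(three)))  # 6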
Foundations of mathematics In the 19th century, mathematicians became aware of logical gaps and inconsistencies in their field. It was shown that Euclid's axioms for geometry, which had been taught for centuries as an example of the axiomatic method, were incomplete. The use of infinitesimals, and the very definition of function, came into question in analysis, as pathological examples such as Weierstrass' nowhere-differentiable continuous function were discovered. Cantor's study of arbitrary infinite sets also drew criticism. Leopold Kronecker famously stated "God made the integers; all else is the work of man," endorsing a return to the study of finite, concrete objects in mathematics. Although Kronecker's argument was carried forward by constructivists in the 20th century, the mathematical community as a whole rejected them. David Hilbert argued in favor of the study of the infinite, saying "No one shall expel us from the Paradise that Cantor has created." Mathematicians began to search for axiom systems that could be used to formalize large parts of mathematics. In addition to removing ambiguity from previously naive terms such as function, it was hoped that this axiomatization would allow for consistency proofs. In the 19th century, the main method of proving the consistency of a set of axioms was to provide a model for it. Thus, for example, non-Euclidean geometry can be proved consistent by defining point to mean a point on a fixed sphere and line to mean a great circle on the sphere. The resulting structure, a model of elliptic geometry, satisfies the axioms of plane geometry except the parallel postulate. With the development of formal logic, Hilbert asked whether it would be possible to prove that an axiom system is consistent by analyzing the structure of possible proofs in the system, and showing through this analysis that it is impossible to prove a contradiction. This idea led to the study of proof theory. Moreover, Hilbert proposed that the analysis should be entirely concrete, using the term finitary to refer to the methods he would allow but not precisely defining them. This project, known as Hilbert's program, was seriously affected by Gödel's incompleteness theorems, which show that the consistency of formal theories of arithmetic cannot be established using methods formalizable in those theories. Gentzen showed that it is possible to produce a proof of the consistency of arithmetic in a finitary system augmented with axioms of transfinite induction, and the techniques he developed to do so were seminal in proof theory. A second thread in the history of foundations of mathematics involves nonclassical logics and constructive mathematics. The study of constructive mathematics includes many different programs with various definitions of constructive. At the most accommodating end, proofs in ZF set theory that do not use the axiom of choice are called constructive by many mathematicians. More limited versions of constructivism limit themselves to natural numbers, number-theoretic functions, and sets of natural numbers (which can be used to represent real numbers, facilitating the study of mathematical analysis). A common idea is that a concrete means of computing the values of the function must be known before the function itself can be said to exist. In the early 20th century, Luitzen Egbertus Jan Brouwer founded intuitionism as a part of philosophy of mathematics. 
This philosophy, poorly understood at first, stated that in order for a mathematical statement to be true to a mathematician, that person must be able to intuit the statement, to not only believe its truth but understand the reason for its truth. A consequence of this definition of truth was the rejection of the law of the excluded middle, for there are statements that, according to Brouwer, could not be claimed to be true while their negations also could not be claimed true. Brouwer's philosophy was influential, and the cause of bitter disputes among prominent mathematicians. Later, Kleene and Kreisel would study formalized versions of intuitionistic logic (Brouwer rejected formalization, and presented his work in unformalized natural language). With the advent of the BHK interpretation and Kripke models, intuitionism became easier to reconcile with classical mathematics. See also Argument Informal logic Knowledge representation and reasoning Logic List of computability and complexity topics List of first-order theories List of logic symbols List of mathematical logic topics List of set theory topics Mereology Propositional calculus Well-formed formula Notes References Undergraduate texts Shawn Hedman, A first course in logic: an introduction to model theory, proof theory, computability, and complexity, Oxford University Press, 2004. Covers logics in close relation with computability theory and complexity theory. Graduate texts Kleene, Stephen Cole (1952), Introduction to Metamathematics. New York: Van Nostrand. (Ishi Press: 2009 reprint). Kleene, Stephen Cole (1967), Mathematical Logic. John Wiley. Dover reprint, 2002. Research papers, monographs, texts, and surveys J.D. Sneed, The Logical Structure of Mathematical Physics. Reidel, Dordrecht, 1971 (revised edition 1979). Classical papers, texts, and collections Frege, Gottlob (1879), Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle a. S.: Louis Nebert. Translation: Concept Script, a formal language of pure thought modelled upon that of arithmetic, by S. Bauer-Mengelberg in Jean van Heijenoort, ed., From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931. Frege, Gottlob (1884), Die Grundlagen der Arithmetik: eine logisch-mathematische Untersuchung über den Begriff der Zahl. Breslau: W. Koebner. Translation: J. L. Austin, 1974. The Foundations of Arithmetic: A logico-mathematical enquiry into the concept of number, 2nd ed. Blackwell. 
External links Polyvalued logic and Quantity Relation Logic forall x: an introduction to formal logic, a free textbook by P. D. Magnus. A Problem Course in Mathematical Logic, a free textbook by Stefan Bilaniuk. Detlovs, Vilnis, and Podnieks, Karlis (University of Latvia), Introduction to Mathematical Logic (hyper-textbook). In the Stanford Encyclopedia of Philosophy: Classical Logic by Stewart Shapiro. First-order Model Theory by Wilfrid Hodges. In the London Philosophy Study Guide: Mathematical Logic Set Theory & Further Logic Philosophy of Mathematics
7969681
https://en.wikipedia.org/wiki/Coding%20conventions
Coding conventions
Coding conventions are a set of guidelines for a specific programming language that recommend programming style, practices, and methods for each aspect of a program written in that language. These conventions usually cover file organization, indentation, comments, declarations, statements, white space, naming conventions, programming practices, programming principles, programming rules of thumb, architectural best practices, etc. These are guidelines for software structural quality. Software programmers are strongly encouraged to follow these guidelines to help improve the readability of their source code and to make software maintenance easier. Coding conventions are only applicable to the human maintainers and peer reviewers of a software project. Conventions may be formalized in a documented set of rules that an entire team or company follows, or may be as informal as the habitual coding practices of an individual. Coding conventions are not enforced by compilers. Software maintenance Reducing the cost of software maintenance is the most often cited reason for following coding conventions. In its introduction to code conventions for the Java programming language, Sun Microsystems provides the following rationale: Code conventions are important to programmers for a number of reasons: 40%–80% of the lifetime cost of a piece of software goes to maintenance. Hardly any software is maintained for its whole life by the original author. Code conventions improve the readability of the software, allowing engineers to understand new code more quickly and thoroughly. If you ship your source code as a product, you need to make sure it is as well packaged and clean as any other product you create. Quality Software peer review frequently involves reading source code. This type of peer review is primarily a defect detection activity. By definition, only the original author of a piece of code has read the source file before the code is submitted for review. Code that is written using consistent guidelines is easier for other reviewers to understand and assimilate, improving the efficacy of the defect detection process. Even for the original author, consistently coded software eases maintainability. There is no guarantee that an individual will remember the precise rationale for why a particular piece of code was written in a certain way long after the code was originally written. Coding conventions can help. Consistent use of whitespace improves readability and reduces the time it takes to understand the software. Coding standards Where coding conventions have been specifically designed to produce high-quality code, and have then been formally adopted, they become coding standards. Specific styles, irrespective of whether they are commonly adopted, do not automatically produce good quality code. Reduction of complexity Complexity is a factor working against security. The management of complexity includes the following basic principle: minimize the amount of code written during the project development. This avoids unnecessary work, and therefore unnecessary cost, both upfront and downstream. This is simply because if there is less code, it is less work not only to create the application, but also to maintain it. Complexity is managed both at the design stage (how the project is architected) and at the development stage (by having simpler code). If the coding is kept basic and simple then the complexity will be minimised. 
Very often this is keeping the coding as 'physical' as possible - coding in a manner that is very direct and not highly abstract. This produces optimal code that is easy to read and follow. Complexity can also be avoided simply by not using complicated tools for simple jobs. The more complex the code is, the more likely it is to be buggy, the more difficult the bugs are to find, and the more likely there are to be hidden bugs. Refactoring Refactoring refers to a software maintenance activity where source code is modified to improve readability or improve its structure. Software is often refactored to bring it into conformance with a team's stated coding standards after its initial release. Any change that does not alter the behavior of the software can be considered refactoring. Common refactoring activities are changing variable names, renaming methods, moving methods or whole classes and breaking large methods (or functions) into smaller ones. Agile software development methodologies plan for regular (or even continuous) refactoring making it an integral part of the team software development process. Task automation Coding conventions allow programmers to have simple scripts or programs whose job is to process source code for some purpose other than compiling it into an executable. It is common practice to count the software size (source lines of code) to track current project progress or establish a baseline for future project estimates. Consistent coding standards can, in turn, make the measurements more consistent. Special tags within source code comments are often used to process documentation; two notable examples are javadoc and doxygen. The tools specify the use of a set of tags, but their use within a project is determined by convention. Coding conventions simplify writing new software whose job is to process existing software. Use of static code analysis has grown consistently since the 1950s. Some of the growth of this class of development tools stems from increased maturity and sophistication of the practitioners themselves (and the modern focus on safety and security), but also from the nature of the languages themselves. Language factors All software practitioners must grapple with the problem of organizing and managing a large number of sometimes complex instructions. For all but the smallest software projects, source code (instructions) is partitioned into separate files and frequently among many directories. It was natural for programmers to collect closely related functions (behaviors) in the same file and to collect related files into directories. As software development shifted from purely procedural programming (such as found in FORTRAN) towards more object-oriented constructs (such as found in C++), it became the practice to write the code for a single (public) class in a single file (the 'one class per file' convention). Java has gone one step further - the Java compiler returns an error if it finds more than one public class per file. A convention in one language may be a requirement in another. Language conventions also affect individual source files. Each compiler (or interpreter) used to process source code is unique. The rules a compiler applies to the source create implicit standards. For example, Python code is much more consistently indented than, say, Perl, because whitespace (indentation) is actually significant to the interpreter. Python does not use the brace syntax Perl uses to delimit functions. Changes in indentation serve as the delimiters. 
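To illustrate the point about Python: the interpreter itself enforces block structure, so the two fragments below differ not merely in style; the second is rejected before it runs. This is a minimal illustrative example, not taken from any particular style guide.

# Valid Python: indentation alone delimits the loop body.
for i in range(3):
    print(i * i)

# Invalid Python: with the body unindented there is no loop body at all,
# and the interpreter raises an IndentationError before execution:
#
# for i in range(3):
# print(i * i)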
Tcl, which uses a brace syntax similar to Perl or C/C++ to delimit functions, does not allow the following, which seems fairly reasonable to a C programmer:

set i 0
while {$i < 10}
{
    puts "$i squared = [expr $i*$i]"
    incr i
}

The reason is that in Tcl, curly braces are not used only to delimit functions as in C or Java. More generally, curly braces are used to group words together into a single argument. In Tcl, the word while takes two arguments, a condition and an action. In the example above, while is missing its second argument, its action (because Tcl also uses the newline character to delimit the end of a command). Common conventions There are a large number of coding conventions; see Coding Style for numerous examples and discussion. Common coding conventions may cover the following areas: Comment conventions Indent style conventions Line length conventions Naming conventions Programming practices Programming principles Programming style conventions Coding standards include the CERT C Coding Standard, MISRA C, and High Integrity C++; see the list below. See also Comparison of programming languages (syntax) Hungarian Notation Indent style List of tools for static code analysis List of software development philosophies MISRA Programming style Software metrics Software quality The Power of 10 Rules References List of coding standards Coding conventions for languages ActionScript: Flex SDK coding conventions and best practices Ada: Ada 95 Quality and Style Guide: Guidelines for Professional Programmers Ada: Guide for the use of the Ada programming language in high integrity systems (ISO/IEC TR 15942:2000) Ada: NASA Flight Software Branch — Ada Coding Standard Ada: ESA Ada Coding Standard - BSSC(98)3 Issue 1 October 1998 Ada: European Space Agency's Software engineering and standardisation C: CERT C Coding Standard CERT C Coding Standard (SEI) C: Embedded C Coding Standard (Barr Group) C: Firmware Development Standard (Jack Ganssle) C: MISRA C C: TIOBE C Standard C++: C++ Core Guidelines (Bjarne Stroustrup, Herb Sutter) C++: Quantum Leaps C/C++ Coding Standard C++: C++ Programming/Programming Languages/C++/Code/Style Conventions C++: GeoSoft's C++ Programming Style Guidelines C++: Google's C++ Style Guide C++: High Integrity C++ C++: MISRA C++ C++: Philips Healthcare C++ Coding Standard C/C++: C/C++ Coding Guidelines from devolo C#: C# Coding Conventions (C# Programming Guide) C#: Design Guidelines for Developing Class Libraries C#: Brad Abrams C#: Philips Healthcare or Philips Healthcare C# Coding Standard D: The D Style Dart: The Dart Style Guide Erlang: Erlang Programming Rules and Conventions Flex: Code conventions for the Flex SDK Java: Ambysoft's Coding Standards for Java Java: Code Conventions for the Java Programming Language (Not actively maintained. Latest version: 1999-APR-20.) 
Java: GeoSoft's Java Programming Style Guidelines Java: Java: TIOBE Java Standard Java: SoftwareMonkey's coding conventions for Java and other brace-syntax languages JavaScript: Code Conventions for the JavaScript Programming Language Lisp: Riastradh's Lisp Style Rules MATLAB: Neurobat Coding Conventions for MATLAB Object Pascal: Object Pascal Style Guide Perl: Perl Style Guide PHP::PEAR: PHP::PEAR Coding Standards PHP::FIG: PHP Framework Interop Group PL/I: PL/I Style Guide Python: Style Guide for Python Code Ruby: The Unofficial Ruby Usage Guide Ruby: GitHub Ruby style guide Shell: Google's Shell Style Guide Coding conventions for projects Apache Developers' C Language Style Guide Drupal PHP Coding Standards GNU Coding Standards (PDF) Linux Kernel Coding Style (or Documentation/CodingStyle in the Linux Kernel source tree) Mozilla Coding Style Guide Mono: Programming style for Mono OpenBSD Kernel source file style guide (KNF) Road Intranet's C++ Guidelines Style guides for Google-originated open-source projects The NetBSD source code style guide (formerly known as the BSD Kernel Normal Form) Zend Framework Coding Standards ZeroMQ C Language Style for Scalability (CLASS) Source code
40316085
https://en.wikipedia.org/wiki/PlayStation%203%20homebrew
PlayStation 3 homebrew
Some enthusiasts participate in homebrew for the PlayStation 3 video game console. Homebrew software was first run on the PS3 by a group of hackers under the name "Team Ice" by exploiting a vulnerability in the game Resistance: Fall of Man. Following various other hacks executed from Linux, Sony removed the ability to install another operating system in the 3.21 firmware update. This event caused backlash among the hacker communities, and eventually the group Fail0verflow found a flaw in the generation of encryption keys which they leveraged to restore the ability to install Linux. George Hotz (Geohot), often misattributed as the genesis of homebrew on the PS3, later created the first homebrew signed using the private "metldr" encryption key, which he leaked onto the internet. Leaking the key led to Hotz being sued by Sony. The case was settled out of court, with George Hotz agreeing not to carry out any further reverse engineering of the PS3. Private key compromised At the 2010 Chaos Communication Congress (CCC) in Berlin, a group calling itself fail0verflow announced it had succeeded in bypassing a number of the PlayStation 3's security measures, allowing unsigned code to run without a dongle. They also announced that it was possible to recover the Elliptic Curve DSA (ECDSA) private key used by Sony to sign software, due to a failure of Sony's ECDSA implementation to generate a different random number for each signature. However, fail0verflow chose not to publish this key because it was not necessary to run homebrew software on the device. The release of this key would allow anyone to sign their code and therefore be able to run it on any PlayStation 3 console. This would also mean that no countermeasures could be taken by Sony without rendering old software useless, as there would be no distinction between official and homebrew software. On January 3, 2011, geohot published the aforementioned private key, represented in hexadecimal as C5 B2 BF A1 A4 13 DD 16 F2 6D 31 C0 F2 ED 47 20 DC FB 06 70, as well as a Hello world program for the PS3. On January 12, 2011, Sony Computer Entertainment America filed lawsuits against both fail0verflow and geohot for violations of the DMCA and CFAA. The suit against geohot was settled at the end of March 2011, with geohot agreeing to a permanent injunction. Custom firmware (CFW) To allow for homebrew using the newly discovered encryption keys, several modified versions of system update 3.55 were released by Geohot and others. The most common feature is the addition of an "App Loader" that allows for the installation of homebrew apps as signed DLC-like packages. Although Backup Managers could run at that time, they could not load games at first, even though some success had been achieved by making backups look like DLC games and then signing them. An LV2 patch was later released to allow Backup Managers to load game backups, and it was later integrated into the Managers themselves so that it does not have to be run whenever the PS3 is restarted. PS3 System Software update 3.56 tried to patch Miha's exploit for 3.55; however, within a day the system was circumvented again. This caused Sony to release another update shortly after, 3.60, which was secure against circumvention. However, users may choose not to update, and games requiring a firmware version above 3.55 can be patched to run on v3.55 or lower. 
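The signing flaw fail0verflow described above is the textbook ECDSA nonce-reuse failure, and the key-recovery arithmetic is elementary. The following Python sketch uses small made-up numbers: the modulus n, the hashes z1 and z2, and the values d, k and r are illustrative stand-ins, not Sony's actual parameters (requires Python 3.8+ for pow(a, -1, n)).

# In ECDSA, s = k^-1 * (z + r*d) mod n. Reusing the nonce k for two
# messages gives two equations in the two unknowns k and d:
#   k = (z1 - z2) / (s1 - s2) mod n,   d = (s1*k - z1) / r mod n
n = 7919                      # toy prime standing in for the group order
d, k, r = 1234, 999, 4321     # pretend private key, nonce, and r = x(kG)
z1, z2 = 1111, 2222           # hashes of two different signed messages

inv = lambda a: pow(a, -1, n) # modular inverse (Python 3.8+)
s1 = inv(k) * (z1 + r * d) % n
s2 = inv(k) * (z2 + r * d) % n

# An observer holding (r, s1, z1) and (r, s2, z2) recovers k, then d:
k_recovered = (z1 - z2) * inv(s1 - s2) % n
d_recovered = (s1 * k_recovered - z1) * inv(r) % n
assert (k_recovered, d_recovered) == (k, d)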
Soon after v3.60 was released, updates to the PlayStation Network were conducted to block any known methods that allowed PSN access on firmware older than the latest required official firmware (v4.87), thereby blocking users who chose not to update. A custom firmware known as "Rebug", released on March 31, 2011, gave retail PS3s most of the options and functionality of debug/developer PS3 units. One week later, tutorials became available allowing users to download PSN content for free, using fake (rather than stolen) credit card numbers. One April 12 report described hackers using the jailbroken firmware to access the dev-PSN to get back on games like Call of Duty, with widespread reports of cheating. While some sources blamed Rebug for the subsequent intrusion into Sony's private developer network, Time "Techland" described such theories as "highly—as in looking down at the clouds from the tip-top of Mount Everest highly—speculative". In late 2017, a tool was released to convert PS3 consoles on firmware 4.82 to custom firmware. A new exploit toolset was released in 2020. See also PlayStation 3 Jailbreak Notes References Homebrew Homebrew software
14563877
https://en.wikipedia.org/wiki/Aviation%20combat%20element
Aviation combat element
In the United States Marine Corps, the aviation combat element or air combat element (ACE) is the aviation component of the Marine Air-Ground Task Force (MAGTF). The ACE is task organized to perform the six functions of Marine Corps aviation in support of MAGTF operations. The ACE is led by an aviation headquarters which employs rotary-wing, tiltrotor, and fixed-wing aircraft in conjunction with command and control, maintenance and engineering units. Role within the MAGTF The majority of aircraft usage within the MAGTF is for close air support or transport for the ground combat element (GCE) or logistics combat element (LCE); however, other specialized missions are available. The six main functions include: assault support, anti-air warfare, offensive air support, electronic warfare, control of aircraft and missiles, and aerial reconnaissance. The aviation combat element (ACE), which contributes the air power to the MAGTF, includes all aircraft (fixed wing, helicopters, tiltrotor, and UAV) and aviation support units. The units are organized into detachments, squadrons, groups, and wings, except for low altitude air defense units, which are organized into platoons, detachments, batteries, and battalions. These units include pilots, flight officers, enlisted aircrewmen, aviation logistics (aircraft maintenance, aviation electronics, aviation ordnance, and aviation supply) and Navy aviation medical and chaplain's corps personnel, as well as ground-based air defense units, and those units necessary for command and control (management and planning for manpower, intelligence, operations and training, and logistics functions), aviation command and control (tactical air command, air defense control, air support control, and air traffic control), communications, and aviation ground support (e.g., airfield services, bulk fuels/aircraft refueling, crash rescue, engineer construction and utilities support, EOD, motor transport, ground equipment supply and maintenance, local security/law enforcement, and the wing band). ACE Organization and size The size of the ACE varies in proportion to the size of the MAGTF. A Marine Expeditionary Force has a Marine Aircraft Wing or MAW. A Marine Expeditionary Brigade holds a Marine Aircraft Group or MAG, reinforced with a variety of aircraft squadrons and support personnel. The various Marine Expeditionary Units command a reinforced squadron, with various types of aircraft mixed into a single unit (known as a composite squadron). Generally, MEF postings are permanent, while MEBs and MEUs rotate their ACE, GCE, and LCE twice annually. 1st and 3rd Marine Aircraft Wings are unique in that they are subordinate to III MEF and I MEF, respectively, while all other equivalent units' numerical designators match the MEF to which they are attached. 
Hierarchy of Marine Aviation units
1st Marine Aircraft Wing (ACE of III Marine Expeditionary Force)
 Marine Aircraft Group 12
 Marine Aircraft Group 24
 Marine Aircraft Group 36
 Marine Air Control Group 18
2nd Marine Aircraft Wing (ACE of II Marine Expeditionary Force)
 Marine Aircraft Group 14
 Marine Aircraft Group 26
 Marine Aircraft Group 29
 Marine Aircraft Group 31
 Marine Air Control Group 28
3rd Marine Aircraft Wing (ACE of I Marine Expeditionary Force)
 Marine Aircraft Group 11
 Marine Aircraft Group 13
 Marine Aircraft Group 16
 Marine Aircraft Group 39
 Marine Air Control Group 38
4th Marine Aircraft Wing (ACE of Marine Forces Reserve)
 Marine Aircraft Group 41
 Marine Aircraft Group 49
 Marine Air Control Group 48
See also Marine Air-Ground Task Force United States Marine Corps Aviation List of United States Marine Corps aircraft wings List of United States Marine Corps aircraft groups List of active United States Marine Corps aircraft squadrons List of inactive United States Marine Corps aircraft squadrons List of United States Marine Corps aviation support units References United States Marine Corps aviation United States Marine Corps organization
38252574
https://en.wikipedia.org/wiki/Maxon%20Computer%20GmbH
Maxon Computer GmbH
Maxon Computer GmbH is a German software company that produces the 3D software Cinema 4D. History Maxon was founded in 1985 by three college students, Harald Egel, Uwe Bärtels and Harald Schneider. They had purchased their first computer, an Atari ST, and agreed to write a book about the BASIC programming language that ran on the Atari ST, the GFA-BASIC Buch. The first issue of ST-Computer was released in January 1986. After two years of development, begun in 1991, the first version of Cinema 4D was launched in December 1993. This was followed in May 1994 with an upgrade to Cinema 4D V1.5, with improvements in rendering quality. Cinema 4D became Maxon Computer's flagship product. In January 2000, Nemetschek, a leader in architectural CAD software, bought a 70% stake in Maxon in order to acquire a high-quality renderer for their CAD models, as well as to enter the multimedia market; Maxon later introduced version 6 of Cinema 4D XL. Cinema 4D Release 10 was presented in 2006, and Cinema 4D Release 11 was announced at the SIGGRAPH show in Los Angeles in 2008. The company's program Cinebench is used to test computer hardware capabilities; the most recent version is R23, released in 2020. Maxon is considered one of the major companies in 3D animation and graphics. Maxon acquired Redshift in mid-2019, merged with Red Giant Software later that year, and acquired Pixologic in 2021. References 3D graphics software 3D animation software Film and video technology
9004057
https://en.wikipedia.org/wiki/Storm%20Worm
Storm Worm
The Storm Worm (dubbed so by the Finnish company F-Secure) is a phishing backdoor Trojan horse that affects computers using Microsoft operating systems, discovered on January 17, 2007. The worm is also known as: Small.dam or Trojan-Downloader.Win32.Small.dam (F-Secure) CME-711 (MITRE) W32/Nuwar@MM and Downloader-BAI (specific variant) (McAfee) Troj/Dorf and Mal/Dorf (Sophos) Trojan.DL.Tibs.Gen!Pac13 Trojan.Downloader-647 Trojan.Peacomm (Symantec) TROJ_SMALL.EDW (Trend Micro) Win32/Nuwar (ESET) Win32/Nuwar.N@MM!CME-711 (Windows Live OneCare) W32/Zhelatin (F-Secure and Kaspersky) Trojan.Peed, Trojan.Tibs (BitDefender) The Storm Worm began attacking thousands of (mostly private) computers in Europe and the United States on Friday, January 19, 2007, using an e-mail message with a subject line about a recent weather disaster, "230 dead as storm batters Europe". During the weekend there were six subsequent waves of the attack. As of January 22, 2007, the Storm Worm accounted for 8% of all malware infections globally. There is evidence, according to PCWorld, that the Storm Worm was of Russian origin, possibly traceable to the Russian Business Network. Ways of action Originally propagated in messages about European windstorm Kyrill, the Storm Worm has also been seen in emails with the following subjects: 230 dead as storm batters Europe. [The worm was dubbed "Storm" because of this message subject.] A killer at 11, he's free at 21 and kill again! U.S. Secretary of State Condoleezza Rice has kicked German Chancellor Angela Merkel British Muslims Genocide Naked teens attack home director. Re: Your text Radical Muslim drinking enemies' blood. Chinese/Russian missile shot down Russian/Chinese satellite/aircraft Saddam Hussein safe and sound! Saddam Hussein alive! Venezuelan leader: "Let's the War beginning". Fidel Castro dead. If I Knew FBI vs. Facebook USA occupies Iran When an attachment is opened, the malware installs the wincom32 service, and injects a payload, passing on packets to destinations encoded within the malware itself. According to Symantec, it may also download and run the Trojan.Abwiz.F trojan, and the W32.Mixor.Q@mm worm. The Trojan piggybacks on the spam with names such as "postcard.exe" and "Flash Postcard.exe," with more changes from the original wave as the attack mutates. Some of the known names for the attachments include: Postcard.exe ecard.exe FullVideo.exe Full Story.exe Video.exe Read More.exe FullClip.exe GreetingPostcard.exe MoreHere.exe FlashPostcard.exe GreetingCard.exe ClickHere.exe ReadMore.exe FullNews.exe NflStatTracker.exe ArcadeWorld.exe ArcadeWorldGame.exe Later, as F-Secure confirmed, the malware began spreading with subjects such as "Love birds" and "Touched by Love". These emails contain links to websites hosting some of the following files, which are confirmed to contain the virus: with_love.exe withlove.exe love.exe frommetoyou.exe iheartyou.exe fck2008.exe fck2009.exe According to Joe Stewart, director of malware research for SecureWorks, Storm remains amazingly resilient, in part because the Trojan horse it uses to infect systems changes its packing code every 10 minutes, and, once installed, the bot uses fast flux to change the IP addresses for its command and control servers. Botnetting The compromised machine becomes merged into a botnet. 
While most botnets are controlled through a central server, which if found can be taken down to destroy the botnet, the Storm Worm seeds a botnet that acts in a similar way to a peer-to-peer network, with no centralized control. Each compromised machine connects to a subset of the entire botnet - around 30 to 35 other compromised machines, which act as hosts. While each of the infected hosts shares lists of other infected hosts, no one machine has a full list of the entire botnet - each only has a subset, making it difficult to gauge the true extent of the zombie network. On 7 September 2007, estimates of the size of the Storm botnet ranged from 1 to 10 million computers. Researchers from the University of Mannheim and the Institut Eurecom have estimated concurrent online storm nodes to be between 5,000 and 40,000. Rootkit Another action the Storm Worm takes is to install the rootkit Win32.agent.dh. Symantec pointed out that flawed rootkit code voids some of the Storm Worm author's plans. Later variants, starting around July 2007, loaded the rootkit component by patching existing Windows drivers such as tcpip.sys and cdrom.sys with a stub of code that loads the rootkit driver module without requiring it to have an entry in the Windows driver list. April Fool's Day On April 1, 2008, a new Storm Worm variant was released onto the net, with April Fools-themed subject titles. Feedback The list of antivirus companies that can detect the Storm Worm includes Authentium, BitDefender, ClamAV, eSafe, Eset, F-Prot, F-Secure, Kaspersky, McAfee, Sophos, Symantec, Trend Micro, avast! and Windows Live OneCare. The Storm Worm is constantly being updated by its authors to evade antivirus detection, so this does not imply that all the vendors listed above are able to detect all the Storm Worm variants. An intrusion detection system offers some protection from the rootkit, as it may warn that the Windows process "services.exe" is trying to access the Internet using ports 4000 or 7871. Windows 2000, Windows XP and presumably Windows Vista can be infected by all the Storm Worm variants, but Windows Server 2003 cannot, as the malware's author specifically excluded that edition of Windows from the code. Additionally, the decryption layer for some variants requires Windows API functions that are only available in Windows XP Service Pack 2 and later, effectively preventing infection on older versions of Windows. Peter Gutmann sent an email noting that the Storm botnet comprises between 1 and 10 million PCs depending on whose estimates you believe. Although Dr. Gutmann makes a hardware resource comparison between the Storm botnet and distributed memory and distributed shared memory high performance computers at TOP500, exact performance matches were not his intention—rather a more general appreciation of the botnet's size compared to other massive computing resources. Consider for example the size of the Storm botnet compared to grid computing projects such as the World Community Grid. An article in PCWorld dated October 21, 2007 says that a network security analyst presented findings at the Toorcon hacker conference in San Diego on October 20, 2007, saying that Storm is down to about 20,000 active hosts or about one-tenth of its former size. However, this is being disputed by security researcher Bruce Schneier, who notes that the network is being partitioned in order to sell the parts off independently. 
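The difficulty of gauging the botnet's size, described in the Botnetting section above, follows directly from the partial peer lists. The numbers and topology in this small, harmless Python simulation are invented for illustration; it only models the "each node knows ~30 to 35 peers" structure, nothing else.

import random

# Each of N simulated nodes knows only ~35 random peers, so a crawler
# expanding outward from one node sees only a sliver of the population.
N, VIEW = 10_000, 35
peer_lists = {node: random.sample(range(N), VIEW) for node in range(N)}

seen = {0}
frontier = [0]
for _ in range(2):            # crawl two hops outward from node 0
    nxt = []
    for node in frontier:
        for peer in peer_lists[node]:
            if peer not in seen:
                seen.add(peer)
                nxt.append(peer)
    frontier = nxt

print(f"crawled {len(seen)} of {N} nodes")  # typically ~1,200 of 10,000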
Notes External links Spamtrackers SpamWiki: Storm NetworkWorld: Storm Worm's virulence may change tactics Wired.com: Analysis by Bruce Schneier "There's a Storm Coming", from the IBM ISS X-Force Blog Trojan.Peacomm (Storm) at Symantec Stormy Weather: A Quantitative Assessment of the Storm Web Threat in 2007 (Trend Micro) In millions of Windows, the perfect Storm is gathering, from The Observer. April Fool's Day Storm Worm Attack Hits, from PC World. Storm and the future of social engineering from Help Net Security (HNS). Bodmer, Kilger, Carpenter, & Jones (2012). Reverse Deception: Organized Cyber Threat Counter-Exploitation. New York: McGraw-Hill Osborne Media. Email worms 2007 in computing Hacking in the 2000s
3104727
https://en.wikipedia.org/wiki/Phoning%20home
Phoning home
In computing, phoning home is a term often used to refer to the behavior of security systems that report network location, username, or other such data to another computer. Phoning home may be useful for the proprietor in tracking a missing or stolen computer. This type of phoning home is frequently used on mobile computers at corporations. It typically involves a software agent which is difficult to detect or remove. However, there are malicious types of phoning home, such as surreptitious communication between applications or hardware installed at end-user sites and their manufacturers or developers. The traffic may be encrypted to make it difficult or impractical for the end-user to determine what data are being transmitted. The Stuxnet attack on Iran's nuclear facilities was facilitated by phone-home technology, as reported by The New York Times. Legal phoning home There are some uses for the phoning home practice that are legal in some countries. For example, phoning home could be for purposes of access restriction, such as transmitting an authorization key. This is done with the Adobe Creative Suite. Each time one of the programs is opened, it phones home with the serial number. If the serial number is listed as being already in use, or a fake, then the program will present the user with the option of inputting the correct serial number. If the user refuses, the next time the program loads, it will operate in trial mode until a valid serial number has been input. However, the method can be thwarted by either disabling the internet connection when starting the program or adding a firewall or hosts file rule to prevent the program from communicating with the verification server. Phoning home could also be for marketing purposes, such as the "Sony BMG Rootkit", which transmits a hash of the currently playing CD back to Sony, or a digital video recorder (DVR) reporting on viewing habits. High-end computing systems such as mainframes have had 'phone home' capabilities for many years, to alert the manufacturer of hardware problems with the mainframes or disk storage subsystems (this enables repair or maintenance to be performed quickly and even proactively under the maintenance contract). Similarly, high-volume copy machines have long been equipped with phone-home capabilities, both for billing and for preventive/predictive maintenance purposes. In research computing, phoning home is used to track the daily usage of open source academic software. This phoning home is used to develop logs for the purposes of justification in grant proposals to support the ongoing funding of such projects. Aside from malicious software phoning home, phoning home may be done to track computer assets—especially mobile computers. One of the most well-known software applications that leverage phoning home for tracking is Absolute Software's CompuTrace. This software employs an agent which calls into an Absolute-managed server at regular intervals with information companies or the police can use to locate a missing computer. More phone-home uses Other than phoning the home (website) of the applications' authors, applications can allow their documents to do the same thing, thus allowing the documents' authors to trigger (essentially anonymous) tracking by setting up a connection that is intended to be logged. Such behavior, for example, caused v7.0.5 of Adobe Reader to add an interactive notification whenever a PDF file tries phoning (to its author's) home. 
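The serial-number check described above for Adobe Creative Suite follows a generic pattern, sketched below in Python. The endpoint URL, field names and responses are all invented for illustration and do not reflect Adobe's actual protocol.

import json
from urllib import request

ACTIVATION_URL = "https://activation.example.com/check"  # made-up endpoint

def check_serial(serial: str) -> str:
    # Phone home with the serial; fall back to trial mode if the server
    # rejects it or cannot be reached (e.g. blocked by a firewall rule).
    payload = json.dumps({"serial": serial}).encode()
    req = request.Request(ACTIVATION_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    try:
        with request.urlopen(req, timeout=5) as resp:
            status = json.load(resp).get("status", "unknown")
    except (OSError, ValueError):
        return "trial"        # offline, blocked, or bad reply: degrade
    return "activated" if status == "valid" else "trial"

print(check_serial("XXXX-YYYY-ZZZZ"))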
HTML e-mail messages can easily implement a form of "phoning home". Images and other files required by the e-mail body may generate extra requests to a remote web server before they can be viewed. The IP address of the user's own computer is sent to the webserver (an unavoidable process if a reply is required), and details embedded in request URLs can further identify the user by e-mail address, marketing campaign, etc. Such extra page resources have been referred to as "web bugs", and they can also be used to track off-line viewing and other uses of ordinary web pages. To prevent the activation of these requests, many e-mail clients do not load images or other web resources when HTML e-mails are first viewed, giving users the option to load the images only if the e-mail is from a trusted source. Malicious phoning home There are many malware applications that can "phone home" to gather and store information about a person's machine. For example, the Pushdo Trojan illustrates the complexity of modern malware applications and the phoning home capabilities of these systems. Pushdo has 421 executables available to be sent to an infected Windows client. Surveillance cameras made by Foscam have been reported by security researcher Brian Krebs to secretly phone home to the manufacturer. See also Digital Rights Management (DRM) Product activation Spyware Internet of Things References Computer network security Spyware Internet privacy
492313
https://en.wikipedia.org/wiki/Seth%20Schoen
Seth Schoen
Seth David Schoen (born September 27, 1979) is senior staff technologist for the Electronic Frontier Foundation, a technology civil rights organisation, and has been actively involved in discussing digital copyright law and encryption since the 1990s. He is an expert in trusted computing. In February 2008, Schoen collaborated with a Princeton research group led by Edward Felten that discovered a vulnerability of DRAM that undermined basic assumptions about the security of computer encryption. In October 2005, Schoen led a small research team at EFF to decode the tiny tracking dots hidden in the printouts of some laser printers. Schoen attended Northfield Mount Hermon School in Northfield, Massachusetts from 1993 to 1997. While attending UC Berkeley, Schoen founded Californians for Academic Freedom to protest the loyalty oath the state made university employees swear. Schoen later worked for Linuxcare, where he developed the Linuxcare Bootable Business Card. After he left Linuxcare, he forked the project to create the LNX-BBC rescue system, of which he is a lead developer. Schoen was formerly a board member and the Secretary of the Peer-Directed Projects Center, a Texas-based non-profit corporation, until he stepped down in November 2006. Schoen is the author of the DeCSS haiku. References External links Personal homepage Vitanuova, Seth's weblog DeCSS haiku The History of the DeCSS Haiku Californians for Academic Freedom (archived) 1979 births American bloggers Copyright activists Living people Northfield Mount Hermon School alumni University of California, Berkeley alumni 21st-century American non-fiction writers
35786689
https://en.wikipedia.org/wiki/Albrecht%20Schmidt%20%28computer%20scientist%29
Albrecht Schmidt (computer scientist)
Albrecht Schmidt (born 1970) is a computer scientist best known for his work in ubiquitous computing, pervasive computing, and the tangible user interface. He is a professor at Ludwig Maximilian University of Munich, where he joined the faculty in 2017. Biography Professional career Albrecht Schmidt received an M.Sc. in computing from Manchester Metropolitan University (UK) in 1996. His master's thesis was on modular neural networks. In 1997, he finished his master's degree in computer science at the University of Ulm (Germany). As a research assistant Schmidt was working towards a PhD at Telecooperation Office (TecO), University of Karlsruhe (Germany) from 1998 to 2001. He continued his studies at Lancaster University (UK) to finish his PhD. Schmidt's PhD thesis was titled Ubiquitous Computing - Computing in Context. Schmidt transferred to the Ludwig Maximilian University of Munich (Germany) in 2003, where he led the Emmy-Noether Research Group 'Embedded Interaction'. He was appointed professor for applied computer science/media informatics at the University of Bonn and simultaneously served as department manager at the Fraunhofer Institute 'Institut für Intelligente Informations- und Analysesysteme (IAIS)'. From October 2007 to December 2010, he held the chair for pervasive computing at the University of Duisburg-Essen. From January 2011 to August 2017, Schmidt directed the 'Human-Computer Interaction' research group at the Institut für Visualisierung und Interaktive Systeme at the University of Stuttgart. Since October 2017, he has been the head professor of the research group 'Human Centered Ubiquitous Media Group' at the Department of Computer Science at the Ludwig Maximilian University of Munich. Recognition Schmidt was elected to the CHI Academy in 2018. Selected bibliography A. Schmidt, M. Beigl, H.W. Gellersen: "There is more to context than location", In: Computers & Graphics 23 (6), 893–901, 1999 A. Schmidt, K. Aidoo, A. Takaluoma, U. Tuomela, K. Van Laerhoven, W. Van de Velde: "Advanced interaction in context", In: Handheld and Ubiquitous Computing, 89–101, 1999 A. Schmidt: "Implicit human computer interaction through context", In: Personal and Ubiquitous Computing 4 (2), 191–199, 2000 H.W. Gellersen, A. Schmidt, M. Beigl: "Multi-sensor context-awareness in mobile devices and smart artifacts", In: Mobile Networks and Applications 7 (5), 341–351, 2002 M. Beigl, H.W. Gellersen, A. Schmidt: "Mediacups: experience with design and use of computer-augmented everyday artefacts", In: Computer Networks 35 (4), 401–409, 2001 A. Schmidt: "Ubiquitous computing-computing in context" (Ph.D. thesis), 2002 A. Schmidt, K. Van Laerhoven: "How to build smart appliances?", In: Personal Communications, IEEE 8 (4), 66–71, 2001 N. Kern, B. Schiele, A. Schmidt: "Multi-sensor activity context detection for wearable computing", In: Ambient Intelligence, 220–232, 2003 W.W. Gaver, J. Bowers, A. Boucher, H. Gellersen, S. Pennington, A. Schmidt, A. Steed, N. Villar, B. Walker: "The drift table: designing for ludic engagement", In: ACM, 2004 M. Kranz, P. Holleis, A. Schmidt: "Embedded Interaction - Interacting with the Internet of Things", In: IEEE Internet Computing, vol. 14, no. 2, pp. 46–53, March–April 2010 M. Kranz, A. Schmidt, A. Maldonado, R.B. Rusu, M. Beetz, B. Hörnler, G. Rigoll: "Context-Aware Kitchen Utilities", In: Proceedings of the 1st International Conference on Tangible and Embedded Interaction (TEI2007), pp. 213–214, Baton Rouge, Louisiana, USA, February 2007 M. Kranz, A. Schmidt, R.B. Rusu, A. 
Maldonado, M. Beetz, B. Hörnler, G. Rigoll: "Sensing Technologies and the Player-Middleware for Context-Awareness in Kitchen Environments", In: Proceedings of the 4th International Conference on Networked Sensing Systems (INSS2007), pp. 179–186, Brunswick, Germany, June 2007 L. Terrenghi, M. Kranz, P. Holleis, A. Schmidt: "A cube to learn: a tangible user interface for the design of a learning appliance", In: Personal and Ubiquitous Computing, vol. 10, no. 2–3, pp. 153–158, 2006 A. Schmidt, M. Kranz, P. Holleis: "Interacting with the ubiquitous computer: towards embedding interaction", In: Proceedings of the Joint Conference on Smart Objects and Ambient Intelligence (sOc-EUSAI2005), pp. 147–152, Grenoble, France, October 2005 References External links http://albrecht-schmidt.blogspot.com/ 1970 births Living people Human–computer interaction researchers Ubiquitous computing researchers University of Ulm alumni People from Crailsheim University of Bonn faculty University of Duisburg-Essen faculty
49195450
https://en.wikipedia.org/wiki/Rob%20Walling
Rob Walling
Rob Walling is a serial entrepreneur, author, podcaster, and angel investor. He is the author of Start Small, Stay Small: A Developer's Guide to Launching a Startup, which was published in 2010. Walling is the founder of the email marketing software Drip, which was acquired by Leadpages in July 2016. Career In the early 2000s, Walling tried to launch a number of software products that failed. He had his first notable success with DotNetInvoice, an invoicing software application. Subsequently, he launched other small businesses and formed an online community of software founders called Micropreneurs. In 2010, he started a podcast with Mike Taber called Startups for the Rest of Us, which became one of the most popular startup podcasts on iTunes. The next year, in June, he co-founded MicroConf, a conference for self-funded startups, which is held twice a year in Las Vegas and Europe. In August 2011, Walling purchased and revamped HitTail, a web-based software as a service product that provides long tail keyword suggestions. Before Walling purchased it, HitTail had used a freemium business model to market the product. Walling stopped the freemium model and used traditional software marketing to bring in more paying users. He founded Drip, an email marketing tool that allows a user to send emails to their audience based on user behavior, in 2012. Walling has become known as a supporter of self-funding or bootstrapping software companies that turn a profit, instead of raising funding from venture capitalists. Much of his writing focuses on tactics for growing software as a service startups. In 2014, he wrote the foreword to Dan Norris' book, The 7-Day Startup: You Don't Learn Until You Launch. In March 2015, Walling published one of his best-known essays, titled The Stairstep Approach to Bootstrapping, where he outlines a potentially safer, more structured approach to bootstrapping a startup by starting with small, simple product ideas and leveraging the revenue and experience earned from them to tackle more ambitious (and typically more financially rewarding) ideas. This approach was the focus of a chapter of the 2015 book The End of Jobs: Money, Meaning and Freedom Without the 9-to-5 by Taylor Pearson. On July 26, 2016, his company, Drip, was acquired by Leadpages for an undisclosed amount. He led the Drip product team out of Leadpages' Minneapolis headquarters until April 2018. In October 2018, Walling announced that he was starting the first startup accelerator designed for bootstrappers, called TinySeed. References External links robwalling.com Startups for the Rest of Us Founder Cafe Drip TinySeed Living people American businesspeople Date of birth missing (living people) Year of birth missing (living people)
30864422
https://en.wikipedia.org/wiki/Nero%20Burning%20ROM
Nero Burning ROM
Nero Burning ROM, commonly called Nero, is an optical disc authoring program from Nero AG. The software is part of the Nero Multimedia Suite but is also available as a stand-alone product. It is used for burning and copying optical discs such as CDs, DVDs, and Blu-rays. The program also supports the label printing technologies LightScribe and LabelFlash and can be used to convert audio files into other audio formats. Name Nero Burning ROM is a pun in reference to the Roman Emperor Nero, who was best known for his association with the Great Fire of Rome. The emperor allegedly fiddled while the city of Rome burned; Rome in German is spelled Rom, completing the pun. The software's logo features a burning Colosseum, although this is an anachronism, as the Colosseum was not built until after Nero's death. Features Nero Burning ROM is only available for Microsoft Windows. A Linux-compatible version was available from 2005 to 2012, but it has since been discontinued. In newer versions, media can be added to compilations via the Nero MediaBrowser. Nero AirBurn, a new feature in Nero 2015, enables users to burn media straight from their mobile devices. The latest version, Nero Burning ROM 2017, was released in October 2016 and includes 256-bit encryption. The software supports the creation of a variety of media formats: Disk image files Audio CD discs DVD-Video discs Blu-ray Discs AVCHD video discs Bootable data discs ISO/UDF data discs Additional functions include: Printing on discs with LightScribe and LabelFlash technology Erasing rewritable discs Copying audio CD tracks in a choice of audio formats onto a hard disk drive Converting audio files to other audio file formats Connection to the online music database Gracenote Image format support Nero Burning ROM works with a number of optical disc image formats, including the raw uncompressed image using the ISO 9660 standard and Nero's proprietary NRG file format. Depending on the version, additional image formats may be supported. To use non-natively supported formats such as lossless FLAC, WavPack, and Shorten, additional program modules must be installed. The modules are also known as plug-ins and codecs and are usually free, although Nero AG sells some proprietary video and audio plug-ins. Standard CD images created by Nero products have the filename extension .NRG, but users can also create and burn normal ISO images. Varieties Nero Burning ROM is integrated in the Nero Multimedia Suite and is also available as a downloadable standalone product. It is also a part of Nero Essentials – a slimmed-down version of Nero Multimedia Suite – that comes bundled with OEM computers and optical disc writers. Version history Nero Burning ROM Note: Although Nero AG appears to no longer maintain a history of older versions on their website, release notes are archived by several third-party sites. Nero Linux See also InCD – drag and drop packet-writing software from Nero AG Nero Digital – a suite of MPEG-4 codecs developed by Nero AG List of optical disc authoring software References External links Official website 1997 software Optical disc authoring software Shareware Windows CD/DVD writing software Linux CD/DVD writing software
460144
https://en.wikipedia.org/wiki/Edinburgh%20Multiple%20Access%20System
Edinburgh Multiple Access System
The Edinburgh Multi-Access System (EMAS) was a mainframe computer operating system developed at the University of Edinburgh during the 1970s. EMAS was a general-purpose multi-user system that served many of the computing needs of the University of Edinburgh and the University of Kent (the only other site outside Edinburgh to adopt the operating system). History Originally running on the ICL System 4/75 mainframe (based on the design of the IBM 360), it was later reimplemented on the ICL 2900 series of mainframes (as EMAS 2900 or EMAS-2), where it ran in service until the mid-1980s. Near the end of its life, the refactored version was back-ported (as EMAS-3) to the Amdahl 470 mainframe clone, and thence to the IBM System/370-XA architecture (the latter with help from the University of Kent, although they never actually ran EMAS-3). The National Advanced System (NAS) VL80 IBM mainframe clone followed later. The final EMAS system (the Edinburgh VL80) was decommissioned in July 1992. The University of Kent system went live in December 1979, and ran on the least powerful machine in the ICL 2900 range – an ICL 2960, with 2 MB of memory, executing about 290,000 instructions per second. Despite this, it reliably supported around 30 users. This number increased in 1983 with the addition of a further 2 MB of memory and a second Order Code Processor (OCP) (what is normally known as a CPU) running with symmetric multiprocessing. This system was decommissioned in August 1986. Features EMAS was written entirely in the Edinburgh IMP programming language, with only a small number of critical functions using embedded assembler within IMP sources. It had several features that were advanced for the time, including dynamic linking, multi-level storage, an efficient scheduler, a separate user-space kernel ('director'), a user-level shell ('basic command interpreter'), a comprehensive archiving system and a memory-mapped file architecture. Such features led EMAS supporters to claim that their system was superior to Unix for the first 20 years of the latter's existence. Legacy The Edinburgh Computer History Project is attempting to salvage some of the lessons learned from the EMAS project and has the complete source code of EMAS online for public browsing. See also Atlas Autocode References History of computing in the United Kingdom Time-sharing operating systems 1970s software University of Edinburgh School of Informatics
15357987
https://en.wikipedia.org/wiki/List%20of%20mergers%20and%20acquisitions%20by%20Apple
List of mergers and acquisitions by Apple
Apple Inc. is an American multinational corporation that designs and manufactures consumer electronics and software products. It was established in Cupertino, California, on April 1, 1976, by Steve Jobs, Steve Wozniak, and Ronald Wayne, and was incorporated on January 3, 1977. The company's hardware products include the Macintosh line of personal computers, the iPod line of portable media players, the iPad line of tablets, the iPhone line of smartphones, the Apple TV line of digital media players, and the Apple Watch line of smartwatches. Apple's software products include the macOS, iOS, iPadOS, tvOS, and watchOS operating systems, the iTunes media player, the Safari web browser, and the iLife suite of multimedia and creativity software. Apple is publicly known to have acquired more than 100 companies. The actual number of acquisitions is possibly larger, as Apple does not reveal the majority of its acquisitions unless discovered by the press. Apple has co-founded two half-equity partnerships and purchased equity stakes in three preexisting companies, and has made three divestments. Apple has not released the financial details for the majority of its mergers and acquisitions. Apple's business philosophy is to acquire small companies that can be easily integrated into existing company projects. For instance, Apple acquired Emagic and its professional music software, Logic Pro, in 2002. The acquisition was incorporated in the creation of the digital audio workstation software GarageBand, now part of the iLife software suite. The company made its first acquisition on March 2, 1988, with its purchase of Network Innovations. In 2013, Apple acquired thirteen companies. Apple's largest acquisition was that of Beats Electronics in August 2014 for $3 billion. Of the companies Apple has acquired, 71 were based in the United States. In early May 2019, Apple CEO Tim Cook told CNBC that Apple acquires a company every two to three weeks on average, having acquired 20 to 25 companies in the past six months alone. Acquisitions Stakes Divestments Significant investments in Apple Institutional ownership Apple Inc. is a public, joint-stock company registered with the SEC, with 4,715,280,000 outstanding shares. These are mainly held by institutional investors and funds. The top 16 institutional shareholders (and eight related, notable funds with over 25 million shares) are: (338,533,988): The Vanguard Group, Inc. (110,521,153): Vanguard Total Stock Market Index Fund (85,435,882): Vanguard 500 Index Fund (43,783,603): Vanguard Institutional Index Fund-Institutional Index Fund (30,726,434): Vanguard Growth Index Fund (296,598,349): BlackRock Inc. (30,589,798): BlackRock iShares Core S&P 500 ETF (249,589,329): Berkshire Hathaway, Inc. (185,419,773): State Street Corporation (49,752,710): SPDR S&P 500 Trust ETF (112,369,787): FMR, LLC (33,772,877): Fidelity 500 Index Fund (59,311,465): Northern Trust Corporation (58,414,412): Geode Capital Management, LLC (47,548,838): Norges Bank Investment Management (44,444,899): Bank of New York Mellon Corporation (43,431,586): Invesco Ltd. (37,375,269): Invesco QQQ Series 1 ETF (39,648,345): Bank of America Corp. (38,105,167): Morgan Stanley (37,679,873): JP Morgan Chase and Co. (33,304,696): Goldman Sachs Group Inc. (27,564,370): T. Rowe Price Associates Inc.
(25,527,026): Wells Fargo & Company Subsidiaries of Apple Anobit Apple Energy Apple IMC Apple Sales International Apple Services Apple Worldwide Video Beats Electronics Beddit Braeburn Capital Claris International (formerly FileMaker Inc) Shazam See also List of largest mergers and acquisitions Lists of corporate acquisitions and mergers Notes References Citations Bibliography External links – official website Apple Mergers and acquisitions
53680114
https://en.wikipedia.org/wiki/Sokobond
Sokobond
Sokobond is a puzzle video game created by Alan Hazelden and Harry Lee. The game was released on Linux, OS X, and Windows-based personal computers in August 2013. It was later released for Nintendo Switch in September 2021. Gameplay Sokobond is a puzzle video game that tasks players with pushing atoms around a stage to form molecules. Development and release Sokobond was created by independent developers Alan Hazelden and Harry Lee. The game's music was composed by Allison Walker. The game was released on Linux, OS X, and Windows-based personal computers on 27 August 2013. The game was later released on the digital distribution service Steam, after being greenlit by the community. Reception Sokobond received "generally favorable" reviews from professional critics according to review aggregator website Metacritic. References External links 2013 video games Linux games MacOS games Puzzle video games Video games developed in the United Kingdom Windows games
2576109
https://en.wikipedia.org/wiki/Sports%20rating%20system
Sports rating system
A sports rating system is a system that analyzes the results of sports competitions to provide ratings for each team or player. Common systems include polls of expert voters, crowdsourcing non-expert voters, betting markets, and computer systems. Ratings, or power ratings, are numerical representations of competitive strength, often directly comparable so that the game outcome between any two teams can be predicted. Rankings, or power rankings, can be directly provided (e.g., by asking people to rank teams), or can be derived by sorting each team's ratings and assigning an ordinal rank to each team, so that the highest rated team earns the #1 rank. Rating systems provide an alternative to traditional sports standings which are based on win-loss-tie ratios. In the United States, the biggest use of sports rating systems is to rate NCAA college football teams in Division I FBS, choosing teams to play in the College Football Playoff. Sports rating systems are also used to help determine the field for the NCAA men's and women's basketball tournaments, men's professional golf tournaments, professional tennis tournaments, and NASCAR. They are often mentioned in discussions about the teams that could or should receive invitations to participate in certain contests, despite not earning the most direct entrance path (such as a league championship). Computer rating systems can tend toward objectivity, without specific player, team, regional, or style bias. Ken Massey writes that an advantage of computer rating systems is that they can "objectively track all" 351 college basketball teams, while human polls "have limited value". Computer ratings are verifiable and repeatable, and are comprehensive, requiring assessment of all selected criteria. By comparison, rating systems relying on human polls include inherent human subjectivity; this may or may not be an attractive property depending on system needs. History Sports rating systems have been around for almost 80 years; in the early days, ratings were calculated on paper rather than by computer, as most are today. Some older computer systems still in use today include: Jeff Sagarin's systems, the New York Times system, and the Dunkel Index, which dates back to 1929. Before the advent of the College Football Playoff, the Bowl Championship Series championship game participants were determined by a combination of expert polls and computer systems. Theory Sports rating systems use a variety of methods for rating teams, but the most prevalent method is called a power rating. The power rating of a team is a calculation of the team's strength relative to other teams in the same league or division. The basic idea is to maximize the number of transitive relations in a given data set due to game outcomes. For example, if A defeats B and B defeats C, then one can safely say that A>B>C. There are obvious problems with basing a system solely on wins and losses. For example, if C defeats A, then an intransitive relation is established (A > B > C > A) and a ranking violation will occur if this is the only data available. Scenarios such as this happen fairly regularly in sports—for example, in the 2005 NCAA Division I-A football season, Penn State beat Ohio State, Ohio State beat Michigan, and Michigan beat Penn State. To address these logical breakdowns, rating systems usually consider other criteria such as the game's score and where the match was held (for example, to assess a home field advantage).
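To make such a violation concrete, here is a minimal sketch in Python (written for this article rather than taken from any published rating system) that finds three-team cycles in a set of game results, using the 2005 example above:

from itertools import permutations

# Each game is a directed edge from winner to loser.
wins = {("Penn State", "Ohio State"),
        ("Ohio State", "Michigan"),
        ("Michigan", "Penn State")}

teams = {team for game in wins for team in game}

def ranking_violations(wins, teams):
    # A 3-cycle (A beat B, B beat C, C beat A) means no consistent
    # ranking can be built from wins and losses alone.
    return [(a, b, c)
            for a, b, c in permutations(teams, 3)
            if (a, b) in wins and (b, c) in wins and (c, a) in wins]

print(ranking_violations(wins, teams))  # reports the 2005 cycle, once per rotation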
In most cases, though, each team plays a sufficient number of other games during a given season, which lessens the overall effect of such violations. From an academic perspective, the use of linear algebra and statistics is popular among many of the systems' authors to determine their ratings. Some academic work is published in forums like the MIT Sloan Sports Analytics Conference, others in traditional statistics, mathematics, psychology, and computer science journals. If sufficient "inter-divisional" league play is not accomplished, teams in an isolated division may be artificially propped up or down in the overall ratings due to a lack of correlation to other teams in the overall league. This phenomenon is evident in systems that analyze historical college football seasons, such as when the top Ivy League teams of the 1970s, like Dartmouth, were calculated by some rating systems to be comparable with accomplished powerhouse teams of that era such as Nebraska, USC, and Ohio State. This conflicts with the subjective view that, while good in their own right, those Ivy League teams were not nearly as good as the top programs. However, this may be considered a "pro" by non-BCS teams in Division I-A college football, who point out that rating systems have proven that their top teams belong in the same strata as the BCS teams. This is evidenced by the 2004 Utah team that went undefeated in the regular season and earned a BCS bowl bid due to the bump in their overall BCS ratings via the computer ratings component. They went on to play and defeat the Big East Conference champion Pittsburgh in the 2005 Fiesta Bowl by a score of 35–7. A related example occurred during the 2006 NCAA Men's Basketball Tournament, where George Mason was awarded an at-large tournament bid due to their regular season record and their RPI rating, and rode that opportunity all the way to the Final Four. Goals of some rating systems differ from one another. For example, systems may be crafted to provide a perfect retrodictive analysis of the games played to date, while others are predictive and give more weight to future trends rather than past results. This results in the potential for misinterpretation of rating system results by people unfamiliar with these goals; for example, a rating system designed to give accurate point spread predictions for gamblers might be ill-suited for use in selecting teams most deserving to play in a championship game or tournament. Rating considerations Home advantage When two teams of equal quality play, the team at home tends to win more often. The size of the effect changes based on the era of play, game type, season length, sport, even the number of time zones crossed. But across all conditions, "simply playing at home increases the chances of winning." A win away from home is therefore seen more favorably than a win at home, because it was more challenging. Home advantage (which, for sports played on a pitch, is almost always called "home field advantage") is also based on the qualities of the individual stadium and crowd; the advantage in the NFL can be more than a 4-point difference from the stadium with the least advantage to the stadium with the most. Strength of schedule Strength of schedule refers to the quality of a team's opponents. A win against an inferior opponent is usually seen less favorably than a win against a superior opponent. Often teams in the same league, who are compared against each other for championship or playoff consideration, have not played the same opponents.
Therefore, judging their relative win-loss records is complicated. The College Football Playoff committee uses a limited strength-of-schedule algorithm that only considers opponents' records and opponents' opponents' records (much like RPI). Points versus wins A key dichotomy among sports rating systems lies in the representation of game outcomes. Some systems store final scores as ternary discrete events: wins, draws, and losses. Other systems record the exact final game score, then judge teams based on margin of victory. Rating teams based on margin of victory is often criticized as creating an incentive for coaches to run up the score, an "unsportsmanlike" outcome. Still other systems choose a middle ground, reducing the marginal value of additional points as the margin of victory increases. Sagarin chose to clamp the margin of victory to a predetermined amount. Other approaches include the use of a decay function, such as a logarithm or placement on a cumulative distribution function. In-game information Beyond points or wins, some system designers choose to include more granular information about the game. Examples include time of possession of the ball, individual statistics, and lead changes. Data about weather, injuries, or "throw-away" games near season's end may affect game outcomes but are difficult to model. "Throw-away games" are games where teams have already earned playoff slots and have secured their playoff seeding before the end of the regular season, and want to rest/protect their starting players by benching them for remaining regular season games. This usually results in unpredictable outcomes and may skew the outcome of rating systems. Team composition Teams often shift their composition between and within games, and players routinely get injured. Rating a team is often about rating a specific collection of players. Some systems assume parity among all members of the league, such as each team being built from an equitable pool of players via a draft or free agency system as is done in many major league sports such as the NFL, MLB, NBA, and NHL. This is certainly not the case in collegiate leagues such as Division I-A football or men's and women's basketball. Cold start At the beginning of a season, there have been no games from which to judge teams' relative quality. Solutions to the cold start problem often include some measure of the previous season, perhaps weighted by what percent of the team is returning for the new season. ARGH Power Ratings is an example of a system that uses multiple previous years plus a percentage weight of returning players. Rating methods Permutation of standings Several methods offer some permutation of traditional standings. This search for the "real" win-loss record often involves using other data, such as point differential or identity of opponents, to alter a team's record in a way that is easily understandable. Sportswriter Gregg Easterbrook created a measure of Authentic Games, which only considers games played against opponents deemed to be of sufficiently high quality. The consensus is that all wins are not created equal. Pythagorean Pythagorean expectation, or Pythagorean projection, calculates a percentage based on the number of points a team has scored and allowed. Typically the formula involves the number of points scored, raised to some exponent, placed in the numerator. Then the number of points the team allowed, raised to the same exponent, is placed in the denominator and added to the value in the numerator.
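In symbols, the expectation is Win% = PF^x / (PF^x + PA^x), where PF is points scored, PA is points allowed, and x is a sport-specific exponent. A brief illustrative sketch in Python (written for this article, not taken from any cited system; the 2.37 exponent and the 208/183 point totals match the NFL example discussed below):

def pythagorean_expectation(points_for, points_against, exponent=2.37):
    # Win% = PF^x / (PF^x + PA^x)
    pf = points_for ** exponent
    pa = points_against ** exponent
    return pf / (pf + pa)

# 2014 Arizona Cardinals through 9 games: 208 points scored, 183 allowed.
pct = pythagorean_expectation(208, 183)                  # ~0.575
print(f"{pct:.1%} -> {pct * 9:.1f} expected wins vs. 8 actual")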
Football Outsiders has used the formula PF^2.37 / (PF^2.37 + PA^2.37), where PF is points scored and PA is points allowed. The resulting percentage is often compared to a team's true winning percentage, and a team is said to have "overachieved" or "underachieved" compared to the Pythagorean expectation. For example, Bill Barnwell calculated that before week 9 of the 2014 NFL season, the Arizona Cardinals had a Pythagorean record two wins lower than their real record. Bill Simmons cites Barnwell's work before week 10 of that season and adds that "any numbers nerd is waving a “REGRESSION!!!!!” flag right now." In this example, the Arizona Cardinals' regular season record was 8–1 going into the 10th week of the 2014 season. The Pythagorean win formula implied a winning percentage of 57.5%, based on 208 points scored and 183 points allowed. Multiplied by 9 games played, the Cardinals' Pythagorean expectation was 5.2 wins and 3.8 losses. The team had "overachieved" at that time by 2.8 wins, derived from their actual 8 wins less the expected 5.2 wins, an increase of 0.8 overachieved wins from just a week prior. Trading "skill points" Originally designed by Arpad Elo as a method for ranking chess players, the Elo rating system has been adapted by several people for team sports such as basketball, soccer and American football. For instance, Jeff Sagarin and FiveThirtyEight publish NFL football rankings using Elo methods. Elo ratings initially assign strength values to each team, and teams trade points based on the outcome of each game. Solving equations Researchers like Matt Mills use Markov chains to model college football games, with team strength scores as outcomes. Algorithms like Google's PageRank have also been adapted to rank football teams. List of sports rating systems Advanced NFL Stats, United States of America National Football League ARGH Power Ratings ATP Rankings, international tennis Colley Matrix Dickinson System, United States of America college football Pomeroy College Basketball Ratings, United States of America college basketball Ratings Percentage Index (RPI), United States of America NCAA basketball, baseball, softball, hockey, soccer, lacrosse, and volleyball Smithman Qualitative Index, United States of America soccer - obsolete TrueSkill, a Bayesian ranking system inspired by the Glicko rating system Bowl Championship Series computer rating systems In collegiate American football, the following people's systems were used to choose teams to play in the national championship game. Anderson & Hester / Seattle Times Richard Billingsley Wes Colley / Atlanta Journal-Constitution Richard Dunkel Kenneth Massey Herman Matthews / Scripps Howard New York Times David Rothman Jeff Sagarin / USA Today Peter Wolfe Further reading Bibliographies Popular press Wayne Winston is a professor of decision sciences at Indiana University and was a classmate of Jeff Sagarin at MIT. He published several editions of a text on the Microsoft Excel spreadsheet software that includes material on ranking sports teams, as well as a book focused directly on this topic. He and Sagarin created rating systems together. Academic work Much of this information is available online. References Sports records and statistics Sports terminology Sports science Rating system Baseball statistics Basketball statistics Analytics American football records and statistics
6682215
https://en.wikipedia.org/wiki/Virtual%20University%20of%20Pakistan
Virtual University of Pakistan
The Virtual University of Pakistan (VU) is a public university headquartered at M.A. Jinnah Campus, Defence Road, off Raiwind Road, Lahore, Punjab, Pakistan. The university is built on modern information and communication technologies. Overview Virtual University is Pakistan's first university based completely on modern information and communication technologies. It was established in 2002 by the Government of Pakistan, with the promotion of distance education in modern information and communication technologies as its primary objective, and is noted for its online lectures, broadcasting rigorous programs regardless of its students' physical locations. It is recognized by the Higher Education Commission (Pakistan). The university offers undergraduate and post-graduate courses in business administration, economics, computer science and information technology. Due to its heavy reliance on serving lectures through the internet, Pakistani students residing overseas in several other countries of the region are also enrolled in the university's programs. Academic degrees VU offers the following degree programs: Bachelor's programs (4 years) BS Sociology BS Computer Science BS Information Technology BS Software Engineering BS Business Administration BS Public Administration BS Management BS Marketing Bachelor of Business and Information Technology (BBIT) BS Mass Communication BS Psychology BS Commerce BS Accounting and Finance BS Banking and Finance Bachelor's programs (2 years) BA BA (Mass Communication) BA (Psychology) B.Sc. (Computer Science) B.Com B.Ed. (Elementary) BA (Business Administration) BA (Mathematics, Statistics and Economics) BA (Supply Chain Management) Associate Degree Program (ADP 2 years) Computer Networking Database Management System Web Design and Development Computer Science Accounting and Finance Islamic Banking Human Resource Management Operations Management Sales and Marketing Supply Chain Management Master's programs (MA/MSc 3.5 years) M.Sc. Statistics Master of Computer Science (MCS) Master of Information Technology (MIT) Master of Business Administration (MBA)-Executive M.Sc. Applied Psychology M.Sc. Economics M.Sc. Organizational Psychology M.Sc. Mass Communication M.Sc. Mathematics M.A. English Language Teaching (ELT) Master of Commerce (M.Com) Master of Public Administration (MPA) Master of Business Economics (MBEcon) Master of Accounting Master of Finance Master of Accounting and Finance Master of Banking and Finance Master of Human Resources Management (MHRM) Master of Operation and Supply Chain Management Master's programs (MS/MPhil 2 years) MS Mathematics MSBA/MBA MS Bioinformatics MS Biotechnology MS Genetics MS Molecular Biology M.Phil. Education (Educational Leadership and Management) MS Computer MS Zoology Doctoral programs Ph.D Computer Science Ph.D Biotechnology Campuses Since its foundation in 2002, VU has expanded operations to reach more than one hundred cities throughout the country, with more than 190 associated institutions providing infrastructure support to students. Virtual University has two types of campuses. The first type comprises its own campuses, which are called its offices; its head office is located in Lahore. The second type, private virtual campuses (PVCs), are run by private organizations and are more commonly known as affiliated campuses.
Virtual campuses: Azad Kashmir: 3; Balochistan: 4; Islamabad Capital Territory: 3; Gilgit-Baltistan: 2; Khyber Pakhtunkhwa: 15; Punjab: 152; Sindh: 25. Distance teaching method VU is one of two Pakistani universities which provide distance education (the other is Allama Iqbal Open University). Students can access lectures through its Learning Management System (LMS), YouTube channel and online sessions. First batch of 500 students starts classes. March 26, 2002 Federal Charter granted by Government of Pakistan. September 1, 2002 VU starts broadcasts over its own TV channels VTV1 and VTV2. June 15, 2004 VU selected as coordinating institute of multi-country IDRC funded project. March 1, 2005 VU puts its lectures on YouTube. November 2005 Launch of MCS, MIT and MBA programs. March 6, 2006 Launch of two new TV channels, VTV3 and VTV4. September 26, 2006 VU becomes an Asia Pacific Broadcasting Union (ABU) member. November 7, 2006 VU agreement with Ujala TV Dubai to telecast VU programs. November 30, 2006. VU agreement with the University of Veterinary and Animal Sciences to offer a joint BS Bio-informatics program. May 25, 2007 Launch of MS program in Computer Science. Fall 2007 Launch of B.A., B.Com., B.Sc. (2-year programs). Fall 2009 VU launches its in-house developed LMS (Learning Management System) and other MIS(s) by the IT Department. Spring 2009 The Virtual University of Pakistan held its first convocation on Tuesday 18 May 2010 simultaneously at Peshawar, Rawalpindi, Lahore, Jamshoro, Karachi and Quetta. The Virtual University of Pakistan holds the highest category, 'W', awarded by HEC. On May 28, 2013, HEC clarified the ambiguity created by some newspapers among VU students and the general public, reconfirming the status of the university as an online and distance education institution of the country. References Distance education institutions based in Pakistan Educational institutions established in 2002 Universities and colleges in Lahore Public universities and colleges in Pakistan 2002 establishments in Pakistan
81193
https://en.wikipedia.org/wiki/Cross-platform%20software
Cross-platform software
In computing, cross-platform software (also called multi-platform software, platform-agnostic software, or platform-independent software) is computer software that is designed to work on several computing platforms. Some cross-platform software requires a separate build for each platform, but some can be run directly on any platform without special preparation, being written in an interpreted language or compiled to portable bytecode for which the interpreters or run-time packages are common or standard components of all supported platforms. For example, a cross-platform application may run on Microsoft Windows, Linux, and macOS. Cross-platform software may run on many platforms, or as few as two. Some frameworks for cross-platform development are Codename One, Kivy, Qt, Flutter, NativeScript, Xamarin, Phonegap, Ionic, and React Native. Platforms Platform can refer to the type of processor (CPU) or other hardware on which an operating system (OS) or application runs, the type of OS, or a combination of the two. An example of a common platform is the Microsoft Windows OS running on the x86 architecture. Other well-known desktop platforms are Linux/Unix and macOS, both of which are themselves cross-platform. There are, however, many devices such as smartphones that are also platforms. Applications can be written to depend on the features of a particular platform—either the hardware, OS, or virtual machine (VM) it runs on. For example, the Java platform is a common VM platform which runs on many OSs and hardware types. Hardware A hardware platform can refer to an instruction set architecture. For example: x86 architecture and its variants such as IA-32 and x86-64. These machines often run one version of Microsoft Windows, though they can run other OSs including Linux, OpenBSD, NetBSD, macOS and FreeBSD. The 32-bit ARM architecture (and its newer 64-bit version) is common on smartphones and tablet computers, which run Android, iOS and other mobile operating systems. Software A software platform can be either an OS or programming environment, though more commonly it is a combination of both. An exception is Java, which uses an OS-independent VM to execute Java bytecode. Examples of software platforms are: BlackBerry 10 Android for smartphones and tablet computers (x86, ARM) iOS (ARM) Microsoft Windows (x86, ARM) Microsoft's Common Language Infrastructure (CLI), also known as .NET Framework Cross-platform variant Mono (previously by Novell and now by Xamarin) Java Web browsers – more or less compatible with each other, running JavaScript web-apps Linux (x86, PowerPC, ARM, and other architectures) macOS (x86, PowerPC (on 10.5 and below), and ARM (on Apple silicon or 11.0 and above)) Mendix Solaris (SPARC, x86) SymbianOS SPARC PlayStation 4 (x86), PlayStation 3 (PowerPC) and PlayStation Vita (ARM) Unix Xbox Minor/historical AmigaOS (m68k), AmigaOS 4 (PowerPC), AROS (x86, PowerPC, m68k), MorphOS (PowerPC) Atari TOS, MiNT BSD (many platforms; see NetBSD, for example) DOS-type systems on the x86: MS-DOS, IBM PC DOS, DR-DOS, FreeDOS OS/2, eComStation Java The Java language is typically compiled to run on a VM that is part of the Java platform. The Java VM (JVM) is a CPU implemented in software, which runs all Java code. This enables the same code to run on all systems that implement a JVM. Java software can be executed by a hardware-based Java processor. This is used mostly in embedded systems.
Java code running in the JVM has access to OS-related services, like disk I/O and network access, if the appropriate privileges are granted. The JVM makes the system calls on behalf of the Java application. This lets users decide the appropriate protection level, depending on an ACL. For example, disk and network access is usually enabled for desktop applications, but not for browser-based applets. The Java Native Interface (JNI) can also be used to access OS-specific functions, with a loss of portability. Currently, Java Standard Edition software can run on Microsoft Windows, macOS, several Unix-like OSs, and several real-time operating systems for embedded devices. For mobile applications, browser plugins are used for Windows and Mac based devices, and Android has built-in support for Java. There are also subsets of Java, such as Java Card or Java Platform, Micro Edition, designed for resource-constrained devices. Implementation For software to be considered cross-platform, it must function on more than one computer architecture or OS. Developing such software can be a time-consuming task because different OSs have different application programming interfaces (APIs). For example, Linux uses a different API from Windows. Software written for one OS may not automatically work on all architectures that OS supports. One example is OpenOffice.org, which in 2006 did not natively run on AMD64 or Intel 64 processors implementing the x86-64 standards; by 2012 it was "mostly" ported to these systems. Just because software is written in a popular programming language such as C or C++, it does not mean it will run on all OSs that support that language—or even on different versions of the same OS. Web applications Web applications are typically described as cross-platform because, ideally, they are accessible from any web browser: the browser is the platform. Web applications generally employ a client–server model, but vary widely in complexity and functionality. It can be hard to reconcile the desire for features with the need for compatibility. Basic web applications perform all or most processing from a stateless server, and pass the result to the client web browser. All user interaction with the application consists of simple exchanges of data requests and server responses. This type of application was the norm in the early phases of World Wide Web application development. Such applications follow a simple transaction model, identical to that of serving static web pages. Today, they are still relatively common, especially where cross-platform compatibility and simplicity are deemed more critical than advanced functionality. Prominent examples of advanced web applications include the Web interface to Gmail, A9.com, the Google Maps website, and the Live Search service (now Bing) from Microsoft. Such applications routinely depend on additional features found only in the more recent versions of popular web browsers. These features include Ajax, JavaScript, Dynamic HTML, SVG, and other components of rich web applications. Older browser versions often lack these. Design Because of the competing interests of compatibility and functionality, numerous design strategies have emerged. Many software systems use a layered architecture where platform-dependent code is restricted to the upper- and lowermost layers.
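As a small illustration of that layering, here is a Python sketch (the function and path names are hypothetical) in which only the lowest layer inspects the platform, while upper layers call a single platform-neutral function:

import os
import sys

# Lowermost layer: the only code that inspects the platform.
def _config_dir_windows():
    return os.path.expandvars(r"%APPDATA%\MyApp")

def _config_dir_posix():
    return os.path.expanduser("~/.config/myapp")

# Platform-neutral interface used by all upper layers.
def config_dir():
    if sys.platform.startswith("win"):
        return _config_dir_windows()
    return _config_dir_posix()

print("settings live in:", config_dir())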
Graceful degradation Graceful degradation attempts to provide the same or similar functionality to all users and platforms, while diminishing that functionality to a least common denominator for more limited client browsers. For example, a user attempting to use a limited-feature browser to access Gmail may notice that Gmail switches to basic mode, with reduced functionality but still of use. Multiple codebases Some software is maintained in distinct codebases for different (hardware and OS) platforms, with equivalent functionality. This requires more effort to maintain the code, but can be worthwhile where the amount of platform-specific code is high. Single codebase This strategy relies on having one codebase that may be compiled to multiple platform-specific formats. One technique is conditional compilation. With this technique, code that is common to all platforms is not repeated. Blocks of code that are only relevant to certain platforms are made conditional, so that they are only interpreted or compiled when needed. Another technique is separation of functionality, which disables functionality not supported by browsers or OSs, while still delivering a complete application to the user. (See also: Separation of concerns.) This technique is used in web development where interpreted code (as in scripting languages) can query the platform it is running on to execute different blocks conditionally. Third-party libraries Third-party libraries attempt to simplify cross-platform capability by hiding the complexities of client differentiation behind a single, unified API, at the expense of vendor lock-in. Responsive Web design Responsive web design (RWD) is a Web design approach aimed at crafting the visual layout of sites to provide an optimal viewing experience—easy reading and navigation with a minimum of resizing, panning, and scrolling—across a wide range of devices, from mobile phones to desktop computer monitors. Little or no platform-specific code is used with this technique. Testing Cross-platform applications need much more integration testing. Some web browsers prohibit installation of different versions on the same machine. There are several approaches used to target multiple platforms, but all of them result in software that requires substantial manual effort for testing and maintenance. Techniques such as full virtualization are sometimes used as a workaround for this problem. Tools such as the Page Object Model allow cross-platform tests to be scripted so that one test case covers multiple versions of an app. If different versions have similar user interfaces, all can be tested with one test case. Traditional applications Web applications are becoming increasingly popular but many computer users still use traditional application software which does not rely on a client/web-server architecture. The distinction between traditional and web applications is not always clear. Features, installation methods and architectures for web and traditional applications overlap and blur the distinction. Nevertheless, this simplifying distinction is a common and useful generalization. Binary software Traditional application software has been distributed as binary files, especially executable files. Executables only support the platform they were built for—which means that a single cross-platform executable could be very bloated with code that never executes on a particular platform. Instead, generally there is a selection of executables, each built for one platform.
For software that is distributed as a binary executable, such as that written in C or C++, there must be a software build for each platform, using a toolset that translates—transcompiles—a single codebase into multiple binary executables. For example, Firefox, an open-source web browser, is available on Windows, macOS (both PowerPC and x86 through what Apple Inc. calls a Universal binary), Linux, and BSD on multiple computer architectures. The four platforms (in this case, Windows, macOS, Linux, and BSD) are separate executable distributions, although they come largely from the same source code. The use of different toolsets may not be enough to build working executables for different platforms. In this case, programmers must port the source code to the new platform. For example, an application such as Firefox, which already runs on Windows on the x86 family, can be modified and re-built to run on Linux on the x86 (and potentially other architectures) as well. The multiple versions of the code may be stored as separate codebases, or merged into one codebase. An alternative to porting is cross-platform virtualization, where applications compiled for one platform can run on another without modification of the source code or binaries. As an example, Apple's Rosetta, which is built into Intel-based Macintosh computers, runs applications compiled for the previous generation of Macs that used PowerPC CPUs. Another example is IBM PowerVM Lx86, which allows Linux/x86 applications to run unmodified on the Linux/Power OS. Example of cross-platform binary software: The LibreOffice office suite is built for Microsoft Windows, macOS, many Linux distributions, FreeBSD, NetBSD, OpenBSD, Android, iOS, Chrome OS, the web-based Collabora Online and many others. Many of these are supported on several hardware platforms with processor architectures including IA-32, x86-64 and ARM. Scripts and interpreted languages A script can be considered to be cross-platform if its interpreter is available on multiple platforms and the script only uses the facilities built into the language. For example, a script written in Python for a Unix-like system will likely run with little or no modification on Windows, because Python also runs on Windows; indeed there are many implementations (e.g. IronPython for .NET Framework). The same goes for many of the open-source scripting languages. Unlike binary executable files, the same script can be used on all computers that have software to interpret the script. This is because the script is generally stored in plain text in a text file. There may be some trivial issues, such as the representation of a new line character; a short sketch after the following list illustrates this. Some popular cross-platform scripting languages are: bash – A Unix shell commonly run on Linux and other modern Unix-like systems, as well as on Windows via the Cygwin POSIX compatibility layer. Perl – First released in 1987. Used for CGI programming, small system administration tasks, and more. PHP – Mostly used for web applications. Python – A language which focuses on rapid application development and ease of writing, instead of run-time efficiency. Ruby – An object-oriented language which aims to be easy to read. Can also be used on the web through Ruby on Rails. Tcl – A dynamic programming language, suitable for a wide range of uses, including web and desktop applications, networking, administration, testing and many more.
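The sketch mentioned above: a minimal Python example of how such scripts stay portable by deferring newline and path details to the interpreter and its standard library (the file location is illustrative):

import os
from pathlib import Path

# pathlib composes paths portably ('\' on Windows, '/' elsewhere).
log_file = Path.home() / "logs" / "run.txt"
log_file.parent.mkdir(parents=True, exist_ok=True)

# Text mode translates '\n' to the platform's own newline on write,
# which hides the newline-representation differences mentioned above.
with open(log_file, "w", encoding="utf-8") as f:
    f.write("started\n")

print("wrote", log_file, "- native newline is", repr(os.linesep))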
Video games Cross-platform or multi-platform is a term that can also apply to video games released on a range of video game consoles. Examples of cross-platform games include: Miner 2049er, Tomb Raider: Legend, the FIFA series, the NHL series and Minecraft. Each has been released across a variety of gaming platforms, such as the Wii, PlayStation 3, Xbox 360, personal computers, and mobile devices. Some platforms are harder to write for than others. To offset this, a video game may be released on a few platforms first, then later on others. Typically, this happens when a new gaming system is released, because video game developers need to acquaint themselves with its hardware and software. Some games may not be cross-platform because of licensing agreements between developers and video game console manufacturers that limit development to one particular console. As an example, Disney could create a game with the intention of release on the latest Nintendo and Sony game consoles. Should Disney license the game with Sony first, it may be required to release the game solely on Sony's console for a short time or indefinitely. Cross-platform play Several developers have implemented ways to play games online while using different platforms. Psyonix, Epic Games, Microsoft, and Valve all possess technology that allows Xbox 360 and PlayStation 3 gamers to play with PC gamers, leaving the decision of which platform to use to consumers. The first game to allow this level of interactivity between PC and console games was Quake 3. Games that feature cross-platform online play include Rocket League, Final Fantasy XIV, Street Fighter V, Killer Instinct, Paragon and Fable Fortune, and Minecraft with its Better Together update on Windows 10, VR editions, Pocket Edition and Xbox One. Programming Cross-platform programming is the practice of deliberately writing software to work on more than one platform. Approaches There are different ways to write a cross-platform application. One approach is to create multiple versions of the same software in different source trees—in other words, the Microsoft Windows version of an application might have one set of source code files and the Macintosh version another, while a FOSS *nix system might have a third. While this is straightforward, compared to developing for only one platform it can cost much more, requiring a larger team or slower releases. It can also result in more bugs to be tracked and fixed. Another approach is to use software that hides the differences between the platforms. This abstraction layer insulates the application from the platform. Such applications are platform agnostic. Applications that run on the JVM are built this way. Some applications mix various methods of cross-platform programming to create the final application. An example is the Firefox web browser, which uses abstraction to build some of the lower-level components, with separate source subtrees for implementing platform-specific features (like the GUI), and the implementation of more than one scripting language to ease software portability. Firefox implements XUL, CSS and JavaScript for extending the browser, in addition to classic Netscape-style browser plugins. Much of the browser itself is written in XUL, CSS, and JavaScript. Toolkits and environments There are many tools available to help the process of cross-platform programming: 8th: a development language which utilizes Juce as its GUI layer. It currently supports Android, iOS, Windows, macOS, Linux and Raspberry Pi.
Anant Computing: A mobile application platform that works in all Indian languages, including their keyboards, and also supports AppWallet and native performance in all OSs. AppearIQ: a framework that supports the workflow of app development and deployment in an enterprise environment. Natively developed containers present hardware features of the mobile devices or tablets through an API to HTML5 code, thus facilitating the development of mobile apps that run on different platforms. Boden: a UI framework written in C++. Cairo: a free software library used to provide a vector graphics-based, device-independent API. It is designed to provide primitives for 2-dimensional drawing across a number of different backends. Cairo is written in C and has bindings for many programming languages. Cocos2d: an open-source toolkit and game engine for developing 2D and simple 3D cross-platform games and applications. Codename One: an open-source Write Once Run Anywhere (WORA) framework for Java and Kotlin developers. Delphi: an IDE which uses a Pascal-based language for development. It supports Android, iOS, Windows, macOS and Linux. Ecere SDK: a GUI and 2D/3D graphics toolkit and IDE, written in eC and with support for additional languages such as C and Python. It supports Linux, FreeBSD, Windows, Android, macOS and the Web through Emscripten or Binaryen (WebAssembly). Eclipse: an open-source development environment. Implemented in Java with a configurable architecture which supports many tools for software development. Add-ons are available for several languages, including Java and C++. FLTK: an open-source toolkit, but more lightweight because it restricts itself to the GUI. Flutter: A cross-platform UI framework for Android and iOS developed by Google. fpGUI: An open-source widget toolkit that is completely implemented in Object Pascal. It currently supports Linux, Windows and a bit of Windows CE. GeneXus: A Windows rapid software development solution for cross-platform application creation and deployment based on knowledge representation and supporting C#, COBOL, Java including Android and BlackBerry smart devices, Objective-C for Apple mobile devices, RPG, Ruby, Visual Basic, and Visual FoxPro. GLBasic: A BASIC dialect and compiler that generates C++ code. It includes cross compilers for many platforms and supports numerous platforms (Windows, Mac, Linux, Android, iOS and some exotic handhelds). Godot: an SDK which uses the Godot Engine. GTK+: An open-source widget toolkit for Unix-like systems with X11 and Microsoft Windows. Haxe: An open-source language. Juce: An application framework written in C++, used to write native software on numerous systems (Microsoft Windows, POSIX, macOS), with no change to the code. Kivy: an open-source cross-platform UI framework written in Python. It supports Android, iOS, Linux, OS X, Windows and Raspberry Pi. LEADTOOLS: Cross-platform SDK libraries to integrate recognition, document, medical, imaging, and multimedia technologies into Windows, iOS, macOS, Android, Linux and web applications. LiveCode: a commercial cross-platform rapid application development language inspired by HyperTalk. Lazarus: A programming environment for the FreePascal Compiler. It supports the creation of self-standing graphical and console applications and runs on Linux, MacOSX, iOS, Android, WinCE, Windows and the web.
Max/MSP: A visual programming language that encapsulates platform-independent code with a platform-specific runtime environment into applications for macOS and Windows. MechDome: a cross-platform Android runtime that allows unmodified Android apps to run natively on iOS and macOS. Mendix: a cloud-based low-code application development platform. MonoCross: an open-source model–view–controller design pattern where the model and controller are cross-platform but the view is platform-specific. Mono: An open-source cross-platform version of Microsoft .NET (a framework for applications and programming languages). MoSync: an open-source SDK for mobile platform app development in the C++ family. Mozilla application framework: an open-source platform for building macOS, Windows and Linux applications. NativeScript: a cross-platform JavaScript/TypeScript framework for Android and iOS development. OpenGL: a 3D graphics library. Pixel Game Maker MV: A proprietary 2D game development software for Windows for developing Windows and Nintendo Switch games. PureBasic: a proprietary language and IDE for building macOS, Windows and Linux applications. ReNative: A universal development SDK to build multi-platform projects with React Native. Includes the latest iOS, tvOS, Android, Android TV, Web, Tizen TV, Tizen Watch, LG webOS, macOS/OSX, Windows, KaiOS, Firefox OS and Firefox TV platforms. Qt: an application framework and widget toolkit for Unix-like systems with X11, Microsoft Windows, macOS, and other systems—available under both proprietary and open-source licenses. Simple and Fast Multimedia Library: A multimedia C++ API that provides low and high level access to graphics, input, audio, etc. Simple DirectMedia Layer: an open-source multimedia library written in C that creates an abstraction over various platforms’ graphics, sound, and input APIs. It runs on OSs including Linux, Windows and macOS and is aimed at games and multimedia applications. Smartface: a native app development tool to create mobile applications for Android and iOS, using a WYSIWYG design editor with a JavaScript code editor. Tcl/Tk Ultimate++: a C++ rapid application development framework focused on programmer productivity. It includes a set of libraries (GUI, SQL, etc.), and an integrated development environment. It supports Windows and Unix-like OSs. Unity: Another cross-platform SDK which uses the Unity Engine. Uno Platform: Windows, macOS, iOS, Android, WebAssembly and Linux using C#. Unreal: A cross-platform SDK which uses the Unreal Engine. V-Play Engine: V-Play is a cross-platform development SDK based on the popular Qt framework. V-Play apps and games are created within Qt Creator. WaveMaker: A low-code development tool to create responsive web and hybrid mobile (Android & iOS) applications. WinDev: an Integrated Development Environment for Windows, Linux, .NET and Java, and web browsers. Optimized for business and industrial applications. wxWidgets: an open-source widget toolkit that is also an application framework. It runs on Unix-like systems with X11, Microsoft Windows and macOS. Xojo: a RAD IDE that uses an object-oriented programming language to create desktop, web and iOS apps. Xojo makes native, compiled desktop apps for macOS, Windows, Linux and Raspberry Pi. It creates compiled web apps that can be run as standalone servers or through CGI. And it recently added the ability to create native iOS apps. Challenges There are many challenges when developing cross-platform software.
Testing cross-platform applications may be considerably more complicated, since different platforms can exhibit slightly different behaviors or subtle bugs. This problem has led some developers to deride cross-platform development as "write once, debug everywhere", a take on Sun Microsystems' "write once, run anywhere" marketing slogan. Developers are often restricted to using the lowest common denominator subset of features which are available on all platforms. This may hinder the application's performance or prohibit developers from using the most advanced features of each platform. Different platforms often have different user interface conventions, which cross-platform applications do not always accommodate. For example, applications developed for macOS and GNOME are supposed to place the most important button on the right-hand side of a window or dialog, whereas Microsoft Windows and KDE have the opposite convention. Though many of these differences are subtle, a cross-platform application which does not conform to these conventions may feel clunky or alien to the user. When working quickly, such opposing conventions may even result in data loss, such as in a dialog box confirming whether to save or discard changes. Scripting languages and VM bytecode must be translated into native executable code each time they are used, imposing a performance penalty. This penalty can be alleviated using techniques like just-in-time compilation, but some computational overhead may be unavoidable. Different platforms require the use of native package formats such as RPM and MSI. Multi-platform installers such as InstallAnywhere address this need. Cross-platform execution environments may suffer cross-platform security flaws, creating a fertile environment for cross-platform malware. See also Fat binary Cross-platform play Hardware-agnostic Software portability List of video games that support cross-platform play List of widget toolkits Platform virtualization Java (software platform) Language binding Transcompiler Binary code compatibility Xamarin Comparison of user features of messaging platforms Mobile development frameworks, many of which are cross-platform. References Computing platforms Interoperability
38588890
https://en.wikipedia.org/wiki/Loudia%20Laarman
Loudia Laarman
Loudia Laarman (born October 4, 1991) is a Canadian sprinter of Caribbean origin who specializes in the 100 metres, 200 metres and 4×100 m relay. She participated in the 2007 World Youth Championships in Athletics, winning a bronze medal in the 4×100 m relay. A native of Haiti, Laarman attended Winston Churchill High School in Lethbridge, Alberta. While in Alberta she set the junior provincial records in the 100, the 60 and the 50 metres, with times of 11.64, 7.41 and 6.48 s respectively. She ranked 7th with a time of 11.81 in the 100 metres event at the 2010 IAAF World Junior Championships in the Moncton 2010 Stadium, just 0.01 s behind German athlete Tatjana Pinto. Laarman competed in the 100 metres event in the semifinals of the Pac-12 Conference Championships in Eugene, Oregon in May 2012 with a time of 11.62, placing 6th. She obtained All-America honors as a member of the USC Trojans 4×100 m relay team, which placed seventh at the NCAA Championships in June 2012 and June 2013. At the 2012 Trojan Invitational her team finished second with a time of 44.64. At the same meet she placed second in the 100 metres event with a time of 11.83 and 10th in the 200 metres event with 24.11 s. Later that year her relay team set a seasonal best of 44.18. References External links IAAF profile for Loudia Laarman USC Trojans biography athletics.ca profile all-athletics Results athletic.net Results milesplit.com Statistics 1991 births Living people Sportspeople from Alberta Canadian female sprinters USC Trojans women's track and field athletes Haitian emigrants to Canada Track and field athletes from Los Angeles University of Southern California alumni Black Canadian female track and field athletes
82930
https://en.wikipedia.org/wiki/FLOPS
FLOPS
In computing, floating point operations per second (FLOPS, flops or flop/s) is a measure of computer performance, useful in fields of scientific computation that require floating-point calculations. For such cases, it is a more accurate measure than measuring instructions per second.

Floating-point arithmetic

Floating-point arithmetic is needed for very large or very small real numbers, or for computations that require a large dynamic range. Floating-point representation is similar to scientific notation, except that everything is carried out in base two rather than base ten. The encoding scheme stores the sign, the exponent (in base two for Cray and VAX, base two or ten for IEEE floating-point formats, and base 16 for IBM Floating Point Architecture) and the significand (the digits after the radix point). While several similar formats are in use, the most common is ANSI/IEEE Std. 754-1985. This standard defines the format for 32-bit numbers called single precision, as well as 64-bit numbers called double precision and longer numbers called extended precision (used for intermediate results). Floating-point representations can support a much wider range of values than fixed-point, with the ability to represent both very small and very large numbers.

Dynamic range and precision

The exponentiation inherent in floating-point computation assures a much larger dynamic range – the largest and smallest numbers that can be represented – which is especially important when processing data sets where some of the data may have an extremely large range of numerical values or where the range may be unpredictable. As such, floating-point processors are ideally suited for computationally intensive applications.

Computational performance

FLOPS and MIPS are units of measure for the numerical computing performance of a computer. Floating-point operations are typically used in fields such as scientific computational research. The unit MIPS measures the integer performance of a computer. Examples of integer operations include data movement (A to B) or value testing (if A = B, then C). MIPS as a performance benchmark is adequate when a computer is used for database queries, word processing, spreadsheets, or to run multiple virtual operating systems. Frank H. McMahon, of the Lawrence Livermore National Laboratory, invented the terms FLOPS and MFLOPS (megaFLOPS) so that he could compare the supercomputers of the day by the number of floating-point calculations they performed per second. This was much better than using the prevalent MIPS to compare computers, as this statistic usually had little bearing on the arithmetic capability of the machine.

FLOPS on an HPC system can be calculated using this equation:

FLOPS = racks × (nodes per rack) × (sockets per node) × (cores per socket) × (cycles per second) × (FLOPs per cycle)

This can be simplified to the most common case, a computer that has exactly one CPU:

FLOPS = cores × (cycles per second) × (FLOPs per cycle)

A worked sketch of this calculation is given below, after the performance records.

FLOPS can be recorded in different measures of precision; for example, the TOP500 supercomputer list ranks computers by 64-bit (double-precision floating-point format) operations per second, abbreviated to FP64. Similar measures are available for 32-bit (FP32) and 16-bit (FP16) operations.

FLOPS per cycle per core for various processors

Performance records

Single computer records

In June 1997, Intel's ASCI Red was the world's first computer to achieve one teraFLOPS and beyond. Sandia director Bill Camp said that ASCI Red had the best reliability of any supercomputer ever built, and "was supercomputing's high-water mark in longevity, price, and performance".
In June 2006, a new computer was announced by the Japanese research institute RIKEN, the MDGRAPE-3. The computer's performance tops out at one petaFLOPS, almost two times faster than the Blue Gene/L, but MDGRAPE-3 is not a general-purpose computer, which is why it does not appear in the Top500.org list. It has special-purpose pipelines for simulating molecular dynamics.

In 2007, Intel Corporation unveiled the experimental multi-core POLARIS chip, which achieves 1 teraFLOPS at 3.13 GHz. The 80-core chip can raise this result to 2 teraFLOPS at 6.26 GHz, although the thermal dissipation at this frequency exceeds 190 watts.

In June 2007, Top500.org reported the fastest computer in the world to be the IBM Blue Gene/L supercomputer, measuring a peak of 596 teraFLOPS. The Cray XT4 hit second place with 101.7 teraFLOPS.

On June 26, 2007, IBM announced the second generation of its top supercomputer, dubbed Blue Gene/P and designed to continuously operate at speeds exceeding one petaFLOPS, faster than the Blue Gene/L. When configured to do so, it can reach speeds in excess of three petaFLOPS.

On October 25, 2007, NEC Corporation of Japan issued a press release announcing its SX series model SX-9, claiming it to be the world's fastest vector supercomputer. The SX-9 features the first CPU capable of a peak vector performance of 102.4 gigaFLOPS per single core, making it the world's first vector processor to exceed 100 gigaFLOPS per core.

On February 4, 2008, the NSF and the University of Texas at Austin opened full-scale research runs on an AMD/Sun supercomputer named Ranger, the most powerful supercomputing system in the world for open science research, which operates at a sustained speed of 0.5 petaFLOPS.

On May 25, 2008, an American supercomputer built by IBM, named 'Roadrunner', reached the computing milestone of one petaFLOPS. It headed the June 2008 and November 2008 TOP500 lists of the most powerful supercomputers (excluding grid computers). The computer is located at Los Alamos National Laboratory in New Mexico. The computer's name refers to the New Mexico state bird, the greater roadrunner (Geococcyx californianus).

In June 2008, AMD released the ATI Radeon HD 4800 series, which are reported to be the first GPUs to achieve one teraFLOPS. On August 12, 2008, AMD released the ATI Radeon HD 4870X2 graphics card with two Radeon R770 GPUs totaling 2.4 teraFLOPS.

In November 2008, an upgrade to the Cray Jaguar supercomputer at the Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL) raised the system's computing power to a peak 1.64 petaFLOPS, making Jaguar the world's first petaFLOPS system dedicated to open research. In early 2009, the supercomputer was named after a mythical creature, Kraken. Kraken was declared the world's fastest university-managed supercomputer and sixth fastest overall in the 2009 TOP500 list. In 2010, Kraken was upgraded, making it faster and more powerful.

In 2009, the Cray Jaguar performed at 1.75 petaFLOPS, beating the IBM Roadrunner for the number one spot on the TOP500 list.

In October 2010, China unveiled the Tianhe-1, a supercomputer that operates at a peak computing rate of 2.5 petaFLOPS.

As of 2010, the fastest PC processor reached 109 gigaFLOPS (Intel Core i7 980 XE) in double-precision calculations. GPUs are considerably more powerful. For example, Nvidia Tesla C2050 GPU computing processors perform around 515 gigaFLOPS in double-precision calculations, and the AMD FireStream 9270 peaks at 240 gigaFLOPS.
In November 2011, it was announced that Japan had achieved 10.51 petaFLOPS with its K computer. It has 88,128 SPARC64 VIIIfx processors in 864 racks, with a theoretical performance of 11.28 petaFLOPS. It is named after the Japanese word "kei", which stands for 10 quadrillion, corresponding to the target speed of 10 petaFLOPS.

On November 15, 2011, Intel demonstrated a single x86-based processor, code-named "Knights Corner", sustaining more than one teraFLOPS on a wide range of DGEMM operations. Intel emphasized during the demonstration that this was a sustained teraFLOPS (not "raw teraFLOPS" used by others to get higher but less meaningful numbers), and that it was the first general-purpose processor ever to cross a teraFLOPS.

On June 18, 2012, IBM's Sequoia supercomputer system, based at the U.S. Lawrence Livermore National Laboratory (LLNL), reached 16 petaFLOPS, setting the world record and claiming first place in the latest TOP500 list.

On November 12, 2012, the TOP500 list certified Titan as the world's fastest supercomputer per the LINPACK benchmark, at 17.59 petaFLOPS. It was developed by Cray Inc. at the Oak Ridge National Laboratory and combines AMD Opteron processors with "Kepler" NVIDIA Tesla graphics processing unit (GPU) technologies.

On June 10, 2013, China's Tianhe-2 was ranked the world's fastest with 33.86 petaFLOPS.

On June 20, 2016, China's Sunway TaihuLight was ranked the world's fastest with 93 petaFLOPS on the LINPACK benchmark (out of 125 peak petaFLOPS). The system, which is almost exclusively based on technology developed in China, is installed at the National Supercomputing Center in Wuxi, and represents more performance than the next five most powerful systems on the TOP500 list combined.

In June 2019, Summit, an IBM-built supercomputer now running at the Department of Energy's (DOE) Oak Ridge National Laboratory (ORNL), captured the number one spot with a performance of 148.6 petaFLOPS on High Performance Linpack (HPL), the benchmark used to rank the TOP500 list. Summit has 4,356 nodes, each equipped with two 22-core Power9 CPUs and six NVIDIA Tesla V100 GPUs.

In June 2020, Fugaku turned in a High Performance Linpack (HPL) result of 415.5 petaFLOPS, besting the now second-place Summit system by a factor of 2.8. Fugaku is powered by Fujitsu's 48-core A64FX SoC, making it the first number-one system on the list to be powered by ARM processors. In single or further reduced precision, used in machine learning and AI applications, Fugaku's peak performance is over 1,000 petaFLOPS (one exaFLOPS). The system is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan.

Distributed computing records

Distributed computing uses the Internet to link personal computers to achieve more FLOPS:

As of 2020, the Folding@home network has over 2.3 exaFLOPS of total computing power. It is the most powerful distributed computer network, being the first ever to break 1 exaFLOPS of total computing power. This level of performance is primarily enabled by the cumulative effort of a vast array of powerful GPU and CPU units.

The entire BOINC network averages about 31 petaFLOPS.

SETI@Home, employing the BOINC software platform, averages 896 teraFLOPS.

Einstein@Home, a project using the BOINC network, computes at 3 petaFLOPS.

MilkyWay@Home, using the BOINC infrastructure, computes at 847 teraFLOPS.

GIMPS, searching for Mersenne primes, sustains 1,354 teraFLOPS.
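The peak-FLOPS equation from the Computational performance section above lends itself to a short worked example. The following is a minimal sketch in Python; the rack, node, socket, core, clock and FLOPs-per-cycle figures are illustrative assumptions, not the specifications of any system mentioned in this article.

```python
# Minimal sketch of the theoretical peak-FLOPS formula described above.
# All hardware figures below are illustrative assumptions, not real specs.

def peak_flops(racks, nodes_per_rack, sockets_per_node,
               cores_per_socket, cycles_per_second, flops_per_cycle):
    """Theoretical peak FLOPS:
    racks x nodes/rack x sockets/node x cores/socket x cycles/s x FLOPs/cycle.
    """
    return (racks * nodes_per_rack * sockets_per_node *
            cores_per_socket * cycles_per_second * flops_per_cycle)

# Simplified single-CPU case: cores x cycles/second x FLOPs/cycle.
# Example: a hypothetical 8-core 3 GHz CPU performing 16 double-precision
# FLOPs per cycle per core (e.g. via wide SIMD units):
single_cpu = peak_flops(1, 1, 1, 8, 3.0e9, 16)
print(f"single CPU: {single_cpu / 1e9:.0f} gigaFLOPS")  # 384 gigaFLOPS

# A hypothetical cluster: 10 racks of 32 dual-socket nodes of the same CPU.
cluster = peak_flops(10, 32, 2, 8, 3.0e9, 16)
print(f"cluster:    {cluster / 1e12:.2f} teraFLOPS")    # 245.76 teraFLOPS
```

Note that sustained LINPACK results such as those in the records above typically fall short of this theoretical peak; the Sunway TaihuLight entry (93 petaFLOPS measured against 125 peak petaFLOPS) is an example of that gap.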
Cost of computing

Hardware costs

See also
Computer performance by orders of magnitude
Gordon Bell Prize
LINPACK benchmarks
Moore's law
Multiply–accumulate operation
Performance per watt (FLOPS per watt)
Exaflop computing
SPECfp
SPECint
SUPS
TOP500

References

Benchmarks (computing)
Floating point
Units of frequency
Computer performance