22544934
https://en.wikipedia.org/wiki/Ren%20Ng
Ren Ng
Yi-Ren Ng (born September 21, 1979) is a Malaysian American scientist who is an assistant professor in the Department of Electrical Engineering & Computer Sciences at the University of California, Berkeley. He was the founder, executive chairman and CEO of Lytro, a Mountain View, California-based startup company. Lytro was developing consumer light-field cameras based on Ng's graduate research at Stanford University. Lytro ceased operations in late March 2018. Biography Ng was born in Malaysia, and immigrated to Australia at the age of 9. He earned a B.S. degree in mathematical and computational science in 2001, an M.S. in computer science in 2002, and a Ph.D. in computer science in 2006, all from Stanford University. His doctoral dissertation, titled Digital Light Field Photography, received the 2006 ACM Doctoral Dissertation Award. Business career Ng interned at Microsoft from June 2000 to September 2000 and June 2003 to September 2003 while studying at Stanford. After graduation in 2006, Ng founded Lytro and was CEO for more than six years. On June 29, 2012, Ng announced that he would step aside as CEO in order to spend more time on the vision for the company and less on its day-to-day operations. Ng also would become executive chairman and remain at Lytro full-time. Charles Chi, then executive chairman, served as interim CEO until Ng chose former Ning chief Jason Rosenthal as Lytro's new CEO in March 2013 after a lengthy external search. Academic In 2013 Ng was awarded the Royal Photographic Society's Selwyn Award given to those under the age of 35 years who have conducted successful science-based research connected with imaging. In July 2015, Ng became an assistant professor in the Department of Electrical Engineering & Computer Sciences at College of Engineering of University of California, Berkeley. References External links Lytro - Home Ren Ng EECS Profile at UC Berkeley Malaysian emigrants to Australia Australian emigrants to the United States American chairpersons of corporations Living people Australian chief executives Stanford University School of Engineering alumni UC Berkeley College of Engineering faculty 1979 births American scientists
65425919
https://en.wikipedia.org/wiki/Nick%20Rakocevic
Nick Rakocevic
Nikola "Nick" Rakocevic (; born December 31, 1997) is a Serbian professional basketball player for the Zhejiang Golden Bulls of the Chinese Basketball Association (CBA). He played college basketball for the USC Trojans and attended St. Joseph High School in Westchester, IL. Born in the United States, he represents Serbia internationally. High school career Rakocevic was born in Chicago to Momo and Denise Rakocevic. His father emigrated from Belgrade, Serbia at age 25. Rakocevic played for St. Joseph High School in Westchester, Illinois under coach Gene Pingatore. As a junior, he was a starter on the Class 3A state championship team. In his senior season, Rakocevic averaged 19.8 points, 14.4 rebounds and four blocks per game, leading his team to the Class 3A Westinghouse Sectional title. He earned First Team All-State honors from the Chicago Tribune. On April 11, 2016, he committed to play college basketball for USC over offers from UNLV, Arizona State and Florida. College career Rakocevic averaged 5.2 points and 4.2 rebounds per game as a freshman at USC, serving as a part-time starter. On March 13, 2018, he recorded 24 points and a career-high 19 rebounds in a 103–98 double overtime win over UNC Asheville. He grabbed the most rebounds by a USC player in a game since David Bluthenthal in 2000. As a sophomore, Rakocevic averaged 8.1 points and 6.2 rebounds per game, shooting 62.7 percent, the second-highest field goal percentage in program history. In his junior season, he averaged 14.7 points, 9.3 rebounds and 1.4 blocks per game. Rakocevic was a two-time Pac-12 Player of the Week and was named All-Pac-12 Honorable Mention. On November 12, 2019, he tied his career-high of 27 points, grabbed 16 rebounds and reached 1,000 career points, in an 84–66 victory over South Dakota State. As a senior, Rakocevic averaged 10.5 points and 8.3 rebounds per game. Professional career On September 16, 2020, Rakocevic signed his first professional contract with the Zhejiang Golden Bulls of the Chinese Basketball Association (CBA). National team career Rakocevic played for the Serbia under-20 national team at the 2017 FIBA U20 European Championship in Greece. He averaged 6.3 points and 3.9 rebounds per game, helping his team finish in fifth place. Career statistics College |- | style="text-align:left;"| 2016–17 | style="text-align:left;"| USC | 36 || 8 || 14.9 || .563 || – || .671 || 3.7 || .4 || .3 || .6 || 5.2 |- | style="text-align:left;"| 2017–18 | style="text-align:left;"| USC | 36 || 22 || 21.1 || .627 || – || .548 || 6.2 || .5 || .7 || .5 || 8.1 |- | style="text-align:left;"| 2018–19 | style="text-align:left;"| USC | 33 || 30 || 30.0 || .548 || .000 || .679 || 9.3 || 1.3 || .8 || 1.4 || 14.7 |- | style="text-align:left;"| 2019–20 | style="text-align:left;"| USC | 31 || 29 || 27.3 || .458 || .429 || .634 || 8.3 || 1.5 || 1.2 || 1.0 || 10.5 |- class="sortbottom" | style="text-align:center;" colspan="2"| Career | 136 || 89 || 23.1 || .540 || .360 || .643 || 6.8 || .9 || .7 || .9 || 9.5 References External links USC Trojans bio 1997 births Living people American men's basketball players American expatriate basketball people in China American people of Serbian descent Power forwards (basketball) Basketball players from Chicago USC Trojans men's basketball players Serbian expatriate basketball people in China Serbian expatriate basketball people in the United States Serbian men's basketball players
39619438
https://en.wikipedia.org/wiki/AnimatLab
AnimatLab
AnimatLab is an open-source neuromechanical simulation tool that allows authors to easily build and test biomechanical models and the neural networks that control them to produce behaviors. Users can construct neural models at varying levels of detail, build 3D mechanical models from triangle meshes, and use muscles, motors, receptive fields, stretch sensors, and other transducers to interface the two systems. Experiments can be run in which various stimuli are applied and data is recorded, making it a useful tool for computational neuroscience. The software can also be used to model biomimetic robotic systems. Motivation Neuromechanical simulation enables investigators to explore the dynamical relationships between the brain, the body, and the world in ways that are difficult or impossible through experiment alone. This is done by producing biologically realistic models of the neural networks that control behavior, while also simulating the physics that controls the environment in which an animal is situated. Interactions with the simulated world can then be fed back to the virtual nervous system using models of sensory systems. This provides feedback similar to what the real animal would encounter, and makes it possible to close the sensory-motor feedback loop to study the dynamic relationship between nervous function and behavior. This relationship is crucial to understanding how nervous systems work. History The application was initially developed at Georgia State University under NSF grant #0641326. Version 1 of AnimatLab was released in 2010. Work has continued on the application and a new, improved second version was released in June 2013. Functionality AnimatLab allows users to develop models at varying levels of detail because of the range of model types available. Neurons may be simple firing-rate models, integrate-and-fire models, or Hodgkin–Huxley models. Plugins for other neuron models can be written and used. Hill-type muscles, motors, or servos can be used to actuate joints. Adapters between neurons and actuators are used to generate forces. Adapters between mechanical components (joints, body segments, muscles, etc.) provide feedback to the control system. Stimuli, such as voltage clamps, current clamps, and velocity clamps (for joints), can be added to design experiments. Data can be recorded from virtually every component of the system, and viewed in graphs or exported as a comma-separated values file, making analysis easy. In addition, the user interface is entirely graphical, making it easy for beginners to use. Neural modeling A variety of biological neuron models are available for use. The Hodgkin–Huxley model, both single- and multi-compartment integrate-and-fire models, and various abstracted firing-rate models are available. This is a valuable feature because the purpose of one's model and its complexity determine which features of neural behavior are important to simulate. Network construction is graphical, with neurons dragged and dropped into a network and synapses drawn between them. When a synapse is drawn, the user specifies what type to use. Both spiking and nonspiking chemical synapses, as well as electrical synapses, are available. Both short-term (through facilitation) and long-term (Hebbian) learning mechanisms are available, greatly increasing the capability of the nervous systems constructed. Rigid body modeling Body segments are modeled as rigid bodies drawn as triangle meshes with uniform mass density.
Meshes can be selected from a set of primitives (cube, ellipsoid, cone, etc.) or imported from third-party software such as Maya or Blender. Physics is simulated with the Vortex engine. Users can specify separate collision and graphical meshes for a rigid body, greatly reducing simulation time. In addition, material properties and the interaction between materials can be specified, allowing different coefficients of restitution, friction, etc. within the simulation. Muscle modeling A Hill-type muscle model modified according to Shadmehr and Wise can be used for actuation. Muscles are controlled by placing a voltage-tension adapter between a motor neuron and a muscle. Muscles also have stiffness and damping properties, as well as length-tension relationships that govern their behavior. Muscles are placed to act on muscle attachment bodies in the mechanical simulation, which then apply the muscle tension force to the other bodies in the simulation. Sensory modeling Adapters may be placed to convert rigid body measurements to neural activity, much like how voltage-tension adapters are used to activate muscles. These may be joint angles or velocities, rigid body forces or accelerations, or behavioral states (e.g. hunger). In addition to these scalar inputs, contact fields may be specified on rigid bodies, which then provide pressure feedback to the system. This functionality has been used for skin-like sensing and to detect leg loading in walking structures. Stimulus types Stimuli can be applied to mechanical and neural objects in simulation for experimentation. These include current and voltage clamps, as well as velocity clamps for joints between rigid bodies. Graph types Data can be output in the form of line graphs and two-dimensional surfaces. Line graphs are useful for most data types, including neural and synaptic output, as well as body and muscle dynamics. Surface plots are useful for outputting activation on contact fields. Both of these can be output as comma-separated values files, allowing the user to use other software such as MATLAB or Excel for quantitative analysis. Research performed with AnimatLab Many academic projects have used AnimatLab to build neuromechanical models and explore behavior. These include: Shaking of a wet cat paw Locust jump and flight control Crayfish walking Cockroach walking and turning References External links Robotics simulation software Scientific simulation software Science software
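The integrate-and-fire neuron type mentioned above is simple enough to illustrate outside the tool. The following is a minimal Python sketch of a leaky integrate-and-fire neuron driven by a current step; it is not AnimatLab code, and the membrane parameters and input current are arbitrary values chosen only for illustration.

import numpy as np

def simulate_lif(i_input, dt=1e-4, tau=0.02, r_m=1e7,
                 v_rest=-0.07, v_thresh=-0.05, v_reset=-0.07):
    """Leaky integrate-and-fire neuron driven by an input current array (amps)."""
    v = np.full(len(i_input), v_rest)
    spikes = []
    for t in range(1, len(i_input)):
        dv = (-(v[t-1] - v_rest) + r_m * i_input[t-1]) * (dt / tau)
        v[t] = v[t-1] + dv
        if v[t] >= v_thresh:          # threshold crossing: emit a spike and reset
            spikes.append(t * dt)
            v[t] = v_reset
    return v, spikes

# 0.5 s of input: a 3 nA current step starting at 0.1 s
dt = 1e-4
current = np.zeros(5000)
current[1000:] = 3e-9
trace, spike_times = simulate_lif(current, dt=dt)
print(f"{len(spike_times)} spikes, first at {spike_times[0]:.3f} s" if spike_times else "no spikes")

In AnimatLab the equivalent experiment would be set up graphically, with a current-clamp stimulus applied to the neuron and its membrane voltage sent to a line graph.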
366244
https://en.wikipedia.org/wiki/Trojan%20Room%20coffee%20pot
Trojan Room coffee pot
The Trojan Room coffee pot was a coffee machine located in the Computer Laboratory of the University of Cambridge, England. Created in 1991 by Quentin Stafford-Fraser and Paul Jardetzky, it was migrated from their laboratory network to the web in 1993 becoming the world's first webcam. To save people working in the building the disappointment of finding the coffee machine empty after making the trip to the room, a camera was set up providing a live picture of the coffee pot to all desktop computers on the office network. After the camera was connected to the Internet a few years later, the coffee pot gained international notoriety as a feature of the fledgling World Wide Web, until it was retired in 2001. Development The 128×128 px greyscale camera was connected to the laboratory's local network through a video capture card fitted on an Acorn Archimedes computer. Researcher Quentin Stafford-Fraser wrote the client software, dubbed XCoffee and employing the X Window System protocol, while his colleague Paul Jardetzky wrote the server program. In 1993, web browsers gained the ability to display images, and it soon became clear that this would be an easier way to make the picture available to users. The camera was connected to the Internet and the live picture became available via HTTP in November of the same year, by computer scientists Daniel Gordon and Martyn Johnson. It therefore became visible worldwide and grew into a popular landmark of the early web. Retirement and legacy Following the laboratory's move to its current premises, the camera was eventually switched off, at 09:54 UTC on 22 August 2001. Coverage of the shutdown included front-page mentions in The Times and The Washington Post, as well as articles in The Guardian and Wired. The last of the four or five coffee machines seen online, a Krups, was auctioned on eBay for £3,350 to the German news website Spiegel Online. The pot was later refurbished pro bono by Krups employees, and was switched on again in the magazine's editorial office. Since the summer of 2016, the coffee maker is on permanent loan to the Heinz Nixdorf MuseumsForum in Paderborn. Spoofs of the Trojan Room coffee machine ranged from the Hyper Text Coffee Pot Control Protocol, a 1998 April Fools' Day specification for a communication protocol, to the 2002 video game Hitman 2: Silent Assassin, in which the player can destroy a "coffee camera" in a kitchen as a distraction. The coffee pot was also mentioned in the BBC Radio 4 drama The Archers on 24 February 2005. In the first episode of the fourth season of AMC's Halt and Catch Fire, the coffee pot is shown to depict the first webcam showing up on the emerging World Wide Web. References Further reading External links Trojan Room Coffee Machine original website and evolution timeline Internet archive of the site New coffee pot webcam at the offices of Spiegel Online Heinz Nixdorf Museum Paderborn - CoffeeCam Live View YouTube Computerphile Interview with Quentin Stafford-Fraser Internet properties established in 1991 Internet properties disestablished in 2001 British websites Cooking appliances Defunct websites History of Cambridge Internet culture University of Cambridge Computer Laboratory 1991 establishments in England 2001 disestablishments in England
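The serving side of the 1993 web version is easy to picture with a modern sketch. The Python snippet below is a rough, hypothetical analogue, not the original XCoffee client or server code (which ran against an Acorn Archimedes frame grabber): it assumes a separate capture loop keeps refreshing latest_frame.jpg, and a tiny HTTP server hands the most recent frame to any browser that asks. The file name and port are placeholders.

from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

FRAME_PATH = Path("latest_frame.jpg")   # assumed to be refreshed by a separate capture loop

class FrameHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/coffee.jpg":
            self.send_error(404)
            return
        data = FRAME_PATH.read_bytes()
        self.send_response(200)
        self.send_header("Content-Type", "image/jpeg")
        self.send_header("Content-Length", str(len(data)))
        self.send_header("Cache-Control", "no-cache")   # clients should re-fetch for a fresh view
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), FrameHandler).serve_forever()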
938701
https://en.wikipedia.org/wiki/VisualBoyAdvance
VisualBoyAdvance
VisualBoyAdvance (commonly abbreviated as VBA) is a free emulator of the Game Boy, Game Boy Color, and Game Boy Advance handheld game consoles as well as of Super Game Boy and Super Game Boy 2. Besides the DirectX version for the Windows platform, there is also one that is based on the free platform independent graphics library SDL. This is available for a variety of operating systems including Linux, BSD, Mac OS X, and BeOS. VisualBoyAdvance has also been ported to AmigaOS 4, AROS, GameCube, Wii, webOS, and Zune HD. History The VisualBoyAdvance project was started by a developer under the online alias "Forgotten". When this person left the development of the emulator, the project was handed over to a team named "VBA Team", led by Forgotten's brother. Development on the original VisualBoyAdvance stopped in 2004 with version 1.8.0 beta 3, and a number of forked versions were made by various developers in the years since then, such as VisualBoyAdvance-M. VisualBoyAdvance-M VisualBoyAdvance-M, or simply VBA-M, is an improved fork from the inactive VisualBoyAdvance project, adding several features as well as maintaining an up-to-date codebase. After VisualBoyAdvance became inactive in 2004, several forks began to appear such as VBALink, which allowed users to emulate the linking of two Game Boy devices. Eventually, VBA-M was created, which merged several of the forks into one codebase. Thus, the M in VBA-M stands for Merge. There is also a RetroArch/Libretro port of VBA-M's GBA emulation core (without the GB, GBC and SGB cores) as well as a modified version called VBA-Next. Features VisualBoyAdvance sports the following features: Compatibility with Game Boy, Game Boy Color, and Game Boy Advance ROMs Import/export feature of native saved games from and to other emulators Full save state support Joystick support Super Game Boy and Super Game Boy 2 border and color palette support Game Boy Printer emulation Real-time IPS patching (used mostly to play fan translations) Hacking and debugging tools, including loggers, viewers and editor The SDL version also includes a Game Boy Advance debugger Auto-fire support Speed-up key Full screen mode support Screen capture support Full support for GameShark for Game Boy Advance and Code Breaker Advance cheat codes (Windows version only) Audio (WAV) and video (AVI) recording Also allows recording in a proprietary video format only supported by VisualBoyAdvance and its forked versions Graphic filters to enhance display: 2xSaI, Super 2xSaI, Super Eagle, AdvanceMAME, Pixelate, and Motion blur GUI skinning support In addition, VisualBoyAdvance-M adds the following: HQ3x/4x pixel filters Game Boy linking, over LAN and Internet In conjunction with the Dolphin GameCube emulator, VBA-M supports linking GameCube and Game Boy Advance titles. Critical security flaw The VBA emulator is vulnerable to arbitrary code execution through a feature that allows importation of cheat codes from files, which isn't protected against buffer overrun. By importing a malicious XPC file (usually containing a list of GameShark cheat codes), VBA and VBA-rr can execute arbitrary code contained within the file. Proof-of-concept XPC files have been written for VBA 1.8.0 and VBA-rr, but VBA-M is currently not known to be vulnerable. 
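The cheat-import flaw described above is an unchecked-copy bug in the emulator's C/C++ code. As an illustration of the general defensive pattern rather than of the real XPC format or VBA source, the Python sketch below parses a hypothetical length-prefixed cheat record and rejects any declared length that would overrun either the file or a fixed-size destination, the kind of validation whose absence leads to buffer overruns.

import struct

MAX_NAME = 64          # assumed fixed-size destination, analogous to a C char[64]

def read_cheat_records(path):
    records = []
    with open(path, "rb") as f:
        blob = f.read()
    offset = 0
    while offset + 4 <= len(blob):
        (name_len,) = struct.unpack_from("<I", blob, offset)
        offset += 4
        # Reject lengths that exceed either the remaining file or the destination size,
        # instead of copying blindly as an unchecked C strcpy/memcpy would.
        if name_len > MAX_NAME or offset + name_len > len(blob):
            raise ValueError(f"malformed record at offset {offset - 4}")
        records.append(blob[offset:offset + name_len].decode("ascii", errors="replace"))
        offset += name_len
    return records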
See also List of video game emulators References External links Amiga emulation software AmigaOS 4 software BeOS software Free emulation software Game Boy Advance emulators Game Boy emulators Linux emulation software MacOS emulation software Multi-emulators Portable software Windows emulation software
3514168
https://en.wikipedia.org/wiki/NepaLinux
NepaLinux
NepaLinux was a Debian and Morphix-based Linux distribution focused on desktop usage in Nepali language computing. It contains applications for desktop users, such as OpenOffice.org, Nepali GNOME and KDE desktops, Nepali input method editor. The development and distribution of NepaLinux was done by Madan Puraskar Pustakalaya. Version 1.0 was produced as part of the PAN Localization Project, with the support of the International Development Research Centre (IDRC) of Canada. NepaLinux is an effort of promoting free and open-source software in Nepal. In October 2007, NepaLinux was the joint recipient of the Association for Progressive Communications' annual APC FOSS prize. References Knoppix Language-specific Linux distributions Linux distributions
51911151
https://en.wikipedia.org/wiki/Software%20development%20security
Software development security
Security, as part of the software development process, is an ongoing process involving people and practices, and ensures application confidentiality, integrity, and availability. Secure software is the result of security aware software development processes where security is built in and thus software is developed with security in mind. Security is most effective if planned and managed throughout every stage of software development life cycle (SDLC), especially in critical applications or those that process sensitive information. The solution to software development security is more than just the technology. Software development challenges As technology advances, application environments become more complex and application development security becomes more challenging. Applications, systems, and networks are constantly under various security attacks such as malicious code or denial of service. Some of the challenges from the application development security point of view include Viruses, Trojan horses, Logic bombs, Worms, Agents, and Applets. Applications can contain security vulnerabilities that may be introduced by software engineers either intentionally or carelessly. Software, environmental, and hardware controls are required although they cannot prevent problems created from poor programming practice. Using limit and sequence checks to validate users’ input will improve the quality of data. Even though programmers may follow best practices, an application can still fail due to unpredictable conditions and therefore should handle unexpected failures successfully by first logging all the information it can capture in preparation for auditing. As security increases, so does the relative cost and administrative overhead. Applications are typically developed using high-level programming languages which in themselves can have security implications. The core activities essential to the software development process to produce secure applications and systems include: conceptual definition, functional requirements, control specification, design review, code review and walk-through, system test review, and maintenance and change management. Building secure software is not only the responsibility of a software engineer but also the responsibility of the stakeholders which include: management, project managers, business analysts, quality assurance managers, technical architects, security specialists, application owners, and developers. Basic principles There are a number of basic guiding principles to software security. Stakeholders’ knowledge of these and how they may be implemented in software is vital to software security. These include: Protection from disclosure Protection from alteration Protection from destruction Who is making the request What rights and privileges does the requester have Ability to build historical evidence Management of configuration, sessions and errors/exceptions Basic practices The following lists some of the recommended web security practices that are more specific for software developers. 
Sanitize inputs at the client side and server side Encode request/response Use HTTPS for domain entries Use only current encryption and hashing algorithms Do not allow for directory listing Do not store sensitive data inside cookies Check the randomness of the session Set secure and HttpOnly flags in cookies Use TLS not SSL Set strong password policy Do not store sensitive information in a form’s hidden fields Verify file upload functionality Set secure response headers Make sure third party libraries are secured Hide web server information Security testing Common attributes of security testing include authentication, authorization, confidentiality, availability, integrity, non-repudiation, and resilience. Security testing is essential to ensure that the system prevents unauthorized users to access its resources and data. Some application data is sent over the internet which travels through a series of servers and network devices. This gives ample opportunities to unscrupulous hackers. Summary All secure systems implement security controls within the software, hardware, systems, and networks - each component or process has a layer of isolation to protect an organization's most valuable resource which is its data. There are various security controls that can be incorporated into an application's development process to ensure security and prevent unauthorized access. References Stewart, James (2012). CISSP Certified Information Systems Security Professional Study Guide Sixth Edition. Canada: John Wiley & Sons, Inc. pp. 275–319. . Report from Dagstuhl Seminar 12401Web Application Security Edited by Lieven Desmet, Martin Johns, Benjamin Livshits, and Andrei Sabelfeld, http://research.microsoft.com/en-us/um/people/livshits/papers%5Ctr%5Cdagrep_s12401.pdf Web Application Security Consortium, The 80/20 Rule for Web Application Security by Jeremiah Grossman 2005, http://www.webappsec.org/projects/articles/013105.shtml Wikipedia Web Application Security page, Web application security Web Security Wiki page, https://www.w3.org/Security/wiki/Main_Page Wikipedia Web Security Exploits page, :Category:Web security exploits Open Web Application Security Project (OWASP), https://www.owasp.org/index.php/Main_Page Wikipedia Network Security page, Network security Open Web Application Security Project (OWASP) web site, https://www.owasp.org/images/8/83/Securing_Enterprise_Web_Applications_at_the_Source.pdf Software development Software quality
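A few of the practices in the list above (secure and HttpOnly cookie flags, secure response headers) can be shown concretely. The sketch below uses Flask purely as an example framework; the header values shown are a common baseline rather than a complete policy, and the route and cookie names are placeholders.

from flask import Flask, make_response

app = Flask(__name__)

@app.after_request
def set_security_headers(resp):
    # Secure response headers applied to every response.
    resp.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    resp.headers["X-Content-Type-Options"] = "nosniff"
    resp.headers["X-Frame-Options"] = "DENY"
    resp.headers["Content-Security-Policy"] = "default-src 'self'"
    return resp

@app.route("/login", methods=["POST"])
def login():
    resp = make_response("logged in")
    # Secure: only sent over HTTPS; HttpOnly: not readable from JavaScript.
    resp.set_cookie("session", "opaque-random-token", secure=True,
                    httponly=True, samesite="Strict")
    return resp

The same flags and headers can be set in any framework or at a reverse proxy; the point is that the listed practices are enforced consistently, not where they are implemented.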
58344898
https://en.wikipedia.org/wiki/Firewall%3A%20Zero%20Hour
Firewall: Zero Hour
Firewall: Zero Hour is a virtual reality first-person shooter game developed by First Contact Entertainment and published by Sony Interactive Entertainment. It was released for the PlayStation 4 via the PlayStation VR on August 28, 2018. Gameplay Firewall: Zero Hour is an online, or offline via training, virtual reality tactical action shooter where players called contractors take on contracts issued from agents who are known as mother and father. Each contractor has a different passive skill for use on the field which can help support the player fend off the enemy team from completing their objective. There are 12 contractors players can choose from all with a set primary skill and a secondary skill you can edit. Weapon loadouts are also customizable where players can pick a primary weapon, secondary weapon, one lethal equipment and one nonlethal equipment. Taking on contracts players will receive experience for certain actions they complete, as well as earning in-game currency known as "Crypto" where both can be found scattered around the map or completing the contract. Training Players can practice their tactics against AI enemies that scale with rank either solo, or in a 4-player squad which either can be private or public matchmaking. Contracts In this mode players are against other players formed into two teams of four in which the attacking team will have two objectives to complete in order to fulfill their contract. Attacking team "Mother" would have to first break a firewall in a location hidden from their wristband until a team member comes in contact with one then secondly hack a laptop in order to fulfill their contract earning experience points and crypto. Defending team "Father" can either move to the firewalls and defend or defend their main objective laptop. Development The game was announced during PSX 2017 at the Anaheim Convention Center. Reception Initial reviews for Firewall: Zero Hour were positive and the game was praised as being a major step forward for virtual reality gaming. Accolades See also List of PlayStation VR games References 2018 video games First-person shooters PlayStation 4 games PlayStation 4-only games PlayStation VR games Video games developed in the United States
36231990
https://en.wikipedia.org/wiki/Silicon%20Studio
Silicon Studio
Silicon Studio is a Japanese computer graphics technology company and video game developer based in Tokyo. As a technology company, Silicon Studio has produced several products in the 3D computer graphics field, including middleware software, such as a post-processing visual effects library called YEBIS, as well as general real-time graphics engines and game development engines, such as OROCHI and Mizuchi, a physically based rendering engine. As a video game developer, Silicon Studio has worked on different titles for several gaming platforms, most notably, the action-adventure game 3D Dot Game Heroes on the PlayStation 3, the two role-playing video games Bravely Default and Bravely Second: End Layer on the Nintendo 3DS, and Fantasica on the iOS and Android mobile platforms. History Silicon Studio was established in 2000. It was founded by Teruyasu Sekimoto, who was formerly the senior vice president of Silicon Graphics (SGI). Specializing from the start in rendering technology, research and development methods, post-processing visual effects, game content development, and online game solutions, Silicon Studio created four main studios to achieve the highest productivity in these areas. The research team at Silicon Studio developed several techniques related to fields in visual effects shown at the Computer Entertainment Developers Conference, such as post effect processing and global illumination. While traditionally a provider of middleware solutions for Japanese game developers, Silicon Studio has grown as an international company with a greater focus on the visibility of their products abroad. Silicon Studio has partnerships with a number of companies, including French company Allegorithmic, Canadian company Audiokinetic, British company Stainless Games, Italian companies such as Kunos Simulazioni and Milestone, American companies such as Microsoft and Pixar, and Japanese companies such as Bandai Namco, DeNA, Dimps, FromSoftware, Idea Factory, Koei Tecmo, Marvelous, Sega, and Sony Computer Entertainment. Silicon Studio has also partnered with the following companies: Vivante, OTOY, Square Enix, and Matchlock. In February 2015, Silicon Studio was listed on the Tokyo Stock Exchange Mothers market. Video games Games developed by Silicon Studio: Middleware Bishamon – Bishamon is a particle effect authoring tool and runtime library that works for many gaming platforms. It is developed by a partner company and is integrated with the Orochi3 game development engine. Motion Portrait – Motion Portrait is a technology tool that can automatically animate a portrait. It supports both regular camera photos or non-realistic character drawings. YEBIS Development for YEBIS originally began some time around 2004. Notable video games that utilize YEBIS include: Software that support YEBIS include: Substance Designer 4.3 Substance Painter Luminous Studio Modo OTOY real-time path tracing engine YEBIS 2 YEBIS 2 is a post-processing middleware solution that allows developers to create high-quality lens-simulation optical effects. In June 2013, Silicon Studio announced that their next post-processing middleware solution, YEBIS 2, would be available for game developers on the PlayStation 4 and Xbox One development network. At the E3 Expo 2013, Square Enix’s tech demo Agni’s Philosophy was shown using YEBIS 2 post-processing effects. In August 2013, the YEBIS 2 tech demo "Rigid Gems" was featured in Google’s official unveiling of the Nexus 7 mobile tablet. 
YEBIS has also been used for the Xbox One launch title, Fighter Within. In May 2014, Silicon Studio announced that their YEBIS 2 middleware was being utilized in the MotoGP 14 video racing game, developed by Milestone for PlayStation 4. YEBIS 2 is also utilized by Square Enix's Luminous Studio engine, and the action role-playing game Final Fantasy XV which runs on the Luminous Studio engine. In 2014, Allegorithmic announced that it had integrated YEBIS 2 with software such as Substance Designer 4.3 and Substance Painter, which are supported by The Foundry's Modo software. OTOY has also been using YEBIS for their real-time path tracing engine on PC. In 2015, Geomerics announced that it has integrated YEBIS 3 with the Forge lighting tool for the Enlighten 3 software. Engines OROCHI3 – Orochi3 is an all-in-one game development engine. It supports PlayStation 4, Xbox One, PlayStation 3, PlayStation Vita, Xbox 360 and PC. It was used by Bandai Namco Entertainment's fighting game Rise of Incarnates. An earlier version of Orochi was also used by Square Enix's third-person shooter arcade game Gunslinger Stratos in 2012. Mizuchi A new real-time graphics engine that debuted in 2014, compatible with the PC and PlayStation 4 platforms. It is called Mizuchi, with the full title, Mizuchi: The Cutting-Edge Real-Time Rendering Engine. It is intended to be used for various different applications, including video game development, films, architectural and automobile visualization, and academic research. In September 2014, a tech demo running on the engine, called "Museum", was revealed. It received a positive reception for the high visual quality of its real-time graphics. In December 2014, Silicon Studio announced the Mizuchi engine will be compatible with the PC at 60 frames per second and the PlayStation 4 at 30 frames per second. Stride Stride, formerly known as Xenko and before that as Paradox, is a game development framework and C# game engine with an asset pipeline and a cross-platform runtime supporting iOS, Android, Windows UWP, Linux, and PlayStation 4. It was made free and open-source software in October 2014. Xenko beta version 1.8x was then released finally out of beta in February 2017. In April 2020, engine was renamed to Stride. References External links Japanese companies established in 2000 3D computer graphics Free and open-source software Software companies based in Tokyo Software companies established in 2000 Video game companies established in 2000 Video game companies of Japan Video game development companies
5164145
https://en.wikipedia.org/wiki/Australian%20Computer%20Society
Australian Computer Society
The Australian Computer Society (ACS) is an association for information and communications technology professionals with over 48,000 members Australia-wide. According to its Constitution, its objectives are "to advance professional excellence in information technology" and "to promote the development of Australian information and communications technology resources". The ACS was formed on 1 January 1966 from five state based societies. It was formally incorporated in the Australian Capital Territory on 3 October 1967. Since 1983 there have been chapters in every state and territory. The ACS is a member of the Australian Council of Professions ("Professions Australia"), the peak body for professional associations in Australia. Internationally, ACS is a member of the International Professional Practice Partnership (IP3), South East Asia Regional Computer Confederation, International Federation for Information Processing and The Seoul Accord. The ACS is also a member organisation of the Federation of Enterprise Architecture Professional Organizations (FEAPO), a worldwide association of professional organisations which have come together to provide a forum to standardise, professionalise, and otherwise advance the discipline of Enterprise Architecture. Activities The ACS operates various chapters, annual conferences, Special Interest Groups, and a professional development program. Members are required to comply with a Code of Ethics and a Code of Professional Conduct. Extent of representation The ACS describes itself as "the professional association for Australia's Information and Communication Technology (ICT) sector" and "Australia's primary representative body for the ICT workforce", but industry analysts have questioned this based on the small percentage of IT professionals who are ACS members. The issue has been discussed in the press since at least 2004, and in 2013 the Sydney Morning Herald wrote that "the ACS aggressively seeks to control the important software engineering profession in Australia, but ... less than 5 per cent of the professional IT workforce belongs to the ACS." The ACS Foundation came up with a slightly higher figure: "Depending on the data used to calculate the number of ICT professionals in Australia, however, [ACS] membership represents approximately 6.5 per cent of the total." Presidents The Australian Computer Society elects its National President every two years, who serves as the leader of the Society. Some of the most recent presidents include: Dr. Nick Tate, 2022 - 2023 Ian Opperman, 2020 - 2021 Yohan Ramasundarah, 2018 – 2019 Anthony Wong, 2016 – 2017 Brenda Aynsley, 2014 – 2016 Nick Tate, 2012 – 2014 Kumar Parakala, 2008 – 2010 Philip Argy, 2006 – 2008 Edward Mandla, 2004 – 2006 Young IT The Young IT Professionals Board of the Australian Computer Society provides a voice for young IT professionals and students, as well as a range of services and benefits for members. Currently Young IT organises and runs a bi-annual YIT International Conference and other events such as local career days, soft skills and technical seminars, networking opportunities and social events (e.g. Young IT in the Pub) in each of the Australian States. The most recent Young IT Conference was held in Melbourne in 2014. Publications Information Age is the official publication of the ACS. In February 2015 Information Age became an online-only publication. 
Peer-reviewed research publications of the ACS include: Journal of Research and Practice in Information Technology Conferences in Research and Practice in Information Technology Australasian Journal of Information Systems The digital library contains free journal articles and conference papers. Related organisations Association for Computing Machinery ACS Foundation Australian Information Security Association British Computer Society Institution of Analysts and Programmers International Federation for Information Processing New Zealand Computer Society Computer Society of India Computer Society of Southern Africa Canadian Information Processing Society SEARCC Seoul Accord Other Australian computer associations AUUG – Now deregistered Linux Australia LUGs in Australia SAGE-AU Institute of Analytics Professionals of Australia (IAPA), incorporating business data analytics, business intelligence, data mining and related industries Australian Software Innovation Forum, encourages collaboration and co-operation in Java EE and associated technologies Special Interest Groups Special Interest Groups (SIGs) of the ACS are connected to each state branch with some SIGs of the same or similar name occurring in a number of states, depending on local interest, and include: Architects, Software Quality Assurance, Women in Technology, Business Requirements Analysis, Enterprise Capacity Management, Enterprise Solution Development, Free Open Source Software, Information Security, IT Management, Project Management, Service Oriented Computing, Web Services, Consultants and Contractors, IT Security, PC Recycling, Curry SIG, Information Technology in Education, Robotics, E-Commerce, IT Governance, Software Engineering and Cloud Computing. A recent addition is the Green ICT Group on computers and telecommunications for environmental sustainability. In 2007 the Telecommunications Society of Australia was absorbed into the Australian Computer Society as the Telecommunications Special Interest Group Education and Certification The ACS runs the online Computer Professional Education Program (CPEP) for postgraduate education in subjects including: Green ICT Strategies; New Technology Alignment; Business, Strategy & IT; Adaptive Business Intelligence; Project Management; Managing Technology and Operations. CPEP uses the Australian developed Moodle course management system and is delivered via the web. The Diploma of Information Technology (DIT) is equivalent to one academic year of a Bachelor of Information Technology at several universities. It has eight compulsory subjects: systems analysis, programming, computer organisation, data management, OO systems development, computer communications, professional practice and systems principles. The ACS also certifies IT professionals at two levels, the Certified Professional and the Certified Technologist. Each certification level has a minimum level of experience and also required ongoing CPD (Certified Professional Development) hours of learning each year. In 2017 the ACS launched a cybersecurity specialisation within the certification framework. Digital Disruptors Awards ACS recognizes outstanding technical talent in the Australian industry with seven "ACS Digital Disruptors awards". 
They are: Individual Awards ICT Professional of the Year Emerging ICT Professional of the Year (Age under 30) CXO Disruptor of the Year Team/Project Awards Service Transformation for the Digital Consumer Skills Transformation of Work Teams Best New Tech Platform ICT Research Project of the Year See also Skills Framework for the Information Age References Citations Sources "The Australian Computer Society". Retrieved 16 May 2006. "ACS Historical Notes". Retrieved 16 May 2006. ACS Code of Professional Conduct and Professional Practice External links of the ACS ACS Foundation Information Age Professional associations based in Australia Information technology organizations based in Oceania Organizations established in 1966 1966 establishments in Australia
4371321
https://en.wikipedia.org/wiki/Vine%20Linux
Vine Linux
Vine Linux is a Japanese Linux distribution sponsored by VineCaves. Since version 3.0 it has been a fork of Red Hat Linux 7.2. Work on Vine Linux started in 1998. All versions except Vine Seed have been announced as discontinued from May 4, 2021. Release history References External links RPM-based Linux distributions Japanese-language Linux distributions X86-64 Linux distributions PowerPC operating systems Linux distributions without systemd Linux distributions
29832926
https://en.wikipedia.org/wiki/Basware
Basware
Basware Corporation (founded 1985) is a Finnish software company selling enterprise software for financial processes, purchase-to-pay and financial management. The company has operations across over 50 countries on six continents. Basware’s markets include the Nordic countries, Europe, Russia, North America and Asia Pacific. The company has its own subsidiaries in Scandinavia, Western Europe and the United States. Additionally, Basware has over 70 value-added resellers in Europe. History 1985–1999 Basware was founded in 1985. It was then a Finnish subsidiary of an American corporation and was called Baltic Accounting Systems. The Finnish management team (Hannu Vaajoensuu, Ilkka Sihvo, Kirsi Eräkangas, Antti Pöllänen and Sakari Perttunen) bought the company in 1990. In 1992 Basware introduced its first financial management software. In 1997 the company introduced its first invoice processing software, which contained Scan & Capture Workflow for Invoices (e-Flow). The company won its first international customer in 1999, in Sweden. 2000– In 2000 the company was listed on the Helsinki Stock Exchange. It also introduced its first electronic invoicing, archiving and procurement products and intelligent OCR, and gained its first customers in Norway, Denmark and the Netherlands. Between 2001 and 2002 the company expanded into Central Europe, the UK, the US and Australia. In 2005 Basware acquired Norway-based Iocore and Finnish Trivet Software, the following year it acquired Finnish Analyste, and in 2007 UK-based Digital Vision Technologies Ltd. In 2008 Basware acquired Norway-based Contempus AS, and the next year Itella's Norwegian invoice automation solution business and TAG Services Pty Ltd in Australia. In 2010 Basware acquired TNT Post's Connectivity operations. Two years later it acquired the German e-invoice network First Businesspost (1stbp) GmbH and the network and e-invoicing business of the leading Benelux operator Certipost. In 2013 there were over 60 million transactions in the Basware Commerce Network. Basware acquired Certipost, the leading e-invoice operator in the Benelux. In 2015 Basware acquired Procserve, a leading e-procurement operator in the UK, and the following year it acquired Verian, a leading e-procurement operator in the US. In 2020 Basware introduced InvoiceReady, a business unit that provides small and medium-sized businesses with its procure-to-pay technology. Products Basware’s products are packaged composite applications (PCA) intended to interact with and complement the functionality of existing systems, such as enterprise resource planning (ERP) systems, used across the organization to support daily processes. The company's software automates procurement, invoice receiving and invoice processing, and can also be delivered as software as a service (SaaS). Basware markets these technologies as Cloud Invoicing, claiming that they enable a barrier-free exchange of structured documents and will reshape business-to-business interoperability. References Software companies of Finland Procurement Financial software companies Companies listed on Nasdaq Helsinki Companies based in Espoo Financial services companies established in 1985 1985 establishments in Finland
36708312
https://en.wikipedia.org/wiki/Syrian%20Electronic%20Army
Syrian Electronic Army
The Syrian Electronic Army (SEA) is a group of computer hackers which first surfaced online in 2011 to support the government of Syrian President Bashar al-Assad. Using spamming, website defacement, malware, phishing, and denial-of-service attacks, it has targeted terrorist organizations, political opposition groups, western news outlets, human rights groups and websites that are seemingly neutral to the Syrian conflict. It has also hacked government websites in the Middle East and Europe, as well as US defense contractors. With the SEA, Syria became "the first Arab country to have a public Internet Army hosted on its national networks to openly launch cyber attacks on its enemies". The precise nature of SEA's relationship with the Syrian government has changed over time and is unclear. Origins and historical context In the 1990s Syrian President Bashar al-Assad headed the Syrian Computer Society, which is connected to the SEA, according to research by the University of Toronto and the University of Cambridge, UK. There is evidence that a Syrian Malware Team has been active since at least January 1, 2011. In February 2011, after years of Internet censorship, Syrian censors lifted a ban on Facebook and YouTube. In April 2011, only days after anti-regime protests escalated in Syria, the Syrian Electronic Army emerged on Facebook. On May 5, 2011 the Syrian Computer Society registered SEA’s website (syrian-es.com). Because Syria's domain registration authority registered the hacker site, some security experts have written that the group was supervised by the Syrian state. SEA claimed on its webpage not to be an official entity, but rather "a group of enthusiastic Syrian youths who could not stay passive towards the massive distortion of facts about the recent uprising in Syria". By May 27, 2011, however, SEA had removed the text that denied it was an official entity. One commentator has noted that "[SEA] volunteers might include Syrian diaspora; some of their hacks have used colloquial English and Reddit memes". According to a 2014 report by security company Intelcrawler, SEA activity has shown links with "officials in Syria, Iran, Lebanon and Hezbollah." A February 2015 article by The New York Times stated that "American intelligence officials" suspect the SEA is "actually Iranian". However, no data has shown a link between Iran's and Syria's cyber attack patterns, according to an analysis of "open-source intelligence" by cybersecurity firm Recorded Future.
Global cyber espionage: "technology and media companies, allied military procurement officers, US defense contractors, and foreign attaches and embassies". The SEA's tone and style vary from the serious and openly political to ironic statements intended as critical or pointed humor: SEA had "Exclusive: Terror is striking the #USA and #Obama is Shamelessly in Bed with Al-Qaeda" tweeted from the Twitter account of 60 Minutes, and in July 2012 posted "Do you think Saudi and Qatar should keep funding armed gangs in Syria in order to topple the government? #Syria," from Al Jazeera's Twitter account before the message was removed. In another attack, members of SEA used the BBC Weather Channel Twitter account to post the headline, "Saudi weather station down due to head on-collision with camel." After Washington Post reporter Max Fisher called their jokes unfunny, one hacker associated with the group told a Vice interview 'haters gonna hate.'" Operating system On 31 October 2014, the SEA released a Linux distribution named SEANux. Timeline of notable attacks 2011 July 2011: University of California Los Angeles website defaced by SEA hacker "The Pro". September 2011: Harvard University website defaced in what was called the work of a "sophisticated group or individual". The Harvard homepage was replaced with an image of Syrian president Bashar al-Assad with the message "Syrian Electronic Army Were Here". 2012 April 2012: The official blog of social media website LinkedIn was redirected to a site supporting Bashar al-Assad. August 2012: The Twitter account of the Reuters news agency sent 22 tweets with false information on the conflict in Syria. The Reuters news website was compromised, and posted a false report about the conflict to a journalist's blog. 2013 20 April 2013: The Team Gamerfood homepage was defaced. 23 April 2013: The Associated Press Twitter account falsely claimed the White House had been bombed and President Barack Obama injured. This led to a US$136.5 billion decline in value of the S&P 500 the same day. May 2013: The Twitter account of The Onion was compromised by phishing Google Apps accounts of The Onions employees. The platform was also used by the hackers to spread pro-Syrian tweets. 24 May 2013: The ITV News London Twitter account was hacked. On 26 May 2013: the Android applications of British broadcaster Sky News were hacked on Google Play Store. 17 July 2013: Truecaller servers were hacked into by the Syrian Electronic Army. The group claimed on its Twitter handle to have recovered 459 GiBs of database, primarily due to an older version of WordPress installed on the servers. The hackers released Truecaller's alleged database host ID, username, and password via another tweet. On 18 July 2013, TrueCaller confirmed on its blog that only their website was hacked, but claimed that the attack did not disclose any passwords or credit card information. 23 July 2013: Viber servers were hacked, the support website replaced with a message and a supposed screenshot of data that was obtained during the intrusion. 15 August 2013: Advertising service Outbrain suffered a spearphishing attack and SEA placed redirects into the websites of The Washington Post, Time, and CNN. 27 August 2013: NYTimes.com had its DNS redirected to a page that displayed the message "Hacked by SEA" and Twitter's domain registrar was changed. 
28 August 2013: Twitter's DNS registration showed the SEA as its Admin and Tech contacts, and some users reported that the site's Cascading Style Sheets (CSS) had been compromised. 29–30 August 2013: The New York Times, The Huffington Post, and Twitter were knocked down by the SEA. A person claiming to speak for the group stepped forward to tie these attacks to the increasing likelihood of U.S military action in response to al-Assad using chemical weapons. A self-described operative of the SEA told ABC News in an e-mail exchange: "When we hacked media we do not destroy the site but only publish on it if possible, or publish an article [that] contains the truth of what is happening in Syria. ... So if the USA launch attack on Syria we may use methods of causing harm, both for the U.S. economy or other." 2–3 September 2013: Pro-Syria hackers broke into the Internet recruiting site for the US Marine Corps, posting a message that urged US soldiers to refuse orders if Washington decides to launch a strike against the Syrian government. The site, www.marines.com, was paralyzed for several hours and redirected to a seven-sentence message "delivered by SEA". 30 September 2013: The Global Post's official Twitter account and website were hacked. SEA posted through their Twitter account, "Think twice before you publish untrusted informations [sic] about Syrian Electronic Army" and "This time we hacked your website and your Twitter account, the next time you will start searching for new job" 28 October 2013: By gaining access to the Gmail account of an Organizing for Action staffer, the SEA altered shortened URLs on President Obama's Facebook and Twitter accounts to point to a 24-minute pro-government video on YouTube. 9 November 2013: SEA hacked the website of VICE, a no-affiliate news/documentary/blog website, which has filmed numerous times in Syria with the side of the Rebel forces. Logging into vice.com redirected to what appeared to be the SEA homepage. 12 November 2013: SEA hacked the Facebook page of Matthew VanDyke, a Libyan Civil War veteran and pro-rebel news reporter. 2014 1 January 2014: SEA hacked Skype's Facebook, Twitter and blog, posting an SEA related picture and telling users not to use Microsoft's e-mail service Outlook.com —formerly known as Hotmail—claiming that Microsoft sells user information to the government. 11 January 2014: SEA hacked the Xbox Support Twitter pages and directed tweets to the group's website. 22 January 2014: SEA hacked the official Microsoft Office Blog, posting several images and tweeted about the attack. 23 January 2014: CNN's HURACAN CAMPEÓN 2014 official Twitter account showed two messages, including a photo of the Syrian Flag composed of binary code. CNN removed the Tweets within 10 minutes. 3 February 2014: SEA hacked the websites of eBay and PayPal UK. One source reported the hackers said it was just for show and that they took no data. 6 February 2014: SEA hacked the DNS of Facebook. Sources said the registrant contact details were restored and Facebook confirmed that no traffic to the website was hijacked, and that no users of the social network were affected. 14 February 2014: SEA hacked the Forbes website and their Twitter accounts. 26 April 2014: SEA hacked the information security-related RSA Conference website. 18 June 2014: SEA hacked the websites of British newspapers The Sun (United Kingdom) and The Sunday Times. 
22 June 2014: The Reuters website was hacked a second time and showed a SEA message condemning Reuters for "publishing false articles about Syria". Hackers compromised the website, corrupting ads served by Taboola. 27 November 2014: SEA hacked hundreds of sites through hijacking Gigya's comment system of prominent websites, displaying a message "You've been hacked by the Syrian Electronic Army(SEA)." Affected websites included the Aberdeen Evening Express, Logitech, Forbes, The Independent UK Magazine, London Evening Standard, The Telegraph, NBC, the National Hockey League, Finishline.com, PCH.com, Time Out New York and t3.com (a tech website), stv.com, Walmart Canada, PacSun, Daily Mail websites, bikeradar.com (cycling website), SparkNotes, millionshort.com, Milenio.com, Mediotiempo.com, Todobebe.com and myrecipes.com, Biz Day SA, BDlive South Africa, muscleandfitness.com, and CBC News. 2015 21 January 2015: French newspaper Le Monde wrote that SEA hackers "managed to infiltrate our publishing tool before launching a denial of service". 2018 17 May 2018: Two suspects were indicted by the United States for "conspiracy" for hacking several US websites. 2021 October 2021: Facebook discovers the presence of several fake accounts run by the SEA and its affiliated organizations. The accounts had reportedly been used to target Syrian opposition figures and human rights activists, as well as members of the YPG and White Helmets. See also Advanced persistent threat Hacktivism Internet censorship in Syria PLA Unit 61398 Tailored Access Operations References External links old account Youtube Channel Pinterest profile of the Syrian Electronic Army VK profile of the Syrian Electronic Army syrianelectronicarmy.com, first SEA website which was later redirected to its .sy replacement sea.sy, SEA's newer website, which SEA started in late May 2013; it has its access revoked by the Syrian Computer Society (site displays blank loading page on browser, and widget returns "ERROR 403: Forbidden" as of August 2013) The Emergence of Open and Organized Pro-Government Cyber Attacks in the Middle East: The Case of the Syrian Electronic Army, Helmi Noman, May 30, 2011, published by Information Warfare Monitor, a public-private partnership between University of Ottawa and Secdev Group, including screenshots of SEA activities. google cache of an SEA website mentioned in Information Warfare Monitor report citing [email protected] as a contact address and links to a Facebook page called SEA.Vic0r.2 at Vict0r Battalion - Syrian Electronic Army The page is no longer available as of September 2013. Understanding the Syrian Electronic Army (SEA), HP-Security Research Blog Syrian Cyber Hackers Charged - Two From ‘Syrian Electronic Army’ Added to Cyber’s Most Wanted (FBI) Organizations of the Syrian civil war Paramilitary organizations based in Syria Cyberwarfare Hacker groups Information operations and warfare Propaganda organizations Saboteurs
14531895
https://en.wikipedia.org/wiki/Oracle%20Net%20Services
Oracle Net Services
In the field of database computing, Oracle Net Services consists of sets of software which enable client applications to establish and maintain network sessions with Oracle Database servers. Since Oracle databases operate in and across a variety of software and hardware environments, Oracle Corporation supplies high-level transparent networking facilities with the intention of providing networking functionality regardless of differences in nodes and protocols. Terminology network service name (NSN): "[a] simple name for a service that resolves to a connect descriptor" For example: sales.acme.co.uk Components Oracle Corporation defines Oracle Net Services as comprising: Oracle net listener Oracle Connection Manager Oracle Net Configuration assistant Oracle Net Manager Oracle Net Oracle Net, a proprietary networking stack, runs both on client devices and on Oracle database servers in order to set up and maintain connections and messaging between client applications and servers. Oracle Net (formerly called "SQL*Net" or "Net8") comprises two software components: Oracle Net Foundation Layer: makes and maintains connection sessions. The Oracle Net Foundation Layer establishes and also maintains the connection between the client application and server. It must reside on both the client and server for peer-to-peer communication to occur. Oracle Protocol Support: interfaces with underlying networking protocols such as TCP/IP, named pipes, or Sockets Direct Protocol (SDP). The listener The listener process(es) on a server detect incoming requests from clients for connection - by default on port 1521 - and manage network-traffic once clients have connected to an Oracle database. The listener uses a configuration-file - listener.ora - to help keep track of names, protocols, services and hosts. The listener.ora file can include three sorts of parameters: listener-address entries SID_LIST entries control entries Apart from pre-defined and known statically-registered databases, a listener can also accept dynamic service registration from a database. Oracle Connection Manager The Oracle Connection Manager (CMAN) acts as a lightweight router for Oracle Net packets. Oracle Net Manager Oracle Net Manager, a GUI tool, configures Oracle Net Services for an Oracle home on a local client or server host. (Prior to Oracle 9i known as "Net8 Assistant".) Associated software Utilities and tools tnsping: determines the accessibility of an Oracle net service. Software suites Oracle software integrating closely with and/or depending on Oracle Net Services includes: Oracle Clusterware Oracle Data Guard Oracle Enterprise Manager Oracle Internet Directory Oracle RAC (real application clusters) Oracle Streams See also Transparent Network Substrate (TNS) References Arun Kumar, John Kanagaraj and Richard Stroupe: Oracle Database 10g Insider Solutions. Sams, 2005. External links "Oracle Network Configuration" Footnotes Oracle software
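The connection flow described above can be illustrated from the client side. The following is a minimal sketch, assuming the python-oracledb driver and a naming method (such as a tnsnames.ora file) that resolves the network service name; the service name reuses the sales.acme.co.uk example above, and the credentials are hypothetical placeholders.

```python
# Minimal sketch: connecting to an Oracle database through Oracle Net using
# a network service name. Assumes the python-oracledb driver and a naming
# method (e.g. tnsnames.ora) that resolves the name below; the service name
# and credentials are hypothetical placeholders, not real endpoints.
import oracledb

connection = oracledb.connect(
    user="scott",
    password="tiger",
    dsn="sales.acme.co.uk",  # network service name resolved by Oracle Net
)

with connection.cursor() as cursor:
    cursor.execute("SELECT sysdate FROM dual")
    print(cursor.fetchone())

connection.close()
```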
13201685
https://en.wikipedia.org/wiki/Smart%20grid
Smart grid
A smart grid is an electrical grid which includes a variety of operational and energy measures, including: Advanced metering infrastructure (where "smart meter" is a generic name for any utility-side device, even a more capable one, e.g. a fiber-optic router) Smart distribution boards and circuit breakers integrated with home control and demand response (behind the meter, from the utility's perspective) Load control switches and smart appliances, often financed by efficiency gains on municipal programs (e.g. PACE financing) Renewable energy resources, including the capacity to charge the batteries of parked electric vehicles, larger arrays of batteries recycled from such vehicles, or other energy storage. Energy-efficient resources Sufficient utility-grade fiber broadband to connect and monitor the above, with wireless as backup, and sufficient spare ("dark") capacity to ensure failover, often leased for revenue. Electronic power conditioning and control of the production and distribution of electricity are important aspects of the smart grid. Smart grid policy is organized in Europe as the Smart Grid European Technology Platform. Policy in the United States is described in 42 U.S.C. § 17381. Roll-out of smart grid technology also implies a fundamental re-engineering of the electricity services industry, although typical usage of the term is focused on the technical infrastructure. Concerns with smart grid technology mostly focus on smart meters, items enabled by them, and general security issues. Background Historical development of the electricity grid The first alternating current power grid system was installed in 1886 in Great Barrington, Massachusetts. At that time, the grid was a centralized unidirectional system of electric power transmission, electricity distribution, and demand-driven control. In the 20th century, local grids grew over time, and were eventually interconnected for economic and reliability reasons. By the 1960s, the electric grids of developed countries had become very large, mature and highly interconnected, with thousands of 'central' generation power stations delivering power to major load centres via high-capacity power lines which were then branched and divided to provide power to smaller industrial and domestic users over the entire supply area. The topology of the 1960s grid was a result of the strong economies of scale: large coal-, gas- and oil-fired power stations in the 1 GW (1000 MW) to 3 GW scale are still found to be cost-effective, due to efficiency-boosting features that can be cost-effective only when the stations become very large. Power stations were located strategically to be close to fossil fuel reserves (either the mines or wells themselves, or else close to rail, road or port supply lines). Siting of hydro-electric dams in mountain areas also strongly influenced the structure of the emerging grid. Nuclear power plants were sited for availability of cooling water. Finally, fossil fuel-fired power stations were initially very polluting and were sited as far as economically possible from population centres once electricity distribution networks permitted it. By the late 1960s, the electricity grid reached the overwhelming majority of the population of developed countries, with only outlying regional areas remaining 'off-grid'. Metering of electricity consumption was necessary on a per-user basis in order to allow appropriate billing according to the (highly variable) level of consumption of different users. 
Because of limited data collection and processing capability during the period of growth of the grid, fixed-tariff arrangements were commonly put in place, as well as dual-tariff arrangements where night-time power was charged at a lower rate than daytime power. The motivation for dual-tariff arrangements was the lower night-time demand. Dual tariffs made possible the use of low-cost night-time electrical power in applications such as the maintaining of 'heat banks' which served to 'smooth out' the daily demand, and reduce the number of turbines that needed to be turned off overnight, thereby improving the utilisation and profitability of the generation and transmission facilities. The metering capabilities of the 1960s grid meant technological limitations on the degree to which price signals could be propagated through the system. From the 1970s to the 1990s, growing demand led to increasing numbers of power stations. In some areas, supply of electricity, especially at peak times, could not keep up with this demand, resulting in poor power quality including blackouts, power cuts, and brownouts. Increasingly, industry, heating, communication, lighting, and entertainment depended on electricity, and consumers demanded ever higher levels of reliability. Towards the end of the 20th century, electricity demand patterns were established: domestic heating and air-conditioning led to daily peaks in demand that were met by an array of 'peaking power generators' that would only be turned on for short periods each day. The relatively low utilisation of these peaking generators (commonly, gas turbines were used due to their relatively lower capital cost and faster start-up times), together with the necessary redundancy in the electricity grid, resulted in high costs to the electricity companies, which were passed on in the form of increased tariffs. In the 21st century, some developing countries like China, India, and Brazil were seen as pioneers of smart grid deployment. Modernization opportunities Since the early 21st century, opportunities to take advantage of improvements in electronic communication technology to resolve the limitations and costs of the electrical grid have become apparent. Technological limitations on metering no longer force peak power prices to be averaged out and passed on to all consumers equally. In parallel, growing concerns over environmental damage from fossil-fired power stations have led to a desire to use large amounts of renewable energy. Dominant forms such as wind power and solar power are highly variable, and so the need for more sophisticated control systems became apparent, to facilitate the connection of sources to the otherwise highly controllable grid. Power from photovoltaic cells (and to a lesser extent wind turbines) has also, significantly, called into question the imperative for large, centralised power stations. The rapidly falling costs point to a major change from the centralised grid topology to one that is highly distributed, with power being both generated and consumed right at the limits of the grid. Finally, growing concern over terrorist attacks in some countries has led to calls for a more robust energy grid that is less dependent on centralised power stations that were perceived to be potential attack targets. 
Definition of "smart grid" The first official definition of Smart Grid was provided by the Energy Independence and Security Act of 2007 (EISA-2007), which was approved by the US Congress in January 2007, and signed to law by President George W. Bush in December 2007. Title XIII of this bill provides a description, with ten characteristics, that can be considered a definition for Smart Grid, as follows:"It is the policy of the United States to support the modernization of the Nation's electricity transmission and distribution system to maintain a reliable and secure electricity infrastructure that can meet future demand growth and to achieve each of the following, which together characterize a Smart Grid: (1) Increased use of digital information and controls technology to improve reliability, security, and efficiency of the electric grid. (2) Dynamic optimization of grid operations and resources, with full cyber-security. (3) Deployment and integration of distributed resources and generation, including renewable resources. (4) Development and incorporation of demand response, demand-side resources, and energy-efficiency resources. (5) Deployment of 'smart' technologies (real-time, automated, interactive technologies that optimize the physical operation of appliances and consumer devices) for metering, communications concerning grid operations and status, and distribution automation. (6) Integration of 'smart' appliances and consumer devices. (7) Deployment and integration of advanced electricity storage and peak-shaving technologies, including plug-in electric and hybrid electric vehicles, and thermal storage air conditioning. (8) Provision to consumers of timely information and control options. (9) Development of standards for communication and interoperability of appliances and equipment connected to the electric grid, including the infrastructure serving the grid. (10) Identification and lowering of unreasonable or unnecessary barriers to adoption of smart grid technologies, practices, and services."The European Union Commission Task Force for Smart Grids also provides smart grid definition as: "A Smart Grid is an electricity network that can cost efficiently integrate the behaviour and actions of all users connected to it – generators, consumers and those that do both – in order to ensure economically efficient, sustainable power system with low losses and high levels of quality and security of supply and safety. A smart grid employs innovative products and services together with intelligent monitoring, control, communication, and self-healing technologies in order to: Better facilitate the connection and operation of generators of all sizes and technologies. Allow consumers to play a part in optimising the operation of the system. Provide consumers with greater information and options for how they use their supply. Significantly reduce the environmental impact of the whole electricity supply system. Maintain or even improve the existing high levels of system reliability, quality and security of supply. Maintain and improve the existing services efficiently." A common element to most definitions is the application of digital processing and communications to the power grid, making data flow and information management central to the smart grid. Various capabilities result from the deeply integrated use of digital technology with power grids. Integration of the new grid information is one of the key issues in the design of smart grids. 
Electric utilities now find themselves making three classes of transformations: improvement of infrastructure, called the strong grid in China; addition of the digital layer, which is the essence of the smart grid; and business process transformation, necessary to capitalize on the investments in smart technology. Much of the work that has been going on in electric grid modernization, especially substation and distribution automation, is now included in the general concept of the smart grid. Early technological innovations Smart grid technologies emerged from earlier attempts at using electronic control, metering, and monitoring. In the 1980s, automatic meter reading was used for monitoring loads from large customers, and evolved into the Advanced Metering Infrastructure of the 1990s, whose meters could store how electricity was used at different times of the day. Smart meters add continuous communications so that monitoring can be done in real time, and can be used as a gateway to demand response-aware devices and "smart sockets" in the home. Early forms of such demand-side management technologies were dynamic demand-aware devices that passively sensed the load on the grid by monitoring changes in the power supply frequency. Devices such as industrial and domestic air conditioners, refrigerators and heaters adjusted their duty cycle to avoid activation during times the grid was suffering a peak condition. Beginning in 2000, Italy's Telegestore Project was the first to network large numbers (27 million) of homes using smart meters connected via low-bandwidth power line communication. Some experiments used the term broadband over power lines (BPL), while others used wireless technologies such as mesh networking, promoted for more reliable connections to disparate devices in the home as well as supporting metering of other utilities such as gas and water. Monitoring and synchronization of wide area networks were revolutionized in the early 1990s when the Bonneville Power Administration expanded its smart grid research with prototype sensors that are capable of very rapid analysis of anomalies in electricity quality over very large geographic areas. The culmination of this work was the first operational Wide Area Measurement System (WAMS) in 2000. Other countries are rapidly integrating this technology; China had a comprehensive national WAMS in place by the time its five-year economic plan concluded in 2012. The earliest deployments of smart grids include the Italian system Telegestore (2005), the mesh network of Austin, Texas (since 2003), and the smart grid in Boulder, Colorado (2008). See below. Features The smart grid represents the full suite of current and proposed responses to the challenges of electricity supply. Because of the diverse range of factors involved, there are numerous competing taxonomies and no agreement on a universal definition. Nevertheless, one possible categorization is given here. Reliability The smart grid makes use of technologies, such as state estimation, that improve fault detection and allow self-healing of the network without the intervention of technicians. This will ensure a more reliable supply of electricity, and reduced vulnerability to natural disasters or attack. Although multiple routes are touted as a feature of the smart grid, the old grid also featured multiple routes. Initial power lines in the grid were built using a radial model; later, connectivity was guaranteed via multiple routes, referred to as a network structure. 
However, this created a new problem: if the current flow or related effects across the network exceed the limits of any particular network element, it could fail, and the current would be shunted to other network elements, which eventually may fail also, causing a domino effect. See power outage. A technique to prevent this is load shedding by rolling blackout or voltage reduction (brownout). Flexibility in network topology Next-generation transmission and distribution infrastructure will be better able to handle possible bidirectional energy flows, allowing for distributed generation such as from photovoltaic panels on building roofs, but also charging to/from the batteries of electric cars, wind turbines, pumped hydroelectric power, the use of fuel cells, and other sources. Classic grids were designed for one-way flow of electricity, but if a local sub-network generates more power than it is consuming, the reverse flow can raise safety and reliability issues. A smart grid aims to manage these situations. Efficiency Numerous contributions to overall improvement of the efficiency of energy infrastructure are anticipated from the deployment of smart grid technology, in particular including demand-side management, for example turning off air conditioners during short-term spikes in electricity price, reducing the voltage when possible on distribution lines through Voltage/VAR Optimization (VVO), eliminating truck-rolls for meter reading, and reducing truck-rolls by improved outage management using data from Advanced Metering Infrastructure systems. The overall effect is less redundancy in transmission and distribution lines, and greater utilization of generators, leading to lower power prices. Load adjustment/Load balancing The total load connected to the power grid can vary significantly over time. Although the total load is the sum of many individual choices of the clients, the overall load is not necessarily stable or slow varying. For example, if a popular television program starts, millions of televisions will start to draw current instantly. Traditionally, to respond to a rapid increase in power consumption, faster than the start-up time of a large generator, some spare generators are put on a dissipative standby mode. A smart grid may warn all individual television sets, or another larger customer, to reduce the load temporarily (to allow time to start up a larger generator) or continuously (in the case of limited resources). Using mathematical prediction algorithms it is possible to predict how many standby generators need to be used, to reach a certain failure rate. In the traditional grid, the failure rate can only be reduced at the cost of more standby generators. In a smart grid, the load reduction by even a small portion of the clients may eliminate the problem. Peak curtailment/leveling and time of use pricing To reduce demand during the high cost peak usage periods, communications and metering technologies inform smart devices in the home and business when energy demand is high and track how much electricity is used and when it is used. It also gives utility companies the ability to reduce consumption by communicating to devices directly in order to prevent system overloads. Examples would be a utility reducing the usage of a group of electric vehicle charging stations or shifting temperature set points of air conditioners in a city. 
To motivate consumers to cut back use and perform what is called peak curtailment or peak leveling, prices of electricity are increased during high demand periods, and decreased during low demand periods. It is thought that consumers and businesses will tend to consume less during high demand periods if it is possible for consumers and consumer devices to be aware of the high price premium for using electricity at peak periods. This could mean making trade-offs such as cycling on/off air conditioners or running dishwashers at 9 pm instead of 5 pm. When businesses and consumers see a direct economic benefit of using energy at off-peak times, the theory is that they will include energy cost of operation into their consumer device and building construction decisions and hence become more energy efficient. Sustainability The improved flexibility of the smart grid permits greater penetration of highly variable renewable energy sources such as solar power and wind power, even without the addition of energy storage. Current network infrastructure is not built to allow for many distributed feed-in points, and typically even if some feed-in is allowed at the local (distribution) level, the transmission-level infrastructure cannot accommodate it. Rapid fluctuations in distributed generation, such as due to cloudy or gusty weather, present significant challenges to power engineers who need to ensure stable power levels through varying the output of the more controllable generators such as gas turbines and hydroelectric generators. Smart grid technology is a necessary condition for very large amounts of renewable electricity on the grid for this reason. There is also support for vehicle-to-grid. Market-enabling The smart grid allows for systematic communication between suppliers (their energy price) and consumers (their willingness-to-pay), and permits both the suppliers and the consumers to be more flexible and sophisticated in their operational strategies. Only the critical loads will need to pay the peak energy prices, and consumers will be able to be more strategic in when they use energy. Generators with greater flexibility will be able to sell energy strategically for maximum profit, whereas inflexible generators such as base-load steam turbines and wind turbines will receive a varying tariff based on the level of demand and the status of the other generators currently operating. The overall effect is a signal that rewards energy efficiency, and energy consumption that is sensitive to the time-varying limitations of the supply. At the domestic level, appliances with a degree of energy storage or thermal mass (such as refrigerators, heat banks, and heat pumps) will be well placed to 'play' the market and seek to minimise energy cost by adapting demand to periods when energy is cheaper. This is an extension of the dual-tariff energy pricing mentioned above. Demand response support Demand response support allows generators and loads to interact in an automated fashion in real time, coordinating demand to flatten spikes. Eliminating the fraction of demand that occurs in these spikes eliminates the cost of adding reserve generators, cuts wear and tear and extends the life of equipment, and allows users to cut their energy bills by telling low priority devices to use energy only when it is cheapest. 
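As an illustration of the time-of-use idea described above, here is a minimal sketch in Python of a deferrable appliance choosing the cheapest hour in its allowed window; the tariff values and the appliance model are hypothetical, not taken from any utility programme.

```python
# Minimal sketch of time-of-use response: a deferrable appliance (e.g. a
# dishwasher) picks the cheapest hour in its allowed window. The hourly
# tariff values below are hypothetical placeholders.
HOURLY_PRICE = {17: 0.42, 18: 0.45, 19: 0.38, 20: 0.30, 21: 0.18, 22: 0.15, 23: 0.14}

def cheapest_start(allowed_hours, prices):
    """Return the allowed hour with the lowest tariff."""
    return min(allowed_hours, key=lambda hour: prices[hour])

# The household allows the dishwasher to run any time from 17:00 to 23:00.
start = cheapest_start(range(17, 24), HOURLY_PRICE)
print(f"Run dishwasher at {start}:00, price {HOURLY_PRICE[start]} $/kWh")
```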
Currently, power grid systems have varying degrees of communication within control systems for their high-value assets, such as in generating plants, transmission lines, substations and major energy users. In general information flows one way, from the users and the loads they control back to the utilities. The utilities attempt to meet the demand and succeed or fail to varying degrees (brownouts, rolling blackout, uncontrolled blackout). The total amount of power demanded by the users can have a very wide probability distribution which requires spare generating plants in standby mode to respond to the rapidly changing power usage. This one-way flow of information is expensive; the last 10% of generating capacity may be required as little as 1% of the time, and brownouts and outages can be costly to consumers. Demand response can be provided by commercial, residential, and industrial loads. For example, Alcoa's Warrick Operation is participating in MISO as a qualified Demand Response Resource, and Trimet Aluminium uses its smelter as a short-term mega-battery. Latency of the data flow is a major concern, with some early smart meter architectures allowing as long as a 24-hour delay in receiving the data, preventing any possible reaction by either supplying or demanding devices. Platform for advanced services As with other industries, use of robust two-way communications, advanced sensors, and distributed computing technology will improve the efficiency, reliability and safety of power delivery and use. It also opens up the potential for entirely new services or improvements on existing ones, such as fire monitoring and alarms that can shut off power, make phone calls to emergency services, etc. Provision megabits, control power with kilobits, sell the rest The amount of data required to perform monitoring and switching one's appliances off automatically is very small compared with that already reaching even remote homes to support voice, security, Internet and TV services. Many smart grid bandwidth upgrades are paid for by over-provisioning to also support consumer services, and subsidizing the communications with energy-related services or subsidizing the energy-related services, such as higher rates during peak hours, with communications. This is particularly true where governments run both sets of services as a public monopoly. Because power and communications companies are generally separate commercial enterprises in North America and Europe, it has required considerable government and large-vendor effort to encourage various enterprises to cooperate. Some, like Cisco, see opportunity in providing devices to consumers very similar to those they have long been providing to industry. Others, such as Silver Spring Networks or Google, are data integrators rather than vendors of equipment. While the AC power control standards suggest powerline networking would be the primary means of communication among smart grid and home devices, the bits may not reach the home via Broadband over Power Lines (BPL) initially but by fixed wireless. Technology The bulk of smart grid technologies are already used in other applications such as manufacturing and telecommunications and are being adapted for use in grid operations. 
Integrated communications: Areas for improvement include: substation automation, demand response, distribution automation, supervisory control and data acquisition (SCADA), energy management systems, wireless mesh networks and other technologies, power-line carrier communications, and fiber-optics. Integrated communications will allow for real-time control, information and data exchange to optimize system reliability, asset utilization, and security. Sensing and measurement: core duties are evaluating congestion and grid stability, monitoring equipment health, energy theft prevention, and control strategies support. Technologies include: advanced microprocessor meters (smart meter) and meter reading equipment, wide-area monitoring systems (typically based on online readings by distributed temperature sensing combined with real-time thermal rating (RTTR) systems), electromagnetic signature measurement/analysis, time-of-use and real-time pricing tools, advanced switches and cables, backscatter radio technology, and digital protective relays. Smart meters. Phasor measurement units. Many in the power systems engineering community believe that the Northeast blackout of 2003 could have been contained to a much smaller area if a wide area phasor measurement network had been in place. Distributed power flow control: power flow control devices clamp onto existing transmission lines to control the flow of power within. Transmission lines enabled with such devices support greater use of renewable energy by providing more consistent, real-time control over how that energy is routed within the grid. This technology enables the grid to more effectively store intermittent energy from renewables for later use. Smart power generation using advanced components: smart power generation is a concept of matching electricity generation with demand using multiple identical generators which can start, stop and operate efficiently at chosen load, independently of the others, making them suitable for base load and peaking power generation. Matching supply and demand, called load balancing, is essential for a stable and reliable supply of electricity. Short-term deviations in the balance lead to frequency variations and a prolonged mismatch results in blackouts. Operators of power transmission systems are charged with the balancing task, matching the power output of all the generators to the load of their electrical grid. The load balancing task has become much more challenging as increasingly intermittent and variable generators such as wind turbines and solar cells are added to the grid, forcing other producers to adapt their output much more frequently than has been required in the past. The first two dynamic grid-stability power plants utilizing the concept have been ordered by Elering and will be built by Wärtsilä in Kiisa, Estonia (Kiisa Power Plant). Their purpose is to "provide dynamic generation capacity to meet sudden and unexpected drops in the electricity supply." They are scheduled to be ready during 2013 and 2014, and their total output will be 250 MW. Power system automation enables rapid diagnosis of and precise solutions to specific grid disruptions or outages. These technologies rely on and contribute to each of the other four key areas. Three technology categories for advanced control methods are: distributed intelligent agents (control systems), analytical tools (software algorithms and high-speed computers), and operational applications (SCADA, substation automation, demand response, etc.). 
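The load-balancing discussion above notes that short-term imbalances between generation and demand show up as frequency deviations. The following minimal Python sketch illustrates the general idea of a frequency-responsive (dynamic demand) load; the nominal frequency and thresholds are illustrative assumptions, not settings from any actual demand-response programme.

```python
# Minimal sketch of frequency-responsive demand: a smart load watches the
# grid frequency (nominally 50 Hz here) and defers or restores its own
# consumption when the frequency sags or recovers. Thresholds and the
# nominal frequency are illustrative assumptions, not utility settings.
NOMINAL_HZ = 50.0
SHED_BELOW_HZ = 49.8      # under-frequency: generation is short, back off
RESTORE_ABOVE_HZ = 49.95  # hysteresis band avoids rapid on/off cycling

def next_state(measured_hz, currently_on):
    """Decide whether a deferrable load should draw power right now."""
    if currently_on and measured_hz < SHED_BELOW_HZ:
        return False   # shed load to help arrest the frequency drop
    if not currently_on and measured_hz > RESTORE_ABOVE_HZ:
        return True    # frequency has recovered, resume consumption
    return currently_on

state = True
for sample in (50.01, 49.90, 49.79, 49.82, 49.97):
    state = next_state(sample, state)
    print(f"{sample:.2f} Hz -> load {'ON' if state else 'OFF'}")
```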
Using artificial intelligence programming techniques, Fujian power grid in China created a wide area protection system that is able to rapidly and accurately calculate a control strategy and execute it. The Voltage Stability Monitoring & Control (VSMC) software uses a sensitivity-based successive linear programming method to reliably determine the optimal control solution. IT companies disrupting the energy market Smart grid provides IT-based solutions which the traditional power grid is lacking. These new solutions pave the way for new entrants that were traditionally not related to the energy grid. Technology companies are disrupting the traditional energy market players in several ways. They develop complex distribution systems to meet the more decentralized power generation due to microgrids. Additionally, the increase in data collection brings many new possibilities for technology companies, such as deploying transmission grid sensors at the user level and balancing system reserves. The technology in microgrids makes energy consumption cheaper for households than buying from utilities. Additionally, residents can manage their energy consumption more easily and effectively with the connection to smart meters. However, the performance and reliability of microgrids strongly depend on the continuous interaction between power generation, storage and load requirements. A hybrid offering combining renewable energy sources with storable energy sources such as coal and gas addresses the limitations of a microgrid serving alone. Consequences As a consequence of the entrance of the technology companies in the energy market, utilities and DSOs need to create new business models to keep current customers and to create new customers. Focus on a customer engagement strategy DSOs can focus on creating good customer engagement strategies to build loyalty and trust towards the customer. To retain and attract customers who decide to produce their own energy through microgrids, DSOs can offer purchase agreements for the sale of surplus energy that the consumer produces. Unlike the IT companies, both DSOs and utilities can use their market experience to give consumers energy-use advice and efficiency upgrades to create excellent customer service. Create alliances with newly entered technology companies Instead of trying to compete against IT companies in their expertise, both utilities and DSOs can try to create alliances with IT companies to create good solutions together. The French utility company Engie did this by buying the service provider Ecova and OpTerra Energy Services. Renewable energy sources The generation of renewable energy can often be connected at the distribution level, instead of at the transmission level, which means that DSOs can manage the flows and distribute power locally. This brings new opportunities for DSOs to expand their market by selling energy directly to the consumer. Simultaneously, this is challenging fossil fuel-producing utilities, which are already trapped by the high costs of aging assets. Stricter government regulations on producing traditional energy resources increase the difficulty of staying in business and increase the pressure on traditional energy companies to shift to renewable energy sources. An example of a utility changing its business model to produce more renewable energy is the Norwegian company Equinor, a state-owned oil company that is now investing heavily in renewable energy. 
Research Major programs IntelliGrid – Created by the Electric Power Research Institute (EPRI), IntelliGrid architecture provides methodology, tools, and recommendations for standards and technologies for utility use in planning, specifying, and procuring IT-based systems, such as advanced metering, distribution automation, and demand response. The architecture also provides a living laboratory for assessing devices, systems, and technology. Several utilities have applied IntelliGrid architecture including Southern California Edison, Long Island Power Authority, Salt River Project, and TXU Electric Delivery. The IntelliGrid Consortium is a public/private partnership that integrates and optimizes global research efforts, funds technology R&D, works to integrate technologies, and disseminates technical information. Grid 2030 – Grid 2030 is a joint vision statement for the U.S. electrical system developed by the electric utility industry, equipment manufacturers, information technology providers, federal and state government agencies, interest groups, universities, and national laboratories. It covers generation, transmission, distribution, storage, and end-use. The National Electric Delivery Technologies Roadmap is the implementation document for the Grid 2030 vision. The Roadmap outlines the key issues and challenges for modernizing the grid and suggests paths that government and industry can take to build America's future electric delivery system. Modern Grid Initiative (MGI) is a collaborative effort between the U.S. Department of Energy (DOE), the National Energy Technology Laboratory (NETL), utilities, consumers, researchers, and other grid stakeholders to modernize and integrate the U.S. electrical grid. DOE's Office of Electricity Delivery and Energy Reliability (OE) sponsors the initiative, which builds upon Grid 2030 and the National Electricity Delivery Technologies Roadmap and is aligned with other programs such as GridWise and GridWorks. GridWise – A DOE OE program focused on developing information technology to modernize the U.S. electrical grid. Working with the GridWise Alliance, the program invests in communications architecture and standards; simulation and analysis tools; smart technologies; test beds and demonstration projects; and new regulatory, institutional, and market frameworks. The GridWise Alliance is a consortium of public and private electricity sector stakeholders, providing a forum for idea exchanges, cooperative efforts, and meetings with policy makers at federal and state levels. GridWise Architecture Council (GWAC) was formed by the U.S. Department of Energy to promote and enable interoperability among the many entities that interact with the nation's electric power system. The GWAC members are a balanced and respected team representing the many constituencies of the electricity supply chain and users. The GWAC provides industry guidance and tools to articulate the goal of interoperability across the electric system, identify the concepts and architectures needed to make interoperability possible, and develop actionable steps to facilitate the inter operation of the systems, devices, and institutions that encompass the nation's electric system. The GridWise Architecture Council Interoperability Context Setting Framework, V 1.1 defines necessary guidelines and principles. 
GridWorks – A DOE OE program focused on improving the reliability of the electric system through modernizing key grid components such as cables and conductors, substations and protective systems, and power electronics. The program's focus includes coordinating efforts on high temperature superconducting systems, transmission reliability technologies, electric distribution technologies, energy storage devices, and GridWise systems. Pacific Northwest Smart Grid Demonstration Project - This project is a demonstration across five Pacific Northwest states: Idaho, Montana, Oregon, Washington, and Wyoming. It involves about 60,000 metered customers, and contains many key functions of the future smart grid. Solar Cities - In Australia, the Solar Cities programme included close collaboration with energy companies to trial smart meters, peak and off-peak pricing, remote switching and related efforts. It also provided some limited funding for grid upgrades. Smart Grid Energy Research Center (SMERC) - Located at the University of California, Los Angeles, SMERC dedicated its efforts to large-scale testing of its smart EV charging network technology. It created another platform for bidirectional flow of information between a utility and consumer end-devices. SMERC also developed a demand response (DR) test bed that comprises a Control Center, Demand Response Automation Server (DRAS), Home-Area-Network (HAN), Battery Energy Storage System (BESS), and photovoltaic (PV) panels. These technologies are installed within the Los Angeles Department of Water and Power and Southern California Edison territory as a network of EV chargers, battery energy storage systems, solar panels, DC fast charger, and Vehicle-to-Grid (V2G) units. These platforms, communications and control networks enable UCLA-led projects within the area to be tested in partnership with two local utilities, SCE and LADWP. Smart Quart - In Germany, the Smart Quart project develops three smart districts to develop, test and showcase technology to operate smart grids. The project is a collaboration of E.ON, Viessmann, gridX and hydrogenious together with the RWTH Aachen University. It is planned that by the end of 2024 all three districts will be supplied with locally generated energy and will be largely independent of fossil energy sources. Smart grid modelling Many different concepts have been used to model intelligent power grids. They are generally studied within the framework of complex systems. In a recent brainstorming session, the power grid was considered within the context of optimal control, ecology, human cognition, glassy dynamics, information theory, microphysics of clouds, and many others. Here is a selection of the types of analyses that have appeared in recent years. Protection systems that verify and supervise themselves Pelqim Spahiu and Ian R. Evans in their study introduced the concept of a substation-based smart protection and hybrid Inspection Unit. Kuramoto oscillators The Kuramoto model is a well-studied system. The power grid has been described in this context as well. The goal is to keep the system in balance, or to maintain phase synchronization (also known as phase locking). Non-uniform oscillators also help to model different technologies, different types of power generators, patterns of consumption, and so on. The model has also been used to describe the synchronization patterns in the blinking of fireflies. Bio-systems Power grids have been related to complex biological systems in many other contexts. 
In one study, power grids were compared to the dolphin social network. These creatures streamline or intensify communication in case of an unusual situation. The intercommunications that enable them to survive are highly complex. Random fuse networks In percolation theory, random fuse networks have been studied. The current density might be too low in some areas, and too strong in others. The analysis can therefore be used to smooth out potential problems in the network. For instance, high-speed computer analysis can predict blown fuses and correct for them, or analyze patterns that might lead to a power outage. It is difficult for humans to predict the long-term patterns in complex networks, so fuse or diode networks are used instead. Smart Grid Communication Network Network simulators are used to simulate/emulate network communication effects. This typically involves setting up a lab with the smart grid devices, applications, etc., with the virtual network being provided by the network simulator. Neural networks Neural networks have been considered for power grid management as well. Electric power systems can be classified in multiple different ways: non-linear, dynamic, discrete, or random. Artificial Neural Networks (ANNs) attempt to solve the most difficult of these problems, the non-linear problems. Demand Forecasting One application of ANNs is in demand forecasting. In order for grids to operate economically and reliably, demand forecasting is essential, because it is used to predict the amount of power that will be consumed by the load. This is dependent on weather conditions, type of day, random events, incidents, etc. For non-linear loads though, the load profile isn't as smooth or predictable, resulting in higher uncertainty and less accuracy using the traditional Artificial Intelligence models. Some factors that ANNs consider when developing these sorts of models: classification of load profiles of different customer classes based on the consumption of electricity, increased responsiveness of demand to predict real time electricity prices as compared to conventional grids, the need to input past demand as different components, such as peak load, base load, valley load, average load, etc. instead of joining them into a single input, and lastly, the varying influence of specific input variables depending on the type of load. An example of the last case: the type of day (weekday or weekend) would not have much of an effect on hospital grids, but it would be a big factor in the load profile of residential grids. Markov processes As wind power continues to gain popularity, it becomes a necessary ingredient in realistic power grid studies. Off-line storage, wind variability, supply, demand, pricing, and other factors can be modelled as a mathematical game. Here the goal is to develop a winning strategy. Markov processes have been used to model and study this type of system. Maximum entropy All of these methods are, in one way or another, maximum entropy methods, which is an active area of research. This goes back to the ideas of Shannon, and many other researchers who studied communication networks. Continuing along similar lines today, modern wireless network research often considers the problem of network congestion, and many algorithms are being proposed to minimize it, including game theory, innovative combinations of FDMA, TDMA, and others. 
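Returning to the demand-forecasting application of ANNs described above, the following is a minimal Python sketch using scikit-learn; the chosen features, the synthetic training data, and the network size are illustrative assumptions only, not a model taken from the literature.

```python
# Minimal sketch of ANN-based demand forecasting: predict hourly load from a
# few explanatory features (hour of day, temperature, weekend flag). The
# synthetic data and network size are illustrative assumptions only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
hours = rng.integers(0, 24, size=500)
temps = rng.uniform(-5, 35, size=500)
is_weekend = rng.integers(0, 2, size=500)

# Synthetic "load" with a daily shape, weather sensitivity and a weekend dip.
load = (1000 + 300 * np.sin((hours - 7) / 24 * 2 * np.pi)
        + 12 * np.abs(temps - 18) - 80 * is_weekend
        + rng.normal(0, 20, size=500))

X = np.column_stack([hours, temps, is_weekend])
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0)
model.fit(X, load)

# Forecast the load for 6 p.m. on a 30 degree-Celsius weekday.
print(model.predict([[18, 30.0, 0]]))
```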
Economics Market outlook In 2009, the US smart grid industry was valued at about $21.4 billion – by 2014, it was expected to exceed at least $42.8 billion. Given the success of the smart grids in the U.S., the world market was expected to grow at a faster rate, surging from $69.3 billion in 2009 to $171.4 billion by 2014. The segments set to benefit the most will be smart metering hardware sellers and makers of software used to transmit and organize the massive amount of data collected by meters. The smart grid market was valued at over US$30 billion in 2017 and is set to expand at over 11% CAGR to reach US$70 billion by 2024. The growing need to digitalize the power sector, driven by ageing electrical grid infrastructure, will stimulate growth in the global market. The industry is primarily driven by favorable government regulations and mandates along with the rising share of renewables in the global energy mix. According to the International Energy Agency (IEA), global investment in digital electricity infrastructure was over US$50 billion in 2017. A 2011 study from the Electric Power Research Institute concludes that investment in a U.S. smart grid will cost up to $476 billion over 20 years but will provide up to $2 trillion in customer benefits over that time. In 2015, the World Economic Forum reported a transformational investment of more than $7.6 trillion by members of the OECD is needed over the next 25 years (or $300 billion per year) to modernize, expand, and decentralize the electricity infrastructure with technical innovation as key to the transformation. A 2019 study from the International Energy Agency estimates that the current (depreciated) value of the US electric grid is more than USD 1 trillion. The total cost of replacing it with a smart grid is estimated to be more than USD 4 trillion. If smart grids are deployed fully across the US, the country expects to save USD 130 billion annually. General economics developments As customers can choose their electricity suppliers, depending on their different tariff methods, the focus on transportation costs will increase. Reduction of maintenance and replacement costs will stimulate more advanced control. A smart grid precisely limits electrical power down to the residential level, networks small-scale distributed energy generation and storage devices, communicates information on operating status and needs, collects information on prices and grid conditions, and moves the grid beyond central control to a collaborative network. US and UK savings estimates and concerns A 2003 United States Department of Energy study calculated that internal modernization of US grids with smart grid capabilities would save between 46 and 117 billion dollars over the next 20 years if implemented within a few years of the study. As well as these industrial modernization benefits, smart grid features could expand energy efficiency beyond the grid into the home by coordinating low priority home devices such as water heaters so that their use of power takes advantage of the most desirable energy sources. Smart grids can also coordinate the production of power from large numbers of small power producers such as owners of rooftop solar panels — an arrangement that would otherwise prove problematic for power systems operators at local utilities. One important question is whether consumers will act in response to market signals. The U.S. 
Department of Energy (DOE) as part of the American Recovery and Reinvestment Act Smart Grid Investment Grant and Demonstrations Program funded special consumer behavior studies to examine the acceptance, retention, and response of consumers subscribed to time-based utility rate programs that involve advanced metering infrastructure and customer systems such as in-home displays and programmable communicating thermostats. Another concern is that the cost of telecommunications to fully support smart grids may be prohibitive. A less expensive communication mechanism is proposed using a form of "dynamic demand management" where devices shave peaks by shifting their loads in reaction to grid frequency. Grid frequency could be used to communicate load information without the need of an additional telecommunication network, but it would not support economic bargaining or quantification of contributions. Although there are specific and proven smart grid technologies in use, smart grid is an aggregate term for a set of related technologies on which a specification is generally agreed, rather than a name for a specific technology. Some of the benefits of such a modernized electricity network include the ability to reduce power consumption at the consumer side during peak hours, called demand side management; enabling grid connection of distributed generation power (with photovoltaic arrays, small wind turbines, micro hydro, or even combined heat power generators in buildings); incorporating grid energy storage for distributed generation load balancing; and eliminating or containing failures such as widespread power grid cascading failures. The increased efficiency and reliability of the smart grid is expected to save consumers money and help reduce emissions (Smart Grid and Renewable Energy Monitoring Systems, SpeakSolar.org, 3 September 2010). Opposition and concerns Most opposition and concerns have centered on smart meters and the items (such as remote control, remote disconnect, and variable rate pricing) enabled by them. Where opposition to smart meters is encountered, they are often marketed as "smart grid", which connects smart grid to smart meters in the eyes of opponents. Specific points of opposition or concern include: consumer concerns over privacy, e.g. use of usage data by law enforcement social concerns over "fair" availability of electricity concern that complex rate systems (e.g. variable rates) remove clarity and accountability, allowing the supplier to take advantage of the customer concern over remotely controllable "kill switch" incorporated into most smart meters social concerns over Enron style abuses of information leverage concerns over giving the government mechanisms to control the use of all power using activities concerns over RF emissions from smart meters Security While modernization of electrical grids into smart grids allows for optimization of everyday processes, a smart grid, being online, can be vulnerable to cyberattacks (Demertzis K., Iliadis L. (2018) A Computational Intelligence System Identifying Cyber-Attacks on Smart Energy Grids. In: Daras N., Rassias T. (eds) Modern Discrete Mathematics and Analysis. Springer Optimization and Its Applications, vol 131. Springer, Cham). Transformers which increase the voltage of electricity created at power plants for long-distance travel, transmission lines themselves, and distribution lines which deliver the electricity to its consumers are particularly susceptible. 
These systems rely on sensors which gather information from the field and then deliver it to control centers, where algorithms automate analysis and decision-making processes. These decisions are sent back to the field, where existing equipment executes them. Hackers have the potential to disrupt these automated control systems, severing the channels which allow generated electricity to be utilized. This is called a denial of service or DoS attack. They can also launch integrity attacks which corrupt information being transmitted along the system as well as desynchronization attacks which affect when such information is delivered to the appropriate location. Additionally, intruders can gain access via renewable energy generation systems and smart meters connected to the grid, taking advantage of more specialized weaknesses or ones whose security has not been prioritized. Because a smart grid has a large number of access points, like smart meters, defending all of its weak points can prove difficult. There is also concern about the security of the infrastructure, chiefly the communications technology at the heart of the smart grid. Because it is designed to allow real-time contact between utilities and meters in customers' homes and businesses, there is a risk that these capabilities could be exploited for criminal or even terrorist actions. One of the key capabilities of this connectivity is the ability to remotely switch off power supplies, enabling utilities to quickly and easily cease or modify supplies to customers who default on payment. This is undoubtedly a massive boon for energy providers, but also raises some significant security issues. Cybercriminals have infiltrated the U.S. electric grid before on numerous occasions. Aside from computer infiltration, there are also concerns that computer malware like Stuxnet, which targeted SCADA systems which are widely used in industry, could be used to attack a smart grid network. Electricity theft is a concern in the U.S. where the smart meters being deployed use RF technology to communicate with the electricity transmission network. People with knowledge of electronics can devise interference devices to cause the smart meter to report lower than actual usage. Similarly, the same technology can be employed to make it appear that the energy the consumer is using is being used by another customer, increasing their bill. The damage from a well-executed, sizable cyberattack could be extensive and long-lasting. One incapacitated substation could take from nine days to over a year to repair, depending on the nature of the attack. It can also cause an hours-long outage in a small radius. It could have an immediate effect on transportation infrastructure, as traffic lights and other routing mechanisms, as well as ventilation equipment for underground roadways, are reliant on electricity. Additionally, infrastructure which relies on the electric grid, including wastewater treatment facilities, the information technology sector, and communications systems, could be impacted. The December 2015 Ukraine power grid cyberattack, the first recorded of its kind, disrupted services to nearly a quarter of a million people by bringing substations offline. The Council on Foreign Relations has noted that states are most likely to be the perpetrators of such an attack as they have access to the resources to carry one out despite the high level of difficulty of doing so. 
Cyber intrusions can be used as portions of a larger offensive, military or otherwise. Some security experts warn that this type of event is easily scalable to grids elsewhere. Insurance company Lloyd's of London has already modeled the outcome of a cyberattack on the Eastern Interconnection, which has the potential to impact 15 states, put 93 million people in the dark, and cost the country's economy anywhere from $243 billion to $1 trillion in various damages. According to the U.S. House of Representatives Subcommittee on Economic Development, Public Buildings, and Emergency Management, the electric grid has already seen a sizable number of cyber intrusions, with two in every five aiming to incapacitate it. As such, the U.S. Department of Energy has prioritized research and development to decrease the electric grid's vulnerability to cyberattacks, citing them as an "imminent danger" in its 2017 Quadrennial Energy Review. The Department of Energy has also identified both attack resistance and self-healing as major keys to ensuring that today's smart grid is future-proof. While there are regulations already in place, namely the Critical Infrastructure Protection Standards introduced by the North American Electric Reliability Corporation, a significant number of them are suggestions rather than mandates. Most electricity generation, transmission, and distribution facilities and equipment are owned by private stakeholders, further complicating the task of assessing adherence to such standards. Additionally, even if utilities want to fully comply, they may find that it is too expensive to do so. Some experts argue that the first step to increasing the cyber defenses of the smart electric grid is completing a comprehensive risk analysis of existing infrastructure, including research of software, hardware, and communication processes. Additionally, as intrusions themselves can provide valuable information, it could be useful to analyze system logs and other records of their nature and timing. Common weaknesses already identified using such methods by the Department of Homeland Security include poor code quality, improper authentication, and weak firewall rules. Once this step is completed, some suggest that it makes sense to then complete an analysis of the potential consequences of the aforementioned failures or shortcomings. This includes both immediate consequences as well as second- and third-order cascading effects on parallel systems. Finally, risk mitigation solutions, which may include simple remediation of infrastructure inadequacies or novel strategies, can be deployed to address the situation. Some such measures include recoding of control system algorithms to make them more able to resist and recover from cyberattacks or preventive techniques that allow more efficient detection of unusual or unauthorized changes to data. Strategies to account for human error which can compromise systems include educating those who work in the field to be wary of strange USB drives, which can introduce malware if inserted, even if just to check their contents. Other solutions include utilizing transmission substations, constrained SCADA networks, policy-based data sharing, and attestation for constrained smart meters. Transmission substations utilize one-time signature authentication technologies and one-way hash chain constructs. These constraints have since been remedied with the creation of a fast-signing and verification technology and buffering-free data processing. 
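The one-way hash chain construct mentioned above can be illustrated with a short sketch. This is a generic example of the technique, assuming SHA-256, and is not the actual substation protocol: the sender commits to the last element of a chain and later reveals earlier elements, which receivers verify by re-hashing.

```python
# Minimal sketch of a one-way hash chain, in the spirit of one-time
# authentication schemes: commit to the last chain element in advance, then
# reveal earlier elements one per message; receivers verify by re-hashing.
# This is an illustration only, not an actual substation protocol.
import hashlib
import os

def hash_chain(seed: bytes, length: int):
    """Build h0 -> h1 -> ... -> h_length by repeated SHA-256 hashing."""
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

chain = hash_chain(os.urandom(32), length=5)
commitment = chain[-1]   # published or installed in receivers in advance

# Later, the sender discloses chain[-2]; the receiver checks that it hashes
# forward to the known commitment, proving it came from the chain's owner.
disclosed = chain[-2]
assert hashlib.sha256(disclosed).digest() == commitment
print("hash-chain element verified")
```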
A similar solution has been constructed for constrained SCADA networks. This involves applying a hash-based message authentication code (HMAC) to byte streams, converting the random-error detection available on legacy systems to a mechanism that guarantees data authenticity. Policy-based data sharing utilizes GPS-clock-synchronized, fine-grain power grid measurements to provide increased grid stability and reliability. It does this through synchrophasor measurements gathered by phasor measurement units (PMUs). Attestation for constrained smart meters faces a slightly different challenge: in order to prevent energy theft and similar attacks, cyber security providers have to make sure that the devices' software is authentic. To combat this problem, an architecture for constrained smart networks has been created and implemented at a low level in the embedded system. Other challenges to adoption Before a utility installs an advanced metering system, or any type of smart system, it must make a business case for the investment. Some components, like the power system stabilizers (PSS) installed on generators, are very expensive, require complex integration in the grid's control system, are needed only during emergencies, and are only effective if other suppliers on the network have them. Without any incentive to install them, power suppliers don't. Most utilities find it difficult to justify installing a communications infrastructure for a single application (e.g. meter reading). Because of this, a utility must typically identify several applications that will use the same communications infrastructure – for example, reading a meter, monitoring power quality, remote connection and disconnection of customers, enabling demand response, etc. Ideally, the communications infrastructure will not only support near-term applications, but unanticipated applications that will arise in the future. Regulatory or legislative actions can also drive utilities to implement pieces of a smart grid puzzle. Each utility has a unique set of business, regulatory, and legislative drivers that guide its investments. This means that each utility will take a different path to creating its smart grid and that different utilities will create smart grids at different adoption rates. Some features of smart grids draw opposition from industries that currently provide, or hope to provide, similar services. An example is competition with cable and DSL Internet providers from broadband over powerline internet access. Providers of SCADA control systems for grids have intentionally designed proprietary hardware, protocols and software so that they cannot inter-operate with other systems, in order to tie customers to the vendor. The incorporation of digital communications and computer infrastructure with the grid's existing physical infrastructure poses challenges and inherent vulnerabilities. According to IEEE Security and Privacy Magazine, the smart grid will require that people develop and use large computer and communication infrastructure that supports a greater degree of situational awareness and that allows for more specific command and control operations. This process is necessary to support major systems such as demand-response wide-area measurement and control, storage and transportation of electricity, and the automation of electric distribution. Power Theft / Power Loss Various "smart grid" systems have dual functions.
This includes Advanced Metering Infrastructure systems which, when used with various software, can detect power theft and, by process of elimination, detect where equipment failures have taken place. These are in addition to their primary functions of eliminating the need for human meter reading and measuring the time-of-use of electricity. The worldwide power loss, including theft, is estimated at approximately two hundred billion dollars annually. Electricity theft also represents a major challenge when providing reliable electrical service in developing countries. Deployments and attempted deployments Enel The earliest example of a smart grid, and one of the largest, is the Italian system installed by Enel S.p.A. of Italy. Completed in 2005, the Telegestore project was highly unusual in the utility world because the company designed and manufactured its own meters, acted as its own system integrator, and developed its own system software. The Telegestore project is widely regarded as the first commercial-scale use of smart grid technology to the home, and delivers annual savings of 500 million euro at a project cost of 2.1 billion euro. US Dept. of Energy - ARRA Smart Grid Project One of the largest deployment programs in the world to date is the U.S. Dept. of Energy's Smart Grid Program funded by the American Recovery and Reinvestment Act of 2009. This program required matching funding from individual utilities. A total of over $9 billion in public/private funds was invested as part of this program. Technologies included Advanced Metering Infrastructure, including over 65 million Advanced "Smart" Meters, Customer Interface Systems, Distribution & Substation Automation, Volt/VAR Optimization Systems, over 1,000 Synchrophasors, Dynamic Line Rating, Cyber Security Projects, Advanced Distribution Management Systems, Energy Storage Systems, and Renewable Energy Integration Projects. This program consisted of Investment Grants (matching), Demonstration Projects, Consumer Acceptance Studies, and Workforce Education Programs. Reports from all individual utility programs, as well as overall impact reports, were scheduled to be completed by the second quarter of 2015. Austin, Texas In the US, the city of Austin, Texas has been working on building its smart grid since 2003, when its utility first replaced 1/3 of its manual meters with smart meters that communicate via a wireless mesh network. It currently manages 200,000 devices in real time (smart meters, smart thermostats, and sensors across its service area), and expects to be supporting 500,000 devices in real time in 2009, servicing 1 million consumers and 43,000 businesses. Boulder, Colorado Boulder, Colorado completed the first phase of its smart grid project in August 2008. Both the Austin and Boulder systems use the smart meter as a gateway to the home automation network (HAN) that controls smart sockets and devices. Some HAN designers favor decoupling control functions from the meter, out of concern about future mismatches with new standards and technologies available from the fast-moving business segment of home electronic devices. Hydro One Hydro One, in Ontario, Canada, is in the midst of a large-scale Smart Grid initiative, deploying a standards-compliant communications infrastructure from Trilliant. By the end of 2010, the system will serve 1.3 million customers in the province of Ontario. The initiative won the "Best AMR Initiative in North America" award from the Utility Planning Network. Île d'Yeu Île d'Yeu began a 2-year pilot program in the spring of 2020.
Twenty-three houses in the Ker Pissot neighborhood and surrounding areas were interconnected with a microgrid that was automated as a smart grid with software from Engie. Sixty-four solar panels with a peak capacity of 23.7 kW were installed on five houses and a battery with a storage capacity of 15 kWh was installed on one house. Six houses store excess solar energy in their hot water heaters. A dynamic system apportions the energy provided by the solar panels and stored in the battery and hot water heaters to the system of 23 houses. The smart grid software dynamically updates energy supply and demand in 5-minute intervals, deciding whether to pull energy from the battery or from the panels and when to store it in the hot water heaters. This pilot program was the first such project in France. Mannheim The City of Mannheim in Germany is using real-time Broadband Powerline (BPL) communications in its Model City Mannheim "MoMa" project. Adelaide Adelaide in Australia also plans to implement a localised green Smart Grid electricity network in the Tonsley Park redevelopment. Sydney Sydney, also in Australia, in partnership with the Australian Government, implemented the Smart Grid, Smart City program. Évora InovGrid is an innovative project in Évora, Portugal, that aims to equip the electricity grid with information and devices to automate grid management, improve service quality, reduce operating costs, promote energy efficiency and environmental sustainability, and increase the penetration of renewable energies and electric vehicles. It will be possible to control and manage the state of the entire electricity distribution grid at any given instant, allowing suppliers and energy services companies to use this technological platform to offer consumers information and added-value energy products and services. This project to install an intelligent energy grid places Portugal and EDP at the cutting edge of technological innovation and service provision in Europe. E-Energy In the so-called E-Energy projects, several German utilities are creating the first nuclei in six independent model regions. A technology competition identified these model regions to carry out research and development activities with the main objective of creating an "Internet of Energy." Massachusetts One of the first attempted deployments of "smart grid" technologies in the United States was rejected in 2009 by electricity regulators in the Commonwealth of Massachusetts, a US state. According to an article in the Boston Globe, Northeast Utilities' Western Massachusetts Electric Co. subsidiary actually attempted to create a "smart grid" program using public subsidies that would switch low-income customers from post-pay to pre-pay billing (using "smart cards") in addition to special hiked "premium" rates for electricity used above a predetermined amount. This plan was rejected by regulators as it "eroded important protections for low-income customers against shutoffs". According to the Boston Globe, the plan "unfairly targeted low-income customers and circumvented Massachusetts laws meant to help struggling consumers keep the lights on". A spokesman for an environmental group supportive of smart grid plans, and of Western Massachusetts Electric's aforementioned "smart grid" plan in particular, stated "If used properly, smart grid technology has a lot of potential for reducing peak demand, which would allow us to shut down some of the oldest, dirtiest power plants... It’s a tool."
eEnergy Vermont consortium The eEnergy Vermont consortium is a US statewide initiative in Vermont, funded in part through the American Recovery and Reinvestment Act of 2009, in which all of the electric utilities in the state have rapidly adopted a variety of Smart Grid technologies, including about 90% Advanced Metering Infrastructure deployment, and are presently evaluating a variety of dynamic rate structures. Netherlands In the Netherlands a large-scale project (>5000 connections, >20 partners) was initiated to demonstrate integrated smart grid technologies, services and business cases. LIFE Factory Microgrid LIFE Factory Microgrid (LIFE13 ENV / ES / 000700) is a demonstration project that is part of the LIFE+ 2013 program (European Commission), whose main objective is to demonstrate, through the implementation of a full-scale industrial smart grid, that microgrids can become one of the most suitable solutions for energy generation and management in factories that want to minimize their environmental impact. Chattanooga EPB in Chattanooga, TN, is a municipally owned electric utility that started construction of a smart grid in 2008, receiving a $111,567,606 grant from the US DOE in 2009 to expedite construction and implementation (for a total budget of $232,219,350). Deployment of power-line interrupters (1170 units) was completed in April 2012, and deployment of smart meters (172,079 units) was completed in 2013. The smart grid's backbone fiber-optic system was also used to provide the first gigabit-speed internet connection to residential customers in the US through the Fiber to the Home initiative, and now speeds of up to 10 gigabits per second are available to residents. The smart grid is estimated to have reduced power outages by an average of 60%, saving the city about 60 million dollars annually. It has also reduced the need for "truck rolls" to scout and troubleshoot faults, resulting in an estimated reduction of 630,000 truck driving miles and 4.7 million pounds of carbon emissions. In January 2016, EPB became the first major power distribution system to earn Performance Excellence in Electricity Renewal (PEER) certification. OpenADR Implementations Certain deployments utilize the OpenADR standard for load shedding and demand reduction during higher demand periods. China The smart grid market in China is estimated to be $22.3 billion with a projected growth to $61.4 billion by 2015. Honeywell is developing a demand response pilot and feasibility study for China with the State Grid Corp. of China using the OpenADR demand response standard. The State Grid Corp., the Chinese Academy of Sciences, and General Electric intend to work together to develop standards for China's smart grid rollout. United Kingdom The OpenADR standard was demonstrated in Bracknell, England, where peak use in commercial buildings was reduced by 45 percent. As a result of the pilot, Scottish and Southern Energy (SSE) said it would connect up to 30 commercial and industrial buildings in Thames Valley, west of London, to a demand response program. United States In 2009, the US Department of Energy awarded an $11 million grant to Southern California Edison and Honeywell for a demand response program that automatically turns down energy use during peak hours for participating industrial customers. The Department of Energy awarded an $11.4 million grant to Honeywell to implement the program using the OpenADR standard. Hawaiian Electric Co.
(HECO) is implementing a two-year pilot project to test the ability of an ADR program to respond to the intermittency of wind power. Hawaii has a goal to obtain 70 percent of its power from renewable sources by 2030. HECO will give customers incentives for reducing power consumption within 10 minutes of a notice. Guidelines, standards and user groups Part of the IEEE Smart Grid Initiative, IEEE 2030.2 represents an extension of the work aimed at utility storage systems for transmission and distribution networks. The IEEE P2030 group expects to deliver in early 2011 an overarching set of guidelines on smart grid interfaces. The new guidelines will cover areas including batteries and supercapacitors as well as flywheels. The group has also spun out a 2030.1 effort drafting guidelines for integrating electric vehicles into the smart grid. IEC TC 57 has created a family of international standards that can be used as part of the smart grid. These standards include IEC 61850, which is an architecture for substation automation, and IEC 61970/61968 – the Common Information Model (CIM). The CIM provides for common semantics to be used for turning data into information. OpenADR is an open-source smart grid communications standard used for demand response applications. It is typically used to send information and signals to cause electrical power-using devices to be turned off during periods of higher demand. MultiSpeak has created a specification that supports distribution functionality of the smart grid. MultiSpeak has a robust set of integration definitions that supports nearly all of the software interfaces necessary for a distribution utility or for the distribution portion of a vertically integrated utility. MultiSpeak integration is defined using extensible markup language (XML) and web services. The IEEE has created a standard to support synchrophasors – C37.118. The UCA International User Group discusses and supports real-world experience of the standards used in smart grids. A utility task group within LonMark International deals with smart grid-related issues. There is a growing trend towards the use of TCP/IP technology as a common communication platform for smart meter applications, so that utilities can deploy multiple communication systems, while using IP technology as a common management platform. IEEE P2030 is an IEEE project developing a "Draft Guide for Smart Grid Interoperability of Energy Technology and Information Technology Operation with the Electric Power System (EPS), and End-Use Applications and Loads". NIST has included ITU-T G.hn as one of the "Standards Identified for Implementation" for the Smart Grid "for which it believed there was strong stakeholder consensus". G.hn is a standard for high-speed communications over power lines, phone lines and coaxial cables. OASIS EnergyInterop – an OASIS technical committee developing XML standards for energy interoperation. Its starting point is the California OpenADR standard. Under the Energy Independence and Security Act of 2007 (EISA), NIST is charged with overseeing the identification and selection of hundreds of standards that will be required to implement the Smart Grid in the U.S. These standards will be referred by NIST to the Federal Energy Regulatory Commission (FERC). This work has begun, and the first standards have already been selected for inclusion in NIST's Smart Grid catalog.
However, some commentators have suggested that the benefits that could be realized from Smart Grid standardization could be threatened by a growing number of patents that cover Smart Grid architecture and technologies. If patents that cover standardized Smart Grid elements are not revealed until technology is broadly distributed throughout the network ("locked-in"), significant disruption could occur when patent holders seek to collect unanticipated rents from large segments of the market. GridWise Alliance rankings In November 2017 the non-profit GridWise Alliance, along with Clean Edge Inc., a clean energy group, released rankings of all 50 states on their efforts to modernize the electric grid. California was ranked number one. The other leaders were Illinois, Texas, Maryland, Oregon, Arizona, the District of Columbia, New York, Nevada and Delaware. "The 30-plus page report from the GridWise Alliance, which represents stakeholders that design, build and operate the electric grid, takes a deep dive into grid modernization efforts across the country and ranks them by state." See also Charge control Electranet Grid friendly Grid energy storage Home energy storage Large-scale energy storage List of energy storage projects Microgrid Net metering Open smart grid protocol Smart grids by country Smart villages in Asia Super grid Vehicle-to-grid (V2G) Virtual power plant Wide area synchronous grid Smart city References Bibliography Christian Neureiter, A Domain-Specific, Model Driven Engineering Approach For Systems Engineering In The Smart Grid, MBSE4U, 2017. External links Smart Grids (European Commission) Smart Microgrids by Project Regeneration The NIST Smart Grid Collaboration Site NIST's public wiki for Smart Grid Emerging Smart Multi-Use Grids Multiple use scalable wireless network of networks Video Lecture: Computer System Security: Technical and Social Challenges in Creating a Trustworthy Power Grid, University of Illinois at Urbana-Champaign Wiley: Smart Grid Applications, Communications, and Security Video Lecture: Smart Grid: Key to a Sustainable Energy Infrastructure, University of Illinois at Urbana-Champaign Smart High Voltage Substation Based on IEC 61850 Process Bus and IEEE 1588 Time Synchronization Energy To Smart Grid (E2SG), one of the major European Smart Grid research projects Smart Grid: Communication-Enabled Intelligence for the Electric Power Grid LIFE Factory Microgrid: Smart Grid project funded by the European Commission Smart Hubs SLES: Smart Grid project part-funded by UK Research and Innovation Emerging technologies
40683387
https://en.wikipedia.org/wiki/MemoQ
MemoQ
memoQ is a proprietary computer-assisted translation software suite which runs on Microsoft Windows operating systems. It is developed by the Hungarian software company memoQ Fordítástechnológiai Zrt. (memoQ Translation Technologies), formerly Kilgray, a provider of translation management software established in 2004 and cited as one of the fastest-growing companies in the translation technology sector in 2012 and 2013. memoQ provides translation memory, terminology, machine translation integration and reference information management in desktop, client/server and web application environments. History memoQ, a translation environment tool first released in 2006, was the first product created by memoQ Translation Technologies, a company founded in Hungary by the three language technologists Balázs Kis, István Lengyel and Gábor Ugray. In the years since the software was first presented, it has grown in popularity and is now among the most frequently used translation environment tool (TEnT) applications (it was rated as the third most used CAT tool in a Proz.com study in 2013 and as the second most widely used tool in a June 2010 survey of 458 working translators), after SDL Trados, Wordfast, Déjà Vu, OmegaT and others. Today it is available in desktop versions for translators (Translator Pro edition) and project managers (Project Manager edition) as well as site-installed and hosted server applications offering integration with the desktop versions and a web browser interface. There are currently several active online forums in which users provide each other with independent advice and support on the software's functions as well as many online tutorials created by professional trainers and active users. Before its commercial debut, a version of memoQ (2.0) was distributed as postcardware. Configuration As of 2018, all supported memoQ editions contained these principal modules: File statistics Word counts and comparisons with translation memory databases, internal content similarities and format tag frequency. memoQ was the first translation environment tool to enable the weighting of format tags in its count statistics, so that the effort involved with their correct placement in translated documents can be considered in planning. Another innovation introduced for file statistics was the analysis of file homogeneity for identifying internal similarities in a file or a group of files which might affect work efforts. Previously such similarities had only been identified in the form of exact text segment repetitions or in comparisons with translation unit databases (translation memories) from previous work. File translation and editing grid A columnar grid arrangement of the source and target languages for translating text, supported by other information panes such as a preview, difference highlighting with similar information in reference sources and matches with various information sources such as translation memories, stored reference files, terminology databases, machine translation suggestions and external sources. Translation memory management Creation and basic management of databases for multilingual (in the case of memoQ, bilingual) translation information in units known as "segments". This information is often exchanged between translation management and assistance systems using the file format TMX. memoQ is also able to import translation memory data in delimited text format.
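To illustrate the kind of data exchanged in the TMX format mentioned above, the sketch below builds a minimal, single-unit TMX 1.4 file with Python's standard library. The segment text, tool name and language pair are invented for the example and are not taken from memoQ.

import xml.etree.ElementTree as ET

# Build a minimal TMX 1.4 document with one translation unit (English -> German).
tmx = ET.Element("tmx", version="1.4")
ET.SubElement(tmx, "header", {
    "creationtool": "example-exporter",   # invented tool name
    "creationtoolversion": "1.0",
    "segtype": "sentence",
    "o-tmf": "plain",
    "adminlang": "en",
    "srclang": "en",
    "datatype": "plaintext",
})
body = ET.SubElement(tmx, "body")

tu = ET.SubElement(body, "tu")
for lang, text in [("en", "The report is ready."), ("de", "Der Bericht ist fertig.")]:
    tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
    ET.SubElement(tuv, "seg").text = text

ET.ElementTree(tmx).write("sample.tmx", encoding="utf-8", xml_declaration=True)

A tool importing such a file reads each tu element and stores its paired tuv segments as one translation unit in the translation memory.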
Terminology management Storage and management of terminology and meta information about the terminology to assist in translation or quality assurance. memoQ is able to import terminology data in TMX and delimited text formats and export it in delimited text and an XML format. memoQ also includes an integrated facility for statistical terminology extraction from a chosen combination of documents to translate, translation memory databases and reference corpora. The stopword implementation in the terminology extraction module includes special position indicators to enable blocked terms to be included at the beginning, in the body or at the end of multi-word phrases, which distinguishes this terminology extraction approach from most others available in this type of application. Reference corpus management Also known by the trademarked name "LiveDocs", this is a diverse collection of information types, including aligned translations, bitext files from various sources, monolingual reference information in many formats and various types of media files as well as any other file types users choose to save for reference purposes. File types not known by the memoQ application are opened using external applications intended to use them. A distinguishing characteristic of bilingual text alignments in memoQ is automated alignment, which need not be finalized and transferred to translation memory databases before it can be used as a basis for comparison with new texts to translate, and alignments can be improved as needed in the course of translation work. In practice this often results in much less effort to maintain legacy reference materials. Quality assurance This is for verifying the adherence to quality criteria specified by the user. Profiles can be created to focus on specific workflow tasks, such as format tag verification or adherence to specified terminology. There are also other supporting features integrated in the environment such as spelling dictionaries, lists of nontranslatable terms, autocorrection rules and "auto-translation" rules which enable matching and insertion of expressions based on regular expressions. Supported source document formats memoQ 2015 supports dozens of different file types, including: various markup and tagged formats such as XML, HTML, XLIFF, SDLXLIFF (SDL Trados Studio's native format for translation), OpenDocument files; plain text files; Microsoft Word, Excel, and PowerPoint; and some Adobe file formats, such as PSD, PDF and InDesign. A full list of supported formats and languages is available in memoQ's documentation. Handling of translation memories and glossaries The translation memory (TM) format of memoQ is proprietary and stored as a group of files in a folder bearing the name of the translation memory. External data can be imported in delimited text formats and Translation Memory eXchange format (TMX), and translation memory data can be exported as TMX. memoQ can also work with server-based translation memories on the memoQ Server or, using a plug-in, other external translation memory sources. memoQ translation memories are bilingual. In translation work, translation segments are compared with translation units stored in the translation memory, and exact or fuzzy matches can be shown and inserted in the translated text. Glossaries are handled by the integrated terminology module. Glossaries can be imported in TMX or delimited text formats and exported as delimited text or MultiTerm XML.
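Fuzzy matching of the kind described above is generally based on a similarity score between a new source segment and the segments stored in the translation memory. The sketch below is a simple illustration of that idea using Python's standard difflib module; it is not memoQ's actual matching algorithm, and the example segments and threshold are invented.

from difflib import SequenceMatcher

# A toy translation memory: stored source segment -> stored translation.
translation_memory = {
    "The report is ready.": "Der Bericht ist fertig.",
    "Please restart the application.": "Bitte starten Sie die Anwendung neu.",
}

def best_fuzzy_match(segment: str, threshold: float = 0.75):
    """Return (stored_source, translation, score) for the closest TM entry above the threshold."""
    best = None
    for source, target in translation_memory.items():
        score = SequenceMatcher(None, segment.lower(), source.lower()).ratio()
        if score >= threshold and (best is None or score > best[2]):
            best = (source, target, score)
    return best

match = best_fuzzy_match("The reports are ready.")
if match:
    source, target, score = match
    print(f"{score:.0%} match: '{source}' -> '{target}'")

A real tool would combine such a score with tag handling, capitalization rules and other parameters before offering the match to the translator.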
Glossaries can include two or more languages or language variants. Term matching with glossary entries can be based on many different parameters, taking into consideration capitalization, partial or fuzzy matches and other factors. Terms to be avoided can be marked as "forbidden" in the properties of a particular glossary entry. Integration of machine translation and postediting memoQ has integrated machine translation and postediting into its translation workflow. With the selection of appropriate conditions and a plug-in for machine translation, machine-generated translation units (TUs) will be inserted if no match is found in an active translation memory. The translator can then post-edit the machine translation output. memoQ includes plug-ins which support several MT systems. Other MT systems can be integrated via the application programming interface (API). Interoperability with other CAT tools The designers of memoQ have followed a fairly consistent policy of interoperability or functional compatibility with similar software tools or processes involving translation by other means through both the implementation of standards such as XLIFF and TMX, handling proprietary formats of other translation-support tools and providing exchange formats easily handled in other environments. Implementation of standards Like many other translation environment tools, memoQ implements some standards, both official and de facto, for sharing translation files and reference information. These include: XLIFF, XLIFF:doc and TMX for translation files; TMX and delimited text (not a standard, but a common format) for translation memory data import, TMX for export; and TBX, TMX, XML and delimited text for terminology import, XML and delimited text for export. Proprietary format support Proprietary formats for other environments which are supported to various extents include Star Transit project packages (PXF, PPF), SDL Trados Studio (SDLPPX, SDLXLIFF), older Trados formats (TTX, bilingual DOC/RTF) and Wordfast Pro (TXML). In the case of project package formats, translation file and translation memory exchange generally work well, but other package information such as terminology or settings data may not be transferable. With translation file formats there are also some limitations associated with particular elements, such as footnote structures in bilingual DOC/RTF files from Wordfast or Trados Workbench. Terminology export also supports a configuration of the proprietary XML definition used by SDL MultiTerm. Exchange formats memoQ supports a number of bilingual exchange formats for review and translation: XLIFF for work in other environments, with proprietary (optional) extensions to provide additional information to users of the same software; a simplified "bilingual DOC" format substantially compatible with the old Trados Workbench and Wordfast Classic formatting (however, these files are sensitive to corruption if strict limits on using them are not observed); and a robust bilingual RTF table format, which is used in many ways for review, translation or providing feedback by filtering comments made by translators or reviewers. This last format simplifies the involvement of those who do not work with computer-assisted translation tools, as translation or review of a text or comments can be performed in any word processor capable of reading an RTF file.
This approach was introduced originally by Atril's Déjà Vu and has been adopted in various ways by many other environments over the years. References External links Web site of memoQ software Translation software Machine translation Natural language processing software
47455744
https://en.wikipedia.org/wiki/Cozy%20Bear
Cozy Bear
Cozy Bear, classified by the United States federal government as advanced persistent threat APT29, is a Russian hacker group believed to be associated with one or more intelligence agencies of Russia. The Dutch General Intelligence and Security Service (AIVD) deduced from security camera footage that it is led by the Russian Foreign Intelligence Service (SVR); this view is shared by the United States. Cybersecurity firm CrowdStrike also previously suggested that it may be associated with either the Russian Federal Security Service (FSB) or SVR. The group has been given various nicknames by other cybersecurity firms, including CozyCar, CozyDuke (by F-Secure), Dark Halo, The Dukes (by Volexity), NOBELIUM, Office Monkeys, StellarParticle, UNC2452, and YTTRIUM. On 20 December 2020, it was reported that Cozy Bear was responsible for a cyber attack on US sovereign national data, believed to be at the direction of the Russian government. Methods and technical capability Kaspersky Lab determined that the earliest samples of the MiniDuke malware attributed to the group date from 2008. The original code was written in assembly language. Symantec believes that Cozy Bear had been compromising diplomatic organizations and governments since at least 2010. The CozyDuke malware utilises a backdoor and a dropper. The malware exfiltrates data to a command and control server. Attackers may tailor the malware to the environment. The backdoor components of Cozy Bear's malware are updated over time with modifications to cryptography, trojan functionality, and anti-detection. The speed at which Cozy Bear develops and deploys its components is reminiscent of the toolset of Fancy Bear, which also uses the tools CHOPSTICK and CORESHELL. Cozy Bear's CozyDuke malware toolset is structurally and functionally similar to second stage components used in early Miniduke, Cosmicduke, and OnionDuke operations. A second stage module of the CozyDuke malware, Show.dll, appears to have been built onto the same platform as OnionDuke, suggesting that the authors are working together or are the same people. The campaigns and the malware toolsets they use are referred to as the Dukes, including Cosmicduke, Cozyduke, and Miniduke. CozyDuke is connected to the MiniDuke and CosmicDuke campaigns, as well as to the OnionDuke cyberespionage campaign. Each threat group tracks its targets and uses toolsets that were likely created and updated by Russian speakers. Following the exposure of MiniDuke in 2013, updates to the malware were written in C/C++ and it was packed with a new obfuscator. Cozy Bear is suspected of being behind the 'HAMMERTOSS' remote access tool which uses commonly visited websites like Twitter and GitHub to relay command data. Seaduke is a highly configurable, low-profile Trojan only used for a small set of high-value targets. Typically, Seaduke is installed on systems already infected with the much more widely distributed CozyDuke. Attacks Cozy Bear appears to have different projects, with different user groups. The focus of its project "Nemesis Gemina" is military, government, energy, diplomatic and telecom sectors. Evidence suggests that Cozy Bear's targets have included commercial entities and government organizations in Germany, Uzbekistan, South Korea and the US, including the US State Department and the White House in 2014. Office Monkeys (2014) In March 2014, a Washington, D.C.-based private research institute was found to have CozyDuke (Trojan.Cozer) on its network.
Cozy Bear then started an email campaign attempting to lure victims into clicking on a flash video of office monkeys that would also include malicious executables. By July the group had compromised government networks and directed CozyDuke-infected systems to install Miniduke onto a compromised network. In the summer of 2014, digital agents of the Dutch General Intelligence and Security Service infiltrated Cozy Bear. They found that these Russian hackers were targeting the US Democratic Party, State Department and White House. Their evidence influenced the FBI's decision to open an investigation. Pentagon (August 2015) In August 2015, Cozy Bear was linked to a spear-phishing cyber-attack against the Pentagon email system, causing the shutdown of the entire Joint Staff unclassified email system and Internet access during the investigation. Democratic National Committee (2016) In June 2016, Cozy Bear was implicated alongside the hacker group Fancy Bear in the Democratic National Committee cyber attacks. While the two groups were both present in the Democratic National Committee's servers at the same time, they appeared to be unaware of each other, each independently stealing the same passwords and otherwise duplicating their efforts. A CrowdStrike forensic team determined that while Cozy Bear had been on the DNC's network for over a year, Fancy Bear had only been there a few weeks. Cozy Bear's more sophisticated tradecraft and interest in traditional long-term espionage suggest that the group originates from a separate Russian intelligence agency. US think tanks and NGOs (2016) After the 2016 United States presidential election, Cozy Bear was linked to a series of coordinated and well-planned spear phishing campaigns against U.S.-based think tanks and non-governmental organizations (NGOs). Norwegian government (2017) On February 3, 2017, the Norwegian Police Security Service (PST) reported that attempts had been made to spear-phish the email accounts of nine individuals in the Ministry of Defence, Ministry of Foreign Affairs, and the Labour Party. The acts were attributed to Cozy Bear, whose targets included the Norwegian Radiation Protection Authority, PST section chief Arne Christian Haugstøyl, and an unnamed colleague. Prime Minister Erna Solberg called the acts "a serious attack on our democratic institutions." The attacks were reportedly conducted in January 2017. Dutch ministries (2017) In February 2017, it was revealed that Cozy Bear and Fancy Bear had made several attempts to hack into Dutch ministries, including the Ministry of General Affairs, over the previous six months. Rob Bertholee, head of the AIVD, said on EenVandaag that the hackers were Russian and had tried to gain access to secret government documents. In a briefing to parliament, Dutch Minister of the Interior and Kingdom Relations Ronald Plasterk announced that votes for the Dutch general election in March 2017 would be counted by hand. Operation Ghost Suspicions that Cozy Bear had ceased operations were dispelled in 2019 by the discovery of three new malware families attributed to Cozy Bear: PolyglotDuke, RegDuke and FatDuke. This shows that Cozy Bear did not cease operations, but rather had developed new tools that were harder to detect. Target compromises using these newly uncovered packages are collectively referred to as Operation Ghost.
COVID-19 vaccine data (2020) In July 2020 Cozy Bear was accused by the NSA, NCSC and the CSE of trying to steal data on vaccines and treatments for COVID-19 being developed in the UK, US, and Canada. SUNBURST malware supply chain attack (2020) On 8 December 2020, U.S. cybersecurity firm FireEye disclosed that a collection of their proprietary cybersecurity research tools had been stolen, possibly by "a nation with top-tier offensive capabilities." On 13 December 2020, FireEye announced that investigations into the circumstances of that intellectual property theft revealed "a global intrusion campaign ... [utilizing a] supply chain attack trojanizing SolarWinds Orion business software updates in order to distribute malware we call SUNBURST.... This campaign may have begun as early as Spring 2020 and... is the work of a highly skilled actor [utilizing] significant operational security." Shortly thereafter, SolarWinds confirmed that multiple versions of their Orion platform products had been compromised, probably by a foreign nation state. The impact of the attack prompted the U.S. Cybersecurity and Infrastructure Security Agency (CISA) to issue a rare emergency directive. Approximately 18,000 SolarWinds clients were exposed to SUNBURST, including several U.S. federal agencies. Washington Post sources identified Cozy Bear as the group responsible for the attack. According to Microsoft, the hackers then stole signing certificates that allowed them to impersonate any of a target’s existing users and accounts through the Security Assertion Markup Language. Typically abbreviated as SAML, the XML-based language provides a way for identity providers to exchange authentication and authorization data with service providers. Republican National Committee (2021) In July 2021, Cozy Bear breached systems of the Republican National Committee. Officials said they believed the attack to have been conducted through Synnex. The cyberattack came amid larger fallout over the ransomware attack spread through compromised Kaseya VSA software. See also 2016 United States election interference by Russia The Plot to Hack America References Russian advanced persistent threat groups Cybercrime Cyberwarfare Hacker groups Hacking in the 2000s Hacking in the 2010s Information technology in Russia Military units and formations established in the 2000s Organizations associated with Russian interference in the 2016 United States elections
30655849
https://en.wikipedia.org/wiki/Terminal%20capabilities
Terminal capabilities
In computing and telecommunications, the capabilities of a terminal are various terminal features, above and beyond what is available from a pure teletypewriter, that host systems (and the programs that run on them) can make use of. They consist (mainly) of control codes and escape codes that can be sent to or received from the terminal. The escape codes sent to the terminal perform various functions that a CRT terminal (and software terminal emulators) is capable of, but that a teletypewriter is not, such as moving the terminal's cursor to positions on the screen, clearing and scrolling all or parts of the screen, turning on and off attached printer devices, programming programmable function keys, changing display colours and attributes (such as reverse video), and setting display title strings. The escape codes received from the terminal signify things such as function key, arrow key, and other special key (home key, end key, help key, PgUp key, PgDn key, insert key, delete key, and so forth) keystrokes. Unix and POSIX: termcap, terminfo, et al. In Unix and other POSIX-compliant systems that support the POSIX terminal interface, these capabilities are encoded in databases that are configured by a system administrator and accessed from programs via the terminfo library (which supersedes the older termcap library), upon which in turn are built libraries such as the curses and ncurses libraries, by which applications programs use the terminal capabilities to provide textual user interfaces with windows, dialogue boxes, buttons, labels, input fields, menus, and so forth. The intention is that this allows applications programs to be independent of actual terminal characteristics. They don't need to hardwire any control codes or escape sequences into their code, and so don't have problems being used on a range of terminals with a range of capabilities. termcap The termcap (for "terminal capabilities") library was developed for BSD systems. It uses a database stored in the file /etc/termcap. This database consists of a series of records (each of which consists of one or more lines in the file, joined by backslash characters at the ends of each line that continues onto a following one) each of which represents the capabilities of a particular terminal. The fields of the record comprise the terminal type name, or names, followed by a sequence of capabilities, separated by colons. The capability fields themselves fall into three groups: characteristics of the terminal These comprise such things as the (nominal) number of rows and columns the terminal's display has, whether output automatically wraps onto the next line when it reaches the end of a line, and so forth. control sequences sent as output to the terminal These comprise the control codes and escape sequences sent to the terminal in order for it to perform some action (not necessarily a display action). An example of one of the simplest is the output sequence to clear the screen, which may be the form feed (ASCII 12) character on some types of terminal but may, say, be the escape sequence ESC [ H ESC [ 2 J on a terminal that requires ANSI escape sequences. control sequences sent as input by the terminal These comprise the control codes and escape sequences that the terminal sends to the host to represent various actions and events, such as function keys and arrow keys being pressed.
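Capabilities such as these can also be queried programmatically. The sketch below uses Python's standard curses binding, which reads the terminfo database described in the next section, to fetch two common capabilities for the terminal type named in the TERM environment variable; the particular capabilities shown ("clear" and "cup") are just examples.

import curses

# Load the terminfo entry for the terminal type given by $TERM.
curses.setupterm()

# Fetch the raw byte sequences for two common capabilities.
clear_screen = curses.tigetstr("clear")   # clear the screen and home the cursor
cursor_move = curses.tigetstr("cup")      # parameterized cursor addressing

# Capabilities are byte strings; parameterized ones are filled in with tparm().
print("clear:", clear_screen)
if cursor_move is not None:               # tigetstr() returns None if the capability is absent
    print("move to row 5, column 10:", curses.tparm(cursor_move, 5, 10))

Writing the returned byte sequences to the terminal performs the corresponding action, which is exactly what higher-level libraries such as curses and ncurses do on an application's behalf.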
terminfo The terminfo ("terminal information") library was developed for System V systems. It uses a database stored in multiple files within a directory, whose location varies among different Unices and POSIX-compatible systems (common locations include /usr/share/terminfo and /usr/lib/terminfo) and isn't even uniform across different distributions of Linux. Unlike the termcap database, the terminfo database is compiled: a machine-readable database that is constructed from a human-readable source file format by a utility program, tic. Entries can be decompiled from machine-readable form back to human-readable form by another utility program, infocmp. The command to output the human-readable form of the "vt100" terminal definition, for example, is: infocmp vt100 The use of a machine-readable format was to avoid the unnecessary overhead, in applications programs using systems such as the termcap library, of repeatedly parsing the database content to read the fields of a record. The use of multiple files was to avoid the similar overhead of parsing the database content to find the database record for the target terminal type. The terminal type name index is, effectively, the Unix/POSIX filesystem's ordinary directory structure. Originally, Unix had severe performance problems with large directories containing many files, and thus terminfo uses a two-level structure, dividing up the directory entries by first letter into a series of subdirectories. More recent filesystem formats used on Unix systems don't suffer as much from such problems (because their on-disc directory structures are no longer simple arrays of entries, but are organized into trees or hash tables) and so the necessity for this design element, which still exists in modern terminfo implementations, has since disappeared. Utility programs to exercise terminal capabilities On Unix systems, the tput command is used to look up a specific capability in the system's database, and output it to the command's standard output (which is, presumably, the terminal by which the function denoted by the capability is to be performed). One of the simplest operations is clearing the screen. The name of the database field that stores the output sequence for this is clear, so the command arguments to the tput program to clear the screen are tput clear Another operation is initializing or resetting the terminal to a known default state (of character attributes, fonts, colours, and so forth). The commands for this are: tput init and tput reset Normally the tput command uses the terminal type specified by the TERM environment variable, one of the controlling environment variables of the POSIX terminal interface. This can be overridden, however, to force tput to look up a different terminal type in the database, with a command-line option to the command. So, for example, to issue the reset sequence appropriate for the type of terminal named "vt100" in the database (usually a DEC VT100 terminal), irrespective of terminal type specified in environment variables, the command is: tput -T vt100 reset References What supports what Sources used Further reading Capabilities Telecommunications equipment
14064590
https://en.wikipedia.org/wiki/Defence%20Scientific%20Information%20and%20Documentation%20Centre
Defence Scientific Information and Documentation Centre
The Defence Scientific Information & Documentation Centre (DESIDOC) is a division of the Defence Research and Development Organisation (DRDO). Located in Delhi, its main function is the collection, processing and dissemination of relevant technical information for DRDO scientists. The present director of DESIDOC is Dr K Nageswara Rao. History DESIDOC started functioning in 1958 as the Scientific Information Bureau (SIB). It was a division of the Defence Science Laboratory (DSL), which is now called the Laser Science & Technology Centre. The DRDO library, which had its beginning in 1948, became a division of SIB in 1959. In 1967 SIB was reorganised with augmented activities and named the Defence Scientific Information and Documentation Centre (DESIDOC). It still continued to function under the administrative control of DSL. DESIDOC became a self-accounting unit and one of the laboratories of DRDO on 29 July 1970. The Centre initially functioned in the main building of Metcalfe House, a landmark in Delhi and a national monument. In August 1988 it moved to its newly built five-storeyed building in the same Metcalfe House complex. Since it became a self-accounting unit, DESIDOC has been functioning as a central information resource for DRDO. It provides S&T information, based on its library and other information resources, to the DRDO headquarters and its various laboratories at various places in India. Functions Library DESIDOC maintains the Defence Science Library (DSL), a well-equipped library housing 262,000 documents, which is headed by Sh. Tapesh Sinha, Scientist E. It also provides access to various databases and other reference material. Additionally, DESIDOC has taken up the initiative of digitizing the complete research papers of DRDO scientists, as well as preparing presentation material and promotional material for DRDO scientists. Publications DESIDOC functions as the publication wing of DRDO, providing scientific and technical information via specialised publications, monographs, technical bulletins, online journals and popular science publications. These cover current developments in Indian defence R&D. The publications are unclassified and available free of charge online. Monographs and other publications are available on payment. The periodicals published are: Defence Science Journal - bi-monthly research periodical. Technology Focus - bi-monthly periodical focusing on the technologies, products, processes, and systems developed by DRDO. DRDO Newsletter - monthly newsletter with house bulletins of DRDO activities. DESIDOC Journal of Library & Information Technology (earlier DESIDOC Bulletin of Information Technology (DBIT)) - bi-monthly publication bringing out the current developments in library and information technology. Training programs Short-term training programmes and workshops are conducted every year for DRDO personnel, mainly in the areas of library automation, Internet use, DTP, multimedia development, communication skills, stress management, etc. References External links DESIDOC Home Page Defence Science Library Home Page "DESIDOC Keeps DRDO informed", Frontier India, 5 May 2007. Index page for DRDO Publications, which are maintained and published by DESIDOC Defence Research and Development Organisation laboratories Materials science institutes Research institutes in Delhi Organisations based in Delhi Ministry of Defence (India) 1970 establishments in Delhi Research institutes established in 1970
38893
https://en.wikipedia.org/wiki/Border%20Gateway%20Protocol
Border Gateway Protocol
Border Gateway Protocol (BGP) is a standardized exterior gateway protocol designed to exchange routing and reachability information among autonomous systems (AS) on the Internet. BGP is classified as a path-vector routing protocol, and it makes routing decisions based on paths, network policies, or rule-sets configured by a network administrator. BGP used for routing within an autonomous system is called Interior Border Gateway Protocol (Internal BGP, iBGP). In contrast, the Internet application of the protocol is called Exterior Border Gateway Protocol (External BGP, eBGP). History The Border Gateway Protocol was first described in 1989 in RFC 1105, and has been in use on the Internet since 1994. IPv6 BGP was first defined in RFC 1654 in 1994, and it was improved to RFC 2283 in 1998. The current version of BGP is version 4 (BGP4), which was published as RFC 4271 in 2006. RFC 4271 corrected errors, clarified ambiguities and updated the specification with common industry practices. The major enhancement was the support for Classless Inter-Domain Routing (CIDR) and use of route aggregation to decrease the size of routing tables. The new RFC allows BGP4 to carry a wide range of IPv4 and IPv6 "address families". These are also called the Multiprotocol Extensions, or Multiprotocol BGP (MP-BGP). Operation BGP neighbors, called peers, are established by manual configuration among routers to create a TCP session on port 179. A BGP speaker sends 19-byte keep-alive messages every 30 seconds (protocol default value, tunable) to maintain the connection. Among routing protocols, BGP is unique in using TCP as its transport protocol. When BGP runs between two peers in the same autonomous system (AS), it is referred to as Internal BGP (iBGP or Interior Border Gateway Protocol). When it runs between different autonomous systems, it is called External BGP (eBGP or Exterior Border Gateway Protocol). Routers on the boundary of one AS exchanging information with another AS are called border or edge routers or simply eBGP peers and are typically connected directly, while iBGP peers can be interconnected through other intermediate routers. Other deployment topologies are also possible, such as running eBGP peering inside a VPN tunnel, allowing two remote sites to exchange routing information in a secure and isolated manner. The main difference between iBGP and eBGP peering is in the way routes that were received from one peer are typically propagated by default to other peers: New routes learned from an eBGP peer are re-advertised to all iBGP and eBGP peers. New routes learned from an iBGP peer are re-advertised to all eBGP peers only. These route-propagation rules effectively require that all iBGP peers inside an AS are interconnected in a full mesh with iBGP sessions. How routes are propagated can be controlled in detail via the route-maps mechanism. This mechanism consists of a set of rules. Each rule describes, for routes matching some given criteria, what action should be taken. The action could be to drop the route, or it could be to modify some attributes of the route before inserting it in the routing table. Extensions negotiation During the peering handshake, when OPEN messages are exchanged, BGP speakers can negotiate optional capabilities of the session, including multiprotocol extensions and various recovery modes. If the multiprotocol extensions to BGP are negotiated at the time of creation, the BGP speaker can prefix the Network Layer Reachability Information (NLRI) it advertises with an address family prefix.
These families include the IPv4 (default), IPv6, IPv4/IPv6 Virtual Private Networks and multicast BGP. Increasingly, BGP is used as a generalized signaling protocol to carry information about routes that may not be part of the global Internet, such as VPNs. In order to make decisions in its operations with peers, a BGP peer uses a simple finite state machine (FSM) that consists of six states: Idle; Connect; Active; OpenSent; OpenConfirm; and Established. For each peer-to-peer session, a BGP implementation maintains a state variable that tracks which of these six states the session is in. The BGP defines the messages that each peer should exchange in order to change the session from one state to another. The first state is the Idle state. In the Idle state, BGP initializes all resources, refuses all inbound BGP connection attempts and initiates a TCP connection to the peer. The second state is Connect. In the Connect state, the router waits for the TCP connection to complete and transitions to the OpenSent state if successful. If unsuccessful, it starts the ConnectRetry timer and transitions to the Active state upon expiration. In the Active state, the router resets the ConnectRetry timer to zero and returns to the Connect state. In the OpenSent state, the router sends an Open message and waits for one in return in order to transition to the OpenConfirm state. Keepalive messages are exchanged and, upon successful receipt, the router is placed into the Established state. In the Established state, the router can send and receive: Keepalive; Update; and Notification messages to and from its peer. Idle State: Refuse all incoming BGP connections. Start the initialization of event triggers. Initiates a TCP connection with its configured BGP peer. Listens for a TCP connection from its peer. Changes its state to Connect. If an error occurs at any state of the FSM process, the BGP session is terminated immediately and returned to the Idle state. Some of the reasons why a router does not progress from the Idle state are: TCP port 179 is not open. A random TCP port over 1023 is not open. Peer address configured incorrectly on either router. AS number configured incorrectly on either router. Connect State: Waits for successful TCP negotiation with peer. BGP does not spend much time in this state if the TCP session has been successfully established. Sends Open message to peer and changes state to OpenSent. If an error occurs, BGP moves to the Active state. Some reasons for the error are: TCP port 179 is not open. A random TCP port over 1023 is not open. Peer address configured incorrectly on either router. AS number configured incorrectly on either router. Active State: If the router was unable to establish a successful TCP session, then it ends up in the Active state. BGP FSM tries to restart another TCP session with the peer and, if successful, then it sends an Open message to the peer. If it is unsuccessful again, the FSM is reset to the Idle state. Repeated failures may result in a router cycling between the Idle and Active states. Some of the reasons for this include: TCP port 179 is not open. A random TCP port over 1023 is not open. BGP configuration error. Network congestion. Flapping network interface. OpenSent State: BGP FSM listens for an Open message from its peer. Once the message has been received, the router checks the validity of the Open message. 
If there is an error it is because one of the fields in the Open message does not match between the peers, e.g., BGP version mismatch, the peering router expects a different My AS, etc. The router then sends a Notification message to the peer indicating why the error occurred. If there is no error, a Keepalive message is sent, various timers are set and the state is changed to OpenConfirm. OpenConfirm State: The peer is listening for a Keepalive message from its peer. If a Keepalive message is received and no timer has expired before reception of the Keepalive, BGP transitions to the Established state. If a timer expires before a Keepalive message is received, or if an error condition occurs, the router transitions back to the Idle state. Established State: In this state, the peers send Update messages to exchange information about each route being advertised to the BGP peer. If there is any error in the Update message then a Notification message is sent to the peer, and BGP transitions back to the Idle state. Router connectivity and learning routes In the simplest arrangement, all routers within a single AS and participating in BGP routing must be configured in a full mesh: each router must be configured as a peer to every other router. This causes scaling problems, since the number of required connections grows quadratically with the number of routers involved. To alleviate the problem, BGP implements two options: route reflectors (RFC 4456) and BGP confederations (RFC 5065). The following discussion of basic UPDATE processing assumes a full iBGP mesh. A given BGP router may accept Network Layer Reachability Information (NLRI) UPDATEs from multiple neighbors and advertise NLRI to the same, or a different set, of neighbors. Conceptually, BGP maintains its own master routing table, called the local routing information base (Loc-RIB), separate from the main routing table of the router. For each neighbor, the BGP process maintains a conceptual adjacent routing information base, incoming (Adj-RIB-In) containing the NLRI received from the neighbor, and a conceptual outgoing information base (Adj-RIB-Out) for NLRI to be sent to the neighbor. The physical storage and structure of these conceptual tables are decided by the implementer of the BGP code. Their structure is not visible to other BGP routers, although they usually can be interrogated with management commands on the local router. It is quite common, for example, to store the two Adj-RIBs and the Loc-RIB together in the same data structure, with additional information attached to the RIB entries. The additional information tells the BGP process such things as whether individual entries belong in the Adj-RIBs for specific neighbors, whether the peer-neighbor route selection process made received policies eligible for the Loc-RIB, and whether Loc-RIB entries are eligible to be submitted to the local router's routing table management process. BGP will submit the routes that it considers best to the main routing table process. Depending on the implementation of that process, the BGP route is not necessarily selected. For example, a directly connected prefix, learned from the router's own hardware, is usually most preferred. As long as that directly connected route's interface is active, the BGP route to the destination will not be put into the routing table. Once the interface goes down, and there are no more preferred routes, the Loc-RIB route would be installed in the main routing table. 
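To make the relationship between these conceptual tables concrete, here is a minimal Python sketch of one possible in-memory arrangement. The class and field names, and the use of shortest AS_PATH as the only selection rule, are simplifying assumptions for illustration; the standard deliberately leaves the physical storage and the full decision process to the implementer.

```python
# Sketch of the conceptual tables: one Adj-RIB-In and one Adj-RIB-Out per
# neighbor, plus a single Loc-RIB holding the currently selected routes.
from dataclasses import dataclass, field

@dataclass
class Route:
    prefix: str
    next_hop: str
    as_path: tuple  # e.g. (64500, 64501)

@dataclass
class BgpSpeaker:
    adj_rib_in: dict = field(default_factory=dict)   # neighbor -> {prefix: Route}
    adj_rib_out: dict = field(default_factory=dict)  # neighbor -> {prefix: Route}
    loc_rib: dict = field(default_factory=dict)      # prefix -> selected Route

    def receive(self, neighbor, route):
        # Keep at most one route per destination per neighbor.
        self.adj_rib_in.setdefault(neighbor, {})[route.prefix] = route
        self._rerun_selection(route.prefix)

    def withdraw(self, neighbor, prefix):
        self.adj_rib_in.get(neighbor, {}).pop(prefix, None)
        self._rerun_selection(prefix)

    def _rerun_selection(self, prefix):
        candidates = [rib[prefix] for rib in self.adj_rib_in.values() if prefix in rib]
        if candidates:
            # Toy tie-breaker: shortest AS_PATH wins.
            self.loc_rib[prefix] = min(candidates, key=lambda r: len(r.as_path))
        else:
            self.loc_rib.pop(prefix, None)  # nothing left to offer the main table

speaker = BgpSpeaker()
speaker.receive("peer-a", Route("203.0.113.0/24", "192.0.2.1", (64500, 64501)))
speaker.receive("peer-b", Route("203.0.113.0/24", "192.0.2.2", (64502,)))
print(speaker.loc_rib["203.0.113.0/24"].next_hop)  # peer-b wins on shorter AS_PATH
```

In this toy model, whatever ends up in loc_rib is what would be offered to the router's main routing table process, which may still prefer routes from other sources, as described above.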
Until recently, it was a common mistake to say, "BGP carries policies." BGP actually carried the information with which rules inside BGP-speaking routers could make policy decisions. Some of the information carried that is explicitly intended to be used in policy decisions are communities and multi-exit discriminators (MED). The BGP standard specifies a number of decision factors, more than the ones that are used by any other common routing process, for selecting NLRI to go into the Loc-RIB. The first decision point for evaluating NLRI is that its next-hop attribute must be reachable (or resolvable). Another way of saying the next-hop must be reachable is that there must be an active route, already in the main routing table of the router, to the prefix in which the next-hop address is reachable. Next, for each neighbor, the BGP process applies various standard and implementation-dependent criteria to decide which routes conceptually should go into the Adj-RIB-In. The neighbor could send several possible routes to a destination, but the first level of preference is at the neighbor level. Only one route to each destination will be installed in the conceptual Adj-RIB-In. This process will also delete, from the Adj-RIB-In, any routes that are withdrawn by the neighbor. Whenever a conceptual Adj-RIB-In changes, the main BGP process decides if any of the neighbor's new routes are preferred to routes already in the Loc-RIB. If so, it replaces them. If a given route is withdrawn by a neighbor, and there is no other route to that destination, the route is removed from the Loc-RIB and no longer sent by BGP to the main routing table manager. If the router does not have a route to that destination from any non-BGP source, the withdrawn route will be removed from the main routing table. After verifying that the next hop is reachable, if the route comes from an internal (i.e. iBGP) peer, the first rule to apply, according to the standard, is to examine the LOCAL_PREFERENCE attribute. If there are several iBGP routes from the neighbor, the one with the highest LOCAL_PREFERENCE is selected unless there are several routes with the same LOCAL_PREFERENCE. In the latter case the route selection process moves to the next tiebreaker. While LOCAL_PREFERENCE is the first rule in the standard, once reachability of the NEXT_HOP is verified, Cisco and several other vendors first consider a decision factor called WEIGHT which is local to the router (i.e. not transmitted by BGP). The route with the highest WEIGHT is preferred. The LOCAL_PREFERENCE, WEIGHT, and other criteria can be manipulated by local configuration and software capabilities. Such manipulation, although commonly used, is outside the scope of the standard. For example, the COMMUNITY attribute (see below) is not directly used by the BGP selection process. The BGP neighbor process however can have a rule to set LOCAL_PREFERENCE or another factor based on a manually programmed rule to set the attribute if the COMMUNITY value matches some pattern matching criterion. If the route was learned from an external peer the per-neighbor BGP process computes a LOCAL_PREFERENCE value from local policy rules and then compares the LOCAL_PREFERENCE of all routes from the neighbor. At the per-neighbor level ignoring implementation-specific policy modifiers the order of tie-breaking rules is: Prefer the route with the shortest AS_PATH. An AS_PATH is the set of AS numbers that must be traversed to reach the advertised destination. 
AS1-AS2-AS3 is shorter than AS4-AS5-AS6-AS7. Prefer routes with the lowest value of their ORIGIN attribute. Prefer routes with the lowest MULTI_EXIT_DISC (multi-exit discriminator or MED) value. Once candidate routes are received from neighbors, the Loc-RIB software applies additional tie-breakers to routes to the same destination. If at least one route was learned from an external neighbor (i.e., the route was learned from eBGP), drop all routes learned from iBGP. Prefer the route with the lowest interior cost to the NEXT_HOP, according to the main routing table. If two neighbors advertised the same route, but one neighbor is reachable via a low-bitrate link and the other by a high-bitrate link, and the interior routing protocol calculates lowest cost based on highest bitrate, the route through the high-bitrate link would be preferred and other routes dropped. If there is more than one route still tied at this point, several BGP implementations offer a configurable option to load-share among the routes, accepting all (or all up to some number). Communities BGP communities are attribute tags that can be applied to incoming or outgoing prefixes to achieve some common goal. While it is common to say that BGP allows an administrator to set policies on how prefixes are handled by ISPs, this is generally not possible, strictly speaking. For instance, BGP natively has no mechanism to allow one AS to tell another AS to restrict advertisement of a prefix to only North American peering customers. Instead, an ISP generally publishes a list of well-known or proprietary communities with a description for each one, which essentially becomes an agreement of how prefixes are to be treated. RFC 1997 defines three well-known communities that have global significance: NO_EXPORT, NO_ADVERTISE and NO_EXPORT_SUBCONFED. RFC 7611 defines ACCEPT_OWN. Examples of common communities include local preference adjustments, geographic or peer type restrictions, denial-of-service attack identification, and AS prepending options. An ISP might state that any routes received from customers with community XXX:500 will be advertised to all peers (default) while community XXX:501 will only be advertised to North America. The customer simply adjusts their configuration to include the correct community or communities for each route, and the ISP is responsible for controlling to whom the prefix is advertised. The end user has no technical ability to enforce correct actions being taken by the ISP, though problems in this area are generally rare and accidental. It is a common tactic for end customers to use BGP communities (usually ASN:70,80,90,100) to control the local preference the ISP assigns to advertised routes instead of using MED (the effect is similar). The community attribute is transitive, but communities applied by the customer are very rarely propagated outside the next-hop AS. Not all ISPs give out their communities to the public. The BGP Extended Community Attribute was added in 2006, in order to extend the range of such attributes and to provide a community attribute structuring by means of a type field. The extended format consists of one or two octets for the type field followed by seven or six octets for the respective community attribute content. The definition of this Extended Community Attribute is documented in RFC 4360. The IANA administers the registry for BGP Extended Communities Types. The Extended Communities Attribute itself is a transitive optional BGP attribute.
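The arithmetic behind the ASN:value notation is simple enough to show in a short Python sketch. The helper names here are invented for illustration; only the three well-known values at the top are fixed by RFC 1997, while the XXX:500-style semantics above remain purely an ISP convention.

```python
# Sketch of the conventional community encoding: one 32-bit value per
# community, usually written ASN:value with the AS number in the high 16 bits.
NO_EXPORT           = 0xFFFFFF01
NO_ADVERTISE        = 0xFFFFFF02
NO_EXPORT_SUBCONFED = 0xFFFFFF03

def community(asn, value):
    """Pack an ASN:value pair (both 16-bit) into one 32-bit community."""
    assert 0 <= asn <= 0xFFFF and 0 <= value <= 0xFFFF
    return (asn << 16) | value

def as_text(c):
    return f"{c >> 16}:{c & 0xFFFF}"

print(as_text(community(64500, 501)))  # -> 64500:501
print(as_text(NO_EXPORT))              # -> 65535:65281, the reserved 0xFFFF range
```

Because both halves are limited to 16 bits, the original attribute cannot carry a 32-bit AS number, which is one motivation for the extended and large community attributes discussed next.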
A bit in the type field within the Extended Communities Attribute decides whether the encoded extended community is of a transitive or non-transitive nature. The IANA registry therefore provides different number ranges for the attribute types. Due to the extended attribute range, its usage can be manifold. RFC 4360 defines, as examples, the "Two-Octet AS Specific Extended Community", the "IPv4 Address Specific Extended Community", the "Opaque Extended Community", the "Route Target Community", and the "Route Origin Community". A number of BGP QoS drafts also use this Extended Community Attribute structure for inter-domain QoS signalling. With the introduction of 32-bit AS numbers, some issues were immediately obvious with the community attribute, which only defines a 16-bit ASN field and so cannot hold the full ASN value. Since RFC 7153, extended communities are compatible with 32-bit ASNs. RFC 8092 and RFC 8195 introduce a Large Community attribute of 12 bytes, divided into three fields of 4 bytes each (AS:function:parameter). Multi-exit discriminators MEDs, defined in the main BGP standard, were originally intended to show to another neighbor AS the advertising AS's preference as to which of several links are preferred for inbound traffic. Another application of MEDs is to advertise the value, typically based on delay, that multiple ASs with a presence at an IXP impose to send traffic to some destination. Message header format Marker: Included for compatibility, must be set to all ones. Length: Total length of the message in octets, including the header. Type: Type of BGP message. The following values are defined: Open (1) Update (2) Notification (3) KeepAlive (4) Route-Refresh (5) Internal scalability BGP is "the most scalable of all routing protocols." An autonomous system with internal BGP (iBGP) must have all of its iBGP peers connect to each other in a full mesh (where everyone speaks to everyone directly). This full-mesh configuration requires that each router maintain a session to every other router. In large networks, this number of sessions may degrade performance of routers, due to either a lack of memory or high CPU requirements. Route reflectors Route reflectors reduce the number of connections required in an AS. A single router (or two for redundancy) can be made a route reflector: other routers in the AS need only be configured as peers to them. A route reflector offers an alternative to the logical full-mesh requirement of internal border gateway protocol (iBGP). An RR acts as a focal point for iBGP sessions. The purpose of the RR is concentration. Multiple BGP routers can peer with a central point, the RR – acting as a route reflector server – rather than peer with every other router in a full mesh. All the other iBGP routers become route reflector clients. This approach, similar to OSPF's DR/BDR feature, provides large networks with added iBGP scalability. In a fully meshed iBGP network of 10 routers, 90 individual CLI statements (spread throughout all routers in the topology) are needed just to define the remote-AS of each peer: this quickly becomes a headache to manage. A RR topology could cut these 90 statements down to 18 (see the sketch below), offering a viable solution for the larger networks administered by ISPs. A route reflector is a single point of failure; therefore at least a second route reflector may be configured in order to provide redundancy.
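The session-count arithmetic behind those figures can be sketched as follows. The function names are ad hoc, and the counts refer to per-router neighbor statements rather than TCP sessions (each session appears in two configurations).

```python
# Arithmetic behind the numbers above: with a full iBGP mesh every router
# configures a neighbor statement for every other router, while with a single
# route reflector each client needs only one statement plus the mirror
# statement on the reflector itself.
def full_mesh_statements(n_routers):
    return n_routers * (n_routers - 1)

def single_rr_statements(n_routers):
    clients = n_routers - 1
    return clients * 2  # one line on each client, one per client on the RR

n = 10
print(full_mesh_statements(n))   # 90
print(single_rr_statements(n))   # 18
```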
Because the second reflector is an additional peer for the other 10 routers, it brings an additional statement count of roughly double the new router total minus 2: an additional 11*2-2=20 statements in this case due to adding the additional router. Additionally, in a BGP multipath environment a second reflector can also add local switching and routing throughput if the reflectors act as traditional routers rather than in a dedicated route reflector server role. Rules RR servers propagate routes inside the AS based on the following rules: If a route is received from a non-client peer, reflect it to clients only, and to eBGP peers. If a route is received from a client peer, reflect it to all non-client peers and also to client peers except the originator of the route, and to eBGP peers. Cluster An RR and its clients form a "cluster". The cluster ID is then attached to every route advertised by the RR to its client or non-client peers. The CLUSTER_LIST is a cumulative, non-transitive BGP attribute, and every RR MUST prepend the local CLUSTER_ID to the CLUSTER_LIST in order to avoid routing loops. Route reflectors and confederations both reduce the number of iBGP peers to each router and thus reduce processing overhead. Route reflectors are a pure performance-enhancing technique, while confederations also can be used to implement more fine-grained policy. BGP confederation Confederations are sets of autonomous systems. In common practice, only one of the confederation AS numbers is seen by the Internet as a whole. Confederations are used in very large networks where a large AS can be configured to encompass smaller, more manageable internal ASs. The confederated AS is composed of multiple ASs. Each confederated AS alone has iBGP fully meshed and has connections to other ASs inside the confederation. Even though these ASs have eBGP peers to ASs within the confederation, the ASs exchange routing as if they used iBGP. In this way, the confederation preserves next hop, metric, and local preference information. To the outside world, the confederation appears to be a single AS. With this solution, the problems of running iBGP in a large transit AS can be resolved, since iBGP otherwise requires a full mesh between all BGP routers, with a large number of TCP sessions and unnecessary duplication of routing traffic. Confederations can be used in conjunction with route reflectors. Both confederations and route reflectors can be subject to persistent oscillation unless specific design rules, affecting both BGP and the interior routing protocol, are followed. However, these alternatives can introduce problems of their own, including the following: route oscillation, sub-optimal routing, and an increase of BGP convergence time. Additionally, route reflectors and BGP confederations were not designed to ease BGP router configuration. Nevertheless, these are common tools for experienced BGP network architects. These tools may be combined, for example, as a hierarchy of route reflectors. Stability The routing tables managed by a BGP implementation are adjusted continually to reflect actual changes in the network, such as links breaking and being restored or routers going down and coming back up. In the network as a whole it is normal for these changes to happen almost continuously, but for any particular router or link, changes are supposed to be relatively infrequent. If a router is misconfigured or mismanaged then it may get into a rapid cycle between down and up states.
This pattern of repeated withdrawal and re-announcement, known as route flapping, can cause excessive activity in all the other routers that know about the broken link, as the same route is continually injected and withdrawn from the routing tables. The BGP design is such that delivery of traffic may not function while routes are being updated. On the Internet, a BGP routing change may cause outages for several minutes. A feature known as route flap damping (RFC 2439) is built into many BGP implementations in an attempt to mitigate the effects of route flapping. Without damping, the excessive activity can cause a heavy processing load on routers, which may in turn delay updates on other routes, and so affect overall routing stability. With damping, a route's flapping is exponentially decayed. At the first instance when a route becomes unavailable and quickly reappears, damping does not take effect, so as to maintain the normal fail-over times of BGP. At the second occurrence, BGP shuns that prefix for a certain length of time; subsequent occurrences are timed out exponentially. After the abnormalities have ceased and a suitable length of time has passed, the offending prefix can be reinstated and its slate wiped clean. Damping can also mitigate denial of service attacks; damping timings are highly customizable. It is also suggested in RFC 2439 (under "Design Choices -> Stability Sensitive Suppression of Route Advertisement") that route flap damping is more desirable if applied to Exterior Border Gateway Protocol sessions (eBGP sessions, or simply exterior peers) and not to Interior Border Gateway Protocol sessions (iBGP sessions, or simply internal peers). With this approach, when a route flaps inside an autonomous system, it is not propagated to the external ASs; flapping a route towards an eBGP peer would otherwise set off a chain of flapping for that particular route throughout the backbone. This method also avoids the overhead of route flap damping for iBGP sessions. However, subsequent research has shown that flap damping can actually lengthen convergence times in some cases, and can cause interruptions in connectivity even when links are not flapping. Moreover, as backbone links and router processors have become faster, some network architects have suggested that flap damping may not be as important as it used to be, since changes to the routing table can be handled much faster by routers. This has led the RIPE Routing Working Group to write that "with the current implementations of BGP flap damping, the application of flap damping in ISP networks is NOT recommended. ... If flap damping is implemented, the ISP operating that network will cause side-effects to their customers and the Internet users of their customers' content and services ... . These side-effects would quite likely be worse than the impact caused by simply not running flap damping at all." Improving stability without the problems of flap damping is the subject of current research. Routing table growth One of the largest problems faced by BGP, and indeed the Internet infrastructure as a whole, is the growth of the Internet routing table. If the global routing table grows to the point where some older, less capable routers cannot cope with the memory requirements or the CPU load of maintaining the table, these routers will cease to be effective gateways between the parts of the Internet they connect.
In addition, and perhaps even more importantly, larger routing tables take longer to stabilize (see above) after a major connectivity change, leaving network service unreliable, or even unavailable, in the interim. Until late 2001, the global routing table was growing exponentially, threatening an eventual widespread breakdown of connectivity. In an attempt to prevent this, ISPs cooperated in keeping the global routing table as small as possible, by using Classless Inter-Domain Routing (CIDR) and route aggregation. While this slowed the growth of the routing table to a linear process for several years, with the expanded demand for multihoming by end user networks the growth was once again superlinear by the middle of 2004. 512k day A Y2K-like overflow triggered in 2014 for those models that were not appropriately updated. While a full IPv4 BGP table (512k day) was in excess of 512,000 prefixes, many older routers had a limit of 512k (512,000–524,288) routing table entries. On August 12, 2014, outages resulting from full tables hit eBay, LastPass and Microsoft Azure among others. A number of Cisco routers commonly in use had TCAM, a form of high-speed content-addressable memory, for storing BGP advertised routes. On impacted routers, the TCAM was default allocated as 512k IPv4 routes and 256k IPv6 routes. While the reported number of IPv6 advertised routes was only about 20k, the number of advertised IPv4 routes reached the default limit, causing a spillover effect as routers attempted to compensate for the issue by using slow software routing (as opposed to fast hardware routing via TCAM). The main method for dealing with this issue involves operators changing the TCAM allocation to allow more IPv4 entries, by reallocating some of the TCAM reserved for IPv6 routes, which requires a reboot on most routers. The 512k problem was predicted by a number of IT professionals. The actual allocations which pushed the number of routes above 512k was the announcement of about 15,000 new routes in short order, starting at 07:48 UTC. Almost all of these routes were to Verizon Autonomous Systems 701 and 705, created as a result of deaggregation of larger blocks, introducing thousands of new routes, and making the routing table reach 515,000 entries. The new routes appear to have been reaggregated within 5 minutes, but instability across the Internet apparently continued for a number of hours. Even if Verizon had not caused the routing table to exceed 512k entries in the short spike, it would have happened soon anyway through natural growth. Route summarization is often used to improve aggregation of the BGP global routing table, thereby reducing the necessary table size in routers of an AS. Consider AS1 has been allocated the big address space of , this would be counted as one route in the table, but due to customer requirement or traffic engineering purposes, AS1 wants to announce smaller, more specific routes of , , and . The prefix does not have any hosts so AS1 does not announce a specific route . This all counts as AS1 announcing four routes. AS2 will see the four routes from AS1 (, , , and ) and it is up to the routing policy of AS2 to decide whether or not to take a copy of the four routes or, as overlaps all the other specific routes, to just store the summary, . If AS2 wants to send data to prefix , it will be sent to the routers of AS1 on route . 
At AS1's router, it will either be dropped or a destination unreachable ICMP message will be sent back, depending on the configuration of AS1's routers. If AS1 later decides to drop the route , leaving , , and , AS1 will reduce the number of routes it announces to three. AS2 will see the three routes, and depending on the routing policy of AS2, it will store a copy of the three routes, or aggregate the prefixes and to , thereby reducing the number of routes AS2 stores to only two: and . If AS2 wants to send data to prefix , it will be dropped or a destination unreachable ICMP message will be sent back at the routers of AS2 (not AS1 as before), because would not be in the routing table. AS numbers depletion and 32-bit ASNs RFC 1771 (A Border Gateway Protocol 4 (BGP-4)) provided for AS numbers coded on 16 bits, for 64510 possible public ASs, since ASN 64512 to 65534 were reserved for private use (0 and 65535 being forbidden). In 2011, only 15000 AS numbers were still available, and projections were envisioning a complete depletion of available AS numbers in September 2013. RFC 6793 extends AS coding from 16 to 32 bits (keeping the 16-bit AS range 0 to 65535 and its reserved AS numbers), which now allows up to about 4 billion available ASs. An additional private AS range is also defined in RFC 6996 (from 4200000000 to 4294967294, 4294967295 being forbidden by RFC 7300). To allow the traversal of router groups not able to manage those new ASNs, the new optional transitive attribute AS4_PATH is used. 32-bit ASN assignments started in 2007. Load balancing Another factor causing this growth of the routing table is the need for load balancing of multi-homed networks. It is not a trivial task to balance the inbound traffic to a multi-homed network across its multiple inbound paths, due to limitations of the BGP route selection process. For a multi-homed network, if it announces the same network blocks across all of its BGP peers, the result may be that one or several of its inbound links become congested while the other links remain under-utilized, because external networks all picked that set of congested paths as optimal. Like most other routing protocols, BGP does not detect congestion. To work around this problem, BGP administrators of that multihomed network may divide a large contiguous IP address block into smaller blocks and tweak the route announcement to make different blocks look optimal on different paths, so that external networks will choose a different path to reach different blocks of that multi-homed network. Such cases will increase the number of routes as seen on the global BGP table. One method growing in popularity to address the load balancing issue is to deploy BGP/LISP (Locator/Identifier Separation Protocol) gateways within an Internet exchange point to allow ingress traffic engineering across multiple links. This technique does not increase the number of routes seen on the global BGP table. Security By design, routers running BGP accept advertised routes from other BGP routers by default. This allows for automatic and decentralized routing of traffic across the Internet, but it also leaves the Internet potentially vulnerable to accidental or malicious disruption, known as BGP hijacking.
Due to the extent to which BGP is embedded in the core systems of the Internet, and the number of different networks operated by many different organizations which collectively make up the Internet, correcting this vulnerability (such as by introducing the use of cryptographic keys to verify the identity of BGP routers) is a technically and economically challenging problem. Extensions An extension to BGP is the use of multipathing; this typically requires identical MED, weight, origin, and AS-path, although some implementations provide the ability to relax the AS-path check to require only an equal path length rather than requiring the actual AS numbers in the path to match as well. This can then be extended further with features like Cisco's dmzlink-bw, which enables a ratio of traffic sharing based on bandwidth values configured on individual links. Multiprotocol Extensions for BGP (MBGP), sometimes referred to as Multiprotocol BGP or Multicast BGP and defined in IETF RFC 4760, is an extension to BGP that allows different types of addresses (known as address families) to be distributed in parallel. Whereas standard BGP supports only IPv4 unicast addresses, Multiprotocol BGP supports IPv4 and IPv6 addresses and it supports unicast and multicast variants of each. Multiprotocol BGP allows information about the topology of IP multicast-capable routers to be exchanged separately from the topology of normal IPv4 unicast routers. Thus, it allows a multicast routing topology different from the unicast routing topology. Although MBGP enables the exchange of inter-domain multicast routing information, other protocols such as the Protocol Independent Multicast family are needed to build trees and forward multicast traffic. Multiprotocol BGP is also widely deployed for MPLS L3 VPNs, to exchange VPN labels learned for the routes from the customer sites over the MPLS network, in order to distinguish between different customer sites when traffic from the other customer sites comes to the Provider Edge router (PE router) for routing. Uses BGP4 is standard for Internet routing and required of most Internet service providers (ISPs) to establish routing between one another. Very large private IP networks use BGP internally. An example is the joining of a number of large Open Shortest Path First (OSPF) networks, when OSPF by itself does not scale to the size required. Another reason to use BGP is multihoming a network for better redundancy, either to multiple access points of a single ISP or to multiple ISPs. Implementations Routers, especially small ones intended for Small Office/Home Office (SOHO) use, may not include BGP software. Some SOHO routers simply are not capable of running BGP or using BGP routing tables of any size. Other commercial routers may need a specific software executable image that contains BGP, or a license that enables it. Open source packages that run BGP include GNU Zebra, Quagga, OpenBGPD, BIRD, XORP, and Vyatta. Devices marketed as Layer 3 switches are less likely to support BGP than devices marketed as routers, but high-end Layer 3 switches usually can run BGP. Products marketed as switches may or may not have a size limitation on BGP tables, such as 20,000 routes, far smaller than a full Internet table plus internal routes.
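As a rough back-of-the-envelope sketch of why such table limits matter, the figures below reuse the prefix counts quoted in this article, while the assumed paths-per-prefix and bytes-per-path values are illustrative guesses rather than measurements of any real implementation.

```python
# Sketch: order-of-magnitude RIB memory estimate. All per-entry byte costs are
# assumptions for illustration; real memory use varies widely between
# implementations and depends on how many copies of each route are kept.
def rib_bytes(prefixes, paths_per_prefix, bytes_per_path):
    return int(prefixes * paths_per_prefix * bytes_per_path)

full_table = rib_bytes(prefixes=590_000, paths_per_prefix=4, bytes_per_path=200)
small_switch = rib_bytes(prefixes=20_000, paths_per_prefix=2, bytes_per_path=200)
print(f"full table : ~{full_table / 2**20:.0f} MiB")
print(f"20k routes : ~{small_switch / 2**20:.0f} MiB")
```

The gap of well over an order of magnitude shows why a switch with a small table cannot hold a full Internet feed.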
These devices, however, may be perfectly reasonable and useful when used for BGP routing of some smaller part of the network, such as a confederation-AS representing one of several smaller enterprises that are linked, by a BGP backbone of backbones, or a small enterprise that announces routes to an ISP but only accepts a default route and perhaps a small number of aggregated routes. A BGP router used only for a network with a single point of entry to the Internet may have a much smaller routing table size (and hence RAM and CPU requirement) than a multihomed network. Even simple multihoming can have modest routing table size. See RFC 4098 for vendor-independent performance parameters for single BGP router convergence in the control plane. The actual amount of memory required in a BGP router depends on the amount of BGP information exchanged with other BGP speakers and the way in which the particular router stores BGP information. The router may have to keep more than one copy of a route, so it can manage different policies for route advertising and acceptance to a specific neighboring AS. The term view is often used for these different policy relationships on a running router. If one router implementation takes more memory per route than another implementation, this may be a legitimate design choice, trading processing speed against memory. A full IPv4 BGP table is in excess of 590,000 prefixes. Large ISPs may add another 50% for internal and customer routes. Again depending on implementation, separate tables may be kept for each view of a different peer AS. Notable free and open source implementations of BGP include: BIRD Internet Routing Daemon, a GPL routing package for Unix-like systems. FRRouting, a fork of Quagga for Unix-like systems. GNU Zebra, a GPL routing suite supporting BGP4. (decommissioned) OpenBGPD, a BSD licensed implementation by the OpenBSD team. Quagga, a fork of GNU Zebra for Unix-like systems. XORP, the eXtensible Open Router Platform, a BSD licensed suite of routing protocols. 
Systems for testing BGP conformance, load or stress performance come from vendors such as: Agilent Technologies GNS3 open source network simulator Ixia Spirent Communications Standards documents , Application of the Border Gateway Protocol in the Internet Protocol (BGP-4) using SMIv2 , BGP Communities Attribute , BGP Route Flap Damping , Route Refresh Capability for BGP-4 , NOPEER Community for Border Gateway Protocol (BGP) Route Scope Control , A Border Gateway Protocol 4 (BGP-4) , BGP Security Vulnerabilities Analysis , Definitions of Managed Objects for BGP-4 , BGP-4 Protocol Analysis , BGP-4 MIB Implementation Survey , BGP-4 Implementation Report , Experience with the BGP-4 Protocol , Standards Maturity Variance Regarding the TCP MD5 Signature Option (RFC 2385) and the BGP-4 Specification , BGP Extended Communities Attribute , BGP Route Reflection – An Alternative to Full Mesh Internal BGP (iBGP) , Graceful Restart Mechanism for BGP , Multiprotocol Extensions for BGP-4 , Autonomous System Confederations for BGP , Capabilities Advertisement with BGP-4 , Dissemination of Flow Specification Rules , IPv6 Address Specific BGP Extended Community Attribute , BGP Support for Four-Octet Autonomous System (AS) Number Space , IANA Registries for BGP Extended Communities , Revised Error Handling for BGP UPDATE Messages , North-Bound Distribution of Link-State and Traffic Engineering Information Using BGP , Advertisement of Multiple Paths in BGP , BGP Large Communities Attribute , Use of BGP Large Communities , Policy Behavior for Well-Known BGP Communities draft-ietf-idr-custom-decision-08 – BGP Custom Decision Process, Feb 3, 2017 Selective Route Refresh for BGP, IETF draft , Obsolete – Border Gateway Protocol (BGP) , Obsolete – A Border Gateway Protocol 4 (BGP-4) , Obsolete – Application of the Border Gateway Protocol in the Internet , Obsolete – Definitions of Managed Objects for the Fourth Version of the Border Gateway , Obsolete – A Border Gateway Protocol 4 (BGP-4) , Obsolete – Autonomous System Confederations for BGP , Obsolete – BGP Route Reflection An Alternative to Full Mesh iBGP , Obsolete – Multiprotocol Extensions for BGP-4 , Obsolete – Autonomous System Confederations for BGP , Obsolete – Capabilities Advertisement with BGP-4 , Obsolete – BGP Support for Four-octet AS Number Space See also 2021 Facebook outage AS 7007 incident Internet Assigned Numbers Authority Packet forwarding Private IP QPPB Regional Internet registry Route analytics Route filtering Routing Assets Database Notes References Further reading Chapter "Border Gateway Protocol (BGP)" in the Cisco "IOS Technology Handbook" External links BGP Routing Resources (includes a dedicated section on BGP & ISP Core Security) BGP table statistics Internet Standards Internet protocols Routing protocols Internet architecture
51420182
https://en.wikipedia.org/wiki/Frederick%20Rennell%20Thackeray
Frederick Rennell Thackeray
General Frederick Rennell Thackeray (1775 – 19 September 1860) was a senior British Army officer. Early life Thackeray was born in Windsor, Berkshire, a younger son of physician Dr Frederick Thackeray, and a grandson of Thomas Thackeray, headmaster of Harrow School. His first cousin once removed was the writer William Makepeace Thackeray. Military career He entered the Royal Military Academy in Woolwich and became a second lieutenant in the Royal Artillery in 1793 but transferred to the Royal Engineers the following year. He served from 1793 at Gibraltar, where he was promoted first lieutenant in 1796, and then transferred to the West Indies in 1797. He took part, on 20 August 1799, in the capture of Surinam under Sir Thomas Trigge. In 1801 he was aide-de-camp to Trigge at the capture of the Swedish West India island of St. Bartholomew, the Dutch island of St. Martin and the Danish islands of St. Thomas and St. John. In 1807, now a captain, he was sent to Sicily, from where he proceeded to Egypt with the expedition under Major-General Alexander Mackenzie Fraser, returning to Sicily in September. In 1809 Thackeray was the commanding Royal Engineer with a force under Lieutenant-Colonel Haviland Smith, and was detached by Lieutenant-General Sir John Stuart to make an attack on the castle of Scylla, which Thackeray directed with such skill that although his siege was raised by a superior French force, the castle had by then become untenable and had to be blown up. In 1810 Thackeray was sent from Messina to join Colonel John Oswald in the Ionian Islands with orders to take part in the siege of the fortress of Santa Maura on the island of Lefkada. The position of the fortress on a long narrow isthmus of sand rendered it difficult to approach, and it was not only well supplied, but contained casemated barracks for a garrison of eight hundred men under General Camus. The now-General Oswald effected a landing on 23 March and the enemy were driven out of their forward entrenchments at bayonet point by the 35th Regiment of Foot. Large working parties were at once sent in and the entrenchment converted into a secure lodgement from which the British infantry and sharpshooters were so able to distress the artillery of the fort that it surrendered. Thackeray was mentioned in general orders and in despatches and received on 19 May 1810 a brevet majority in special recognition of his services on this occasion. Major Thackeray sailed in July 1812 with the Anglo-Sicilian army under Lieutenant-General Frederick Maitland and landed at Alicante in August. He took part in the operations of this army, which, after Maitland's resignation in October, was successively commanded by Generals Mackenzie, William Henry Clinton, Colin Campbell, and Sir John Murray, who arrived in February 1813. In 1813 Thackeray marched with them from Alicante to attack General Suchet, and was at the capture of Alcoy. He also took part in the Battle of Castalla when Suchet was defeated. In May 1813 he embarked with the army, fourteen thousand strong, with a powerful siege train and ample engineer stores, for Tarragona, where they disembarked in June. Thackeray directed the siege operations, and by 8 June a practicable breach was made in Fort Royal, an outwork over four hundred yards in advance of the place. Thackeray was promoted to be lieutenant-colonel in the Royal Engineers in July 1813. 
He had moved, at the end of June, with Lord William Bentinck's army to Alicante, and was at the occupation of Valencia on 9 July, and at the Siege of Tarragona on 30 July. He took part in the other operations of the army under Bentinck and his successor, Sir William Clinton. During October and November Thackeray was employed in rendering Tarragona once more defensible. In April 1814, by Wellington's orders, Clinton's army was broken up, and Thackeray returned to England in ill-health. At the beginning of 1815 he was appointed commanding Royal Engineer at Plymouth, transferred to Gravesend in 1817, and thence to Edinburgh in 1824 as commanding Royal Engineer of North Britain. He was promoted to be colonel in the Royal Engineers on 2 June 1825 and made a Companion of the Bath on 26 September 1831. In 1833 he was appointed commanding Royal Engineer in Ireland and promoted to be major-general on 10 January 1837 when he retired. He was made a colonel-commandant of the Corps of Royal Engineers on 29 April 1846, was promoted lieutenant-general later that year, and made full General on 20 June 1854. Death He died at his home in Windlesham, Bagshot, Surrey, on 19 September 1860, and was buried at York Town, Farnborough. He had married Lady Elizabeth Margaret Carnegie, third daughter of William, 7th Earl of Northesk. Lady Elizabeth, three sons, and five daughters survived him. His daughter Mary Elizabeth married the lawyer and historian John William Willis-Bund. References 1775 births 1860 deaths People from Windsor, Berkshire Graduates of the Royal Military Academy, Woolwich British Army generals Companions of the Order of the Bath Royal Artillery officers Royal Engineers officers British Army personnel of the Napoleonic Wars British Army personnel of the Peninsular War
35351731
https://en.wikipedia.org/wiki/Luigi%20Logrippo
Luigi Logrippo
Luigi Logrippo is a Professor of Computer Science at the Université du Québec en Outaouais (UQO). He is the principal researcher of the LOTOS group at the University of Ottawa. He currently participates in LARSI. Research areas Formal methods in security, privacy and governance including: Formal specification, formal design, validation, verification, testing Security: Enterprise data security; Access control models and methods Legal conformance, privacy Normative systems: Formal methods in telecom software engineering: Process algebras, LOTOS and E-LOTOS languages Feature interaction problem Biography Logrippo was born in Italy and received a "laurea" in law from the University of Rome in 1961. Until 1967, he worked with Olivetti, Olivetti-Bull, General Electric, and Siemens as a programmer and systems analyst. From 1967 to 1969 he worked as a Research Associate at the Institute for Computer Studies. He obtained an MSc in Computer Science from the University of Manitoba in 1969. He obtained a PhD in Computer Science at the University of Waterloo in 1974. From 1973 to 2002 he was with the University of Ottawa, first in the Department of Computer Science and then in the School of Information Technology and Engineering (SITE). He was Chair of the Computer Science Department from 1991 to 1997 and Administrative Director of SITE in 1997/98. He had sabbaticals at Bell Northern Research (which became Nortel), at the University of Twente (NL) and at the University of Stirling (Scotland). Logrippo retired from the University of Ottawa and since July 1, 2002 has been a professor at the nearby Université du Québec en Outaouais, Département d'informatique et ingénierie. Selected publications Hemanth Khambhammettu, Sofiene Boulares, Kamel Adi, Luigi Logrippo. A Framework for Threat Assessment in Access Control Systems. To appear in the Proc. of SEC 2012, the 2012 IFIP International Information Security and Privacy Conference, Heraklion, June 4–6. Bernard Stepien, Hemanth Khambhammettu, Kamel Adi, Luigi Logrippo. CatBAC: A Generic Framework for Designing and Validating Hybrid Access Control Models. To appear in the Proc. of SFCS 2012, the First IEEE International Workshop on Security and Forensics in Communication Systems, Ottawa, June 10–15, 2012 Yacine Bouzida, Luigi Logrippo, Sergei Mankovski. Concrete and Abstract Based Access Control. To appear in the International Journal of Information Security, Springer. The final publication is available at www.springerlink.com. Int. J. Inf. Secur. DOI 10.1007/s10207-011-0138-1. Published online 14 July 2011. Logrippo, L. From e-business to e-laws and e-judgments: 4,000 years of experience. CYBERLAWS 2011, Proc. of the Second International Conference on Technical and Legal Aspects of the e-Society, Guadeloupe, Feb 2011, 22-28. Slimani, N., Khambhammettu, H., Adi, K., Logrippo, L. UACML: Unified Access Control Modeling Language. In: New Technologies, Mobility and Security (NTMS), 2011 4th IFIP International Conference in February 2011, 1-8. Ma, J., Logrippo, L., Adi, K., Mankovski, S. Risk Analysis in Access Control Systems Based on Trust Theories. The 3rd Workshop on Logics for Intelligent Agents and Multi-Agent Systems (WLIAMas 2010). Toronto, August 2010, 415-418. Shaikh, R.A., Adi, K., Logrippo, L., Mankovski, S. Inconsistency Detection Method for Access Control Policies. IEEE sixth International Conference on Information Assurance and Security (IAS 2010), Atlanta, Aug. 2010, 204-209. Ma, J., Adi, K., Mejri, M., Logrippo, L.
Risk Analysis in Access Control Systems. Eight Intern. Conf. on Privacy, Security, and Trust (PST 2010). Ottawa, Aug. 2010, 160-166. Shaikh, R.A., Adi,K., Logrippo, L., Mankovski, S. Detecting Incompleteness in Access Control Policies Using Data Classification Schemes, In Proc. of the 5th International Conference on Digital Information Management (ICDIM 2010), Thunder Bay, Canada, July 2010, IEEE Press, 417-422. Ma, J., Adi, K., Logrippo, L., Mankovski, S. Risk Management in Dynamic Role Based Access Control Systems. Proc. of the 5th International Conference on Digital Information Management (ICDIM 2010), Thunder Bay, Canada, July 2010, IEEE Press, 423-430. Plesa, R., Logrippo, L. An Agent-Based Architecture for Providing Enhanced Communication Services. Chapter 15 in: Laurence T. Yang (Ed.) Research in Mobile Intelligence – Wiley series on Parallel and Distributed Computing, 2010. 320-342. Hassan, W., Logrippo, L. A Governance Requirements Extraction Model for Legal Compliance Validation. In Proc. IEEE 17th International Requirements Engineering Conference (RE'09): RELAW Workshop. Atlanta, GA. Sep. 2009, 7-12. Adi, K., Bouzida, Y., Hattak, I., Logrippo, L., Mankovskii, S. Typing for Conflict Detection in Access Control Policies. In: G. Babin, P. Kropf, M. Weiss (Eds.): E-Technologies: Innovation in an Open World. Proc. of the 4th Intern. Conf. MCETECH 2009 (Ottawa, May 2009), Lecture Notes in Business Information Processing (LNBIP 26), Springer, 2009, 212-226. Hassan, W. and Logrippo, L. Requirements and Compliance in Legal Systems: a Logic Approach. In Proc. IEEE 16th International Requirements Engineering Conference (RE'08): RELAW Workshop. Barcelona, Spain. Sep. 2008, 40-44. Logrippo, L. Normative Systems: the Meeting Point between Jurisprudence and Information Technology? In: H. Fujita, D. Pisanelli (Eds.): New Trends in Software Methodologies, Tools and Techniques – Proc. of the 6th SoMeT_07. IOS Press, 2007, 343-354. References External links Luigi Logrippo UofO home page Luigi Logrippo UQO WebSite Living people Italian emigrants to Canada University of Waterloo alumni Formal methods people Software engineering researchers Computer science writers Year of birth missing (living people)
16797
https://en.wikipedia.org/wiki/Kraftwerk
Kraftwerk
Kraftwerk (, "power station") are a German band formed in Düsseldorf in 1969 by Ralf Hütter and Florian Schneider. Widely considered innovators and pioneers of electronic music, Kraftwerk were among the first successful acts to popularize the genre. The group began as part of West Germany's experimental krautrock scene in the early 1970s before fully embracing electronic instrumentation, including synthesizers, drum machines, and vocoders. Wolfgang Flür joined the band in 1974 and Karl Bartos in 1975, expanding the band to a quartet. On commercially successful albums such as Autobahn (1974), Trans-Europe Express (1977), The Man-Machine (1978), and Computer World (1981), Kraftwerk developed a self-described experimental "robot pop" style that combined electronic music with pop melodies, sparse arrangements, and repetitive rhythms, while adopting a stylized image including matching suits. Following the release of Electric Café (1986), Flür left the group in 1987, followed by Bartos in 1990. Founding member Schneider left in 2008. The band's work has influenced a diverse range of artists and many genres of modern music, including synth-pop, hip hop, post-punk, techno, house music, ambient, and club music. In 2014, the Recording Academy honoured Kraftwerk with a Grammy Lifetime Achievement Award. They later won the Grammy Award for Best Dance/Electronic Album with their live album 3-D The Catalogue (2017) at the 2018 ceremony. As of 2021, the band continues to tour. An extensive North American tour was announced and is scheduled to take place over 24 dates between May and July 2022. In 2021, Kraftwerk was inducted into the Rock & Roll Hall of Fame in the early influence category. History Formation and early years (1969–1973) Florian Schneider (flutes, synthesizers, violin) and Ralf Hütter (organ, synthesizers) met as students at the Robert Schumann Hochschule in Düsseldorf in the late 1960s, participating in the German experimental music and art scene of the time, which Melody Maker jokingly dubbed "krautrock". They joined a quintet known as Organisation, which released one album, Tone Float in 1970, issued on RCA Records in the UK, and split shortly thereafter. Schneider became interested in synthesizers, deciding to acquire one in 1970. While visiting an exhibition in their hometown about visual artists Gilbert and George, they saw "two men wearing suits and ties, claiming to bring art into everyday life. The same year, Hütter and Schneider started bringing everyday life into art and form Kraftwerk". Early Kraftwerk line-ups from 1970 to 1974 fluctuated, as Hütter and Schneider worked with around a half-dozen other musicians during the preparations for and the recording of three albums and sporadic live appearances, including guitarist Michael Rother and drummer Klaus Dinger, who left to form Neu!. The only constant figure in these line-ups was Schneider, whose main instrument at the time was the flute; at times he also played the violin and guitar, all processed through a varied array of electronic devices. Hütter, who left the band for eight months to focus on completing his university studies, played synthesizer and keyboards (including Farfisa organ and electric piano). The band released two free-form experimental rock albums, Kraftwerk (1970) and Kraftwerk 2 (1972). The albums were mostly exploratory musical improvisations played on a variety of traditional instruments including guitar, bass, drums, organ, flute, and violin. 
Post-production modifications to these recordings were used to distort the sound of the instruments, particularly audio-tape manipulation and multiple dubbings of one instrument on the same track. Both albums are purely instrumental. Live performances from 1972 to 1973 were mostly made as a duo, using a simple beat-box-type electronic drum machine with preset rhythms taken from an electric organ. Occasionally, they performed with bass players as well. These shows were mainly in Germany, with occasional shows in France. Later in 1973, Wolfgang Flür joined the group for rehearsals, and the unit performed as a trio on the television show Aspekte for German television network ZDF. With Ralf und Florian, released in 1973, Kraftwerk began to rely more heavily on synthesizers and drum machines. Although almost entirely instrumental, the album marks Kraftwerk's first use of the vocoder, which became one of its musical signatures. According to English music journalist Simon Reynolds, Kraftwerk were influenced by what he called the "adrenalized insurgency" of Detroit artists of the late '60s MC5 and the Stooges. The input, expertise, and influence of producer and engineer Konrad "Conny" Plank was highly significant in the early years of Kraftwerk. Plank also worked with many of the other leading German electronic acts of that time, including members of Can, Neu!, Cluster, and Harmonia. As a result of his work with Kraftwerk, Plank's studio near Cologne became one of the most sought-after studios in the late 1970s. Plank co-produced the first four Kraftwerk albums. International breakthrough: Autobahn and Radioactivity (1974–1976) The release of Autobahn in 1974 saw Kraftwerk moving away from the sound of its first three albums. Hütter and Schneider had invested in newer technology such as the Minimoog and the EMS Synthi AKS, helping give Kraftwerk a newer, "disciplined" sound. Autobahn was also the last album that Conny Plank engineered. After the commercial success of Autobahn in the US, where it peaked at number 5 in the Billboard Top LPs & Tapes, Hütter and Schneider invested in updating their studio, thus lessening their reliance on outside producers. At this time the painter and graphic artist Emil Schult became a regular collaborator, designing artwork, cowriting lyrics, and accompanying the group on tour. The year 1975 saw a turning point in Kraftwerk's live shows. With financial support from Phonogram Inc., in the US, they were able to undertake a tour to promote the Autobahn album, a tour which took them to the US, Canada and the UK for the first time. The tour also saw a new, stable, live line-up in the form of a quartet. Hütter and Schneider continued playing keyboard synthesizers such as the Minimoog and ARP Odyssey, with Schneider's use of flute diminishing. The two men started singing live for the first time, and Schneider processing his voice with a vocoder live. Wolfgang Flür and new recruit Karl Bartos performed on home-made electronic percussion instruments. Bartos also used a Deagan vibraphone on stage. The Hütter-Schneider-Bartos-Flür formation remained in place until the late 1980s and is now regarded as the classic live line-up of Kraftwerk. Emil Schult generally fulfilled the role of tour manager. After the 1975 Autobahn tour, Kraftwerk began work on a follow-up album, Radio-Activity (German title: Radio-Aktivität). After further investment in new equipment, the Kling Klang Studio became a fully working recording studio. 
The group used the central theme in radio communication, which had become enhanced on their last tour of the United States. With Emil Schult working on artwork and lyrics, Kraftwerk began to compose music for the new record. Even though Radio-Activity was less commercially successful than Autobahn in the UK and United States, the album served to open up the European market for Kraftwerk, earning them a gold disc in France. Kraftwerk made videos and performed several European live dates to promote the album. With the release of Autobahn and Radio-Activity, Kraftwerk left behind avant-garde experimentation and moved towards the electronic pop tunes for which they are best known. In 1976, Kraftwerk toured in support of the Radio-Activity album. David Bowie was among the fans of the record and invited the band to support him on his Station to Station tour, an offer the group declined. Despite some innovations in touring, Kraftwerk took a break from live performances after the Radio-Activity tour of 1976. Trans-Europe Express, The Man-Machine and Computer World (1977–1982) After having finished the Radio-Activity tour Kraftwerk began recording Trans-Europe Express (German: Trans-Europa Express) at the Kling Klang Studio. Trans-Europe Express was mixed at the Record Plant Studios in Los Angeles. It was around this time that Hütter and Schneider met David Bowie at the Kling Klang Studio. A collaboration was mentioned in an interview (Brian Eno) with Hütter, but it never materialised. The release of Trans-Europe Express in March 1977 was marked with an extravagant train journey used as a press conference by EMI France. The album won a disco award in New York later that year. In May 1978 Kraftwerk released The Man-Machine (German: Die Mensch-Maschine), recorded at the Kling Klang Studio. Due to the complexity of the recording, the album was mixed at Studio Rudas in Düsseldorf. The band hired sound engineer Leanard Jackson from Detroit to work with Joschko Rudas on the final mix. The Man-Machine was the first Kraftwerk album where Karl Bartos was cocredited as a songwriter. The cover, produced in black, white and red, was inspired by Russian artist El Lissitzky and the Suprematism movement. Gunther Frohling photographed the group for the cover, a now-iconic image which featured the quartet dressed in red shirts and black ties. After it was released Kraftwerk did not release another album for three years. In May 1981 Kraftwerk released Computer World (German: Computerwelt) on EMI Records. It was recorded at Kling Klang Studio between 1978 and 1981. Much of this time was spent modifying the studio to make it portable so the band could take it on tour. Some of the electronic vocals on Computer World were generated using a Texas Instruments language translator. "Computer Love" was released as a single backed with the Man-Machine track "The Model". Radio DJs were more interested in the B-side so the single was repackaged by EMI and re-released with "The Model" as the A-side. The single reached number one in the UK, making "The Model" Kraftwerk's most successful song in that country. As a result, the Man-Machine album also became a success in the UK, peaking at number 9 in the album chart in February 1982. The band's live set focused increasingly on song-based material, with greater use of vocals and the use of sequencing equipment for both percussion and music. In contrast to their cool and controlled image, the group used sequencers interactively, which allowed for live improvisation. 
Ironically Kraftwerk did not own a computer at the time of recording Computer World. Kraftwerk returned to live performance with the Computer World tour of 1981, where the band effectively packed up its entire Kling Klang studio and took it along on the road. They also made greater use of live visuals including back-projected slides and films synchronized with the music as the technology developed, the use of hand-held miniaturized instruments during the set (for example, during "Pocket Calculator"), and, perhaps most famously, the use of replica mannequins of themselves to perform on stage during the song "The Robots". Electric Café (1982–1989) In 1982 Kraftwerk began to work on a new album that initially had the working title Technicolor but due to trademark issues was changed to Electric Café for its original release in 1986 (for a remastered re-release in 2009, it was retitled again after its original working title, Techno Pop). One of the songs from these recording sessions was "Tour de France", which EMI released as a single in 1983. This song was a reflection of the band's new-found obsession for cycling. After the physically demanding Computer World tour, Ralf Hütter had been looking for forms of exercise that fitted in with the image of Kraftwerk; subsequently he encouraged the group to become vegetarians and take up cycling. "Tour de France" included sounds that followed this theme including bicycle chains, gear mechanisms and the breathing of the cyclist. At the time of the single's release Ralf Hütter tried to persuade the rest of the band that they should record a whole album based on cycling. The other members of the band were not convinced, and the theme was left to the single alone. "Tour de France" was released in German and French. The vocals of the song were recorded on the Kling Klang Studio stairs to create the right atmosphere. "Tour de France" was featured in the 1984 film Breakin', showing the influence that Kraftwerk had on black American dance music. In May or June 1982, during the recording of "Tour de France", Ralf Hütter was involved in a serious cycling accident. He suffered head injuries and remained in a coma for several days. During 1983 Wolfgang Flür was beginning to spend less time in the studio. Since the band began using sequencers his role as a drummer was becoming less frequent. He preferred to spend his time travelling with his girlfriend. Flür was also experiencing artistic difficulties with the band. Though he toured the world with Kraftwerk as a drummer in 1981, his playing does not appear on that year's Computer World or on the 1986 album Electric Café. In 1987 he left the band and was replaced by Fritz Hilpert. The Mix (1990–1999) After years of withdrawal from live performance Kraftwerk began to tour Europe more frequently. In February 1990 the band played a few secret shows in Italy. Karl Bartos left the band shortly afterwards. The next proper tour was in 1991, for the album The Mix. Hütter and Schneider wished to continue the synth-pop quartet style of presentation, and recruited Fernando Abrantes as a replacement for Bartos. Abrantes left the band shortly after though. In late 1991, long-time Kling Klang Studio sound engineer Henning Schmitz was brought in to finish the remainder of the tour and to complete a new version of the quartet that remained active until 2008. In 1997 Kraftwerk made a famous appearance at the dance festival Tribal Gathering held in England. 
In 1998, the group toured the US and Japan for the first time since 1981, along with shows in Brazil and Argentina. Three new songs were performed during this period and a further two tested in soundchecks, which remain unreleased. Following this trek, the group decided to take another break. In July 1999 the single "Tour de France" was reissued in Europe by EMI after it had been out of print for several years. It was released for the first time on CD in addition to a repressing of the 12-inch vinyl single. Both versions feature slightly altered artwork that removed the faces of Flür and Bartos from the four-man cycling paceline depicted on the original cover. In 1999 ex-member Flür published his autobiography in Germany, Ich war ein Roboter. Later English-language editions of the book were titled Kraftwerk: I Was a Robot. In 1999, Kraftwerk were commissioned to create an a cappella jingle for the Hannover Expo 2000 world's fair in Germany. The jingle was subsequently developed into the single "Expo 2000", which was released in December 1999, and remixed and re-released as "Expo Remix" in November 2000. Tour de France Soundtracks and touring the world (2000–2009) In August 2003 the band released Tour de France Soundtracks, its first album of new material since 1986's Electric Café. In January and February 2003, before the release of the album, the band started the extensive Minimum-Maximum world tour, using four customised Sony VAIO laptop computers, effectively leaving the entire Kling Klang studio at home in Germany. The group also obtained a new set of transparent video panels to replace its four large projection screens. This greatly streamlined the running of all of the group's sequencing, sound-generating, and visual-display software. From this point, the band's equipment increasingly reduced manual playing, replacing it with interactive control of sequencing equipment. Hütter retained the most manual performance, still playing musical lines by hand on a controller keyboard and singing live vocals and having a repeating ostinato. Schneider's live vocoding had been replaced by software-controlled speech-synthesis techniques. In November, the group made a surprising appearance at the MTV European Music Awards in Edinburgh, Scotland, performing "Aerodynamik". The same year a promotional box set entitled 12345678 (subtitled The Catalogue) was issued, with plans for a proper commercial release to follow. The box featured remastered editions of the group's eight core studio albums, from Autobahn to Tour de France Soundtracks. This long-awaited box-set was eventually released in a different set of remasters in November 2009. In June 2005 the band's first-ever official live album, Minimum-Maximum, which was compiled from the shows during the band's tour of spring 2004, received extremely positive reviews. The album contained reworked tracks from existing studio albums. This included a track titled "Planet of Visions" that was a reworking of "Expo 2000". In support of this release, Kraftwerk made another quick sweep around the Balkans with dates in Serbia, Bulgaria, North Macedonia, Turkey, and Greece. In December, the Minimum-Maximum DVD was released. During 2006, the band performed at festivals in Norway, Ireland, the Czech Republic, Spain, Belgium, and Germany. In April 2008 the group played three shows in US cities Minneapolis, Milwaukee, and Denver, and were a coheadliner at the Coachella Valley Music and Arts Festival. This was their second appearance at the festival since 2004. 
Further shows were performed in Ireland, Poland, Ukraine, Australia, New Zealand, Hong Kong and Singapore later that year. The touring quartet consisted of Ralf Hütter, Henning Schmitz, Fritz Hilpert, and video technician Stefan Pfaffe, who became an official member in 2008. Original member Florian Schneider was absent from the lineup. Hütter stated that he was working on other projects. On 21 November, Kraftwerk officially confirmed Florian Schneider's departure from the band; The Independent commented: "There is something brilliantly Kraftwerkian about the news that Florian Schneider, a founder member of the German electronic pioneers, is leaving the band to pursue a solo career. Many successful bands break up after just a few years. It has apparently taken Schneider and his musical partner, Ralf Hütter, four decades to discover musical differences." Kraftwerk's headline set at Global Gathering in Melbourne, Australia, on 22 November was cancelled moments before it was scheduled to begin, due to a Fritz Hilpert heart problem. In 2009, Kraftwerk performed concerts with special 3D background graphics in Wolfsburg, Germany; Manchester, UK; and Randers, Denmark. Members of the audience were able to watch this multimedia part of the show with 3D glasses, which were given out. During the Manchester concert (part of the 2009 Manchester International Festival) four members of the GB cycling squad (Jason Kenny, Ed Clancy, Jamie Staff and Geraint Thomas) rode around the Velodrome while the band performed "Tour de France". The group also played several festival dates, the last being at the Bestival 2009 in September, on the Isle of Wight. 2009 also saw the release of The Catalogue box set in November. It is a 12" album-sized box set containing all eight remastered CDs in cardboard slipcases, as well as LP-sized booklets of photographs and artwork for each individual album. The Catalogue and continued touring (2010–2016) Although not officially confirmed, Ralf Hütter suggested that a second boxed set of their first three experimental albums—Kraftwerk, Kraftwerk 2 and Ralf and Florian—could be on its way, possibly seeing commercial release after their next studio album: "We've just never really taken a look at those albums. They've always been available, but as really bad bootlegs. Now we have more artwork. Emil has researched extra contemporary drawings, graphics, and photographs to go with each album, collections of paintings that we worked with, and drawings that Florian and I did. We took a lot of Polaroids in those days." Kraftwerk also released an iOS app called Kraftwerk Kling Klang Machine. The Lenbach House in Munich exhibited some Kraftwerk 3-D pieces in Autumn 2011. Kraftwerk performed three concerts to open the exhibit. Kraftwerk played at Ultra Music Festival in Miami on 23 March 2012. Initiated by Klaus Biesenbach, the Museum of Modern Art of New York organized an exhibit titled Kraftwerk – Retrospective 1 2 3 4 5 6 7 8 where the band performed their studio discography from Autobahn to Tour de France over the course of eight days to sell-out crowds. The exhibit later toured to the Tate Gallery as well as to K21 in Düsseldorf. Kraftwerk performed at the No Nukes 2012 Festival in Tokyo, Japan. Kraftwerk were also going to play at the Ultra Music Festival in Warsaw, but the event was cancelled; instead, Kraftwerk performed at Way Out West in Gothenburg. A limited edition version of the Catalogue box set was released during the retrospective, restricted to 2000 sets. 
Each box was individually numbered and inverted the colour scheme of the standard box. In December, Kraftwerk stated on their website that they would be playing their Catalogue in Düsseldorf and at London's Tate Modern. Kraftwerk tickets were priced at £60 in London, but fans compared that unfavourably to the $20 ticket price at New York's MoMA in 2012, which caused consternation. Even so, demand for tickets at the Tate was so high that it shut down the website. In March 2013, the band was not allowed to perform at a music festival in China due to unspecified "political reasons". In an interview in June after performing the eight albums of The Catalogue in Sydney, Ralf Hütter stated: "Now we have finished one to eight, now we can concentrate on number nine." In July, they performed at the 47th Montreux Jazz Festival. The band also played a 3-D concert on 12 July at Scotland's biggest festival – T in the Park – in Balado, Kinross, as well as on 20 July at Latitude Festival in Suffolk, and on 21 July at the Longitude Festival in Dublin. In October 2013 the band played four concerts, over two nights, in Eindhoven, Netherlands. The venue, the Evoluon (the former technology museum of Philips Electronics, now a conference center), was handpicked by Ralf Hütter for its retro-futuristic UFO-like architecture. Bespoke visuals of the building, with the saucer section descending from space, were displayed during the rendition of "Spacelab". In 2014, Kraftwerk brought their four-night, 3D Catalogue tour to the Walt Disney Concert Hall in Los Angeles and to NYC's United Palace Theatre. They also played at the Cirkus in Stockholm, Sweden and at the music festival Summer Sonic in Tokyo, Japan. In November 2014 the 3D Catalogue live set was played in Paris, France, at the brand new Fondation Louis-Vuitton from 6 to 14 November, and then in the iconic Paradiso concert hall in Amsterdam, Netherlands, where they had previously played in 1976. In 2015, Ralf Hütter, on learning that the Tour de France would be starting that year in the nearby Dutch city of Utrecht, decided that Kraftwerk would perform during the "Grand Depart". Eventually the band played three concerts on 3 and 4 July at TivoliVredenburg, performing Tour de France Soundtracks, and visited the start of the Tour in between. 3-D The Catalogue and Schneider's death (2017–present) In April 2017, Kraftwerk announced 3-D The Catalogue, a live album and video documenting performances of all eight albums in The Catalogue, which was released on 26 May 2017. It is available in multiple formats, the most extensive of which is a 4-disc Blu-ray set with a 236-page hardback book. The album was nominated for Grammy Awards for Best Dance/Electronic Album and Best Surround Sound Album at the ceremony that took place on 28 January 2018, winning the former, which became the band's first Grammy win. On 20 July 2018, at a concert in Stuttgart, German astronaut Alexander Gerst performed "Spacelab" with the band while aboard the International Space Station, joining via a live video link. Gerst played melodies using a tablet as his instrument alongside Hütter as a duet, and delivered a short message to the audience. On 20 July 2019, Kraftwerk headlined the Saturday night lineup on the Lovell Stage at Bluedot Festival, a music and science festival held annually at Jodrell Bank Observatory, Cheshire, UK. The 2019 festival celebrated the 50th anniversary of the Apollo 11 moon landing. On 21 April 2020, Florian Schneider died at age 73 after a brief battle with cancer.
On 3 July 2020, the German-language versions of Trans Europe Express, The Man Machine, Computer World, Techno Pop and The Mix, alongside 3-D The Catalogue, were released worldwide on streaming services for the first time. On 21 December 2020, Parlophone/WEA released Remixes, a digital compilation album. It includes remixed tracks taken from singles released 1991, 1999, 2000, 2004 and 2007, plus the previously unreleased "Non Stop", a version of "Musique Non-Stop" used as a jingle by MTV Europe beginning in 1993. The cover re-uses the cover from "Expo Remix". On October 30, 2021, Kraftwerk were inducted into the Rock & Roll Hall of Fame. In November 2021, Kraftwerk announced plans for a 2022 North American tour. Music and artistry Style Kraftwerk have been recognized as pioneers of electronic music as well as subgenres such as electropop, art pop, house music and synth-pop. In its early incarnation, the band pursued an avant-garde, experimental rock style inspired by the compositions of Karlheinz Stockhausen. Hütter has also listed the Beach Boys as a major influence. The group was also inspired by the funk music of James Brown and, later, punk rock. They were initially connected to the German krautrock scene. In the mid-1970s, they transitioned to an electronic sound which they described as "robot pop". Kraftwerk's lyrics dealt with post-war European urban life and technology—traveling by car on the Autobahn, traveling by train, using home computers, and the like. They were influenced by the modernist Bauhaus aesthetic, seeing art as inseparable from everyday function. Usually, the lyrics are very minimal but reveal both an innocent celebration of, and a knowing caution about, the modern world, as well as playing an integral role in the rhythmic structure of the songs. Many of Kraftwerk's songs express the paradoxical nature of modern urban life: a strong sense of alienation existing side by side with a celebration of the joys of modern technology. Starting with the release of Autobahn, Kraftwerk began to release a series of concept albums (Radio-Activity, Trans-Europe Express, The Man-Machine, Computer World, Tour de France Soundtracks). All of Kraftwerk's albums from Trans Europe Express onwards, except Tour de France Soundtracks, have been released in separate versions: one with German vocals for sale in Germany, Switzerland and Austria and one with English vocals for the rest of the world, with occasional variations in other languages when conceptually appropriate. Live performance has always played an important part in Kraftwerk's activities. Also, despite its live shows generally being based around formal songs and compositions, live improvisation often plays a noticeable role in its performances. This trait can be traced back to the group's roots in the first experimental Krautrock scene of the late 1960s, but, significantly, it has continued to be a part of its playing even as it makes ever greater use of digital and computer-controlled sequencing in its performances. Some of the band's familiar compositions have been observed to have developed from live improvisations at its concerts or sound-checks. Technological innovations Throughout their career, Kraftwerk have pushed the limits of music technology with some notable innovations, such as home-made instruments and custom-built devices. 
The group has always perceived their Kling Klang Studio as a complex musical instrument, as well as a sound laboratory; Florian Schneider in particular developed a fascination with music technology, with the result that the technical aspects of sound generation and recording gradually became his main fields of activity within the band. Alexei Monroe called Kraftwerk the "first successful artists to incorporate representations of industrial sounds into non-academic electronic music". Kraftwerk used a custom-built vocoder on their albums Ralf und Florian and Autobahn; the device was constructed by engineers P. Leunig and K. Obermayer of the Physikalisch-Technische Bundesanstalt Braunschweig. Hütter and Schneider hold a patent for an electronic drum kit with sensor pads, filed in July 1975 and issued in June 1977. It must be hit with metal sticks, which are connected to the device to complete a circuit that triggers analog synthetic percussion sounds. The band first performed in public with this device in 1973, on the television program Aspekte (on the German national channel Zweites Deutsches Fernsehen), where it was played by Wolfgang Flür. They created drum machines for Autobahn and Trans-Europe Express. On the Radio-Activity tour in 1976, Kraftwerk tested out an experimental light-beam-activated drum cage that allowed Flür to trigger electronic percussion through arm and hand movements. Unfortunately, the device did not work as planned, and it was quickly abandoned. The same year Ralf Hütter and Florian Schneider commissioned the Bonn-based "Synthesizerstudio Bonn, Matten & Wiechers" to design and build the Synthanorma Sequenzer with Intervallomat, a 4×8 / 2×16 / 1×32 step-sequencer system with some features that commercial products couldn't provide at that time. The music sequencer was used by the band for the first time to control the electronic sources creating the rhythmic sound of the album Trans-Europe Express. Since 2002, Kraftwerk's live performances have been conducted with the use of virtual technology (i.e. software replicating and replacing original analogue or digital equipment). According to Fritz Hilpert, "the mobility of music technology and the reliability of the notebooks and software have greatly simplified the realization of complex touring setups: we generate all sounds on the laptops in real time and manipulate them with controller maps. It takes almost no time to get our compact stage system set up for performance. [...] This way, we can bring our Kling-Klang Studio with us on stage. The physical light weight of our equipment also translates into an enormous ease of use when working with software synthesizers and sound processors. Every tool imaginable is within immediate reach or just a few mouse clicks away on the Internet." Reclusiveness and eccentricity The band is notoriously reclusive, providing rare and enigmatic interviews, using life-size mannequins and robots to conduct official photo shoots, refusing to accept mail and not allowing visitors at the Kling Klang Studio, the precise location of which they used to keep secret. Another notable example of this eccentric behavior was reported to Johnny Marr of the Smiths by Karl Bartos, who explained that anyone trying to contact the band for collaboration would be told the studio telephone did not have a ringer since, while recording, the band did not like to hear any kind of noise pollution.
Instead, callers were instructed to phone the studio precisely at a certain time, whereupon Ralf Hütter would answer, even though the phone never audibly rang. Chris Martin of Coldplay recalled in a 2007 article in Q magazine the process of requesting permission to use the melody from the track "Computer Love" on "Talk" from the album X&Y. He sent a letter through the lawyers of the respective parties and several weeks later received an envelope containing a handwritten reply that simply said "yes". Influence and legacy According to music journalist Neil McCormick, Kraftwerk might be "the most influential group in pop history". NME wrote: "'The Beatles and Kraftwerk' may not have the ring of 'the Beatles and the Stones', but nonetheless, these are the two most important bands in music history". AllMusic wrote that their music "resonates in virtually every new development to impact the contemporary pop scene of the late 20th century". Kraftwerk's musical style and image can be heard and seen in 1980s synth-pop acts such as Gary Numan, Ultravox, John Foxx, Orchestral Manoeuvres in the Dark, The Human League, Depeche Mode, Visage, and Soft Cell. Kraftwerk influenced other forms of music such as hip hop, house, and drum and bass, and they are also regarded as pioneers of the electro genre. Most notably, "Trans Europe Express" and "Numbers" were interpolated into "Planet Rock" by Afrika Bambaataa & the Soul Sonic Force, one of the earliest hip-hop/electro hits. Kraftwerk helped ignite the New York electro movement. Techno was created by three musicians from Detroit, often referred to as the 'Belleville Three' (Juan Atkins, Kevin Saunderson & Derrick May), who fused the repetitive melodies of Kraftwerk with funk rhythms. The Belleville Three were heavily influenced by Kraftwerk because the group's sound appealed to the middle-class black residents of Detroit at the time. Depeche Mode's composer Martin Gore emphasized: "For anyone of our generation involved in electronic music, Kraftwerk were the godfathers". Vince Clarke of Erasure, Yazoo and Depeche Mode is also a notable disco and Kraftwerk fan. Daniel Miller, founder of Mute Records, purchased the vocoder used by Kraftwerk in their early albums, comparing it to owning "the guitar Jimi Hendrix used on 'Purple Haze'". Andy McCluskey and Paul Humphreys, founding members of OMD, have stated that Kraftwerk was a major reference point for their early work, and covered "Neon Lights" on the 1991 album Sugar Tax. The electronic band Ladytron were inspired by Kraftwerk's song "The Model" when they composed their debut single "He Took Her to a Movie". Aphex Twin has named Kraftwerk as one of his biggest influences and cited Computer World as particularly influential on his music and sound. Björk has cited the band as one of her main musical influences. Electronic musician Kompressor has cited Kraftwerk as an influence. The band was also mentioned in the song "Rappers We Crush" by Kompressor and MC Frontalot ("I hurry away, get in my Chrysler. Oh, the dismay!/Someone's replaced all of my Backstreet Boys with Kraftwerk tapes!"). Dr. Alex Paterson of the Orb listed The Man-Machine as one of his 13 favourite albums of all time. According to NME, Kraftwerk's pioneering "robot pop" also spawned groups like The Prodigy and Daft Punk. Kraftwerk inspired many acts from other styles and genres. David Bowie's "V-2 Schneider", from 1977's Heroes album, was a tribute to Florian Schneider.
Post-punk bands Joy Division and New Order were heavily influenced by the band. Joy Division frontman Ian Curtis was a fan, and showed his colleagues records that would influence their music. New Order also sampled "Uranium" in its biggest hit "Blue Monday". Siouxsie and the Banshees recorded a cover of "Hall of Mirrors" on their 1987 album Through the Looking Glass, which was lauded by Ralf Hütter: "In general, we consider cover versions as an appreciation of our work. The version of 'Hall of Mirrors' by Siouxsie and the Banshees is extraordinary, just like the arrangements of Alexander Bălănescu for his Balanescu Quartet release [of Possessed, 1992]. We also like the album El Baile Alemán of Señor Coconut a lot." Members of Blondie have admitted on several occasions that Kraftwerk were an important reference for their sound by the time they were working on their third album Parallel Lines. The worldwide hit "Heart of Glass" turned radically from an initial reggae-flavoured style to its distinctive electronic sound in order to imitate the technological approach of Kraftwerk's albums and adapt it to a disco concept. U2 recorded a cover version of "Neon Lights", as did Simple Minds. U2 included "Neon Lights" as the B-side of their 2004 single "Vertigo", while Simple Minds included theirs on an all-covers album of the same name. The LCD Soundsystem song "Get Innocuous!" is built on a sample of "The Robots". Rammstein also covered their song "Das Modell", releasing it as a non-album single in 1997. John Frusciante cited the group's ability to experiment as an inspiration when working in a recording studio. The 1998 comedy The Big Lebowski features a fictional band called "Autobahn", a parody of Kraftwerk and their 1974 record Autobahn. In January 2018, BBC Radio 4 broadcast the 30-minute documentary Kraftwerk: Computer Love, which examined "how Kraftwerk's classic album Computer World has changed people's lives." In October 2019, Kraftwerk were nominated for induction into the Rock and Roll Hall of Fame for 2020. On May 12, 2021, Kraftwerk were announced as an official inductee into the Rock and Roll Hall of Fame for the Class of 2021. Band members Current members Ralf Hütter – lead vocals, vocoder, synthesizers, keyboards (1969–1983, 1986–present); organ, drums and percussion, bass guitar, guitar (1969–1974) Fritz Hilpert – electronic percussion (1987–present) Henning Schmitz – electronic percussion, live keyboards (1991–present) Falk Grieffenhagen – live video technician (2012–present) Former members Florian Schneider – synthesizers, background vocals, vocoder, computer-generated vocals, acoustic and electronic flute, live saxophone, percussion, electric guitar, violin (1969–2008, died 2020) Karl Bartos – electronic percussion, vocals, live vibraphone, live keyboards (1975–1990) Wolfgang Flür – electronic percussion (1973–1987) Stefan Pfaffe – live video technician (2008–2012) Fernando Abrantes – electronic percussion, synthesizer (1991) Klaus Röder – electric guitar, electronic violin (1974) Emil Schult – electric guitar, electronic violin (1973) Michael Rother – electric guitar (1971) Klaus Dinger – drums (1969–1971; died 2008) Plato Kostic (a.k.a.
Plato Riviera) – bass guitar (1973) Andreas Hohmann – drums (1969) Thomas Lohmann – drums (1969) Eberhard Kranemann – bass guitar (1969–1971) Houschäng Nejadépour – electric guitar (1969–1971) Peter Schmidt – drums (1969–1971) Karl "Charly" Weiss – drums (1969; died 2009) Timeline Discography Kraftwerk (1970) Kraftwerk 2 (1972) Ralf und Florian (1973) Autobahn (1974) Radio-Activity (1975) Trans-Europe Express (1977) The Man-Machine (1978) Computer World (1981) Electric Café (1986) The Mix (1991) Tour de France Soundtracks (2003) Videography Romantic Warriors IV: Krautrock (2019) Awards and achievements Grammy Awards |- | 1982 | "Computer World" | Best Rock Instrumental Performance | |- | 2006 | Minimum-Maximum | Best Dance/Electronic Album | |- | 2014 | Kraftwerk | Lifetime Achievement Award | |- | 2015 | Autobahn | Hall of Fame | |- | rowspan="2"| 2018 | rowspan="2"| 3-D The Catalogue | Best Dance/Electronic Album | |- | Best Surround Sound Album | |} See also List of ambient music artists References Sources Further reading Tim Barr, "Kraftwerk: From Düsseldorf to the Future" 1998 Vanni Neri & Giorgio Campani: "A Short Introduction to Kraftwerk" 2000 Albert Koch: "Kraftwerk: The Music Makers" 2002 Kraftwerk: "Kraftwerk Photobook" 2005 (included in the Minimum-Maximum Notebook set) Sean Albiez and David Pattie: Kraftwerk: Music Non-Stop 2010 David Buckley: Kraftwerk: Publikation 2012 Toby Mott: Kraftwerk: 45 RPM 2012 The Guardian: Kraftwerk sue makers of Kraftwerk charging devices 2015 External links Kraftwerk: Free Listening at SoundCloud ANTENNA – The International Kraftwerk Mailing List (since 2003 September) Kraftwerk FAQ – The Kraftwerk FAQ: Frequently asked questions and answers BBC Radio 1 Kraftwerk documentary– 2006 Kraftwerk documentary with Alex Kapranos Kraftwerk Vinyl Site for collectors AllKraftwerk Mats's Kraftwerk Page with lots of images and information, since 1997 Good evening Kraftwerk, good evening Stuttgart! by the European Space Agency 1969 establishments in West Germany 1970 establishments in West Germany Art pop musicians Astralwerks artists Avant-garde music groups Electropop groups Elektra Records artists German electronic rock musical groups EMI Records artists German electronic music groups German synthpop groups Grammy Award winners for dance and electronic music Grammy Lifetime Achievement Award winners Krautrock musical groups Musical groups established in 1969 Musical groups from Düsseldorf Musical quartets Mute Records artists Parlophone artists Progressive pop musicians Philips Records artists Vertigo Records artists Warner Records artists
2816674
https://en.wikipedia.org/wiki/Component-based%20software%20engineering
Component-based software engineering
Component-based software engineering (CBSE), also called component-based development (CBD), is a branch of software engineering that emphasizes the separation of concerns with respect to the wide-ranging functionality available throughout a given software system. It is a reuse-based approach to defining, implementing and composing loosely coupled independent components into systems. This practice aims to bring about an equally wide-ranging degree of benefits in both the short-term and the long-term for the software itself and for organizations that sponsor such software. Software engineering practitioners regard components as part of the starting platform for service-orientation. Components play this role, for example, in web services, and more recently, in service-oriented architectures (SOA), whereby a component is converted by the web service into a service and subsequently inherits further characteristics beyond that of an ordinary component. Components can produce or consume events and can be used for event-driven architectures (EDA). Definition and characteristics of components An individual software component is a software package, a web service, a web resource, or a module that encapsulates a set of related functions (or data). All system processes are placed into separate components so that all of the data and functions inside each component are semantically related (just as with the contents of classes). Because of this principle, it is often said that components are modular and cohesive. With regard to system-wide co-ordination, components communicate with each other via interfaces. When a component offers services to the rest of the system, it adopts a provided interface that specifies the services that other components can utilize, and how they can do so. This interface can be seen as a signature of the component - the client does not need to know about the inner workings of the component (implementation) in order to make use of it. This principle results in components referred to as encapsulated. The UML illustrations within this article represent provided interfaces by a lollipop-symbol attached to the outer edge of the component. However, when a component needs to use another component in order to function, it adopts a used interface that specifies the services that it needs. In the UML illustrations in this article, used interfaces are represented by an open socket symbol attached to the outer edge of the component. Another important attribute of components is that they are substitutable, so that a component can replace another (at design time or run-time), if the successor component meets the requirements of the initial component (expressed via the interfaces). Consequently, components can be replaced with either an updated version or an alternative without breaking the system in which the component operates. As a rule of thumb for engineers substituting components, component B can immediately replace component A, if component B provides at least what component A provided and uses no more than what component A used. Software components often take the form of objects (not classes) or collections of objects (from object-oriented programming), in some binary or textual form, adhering to some interface description language (IDL) so that the component may exist autonomously from other components in a computer. In other words, a component acts without changing its source code. 
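To make the interface contract described above concrete, the short sketch below shows what a provided interface, a component implementing it, and a client component with a used (required) interface might look like. It is a minimal illustration rather than a prescription of any particular component model; the names Spellchecker, BasicSpellchecker and Editor are invented for the example, and a real component would normally also be packaged and described through an interface description language or a framework-specific descriptor.

// Provided interface: the only contract clients may depend on.
interface Spellchecker {
    boolean isCorrect(String word);
}

// One component that offers the provided interface.
class BasicSpellchecker implements Spellchecker {
    private final java.util.Set<String> dictionary =
            java.util.Set.of("component", "interface", "system");

    @Override
    public boolean isCorrect(String word) {
        return dictionary.contains(word.toLowerCase());
    }
}

// A client component; its constructor parameter is its used (required) interface.
public class Editor {
    private final Spellchecker spellchecker;

    public Editor(Spellchecker spellchecker) {
        this.spellchecker = spellchecker; // any conforming implementation can be substituted
    }

    public String check(String word) {
        return spellchecker.isCorrect(word) ? word : word + " [unknown]";
    }

    public static void main(String[] args) {
        Editor editor = new Editor(new BasicSpellchecker());
        System.out.println(editor.check("component")); // prints: component
    }
}

Under the substitution rule of thumb given above, any replacement for BasicSpellchecker would be acceptable as long as it still provides isCorrect and requires nothing more from its environment.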
The component's behavior may nonetheless be adapted by the application, through whatever extensibility its writer has provided, without modifying that source code. When a component is to be accessed or shared across execution contexts or network links, techniques such as serialization or marshalling are often employed to deliver the component to its destination. Reusability is an important characteristic of a high-quality software component. Programmers should design and implement software components in such a way that many different programs can reuse them. Furthermore, component-based usability testing should be considered when software components directly interact with users. It takes significant effort and awareness to write a software component that is effectively reusable. The component needs to be: fully documented; thoroughly tested; robust, with comprehensive input-validity checking; able to pass back appropriate error messages or return codes; and designed with an awareness that it will be put to unforeseen uses. In the 1960s, programmers built scientific subroutine libraries that were reusable in a broad array of engineering and scientific applications. Though these subroutine libraries reused well-defined algorithms in an effective manner, they had a limited domain of application. Commercial sites routinely created application programs from reusable modules written in assembly language, COBOL, PL/1 and other second- and third-generation languages using both system and user application libraries. Modern reusable components encapsulate both data structures and the algorithms that are applied to the data structures. Component-based software engineering builds on prior theories of software objects, software architectures, software frameworks and software design patterns, and the extensive theory of object-oriented programming and the object-oriented design of all these. It claims that software components, like the idea of hardware components used for example in telecommunications, can ultimately be made interchangeable and reliable. On the other hand, it is argued that it is a mistake to focus on independent components rather than the framework (without which they would not exist). History The idea that software should be componentized - built from prefabricated components - first became prominent with Douglas McIlroy's address at the NATO conference on software engineering in Garmisch, Germany, in 1968, titled Mass Produced Software Components. The conference set out to counter the so-called software crisis. McIlroy's subsequent inclusion of pipes and filters into the Unix operating system was the first implementation of an infrastructure for this idea. Brad Cox of Stepstone largely defined the modern concept of a software component. He called them Software ICs and set out to create an infrastructure and market for these components by inventing the Objective-C programming language. (He summarizes this view in his book Object-Oriented Programming - An Evolutionary Approach, 1986.) Software components are used in two different contexts and in two kinds: i) using components as parts to build a single executable; or ii) treating each executable as a component in a distributed environment, where components collaborate with each other using internet or intranet communication protocols for IPC (Inter-Process Communication). What has been described above belongs to the former kind, while what follows belongs to the latter kind. IBM led the path with their System Object Model (SOM) in the early 1990s.
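Whichever kind is used, a component that crosses a process or network boundary relies on the serialization or marshalling step mentioned above. The following hedged sketch uses Java's built-in object serialization to turn a small piece of component state into bytes that could travel across such a boundary and be reconstructed on the other side; the PriceQuote class and its fields are invented for the example, and real component models such as SOM, COM/DCOM and CORBA define their own, more elaborate marshalling formats.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical state object that one component hands to another across a boundary.
class PriceQuote implements Serializable {
    private static final long serialVersionUID = 1L;
    final String symbol;
    final double price;

    PriceQuote(String symbol, double price) {
        this.symbol = symbol;
        this.price = price;
    }
}

public class MarshallingDemo {
    public static void main(String[] args) throws Exception {
        PriceQuote original = new PriceQuote("ACME", 42.0);

        // Marshal: object -> bytes (in practice written to a socket or message queue).
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(original);
        }

        // Unmarshal on the receiving side: bytes -> object.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buffer.toByteArray()))) {
            PriceQuote copy = (PriceQuote) in.readObject();
            System.out.println(copy.symbol + " @ " + copy.price); // ACME @ 42.0
        }
    }
}

The same basic idea, state converted to a transportable form and back, underlies the remote-invocation mechanisms of the distributed component technologies listed later in this article.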
As a reaction to SOM, Microsoft paved the way for actual deployment of component software with Object Linking and Embedding (OLE) and the Component Object Model (COM). Many successful software component models now exist. Architecture A computer running several software components is often called an application server. This combination of application servers and software components is usually called distributed computing. A typical real-world application of this is in, e.g., financial applications or business software. Component models A component model is a definition of the properties that components must satisfy, together with the methods and mechanisms for the composition of components. Over recent decades, researchers and practitioners have proposed several component models with different characteristics; classifications of the existing component models have been proposed in the literature. Examples of component models are: the Enterprise JavaBeans (EJB) model, the Component Object Model (COM) model, the .NET model, the X-MAN component model, and the Common Object Request Broker Architecture (CORBA) component model. Technologies Business object technologies Newi Component-based software frameworks for specific domains Advanced Component Framework Earth System Modeling Framework (ESMF) MASH IoT Platform for Asset Management KOALA component model developed for software in consumer electronics React (JavaScript library) Software Communications Architecture (JTRS SCA) Component-oriented programming Bundles as defined by the OSGi Service Platform Component Object Model (OCX/ActiveX/COM) and DCOM from Microsoft TASCS - SciDAC Center for Technology for Advanced Scientific Component Software Eiffel programming language Enterprise JavaBeans from Sun Microsystems (now Oracle) Flow-based programming Fractal component model from ObjectWeb MidCOM component framework for Midgard and PHP Oberon, Component Pascal, and BlackBox Component Builder rCOS method of component-based model driven design from UNU-IIST SOFA component system from ObjectWeb The System.ComponentModel namespace in Microsoft .NET Unity developed by Unity Technologies Unreal Engine developed by Epic Games UNO from the OpenOffice.org office suite VCL and CLX from Borland and similar free LCL library. XPCOM from Mozilla Foundation Compound document technologies Active Documents in Oberon System and BlackBox Component Builder KParts, the KDE compound document technology Object linking and embedding (OLE) OpenDoc Distributed computing software components .NET Remoting from Microsoft 9P distributed protocol developed for Plan 9, and used by Inferno and other systems.
CORBA and the CORBA Component Model from the Object Management Group D-Bus from the freedesktop.org organization DCOM and later versions of COM (and COM+) from Microsoft DSOM and SOM from IBM (now scrapped) Ice from ZeroC Java EE from Sun Kompics from SICS Universal Network Objects (UNO) from OpenOffice.org Web services REST Zope from Zope Corporation AXCIOMA (the component framework for distributed, real-time, and embedded systems) by Remedy IT COHORTE the cross-platform runtime for executing and managing robust and reliable distributed Service-oriented Component-based applications, by isandlaTech DX-MAN Service Model Generic programming emphasizes separation of algorithms from data representation Interface description languages (IDLs) Open Service Interface Definitions (OSIDs) Part of both COM and CORBA Platform-Independent Component Modeling Language SIDL - Scientific Interface Definition Language Part of the Babel Scientific Programming Language Interoperability System (SIDL and Babel are core technologies of the CCA and the SciDAC TASCS Center - see above.) SOAP IDL from World Wide Web Consortium (W3C) WDDX XML-RPC, the predecessor of SOAP Inversion of control (IoC) and Plain Old C++/Java Object (POCO/POJO) component frameworks Pipes and filters Unix operating system See also Business logic Modular programming Service Component Architecture (SCA) Software Communications Architecture (JTRS SCA) Third-party software component Web service Web components References Further reading Brad J. Cox, Andrew J. Novobilski (1991). Object-Oriented Programming: An Evolutionary Approach. 2nd ed. Addison-Wesley, Reading Bertrand Meyer (1997). Object-Oriented Software Construction. 2nd ed. Prentice Hall. George T. Heineman, William T. Councill (2001). Component-Based Software Engineering: Putting the Pieces Together. Addison-Wesley Professional, Reading 2001 Richard Veryard (2001). Component-based business : plug and play. London : Springer. Clemens Szyperski, Dominik Gruntz, Stephan Murer (2002). Component Software: Beyond Object-Oriented Programming. 2nd ed. ACM Press - Pearson Educational, London 2002 External links Why Software Reuse has Failed and How to Make It Work for You by Douglas C. Schmidt What is the True essence and reality of CBD? (Evidence to show existing CBD paradigm is flawed) comprehensive list of Component Systems on SourceForge Brief Introduction to Real COP (Component Oriented Programming) by Using a small GUI application as an example Object-oriented programming Software architecture
575970
https://en.wikipedia.org/wiki/Tenedos
Tenedos
Tenedos (, Tenedhos, ), or Bozcaada in Turkish, is an island of Turkey in the northeastern part of the Aegean Sea. Administratively, the island constitutes the Bozcaada district of Çanakkale province. With an area of it is the third largest Turkish island after Imbros (Gökçeada) and Marmara. In 2018, the district had a population of 3023. The main industries are tourism, wine production and fishing. The island has been famous for its grapes, wines and red poppies for centuries. It is a former bishopric and presently a Latin Catholic titular see. Tenedos is mentioned in both the Iliad and the Aeneid, in the latter as the site where the Greeks hid their fleet near the end of the Trojan War in order to trick the Trojans into believing the war was over and into taking the Trojan Horse within their city walls. The island was important throughout classical antiquity despite its small size due to its strategic location at the entrance of the Dardanelles. In the following centuries, the island came under the control of a succession of regional powers, including the Achaemenid Persian Empire, the Delian League, the empire of Alexander the Great, the Kingdom of Pergamon, the Roman Empire and its successor, the Byzantine Empire, before passing to the Republic of Venice. As a result of the War of Chioggia (1381) between Genoa and Venice the entire population was evacuated and the town was demolished. The Ottoman Empire established control over the deserted island in 1455. During Ottoman rule, it was resettled by both Greeks and Turks. In 1807, the island was temporarily occupied by the Russians. During this invasion the town was burnt down and many Turkish residents left the island. Under Greek administration between 1912 and 1923, Tenedos was ceded to Turkey with the Treaty of Lausanne (1923) which ended the Turkish War of Independence following the dissolution of the Ottoman Empire in the aftermath of World War I. The treaty called for a quasi-autonomous administration to accommodate the local Greek population and excluded the Greeks on the two islands of Imbros and Tenedos from the wider population exchanges that took place between Greece and Turkey. Tenedos remained majority Greek until the late 1960s and early 1970s, when many Greeks emigrated because of systemic discrimination and better opportunities elsewhere. Starting with the second half of the 20th century, there has been immigration from mainland Anatolia, especially Romani from the town of Bayramiç. Name The island is known in English as both Tenedos (the Greek name) and Bozcaada (the Turkish name). Over the centuries many other names have been used. Documented ancient Greek names for the island are Leukophrys, Calydna, Phoenice and Lyrnessus (Pliny, HN 5,140). The official Turkish name for the island is Bozcaada; the Turkish word "boz" means either a barren land or grey to brown color (sources indicate both of these meanings may have been associated with the island) and "ada" meaning island. The name Tenedos was derived, according to Apollodorus of Athens, from the Greek hero Tenes, who ruled the island at the time of the Trojan War and was killed by Achilles. Apollodorus writes that the island was originally known as Leocophrys until Tenes landed on the island and became the ruler. The island became known as Bozcaada when the Ottoman Empire took the island over. 
Tenedos remained a common name for the island along with Bozcaada after the Ottoman conquest of the island, often with Greek populations and Turkish populations using different names for the island. Geography and climate Tenedos is roughly triangular in shape. Its area is . It is the third largest Turkish island after Marmara Island and Imbros (Gökçeada). It is surrounded by small islets, and is situated close to the entrance of the Dardanelles. It is the only rural district (ilçe) of Turkey without any villages, and has only one major settlement, the town center. Geological evidence suggests that the island broke away from the mainland producing a terrain that is mainly plains in the west with hills in the Northeast, and the highest point is . The central part of the island is the most amenable to agricultural activities. There is a small pine forest in the Southwestern part of the island. The westernmost part of the island has large sandy areas not suitable for agriculture. The island has a Mediterranean climate with strong northern winds called etesians. Average temperature is and average annual precipitation is . There are a number of small streams running from north to south at the southwestern part of the island. Freshwater sources though are not enough for the island so water is piped in from the mainland. History Prehistory Archeological findings indicate that the first human settlement on the island dates back to the Early Bronze Age II (ca. 3000–2700 BC). Archaeological evidence suggests the culture on the island had elements in common with the cultures of northwestern Anatolia and the Cycladic Islands. Most settlement was on the small bays on the east side of the island which formed natural harbours. Settlement archaeological work was done quickly and thus did not find definitive evidence of grape cultivation on the island during this period. However, grape cultivation was common on neighboring islands and the nearby mainland during this time. According to a reconstruction, based on the myth of Tenes, Walter Leaf stated that the first inhabitants of the island could be Pelasgians, who were driven out of the Anatolian mainland by the Phrygians. According to the same author, there are possible traces of Minoan and Mycenaean Greek influence in the island. Antiquity Ancient Tenedos is referred to in Greek and Roman mythology, and archaeologists have uncovered evidence of its settlement from the Bronze Age. It would stay prominent through the age of classical Greece, fading by the time of the dominance of ancient Rome. Although a small island, Tenedos's position in the straits and its two harbors made it important to the Mediterranean powers over the centuries. For nine months of the year, the currents and the prevailing wind, the etesian, came, and still come, from the Black Sea hampering sailing vessels headed for Constantinople. They had to wait a week or more at Tenedos, waiting for the favorable southerly wind. Tenedos thus served as a shelter and way station for ships bound for the Hellespont, Propontis, Bosphorus, and places farther on. Several of the regional powers captured or attacked the island, including the Athenians, the Persians, the Macedonians under Alexander the Great, the Seleucids and the Attalids. Mythology Homer mentions Apollo as the chief deity of Tenedos in his time. According to him, the island was captured by Achilles during the siege of Troy. Nestor obtained his slave Hecamede there during one of Achilles's raids. 
Nestor also sailed back from Troy stopping at Tenedos and island-hopping to Lesbos. The Odyssey mentions that the Greeks, leaving Troy after winning the war, first traveled to nearby Tenedos, sacrificed there, and then went to Lesbos before pausing to choose between alternative routes. Homer, in the Iliad, mentions that between Tenedos and Imbros there was a wide cavern in which Poseidon stayed his horses. Virgil, in the Aeneid, described the Achaeans hiding their fleet at the bay of Tenedos, toward the end of the Trojan War, to trick Troy into believing the war was over and allowing them to take the Trojan Horse within Troy's city walls. In the Aeneid, it is also the island from which twin serpents came to kill the Trojan priest Laocoön and his sons as punishment for throwing a spear at the Trojan Horse. According to Pindar (Nemean Odes no. 11), the island was founded after the war by bronze-clad warriors from Amyklai, traveling with Orestes. According to myth, Tenes was the son of Cycnus, himself the son of Poseidon and Calyce. Philonome, Cycnus's second wife and hence Tenes's stepmother, tried to seduce Tenes and was rejected. She then accused him of rape, leading to his abandonment at sea along with his sister. They washed up on the island of Leucophrys, where he was proclaimed king and the island was renamed Tenedos in his honor. When Cycnus realized the lie behind the allegations, he took a ship to apologize to his son. The myths differ on whether they reconciled. According to one version, when the father landed on the island of Tenedos, Tenes cut the cord holding his boat. The phrase 'hatchet of Tenes' came to mean resentment that could not be soothed. Another myth had Achilles landing on Tenedos while sailing from Aulis to Troy. There his navy stormed the island, and Achilles fought Tenes, in this myth a son of Apollo, and killed him, not knowing Tenes's lineage and hence unaware of the danger of Apollo's revenge. Achilles would also later kill Tenes's father, Cycnus, at Troy. In Sophocles's Philoctetes, written in 409 BC, a serpent bit Philoctetes in the foot at Tenedos. According to Hyginus, the goddess Hera, upset with Philoctetes for helping Hercules, had sent the snake to punish him. His wound refused to heal, and the Greeks abandoned him, before going back to him for help later during the attack on Troy. Athenaeus quoted Nymphodorus's remarks on the beauty of the women of Tenedos. Callimachus talked of a myth in which Ino's son Melikertes washed up dead in Tenedos after being thrown into the sea by his mother, who killed herself too; the residents, Lelegians, built an altar for Melikertes and started a ritual of a woman sacrificing her infant child when the town's need was dire. The woman would then be blinded. The myths also added that the custom was abolished when Orestes' descendants settled the place. Neoptolemus stayed two days at Tenedos, following the advice of Thetis, before going to the land of the Molossians together with Helenus. Archaic period It was at Tenedos, along with Lesbos, that the first coins with Greek writing on them were minted. Figures of bunches of grapes and wine vessels such as amphorae and kantharoi were stamped on coins. The very first coins had a twin head of a male and a female on the obverse side. The early coins were of silver and had a double-headed axe imprinted on them. Aristotle considered the axe as symbolizing the decapitation of those convicted of adultery under a Tenedian decree.
The axe-head was either a religious symbol or the seal of a trade unit of currency. Apollo Smintheus, a god who both protected against and brought about plague, was worshipped in late Bronze Age Tenedos. Strabo's Geography writes that Tenedos "contains an Aeolian city and has two harbours, and a temple of Apollo Smintheus" (Strabo's Geography, Vol. 13). The relationship between Tenedos and Apollo is mentioned in Book I of the Iliad where a priest calls to Apollo with the name "O god of the silver bow, that protectest Chryse and holy Cilla and rulest Tenedos with thy might"(Iliad I). During the later part of the Bronze Age and during the Iron Age, the place served as a major point between the Mediterranean and the Black Sea. Homer's Iliad mentions the Tenedos of this era. The culture and artisanship of the area, as represented by pottery and metal vessels recovered from graves, matched that of the northeastern Aegean. Archaeologists have found no evidence to substantiate Herodotus's assertion Aeolians had settled in Tenedos by the Bronze Age. Homer mentions Tenedos as a base for the Achaean fleet during the Trojan war. The Iron Age settlement of the northeast Aegean was once attributed to Aeolians, descendants of Orestes and hence of the House of Atreus in Mycenae, from across the Aegean from Thessaly, Boiotia and Akhaia, all in mainland Greece. Pindar, in his 11th Nemean Ode, hints at a group of Peloponnesians, the children of the fighters at Troy, occupying Tenedos, with Orestes, the son of Agamemnon, landing straight on the island; specifically he refers to a Spartan Peisandros and his descendant Aristagoras, with Peisandaros having come over with Orestes. Strabo places the start of the migration sixty years after the Trojan war, initiated by Orestes's son, Penthilos, with the colonization continuing onto Penthilos's grandson. The archaeological record provides no supporting evidence for the theory of Aiolian occupation. During the pre-archaic period, adults in Lesbos were buried by placing them in large jars, and later clay coverings were used, similar to Western Asia Minor. Still later, Tenedians began to both bury and cremate their adults in pits buttressed with stone along the walls. Children were still buried covered in jars. Some items buried with the person, such as pottery, gifts and safety-pin-like clasps, resemble what is found in Anatolia, in both style and drawings and pictures, more than they resemble burial items in mainland Greece. While human, specifically infant, sacrifice has been mentioned in connection with Tenedos's ancient past, it is now considered mythical in nature. The hero Paleomon in Tenedos was worshipped by a cult in that island, and the sacrifices were attributed to the cult. At Tenedos, people did sacrifice a newborn calf dressed in buskins, after treating the cow like a pregnant women giving birth; the person who killed the calf was then stoned and driven out into a life on the sea. According to Harold Willoughby, a belief in the calf as a ritual incarnation of God drove this practice. Classical period From the Archaic to Classical period, the archaeological evidence of well-stocked graves establishes Tenedos's continuing affluence. Tall, broad-mouthed containers show grapes and olives were likely processed during this time. They were also used to bury dead infants. By the fourth century BC, grapes and wine had become relevant to the economy of the island. Tenedians likely exported surplus wine. 
Writings from this era talk of a shortage of agricultural land, indicating a booming settlement. A dispute with the neighboring island of Sigeum was arbitrated by Periander of Corinth, who handed over political control of a swath of the mainland to Tenedos. In the first century BC this territory was eventually incorporated into Alexandria Troas. According to some accounts, the Greek philosopher Thales died in Tenedos. Cleostratus, an astronomer, lived and worked in Tenedos, though it is unknown whether he met Thales there. Cleostratus is one of the founders of Greek astronomy, influenced as it was by the reception of Babylonian knowledge. Athens had a naval base on the island in the fifth and fourth centuries BC. Demosthenes mentions Apollodorus, a trierarch commanding a ship, talking of buying food during a stopover at Tenedos, where he would pass the trierarchy to Polycles. In 493 BC, the Persians overran Tenedos along with other Greek islands. During his reign, Philip II of Macedon, father of Alexander the Great, sent a Macedonian force sailing against the Persian fleet. Along with other Aegean islands such as Lesbos, Tenedos also rebelled against Persian dominance at this time. Athens seemingly augmented its naval base with a fleet at the island around 450 BC. During the campaign of Alexander the Great against the Persians, Pharnabazus, the Persian commander, laid siege to Tenedos with a hundred ships and eventually captured it, as Alexander could not send a fleet in time to save the island. The island's walls were demolished and the islanders had to accept the old treaty with the Persian emperor Artaxerxes II: the Peace of Antalcidas. Later, Alexander's commander Hegelochus of Macedon captured the island from the Persians. Alexander made an alliance with the people in Tenedos in order to limit Persian naval power. He also took on board 3000 Greek mercenaries and oarsmen from Tenedos in his army and navy. The land was not suitable for large-scale grazing or extensive agriculture. Local grapes and wines were mentioned in inscriptions and on coins, but Pliny and other contemporary writers did not mention grapes and wines at the island. Most exports were via sea, and both necessities and luxuries had to be imported, again by sea. Unlike in Athens, it is unclear whether Tenedos ever had a democracy. Marjoram (oregano) from Tenedos was one of the relishes used in Greek cuisine. The Tenedians punished adulterers by cutting off their heads with an axe. Aristotle wrote about the social and political structure of Tenedos. He found it notable that a large part of the populace worked in occupations related to ferries, possibly hundreds in a population of thousands. Pausanias noted that some common proverbs in Greek originated from customs of the Tenedians. "He is a man of Tenedos" was used to allude to a person of unquestionable integrity, and "to cut with the Tenedian axe" was a full and final 'no'. Lykophron, writing in the second century BC, referred to the deity Melikertes as the "baby-slayer". Xenophon described the Spartans sacking the place in 389 BC, but being beaten back by an Athenian fleet when they tried again two years later. The Periplus of Pseudo-Scylax records that the astronomer Kleostratos was from Tenedos. Hellenistic period In the Hellenistic period, the Egyptian goddess Isis was also worshipped at Tenedos. There she was associated closely with the sun, with her name and title reflecting that position. Roman period During the Roman occupation of Greece, Tenedos too came under Roman rule.
The island became a part of the Roman Republic in 133 BC, when Attalus III, the king of Pergamon, died, leaving his territory to the Romans. The Romans constructed a new port at Alexandria Troas, on the Dardanelle Strait. This led to Tenedos's decline. Tenedos lost its importance during this period. Virgil, in Aeneid, stated the harbour was deserted and ships could not moor in the bay during his time. Processing of grapes seems to have been abandoned. Olive cultivation and processing did possibly continue, though there was likely no surplus to export. Archaeological evidence indicates the settlement was mostly in the town, with only a few scattered sites in the countryside. According to Strabo there was a kinship between the peoples of Tenedos and Tenea (a town at Corinth). According to Cicero a number of deified human beings were worshipped in Greece: in Tenedos there was Tenes. Pausanias, mention at his work Description of Greece that Periklyto, who was from Tenedos, has dedicated some axes at Delphoi. During the Third Mithridatic War, in around 73 BC, Tenedos was the site of a large naval battle between Roman commander Lucullus and the fleet of the king of Pontus, Mithridates, commanded by Neoptolemus. This Battle of Tenedos was won decisively by the Romans. Around 81–75 BC, Verres, legate of the Governor of Cilicia, Gaius Dolabella, plundered the island, carrying off the statue of Tenes and some money. Towards 6 BC, geographical change made the mainland port less useful, and Tenedos became relevant again. According to Dio Chrysostom and Plutarch, Tenedos was famous for its pottery ca AD 100. Under Rome's protection, Tenedos restarted its mint after a break of more than a century. The mint continued with the old designs, improving on detail and precision. Cicero, writing in this era, noted the temple built to honor Tenes, the founder whose name the island received, and of the harsh justice system of the populace. Byzantine period When Constantinople became a prominent city in the Roman Empire, from AD 350 on, Tenedos became a crucial trading post. Emperor Justinian I ordered the construction of a large granary on Tenedos and ferries between the island and Constantinople became a major activity on the island. Ships carrying grain from Egypt to Constantinople stopped at Tenedos when the sea was unfavorable. The countryside was likely not heavily populated or utilized. There were vineyards, orchards and corn fields, at times abandoned due to disputes. The Eastern Orthodox Church placed the diocese of Tenedos under the metropolitanate of Mytilini during the ninth century, and promoted it to its own metropolitanate in early fourteenth century. By this time Tenedos was part of the Byzantine Empire but its location made it a key target of the Venetians, the Genoese, and the Ottoman Empire. The weakened Byzantine Empire and wars between Genoa and Venice for trade routes made Tenedos a key strategic location. In 1304, Andrea Morisco, a Genoese adventurer, backed by a title from the Byzantine emperor Andronikos III, took over Tenedos. Later, sensing political tension in the Byzantine empire just before the Second Byzantine Civil War, the Venetians offered 20,000 ducats in 1350 to John V Palaiologos for control of Tenedos. When John V was captured in the Byzantine civil war, he was deported to Tenedos by John VI Kantakouzenos. John V eventually claimed victory in the civil war, but the cost was significant debt, mainly to the Venetians. 
In the summer of 1369, John V sailed to Venice and apparently offered the island of Tenedos in exchange for twenty-five thousand ducats and his own crown jewels. However, his son (Andronikos IV Palaiologos), acting as the regent in Constantinople, rejected the deal possibly because of Genoese pressure. Andronikos tried but failed to depose his father. In 1376, John V sold the island to Venice on the same terms as before. This upset the Genoese of Galata. The Genoese helped the imprisoned Andronikos to escape and depose his father. Andronikos repaid the favor ceding them Tenedos. But the garrison on the island refused the agreement and gave control over to the Venetians. The Venetians established an outpost on the island, a move that caused significant tension with the Byzantine Empire (then represented by Andronikos IV)and the Genoese. In the Treaty of Turin, which ended the War of Chioggia between Venice and Genoa, the Venetians were to hand over control of the island to Amadeo of Savoy and the Genoese were to pay the bill for the removal of all fortifications on the island. The Treaty of Turin specified that the Venetians would destroy all the island's "castles, walls, defences, houses and habitations from top to bottom 'in such fashion that the place can never be rebuilt or reinhabited". The Greek populace was not a party to the negotiations, but were to be paid for being uprooted. The baillie of Tenedos, Zanachi Mudazzo, refused to evacuate the place, and the Doge of Venice, Antonio Venier, protested the expulsion. The senators of Venice reaffirmed the treaty, the proposed solution of handing the island back to the Emperor seen as unacceptable to the Genoese. Toward the end of 1383, the population of almost 4000 was shipped out to Euboea and Crete. Buildings on the island were then razed leaving it empty. Venetians continued to use the harbor. The Venetians were zealous guarding the right to Tenedos the Treaty of Turin provided them. The Grand Master of the Knights of Rhodes wanted to build a fortification at the island in 1405, with the knights bearing the cost, but the Venetians refused to allow this. The island remained largely uninhabited for the next decades. When Ruy Gonzáles de Clavijo visited the island in 1403 he remarked that because of the Treaty of Turin "Tenedos has since come to be uninhabited." 29 May 1416 saw the first battle at sea between the Venetians and the emerging Ottoman fleet at Gallipoli. The Venetian captain-general, Pietro Loredan, won, wiped out the Turks on board, and retired down the coast to Tenedos, where he killed all the non-Turk prisoners who had voluntarily joined the Turks. In the treaty of 1419 between Sultan Mehmed and the Venetians, Tenedos was the dividing line beyond which the Turkish fleet was not to advance. Spanish adventurer Pedro Tafur visited the island in 1437 and found it deserted, with many rabbits, the vineyards covering the island in disrepair, but the port well-maintained. He mentioned frequent Turkish attacks on shipping in the harbor. In 1453, the port was used by the commander of a single-ship Venetian fleet, Giacomo Loredan, as a monitoring point to observe the Turkish fleet, on his way to Constantinople in what would become the final defense of that city against the Turks. Ottoman period Tenedos was occupied by Sultan Mehmet II in 1455, two years after his Conquest of Constantinople ending the Byzantine empire. It became the first island controlled by the Ottoman Empire in the Aegean sea. 
The island was still uninhabited at that time, almost 75 years after it had been forcefully evacuated. Mehmet II rebuilt the island's fort. During his reign the Ottoman navy used the island as a supply base. The Venetians, realizing the strategic importance of the island, deployed forces on it. Giacopo Loredano took Tenedos for Venice in 1464. The same year, Ottoman Admiral Mahmud Pasha recaptured the island. During the Ottoman regime, the island was repopulated (by granting a tax exemption). The Ottoman fleet admiral and cartographer, Piri Reis, in his book Kitab-i-Bahriye, completed in 1521, included a map of the shore and the islands off it, marking Tenedos as well. He noted that ships heading north from Smyrna to the Dardanelles passed usually through the seven-mile strip of sea between the island and the mainland. Tommaso Morosini of Venice set out with 23 ships from Crete on 20 March 1646, heading to Istanbul. They stopped at Tenedos, but failed to establish a foothold there when their ship caught fire, killing many of the crew. In 1654, Hozam Ali of the Turkish fleet landed at the island, gathering Turkish forces for a naval battle against the Venetians. This, the Battle of the Dardanelles (1654), the first of four in a series, the Ottomans won. After the Battle of the Dardanelles in 1656, Barbaro Badoer of the Venetians seized the island on 8 July. The Ottoman defeat weakened its Sultan Mehmed IV, then aged 16, and strengthened the Grand Vizier, Köprülü Mehmed Pasha. In March 1657, an Ottoman Armada emerged through the Dardanelles, slipping through a Venetian blockade, with the objective of retaking the island but did not attempt to do so, concerned by the Venetian fleet. In July 1657, Köprülü made a decision to break the Venetian blockade and retake the territory. The Peace Party in the Venetian senate thought it best to not defend Tenedos, and Lemnos, and debated this with the War Party. Köprülü ended the argument by recapturing Tenedos on 31 August 1657, in the Battle of the Dardanelles (1657), the fourth and final one. Following the victory, the Grand Vizier visited the island and oversaw its repairs, during which he funded construction of a mosque, which was to be called by his name. According to the Mosque's Foundation's book, it was built on the site of an older mosque, called Mıhçı Mosque which was destroyed during Venetian occupation. By the time Köprülü died in September 1661, he had built on the island the businesses of a coffee-house, a bakery, 84 shops, and nine mills; a watermill; two mosques; a school; a rest stop for travelers and a stable; and a bath-house. Rabbits which drew the attention of Tafur two-and-a-half centuries ago were apparently still abundant in the mid 17th century. In 1659 the traveler Evliya Çelebi was sent to the island with the task of collecting game for the Sultan Mehmed IV. The disorder of the 1600s hampered supply lines and caused grain shortages in Bozcaada. As a result of the series of setbacks Ottomans faced in Rumelia during the later years of the reign of Mehmed IV, with the Grand Vizier being Sarı Süleyman Pasha, the forces at the island are reported to have mutinied in 1687 with parts of the rest of the army. These widespread mutinies would result in the deposing of the Sultan and the Grand Vizier that year. In 1691 the Venetians and allies formed a war council to discuss retaking the island. The council met regularly at the galley of Domenico Mocenigo, the captain-general of the Venetian fleet. 
By this time, the only people on the island were those in the fort. Mocenigo estimated their number to be around 300, and the fort to be weakly buttressed. On 17 July 1691 the war council met off the waters of the island and decided to retake Tenedos since it was, per their estimate, weakly defended but famous. As a first step they decided to gather information. At their next meeting, six days later, they learned from captured slaves that the Turkish garrison, numbering around 3000, had drug trenches and strengthened their defenses. The plan to retake the island was abandoned. Venetians would try to capture Tenedos unsuccessfully in 1697. The Peace of Karlowitz, which for the first time brought the Ottomans into the mainstream of European diplomacy, was signed on 26 January 1699 by the Ottomans, the Venetians, and a large number of Europeans powers. The Venetian senate sent its ambassador, Soranzo to Istanbul via Tenedos. At the island he was greeted with a royal reception of cannon fire and by the Pasha of the island himself. During the classical Ottoman period, the island was a kadiluk. The Ottomans built mosques, fountains, hammams, and a medrese. The Ottomans adopted the Byzantine practice of using islands as places for the internal exile of state prisoners, such as Constantine Mourousis and Halil Hamid Pasha. In October 1633, Cyril Contari, Metropolitan of Aleppo in the Orthodox Church, was made the patriarch after promising to pay the Ottoman central authority 50,000 dollars. His inability to pay led to his being exiled to the island for a short time. In 1807, a joint fleet of the Russians and British captured the island during the Russo-Turkish Wars, with the Russians using it as their military base to achieve the victories at the Dardanelles and Athos; but they ceded control as part of the Treaty of Armistice with the Ottoman Porte. However, the Russian occupations proved to be destructive for the island. The town was burnt down, the harbor was almost filled in and almost all buildings were destroyed. The islanders fled and Tenedos became deserted once more. In 1822, during the Greek War of Independence, the revolutionaries under Konstantinos Kanaris managed to attack an Ottoman fleet and burn one of its ships off Tenedos. This event was a major morale booster for the Greek Revolution and attracted the attention of the European Powers. The trees that covered the island were destroyed during the war. During the 19th century, the wine production remained a profitable business while the island's annual wheat production was only enough for three months of the islanders' consumption. Apart from wine, the only export item of the island was a small quantity of wool. Also in the 19th century there had been attempts to introduce pear, fig and mulberry trees. However, there are reports of fruit, especially fig trees being present on the island prior to those attempts. The 1852 law of the Tanzimat reorganized Turkish islands and Tenedos ended up in the sanjak of Bosje Adassi (Bozcaada), in the Vilayet Jazaǐri. In July 1874, a fire destroyed the place. In 1876, a middle school was added to those on the island, with 22 students and teaching Turkish, Arabic and Persian. By 1878, the island had 2015 males, of whom almost a quarter were Muslim, in around 800 houses. The place also hosted a company of the Ottoman foot-artillery division, along with an Austrian and French vice-consulate. The island was in the sanjak of Bigha, which seated a General Governor. 
Around 500 casks of gunpowder, left behind by the Russians in a military storehouse, were still there. The fort accommodated the Turkish military camp, a grain silo and two wells. In 1854, there were some 4,000 inhabitants on the island of Tenedos, of which one-third were Turks. Also, there was only one Greek school on the island with about 200 students. According to the Ottoman general census of 1893, the population of the island was divided as follows: 2,479 Greeks, 1,247 Turks, 103 Foreign Nationals and 6 Armenians. By the early 20th century, the island, still under the Turks, had around 2000 people living in wooden houses with gardens. The port provided shelter for ships from the violent northerly winds. The British had a vice consul at the island. The town served as a telegraph station, with an Austrian ship coming in every two weeks. In 1906 the town imports were at 17, 950 liras and exports, mainly wine and raisins, worth 6,250 liras. There were telegraph cables laid in the sea near the port. Between Turkey and Greece 1912-1921 During the First Balkan War, on 20 October 1912, Tenedos was the first island of the north Aegean that came under the control of the Greek Navy. The Turks that constituted part of Tenedos' population did not welcome the Greek control. By taking over the islands in the Northern Aegean sea, the Greek Navy limited the ability of the Ottoman fleet to move through the Dardanelles. Greek administration of the island lasted until 12 November 1922. Negotiations to end the Balkan war started in December 1912 in London and the issue of the Aegean islands was one persistent problem. The issue divided the great powers with Germany, Austria-Hungary, and Italy supporting the Ottoman position for return of all the Aegean islands and Britain and France supporting the Greek position for Greek control of all the Aegean islands. With Italy controlling key islands in the region, major power negotiations deadlocked in London and later in Bucharest. Romania threatened military action with the Greeks against the Ottomans in order to force negotiations in Athens in November 1913. Eventually, Greece and Great Britain pressured the Germans to support an agreement where the Ottomans would retain Tenedos, Kastelorizo and Imbros and the Greeks would control the other Aegean islands. The Greeks accepted the plan while the Ottoman Empire rejected the ceding of the other Aegean islands. This agreement would not hold, but the outbreak of World War I and the Turkish War of Independence put the issue to the side. During the World War I Gallipoli Campaign, the British used the island as a supply base and built a 600m long airstrip for military operations. After the Turkish War of Independence ended in Greek defeat in Anatolia, and the fall of Lloyd George and his Middle Eastern policies, the western powers agreed to the Treaty of Lausanne with the new Turkish Republic, in 1923. This treaty made Tenedos and Imbros part of Turkey, and it guaranteed a special autonomous administrative status there to accommodate the local Greek population. The treaty excluded the Orthodox Christians on the islands from the population exchange that took place between Greece and Turkey. Article 14 of the treaty provided specific guarantees safeguarding the rights of minorities in both the nations. In 1912, when the Ecumenical Patriarchate of Constantinople conducted its own census, the population of the island was estimated to be: 5,420 Greeks and 1,200 Turks. 
1922 and later Greece returned the island to Turkey in 1922. The inhabitants, substantially Greek Orthodox, were exempt from compulsory expulsion per the Lausanne Treaty's article 14, paragraph 2. Despite the treaty, the state of international relations between Greece and Turkey, wider world issues, and domestic pressures influenced how the Greek minority of Tenedos was treated. Acting reciprocally with Greece, Turkey made systematic attempts to evacuate the Greeks on the isle. Turkey never implemented either the Article 14 guarantee of some independence for the place in local rules, or the Article 39 guarantee to Turkish citizens, of all ethnicities, of the freedom to choose the language they wanted to use in their daily lives. In early 1926, conscripts and reservists of the army from Tenedos were transported to Anatolia. Great panic was engendered, and Greek youths fearing oppression fled the island. Others, who tried to hide in the mountains, were soon discovered and moved to Anatolia. Turkish law 1151 in 1927 specifically put administration of the islands in the hands of the Turkish government and not local populations, outlawed schooling in the Greek language and closed the Greek schools. According to the official Turkish census, in 1927 there were 2,500 Greeks and 1,247 Turks on the island. The Greco-Turkish rapprochement of 1930, which marks a significant turning point in the relations of the two countries, helped Tenedos reap some benefits too. In September 1933, moreover, certain islanders who had emigrated to America were allowed to return to and settle in their native land. Responding to the Greek good will over the straits, Turkey permitted the regular election of a local Greek mayor and seven village elders as well as a number of local employees. In the 1950s, tension between Greece and Turkey eased and law 1151/1927 was abolished and replaced by law no. 5713 in 1951, according to the law regular Greek language classes were added to the curriculum of the schools on Tenedos. Also, as restriction of travel to the island was relaxed, a growing number of Greek tourists from Istanbul and abroad visited Tenedos. These tourists did not only bring much needed additional revenues, but they also put an end to the twenty-seven-year long isolation of the islands from the outside world. However, when tensions increased in 1963 over Cyprus, the Turkish government again invoked a ban against Greek language education, and appropriated community property held by Greeks on the island. In 1964 Turkey closed the Greek-speaking schools on the island again. Furthermore, with the 1964 Law On Land Expropriation (No 6830) the farm property of the Greeks on the island was taken away from their owners. These policies, better economic options elsewhere, presence of a larger Greek community in Greece, fear and pressure, resulted in an exodus of the Greek population from the isle. The migrants retain Turkish citizenship but their descendants are not entitled to it. Greeks who left the island in the 1960s, often sold their properties, at particularly low prices, to their Turkish neighbours, which reflected the situation of duress under which they had to leave. In 1992, the Human Rights Watch report concluded that the Turkish government has denied the rights of the Greek community on Imbros and Tenedos in violation of the Lausanne Treaty and international human rights laws and agreements. In recent years there has been some progress in the relations between the different religious groups on the islands. 
In 2005, a joint Greek and Turkish delegation visited Tenedos and later that year Turkish Prime Minister Recep Tayyip Erdoğan visited the island. After that visit, the Turkish government funded the restoration of the bell tower of the Orthodox Church in Tenedos (built originally in 1869). In 1925 the Orthodox church became part of the Metropolis of Imbros and Tenedos. Cyril Dragounis has been its bishop since 2002. In 2009, the Foundation of the Bozcaada Koimisis Theotokou Greek Orthodox Church won a judgement in the European Court of Human Rights for recognition and financial compensation over their degraded cemetery. Turkish rule Turkey continued the old practice of exiling people to the island. The Democratic Party exiled Kemal Pilavoğlu, the leader of a religious sect, Ticani, to Tenedos for life, for sacrilege against Atatürk. Foreigners were prohibited from visiting the islands until the 1990s. However, in the mid-1990s, the Turkish government financially supported the expansion of wineries and tourist opportunities on the island. Today the island is a growing summer tourist location for wine enthusiasts and others. Since 2011 an annual half marathon has been run on the island. Proverbs of ancient Greeks regarding the island Greeks used the proverb "Tenedian human" () in reference to those with frightening appearance, because when Tenes laid down laws at the island he stipulated that a man with an axe should stand behind the judge and strike the man being convicted after he had spoken in vain. In addition, they used the proverb "Tenedian advocate" (), meaning a harsh advocate. There are many explanations regarding this proverb. Some say because the Tenedians honor two axes in their dedications. Aristotle said because a Tenedian king used to try lawsuits with an axe, so that he could execute wrongdoers on the spot, or because there was a place in Tenedos called Asserina, where there was a small river in which crabs have shell which was like an axe, or because a certain king laid down a law that adulterers should both be beheaded, and he observed this in the case of his son. Others said because of what Tenes suffered at the hands of his stepmother, he used to judge homicide suits with an axe. Population In 1854, there were some 4,000 inhabitants on the island of Bozcaada, of which one-third were Turks. According to the Ottoman general census of 1893, the population of the island was divided as follows: 2,479 Greeks, 1,247 Turks, 103 Foreign Nationals and 6 Armenians. In 1912, when the Ecumenical Patriarchate of Constantinople conducted its own census, the population of the island was estimated to be: 5,420 Greeks and 1,200 Turks. In 1927, according to the official Turkish census, there were 2,500 Greeks and 1,247 Turks on the island. By 2000, the official count of ethnic Greeks permanently residing on the island had dropped to 22. In 2011 census Bozcaada's population was 2,472. During summer, many more visit the island, ballooning its population to over 10,000 people. Historically the Turkish mahalle (quarter) has been located to the south and the Greek one to the north. Each quarter has its own religious institutions, mosques on the Turkish side and churches on the Greek side. The Greek quarter was burned to the ground in the fire of 1874 and rebuilt, while the Turkish quarter has an older design. The houses are architecturally different in the two districts. The grid-planned Greek district has businesses, galleries and hotels. 
This district is dominated by the bell tower of the Church of the Dormition of the Mother of God. On 26 July every year, the Greeks gather here to eat, dance and celebrate the feast day of St. Paraskevi. The Turkish quarter has largely houses. The district, in its present version, dates to 1702, and contains the grave of a grand vizier, Halil Hamid Pasha. Pasha was executed on Tenedos after being exiled for scheming to replace sultan Abdülhamid I, with the "șehzade" (crown prince) Selim, the future Sultan. The grave is in the courtyard of the Alaybey Mosque, a historical monument. Another mosque, Köprülü Mehmet Paşa Mosque (also called Yali Mosque), is also a monument. The Turkish district, Alaybey, also has hammams and the Namazgah fountain. The island has native islanders from families who have lived on the island for centuries, new wealthy immigrants from Istanbul, and wage labor immigrants from mainland Anatolia, especially Romani people in Turkey from Bayramiç. Economy Traditional economic activities are fishing and wine production. The remainder of arable land is covered by olive trees and wheat fields. Most of the agriculture is done on the central plains and gentle hills of the island. Red poppies of the island are used to produce small quantities of sharbat and jam. Sheep and goats are grazed at hilly northeastern and southeastern part of the island which is not suitable for agriculture. The number of farmers involved in grape cultivation has gone up from 210 to 397 in the recent years, though the farm area has gone down from to . Tourism has been an important, but limited, economic activity since the 1970s but it developed rapidly from the 1990s onwards. The island's main attraction is the castle last rebuilt in 1815, illuminated at night, and with a view out to the open sea. The island's past is captured in a small museum, with a room dedicated to its Greek story. The town square boasts a "morning market" where fresh groceries and seafood are sold, along with the island's specialty of tomato jam. Mainlanders from Istanbul run some bars, boutiques and guesthouses. In 2010, the island was named the world's second most-beautiful island by Condé Nast's Reader Choice award. The next year, the island topped the reader's list in the same magazine for the top 10 islands in Europe. In 2012, Condé Nast again selected Bozcaada as one of the 8 best islands in the world on account of its remnants of ancient buildings, less-crowded beaches, and places to stay. Fishing plays a role in the island's economy, but similar to other Aegean islands, agriculture is a more significant economic activity. The local fishing industry is small, with the port authority counting 48 boats and 120 fishermen in 2011. Local fishing is year-round and seafood can be obtained in all seasons. The fish population has gone down over the years, resulting in a shrinking fishing industry, though increase in tourism and consequent demand for more seafood has benefited the industry. The sea off the island is one of the major routes by which fish in the Aegen sea migrate seasonally. During the migration period, boats from the outside come to the island for fishing. In 2000, a wind farm of 17 turbines was erected at the western cape. It has a nominal power capacity of 10.2 MW energy, and produces 30 GWh of electricity every year. This is much more than what the island needs, and the excess is transferred to mainland Anatolia through an underground and partly undersea cable. 
Overhead cables and pylons were avoided for esthetic reasons, preserving the scenic view. The land has an average wind speed of 6.4 m/s and a mean energy density of 324 W/mat its meteorological station. This indicates significant wind energy generation potential. A United Nations Industrial Development Organization (UNIDO) project, the International Centre for Hydrogen Energy Technologies (ICHET) set up an experimental renewables-hydrogen energy facility at the Bozcaada Governor's building on 7 October 2011. The project, supported by the Turkish Ministry of Energy and Natural Resources (MENR), is the first of its kind in the country. The power plant produces energy via a 20 kW solar photovoltaic array, and uses a 50 kW electrolyzer to store this energy as hydrogen. A fuel cell and hydrogen engine can convert this stored energy back into electricity when needed, and the experimental system can supply up to 20 households for a day. , the town's hospital and governor's mansion were the only two buildings in the world using hydrogen energy. A boat and a golf cart are also powered by the same system. At the governor's place, energy is captured with a rooftop 20 Kw solar array and a 30 Kw wind mill. The electricity produced is used to electrolyze water into hydrogen. This gas is stored compressed, and can be used later to generate energy or as fuel in hydrogen-powered cars. In June 2011, Henry Puna, the Prime Minister of the Cook Islands traveled to Tenedos to investigate how the island uses hydrogen energy. In 2012, the Turkish government opened a customs office on the island, possibly opening the way for future direct travel between Greek ports and the island. Wine production The island is windy throughout the year and this makes the climate dry and warm enough to grow grapes. In classical antiquity wine production was linked with the cult of Dionysus, while grapes were also depicted in the local currency. The local wine culture outlived the Ottoman period. Vineyards have existed on the island since antiquity and today occupy one-third of the total land of the island and 80% of its agricultural land, In the mid-1800s, the island exported 800,000 barrels of wine annually and was revered as the best wine in the Eastern Mediterranean. Ottoman traveler Evliya Çelebi wrote in the 16th century that the finest wines in the world were being produced in Tenedos. Today, the island is one of the major wine producing areas in Turkey and grows four local strains of grape: Çavuş, Karasakız (Kuntra), Altınbaş (Vasilaki), and Karalahna. However, in recent years traditional French varieties have increased in prominence, namely Cabernet Sauvignon. Prior to 1923, wine production on the island was exclusively done by the Greek population; however, after this point, Turkish domestic wine production increased and Greeks on the island taught the Turkish population how to manufacture wine. By 1980, there were 13 wine production plants on the island. High taxes caused many of these to go out of business until 2001 when the state decreased taxes on wine and subsidized some of the producers on the island. In recent years, newer producers have relied upon Italian and French experts to improve production. In 2010, the island produced a record 5,000 tons of wine. Corvus has introduced modern wine making techniques to Tenedos. Grape harvest festivities are held the first week of September annually. Transportation The main transportation from mainland Turkey is by ferries from Geyikli and from the town of Çanakkale. 
The island is roughly from mainland Turkey. From the Geyikli pier, ferry travel is available for both passengers and automobiles, and takes about 35 minutes. A passenger-only ferry service from Çanakkale began running in 2009. Both run less often during the winter months. The island is seven hours by bus and then ferry from Istanbul. In 2012, Seabird Airlines began offering flights from Istanbul's Golden Horn to the island. Culture The Turkish film Akıllı Köpek Max (Max the Smart Dog) was filmed in Bozcaada in 2012. Another Turkish film, Bi Küçük Eylül Meselesi (A Small September Affair) was filmed on the island in 2013. Notable people Abudimus, 4th-century Christian martyr Bozcaadalı Hasan Hüsnü Pasha (1832–1903), son of Bozcaadalı Hüseyin Pasha, Naval Minister, founder of the Istanbul Naval Museum Bozcaadalı Hüseyin Pasha, 19th-century Ottoman staff admiral (Riyale ) Cleostratus, ancient Greek astronomer Democrates (), ancient Olympic winner in the men's wrestling. At Leonidaion there was a statue of him which was made by Dionysicles () of Miletus. Harpalus, ancient Greek engineer Meletius II, Ecumenical Patriarch of Constantinople (1768–1769) Phoenix of Tenedos, ancient Greek general Aristagoras of Tenedos, prytanis See also Greco-Turkish relations Greek wine Imbros Treaty of Lausanne Treaty of Sèvres Turkish wine Bozcaada Castle References Bibliography Books Journals Newspapers and magazines Web sources Further reading Bora Esiz, "Bozcaada, An Island for Those who Love the Aegean" Hakan Gürüney: From Tenedos to Bozcaada. Tale of a forgotten island. In: Tenedos Local History Research Centre. No. 5, Bozcaada 2012, . Haluk Şahin, The Bozcaada Book: A Personal, historical and literary guide to the windy island also known as Tenedos, Translated by Ayşe Şahin, Troya Publishing, 2005 Papers presented to the II. National Symposium on the Aegean Islands, 2–3 July 2004, Gökçeada, Çanakkale. External links Bozcaada government website (Turkish) Bozcaada Blog website (Turkish) Bozcaada Museum (private) (Turkish) Bozcaada slide show from New York Times Travel section Bozcaada Guide Une fin de semaine sur l'ile de Bozcaada (slide show) Website, Municipality of Bozcaada (Turkish) History of Turkey Islands of Turkey Ancient Greek archaeological sites in Turkey North Aegean islands Locations in the Iliad Tenea Fishing communities in Turkey Bozcaada district Islands of Çanakkale Province Members of the Delian League Populated places in the ancient Aegean islands Populated places in ancient Troad Greek city-states
15465798
https://en.wikipedia.org/wiki/Oriental%20Club
Oriental Club
The Oriental Club in London is an exclusive gentlemen's club established in 1824 that also admits ladies since 1952, although ladies could not be full members until 2010. Charles Graves describes it as fine in quality as White's but with the space of infinitely larger clubs. It is located in Stratford Place, near Oxford Street and Bond Street. Foundation The Asiatic Journal and Monthly Miscellany reported in its April 1824, issue: The founders included the Duke of Wellington and General Sir John Malcolm, and in 1824 all the Presidencies and Provinces of British India were still controlled by the Honourable East India Company. History and membership The early years of the club, from 1824 to 1858, are detailed in a book by Stephen Wheeler published in 1925, which contains a paragraph on each member of the club of that period. James Grant said of the club in The Great Metropolis (1837): The old Smoking Room is adorned with an elaborate ram's head snuff box complete with snuff rake and spoons, though most members have forgotten its original function. On 29 July 1844, two heroes of the First Anglo-Afghan War, Sir William Nott and Sir Robert Sale, were elected as members of the club by the Committee as an "extraordinary tribute of respect and anticipating the unanimous sentiment of the Club". On 12 January 1846, a special meeting at the club in Hanover Square presided over by George Eden, 1st Earl of Auckland, a former Governor-General of India, paid a public tribute to the dying Charles Metcalfe, 1st Baron Metcalfe, which Sir James Weir Hogg described as "a wreath upon his bier". With the formation of the East India Club in 1849, the link with the Honourable East India Company began to decline. In 1850, Peter Cunningham wrote in his Hand-Book of London: In 1861, the club's Chef de cuisine, Richard Terry, published his book Indian Cookery, stating that his recipes were "gathered, not only from my own knowledge of cookery, but from Native Cooks". Charles Dickens Jr. reported in Dickens's Dictionary of London (1879): Dickens appears to have been quoting the club's own Rules and Regulations; that phrase appears there in 1889, when the total number of members was limited to eight hundred. When Lytton Strachey joined the club in 1922, at the age of forty-two, he wrote to Virginia Woolf Stephen Wheeler's 1925 book Annals of the Oriental Club, 1824–1858 also contains a list of the members of the club in the year 1924, with their years of election and their places of residence. In 1927, R. A. Rye wrote of the club's library – "The library of the Oriental Club ... contains about 4,700 volumes, mostly on oriental subjects", while in 1928 Louis Napoleon Parker mentioned in his autobiography "... the bald and venerable heads of the members of the Oriental Club, perpetually reading The Morning Post. In 1934, the novelist Alec Waugh wrote of Another writer recalling the club in the 1970s says: Club houses In its monthly issue for June 1824, The Asiatic Journal reported that "The Oriental Club expect to open their house, No. 16, Lower Grosvenor Street, early in June. The Members, in the mean time, are requested to send their names to the Secretary as above, and to pay their admission fee and first year's subscription to the bankers, Messrs Martin, Call and Co., Bond Street." The club's first purpose-built club house, in Hanover Square, was constructed in 1827–1828 and designed by Philip Wyatt and his brother Benjamin Dean Wyatt. 
The construction of additions to the Clubhouse that were designed by Decimus Burton, in 1853, was superintended, when eventually commenced, in 1871, by his nephew Henry Marley Burton. Edward Walford, in his Old and New London (Volume 4, 1878) wrote of this building The club remained in Hanover Square until 1961. The club house there was in use for the last time on 30 November 1961. Early in 1962, the club moved into its present club house, Stratford House in Stratford Place, just off Oxford Street, London W1C, having bought the property for conversion in 1960. The central range of Stratford House was designed by Robert Adam and was built between 1770 and 1776 for Edward Stratford, 2nd Earl of Aldborough, who paid £4,000 for the site. It had previously been the location of the Lord Mayor of London's Banqueting House, built in 1565. The house remained in the Stratford family until 1832. It belonged briefly to Grand Duke Nicholas Nikolaevich, a son of Tsar Nicholas I of Russia. The house was little altered until 1894, when its then owner, Murray Guthrie, added a second storey to the east and west wings and a colonnade in front. In 1903, a new owner, the Liberal politician Sir Edward Colebrook, later Lord Colebrooke, reconstructed the Library to an Adam design. In 1908, Lord Derby bought a lease and began more alterations, removing the colonnade and adding a third storey to both wings. He took out the original bifurcated staircase (replacing it with a less elegant single one), demolished the stables and built a Banqueting Hall with a grand ballroom above. In 1960, the Club began to convert its new property. The ballroom was turned into two floors of new bedrooms, further lifts were added, and the banqueting hall was divided into a dining room and other rooms. The club now has a main drawing room, as well as others, a members' bar, a library and an ante-room, a billiards room, an internet suite and business room, and two (non)smoking rooms, as well as a dining room and 32 bedrooms. Stratford House is a Grade I listed building. The flag flying above the club house bears an Indian elephant, which is the badge of the club. Art collection The club possesses a fine collection of paintings, including many early portraits of Britons in India such as Warren Hastings. The Bar is overlooked by a painting of Tippu Sultan, the Tiger of Mysore (1750–1799). There are portraits of the club's principal founders, the first Duke of Wellington (by H. W. Pickersgill) and Sir John Malcolm (by Samuel Lane). Other portraits include Lord Cornwallis (1738–1805), also by Samuel Lane, Sir Jamsetjee Jeejebhoy, 1st Baronet (1783–1859), by John Smart, Clive of India (1725–1774) by Nathaniel Dance-Holland, Major-General Stringer Lawrence by Sir Joshua Reynolds, Major General Sir Thomas Munro, 1st Baronet (1761–1827), by Ramsay Richard Reinagle, Edward Stratford, second Earl of Aldborough (died 1801) by Mather Brown, Mehemet Ali, Pasha of Egypt (c. 1769–1849) and General Sir William Nott, both by Thomas Brigstocke, Henry Petty-Fitzmaurice, 5th Marquess of Lansdowne (1845–1927) by Sydney P. Kenrick after John Singer Sargent, Lieutenant-General Sir Richard Strachey (1817–1908) by Lowes Dickinson (the bequest of his widow, Jane Maria Strachey), Charles Metcalfe, 1st Baron Metcalfe by F. R. Say, Thomas Snodgrass by an unknown artist, and a bust of the first Lord Lake. 
President of the Club 1824–1852: Arthur Wellesley, 1st Duke of Wellington (Honorary President) After Wellington's death in 1852, no further Presidents were appointed. Chairmen of the Committee 1837: Sir Pulteney Malcolm GCB RN (brother of the founder, Sir John Malcolm) 1843: Major-General Sir J. L. Lushington 1918: C. A. MacDonald 1932–1933: Sir Reginald Mant 1951: Sir Charles Innes (Governor of Burma, 1927–1932) 1954 and 1958–1962: Sir Arthur Bruce Founding Committee The first club Committee of 1824 included: Lord William Bentinck GCB (1774–1839) Right Hon. Charles Williams-Wynn MP (1775–1850) General Sir Alured Clarke GCB (1744–1832) General Sir George Nugent, Bt GCB (1757–1849) Vice-Admiral Sir Richard King, Bt (1774–1834) Vice-Admiral Sir Pulteney Malcolm KCB (1768–1838) Major General Sir John Malcolm GCB KLS (1769–1833) Sir George Staunton, Bt. MP (1781–1859) Sir Charles Forbes, 1st Baronet MP Lt General Sir Thomas Hislop Bart GCB Lt General Sir Miles Nightingall, KCB Major General Sir Patrick Rose Sir Robert Farquhar, Bt. Sir Christopher Cole KCB MP Major General Malcolm Grant Major General Haldane, CB Rear Admiral Lamber Major General Rumley Colonel Baron Tuyll Colonel Alston Colonel Baillie MP Alexander Boswell, Esq. Notable members William Beresford, 1st Viscount Beresford (1768–1854) Sir Hudson Lowe GCMG (1769–1844) Vice-Admiral Sir Henry Blackwood, 1st Baronet (1770–1832) Mountstuart Elphinstone (1779–1859), Governor of Bombay and author Sir William Nott (1782–1845), distinguished soldier of the First Anglo-Afghan War, by special election Sir Robert Sale (1782–1845), another hero of the First Anglo-Afghan War, by special election George Eden, 1st Earl of Auckland (1784–1849), Governor-General of India 1835–1842 Pownoll Pellew, 2nd Viscount Exmouth (1786–1833) George FitzClarence, 1st Earl of Munster (1794–1842), son of King William IV Alfred Burton (1802 - 1877). Mayor of Hastings, and son of the pre-eminent property developer James Burton. Alfred Burton was a long-standing member of the club, to which he donated numerous books and pictures, and to which his brother Decimus Burton and nephew Henry Marley Burton made architectural additions Mansur Ali Khan, Nawab of Bengal (1830–1884) The 1st Earl of Inchcape (1852–1932) Sir Archibald Birkmyre, 1st Baronet (1875–1935) Sir Narayana Raghavan Pillai of Elenkath, KCIE, CBE, ICS Former Governor of the Bank of India & Secretary of State; grandson of Dewan Nanoo Pillai of Elenkath Sir John Jardine Paterson (1920–2000), Calcutta businessman Austen Kark (1926–2002), managing director of the BBC World Service The Earl of Cromer (born 1946) of the Barings banking family The 8th Earl of Wilton of the Grosvenor family (See Duke of Westminster) The 3rd Lord Wrenbury The 3rd Lord Shepherd The Earl of Derby (born 1962) The 4th Earl of Inchcape (born 1943) Simon Mackay, Baron Tanlaw (born 1934) His Excellency Keichi Hayashi, Representative of the Emperor of Japan Ravi Kumar, Pillai of Kandamath. 
Indian aristocrat Maharaja Jai Singh Sir David Tang, KBE, Hong Kong and London businessman William Charles Langdon Brown, CBE, banker and former Member of the Hong Kong Legislative Council Sir Mark Tully (born 1936) former Chief of Bureau, BBC, New Delhi Sir George Martin (born 1926), producer of The Beatles Christopher Beazley MEP (born 1952) Alan Duncan MP David Davies MP Richard Harrington MP James Innes (born 1975), British author Swapan Dasgupta Indian MP and journalist Members in fiction Early in William Makepeace Thackeray's novel Vanity Fair (1848), Thackeray says of Joseph Sedley that "...he dined at fashionable taverns (for the Oriental Club was not as yet invented)." By the time of Sedley's return from India in 1827, "His very first point, of course, was to become a member of the Oriental Club, where he spent his mornings in the company of his brother Indians, where he dined, or whence he brought home men to dine." In Thackeray's The Newcomes (1855), Colonel Thomas Newcome and Binnie are members of the Oriental Club. Writing of Thackeray, Francis Evans Baily says "...the Anglo-Indian types in his novels, including Colonel Newcome, were drawn from members of the Oriental Club in Hanover Square". Bibliography Baillie, Alexander F., The Oriental Club and Hanover Square (London, Longman, Green, 1901, 290 pp, illustrated) Wheeler, Stephen (ed.), Annals of the Oriental Club, 1824–1858 (London, The Arden Press, 1925, xvi + 201 pp) Forrest, Denys Mostyn, The Oriental: Life Story of a West End Club (London, Batsford, 1968, 240 pp) Riches, Hugh A History of the Oriental Club (London, Oriental Club, 1998) See also List of London's gentlemen's clubs References External links The Oriental Club – official web site The Association of London Clubs – official web site Listed Buildings in Stratford Place, Westminster – at westminster.gov.uk (official web site of the City of Westminster) Gentlemen's clubs in London Grade I listed buildings in the City of Westminster Grade I listed clubhouses 1824 establishments in England Military gentlemen's clubs
42380363
https://en.wikipedia.org/wiki/World%20Software%20Corporation
World Software Corporation
World Software Corporation is a privately held corporation and the creator and distributor of Worldox, a Document Management System. World Software Corporation has 6,000 customers in 52 countries and 200 resellers; 5,200 customer installations with 4,800 being law firms, legal departments and legal organizations, 300 financial services firms and 100 in other industries. History World Software Corporation was founded in New Jersey in 1988 by Tom Burke and Kristina Burke. World Software Corporation has produced 30 product versions. World Software and Worldox technology have established and maintained a single, dedicated product and corporate focus of a Legal Document Management System over these 30 years. Worldox technology synchronizes the Cloud, Mobile, internet (browser) and (MS:) Windows environments by providing the core requirement of a seamless, cross-platform digital filing system for documents, emails, and objects (aka unstructured data). Products Customers Notable organizations using Worldox include: Law Firms Jones Walker LLP Eckert Seamans Maynard, Cooper & Gale, P.C. Sullivan & Worcester LLP Corporations: Alcoa Boise Cascade Daimler Chrysler Deutsche Bank Hilton Hotels & Resorts Pacific Gas and Electric Company Pitney Bowes Purolator Inc. Non-Profits and Non-Governmental Organizations: The World Bank Columbia University New York University Ohio State University Princeton University The Legal Aid Society of San Francisco Financial/Accounting Firms: Citigroup Crowe Horwath See also Cloud Computing Document Management System Software as a Service References Software companies based in New Jersey Companies established in 1988 Software companies of the United States
474261
https://en.wikipedia.org/wiki/Extensis
Extensis
Extensis is a software company based in Portland, Oregon. History Extensis and its parent company CreativePro.com were sold to ImageX in year 2000, which in turn sold Extensis to Japanese content-management company Celartem Technology in 2002. In 2003, Extensis acquired competitor DiamondSoft and their Font Reserve applications (stand-alone and client-server). In January 2006, Extensis merged its two font management products, Font Reserve and Suitcase into a single product called Suitcase Fusion. In August 2010, Extensis launched WebINK, a web font subscription service. This service was discontinued on July 1, 2015. In 2018, Extensis united with its sister company, LizardTech, to continue developing and distributing software solutions for compressing and distributing massive, high-resolution geospatial data. Extensive was the most important company towards the Google Docs creation leading to Asen Hon the lead manager of the team involved to work along with Bill Gates. Products Extensis develops the following products: Suitcase Fusion – Font management software for single users Suitcase Fusion for iOS – Free font manager for iOS that connects to Suitcase Fusion Suitcase TeamSync – Font management software for teams. Fonts are synced using a cloud-based server hosted by Extensis. Universal Type Server – On-premises font server for teams who want tight control over font distribution and reporting. Extensis Portfolio – Digital Asset Management solution for teams that includes descriptive automatic keywording powered by artificial intelligence. FontLink – A module for Universal Type Server that delivers fonts for documents in automated publishing workflows that use Adobe InDesign Server. Suitcase Attaché – Extended font menu for Microsoft Word and PowerPoint. Extensis Fonts – Free add-on for Google Docs that provides a better font picker Fontspiration – Free iOS app for layering text over images, video and GIFs SquishPic – Free all-in-one viewer for MrSID files with impressive file compression Compress – Free plug-in for opening MrSID files directly within Adobe Photoshop GeoExpress – Geospatial software created for geospatial professionals to manipulate digital, satellite, aerial, and UAV images and losslessly compress them to MrSID or JPEG2000 files for more efficient use. ExpressServer – Image asset handling software that uses patented compression technology to distribute massively large geospatial data. Any type of device can connect to it to access raster files and LiDAR point cloud data without the need to download those files. GeoViewer – A free all-in-one GIS viewer for MrSID imagery, raster imagery, LiDAR point clouds, vector overlays, and other geospatial file formats. A Pro version is also available that offers printing, additional project systems, and advanced area measurement tools. Extensis distributes, but does not develop the following product: FontDoctor – Font diagnosis and repair tool, included free with Suitcase Fusion Corbit – Workflow automation software that allows users to set up triggers to activate individual or a series of tasks based on a simple action or by a monitored system change. See also List of companies based in Oregon References Further reading macOS Font Management Best Practices Guide Font Management in Windows Best Practices Guide Server-based Font Management Best Practices Guide Digital Asset Management Best Practices Guide Font Management in OS X by Kurt Lang External links Computer companies of the United States Companies based in Portland, Oregon
495664
https://en.wikipedia.org/wiki/Information%20overload
Information overload
Information overload (also known as infobesity, infoxication, information anxiety, and information explosion) is the difficulty in understanding an issue and effectively making decisions when one has too much information (TMI) about that issue, and is generally associated with the excessive quantity of daily information. The term "information overload" was first used as early as 1962 by scholars in management and information studies, including in Bertram Gross' 1964 book, The Managing of Organizations, and was further popularized by Alvin Toffler in his bestselling 1970 book Future Shock. Speier et al. (1999) said that if input exceeds the processing capacity, information overload occurs, which is likely to reduce the quality of the decisions. In a newer definition, Roetzel (2019) focuses on time and resources aspects. He states that when a decision-maker is given many sets of information, such as complexity, amount, and contradiction, the quality of its decision is decreased because of the individual’s limitation of scarce resources to process all the information and optimally make the best decision. The advent of modern information technology has been a primary driver of information overload on multiple fronts: in quantity produced, ease of dissemination, and breadth of the audience reached. Longstanding technological factors have been further intensified by the rise of social media and the attention economy, which facilitates attention theft. In the age of connective digital technologies, informatics, the Internet culture (or the digital culture), information overload is associated with over-exposure, excessive viewing of information, and input abundance of information and data. Origin of the term Even though information overload is linked to digital cultures and technologies, Ann Blair notes that the term itself predates modern technologies, as indications of information overload were apparent when humans began collecting manuscripts, collecting, recording, and preserving information. One of the first social scientists to notice the negative effects of information overload was the sociologist Georg Simmel (1858–1918), who hypothesized that the overload of sensations in the modern urban world caused city dwellers to become jaded and interfered with their ability to react to new situations. The social psychologist Stanley Milgram (1933–1984) later used the concept of information overload to explain bystander behavior. Psychologists have recognized for many years that humans have a limited capacity to store current information in memory. Psychologist George Armitage Miller was very influential in this regard, proposing that people can process about seven chunks of information at a time. Miller says that under overload conditions, people become confused and are likely to make poorer decisions based on the information they have received as opposed to making informed ones. A quite early example of the term "information overload" can be found in an article by Jacob Jacoby, Donald Speller and Carol Kohn Berning, who conducted an experiment on 192 housewives which was said to confirm the hypothesis that more information about brands would lead to poorer decision making. Long before that, the concept was introduced by Diderot, although it was not by the term "information overload": In the internet age, the term "information overload" has evolved into phrases such as "information glut", "data smog", and "data glut" (Data Smog, Shenk, 1997). 
In his abstract, Kazi Mostak Gausul Hoq commented that people often experience an "information glut" whenever they struggle with locating information from print, online, or digital sources. What was once a term grounded in cognitive psychology has evolved into a rich metaphor used outside the world of academia. History Early history Information overload has been documented throughout periods where advances in technology have increased a production of information. As early as the 3rd or 4th century BC, people regarded information overload with disapproval. Around this time, in Ecclesiastes 12:12, the passage revealed the writer's comment "of making books there is no end" and in the 1st century AD, Seneca the Elder commented, that "the abundance of books is distraction". In 1255, the Dominican Vincent of Beauvais, also commented on the flood of information: "the multitude of books, the shortness of time and the slipperiness of memory." Similar complaints around the growth of books were also mentioned in China. There were also information enthusiasts. The Library of Alexandria was established around the 3rd century BCE or 1st century Rome, which introduced acts of preserving historical artifacts. Museums and libraries established universal grounds of preserving the past for the future, but much like books, libraries were only granted with limited access. Renaissance Renaissance humanists always had a desire to preserve their writings and observations, but were only able to record ancient texts by hand because books were expensive and only the privileged and educated could afford them. Humans experience an overload in information by excessively copying ancient manuscripts and replicating artifacts, creating libraries and museums that have remained in the present. Around 1453 AD, Johannes Gutenberg invented the printing press and this marked another period of information proliferation. As a result of lowering production costs, generation of printed materials ranging from pamphlets, manuscripts to books were made available to the average person. Following Gutenberg's invention, the introduction of mass printing began in Western Europe. Information overload was often experienced by the affluent, but the circulation of books were becoming rapidly printed and available at a lower cost, allowing the educated to purchase books. Information became recordable, by hand, and could be easily memorized for future storage and accessibility. This era marked a time where inventive methods were established to practice information accumulation. Aside from printing books and passage recording, encyclopedias and alphabetical indexes were introduced, enabling people to save and bookmark information for retrieval. These practices marked both present and future acts of information processing. Swiss scientist Conrad Gessner commented on the increasing number of libraries and printed books, and was most likely the first academic who discussed the consequences of information overload as he observed how "unmanageable" information came to be after the creation of the printing press. Blair notes that while scholars were elated with the number of books available to them, they also later experienced fatigue with the amount of excessive information that was readily available and overpopulated them. 
Scholars complained about the abundance of information for a variety of reasons, such as the diminishing quality of text as printers rushed to print manuscripts and the supply of new information being distracting and difficult to manage. Erasmus, one of the many recognized humanists of the 16th century asked, "Is there anywhere on earth exempt from these swarms of new books?". 18th century Many grew concerned with the rise of books in Europe, especially in England, France, and Germany. From 1750 to 1800, there was a 150% increase in the production of books. In 1795, German bookseller and publisher Johann Georg Heinzmann said "no nation printed as much as the Germans" and expressed concern about Germans reading ideas and no longer creating original thoughts and ideas. To combat information overload, scholars developed their own information records for easier and simply archival access and retrieval. Modern Europe compilers used paper and glue to cut specific notes and passages from a book and pasted them to a new sheet for storage. Carl Linnaeus developed paper slips, often called his botanical paper slips, from 1767 to 1773, to record his observations. Blair argues that these botanical paper slips gave birth to the "taxonomical system" that has endured to the present, influencing both the mass inventions of the index card and the library card catalog. Information Age In his book, The Information: A History, A Theory, A Flood, published in 2011, author James Gleick notes that engineers began taking note of the concept of information, quickly associated it in a technical sense: information was both quantifiable and measurable. He discusses how information theory was created to first bridge mathematics, engineering, and computing together, creating an information code between the fields. English speakers from Europe often equated "computer science" to "informatique, informatica, and Informatik". This leads to the idea that all information can be saved and stored on computers, even if information experiences entropy. But at the same time, the term information, and its many definitions have changed. In the second half of the 20th century, advances in computer and information technology led to the creation of the Internet. In the modern Information Age, information overload is experienced as distracting and unmanageable information such as email spam, email notifications, instant messages, Tweets and Facebook(Meta) updates in the context of the work environment. Social media has resulted in "social information overload", which can occur on sites like Meta (previously Facebook), and technology is changing to serve our social culture. In today's society, day-to-day activities increasingly involve the technological world where information technology exacerbates the number of interruptions that occur in the work environment. Management may be even more disrupted in their decision making, and may result in more poor decisions. Thus, the PIECES framework mentions information overload as a potential problem in existing information systems. As the world moves into a new era of globalization, an increasing number of people are connecting to the Internet to conduct their own research and are given the ability to contribute as well as view data on an increasing number of websites. Users are now classified as active users because more people in society are participating in the Digital and Information Age. 
This flow has created a new way of life in which humanity is in danger of becoming dependent on this method of access to information, and in which the risks of the perpetuation of misinformation are greatly increased. In a 2018 literature review, Roetzel indicates that information overload can be seen as a virus, spreading through (social) media and news networks. General causes In a piece published by Slate, Vaughan Bell argues that "Worries about information overload are as old as information itself" because each generation and century inevitably experiences a significant impact from new technology. In the 21st century, Frank Furedi describes how an overload of information is metaphorically expressed as a flood, an indication that humanity is being "drowned" by the waves of data coming at it, which the human brain must continue to process whether it arrives digitally or not. Information overload can lead to "information anxiety", which is the gap between the information that is understood and the information that it is perceived must be understood. The phenomenon of information overload is connected to the field of information technology (IT). IT corporate management implements training to "improve the productivity of knowledge workers". Ali F. Farhoomand and Don H. Drury note that employees often experience an overload of information whenever they have difficulty absorbing and assimilating the information they receive to efficiently complete a task, feeling burdened, stressed, and overwhelmed. At New York's Web 2.0 Expo in 2008, Clay Shirky's speech indicated that information overload in the modern age is a consequence of a deeper problem, which he calls "filter failure", where humans continue to overshare information with each other. This is due to the rapid rise of apps and unlimited wireless access. As people view increasing amounts of information in the form of news stories, e-mails, blog posts, Facebook statuses, Tweets, Tumblr posts and other new sources of information, they become their own editors, gatekeepers, and aggregators of information. Social media platforms create a distraction, as users' attention spans are challenged once they enter an online platform. One concern in this field is that massive amounts of information can be distracting and negatively impact productivity, decision-making, and cognitive control. Another concern is the "contamination" of useful information with information that might not be entirely accurate (information pollution). The general causes of information overload include: A rapidly increasing rate of new information being produced, also known as journalism of assertion, which is a continuous news culture where there is a premium put on how quickly news can be put out; this leads to a competitive advantage in news reporting, but also affects the quality of the news stories reported. The ease of duplication and transmission of data across the Internet. An increase in the available channels of incoming information (e.g. telephone, e-mail, instant messaging, RSS). Ever-increasing amounts of historical information to view.
Contradictions and inaccuracies in available information, which is connected to misinformation. A low signal-to-noise ratio. A lack of a method for comparing and processing different kinds of information. The pieces of information are unrelated or do not have any overall structure to reveal their relationships. Email E-mail remains a major source of information overload, as people struggle to keep up with the rate of incoming messages. As well as filtering out unsolicited commercial messages (spam), users also have to contend with the growing use of email attachments in the form of lengthy reports, presentations, and media files. A December 2007 New York Times blog post described E-mail as "a $650 billion drag on the economy", and the New York Times reported in April 2008 that "e-mail has become the bane of some people's professional lives" due to information overload, yet "none of [the current wave of high-profile Internet startups focused on email] really eliminates the problem of e-mail overload because none helps us prepare replies". In January 2011, Eve Tahmincioglu, a writer for NBC News, wrote an article titled "It's Time to Deal With That Overflowing Inbox". Compiling statistics with commentary, she reported that there were 294 billion emails sent each day in 2010, up from 50 billion in 2009. Quoted in the article, workplace productivity expert Marsha Egan stated that people need to differentiate between working on e-mail and sorting through it. This meant that rather than responding to every email right away, users should delete unnecessary emails and sort the others into action or reference folders first. Egan then went on to say "We are more wired than ever before, and as a result need to be more mindful of managing email or it will end up managing us." The Daily Telegraph quoted Nicholas Carr, former executive editor of the Harvard Business Review and the author of The Shallows: What The Internet Is Doing To Our Brains, as saying that email exploits a basic human instinct to search for new information, causing people to become addicted to "mindlessly pressing levers in the hope of receiving a pellet of social or intellectual nourishment". His concern is shared by Eric Schmidt, chief executive of Google, who stated that "instantaneous devices" and the abundance of information people are exposed to through e-mail and other technology-based sources could be having an impact on the thought process, obstructing deep thinking, understanding, impeding the formation of memories and making learning more difficult. This condition of "cognitive overload" results in diminished information retaining ability and failing to connect remembrances to experiences stored in the long-term memory, leaving thoughts "thin and scattered". This is also manifest in the education process. Web accuracy In addition to e-mail, the World Wide Web has provided access to billions of pages of information. In many offices, workers are given unrestricted access to the Web, allowing them to manage their own research. The use of search engines helps users to find information quickly. However, information published online may not always be reliable, due to the lack of authority-approval or a compulsory accuracy check before publication. Internet information lacks credibility as the Web's search engines do not have the abilities to filter and manage information and misinformation. This results in people having to cross-check what they read before using it for decision-making, which takes up more time. 
Viktor Mayer-Schönberger, author of Delete: The Virtue of Forgetting in the Digital Age, argues that everyone can be a "participant" on the Internet, where they are all senders and receivers of information. On the Internet, trails of information are left behind, allowing other Internet participants to share and exchange information. Information becomes difficult to control on the Internet. The BBC reports that "every day, the information we send and receive online – whether that's checking emails or searching the internet – amounts to over 2.5 quintillion bytes of data." Social media Social media are applications and websites with an online community where users create and share content with each other, and they add to the problem of information overload because so many people have access to them. They present many different views and outlooks on subject matters, so that one may have difficulty taking it all in and drawing a clear conclusion. Information overload may not be the core reason for people's anxieties about the amount of information they receive in their daily lives. Instead, information overload can be considered situational. Social media users tend to feel less overloaded by information when using their personal profiles than when their work institutions expect them to gather a mass of information. Most people see the information they receive through social media as an aid to help manage their day-to-day activities, not an overload. Depending on what social media platform is being used, it may be easier or harder to stay up to date on posts from people. Facebook users who post and read more than others tend to be able to keep up. On the other hand, Twitter users who post and read a lot of tweets still feel like it is too much information (or none of it is interesting enough). Another problem with social media is that many people make a living by creating content for either their own or someone else's platform, which can lead creators to publish an overload of content. Effects of information overload In the context of searching for information, researchers have identified two forms of information overload: outcome overload, where there are too many sources of information, and textual overload, where the individual sources are too long. These forms of information overload may cause searchers to be less systematic. Disillusionment when a search is more challenging than expected may result in an individual being less able to search effectively. Information overload when searching can result in a satisficing strategy. Responding to information overload Savolainen identifies filtering and withdrawal as common responses to information overload. Filtering involves quickly working out whether a particular piece of information, such as an email, can be ignored based on certain criteria. Withdrawal refers to limiting the number of sources of information with which one interacts. He distinguishes between "pull" and "push" sources of information, a "pull" source being one where one seeks out relevant information, and a "push" source one where others decide what information might be interesting. He notes that "pull" sources can avoid information overload, but by only "pulling" information one risks missing important information. There have been many solutions proposed for how to mitigate information overload.
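As a minimal sketch of the filtering response described above (an illustration only, not the implementation of any particular tool or study), the example below applies simple, user-defined criteria to decide whether an incoming message can be ignored; the sender addresses and keywords are hypothetical assumptions.

```python
# Illustrative sketch of rule-based filtering as a response to information overload.
# The criteria (priority senders, ignorable keywords) are hypothetical examples.

PRIORITY_SENDERS = {"boss@example.com", "family@example.com"}   # assumed addresses
IGNORABLE_KEYWORDS = ("newsletter", "sale", "webinar", "unsubscribe")

def can_ignore(sender: str, subject: str) -> bool:
    """Return True if a message can be filtered out without reading it."""
    if sender.lower() in PRIORITY_SENDERS:
        return False                      # always keep messages from priority "pull" sources
    subject_lower = subject.lower()
    return any(keyword in subject_lower for keyword in IGNORABLE_KEYWORDS)

# Example: triage a small batch of (sender, subject) pairs.
inbox = [
    ("boss@example.com", "Quarterly report"),
    ("shop@example.com", "Weekend SALE - unsubscribe anytime"),
]
kept = [msg for msg in inbox if not can_ignore(*msg)]
print(kept)   # [('boss@example.com', 'Quarterly report')]
```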
Based on the definition of information overload, there are two general approaches to deal with it: Reduce the amount of incoming information: be cautious of how you are exposed to information, and limit IO by unsubscribing from newsletters and advertisements. Enhance the ability to process information: this relates to information processing, where how a person records, molds, and stores information is crucial. Johnson advises discipline, which helps mitigate interruptions, and the elimination of push notifications. He explains that notifications pull people's attention away from their work and into social networks and e-mail. He also advises that people stop using their iPhones as alarm clocks, since this means that the phone is the first thing people see when they wake up, leading them to check their e-mail right away. Clay Shirky states: Another approach is the use of Internet applications and add-ons such as the Inbox Pause add-on for Gmail; this add-on does not reduce the number of e-mails that people get, but it pauses the inbox. In his article, Burkeman argues that the feeling of being in control is the way to deal with information overload, even if this involves self-deception. He advises fighting irrationality with irrationality, by using add-ons that allow you to pause your inbox or produce similar results. Reducing large amounts of information is key. Regarding IO from a social network site such as Facebook, a study done by Humboldt University identified some strategies that students adopt to try to alleviate IO while using Facebook. These strategies included: prioritizing updates from friends who were physically farther away in other countries, hiding updates from less-prioritized friends, deleting people from their friends list, narrowing the amount of personal information shared, and deactivating the Facebook account. The problem of organization Decision makers performing complex tasks have little if any excess cognitive capacity. Narrowing one's attention as a result of an interruption is likely to result in the loss of information cues, some of which may be relevant to completing the task. Under these circumstances, performance is likely to deteriorate. As the number or intensity of the distractions/interruptions increases, the decision maker's cognitive capacity is exceeded, and performance deteriorates more severely. In addition to reducing the number of possible cues attended to, more severe distractions/interruptions may encourage decision-makers to use heuristics, take shortcuts, or opt for a satisficing decision, resulting in lower decision accuracy. Some cognitive scientists and graphic designers have emphasized the distinction between raw information and information in a form that can be used in thinking. In this view, information overload may be better viewed as organization underload. That is, they suggest that the problem is not so much the volume of information but the fact that it cannot be discerned how to use it well in the raw or biased form in which it is presented. Authors who have taken this view include graphic artist and architect Richard Saul Wurman and statistician and cognitive scientist Edward Tufte. Wurman uses the term "information anxiety" to describe humanity's attitude toward the volume of information in general and its limitations in processing it. Tufte primarily focuses on quantitative information and explores ways to organize large complex datasets visually to facilitate clear thinking.
Tufte's writing is important in such fields as information design and visual literacy, which deal with the visual communication of information. Tufte coined the term "chartjunk" to refer to useless, non-informative, or information-obscuring elements of quantitative information displays, such as the use of graphics to overemphasize the importance of certain pieces of data or information. Responding to Information Overload in email communication Soucek and Moser (2010) investigated what impact a training intervention on how to cope with information overload would have on employees. They found that the training intervention did have a positive impact on IO, especially for those who struggled with work impairment and media usage, and for employees who received a higher volume of incoming emails. Responses of business and government Recent research suggests that an "attention economy" of sorts will naturally emerge from information overload, allowing Internet users greater control over their online experience with particular regard to communication mediums such as e-mail and instant messaging. This could involve some sort of cost being attached to e-mail messages. For example, managers could charge a small fee for every e-mail received (e.g. $1.00), which the sender must pay from their budget. The aim of such charging is to force the sender to consider the necessity of the interruption. However, such a suggestion undermines the entire basis of the popularity of e-mail, namely that e-mails are free of charge to send. Economics often assumes that people are rational, in that they know their preferences and have the ability to look for the best possible ways to maximize them. People are seen as selfish and focused on what pleases them. Looking at the various parts on their own results in the neglect of the other parts that work alongside them to create the effect of IO. Lincoln suggests possible ways to look at IO in a more holistic approach, by recognizing the many possible factors that play a role in IO and how they work together to produce it. In medicine It would be impossible for an individual to read all the academic papers published in a narrow speciality, even if they spent all their time reading. A response to this is the publishing of systematic reviews such as the Cochrane Reviews. Richard Smith argues that it would be impossible for a general practitioner to read all the literature relevant to every individual patient they consult with, and suggests that one solution would be an expert system for doctors to use while consulting. Related terms The similar term information pollution was coined by Jakob Nielsen in 2003. The term interruption overload has begun to appear in newspapers such as the Financial Times. "TL;DR" (too long; didn't read) is another initialism alluding to information overload, this one normally used derisively. Analysis paralysis Cognitive dissonance Cognitive load Continuous partial attention Internet addiction Learning curve Memory Multi-tasking See also References Further reading External links Information Age Library science
54452632
https://en.wikipedia.org/wiki/Naila%20Musayeva
Naila Musayeva
Naila Musaeva () is a professor at the Department of Information Technologies and Systems at the Azerbaijan University of Architecture and Construction. She is also Doctor of Engineering, Professor, and Head of Laboratory at the Institute of Control Systems of ANAS. National grants 2012–2015, Science Fund at the State Oil Company of Azerbaijan Republic (SOCAR), 2012–2015, Science Development Foundation under the President of the Azerbaijan Republic Membership 2002–present, Member of the Dissertation Council at the Institute of Control Systems conferring academic degrees of Doctor of Science and PhD in the following specialties: Computer sciences; Information measurement systems and control systems; System analysis, control and information processing; 2002–present, Member of the Science Council at the Institute of Control Systems; 2017 - Member of the Educational-methodical board at the Azerbaijan University of Architecture and Construction. Publications About 160 proceedings were published in this area, among which it is possible to note the following: Aliev T.A., Musaeva N.F., Suleymanova M.T., Gazizade B.I. Density Function of Noise Distribution as an Indicator for Identifying the Degree of Fault Growth in Sucker Rod Pumping Unit (SRPU)// Journal of Automation and Information Sciences, Vol.49, Issue 4, 2017, p. 1-11, by Begell House, New-York, Springer Aliev T.A., Musaeva N.F., Suleymanova M.T., Gazizade B.I. Technology for calculating the parameters of the density function of normal distribution of the useful component in a noisy process // Journal of Automation and Information Sciences, Vol.48, No2, 2016, p. 35-55, by Begell House, New-York, Springer Aliev T.A., Musaeva N.F., Suleymanova M.T., Gazizade B.I. Analytic representation of the density function of normal distribution of noise // Journal of Automation and Information Sciences, 47(8), 2015, by Begell House, New-York, Springer, p. 24-40 T.A. Aliev, N.F. Musaeva and Sattarova U.E. Noise Technologies for Operating the System for Monitoring of the Beginning of Violation of Seismic Stability of Construction Objects// Editors: Lotfi A. Zadeh, Ali M. Abbasov, Ronald R. Yager, Shahnaz N. Shahbazova, Marek Z. // Recent Developments and New Directions in Soft Computing. Studies in Fuzziness and Soft Computing, Volume 317, 2014, pp. 211–232, Springer T.A. Aliev, N.F. Musaeva and Sattarova U.E. The technology of forming the normalized correlation matrices of the matrix equations of multidimensional stochastic objects. Journal of Automation and Information Sciences, 45(1), 2013 by Begell House, New-York, Springer, p. 1-15 T.A. Aliev, N.F. Musaeva and Sattarova U.E. Robust technologies for calculating normalized correlation functions, Cybernetics and Systems Analysis, New-York, Springer, Vol. 46, No. 1, 2010, pp. 153–166 Musaeva N.F. Robust correlation coefficients as initial data for solving a problem of confluent analysis. Automatic Control and Computer Sciences. Allerton Press. Inc., New York, No 2, 2007, pp. 76–87 T.A. Aliev and N.F. Musaeva. Technology of experimental research of the stochastic processes, Automatic Control and Computer Sciences. Allerton Press. Inc., New York, No 4, 2005, pp. 15–26 N.F.Musaeva. Technology for determining the magnitude of robustness as an estimate of statistical characteristic of noisy signal, Automatic Control and Computer Sciences. Allerton Press. Inc., New York, No 5, 2005, pp. 64–74 Textbooks Мусаева Н.Ф. Построение математических моделей. Методы и современные компьютерные технологии. 
Textbook, 2014, 310 pp., Lambert, Germany Musaeva N.F. Information Technologies for Processing Experimental Data (in Russian). Textbook, Baku, 2007, 260 pp. (approved by Order No. 368 of the Ministry of Education of the Republic of Azerbaijan, dated 26 April 2007) Musayeva N.F. Information Technologies for Processing Experimental Data (in Azerbaijani). Textbook, Baku, 2007, 245 pp. (approved by Order No. 367 of the Ministry of Education of the Republic of Azerbaijan, dated 26 April 2007) Monographs Musayeva N.F. Construction of Mathematical Models: Methods and Modern Computer Technologies (in Azerbaijani). Baku, 2014, 386 pp. Musaeva N.F. Algorithms for Improving the Conditioning of Correlation Matrices (in Russian). Baku, Elm, 2000, 95 pp. References External links mathnet.ru search.rsl.ru science.gov.az Living people 1957 births Azerbaijani scientists
249473
https://en.wikipedia.org/wiki/Open%20Source%20Development%20Labs
Open Source Development Labs
Open Source Development Labs (OSDL) was a non-profit organization supported by a consortium to promote Linux for enterprise computing. Founded in 2000, OSDL positioned itself as an independent, non-profit lab for developers who were adding enterprise capabilities to Linux. The organization was first incorporated in San Francisco but later relocated its headquarters to Beaverton, Oregon, with a second facility in Yokohama, Japan. On January 22, 2007, OSDL and the Free Standards Group merged to form the Linux Foundation, narrowing their respective focuses to that of promoting Linux. Activities OSDL sponsored projects, including industry initiatives to enhance Linux for use in corporate data centres, in telecommunications networks, and on desktop computers. It also: provided hardware resources to the free software community and the open source community; tested and reported on open source software; and employed a number of Linux developers. Its employees included Linus Torvalds, the first OSDL fellow, and Bryce Harrington. In 2005, Andrew "Tridge" Tridgell became the second OSDL fellow, serving for a year. It had data centers in Beaverton (Oregon, United States) and Yokohama (Japan). OSDL had investment backers that included seven principal funders – Computer Associates, Fujitsu, Hitachi, Ltd., Hewlett-Packard, IBM, Intel Corporation, and Nippon Electric Corporation – as well as a large collection of independent software vendors, end-user companies and educational institutions. A steering committee composed of representatives from the investment backers directed OSDL, which also had a significant staff of its own. Working groups OSDL had established five Working Groups since 2002: Mobile Linux Initiative Carrier Grade Linux Data Center Linux Desktop Linux User Advisory Council See also Patent Commons, a project launched in November 2005 by the OSDL References Free and open-source software organizations Organizations disestablished in 2007 Defunct companies based in Oregon Linux Foundation Buildings and structures in Beaverton, Oregon Companies established in 2000 Laboratories in Oregon 2007 disestablishments in Oregon
34308229
https://en.wikipedia.org/wiki/The%20East%20African%20University
The East African University
The East African University (TEAU) is a private university in Kenya. Location The university main campus is located approximately south of the central business district of the town of Kitengela, Kajiado County, Kenya. This location lies off the Nairobi-Kajiado-Namanga Road, approximately south of Nairobi, the capital of Kenya and the largest city in that country. The approximate coordinates of the university campus are:1° 39' 0.00"S, +36° 54' 0.00"E (Latitude:-1.6500; Longitude:36.9000). The coordinates are approximate because the university campus does not yet show on most publicly available maps in January 2012. The university also has another campus located at View Park Towers, Utalii Lane within the CBD (Central Business District) in Nairobi. History The idea to start the university was conceived in 2005. In 2006, of land were acquired in Kitengela, for the purpose of establishing the university. Application was then made to the Commission for University Education () for a Tertiary Education Institution License. The license was granted in November 2010 and was handed over to the university officials by the chairman of the commission, Prof. Ezra Maritum. Academics Schools , the university maintains the following schools: School of Business and Management Studies School of Computer Science and Information Technology School of Education, Arts & Social Sciences Courses Graduate School Courses None available as of January 2012 Undergraduate Degree Courses Bachelor of Science in Business Management (Accounting) Bachelor of Science in Business Management (Sales & Marketing) Bachelor of Science in Business Management (Banking & Finance) Bachelor of Science in Business Management (Human Resource Management) Bachelor of Computer Science and Information Technology Bachelor of Education (Arts) Bachelor of Business Information Technology Diploma Courses Diploma in Computer Science and IT Diploma in Business Information Technology Diploma in Business Science with the options Diploma in Actuarial Science Diploma in Credit Management Diploma in Islamic Banking and FINANCE Diploma in Procurement and Supplies Management Diploma in Co-operative Management Diploma in Management of NGOs Diploma in Project Planning and Management Certificate Courses Certificate in Science in Computer Science and IT Certificate in Business Information Technology Affiliation The University of East Africa is affiliated with Kampala University, a multi-campus institution with its main campus located in Ggaba, a southeastern suburb of Kampala, the capital of Uganda and the largest city in that country. Professor Badru Kateregga, the Chairman of the Board of Trustees of TEAU also serves as the Vice Chancellor of Kampala University. External links Homepage of TEAU See also Education in Kenya List of universities in Kenya Kajiado Kampala University References Universities in Kenya Educational institutions established in 2010 Kajiado County Education in Rift Valley Province 2010 establishments in Kenya
6058181
https://en.wikipedia.org/wiki/Trip%20computer
Trip computer
A trip computer is a computer fitted to some cars; most modern trip computers record, calculate, and display the distance travelled, the average speed, the average fuel consumption, and real-time fuel consumption. The first, mechanical trip computers, such as the Halda Speedpilot, produced by a Swedish taximeter manufacturer, were made in the 1950s as car accessories to enable the driver to maintain a given time schedule, particularly useful in rallying. One was installed as standard equipment in the 1958 Saab GT750. The 1952 Fiat 1900 came standard with a complex mechanical device, called mediometro in Italian, that showed the average speed. In 1978, the Cadillac division of General Motors introduced the "Cadillac Trip Computer", available on the Cadillac Seville; Chrysler also launched an electric trip computer on its low-end Omni/Horizon. They can range from basic to complex. The most basic trip computers incorporate average fuel mileage and perhaps an outside temperature display. Mid-range versions often include information on fuel, speed, distance, cardinal heading (compass), and elapsed time. The most advanced trip computers are reserved for high-end cars and often display average calculations for two drivers, a stop watch, tire-pressure information, over-speed warnings, and many other features. Sometimes the trip computer's display is in the gauge cluster, the dashboard or navigation-system screen, or an overhead console. Some displays include information about scheduled maintenance. The current Acura TL does this in stages, first alerting the driver with a "Due Soon" message; once the programmed mileage is reached, the message is "Due Now"; when more time or distance has elapsed, the message changes to "Past Due". Mercedes-Benz vehicles constantly monitor the quality of the oil and alert the driver when the oil has degraded to a certain extent. GM and FCA vehicles provide oil change alerts based on the number and length of trips, engine temperature, and other factors. Some vehicles also use the trip computer to allow owners to change certain aspects of vehicle behavior, e.g. how the power locks work, but in most cars "setting preferences" is now done through a center screen also used for the backup camera and radio. Some trip computers can display the diagnostic codes that mechanics use. This is especially useful when the mechanic wants to see the codes while driving the car. In 2004, Linear Logic developed the ScanGauge, which at the time was the only easily installed (via OBDII) accessory that worked as a trip computer, 4 simultaneous digital gauges, and a diagnostic trouble-code reader. This device has available 12 different measurements which can be used as the 4 digital gauges. The units of measure can be independently selected between miles/km, gallons/liters, Celsius/Fahrenheit, and PSI/kPa. In 2008, the OBDuino project announced a low-cost DIY trip computer design using the OBDII interface and the Arduino hobbyist microcontroller platform, released under the GPL open source license. See also OBDuino, an open source trip computer Some Carputer software includes trip computer functions References Auto parts Automotive accessories Automotive electronics Measuring instruments Onboard computers
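As a rough sketch of the arithmetic a trip computer performs (an illustration only, not the implementation of any particular vehicle, nor of the ScanGauge or OBDuino products mentioned above), the example below accumulates distance and fuel use from periodic samples and derives average speed and average fuel consumption; the sampling interval and fuel-flow figures are assumed.

```python
# Illustrative sketch of trip-computer arithmetic; all sample values are assumed.

class TripComputer:
    def __init__(self):
        self.distance_km = 0.0
        self.fuel_l = 0.0
        self.time_h = 0.0

    def sample(self, speed_kmh: float, fuel_flow_lph: float, dt_h: float) -> None:
        """Accumulate one sampling interval of length dt_h hours."""
        self.distance_km += speed_kmh * dt_h
        self.fuel_l += fuel_flow_lph * dt_h
        self.time_h += dt_h

    @property
    def average_speed_kmh(self) -> float:
        return self.distance_km / self.time_h if self.time_h else 0.0

    @property
    def average_consumption_l_per_100km(self) -> float:
        return 100.0 * self.fuel_l / self.distance_km if self.distance_km else 0.0

# Example: one hour of driving sampled every 6 minutes (0.1 h) at constant values.
tc = TripComputer()
for _ in range(10):
    tc.sample(speed_kmh=90.0, fuel_flow_lph=6.3, dt_h=0.1)
print(round(tc.average_speed_kmh, 1))                 # 90.0 km/h
print(round(tc.average_consumption_l_per_100km, 1))   # 7.0 L/100 km
```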
26141298
https://en.wikipedia.org/wiki/HP%20IT%20Management%20Software
HP IT Management Software
HP IT Management Software is a family of Enterprise software products by Micro Focus as a result of the spin-merge of Hewlett Packard Enterprise's software assets with Micro Focus in 2017. The division was formerly owned by Hewlett Packard Enterprise, following the separation of Hewlett-Packard into HP Inc. and Hewlett Packard Enterprise in 2015. IT management software is a family of technology that helps companies manage their IT infrastructures, the people and the processes required to reap the greatest amount of responsiveness and effectiveness from today's multi-layered and highly complex data centers. Beginning in September 2005, HP purchased several software companies as part of a publicized, deliberate strategy to augment its catalog of IT management software offerings for large business customers. According to ZDNet and IDC, HP is the world's sixth largest software company. HP IT Management Software was the largest category of software sold by the HP Software Division. The concept behind IT management software is that IT needs to support the business and be run as a business rather than a cost center. The discipline includes software to help businesses manage their IT portfolio and assets, gain greater quality from their IT, govern the processes of their IT and improve IT security, to name a few. According to ComputerWorld, IT management software is designed to help businesses align IT spend and resources based on business priorities. Other companies who develop and sell IT Management software include IBM, BMC Software, Borland, CA, and Compuware. IT Management Software products Micro Focus sells several categories of software, including: business service management software, application lifecycle management software, mobile apps, big data and analytics, service and portfolio management software, automation and orchestration software, and enterprise security software. Micro Focus also provides Software as a service (SaaS), cloud computing solutions, and software services, including consulting, education, professional services, and support. For more information, see Micro Focus HP Software Division. User groups Vivit Worldwide is the independent HP Software community , designed to help members to develop their expertise and careers through education, community and advocacy programs. For almost two decades, Vivit has been the independent, unbiased, trusted and field-tested community for thousands of HP Software customers, developers and partners from all areas of the world, business and industry. Vivit membership is free. While it is not an actual "user group", the ITRC is an online forum about HP Software & Solutions products. A new HP user group, the HP Software Solutions Community, officially launched publicly in April 2010 and includes a number of former software-related communities. In June 2011, HP Software announced a new Discover Performance community and online resource center designed to serve IT executives and CIOs. Software company acquisition timeline Oct. 2011: Autonomy Corporation, provider of enterprise search and knowledge management applications solutions Mar. 2011: Vertica Systems, analytic database management software Oct. 2010: ArcSight, security management software Aug. 2010: Stratavia, database and application automation software Aug. 
2010: Fortify Software, software security assurance solutions May 2008: Tower Software, document and records management software January 2008: Exstream Software, variable data publishing software July 2007: Opsware, data center automation software June 2007: SPI Dynamics, Web applications security software February 2007: Bristol Technology, Inc., business transaction monitoring technologies February 2007: PolyServe, Inc., storage software for application and file serving utilities February 2006: OuterBay, archiving software for enterprise applications and databases November 2006: Mercury Interactive Corporation, application management and delivery and IT governance software December 2006: Bitfone, mobile device software November 2005: Trustgenix, Inc., federated identity management software September 2005: Peregrine Systems Inc., asset and service management software September 2005: AppIQ, open storage area network management and storage resource management technologies See also ITIL Service-oriented architecture References External links IT Management Software Cloud computing providers
37101640
https://en.wikipedia.org/wiki/Cybercrime%20Prevention%20Act%20of%202012
Cybercrime Prevention Act of 2012
The Cybercrime Prevention Act of 2012, officially recorded as Republic Act No. 10175, is a law in the Philippines that was approved on September 12, 2012. It aims to address legal issues concerning online interactions and the Internet in the Philippines. Among the cybercrime offenses included in the bill are cybersquatting, cybersex, child pornography, identity theft, illegal access to data and libel. While hailed for penalizing illegal acts done via the Internet that were not covered by old laws, the act has been criticized for its provision on criminalizing libel, which is perceived to be a curtailment of the freedom of expression ("cyber authoritarianism"). Its use against journalists like Maria Ressa, of Rappler, has drawn international condemnation. On October 9, 2012, the Supreme Court of the Philippines issued a temporary restraining order, stopping implementation of the Act for 120 days, and extended it on 5 February 2013 "until further orders from the court." On February 18, 2014, the Supreme Court upheld most of the sections of the law, including the controversial cyberlibel component. History The Cybercrime Prevention Act of 2012 is one of the first laws in the Philippines to specifically criminalize computer crime, which prior to the passage of the law had no strong legal precedent in Philippine jurisprudence. While laws such as the Electronic Commerce Act of 2000 (Republic Act No. 8792) regulated certain computer-related activities, these laws did not provide a legal basis for criminalizing crimes committed on a computer in general: for example, Onel De Guzman, the computer programmer charged with purportedly writing the ILOVEYOU computer worm, was ultimately not prosecuted by Philippine authorities due to a lack of legal basis to charge him under existing Philippine laws at the time of his arrest. The first drafts of the Anti-Cybercrime and Data Privacy Acts started in 2001 under the Legal and Regulatory Committee of the former Information Technology and eCommerce Council (ITECC), the forerunner of the Commission on Information and Communication Technology (CICT) and now the Department of Information and Communications Technology (DICT). ITECC was headed by former Secretary Virgilio "Ver" Peña, with the Legal and Regulatory Committee chaired by Atty. Claro Parlade. The creation of the laws was an initiative of the Information Security and Privacy Sub-Committee, chaired by Albert P. dela Cruz, who was then president of the Philippine Computer Emergency Response Team (PHCERT), together with Anti-Computer Crime and Fraud Division (ACCFD) Chief Elfren Meneses of the National Bureau of Investigation (NBI). The administrative and operational functions were provided by the Presidential Management Staff (PMS), acting as the CICT secretariat. The initial version of the law was communicated to various other organizations and special interest groups during that time. This was superseded by several cybercrime-related bills filed in the 14th and 15th Congress. The Cybercrime Prevention Act ultimately was the product of House Bill No. 5808, authored by Representative Susan Yap-Sulit of the second district of Tarlac and 36 other co-authors, and Senate Bill No. 2796, proposed by Senator Edgardo Angara. Both bills were passed by their respective chambers within one day of each other, on June 5 and 4, 2012, respectively, shortly after the impeachment of Renato Corona, and the final version of the Act was signed into law by President Benigno Aquino III on September 12.
Provisions The Act, divided into 31 sections split across eight chapters, criminalizes several types of offense, including illegal access (hacking), data interference, device misuse, cybersquatting, computer-related offenses such as computer fraud, content-related offenses such as cybersex and spam, and other offenses. The law also reaffirms existing laws against child pornography, an offense under Republic Act No. 9775 (the Anti-Child Pornography Act of 2009), and libel, an offense under Section 355 of the Revised Penal Code of the Philippines, also criminalizing them when committed using a computer system. Finally, the Act includes a "catch-all" clause, making all offenses currently punishable under the Revised Penal Code also punishable under the Act when committed using a computer, with more severe penalties than what was provided by the Revised Penal Code alone. The Act has universal jurisdiction: its provisions apply to all Filipino nationals regardless of the place of commission. Jurisdiction also lies when a punishable act is either committed within the Philippines, whether the erring device is wholly or partly situated in the Philippines, or whether damage was done to any natural or juridical person who at the time of commission was within the Philippines. Regional Trial Courts shall have jurisdiction over cases involving violations of the Act. A takedown clause is included in the Act, empowering the Department of Justice to restrict and/or demand the removal of content found to be contrary to the provisions of the Act, without the need for a court order. This provision, originally not included in earlier iterations of the Act as it was being deliberated through Congress, was inserted during Senate deliberations on May 31, 2012. Complementary to the takedown clause is a clause mandating the retention of data on computer servers for six months after the date of transaction, which may be extended for another six months should law enforcement authorities request it. The Act also mandates the National Bureau of Investigation and the Philippine National Police to organize a cybercrime unit, staffed by special investigators whose responsibility will be to exclusively handle cases pertaining to violations of the Act, under the supervision of the Department of Justice. The unit is empowered to, among others, collect real-time traffic data from Internet service providers with due cause, require the disclosure of computer data within 72 hours after receipt of a court warrant from a service provider, and conduct searches and seizures of computer data and equipment. Reaction The new Act received mixed reactions from several sectors upon its enactment, particularly with how its provisions could potentially affect freedom of expression, freedom of speech and data security in the Philippines. The local business process outsourcing industry has received the new law well, citing an increase in the confidence of investors due to measures for the protection of electronic devices and online data. Media organizations and legal institutions though have criticized the Act for extending the definition of libel as defined in the Revised Penal Code of the Philippines, which has been criticized by international organizations as being outdated: the United Nations for one has remarked that the current definition of libel as defined in the Revised Penal Code is inconsistent with the International Covenant on Civil and Political Rights, and therefore violates the respect of freedom of expression. 
Senator Edgardo Angara, the main proponent of the Act, defended the law by saying that it is a legal framework to protect freedoms such as the freedom of expression. He asked the Act's critics to wait for the bill's implementing rules and regulations to see if the issues were addressed. He also added that the new law is unlike the controversial Stop Online Piracy Act and PROTECT IP Act. However, Senator TG Guingona criticized the bill, calling it a prior restraint to the freedom of speech and freedom of expression. The Electronic Frontier Foundation has also expressed concern about the Act, supporting local media and journalist groups which are opposed to it. The Centre for Law and Democracy also published a detailed analysis criticizing the law from a freedom of expression perspective. Malacañang has attempted to distance itself from the law; after the guilty verdict was rendered in the Maria Ressa cyberlibel case, presidential spokesman Harry Roque blamed President Duterte's predecessor, Noynoy Aquino, for any negative effects of the law. Constitutionality Several petitions were submitted to the Supreme Court questioning the constitutionality of the Act. On October 2, the Supreme Court initially deferred action on the petitions, citing an absence of justices which prevented the Court from sitting en banc. The initial lack of a temporary restraining order meant that the law went into effect as scheduled on October 3. In protest, Filipino netizens reacted by blacking out their Facebook profile pictures and trending the hashtag #NoToCybercrimeLaw on Twitter. "Anonymous" also defaced government websites, including those of the Bangko Sentral ng Pilipinas, the Metropolitan Waterworks and Sewerage System and the Intellectual Property Office. On October 8, 2012, the Supreme Court decided to issue a temporary restraining order, pausing implementation of the law for 120 days. In early December 2012, the government requested the lifting of the TRO, which was denied. Over four hours of oral arguments by petitioners were heard on January 15, 2013, followed by a three-hour rebuttal by the Office of the Solicitor General, representing the government, on January 29, 2013. This was the first time in Philippine history that oral arguments were uploaded online by the Supreme Court. Disini v. Secretary of Justice On February 18, 2014, the Supreme Court ruled that most of the law was constitutional, although it struck down other provisions, including the ones that violated double jeopardy. Notably, likes and "retweets" of libelous content, originally themselves also criminalized as libel under the law, were found to be legal. Only justice Marvic Leonen dissented from the ruling, writing that he believes the whole idea of criminal libel to be unconstitutional. While motions for reconsideration were immediately filed by numerous petitioners, including the Center for Media Freedom and Responsibility, they were all rejected on April 22, 2014. However, justice Arturo Brion, who originally wrote a separate concurring opinion, changed his vote to dissent after reconsidering whether it was just to impose higher penalties for cyberlibel than for regular libel. Effects Cyberlibel On May 24, 2013, the DOJ announced they would seek to drop the online libel provisions of the law, as well as other provisions that "are punishable under other laws already", like child pornography and cybersquatting. 
The DOJ said it would endorse revising the law to the 16th Congress of the Philippines, but cyberlibel remains on the books as a crime in the Philippines, and has been charged by DOJ prosecutors multiple times since then. Senator Tito Sotto is primarily responsible for the cyberlibel provision, which he added after social media comments accusing him of plagiarism; he has defended his authorship of the last minute amendment, asking reporters if it was fair that "just because [bloggers] are now accountable under the law, they are angry with me?" While libel had been a crime in the Philippines since the American imperial period, before cyberlibel it had a penalty of minimum or medium prisión correccional (six months to four years and two months), but now has a penalty of prisión mayor (six to twelve years). Duterte's administration has been accused of targeting journalists with the law, in particular Rappler. Journalists charged with cyberlibel since 2013 include Ramon Tulfo, RJ Nieto, and Maria Ressa. An online post does not even need to be public for cases to be filed by the DOJ. Roman Catholic clergy have also faced cyberlibel charges. Even foreigners have both accused others of and been accused of cyberlibel charges. As the act has universal jurisdiction, it is not required that an offender commit the offense in the Philippines; the DOJ brought up an OFW caregiver who lived in Taiwan on charges for allegedly "posting nasty and malevolent materials against President Duterte on Facebook". Insults that would be seen in other countries as minor have led to DOJ prosecutors filing cyberlibel charges: such as "crazy"; "asshole"; "senile"; and "incompetent". On 2 March 2020, the first guilty verdict in a cyberlibel case was returned against a local politician, Archie Yongco, of Aurora, Zamboanga del Sur. Yongco was found guilty of falsely accusing another local politician of murder-for-hire via a Facebook post, which he deleted minutes later, but of which archives were made; the court was unconvinced by his denial that he posted the message, and he was sentenced to eight years in jail and ordered to pay damages of (). A Magna Carta for Philippine Internet Freedom was crowdsourced by Filipino netizens with the intent of, among other things, repealing the Cybercrime Prevention Act of 2012; it failed to pass. Several organizations continue to fight for the decriminalization of all forms of libel in the Philippines, including the National Union of Journalists of the Philippines and Vera Files. See also Cyberbullying Philippine copyright law Notes References External links Alt URL 2012 in the Philippines Censorship in the Philippines Computing legislation Cyberbullying Internet in the Philippines Journalism in the Philippines Philippine intellectual property law Presidency of Benigno Aquino III Philippine law
28333802
https://en.wikipedia.org/wiki/College%20of%20Management%20Academic%20Studies
College of Management Academic Studies
The College of Management Academic Studies, a college located in the city of Rishon LeZion, Israel, is the largest college in Israel. Founded in 1978, COLMAN is the first non-subsidized, not-for-profit research academic institution in Israel to be recognized and certified by the Council for Higher Education in Israel. It offers bachelor's and master's degrees in business administration, law, media, economics, organizational development and consulting, computer science, behavioral sciences, family studies and interior design. The college places an emphasis on social awareness and responsibility, encouraging both students and faculty to take part in community and outreach activities. History The College of Management - Academic Studies (COLMAN) gained authorization in 1986 to award a BA degree in business administration and accounting. COLMAN became the first non-subsidized, non-profit academic institution in Israel to be recognized and certified by the country's supreme educational authority, the Council for Higher Education in Israel. Official recognition of COLMAN's status marked the democratization of Israeli higher education. Today, COLMAN is the largest college in Israel and has become a dynamic, innovative force in Israel's higher educational framework. The college has achieved its growth because of its success in meeting a real need: the desire of young Israelis for a curriculum that combines professional knowledge with practical application, close ties between faculty and students, and small classes. Teaching and degrees The Teaching Authority was established to improve the quality of teaching at institutions of higher education in Israel in general, and at COLMAN in particular, through various means, while developing tools to measure the quality of instruction. It encourages debate on the essence of academic teaching and its quality amongst the faculty and academic management. The Research Authority aims to encourage research activity among the College's schools and academic departments, to help faculty locate research funds both in Israel and abroad, and to prepare grant proposals. The Authority encourages faculty to engage in both basic and applied research as an important element in improving the quality of teaching, a stepping-stone for the personal academic advancement of the faculty, and a way of placing the College on the academic map in Israel and abroad. Academic year The academic year is divided into three terms: the first lasts from October to January, the second from February to May, and the third from June to September. Libraries Central library The College of Management central library has been located on the Rishon LeZion campus since 1995. In 2010, a building dedicated to the library was inaugurated. The library serves the students and academic staff of the college. The library building covers three floors and includes reading rooms housing collections in various fields, group work rooms, a daily press room, an audiovisual room and a teaching classroom. On the entry level there is the book lending desk, display shelves presenting new books and journals, sitting areas and computerized workstations. Copiers are placed on each floor near the elevator, and printers are located in each group work room. The library collection contains more than 80,000 volumes of books and copies of electronic media. The collection includes 450 print journals, thousands of electronic journals, 40 databases, and publications of research institutes, as well as newspapers, videotapes and DVDs.
The collection covers all topics included in the undergraduate and graduate curricula: management, finance, marketing, accounting, operations management, economics, international trade, international organizations, behavioral science, criminology, organizational behavior, family studies, communication, computer science, interior design, art and more. Books in the library are arranged by classification numbers representing subject areas and topics. Law library The law library was established in 1990, and in 1995 it moved to its new home in the law school building on the Rishon LeZion campus. The library plays a key role in the academic teaching, research and activities taking place in the law school, and provides broad access to legal information worldwide. At the entrance level there is a reading hall containing reference books, source files, Jewish law, a multi-disciplinary collection, sets of legislation and case law, and display shelves presenting the latest journals. The library has a comprehensive collection of legislation and case law from Israel and abroad, monographs in law and related areas, law journals, legal series, encyclopedias, dictionaries and guides. In addition to the printed collection, the library subscribes to a wide range of legal and other databases, including foreign legislation and case law, digital books and hundreds of electronic journals. These collections can also be accessed off-campus. Faculties and Departments Business Administration The school of business is the largest in the country, with an enrollment of approximately 3,500 students. It is the first private, non-state academic program ever officially accredited by the Council for Higher Education in Israel, Israel's accreditation authority for institutes of higher education. Law The Haim Striks School of Law was founded by Daniel Friedmann (subsequently Minister of Justice) in 1990. It offers both an LLB and a subsequent master's program. In 2005, in the Englard report, prepared for the Council for Higher Education in order to evaluate the quality of law faculties in Israel, the school was granted the highest ranking among the non-subsidized institutions and was ranked as one of Israel's leading law schools. The Haim Striks School of Law cooperates closely with leading universities in the United States and England, such as Fordham University and the University of Oxford. Media Studies The School is the largest school of Media Studies in the country, catering to more than 50% of Israel's students of Media Studies. The School's B.A. program in Media Studies and Management integrates theoretical and practical knowledge. Economics The School of Economics was established in 1994, and in recent years has undergone accelerated growth and development. Today, it is one of the largest departments in its field in Israel. It caters to 1,300 students for a BA in Economics and Management. Computer Science The Department of Computer Science is one of the largest and leading departments in its field in Israel. Behavioral Sciences The School of Behavioral Sciences at the College of Management was established in 1994 as a multidisciplinary department for studying psychology, sociology, and anthropology. Interior Design The Interior Design Department, founded in 1995, is a full academic track in interior design separate from architecture or design studies.
Alumni and academics The Alumni Forum was established to cater for the College's 31,000 alumni. It aims to maintain the connection between the College and its graduates, to encourage contact between the graduates and to fulfill the needs of the alumni both academically and in the business and employment fields. Office of International Programs The College of Management - Academic Studies has an international office that works with students from several countries. There is also a student club for foreign and Israeli students on campus known as the International Student Community (ISC). See also List of universities and colleges in Israel Education in Israel References External links College website Admissions website Educational institutions established in 1978 Colleges in Israel 1978 establishments in Israel Law schools in Israel
462545
https://en.wikipedia.org/wiki/Electronic%20data%20processing
Electronic data processing
Electronic data processing (EDP) can refer to the use of automated methods to process commercial data. Typically, this uses relatively simple, repetitive activities to process large volumes of similar information. For example: stock updates applied to an inventory, banking transactions applied to account and customer master files, booking and ticketing transactions to an airline's reservation system, billing for utility services. The modifier "electronic" or "automatic" was used with "data processing" (DP), especially c. 1960, to distinguish human clerical data processing from that done by computer. History Herman Hollerith, then at the U.S. Census Bureau, devised a tabulating system that included cards (the Hollerith card, later the punched card), a punch for making holes in them representing data, a tabulator and a sorter. The system was tested in computing mortality statistics for the city of Baltimore. In the first large-scale commercial application of automatic data processing, Hollerith machines were used to compile the data accumulated in the 1890 U.S. Census of population. Hollerith's Tabulating Machine Company merged with two other firms to form the Computing-Tabulating-Recording Company, later renamed IBM. The punch-card and tabulation machine business remained the core of electronic data processing until the advent of electronic computing in the 1950s (which then still rested on punch cards for storing information). The first commercial business computer was developed in the United Kingdom in 1951, by the J. Lyons and Co. catering organization. This was known as the 'Lyons Electronic Office' – or LEO for short. It was developed further and used widely during the 1960s and early 1970s. (Lyons formed a separate company to develop the LEO computers, and this subsequently merged to form English Electric Leo Marconi and then International Computers Limited.) By the end of the 1950s, punched card manufacturers (Hollerith, Powers-Samas, IBM and others) were also marketing an array of computers. Early commercial systems were installed exclusively by large organizations. These could afford to invest the time and capital necessary to purchase hardware, hire specialist staff to develop bespoke software and work through the consequent (and often unexpected) organizational and cultural changes. At first, individual organizations developed their own software, including data management utilities, themselves. Different products might also have 'one-off' bespoke software. This fragmented approach led to duplicated effort, and producing management information required manual effort. High hardware costs and relatively slow processing speeds forced developers to use resources 'efficiently'. Data storage formats were heavily compacted, for example. A common example is the removal of the century from dates, which eventually led to the 'millennium bug'. Data input required intermediate processing via punched paper tape or punched cards and separate entry into the computer, a repetitive, labor-intensive task that was removed from user control and error-prone. Invalid or incorrect data needed correction and resubmission, with consequences for data and account reconciliation. Data storage was strictly serial, on paper tape and later on magnetic tape: the use of data storage within readily accessible memory was not cost-effective until hard disk drives were first invented and began shipping in 1957.
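The batch pattern described in the introduction above, where a file of transactions is applied to a master file and invalid items are reported for correction and resubmission, can be illustrated with a short sketch. The account records, transaction layout, and names below are hypothetical and purely illustrative, not taken from any particular system.

```python
# Minimal sketch of the classic EDP batch pattern: a file of transactions
# is applied to a master file of accounts, and exceptions are reported.
# The record layout (account id, signed amount) is hypothetical.

from decimal import Decimal

master = {
    "ACC001": Decimal("250.00"),
    "ACC002": Decimal("75.50"),
}

transactions = [
    ("ACC001", Decimal("-40.00")),
    ("ACC002", Decimal("120.25")),
    ("ACC999", Decimal("10.00")),   # unknown account: goes to the error report
]

def run_batch(master, transactions):
    """Apply every transaction to the master file; collect rejects for resubmission."""
    rejects = []
    for account, amount in transactions:
        if account not in master:
            rejects.append((account, amount, "unknown account"))
            continue
        master[account] += amount
    return rejects

if __name__ == "__main__":
    errors = run_batch(master, transactions)
    for account, balance in master.items():
        print(f"{account}: {balance}")
    for reject in errors:
        print("REJECT:", reject)
```

The reject list mirrors the correction-and-resubmission cycle mentioned above: invalid input was not fixed in place but reported, corrected, and fed into a later run.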
Significant developments took place in 1959 with IBM announcing the 1401 computer and in 1962 with ICT (International Computers & Tabulators) making delivery of the ICT 1301. Like all machines of this period, the processor, together with the peripherals (magnetic tape drives, disk drives, drums, printers, and card and paper tape input and output devices), required considerable space in specially constructed air-conditioned accommodation. Often parts of the punched card installation, in particular sorters, were retained to present the card input to the computer in a pre-sorted form that reduced the processing time involved in sorting large amounts of data. Data processing facilities became available to smaller organizations in the form of the computer services bureau. These offered processing of specific applications, e.g. payroll, and were often a prelude to the purchase of customers' own computers. Organizations used these facilities for testing programs while awaiting the arrival of their own machine. These initial machines were delivered to customers with limited software. The design staff was divided into two groups: systems analysts produced a systems specification, and programmers translated the specification into machine language. Literature on computers and EDP was sparse and mostly obtained through articles appearing in accountancy publications and material supplied by the equipment manufacturers. The first issue of The Computer Journal, published by The British Computer Society, appeared in mid 1958. The UK accountancy body now named the Association of Chartered Certified Accountants formed an Electronic Data Processing Committee in July 1958 with the purpose of informing its members of the opportunities created by the computer. The Committee produced its first booklet in 1959, An Introduction to Electronic Computers. Also in 1958, The Institute of Chartered Accountants in England and Wales produced a paper, Accounting by Electronic Methods. The paper outlined what might be possible and the potential implications of using a computer. Progressive organizations attempted to go beyond the straight transfer of systems from punched card equipment and unit accounting machines to the computer, to producing accounts to the trial balance stage and integrated management information systems. New procedures redesigned the way paper flowed, changed organizational structures, called for a rethink of the way information was presented to management and challenged the internal control principles adopted by the designers of accounting systems. But the full realization of these benefits had to await the arrival of the next generation of computers. Today As with other industrial processes, commercial IT has moved in most cases from a custom-order, craft-based industry, where the product was tailored to fit the customer, to multi-use components taken off the shelf to find the best fit in any situation. Mass production has greatly reduced costs, and IT is available to the smallest organization. LEO was hardware tailored for a single client. Today, Intel Pentium and compatible chips are standard and become parts of other components which are combined as needed. One individual change of note was the freeing of computers and removable storage from protected, air-filtered environments. Microsoft and IBM have at various times been influential enough to impose order on IT, and the resultant standardizations allowed specialist software to flourish. Software is available off the shelf.
Apart from products such as Microsoft Office and IBM Lotus, there are also specialist packages for payroll and personnel management, account maintenance and customer management, to name a few. These are highly specialized and intricate components of larger environments, but they rely upon common conventions and interfaces. Data storage has also been standardized. Relational databases are developed by different suppliers using common formats and conventions. Common file formats can be shared by large mainframes and desktop personal computers, allowing online, real-time input and validation. In parallel, software development has fragmented. There are still specialist technicians, but these increasingly use standardized methodologies where outcomes are predictable and accessible. At the other end of the scale, any office manager can dabble in spreadsheets or databases and obtain acceptable results (but there are risks, because many do not know what software testing is). Specialized software is software that is written for a specific task rather than for a broad application area; these programs provide facilities specifically for the purpose for which they were designed. See also Computing Data processing Data processing system Information Technology References Information technology management Data processing
705620
https://en.wikipedia.org/wiki/4X
4X
4X (abbreviation of Explore, Expand, Exploit, Exterminate) is a subgenre of strategy-based computer and board games, and includes both turn-based and real-time strategy titles. The gameplay involves building an empire. Emphasis is placed upon economic and technological development, as well as a range of non-military routes to supremacy. The earliest 4X games borrowed ideas from board games and 1970s text-based computer games. The first 4X computer games were turn-based, but real-time 4X games are also common. Many 4X computer games were published in the mid-1990s, but were later outsold by other types of strategy games. Sid Meier's Civilization is an important example from this formative era, and popularized the level of detail that later became a staple of the genre. In the new millennium, several 4X releases have become critically and commercially successful. In the board (and card) game domain, 4X is less of a distinct genre, in part because of the practical constraints of components and playing time. The Civilization board game that gave rise to Sid Meier's Civilization computer game, for instance, includes neither exploration nor extermination. Unless extermination is targeted at non-player entities, it tends to be either nearly impossible (because of play balance mechanisms, since player elimination is usually considered an undesirable feature) or certainly unachievable (because victory conditions are triggered before extermination can be completed) in board games. Definition The term "4X" originates from a 1993 preview of Master of Orion in Computer Gaming World by game writer Alan Emrich, where he rated the game "XXXX" as a pun on the XXX rating for pornography. The four Xs were an abbreviation for "EXplore, EXpand, EXploit and EXterminate". Since then, others have adopted the term to describe games of similar scope and design. By February 1994, another author in the magazine said that Command Adventures: Starship "only pays lip service to the four X's", and other game commentators adopted the "4X" label to describe similar games. The 4X game genre has come to be defined as having the following four gameplay conventions: Explore means players send scouts across a map to reveal surrounding territories. Expand means players claim new territory by creating new settlements, or sometimes by extending the influence of existing settlements. Exploit means players gather and use resources in areas they control, and improve the efficiency of that usage. Exterminate means attacking and eliminating rival players. Since in some games all territory is eventually claimed, eliminating a rival's presence may be the only way to achieve further expansion. These gameplay elements may happen in separate phases of gameplay, or may overlap with each other over varying lengths of game time depending on game design. For example, the Space Empires series and Galactic Civilizations II: Dark Avatar have a long expansion phase, because players must make large investments in research to explore and expand into all areas. Emrich later expanded his concept for designing Master of Orion 3 with a fifth X, eXperience, an aspect that came with the subject matter of the game. Modern definition In modern-day usage, 4X games differ from other strategy games such as Command & Conquer in their greater complexity and scale, and their complex use of diplomacy. Reviewers have also said that 4X games feature a range of diplomatic options, and that they are well known for their large detailed empires and complex gameplay.
In particular, 4X games offer detailed control over an empire's economy, while other computer strategy games simplify this in favor of combat-focused gameplay. Game design 4X computer and board games are a subgenre of strategy games, and include both turn-based and real-time strategy titles. The gameplay involves building an empire, which takes place in a setting such as Earth, a fantasy world, or in space. Each player takes control of a different civilization or race with unique characteristics and strengths. Most 4X games represent these racial differences with a collection of economic and military bonuses. Research and technology 4X games typically feature a technology tree, which represents a series of advancements that players can unlock to gain new units, buildings, and other capabilities. Technology trees in 4X games are typically larger than in other strategy games, featuring a larger selection of different choices. Empires must generate research resources and invest them in new technology. In 4X games, the main prerequisite for researching an advanced technology is knowledge of earlier technology. This is in contrast to non-4X real-time strategy games, where technological progress is achieved by building structures that grant access to more advanced structures and units. Research is important in 4X games because technological progress is an engine for conquest. Battles are often won by superior military technology or greater numbers, with battle tactics playing a smaller part. In contrast, military upgrades in non-4X games are sometimes small enough that technologically basic units remain important throughout the game. Combat Combat is an important part of 4X gameplay, because 4X games allow a player to win by exterminating all rival players, or by conquering a threshold amount of the game's universe. Some 4X games, such as Galactic Civilizations, resolve battles automatically, whenever two units from warring sides meet. This is in contrast to other 4X games, such as Master of Orion, that allow players to manage battles on a tactical battle screen. Even in 4X games with more detailed control over battles, victory is usually determined by superior numbers and technology, with battle tactics playing a smaller part. 4X games differ from other combat-focused strategy games by putting more emphasis on research and economics. Researching new technology will grant access to new combat units. Some 4X games even allow players to research different unit components. This is more typical of space 4X games, where players may assemble a ship from a variety of engines, shields, and weaponry. Peaceful competition 4X games allow rival players to engage in diplomacy. While some strategy games may offer shared victory and team play, diplomatic relations tend to be restricted to a binary choice between an ally or enemy. 4X games often allow more complex diplomatic relations between competitors who are not on the same team. Aside from making allies and enemies, players are also able to trade resources and information with rivals. In addition to victory through conquest, 4X games offer peaceful victory conditions or goals that involve no extermination of rival players (although war may still be a necessary by-product of reaching said goal). For example, a 4X game may offer victory to a player who achieves a certain score or the highest score after a certain number of turns. 
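In code, a score- or condition-based win check like the ones just described reduces to evaluating a set of predicates against the game state each turn. The sketch below is a deliberately simplified illustration of that idea; the condition names, thresholds, and state fields are invented for the example and are not drawn from any particular title.

```python
# Minimal sketch of evaluating several 4X-style victory conditions each turn.
# Condition names, thresholds, and state fields are invented for illustration.

def score_victory(state, threshold=1000):
    return state["score"] >= threshold

def domination_victory(state):
    # Last empire standing wins.
    return state["rivals_remaining"] == 0

def tech_victory(state, final_tech="interstellar_flight"):
    return final_tech in state["known_techs"]

VICTORY_CHECKS = {
    "score": score_victory,
    "domination": domination_victory,
    "technology": tech_victory,
}

def check_victory(state):
    """Return the name of the first satisfied victory condition, if any."""
    for name, check in VICTORY_CHECKS.items():
        if check(state):
            return name
    return None

if __name__ == "__main__":
    state = {"score": 820, "rivals_remaining": 2, "known_techs": {"interstellar_flight"}}
    print(check_victory(state))   # prints 'technology'
```

Keeping each condition as an independent predicate mirrors how such games let the player pursue several winning routes in parallel and commit to one late in the game.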
Many 4X games award victory to the first player to master an advanced technology, accumulate a large amount of culture, or complete an awe-inspiring achievement. Several 4X games award "diplomatic victory" to anyone who can win an election decided by their rival players, or maintain peace for a specified number of turns. Galactic Civilizations has a diplomatic victory which involves having alliances with at least four factions, with no factions outside of one's alliance; there are two ways to accomplish this: ally with all factions, or ally with at least the minimum number of factions and destroy the rest. Complexity 4X games are known for their complex gameplay and strategic depth. Gameplay usually takes priority over elaborate graphics. Whereas other strategy games focus on combat, 4X games also offer more detailed control over diplomacy, economics, and research; creating opportunities for diverse strategies. This also challenges the player to manage several strategies simultaneously, and plan for long-term objectives. To experience a detailed model of a large empire, 4X games are designed with a complex set of game rules. For example, the player's productivity may be limited by pollution. Players may need to balance a budget, such as managing debt, or paying down maintenance costs. 4X games often model political challenges such as civil disorder, or a senate that can oust the player's political party or force them to make peace. Such complexity requires players to manage a larger amount of information than other strategy games. Game designers often organize empire management into different interface screens and modes, such as a separate screen for diplomacy, managing individual settlements, and managing battle tactics. Sometimes systems are intricate enough to resemble a minigame. This is in contrast to most real-time strategy games. Dune II, which arguably established the conventions for the real-time strategy genre, was fundamentally designed to be a "flat interface", with no additional screens. Gameplay Since 4X games involve managing a large, detailed empire, game sessions usually last longer than other strategy games. Game sessions may require several hours of play-time, which can be particularly problematic for multiplayer matches. For example, a small-scale game in Sins of a Solar Empire can last longer than twelve hours. However, fans of the genre often expect and embrace these long game sessions; Emrich wrote that "when the various parts are properly designed, other X's seem to follow. Words like EXcite, EXperiment and EXcuses (to one's significant others)". Turn-based 4X games typically divide these sessions into hundreds of turns of gameplay. Because of repetitive actions and long-playing times, 4X games have been criticized for excessive micromanagement. In early stages of a game this is usually not a problem, but later in a game directing an empire's numerous settlements can demand several minutes to play a single turn. This increases playing-times, which are a particular burden in multiplayer games. 4X games began to offer AI governors that automate the micromanagement of a colony's build orders, but players criticized these governors for making poor decisions. In response, developers have tried other approaches to reduce micromanagement, and some approaches have been more well received than others. Commentators generally agree that Galactic Civilizations succeeds, which GamingNexus.com attributes to the game's use of programmable governors. 
Sins of a Solar Empire was designed to reduce the incentives for micromanagement, and reviewers found that the game's interface made empire management more elegant. On the other hand, Master of Orion III reduced micromanagement by limiting complete player control over their empire. Victory conditions Most 4X and similar strategy games feature multiple possible ways to win the game. For example, in Civilization, players may win through total domination of all opposing players by conquest of their cities, but may also win through technological achievements (being the first to launch a spacecraft to a new planet), diplomacy (achieving peace agreements with all other nations), or other means. Multiple victory conditions help to support the human player who may have to shift strategies as the game progresses and opponents secure key resources before the player can. However, these multiple conditions can also give the computer-controlled opponents multiple pathways to potentially outwit the player, who is generally going to be over-powered in certain areas over the computer opponents. A component of the late-game design in 4X games is forcing the player to commit to a specific victory condition by making the cost and resources required to secure it so great that other possible victory conditions may need to be passed over. History Origin Early 4X games were influenced by board games and text-based computer games from the 1970s. Cosmic Balance II, Andromeda Conquest and Reach for the Stars were published in 1983, and are now seen retrospectively as 4X games. Although Andromeda Conquest was only a simple game of empire expansion, Reach for the Stars introduced the relationship between economic growth, technological progress, and conquest. Trade Wars, first released in 1984, though primarily regarded as the first multiplayer space trader, included space exploration, resource management, empire building, expansion and conquest. It has been cited by the author of VGA Planets as an important influence on VGA Planets 4. In 1991, Sid Meier released Civilization and popularized the level of detail that has become common in the genre. Sid Meier's Civilization was influenced by board games such as Risk and the Avalon Hill board game also called Civilization. A notable similarity between the Civilization computer game and board game is the importance of diplomacy and technological advancement. Sid Meier's Civilization was also influenced by personal computer games such as the city management game SimCity and the wargame Empire. Civilization became widely successful and influenced many 4X games to come; Computer Gaming World compared its importance to computer gaming to that of the wheel. Armada 2525 was also released in 1991 and was cited by the Chicago Tribune as the best space game of the year. A sequel, Armada 2526 was released in 2009. In 1991, two highly influential space games were released. VGA Planets was released for the PC, while Spaceward Ho! was released on the Macintosh. Although 4X space games were ultimately more influenced by the complexity of VGA Planets, Spaceward Ho! earned praise for its relatively simple yet challenging game design. Spaceward Ho! is notable for its similarity to the 1993 game Master of Orion, with its simple yet deep gameplay. Master of Orion also drew upon earlier 4X games such as Reach for the Stars, and is considered a classic game that sets a new standard for the genre. In a preview of Master of Orion, Emrich coined the term "XXXX" to describe the emerging genre. 
Eventually, the "4X" label was adopted by the game industry, and is now applied to several earlier game releases. Peak Following the success of Civilization and Master of Orion, other developers began releasing their own 4X games. In 1994, Stardock launched its first version of the Galactic Civilizations series for OS/2, and the long-standing Space Empires series began as shareware. Ascendancy and Stars! were released in 1995, and both continued the genre's emphasis on strategic depth and empire management. Meanwhile, the Civilization and Master of Orion franchises expanded their market with versions for the Macintosh. Sid Meier's team also produced Colonization in 1994 and Civilization II in 1996, while Simtex released Master of Orion in 1993, Master of Magic in 1994 and Master of Orion II in 1996. By the late 1990s, real-time strategy games began outselling turn-based games. As they surged in popularity, major 4X developers fell into difficulties. Sid Meier's Firaxis Games released Sid Meier's Alpha Centauri in 1999 to critical acclaim, but the game fell short of commercial expectations. Civilization III encountered development problems followed by a rushed release in 2001. Despite the excitement over Master of Orion III, its release in 2003 was met with criticism for its lack of player control, poor interface, and weak AI. Game publishers eventually became risk-averse to financing the development of 4X games. Real-time hybrid 4X Eventually real-time 4X games were released, such as Imperium Galactica in 1997, Starships Unlimited in 2001, and Sword of the Stars in 2006, featuring a combination of turn-based strategy and real-time tactical combat. The blend of 4X and real-time strategy gameplay led Ironclad Games to market their 2008 release Sins of a Solar Empire as a "RT4X" game. This combination of features earned the game a mention as one of the top games from 2008, including GameSpot's award for best strategy game, and IGN's award for best PC game. The Total War series, debuting in 2000 with Shogun: Total War, combines a turn-based campaign map and real-time tactical battles. 4X in board games Cross-fertilization between board games and video games continued. For example, some aspects of Master of Orion III were drawn from the first edition of the board game Twilight Imperium. Even Sins of a Solar Empire was inspired by the idea of adapting the board game Buck Rogers Battle for the 25th Century into a real-time video game. Going in the opposite direction, in 2002 Eagle Games made a board game adaptation of Sid Meier's Civilization, entitled simply Sid Meier's Civilization: The Boardgame, significantly different from the board game that had inspired the computer game in the first place. Another remake based on that series, under a very similar title, Sid Meier's Civilization: The Board Game, was released in 2010 by Fantasy Flight Games, followed by Civilization: A New Dawn in 2017. As of early 2021, BoardGameGeek listed close to 200 board games classified under 4X type, including titles such as Eclipse (2011) and Heroes of Land, Air & Sea (2018). Recent history In 2003, Stardock released a remake of Galactic Civilizations, which was praised by reviewers who saw the game as a replacement for the Master of Orion series. Civilization IV was released at the end of 2005 and was considered the PC game of the year according to several reviewers, including GameSpot and GameSpy. 
It is now considered one of the greatest computer games in history, having been ranked the second-best PC game of all time by IGN. By 2008, the Civilization series had sold over eight million copies, and it was followed by the release of Civilization Revolution for game consoles soon after, Civilization V in 2010 and Civilization VI in 2016. Meanwhile, Stardock released Galactic Civilizations II, which was considered the sixth-best PC game of 2006 by GameSpy. Additionally, French developer Amplitude Studios released both Endless Space and Endless Legend. These successes have led Stardock's Brad Wardell to assert that 4X games have excellent growth potential, particularly among less hardcore players. Paradox Interactive specializes in grand strategy games, a genre that often overlaps with 4X, including the Europa Universalis series, the Crusader Kings series, and Stellaris. Grand strategy games differ in being "asymmetrical", meaning that players are bound to a specific setup rather than playing as equally free factions exploring and progressing through the game and an open world. Amplitude Studios released Endless Space 2 in 2017, which was met with mixed to positive reviews. The 4X genre has also been extended by gamers who have supported free software releases such as Freeciv, FreeCol, Freeorion, Golden Age of Civilizations, and C-evo. See also List of 4X video games References Strategy video games Video game genres
475584
https://en.wikipedia.org/wiki/William%20Quantrill
William Quantrill
William Clarke Quantrill (July 31, 1837 – June 6, 1865) was a Confederate guerrilla leader during the American Civil War. Having endured a tempestuous childhood before later becoming a schoolteacher, Quantrill joined a group of bandits who roamed the Missouri and Kansas countryside to apprehend escaped slaves. Later, the group became Confederate soldiers, who were referred to as "Quantrill's Raiders". It was a pro-Confederate partisan ranger outfit that was best known for its often brutal guerrilla tactics. Also notable is that the group included the young Jesse James and his older brother Frank James. Quantrill is often noted as influential in the minds of many bandits, outlaws and hired guns of the Old West as it was being settled. In May 1865, Quantrill was mortally wounded in combat by Union troops in Central Kentucky in one of the last engagements of the Civil War. He died of wounds in June. Early life William Quantrill was born at Canal Dover, Ohio, on July 31, 1837. His father was Thomas Henry Quantrill, formerly of Hagerstown, Maryland, and his mother, Caroline Cornelia Clark, was a native of Chambersburg, Pennsylvania. Quantrill was also the oldest of twelve children, four of whom died in infancy. By the time he was sixteen, Quantrill was teaching school in Ohio. In 1854, his abusive father died of tuberculosis, leaving the family with a huge financial debt. Quantrill's mother had to turn her home into a boarding house in order to survive. During this time, Quantrill helped support the family by continuing to work as a schoolteacher, but he left home a year later and headed to Mendota, Illinois. Here, Quantrill took up a job in the lumberyards, unloading timber from rail cars. One night while working the late shift, he killed a man. Authorities briefly arrested him, but Quantrill claimed that he had acted in self-defense. Since there were no eyewitnesses and the victim was a stranger who knew no one in town, William was set free. Nevertheless, the police strongly urged him to leave Mendota. Quantrill continued his career as a teacher, moving to Fort Wayne, Indiana, in February 1856. Quantrill journeyed back home to Canal Dover that fall. Quantrill spent the winter in his family's diminutive shack in the impoverished town, and he soon grew rather restless. At this time, many Ohioans were migrating to the Kansas Territory in search of cheap land and opportunity. This included Henry Torrey and Harmon Beeson, two local men hoping to build a large farm for their families out west. Although they mistrusted the 19-year-old William, his mother's pleadings persuaded them to let her son accompany them in an effort to get him to turn his life around. The party of three departed in late February 1857. Torrey and Beeson agreed to pay for Quantrill's land in exchange for a couple of months' worth of work. They settled at Marais des Cygnes, but things did not go as well as planned. After about two months, Quantrill began to slack off when it came to working the land, and he spent most days wandering aimlessly about the wilderness with a rifle. A dispute arose over the claim, and he went to court with Torrey and Beeson. The court awarded the men what was owed to them, but Quantrill paid only half of what the court had mandated. Although his relationship with Beeson was never the same, Quantrill remained friends with Torrey. Shortly afterwards, Quantrill accompanied a large group of hometown friends in their quest to start a settlement on Tuscarora Lake. 
However, neighbors soon began to notice Quantrill stealing goods out of other people's cabins and so they banished him from the community in January 1858. Soon thereafter, he signed on as a teamster with the U.S. Army expedition heading to Salt Lake City, Utah in the spring of 1858. Little is known of Quantrill's journey out west except that he excelled at the game of poker. He racked up piles of winnings by playing the game against his comrades at Fort Bridger but flushed it all on one hand the next day, leaving him dead broke. Quantrill then joined a group of Missouri ruffians and became somewhat of a drifter. The group helped protect Missouri farmers from the Jayhawkers for pay and slept wherever they could find lodging. Quantrill traveled back to Utah and then to Colorado but returned in less than a year to Lawrence, Kansas, in 1859 where he taught at a schoolhouse until it closed in 1860. He then took up with brigands and turned to cattle rustling and anything else that could earn him money. He also learned the profitability of capturing runaway slaves and devised plans to use free black men as bait for runaway slaves, whom he subsequently captured and returned to their masters in exchange for reward money. Initially, before 1860, Quantrill appeared to oppose slavery. For instance, he wrote to his good friend W.W. Scott in January 1858 that the Lecompton Constitution was a "swindle" and that James H. Lane, a Northern sympathizer, was "as good a man as we have here". He also called the Democrats "the worst men we have for they are all rascals, for no one can be a democrat here without being one". However, in February 1860, Quantrill wrote a letter to his mother that expressed his views on the anti-slavery supporters. He told her that slavery was right and that he now detested Jim Lane. He said that the hanging of John Brown had been too good for him and that "the devil has got unlimited sway over this territory, and will hold it until we have a better set of man and society generally." Guerrilla leader In 1861, Quantrill went to Texas with the slaveholder Marcus Gill. There, they met Joel B. Mayes and joined the Cherokee Nations. Mayes was a half Scots-Irish and half Cherokee Confederate sympathizer and a war chief of the Cherokee Nations in Texas. He had moved from Georgia to the old Indian Territory in 1838. Mayes enlisted and served as a private in Company A of the 1st Cherokee Regiment in the Confederate army. It was Mayes who taught Quantrill guerrilla warfare tactics, who would learn the ambush fighting tactics used by the Native Americans as well as sneak attacks and camouflage. Quantrill, in the company of Mayes and the Cherokee Nations, joined with General Sterling Price and fought at the Battle of Wilson's Creek and Lexington in August and September 1861. In the last days of September, Quantrill deserted General Price's army and went home to Blue Springs, Missouri, to form his own "army" of loyal men who had great belief in him and the Confederate cause, and they came to be known as "Quantrill's Raiders". By Christmas 1861, he had ten men who would follow him full-time into his pro-Confederate guerrilla organization: William Haller, George Todd, Joseph Gilcrist, Perry Hoy, John Little, James Little, Joseph Baughan, William H. Gregg, James A. Hendricks, and John W. Koger. Later in 1862, John Jarrett, John Brown (not to be confused with the abolitionist John Brown), Cole Younger, William T. "Bloody Bill" Anderson, and the James brothers would join Quantrill's army. 
On March 7, 1862, Quantrill and his men overcame a small Union outpost at Aubry, Kansas and ransacked the town. On March 11, 1862, Quantrill joined Confederate forces under Colonel John T. Hughes and took part in attack on Independence, Missouri. After what became known as the First Battle of Independence, the Confederate government decided to secure the loyalty of Quantrill by issuing him a "formal army commission" to the rank of captain. On September 7, 1862, after midnight, Quantrill with 140 of his men captured Olathe, Kansas, where he surprised 125 Union soldiers, who were forced to surrender. On October 5, 1862, Quantrill attacked and destroyed Shawneetown, Kansas, and Bill Anderson soon revisited and torched the rebuilding settlement. On November 5, 1862, Quantrill joined Colonel Warner Lewis to stage an attack on Lamar, Missouri, where a company of the 8th Regiment Missouri Volunteer Cavalry protected a Union outpost. Warned about the attack, the Union soldiers were able to repel the raiders, who torched part of the town before they retreated. Lawrence Massacre The most significant event in Quantrill's guerrilla career took place on August 21, 1863. Lawrence had been seen for years as the stronghold of the antislavery forces in Kansas and as a base of operation for incursions into Missouri by Jayhawkers and pro-Union forces. It was also the home of James H. Lane, a senator known in Missouri for his staunch opposition to slavery and as a leader of the Jayhawkers. During the weeks immediately preceding the raid, Union General Thomas Ewing, Jr., had ordered the detention of any civilians giving aid to Quantrill's Raiders. Several female relatives of the guerrillas had been imprisoned in a makeshift jail in Kansas City, Missouri. On August 14, the building collapsed, killing four young women and seriously injuring others. Among the dead was Josephine Anderson, the sister of one of Quantrill's key guerrilla allies, Bill Anderson. Another of Anderson's sisters, Mary, was permanently crippled in the collapse. Quantrill's men believed that the collapse was deliberate, which fanned them into a fury. Some historians have suggested that Quantrill had actually planned to raid Lawrence before the building's collapse, in retaliation for earlier Jayhawker attacks as well as the burning of Osceola, Missouri. Early in the morning of August 21, Quantrill descended from Mount Oread and attacked Lawrence at the head of a combined force of as many as 450 guerrilla fighters. Lane, a prime target of the raid, managed to escape through a cornfield in his nightshirt, but the guerrillas, on Quantrill's orders, killed around 150 men and boys who were able to carry a rifle. When Quantrill's men rode out at 9 a.m., most of Lawrence's buildings were burning, including all but two businesses. On August 25, in retaliation for the raid, General Ewing authorized General Order No. 11 (not to be confused with General Ulysses S. Grant's order of the same name). The edict ordered the depopulation of three and a half Missouri counties along the Kansas border with the exception of a few designated towns, which forced tens of thousands of civilians to abandon their homes. Union troops marched through behind them and burned buildings, torched planted fields, and shot down livestock to deprive the guerrillas of food, fodder and support. The area was so thoroughly devastated that it became known thereafter as the "Burnt District". 
In early October, Quantrill and his men rode south to Texas, where they decided to pass the winter. On his way, on October 6, Quantrill chose to attack Fort Blair in Baxter Springs, Kansas, which resulted in the so-called Battle of Baxter Springs. After being repelled, Quantrill surprised and destroyed a Union relief column under General James G. Blunt, who escaped, but almost 100 Union soldiers were killed. In Texas, on May 18, 1864, Quantrill's sympathizers lynched Collin County Sheriff Captain James L. Read for shooting the Calhoun Brothers from Quantrill's force who had killed a farmer in Millwood, Texas. Last years While in Texas, Quantrill and his 400 men quarreled. His once-large band broke up into several smaller guerrilla companies. One was led by his lieutenant, "Bloody Bill" Anderson, and Quantrill joined it briefly in the fall of 1864 during a fight north of the Missouri River. In the spring of 1865, now leading only a few dozen pro-confederates, Quantrill staged a series of raids in western Kentucky. Confederate General Robert E. Lee surrendered to Ulysses Grant on April 9, and General Joseph E. Johnston surrendered most of the rest of the Confederate Army to General Sherman on April 26. On May 10, Quantrill and his band were caught in a Union ambush at Wakefield Farm. Unable to escape on account of a skittish horse, he was shot in the back and paralyzed from the chest down. The unit that successfully ambushed Quantrill and his followers was led by Edwin W. Terrell, a guerrilla hunter charged with finding and eliminating high profile targets by General John M. Palmer, the commander of the District of Kentucky. The Union officials, Palmer and Governor Thomas E. Bramlette, had no interest in Quantrill staging a repeat of his performance in Missouri in 1862–1863. He was brought by wagon to Louisville, Kentucky, and taken to the military prison hospital, on the north side of Broadway at 10th Street. He died from his wounds on June 6, 1865, at the age of 27. Burial Quantrill was buried in an unmarked grave, which is now marked, in what later became known as St. John's Cemetery in Louisville. A boyhood friend of Quantrill, the newspaper reporter William W. Scott, claimed to have dug up the Louisville grave in 1887 and to have brought Quantrill's remains back to Dover at the request of Quantrill's mother. The remains were supposedly buried in Dover in 1889, but Scott attempted to sell what he said were Quantrill's bones and so it is unknown if the remains he returned to Dover or buried in Dover were genuine. In the early 1990s, the Missouri division of the Sons of Confederate Veterans convinced the Kansas State Historical Society to negotiate with authorities in Dover, which led to three arm bones, two leg bones, and some hair, all of which were allegedly Quantrill's, being re-buried in 1992 at the Old Confederate Veteran's Home Cemetery in Higginsville, Missouri. As a result, there are grave markers for Quantrill in Louisville, Dover, and Higginsville. Claims of survival In August 1907, news articles appeared in Canada and the US that claimed that J.E. Duffy, a member of a Michigan cavalry troop that had dealt with Quantrill's raiders during the Civil War, met Quantrill at Quatsino Sound, on northern Vancouver Island, while he was investigating timber rights in the area. Duffy claimed to recognize the man, living under the name of John Sharp, as Quantrill. Duffy said that Sharp admitted he was Quantrill and discussed in detail raids in Kansas and elsewhere. 
Sharp claimed that he had survived the ambush in Kentucky but received a bayonet and bullet wound, making his way to South America where he lived some years in Chile. He returned to the US and worked as a cattleman in Fort Worth, Texas. He then moved to Oregon, acting as a cowpuncher and drover, before he reached British Columbia in the 1890s, where he worked in logging, trapping and finally as a mine caretaker at Coal Harbour at Quatsino. Within some weeks after the news stories were published, two men came to British Columbia, travelling to Quatsino from Victoria, leaving Quatsino on a return voyage of a coastal steamer the next day. On that day, Sharp was found severely beaten and died several hours later without giving information about his attackers. The police were unable to solve the murder. Another legend that has circulated claims that Quantrill may have escaped custody and fled to Arkansas, where he lived under the name of L.J. Crocker until his death in 1917. Personal life During the war, Quantrill met the 13-year-old Sarah Katherine King at her parents' farm in Blue Springs, Missouri. They never married although she often visited and lived in camp with Quantrill and his men. At the time of his death, she was 17. Legacy Quantrill's actions remain controversial. Historians view him as an opportunistic, bloodthirsty outlaw; James M. McPherson, one of the most prominent experts on the American Civil War, calls him and Anderson "pathological killers" who "murdered and burned out Missouri Unionists". The historian Matthew Christopher Hulbert argues that Quantrill "ruled the bushwhacker pantheon" established by ex-Confederate officer and propagandist John Newman Edwards in the 1870s to provide Missouri with its own "irregular Lost Cause". Some of Quantrill's celebrity later rubbed off on other ex-Raiders, like John Jarrett, George and Oliver Shepherd, Jesse and Frank James, and Cole Younger, who went on after the war to apply Quantrill's hit-and-run tactics to bank and train robbery. The William Clarke Quantrill Society continues to celebrate Quantrill's life and deeds. In fiction Comics A Belgian comic series, Les Tuniques Bleues ("The Blue Coats", first printed in 1994) depicts Quantrill as twisted, even psychotic. In the DC Comics 12-part miniseries The Kents (1997), Quantrill is depicted as a traitorous man who lives under a false name in 1856 Kansas, pretending to befriend abolitionists and then leading them into deathtraps. Quantrill appears in two volumes of the Franco-Belgian comic series Blueberry, The Missouri Demons and Terror Over Kansas. Film Dark Command (1940), in which John Wayne opposes former schoolteacher turned guerrilla fighter "William Cantrell" in the early days of the Civil War. William Cantrell is a thinly veiled portrayal of William Quantrill. Walter Pidgeon portrays "Cantrell"/Quantrill. Renegade Girl (1946) deals with tension between Unionists and Confederates in Missouri. Ray Corrigan plays Quantrill. At the beginning of the film Fighting Man of the Plains (1949), starring Randolph Scott and Dale Robertson, Quantrill's Raiders are mentioned along with individual mentions of the more notorious members. Kansas Raiders (1950), Brian Donlevy (at age 49) portrayed Quantrill, in which Jesse James (played by Audie Murphy) falls under the influence of the guerilla leader. 
In Best of the Badmen (1951), Robert Ryan plays a Union officer who goes to Missouri after the Civil War to persuade the remnants of Quantrill's band to swear allegiance to the Union in return for a pardon. They are betrayed and he becomes their leader in a fight against corrupt law officers. In Red Mountain (1951), Alan Ladd plays a Confederate officer who joins and later becomes disillusioned with Quantrill, played by John Ireland. In Kansas Pacific (1953), Quantrill is the antagonist to Sterling Hayden's Federal character but is portrayed as trying to delay the building of the railroad before the war breaks out and is only captured at the end. In The Stranger Wore a Gun (1953), a former Quantrill Raider becomes bank robber until his old comrades catch up with him. Woman They Almost Lynched (1953) features Quantrill's wife Kate as a female gunslinger. Quantrill's Raiders (1958), focuses on the raid on Lawrence. Leo Gordon plays Quantrill. Young Jesse James (1960) also depicts Quantrill's influence on Jesse James. In Arizona Raiders (1965), Audie Murphy plays an ex-Quantrill Raider who is assigned the task of tracking down his former comrades. In Bandolero! (1968), Dean Martin plays Dee Bishop, a former Quantrill Raider who admits to participating in the attack on Lawrence. His brother Mace, played by James Stewart, was a member of the Union Army under General William Tecumseh Sherman. In The Outlaw Josey Wales (1976), ferry operator Sim Carstairs states to Josey Wales, "Bill Quantrill used this ferry all the time. Good friend of mine." In The Legend of the Golden Gun (1979), two men attempt to track down and kill Quantrill. Lawrence: Free State Fortress (1998) depicts the attack on Lawrence. In True Grit (1969) and True Grit (2010), Le Boeuf denounces Quantrill, whom Rooster Cogburn served with, as a killer of women and children. In Ride with the Devil (1999) protagonists ride with “Black John Ambrose” who is a loose portrayal of "Bloody Bill" Anderson and later join with Quantrill for the raid on Kansas. Quantrill, Anderson, and most Raiders are portrayed as blood thirsty and murderous. Literature Quantrill is a major character in Wildwood Boys (2000), James Carlos Blake's biographical novel of Bloody Bill Anderson. In the novel The Rebel Outlaw: Josey Wales (republished as Gone to Texas in later editions), by Asa (aka Forrest) Carter, Josey Wales is a former member of a Confederate raiding party led by "Bloody Bill" Anderson. The book is the basis of the Clint Eastwood film The Outlaw Josey Wales (1976). In Bradley Denton's alternate history tale "The Territory" (1992), Samuel Clemens joins Quantrill's Raiders and is with them when they attack Lawrence, Kansas. It was nominated for a Hugo, Nebula and World Fantasy Award for best novella. Frank Gruber's article "Quantrell's Flag" (1940), for Adventure Magazine (March through May, 1940), was published as a book titled Quantrell's Raiders (Ace Original, 954366 bound with Rebel Road). In Charles Portis' novel True Grit, and the 1969 and 2010 film versions thereof, Rooster Cogburn boasts of being a former member of Quantrill's Raiders, and LaBoeuf excoriates him for being part of the "border gang" that murdered men and children alike during the raid on Lawrence. The novel Woe To Live On (1987) by Daniel Woodrell was filmed as Ride With The Devil (1999) by Ang Lee. The film features a harrowing recreation of the Lawrence massacre and is notable for its overall authenticity. Quantrill, played by John Ales, makes brief appearances. 
In the novelization of the 1999 film Wild Wild West by Bruce Bethke, former Confederate General "Bloodbath" McGrath (played by Ted Levine) reflects on the fates of his several friends from the war, including Quantrill, Henry Wirz, and John Singleton Mosby. In the novel Lincoln's Sword (2010) by Debra Doyle and James D. Macdonald, the raid on Lawrence, Kansas, is told from the point of view of Cole Younger. In the story Hewn in Pieces for the Lord by John J. Miller – published in Drakas!, an anthology of stories set in S. M. Stirling's alternate history series The Domination – Quantrill managed to escape after the fall of the Confederacy, get to the slave-holding Draka society in Africa, and join its ruthless Security Directorate, where he tangles with the rebellious Mahdi in Sudan. In Magnus Chase, Hammer of Thor by Rick Riordan, William Quantrill is briefly mentioned as "Mother William" on page 80. Plays He is depicted in Robert Schenkkan's series of one-act plays, The Kentucky Cycle. Music Woody Guthrie's ballad Belle Starr identifies Quantrill as one of Starr's eight lovers, along with both of the James brothers. Television The actor Bruce Bennett played Quantrill in a 1954 episode of the syndicated television series Stories of the Century, starring Jim Davis as the railroad detective and narrator, Matt Clark. Gunsmoke's first television season episode "Reunion '78" features a showdown between cowboy Jerry Shand, who has just arrived in Dodge City, and long-time resident Andy Cully, hardware dealer, who was one of Quantrill's Raiders. Shand hails from Lawrence, Kansas, and has an old score to settle. Have Gun—Will Travel – The Teacher (1958). The episode mentions Quantrill's Raiders. A schoolteacher wants to teach the school children about both sides of the Civil War, and the people who hail from the North don't like it. The Rough Riders episode entitled "The Plot to Assassinate President Johnson" (1959), as the title suggests, involves Quantrill in a plot to assassinate President Andrew Johnson. The TV series Hondo featured both Quantrill and Jesse James in the episode "Hondo and the Judas" (1967). The Secret Adventures of Jules Verne episode, "The Ballad of Steeley Joe" (2000), depicts both Jesse James and William Quantrill. The USA Network's television show Psych, in an episode entitled "Weekend Warriors", featured a Civil War re-enactment that included William Quantrill. The episode spoke about Quantrill's actions in Lawrence, but the reenactment featured his death at the hands of a fictional nurse, Jenny Winslow, whose family was killed at Lawrence. Quantrill's Lawrence Massacre of 1863 is depicted in Steven Spielberg's mini-series Into the West (2005). Notes References The American West, Vol. 10, American West Pub. Co., 1973, pp. 13 to 17. Banasik, Michael E., Cavaliers of the bush: Quantrill and his men, Press of the Camp Pope Bookshop, 2003. Connelley, William Elsey, Quantrill and the border wars, The Torch Press, 1910 (reprinted by Kessinger Publishing, 2004). Dupuy, Trevor N., Johnson, Curt, and Bongard, David L., Harper Encyclopedia of Military Biography, Castle Books, 1992, 1st Ed. Edwards, John N., Noted Guerillas: The Warfare of the Border, St. Louis: Bryan, Brand, & Company, 1877. Eicher, David J., The Longest Night: A Military History of the Civil War, Simon & Schuster, 2001. Gilmore, Donald L., Civil War on the Missouri-Kansas border, Pelican Publishing, 2006. Hulbert, Matthew Christopher.
The Ghosts of Guerrilla Memory: How Civil War Bushwhackers Became Gunslingers in the American West. Athens: University of Georgia Press, 2016. Leslie, Edward E., The Devil Knows How to Ride: The True Story of William Clarke Quantrill and his Confederate Raiders, Da Capo Press, 1996. McKelvie, B.A., Magic, Murder & Mystery, Cowichan Leader Ltd. (printer), 1966, pp. 55 to 62. Mills, Charles, Treasure Legends Of The Civil War, Apple Cheeks Press, 2001. Schultz, Duane, Quantrill's war: the life and times of William Clarke Quantrill, 1837-1865, St. Martin's Press, 1997. Wellman, Paul I., A Dynasty of Western Outlaws, University of Nebraska Press, 1986. Further reading Castel, Albert E., William Clarke Quantrill, University of Oklahoma Press, 1999. Geiger, Mark W., Financial Fraud and Guerrilla Violence in Missouri's Civil War, 1861-1865, Yale University Press, 2010. Hulbert, Matthew Christopher, The Ghosts of Guerrilla Memory: How Civil War Bushwhackers Became Gunslingers in the American West. Athens: University of Georgia Press, 2016. Schultz, Duane, Quantrill's War: The Life and Times of William Clarke Quantrill, 1837–1865, Macmillan Publishing, 1997. Historiography Crouch, Barry A. "A 'Fiend in Human Shape?' William Clarke Quantrill and his Biographers", Kansas History (1999) 22#2 pp 142–156, analyzes the highly polarized historiography External links William Clark Quantrill Society Official website for the Family of Frank & Jesse James: Stray Leaves, A James Family in America Since 1650 T.J. Stiles, Jesse James: Last Rebel of the Civil War Guerrilla raiders in an 1862 Harper's Weekly story, with illustration Quantrill's Guerrillas Members In The Civil War Quantrill flag at Kansas Museum of History "Guerilla Warfare in Kentucky" — Article by Civil War historian/author Bryan S. Bush (1923 book of reminiscences by Harrison Trow) 1837 births 1865 deaths Confederate States Army officers Bushwhackers Bleeding Kansas Northern-born Confederates People of Missouri in the American Civil War People of Kansas in the American Civil War People of Kentucky in the American Civil War People of the Utah War People from Dover, Ohio Deaths by firearm in Kentucky American proslavery activists Warlords American mass murderers
53787084
https://en.wikipedia.org/wiki/Numbered%20Panda
Numbered Panda
Numbered Panda (also known as IXESHE, DynCalc, DNSCALC, and APT12) is a cyber espionage group believed to be linked with the Chinese military. The group typically targets organizations in East Asia. These organizations include, but are not limited to, media outlets, high-tech companies, and governments. Numbered Panda is believed to have been operating since 2009, and the group is also credited with a 2012 data breach at the New York Times. One of the group's typical techniques is to send PDF files loaded with malware via spear phishing campaigns. The decoy documents are typically written in traditional Chinese, which is widely used in Taiwan, and the targets are largely associated with Taiwanese interests. Numbered Panda appears to be actively seeking out cybersecurity research relating to the malware they use. After an Arbor Networks report on the group, FireEye noticed a change in the group's techniques to avoid future detection. Discovery and security reports Trend Micro first reported on Numbered Panda in a 2012 white paper. Researchers discovered that the group had launched spear phishing campaigns, using the Ixeshe malware, primarily against East Asian nations since approximately 2009. CrowdStrike further discussed the group in the 2013 blog post Whois Numbered Panda. This post followed the 2012 attack on the New York Times and its subsequent 2013 reporting on the attack. In June 2014, Arbor Networks released a report detailing Numbered Panda's use of Etumbot to target Taiwan and Japan. In September 2014, FireEye released a report highlighting the group's evolution. FireEye linked the release of the Arbor Networks report to Numbered Panda's change in tactics. Attacks East Asian Nations (2009-2011) Trend Micro reported on a campaign against East Asian governments, electronics manufacturers, and a telecommunications company. Numbered Panda engaged in spear phishing email campaigns with malicious attachments. Often, the malicious email attachments would be PDF files that exploited vulnerabilities in Adobe Acrobat, Adobe Reader, and Flash Player. The attackers also used an exploit that affected Microsoft Excel. The Ixeshe malware used in this campaign allowed Numbered Panda to list all services, processes, and drives; terminate processes and services; download and upload files; start processes and services; get victims' user names; get a machine's name and domain name; download and execute arbitrary files; cause a system to pause or sleep for a specified number of minutes; spawn a remote shell; and list all current files and directories. After installation, Ixeshe would start communicating with command-and-control servers; oftentimes three servers were hard-coded for redundancy. Numbered Panda often used compromised servers to create these command-and-control servers to increase control of a victim's network infrastructure. Using this technique, the group is believed to have amassed sixty servers by 2012. A majority of the command-and-control servers used in this campaign were located in Taiwan and the United States. Base64 was used for communication between the compromised computer and the server. Trend Micro found that, once decoded, the communication was a standardized structure that detailed the computer's name, local IP address, proxy server IP and port, and the malware ID. Researchers at CrowdStrike found that blogs and WordPress sites were frequently used in the command-and-control infrastructure to make the network traffic look more legitimate.
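Trend Micro's observation that the Base64-encoded check-in traffic decodes to a fixed layout of machine name, local IP address, proxy details and malware ID can be illustrated, from an analyst's perspective, with a short sketch. The delimiter and field order below are invented for the example; they are not the actual Ixeshe wire format.

```python
# Hypothetical illustration of decoding a Base64 beacon into labelled fields.
# The pipe-delimited layout used here is invented; the real Ixeshe format
# differs and is not reproduced.

import base64

FIELDS = ["hostname", "local_ip", "proxy_ip", "proxy_port", "malware_id"]

def decode_beacon(encoded: str) -> dict:
    """Base64-decode a check-in string and split it into named fields."""
    raw = base64.b64decode(encoded).decode("utf-8")
    return dict(zip(FIELDS, raw.split("|")))

if __name__ == "__main__":
    sample = base64.b64encode(b"WORKSTATION-7|10.0.0.12|10.0.0.1|8080|CAMP42").decode()
    print(decode_beacon(sample))
```

The point of the example is simply that Base64 is an encoding, not encryption: once the field layout is known, captured traffic can be turned back into readable system information.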
Japan and Taiwan (2011-2014)
An Arbor Security report found that Numbered Panda began a campaign against Japan and Taiwan using the Etumbot malware in 2011. Similar to the previously observed campaign, the attackers would use decoy files, such as PDF, Excel spreadsheets, or Word documents, as email attachments to gain access to victims' computers. Most of the documents observed were written in Traditional Chinese and usually pertained to Taiwanese government interests; several of the files related to upcoming conferences in Taiwan. Once the malicious file was downloaded and extracted by the victim, Etumbot used a right-to-left override exploit to trick the victim into downloading the malware installer. According to Arbor Security, the "technique is a simple way for malware writers to disguise names of malicious files. A hidden Unicode character in the filename will reverse the order of the characters that follow it, so that a .scr binary file appears to be a .xls document, for example." Once the malware is installed, it sends a request to a command-and-control server with an RC4 key to encrypt subsequent communication. As with the Ixeshe malware, Numbered Panda used Base64-encoded characters to communicate from compromised computers to the command-and-control servers. Etumbot is able to determine if the target computer is using a proxy and will bypass the proxy settings to directly establish a connection. After communication is established, the malware will send an encrypted message from the infected computer to the server with the NetBIOS name of the victim's system, the user name, the IP address, and whether the system is using a proxy. After the May 2014 Arbor Security report detailed Etumbot, FireEye discovered that Numbered Panda changed parts of the malware. FireEye noticed that the protocols and strings previously used were changed in June 2014. The researchers at FireEye believe this change was made to help the malware evade further detection. FireEye named this new version of Etumbot HighTide. Numbered Panda continued to target Taiwan with spear phishing email campaigns with malicious attachments. Attached Microsoft Word documents exploited a vulnerability to help propagate HighTide. FireEye found that compromised Taiwanese government employee email accounts were used in some of the spear phishing. HighTide differs from Etumbot in that its HTTP GET request changed the User Agent, the format and structure of the HTTP Uniform Resource Identifier, the executable file location, and the image base address.

New York Times (2012)
Numbered Panda is believed to be responsible for the computer network breach at the New York Times in late 2012. The attack occurred after the New York Times published a story about how the relatives of Wen Jiabao, the sixth Premier of the State Council of the People's Republic of China, "accumulated a fortune worth several billion dollars through business dealings." The computers used to launch the attack are believed to be the same university computers used by the Chinese military to attack United States military contractors. Numbered Panda used updated versions of the malware packages Aumlib and Ixeshe. The updated Aumlib allowed Numbered Panda to encode the body of a POST request to gather a victim's BIOS, external IP, and operating system. A new version of Ixeshe altered the previous version's network traffic pattern in an effort to evade existing network traffic signatures designed to detect Ixeshe-related infections.
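The right-to-left override trick used in the Japan and Taiwan campaign can be illustrated with a short, hypothetical sketch. The filename below is invented; the point is only to show how the hidden override character (U+202E) reverses the displayed order of the characters that follow it while the file's real extension stays unchanged.

RLO = "\u202e"  # the right-to-left override character described in the Arbor report

# On disk this file is still a .scr executable; the override only affects how
# bidirectional-aware renderers display the name (roughly "invitationrcs.xls").
actual_name = "invitation" + RLO + "slx.scr"

print(repr(actual_name))             # raw string, still ending in '.scr'
print(actual_name.endswith(".scr"))  # True: the real extension is unchanged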
References Chinese advanced persistent threat groups Espionage Cyberwarfare Electronic warfare
45641123
https://en.wikipedia.org/wiki/Outer%20Wilds
Outer Wilds
Outer Wilds is an action-adventure game developed by Mobius Digital and published by Annapurna Interactive. It was first released for Microsoft Windows, Xbox One, and PlayStation 4 in 2019; a Nintendo Switch port was first announced in 2021. The game features the player character exploring a solar system stuck in a 22-minute time loop, which ends as the sun goes supernova. The player progresses through the game by exploring the solar system and learning clues to the cause of the time loop. Outer Wilds received critical acclaim and won several Game of the Year awards, including the British Academy Games Award for Best Game. It received an expansion, Echoes of the Eye, in 2021.

Gameplay
Outer Wilds features an unnamed astronaut player character exploring a solar system that is stuck in a 22-minute time loop, resetting after the star goes supernova. The player is encouraged to learn the cause of the loop by exploring the solar system and uncovering secrets of an extinct alien race known as the Nomai, who visited the solar system thousands of years ago. In the first part of the game, the player links with an ancient Nomai statue which ensures that the player retains information discovered in each time loop when it restarts. For example, in order to use the ship, the player must get the launch codes from colleagues at the local observatory. These codes, and the knowledge of them, are the same across subsequent loops, allowing the player to immediately launch the ship without first visiting the observatory. The central premise of the game is exploration, with the player compelled to uncover the remains of the Nomai civilization to find the cause of the time loop and complete the game. All areas of the game are technically accessible to the player as soon as the ship launch codes are acquired; however, many areas are protected by logic puzzles which can often only be solved by learning more about the Nomai and speaking to fellow space explorers. Some events and locations change during the course of the time loop, which means that areas and puzzles are often only accessible at certain times. One example is the paired Ash Twin and Ember Twin planets orbiting so close to each other that sand from Ash Twin is funneled over to cover Ember Twin during the loop. This process gradually reveals the secrets buried on Ash Twin while simultaneously making the Ember Twin cave system inaccessible later on in the time loop. The player character has health, fuel and oxygen meters, which are replenished when the character returns to the ship or finds trees or refills. The player has several tools, including a camera probe which can be launched long distances and a signalscope for locating broadcast signals. There are no equipment upgrades during the game. After each death, whether from the sun going supernova or from misadventure (e.g. drowning, falling, or exposure to the vacuum of space), the player respawns and awakens back on their home planet at the start of the time loop.

Plot
The player takes the role of an unnamed alien space explorer preparing for their first solo flight. After being involuntarily paired with a statue on their home planet made by the Nomai, an ancient and mysterious race that had once colonized the system, the player discovers they are trapped in a time loop. Every loop resets when the sun goes supernova after 22 minutes, or when the player-character otherwise dies.
The player learns that the Nomai were obsessed with finding the "Eye of the Universe", a massive anomaly using macroscopic quantum mechanics that is older than the universe itself. Curious to find out what was held within the Eye, but having lost its signal, the Nomai built an orbital cannon to launch probes to visually find the Eye. The chance of visually finding the object with a random shot into space was infinitesimally small, so they also developed a device, the Ash Twin Project, to send the results of the probe's scan 22 minutes back in time, so that the cannon could be "reused" an infinite number of times. The amount of power required to go back in time was so high that the only viable way of obtaining it would be from a supernova, so they attempted to artificially induce the sun to explode, but were unsuccessful. The Nomai were wiped out by an extinction-level event after completing construction of these projects but before setting the time-loop process into motion. The system is now operating because the sun has naturally reached the end of its life cycle. The resulting supernova feeds power into the Ash Twin Project, conveying the player's memories back in time to their previous self and resetting the cannon for another scan. Armed with this knowledge, the player is eventually able to recover the coordinates of the Eye and input them into a derelict Nomai interstellar vessel, warping to the Eye's location. They discover that their star is not the only one going supernova. All the stars in the sky have reached the end of their lifespans and the universe is about to end. Upon entering the Eye, the player encounters quantum versions of the various characters they had befriended in their travels, and working together, they create a Big Bang, giving rise to a new universe. The ending shows a similar solar system with new life forms 14.3 billion years after its creation. Echoes of the Eye The Echoes of the Eye expansion adds a new exhibit to the observatory at the beginning of the game, which shows off the deep space satellite used to generate the player's solar system map. The player soon discovers an object that eclipses the sun – a planet-sized rotating ship, hidden within a cloaking field. Within this ship, called "the Stranger", the player finds theaters and heavily damaged slide reels which tell the story of the Stranger's inhabitants, an extinct race of owl and elk-like creatures. Similar to the Nomai, the inhabitants of the Stranger also came to the solar system after discovering the Eye of the Universe's signals, but gave up their quest after seeing that the Eye would destroy the universe and everything in it. After destroying their monuments to the Eye and constructing a device in order to block its signal from other alien races, the inhabitants began to regret destroying their homeworld, which they stripped barren in order to build the Stranger. The inhabitants eventually created artifacts and areas where they could sleep in order to enter a virtual reality of their homeworld. The player learns how to enter the simulation via the use of artifacts and discovers active consciousness of the inhabitants which are hostile to the player. The player eventually finds archives with more detailed reels of the history of the Stranger's inhabitants, as well as a vault secured by three seals. 
Using information about glitches within the simulation learned from the archives, the player is eventually able to unlock the vault's three seals and open it, discovering a friendly inhabitant called the Prisoner. Communicating with the player via a telepathic projection staff, the Prisoner transmits a memory in which they temporarily disabled the signal blocker surrounding the Eye, causing the other inhabitants to seal the Prisoner within the vault before returning to the simulation and dying in the physical world. The player then uses the staff to explain to the Prisoner how their actions eventually led the Nomai to discover the signal of the Eye and enter the solar system, setting the events of the game in motion. After learning that their actions were not in vain, the Prisoner exits the vault and leaves behind their staff, which shows a vision of the Prisoner and the player together on a raft, venturing along a river into the sunrise. If the player chooses to travel to the Eye of the Universe after having met with the Prisoner, they find a quantum version of the Prisoner who works with the player to create the new universe.

Development
Outer Wilds began as Alex Beachum's USC Interactive Media & Games Division master's thesis and grew into a full-production commercial release. He started the project in late 2012 for his yearlong thesis and "Advanced Game Project" assignment. Beachum had previously made a three-dimensional platformer out of Lego bricks as a child, and was uninterested in a career in games until applying to the Interactive Media program. Beachum's original ideas were to recreate the Apollo 13 and 2001: A Space Odyssey "spirit of space exploration" in an uncontrollable environment, and to make an objective-less open world game where exploration would answer the player's questions without feeling "aimless." Beachum took cues from The Legend of Zelda: The Wind Waker's non-player characters, which would tell tales of distant lands so as to entice the player to explore those areas for themselves. The game heavily employs a camping motif, reflecting Beachum's personal interest in backpacking while also emphasizing that the player-character is far from home and alone in the galaxy. While journalists have compared Outer Wilds' time loop mechanics to those of The Legend of Zelda: Majora's Mask, Beachum notes that these mechanics are used in Outer Wilds primarily "to allow the creation of large-scale dynamic systems" as opposed to "play[ing] around with causality" as in Majora's Mask. The original development team members were University of Southern California, Laguna College of Art and Design, and Atlantic University College students. Beachum's team started by working with "paper prototypes" and a "tabletop role-playing session" to brainstorm a narrative, and wrote an early version of the game as a text adventure in Processing before building the game in the Unity3D game engine. After Beachum's graduation, the project hired members full-time to work towards a commercial release, with Beachum as creative director. Japanese actor Masi Oka, who has had previous experience as a programmer and founded Mobius Digital to develop mobile games, had seen the demo of Outer Wilds during a demo day for the USC Interactive Media & Games groups. Oka saw the opportunity to expand his team and hired the entire team behind the game into his studio to continue the title's development. The game became the first title to be supported on the new video game-centric crowdfunding site, Fig, launched in August 2015.
An alpha release version of the game was made available for free on the developer's site in 2015. In March 2018, Mobius announced it had received funding support from publisher Annapurna Interactive, which bought out the investment and rights from Fig, and that the game was planned for release in 2018. Mobius later announced plans in June 2018 to also release the game for the Xbox One. In December 2018, it was announced that the game's release would be delayed until 2019. In exchange for additional financial support, Mobius announced that the game's initial PC release would be a timed exclusive on the Epic Games Store. As it was originally announced that Fig backers would receive redemption keys on Steam for the game, some backers complained about the change; Linux users noted that as the Epic Games Store does not have a Linux-compatible front end, the change left them without any option. Outer Wilds was released on PC on May 28, 2019, and for Xbox One a day later. A PlayStation 4 version was released on October 15, 2019, with the Steam release on June 18, 2020. A PlayStation 4 retail version was released by Limited Run Games in 2020. A Nintendo Switch port was originally set for a mid-2021 release, before being delayed. An expansion, Echoes of the Eye, was released on September 28, 2021.

Reception
Outer Wilds received "generally favorable" reviews, according to review aggregator Metacritic. At the 2015 Game Developers Conference-sponsored Independent Games Festival, Outer Wilds won in the Seumas McNally Grand Prize and Excellence in Design categories. It was an honorable mention in the Excellence in Narrative and Nuovo Award categories. The game was listed as one of the best games of 2019 by several publications, and Edge, Polygon and Paste also featured it on their "best games of the decade" lists. Polygon's Colin Campbell praised the overall narrative and the game's meta-puzzles. Brendan Caldwell, writing for Rock, Paper, Shotgun, enjoyed the environmental exploration and the game's writing, but said that running out of time during some puzzles felt like "an interruption". Destructoid's Josh Tolentino appreciated the open-ended nature of Outer Wilds' world and how it let the player make discoveries. Awards References Further reading External links Independent Games Festival winners Action-adventure games 2019 video games Indie video games Linux games Open-world video games MacOS games Nintendo Switch games PlayStation 4 games Seumas McNally Grand Prize winners Science fiction video games Single-player video games Video games developed in the United States Video games about time loops Video games set on fictional planets Windows games Xbox Cloud Gaming games Xbox One games Annapurna Interactive games Video games about extraterrestrial life
24883643
https://en.wikipedia.org/wiki/LSS%20Data%20Systems
LSS Data Systems
LSS Data Systems (LSS) was a Minnesota-based medical software and service company that developed products for the physician practice community. LSS was founded in 1982 and was a long-time partner of Medical Information Technology (MEDITECH), developing physician practice management and ambulatory electronic health record software. In 2000 and 2001 MEDITECH invested in LSS, and in February 2011 it acquired complete ownership. LSS became a wholly owned subsidiary of the Massachusetts-based company, and announced completion of the merger on January 1, 2014.

History
"Lake Superior Software" was founded in 1982 in Duluth, Minnesota by Ken Carlson and Stephanie Petersen, to develop and support physician billing and practice management software. The company developed a national clientele, opened an office just outside Minneapolis in the mid-1980s, and changed its name to "LSS Data Systems." Beginning in 1990, the company's physician practice management system was redeveloped and rewritten using a newer programming language called MAGIC, first developed by MEDITECH. LSS began development of additional applications in 1996, complementing its practice management system with an ambulatory electronic medical record. In 2000, both the practice management product and the ambulatory EMR were incorporated into a suite of applications for physician practices co-developed with MEDITECH called the Medical and Practice Management (MPM) Suite. In addition to LSS's physician billing and practice management application (PBR) and Electronic Ambulatory Record (EAR), the suite meshed together applications jointly developed with MEDITECH, including scheduling, order management, scanning and a web-based patient portal. Before designing and developing LSS software, co-founder and former CEO Ken Carlson helped integrate the first commercial Cray-1 computer system into the time-sharing network of United Computing Systems, and built one of the first commercial TCP/IP networks as vice president of the Minnesota Supercomputer Center. Carlson, along with two colleagues, founded MRNet, Minnesota's first Internet service provider, bringing Internet access to Minnesota businesses and educational institutions. Co-founder and former president & COO Petersen had a background in nursing and real estate, and had worked with the HHS Utilization and Quality Control Peer Review Organization (PRO) for several years before co-founding LSS. Joanne Wood was appointed as president and COO of LSS Data Systems in February 2011 during the company's acquisition. She was also VP of client services at MEDITECH at that time.

Products and services
LSS developed electronic medical software for physician practices associated with or located in communities with hospitals using Meditech health care information systems. Several LSS products have been certified by the Certification Commission for Healthcare Information Technology (CCHIT).
2008 Certification [additionally certified for Child Health], pending completion of advanced ePrescribing requirements: Medical and Practice Management Suite, Client Server Version 5.6
2007 Certification: Medical and Practice Management Suite, Client Server Version 5.56 (http://www.cchit.org/products/2007/ambulatory/661)
2007 Certification: Medical and Practice Management Suite, Client Server Version 5.55
2007 Certification: Medical and Practice Management Suite, Client Server Version 5.54
2006 Certification: Medical and Practice Management Suite, MAGIC Version 5.6
LSS has since certified products to be compliant with the Stage 1 Meaningful Use Standards identified by the Office of the National Coordinator for Health Information Technology with Drummond Group Inc. (see certified product info below). It also had other electronic health record (EHR) products.

LSS & MEDITECH Integration
As a Meditech company, LSS developed its software using the same programming tools and technologies as Meditech, forming an integrated suite of products for health care organizations. The use of shared tools and system conventions with Meditech made the user interface similar and consistent for clinicians in multiple care settings. LSS began programming software using Meditech's proprietary language, known as MAGIC (not to be confused with a different PC database product of the same name). References Companies based in Eden Prairie, Minnesota Software companies established in 1982 Software companies based in Minnesota Electronic health record software companies 1982 establishments in Minnesota Software companies of the United States
11868759
https://en.wikipedia.org/wiki/Moonlight%20%28runtime%29
Moonlight (runtime)
Moonlight was a free and open source implementation for Linux and other Unix-based operating systems of the now deprecated Microsoft Silverlight application framework, developed and then abandoned by the Mono Project. Like Silverlight, Moonlight was a web application framework which provided capabilities similar to those of Adobe Flash, integrating multimedia, graphics, animations and interactivity into a single runtime environment. History and overview In an interview in the beginning of June 2007, Miguel de Icaza said the Mono team expected to offer a "feasibility 'alpha' demo" in mid-June 2007, with support for Mozilla Firefox on Linux by the end of the year. After a 21-day hacking spree by the Mono team (including Chris Toshok, Larry Ewing and Jeffrey Stedfast among others), a public demo was shown at Microsoft ReMIX conference in Paris, France on June 21, 2007. However, in September 2007, developers still needed to install and compile a lot of Mono and Olive (the experimental Mono subproject for .NET 3.0 support) modules from the Mono SVN repository to be able to test Moonlight. A Moonlight IDE, named Lunar Eclipse, exists in SVN for XAML designs. Moonlight uses Cairo for rendering. Moonlight was provided as a plugin for Firefox and Chrome on popular Linux distributions. The plugin itself does not include a media codec pack, but when the Moonlight plugin detects playable media it refers users to download a free Media codec pack from Microsoft. Moonlight 2.0 tracked the Silverlight 2.0 implementation. The first completed version, Moonlight 1.0, supporting Silverlight 1.0, was released in January 2009. Moonlight 2.0 was released in December 2009. The Moonlight 2.0 release also contained some features of Silverlight 3 including a pluggable media framework which allowed Moonlight to work with pluggable open codecs, such as Theora and Dirac. Preview releases of Moonlight 4.0, targeting Silverlight 4 compatibility, were released in early 2011. In April 2011, the Moonlight team demonstrated Moonlight running on Android tablets and phones at the MIX11 Web Developers conference in Las Vegas. Shortly after the April 2011 release, Attachmate, parent to developer Mono, laid off an undisclosed number of Mono employees, and announced a deal with startup Xamarin for Mono development and support. At that time, Xamarin CEO Nat Friedman affirmed their commitment to the Moonlight project, although there were no outward signs of any further development afterward. In December 2011, de Icaza announced that work on Moonlight had stopped with no future plans. He explained that Microsoft had "cut the air supply" to it by omitting cross-platform components, making it a web-only plugin, and including Windows-only features. He advised developers to separate user interface code from the rest of their application development to ensure "a great UI experience on every platform (Mac, Linux, Android, iOS, Windows and Web)" without being dependent on third party APIs. DRM Silverlight supports Digital Rights Management in its multimedia stack, but Microsoft will not license their PlayReady DRM software for the Moonlight project to use and so Moonlight is unable to play encrypted content. Desktop support Moonlight was also usable outside of the browser as a Gtk+ widget (known as Moonlight.Gtk). A number of Desklets were written using this new technology during the Novell Hack Week in 2007. 
MoonBase was an experimental set of helper classes built on top of Moonlight.Gtk that could be used to create full-blown C# desktop applications using the Moonlight (Silverlight 4.0) widgets and XAML files. MoonBase also had a related XAML editor/previewer.

Microsoft support
Shortly after the first demo at MIX 07 in Paris, Microsoft began cooperating with Novell to help with the building of Moonlight. Support included giving Novell exclusive access to the following Silverlight artifacts:
Microsoft's test suites for Silverlight
Silverlight specification details beyond those available on the web
Proprietary codecs for Windows Media Video and Audio, VC-1 and MP3, and in the future H.264 and AAC, made available free of charge but licensed for use with Moonlight only when running in a web browser
Other potential decoders included GStreamer and FFmpeg (used during the development stage), but Novell would not provide prepackaged versions of Moonlight with those libraries because those decoders had not been granted licensing for the use of patented codec technologies. Microsoft released two public covenants not to sue for the infringement of its patents when using Moonlight. The first covered Moonlight 1 and 2, was quite restrictive, and applied only to the use of Moonlight as a plugin in a browser, only to implementations that were not GPLv3-licensed, and only if the Moonlight implementation had been obtained from Novell. It also noted that Microsoft could rescind these usage rights. The second covenant was an updated and broader one that no longer limited coverage to users who obtained Moonlight from Novell; it covered any use of Moonlight regardless of where it was obtained. The updated covenant covered the implementations as shipped by Novell for versions 3 and 4, no longer distinguished Novell from other distributions of Moonlight, and extended coverage to desktop applications created with Moonlight. The covenant did not extend to forks licensed under the GNU GPL (Moonlight itself used the Lesser GPLv2).

Codecs integration
Although Moonlight was free software, the final version was going to use binary-only audio and video codecs provided by Microsoft, which would be licensed for use with Moonlight only when used as a browser plugin (see above). The Windows media pack was not distributed together with the Moonlight plugin; instead, the first time Silverlight media content was detected, the user would be prompted to download the pack containing the codecs used in Silverlight directly from Microsoft. Self-built versions could still use the FFmpeg library, and there was discussion about adding GStreamer support as an alternative to Microsoft's binary codecs and for use outside of a browser. Mono architect Miguel de Icaza blogged that the Mono team prototyped Moonlight multimedia support using the LGPL-licensed FFmpeg engine but that they were unable to redistribute packaged versions that used that library due to FFmpeg codec licensing issues inside the United States.

Moonlight in other distributions
After the release of Moonlight 2, a covenant provided by Microsoft was updated to ensure that other third-party distributors could distribute Moonlight without their users having to worry about being sued by Microsoft over patent infringement. This covenant can be found on the Microsoft website.
Kevin Kofler and Tom Callaway, of Fedora, stated publicly that the last covenant was "not acceptable" for that distribution and that "it is still not permissible in Fedora". The version of Moonlight that was going to be available directly from Novell would have access to licensed closed-source media codecs provided free of charge by Microsoft. Third-party distributions of Moonlight would only be able to play non-patent-encumbered media such as Vorbis, Theora and Ogg. To support other formats, the distributors would have had to choose from a few licensing options:
Negotiate licences directly with individual media codec owners (e.g. MPEG-LA, Fraunhofer Society)
Negotiate access to Microsoft's Media Pack, as Novell had done
Use GStreamer or a commercial codec license
Use hardware-specific software such as VDPAU
At the PDC conference on October 13, 2008, Microsoft placed the 'Silverlight XAML Vocabulary' under the Microsoft Open Specification Promise, stating in a press release, "The Silverlight XAML vocabulary specification, released under the Microsoft Open Specification Promise, will better enable third-party ISVs to create products that can read and write XAML for Silverlight." Since Moonlight is essentially a XAML reader, Debian's position was that Moonlight was safe for it to redistribute (leaving each user to agree to their own licensing for Microsoft's and others' binary codecs). See also MonoDevelop – an open source IDE targeting both Mono and Microsoft .NET Framework platforms References External links Wired - Microsoft Silverlight Coming to Linux Moonlight 1.0 Media Stack article by Miguel de Icaza The H Open Source - Health Check: Moonlight Free multimedia software Free software programmed in C Free software programmed in C++ Free software programmed in C Sharp Mono (software) Microsoft Silverlight
65200154
https://en.wikipedia.org/wiki/Deondre%20Burns
Deondre Burns
Deondre Burns (born January 16, 1997) is an American professional basketball player for Keravnos of the Cypriot League. He played college basketball for the Little Rock Trojans and the Oral Roberts Golden Eagles. Early life and high school career Burns played for Newman Smith High School under coach Percy Johnson. In February 2015, Burns scored 30 points in a win against Thomas Jefferson High School to help Newman Smith qualify for the playoffs. As a senior, he tied for first in scoring among Dallas-area Class 5A players with 23.9 points per game, as well as posting 4.4 rebounds and 2.1 assists per game, and was named All-Area Honorable Mention. Burns signed with Little Rock in May 2015. College career As a sophomore, Burns averaged 7.0 points and 1.5 rebounds per game. Burns missed the 2017–18 season with a knee injury. He served as the team's sixth man during his redshirt junior season, with coach Darrell Walker saying, "I like bringing Dre' off the bench because he can score points off the bench." Burns averaged 10 points and 2.8 rebounds per game as a redshirt junior, shooting 41 percent from the field. After the season, he opted to transfer to Oral Roberts as a graduate transfer. On January 13, 2020, Burns was named Summit League player of the week after posting 22 points against North Dakota State and 19 points against North Dakota. He scored a career-high 31 points on February 6, in a 74–68 loss to North Dakota. As a senior at Oral Roberts, Burns averaged 15.3 points, 4.1 rebounds and 4.1 assists per game. He was named to the Second Team All-Summit League. Professional career On August 29, 2020, Burns signed his first professional contract with Starwings Basel of the Swiss Basketball League. He averaged 20.0 points, 4.4 rebounds and 4.2 assists per game. He subsequently joined the Metroplex Lightning of the Pro Basketball Association. On September 8, 2021, Burns signed with Keravnos of the Cypriot League. References External links Oral Roberts Golden Eagles bio Little Rock Trojans bio 1997 births Living people American men's basketball players American expatriate basketball people in Switzerland Point guards Little Rock Trojans men's basketball players Oral Roberts Golden Eagles men's basketball players People from Carrollton, Texas Sportspeople from the Dallas–Fort Worth metroplex Basketball players from Texas African-American basketball players 21st-century African-American sportspeople
38330850
https://en.wikipedia.org/wiki/Neuronal%20tracing
Neuronal tracing
Neuronal tracing, or neuron reconstruction, is a technique used in neuroscience to determine the pathway of the neurites or neuronal processes, the axons and dendrites, of a neuron. From a sample preparation point of view, it may refer to some of the following, as well as other genetic neuron labeling techniques:
Anterograde tracing, for labeling from the cell body to the synapse;
Retrograde tracing, for labeling from the synapse to the cell body;
Viral neuronal tracing, a technique which can be used to label in either direction;
Manual tracing of neuronal imagery.
In a broad sense, neuron tracing more often refers to the digital reconstruction of a neuron's morphology from imaging data of such samples.

Digital neuronal reconstruction and neuronal tracing
Digital reconstruction or tracing of neuron morphology is a fundamental task in computational neuroscience. It is also critical for mapping neuronal circuits based on advanced microscope images, usually based on light microscopy (e.g. laser scanning microscopy, bright field imaging), electron microscopy, or other methods. Due to the high complexity of neuron morphology, the heavy noise often seen in such images, and the massive amount of image data typically encountered, it has been widely viewed as one of the most challenging computational tasks in computational neuroscience. Many image-analysis-based methods have been proposed to trace neuron morphology, usually in 3D, manually, semi-automatically or completely automatically. There are normally two processing steps: generation and proof editing of a reconstruction.

History
The need to describe or reconstruct a neuron's morphology probably began in the early days of neuroscience, when neurons were labeled or visualized using Golgi's methods. Many of the known neuron types, such as pyramidal neurons and Chandelier cells, were described based on their morphological characterization. The first computer-assisted neuron reconstruction system, now known as Neurolucida, was developed by Dr. Edmund Glaser and Dr. Hendrik Van der Loos in the 1960s. Modern approaches to tracing a neuron started when digitized pictures of neurons were acquired using microscopes. Initially this was done in 2D. Soon after advanced 3D imaging became available, especially fluorescence imaging and electron microscopic imaging, there was a huge demand for tracing neuron morphology from these imaging data.

Methods
Neurons can often be traced manually in either 2D or 3D. To do so, one may either directly paint the trajectory of neuronal processes in individual 2D sections of a 3D image volume and then connect them, or use 3D Virtual Finger painting, which directly converts any 2D painted trajectory in a projection image to real 3D neuron processes. The major limitation of manual tracing is the huge amount of labor involved. Automated reconstruction of neurons can be done using model (e.g. spheres or tubes) fitting and marching, pruning of over-reconstruction, minimal-cost connection of key points, ray-bursting, and many other methods. Skeletonization is a critical step in automated neuron reconstruction, but in the case of all-path-pruning and its variants it is combined with estimation of model parameters (e.g. tube diameters). The major limitation of automated tracing is the lack of precision, especially when the neuron morphology is complicated or the image has a substantial amount of noise. Semi-automated neuron tracing often depends on two strategies.
One is to run completely automated neuron tracing followed by manual curation of the reconstructions. The alternative is to provide some prior knowledge, such as the locations of a neuron's termini, with which a neuron can be more easily traced automatically. Semi-automated tracing is often thought to be a balanced solution that has an acceptable time cost and reasonably good reconstruction accuracy. The software packages Vaa3D-Neuron, Neurolucida 360, Imaris Filament Tracer and Aivia all provide both categories of methods. Tracing electron microscopy images is thought to be more challenging than tracing light microscopy images, although the latter is still quite difficult, according to the DIADEM competition. For tracing electron microscopy data, manual tracing is used more often than automated or semi-automated methods. For tracing light microscopy data, automated or semi-automated methods are used more often. Since tracing electron microscopy images takes a substantial amount of time, collaborative manual tracing software is useful. Crowdsourcing is an alternative way to effectively collect collaborative manual reconstruction results for such image data sets.

Tools and software
A number of neuron tracing tools, especially software packages, are available. One comprehensive open source software package is Vaa3D with its Vaa3D-Neuron modules, which contains implementations of a number of neuron tracing methods developed in different research groups as well as many neuron utility functions such as quantitative measurement, parsing, and comparison. Some other free tools, such as NeuronStudio, also provide tracing functions based on specific methods. Neuroscientists also use commercial tools such as Neurolucida, Neurolucida 360, Aivia, Amira, etc. to trace and analyse neurons. Recent studies show that Neurolucida is cited over 7 times more than all other available neuron tracing programs combined, and is also the most widely used and versatile system for producing neuronal reconstructions. The BigNeuron project (https://alleninstitute.org/bigneuron/about/) is a recent substantial international collaboration effort to integrate the majority of known neuron tracing tools onto a common platform to facilitate open source, easy access to various tools in a single place. Powerful new tools such as UltraTracer, which can trace arbitrarily large image volumes, have been produced through this effort.

Neuron formats and databases
Reconstructions of single neurons can be stored in various formats. This largely depends on the software that has been used to trace such neurons. The SWC format, which consists of a number of topologically connected structural compartments (e.g. a single tube or sphere), is often used to store digitally traced neurons, especially when the morphology lacks, or does not need, detailed 3D shape models for individual compartments. Other, more sophisticated neuron formats, such as those used by Neurolucida, have separate geometrical modeling of the neuron cell body and neuron processes. There are a few common single-neuron reconstruction databases. A widely used database is http://NeuroMorpho.Org, which contains over 86,000 neuron morphologies from more than 40 species contributed worldwide by a number of research labs. The Allen Institute for Brain Science, HHMI's Janelia Research Campus, and other institutes are also generating large-scale single-neuron databases. Many related neuron databases at different scales also exist. References Neuroscience Cellular neuroscience
47506143
https://en.wikipedia.org/wiki/User%20revolt
User revolt
A user revolt is a social conflict in which users of a website collectively and openly protest a website host's or administrator's instructions for using the website. Website hosts can control a website's use in certain ways, but they also depend on users complying with voluntary social rules in order for the website to operate as the hosts would like. A user revolt occurs when users protest against the voluntary social rules of a website and use the website in a way that conflicts with the wishes of the website host or administrators. A user revolt is a process starting with a triggering event, then a rebellion, then a response to the rebellion.

Distinction from Internet-based activism
Internet-based activism is sometimes called a user revolt when website users protest the terms of a website while using that website for other purposes. A distinction between a user revolt and Internet-based activism is that in a user revolt, an objective of the protest is the website itself. In Internet-based activism, the primary goal of the protest is something other than reforming a website, although websites which create barriers to the larger protest may incidentally experience a user revolt for participating in the larger conflict. An example of a situation in which Internet activism includes a user revolt might be when users wish to engage in prohibited political discussion, but a government compels the website host to censor those discussions. The core conflict in this case is between the users and the government, not with the website itself as a communication medium. However, when the website as a communication medium creates barriers to communication, users may organize a user revolt even when the primary objective is something other than a website protest. Examples of Internet-based activism which led to user revolts include Social media and the Arab Spring and the Twitter Revolution.

Examples
AOL
In 1997 AOL amended its terms of service to permit it to sell users' telephone numbers to telemarketers. Users complained, and in response AOL offered an opt-out system.

Digg
Publishing of DVD unlock code
In 2007, during the AACS encryption key controversy, various Internet users began publishing the decryption code for the Advanced Access Content System on various websites. The code enabled anyone to write simple ripping software, similar to the earlier DeCSS for DVDs, which enabled anyone else to rip discs and copy the content as they liked. The release of the key and derivative ripping programs made the illicit distribution of copyrighted media much easier for anyone who wished to share content which was formerly locked by the AACS system. The AACS codes were published in many places. One place in which they were published was the website Digg. On May 1, 2007, an article appeared on Digg's homepage that contained the encryption key for the AACS digital rights management protection of HD DVD and Blu-ray Disc. Then Digg, "acting on the advice of its lawyers," removed submissions about the secret number from its database and banned several users for submitting it. The removals were seen by many Digg users as a capitulation to corporate interests and an assault on free speech.
A statement by Jay Adelson attributed the article's take-down to an attempt to comply with cease and desist letters from the Advanced Access Content System consortium and cited Digg's Terms of Use as justification for taking down the article. Although some users defended Digg's actions, as a whole the community staged a widespread revolt with numerous articles and comments being made using the encryption key. The scope of the user response was so great that one of the Digg users referred to it as a "digital Boston Tea Party". The response was also directly responsible for Digg reversing the policy and stating: "But now, after seeing hundreds of stories and reading thousands of comments, you've made it clear. You'd rather see Digg go down fighting than bow down to a bigger company. We hear you, and effective immediately we won't delete stories or comments containing the code and will deal with whatever the consequences might be." Digg v4 revolt and migration to Reddit When Digg redesigned their website in 2010 the community revolted and used the platform to advertise a user migration to competitor Reddit. Digg's version 4 release was initially unstable. The site was unreachable or unstable for weeks after its launch on August 25, 2010. Many users, upon finally reaching the site, complained about the new design and the removal of many features (such as bury, favorites, friends submissions, upcoming pages, subcategories, videos and history search). Kevin Rose replied to complaints on his blog, promising to fix the algorithm and restore some features. Alexis Ohanian, founder of rival site Reddit, said in an open letter to Rose: Disgruntled users declared a "quit Digg day" on August 30, 2010, and used Digg's own auto-submit feature to fill the front page with content from Reddit. Reddit also temporarily added the Digg shovel to their logo to welcome fleeing Digg users. Digg's traffic dropped significantly after the launch of version 4, and publishers reported a drop in direct referrals from stories on Digg's front page. New CEO Matt Williams attempted to address some of the users' concerns in a blog post on October 12, 2010, promising to reinstate many of the features that had been removed. Facebook In 2006 there was a Facebook user revolt regarding privacy concerns with the creation of Facebook's news feed feature. Users worried that the news feed would show their posts to individuals outside their friend network. Facebook staff replied to users. In 2007, there was a Facebook revolt over the automatic displaying of online purchase data and other online activity in news feeds. In response to the backlash, Facebook rolled back the changes. In 2009, Facebook users revolt over changes to the terms of service. In response to the backlash, Facebook rolled back the changes. In 2010 roughly 34,000 users left Facebook over loss of control over privacy settings (users could not opt out of sharing information publicly) as a part of the May 31 "Quit Facebook Day" campaign. Facebook rolled back some of the changes, allowing users to opt out. In 2018, revelations about election subversion on Facebook in 2016 led to the popular hashtag #DeleteFacebook. In June 2020, a social media campaign urged advertisers to stop or pause their Facebook advertising campaigns, in response to the company's hands-off approach to moderating content. Major brands including The North Face, REI, Patagonia, and Verizon took up the cause. 
The NAACP, Color of Change, and the Anti-Defamation League formed a coalition to drive the boycott, and Prince Harry and Meghan Markle worked behind the scenes to support the effort. Instagram In 2012 a change to Instagram's terms of service triggered a user revolt. Even during the revolt Instagram continued to get many new users. Livejournal Livejournal users revolted in 2007 when Livejournal deleted some site content. The Pirate Bay In 2009 Global Gaming Factory X sought to purchase The Pirate Bay. This led to a user revolt when community participants protested that the sale was a betrayal of community values. Reddit On July 2, 2015, Reddit began experiencing a series of blackouts as moderators set popular subreddit communities to private, in an event dubbed "AMAgeddon" – a portmanteau of AMA ("ask me anything") and Armageddon. This was done in protest of the recent firing of Victoria Taylor, an administrator who helped organize citizen-led interviews with famous people on the popular "Ask me Anything" subreddit. Organizers of the blackout also expressed resentment about the recent severance of the communication between Reddit and the moderators of subreddits. The blackout intensified on July 3 when former community manager David Croach gave an AMA about being fired. Before deleting his posts, he stated that Ellen Pao dismissed him with one year of health coverage when he had cancer and did not recover quickly enough. Following this, a Change.org petition to remove Pao as CEO of Reddit Inc. reached over 200,000 signatures. Pao posted a response on July 3 as well as an extended version of it on July 6 in which she apologized for bad communication and not delivering on promises. She also apologized on behalf of the other administrators and noted that problems already existed over the past several years. On July 10, Pao resigned as CEO and was replaced by former CEO and co-founder Steve Huffman. Twitter In 2013 Twitter users organized a revolt when Twitter took away a defensive tool that allowed people to protect themselves from other users that they chose to block. In response to the revolt Twitter restored some rights to its users. Wikipedia Spanish fork The Enciclopedia Libre was founded by contributors to the Spanish-language Wikipedia who decided to start an independent project. Led by Edgar Enyedy, they left Wikipedia on 26 February 2002, and created the new website, hosted free by the University of Seville, with the freely licensed articles of the Spanish-language Wikipedia. The split was provoked over concern that Wikipedia would accept advertising. After Wikipedia made a commitment to not use advertising, the Spanish fork attracted no more attention, and was mostly abandoned within a year of its founding. VisualEditor In 2012 The Daily Dot suggested that the Wikimedia Foundation's pursuit of more users may be at the risk of alienating the existing editors. Some experienced editors have expressed concerns about the rollout and bugs, with the German Wikipedia community voting overwhelmingly against making the VisualEditor the new default, and expressing a preference for making it an "opt-in" feature instead. Despite these complaints, the Wikimedia Foundation continued with the rollout to other languages. The Register said, "Our brief exploration suggests it certainly removes any need to so much as remember what kind of parenthesis belongs where." The Economists L.M., said it is "the most significant change in Wikipedia's short history." 
Softpedia ran an article titled "Wikipedia's New VisualEditor Is the Best Update in Years and You Can Make It Better". Some opponents have said that users may feel belittled by the implication that "certain people" are confused by wiki markup and therefore need the VisualEditor. The Daily Dot reported on 24 September 2013 that the Wikimedia Foundation had experienced a mounting backlash from the English Wikipedia community, which criticised the VisualEditor as slow, poorly implemented and prone to break articles' existing text formatting. In the resulting "test of wills" between the community and the Foundation, a single volunteer administrator overrode the Wikimedia Foundation's settings to change the availability of VisualEditor from opt-out to opt-in. The Foundation acquiesced, but vowed to continue developing and improving the VisualEditor. Superprotect "Superprotect" was the name for a superuser tool granted to Wikimedia Foundation staff but denied to all Wikimedia community members. In 2014 Wikimedia Foundation staff used the tool to force the installation of a new software feature on the German Wikipedia against the wishes of the Wikimedia community, who felt the feature was buggy and not ready for general use. This conflict was unprecedented. Erik Möller, then director of the Wikimedia Foundation, managed the Superprotect tool. Wikimedia commentator Andrew Lih described the superprotect feature as "Orwellian-sounding". The MediaViewer and Superprotect conflict between the Wikimedia community and the Wikimedia Foundation was called a revolt. The controversy demonstrated that the Wikimedia Foundation was unable to control the Wikimedia community with technical features, but rather, that mutual understanding and discussion among stakeholders would be required to develop Wikipedia's software. Representative dismissals Wikimedia users organized a revolt to call for the removal of Arnnon Geshuri, a member of the board of the Wikimedia Foundation. Wikimedia Foundation head Lila Tretikov resigned in February 2016 during a user revolt calling for institutional changes. Wikimedia Foundation ban of Fram On 10 June 2019, the English Wikipedia administrator Fram was banned by the Wikimedia Foundation (WMF) from editing the English Wikipedia for a period of 1 year. According to Joseph Bernstein of Buzzfeed News, this took place "without a trial", and WMF did not "disclose the complainer nor the complaint" to the community. Some in the editor community expressed anger at the WMF not providing specifics, as well as skepticism as to whether Fram deserved the ban. Another administrator unblocked Fram, later citing "overwhelming community support", but the WMF reblocked Fram. Two weeks after the ban of Fram, nine English Wikipedia administrators had resigned. See also Web hosting service References Internet activism Labor relations Nonviolent revolutions
43362330
https://en.wikipedia.org/wiki/BlackBerry%20Passport
BlackBerry Passport
BlackBerry Passport is a smartphone developed by BlackBerry Limited. Officially released on September 24, 2014, the Passport is inspired by its namesake and incorporates features designed to make the device attractive to enterprise users, such as a unique square-shaped display measuring 4.5 inches diagonally, a compact physical keyboard with touchpad gestures, and the latest release of the company's BlackBerry 10 operating system. Reception of the Passport was mixed; critics praised the quality of the device's design, screen, and keyboard for meeting the company's goals of creating a business-oriented device, along with an improved application selection through the integration of Amazon's Appstore for Android (taking advantage of the Android software support provided by BlackBerry 10) alongside BlackBerry's own store for native software. Criticism of the Passport focused primarily on its irregular form factor: the device is wider than most phablet smartphones, making it difficult to carry and use one-handed, while its keyboard was criticized for a subtle but perceptible layout change in comparison to past BlackBerry devices.

Development
In January 2014, BlackBerry Limited's new CEO John Chen indicated that, following the unsuccessful launch of BlackBerry 10 and its accompanying, consumer-oriented touchscreen devices (such as the BlackBerry Z10), along with the company's major loss of market share to competing smartphones such as Android devices and the iPhone line, the company planned to shift its focus back towards the enterprise market as part of its restructuring plan, and primarily manufacture phones that feature physical keyboards. In June 2014, Chen publicly teased two of the company's upcoming models: the BlackBerry Passport, a smartphone with a square display, and a successor to the Q10 known as the BlackBerry Classic, which incorporated the array of navigation keys featured on past BlackBerry OS devices. The company's return to a business-oriented focus influenced the design and functionality of the Passport; the overall design of the device was intended to evoke a similar form to its namesake, "a familiar and universal symbol of mobility". BlackBerry also touted that the use of a square-shaped, 4.5-inch display, rather than the rectangular 16:9 displays of other smartphones, in combination with its physical keyboard, would provide more room on-screen for business-oriented tasks such as document editing, image viewing (such as architectural schematics and x-rays), and web browsing. The company also noted that the increased width of the display would allow the Passport to show 60 characters per line of text, nearing the recommended measure for books of 66 characters per line. Development of the Passport began in 2013; while even Chen himself was hesitant about the device due to its unusual form factor, he decided to allow continued development of the Passport, believing that it carried unique design qualities in comparison to other competing smartphones. BlackBerry officially released the Passport on September 24, 2014 during a press event featuring retired NHL player Wayne Gretzky; describing the device as being aimed towards "power professionals" who are "achievement oriented" and "highly productive", Chen remarked that the goals of the Passport were to "drive productivity" and "break through the sea of rectangular-screen, all-touch devices."
Chen also joked about Apple's recent "bendgate" incident during the presentation, remarking that unlike the iPhone 6, "bending [the Passport] needs a little effort." BlackBerry announced plans to release the Passport in over 30 countries by the end of 2014; following the event, unlocked models of the Passport were made available for purchase on BlackBerry's website in Canada, France, Germany, the United Kingdom and the United States, as well as on Amazon. Telus in Canada and AT&T in the United States were announced as the first two North American carriers to offer the Passport. According to BlackBerry's blog, the BlackBerry World app store (December 31, 2019), the BlackBerry Travel site (February 2018), and the PlayBook video calling service (March 2018) were scheduled to cease functioning on the indicated dates. However, BlackBerry stated that vital infrastructure services for both BBOS and BB10 would continue to be provided beyond December 2019.

Specifications
The BlackBerry Passport has dimensions similar to those of an international passport, and incorporates a steel frame with matte plastic as part of its design. The device utilizes a compact variation of BlackBerry's traditional physical keyboard design, using a modified layout with three rows and a small spacebar located in the middle of the bottom row alongside the remaining letters. Functions previously found on the fourth row (such as symbols and the Shift key) are accessible through a context-sensitive on-screen toolbar. The keyboard is also touch-sensitive; acting as a touchpad, it can register sliding gestures across its keys for scrolling, text selection, word deletion, and autocomplete suggestions. The original Passport design was produced in Black, with both White and limited edition Red coloured versions announced on November 24, 2014. Three additional variations were later released: an AT&T version, a limited edition Black & Gold version, and the Silver Edition. The AT&T version, announced at the 2015 Consumer Electronics Show in Las Vegas, Nevada, and later made available on February 20, 2015, has a rounded frame requested by the carrier rather than the hard-edged shape of the international version. The limited edition Black & Gold version was a run of only 50 devices that featured a gold-coloured rim in place of the stainless steel, came with a Valextra soft calf leather case, and was engraved with the production number. While BlackBerry did not disclose the number made available on its site, the device sold out the same day it was made available. The Silver Edition retains rounded corners at the bottom edge similar to the AT&T version and has a metallic colour scheme with a reinforced steel frame to provide extra strength and durability. The keyboard in this version was improved to make typing easier, while the corners and a diamond weave pattern on the back were intended to improve grip. Finally, bevelled edges around the front-facing camera and a raised border around the rear-facing camera were added to protect the lenses from wear and tear.

Hardware
The Passport features a square-shaped 4.5-inch IPS LCD display with a resolution of 1440×1440, as opposed to a 16:9 display, making the Passport considerably wider than other phablets available at the time. The Passport includes a quad-core, 2.2 GHz Qualcomm Snapdragon 801 system-on-chip with 3 GB of RAM, 32 GB of expandable internal storage, and a non-removable 3450 mAh battery rated for at least 30 hours of mixed usage.
The Passport also includes a 13-megapixel rear-facing camera with optical image stabilization, and a 2-megapixel front-facing camera. During phone calls, the Passport can measure ambient noise using a microphone in its earpiece, which can then be used to automatically adjust call volume. Software The Passport is preloaded with BlackBerry 10.3, the latest version of BlackBerry's operating system at launch. The new version features a refreshed interface, a personal digital assistant known as BlackBerry Assistant, and BlackBerry Blend, an application that allows the user to access information from their device, such as email, documents, images and BBM (the company's own direct messaging service), on a computer or tablet, alongside other new features. Alongside BlackBerry World for native applications, 10.3 also includes the third-party Amazon Appstore, offering Android apps that can run on the Passport. Connectivity-wise, both the AT&T and International versions of the Passport ship with a quad-band GSM radio and penta-band UMTS radio. The international version of the Passport also ships with a 10-band LTE radio supporting Bands 1, 2, 3, 4, 5, 7, 8, 13, 17 and 20. The AT&T version, however, supports only 9 bands and does not support LTE-FDD band 13, which is supported by the International version. Both devices also come with an 802.11ac WiFi transceiver with hotspot and Wi-Fi Direct capabilities and a Bluetooth 4.0 transceiver. Lastly, both versions sport an FM radio with RDS capabilities and support Miracast screen mirroring, as well as HDMI and VGA output via SlimPort. Reception The BlackBerry Passport received mixed reviews. Nate Ralph of CNET was positive in assessing the Passport, praising the quality of the Passport's display for meeting BlackBerry's stated goals of providing a display optimized primarily for reading and editing documents, its keyboard for having a "spacious typing experience", and unique touch gestures. The operating system was also praised for its performance, and for providing a better selection of apps through the Amazon Store, although the Assistant was panned for being slower than its competitors, and it was also noted that some apps (particularly Android games) might not be optimized well for the Passport's square screen. However, he believed that BlackBerry had gone "a step too far" in its attempt to design a device specifically for the enterprise market, noting that the size of the device made it difficult to use one-handed even in comparison with phablets, concluding that the company's "myopic focus on text and productivity comes at the cost of creating a device as pleasant to hold as it would be to use, and that decision keeps the Passport from eclipsing its well-rounded peers." Dan Seifert of The Verge praised its design for being robust and not needing a "clunky Otterbox" to withstand multiple drops, along with its display for having a high resolution and good viewing angles, its call quality, a sufficient camera (although it was panned for being slow to launch and take photos), and full-day battery life. The majority of criticism was derived from its form factor; the dimensions of the Passport (which made the device wider than both the Samsung Galaxy Note 4 and the iPhone 6 Plus) were criticized for making the device "uncomfortable" and difficult to carry in a pocket or use one-handed. 
The dimensions were also considered a hindrance to productivity, with Seifert noting that some use cases (such as watching videos and using Twitter) did not adapt well to the square screen, and that the device's keyboard was not as good as past BlackBerry phones due to its irregular layout, though he still praised it for maintaining the company's traditional quality. The BlackBerry 10.3 operating system was praised for its refreshed appearance, and its attempt to address the platform's small number of third-party apps by bundling Amazon Appstore (despite still lacking key apps), but was criticized for its learning curve, performance issues (despite the device's relatively powerful hardware), and for similarly having mechanics that were "clumsy" and hindered productivity. In conclusion, Seifert stated in response to BlackBerry believing "power pros" would still carry another smartphone alongside their Passport, "if I can get my job done with just [an iPhone], why bother carrying two?" Joanna Stern of The Wall Street Journal was similarly negative, remarking that while BlackBerry still had the best e-mail client of any smartphone platform, the Passport's keyboard was inferior to that of past BlackBerry devices, and shared criticism surrounding the device's design. She felt that the Passport demonstrated that BlackBerry was still "living in the past" in regards to its view of the smartphone industry and users' apparent need for a phone specifically for work usage—especially one so irregularly designed. In a preliminary review, Engadget noted that even with Amazon Appstore available, there was not enough software for the device, and concluded that "[the Passport] is built well and the keyboard is comfortable, but be prepared for a few odd stares from those around you." It was also noted that the size and shape of the Passport were similar to a previous Android phablet—the LG Optimus Vu. Sales Within six hours, 258,000 Passports were sold, and pre-order stock on both Amazon's and BlackBerry's websites was sold out. See also BlackBerry 10 List of BlackBerry 10 devices References Mobile phones with an integrated hardware keyboard Passport Discontinued smartphones
44385352
https://en.wikipedia.org/wiki/ISO/IEC%2020248
ISO/IEC 20248
ISO/IEC 20248 Automatic Identification and Data Capture Techniques – Data Structures – Digital Signature Meta Structure is an international standard specification under development by ISO/IEC JTC1 SC31 WG2. This development is an extension of SANS 1368, which is the current published specification. ISO/IEC 20248 and SANS 1368 are equivalent standard specifications. SANS 1368 is a South African national standard developed by the South African Bureau of Standards. ISO/IEC 20248 [and SANS 1368] specifies a method whereby data stored within a barcode and/or RFID tag is structured and digitally signed. The purpose of the standard is to provide an open and interoperable method, between services and data carriers, to verify data originality and data integrity in an offline use case. The ISO/IEC 20248 data structure is also called a "DigSig", which refers to a digital signature that is small in bit count. ISO/IEC 20248 also provides an effective and interoperable method to exchange data messages in Internet of Things (IoT) and machine-to-machine (M2M) services, allowing intelligent agents in such services to authenticate data messages and detect data tampering. Description ISO/IEC 20248 can be viewed as an X.509 application specification similar to S/MIME. Classic digital signatures are typically too big (the digital signature size is typically more than 2k bits) to fit in barcodes and RFID tags while maintaining the desired read performance. ISO/IEC 20248 digital signatures, including the data, are typically smaller than 512 bits. X.509 digital certificates within a public key infrastructure (PKI) are used for key and data description distribution. This method ensures the open, verifiable decoding of data stored in a barcode and/or RFID tag into a tagged data structure, for example JSON or XML. ISO/IEC 20248 addresses the need to verify the integrity of physical documents and objects. The standard counters the verification costs of online services and device-to-server malware attacks by providing a method for multi-device and offline verification of the data structure. Example documents and objects are education and medical certificates, tax and share/stock certificates, licences, permits, contracts, tickets, cheques, border documents, birth/death/identity documents, vehicle registration plates, art, wine, gemstones and medicine. A DigSig stored in a QR code or near field communications (NFC) RFID tag can easily be read and verified using a smartphone with an ISO/IEC 20248 compliant application. The application only needs to go online once to obtain the appropriate DigSig certificate, after which it can verify offline all DigSigs generated with that DigSig certificate. A DigSig stored in a barcode can be copied without influencing the data verification. For example, a birth or school certificate containing a DigSig barcode can be copied. The copied document can still be verified to contain the correct information and the issuer of the information. A DigSig barcode provides a method to detect tampering with the data. A DigSig stored in an RFID/NFC tag provides for the detection of copied and tampered data; it can therefore be used to identify the original document or object. The unique identifier of the RFID tag is used for this purpose. The DigSig Envelope ISO/IEC 20248 calls the digital signature meta structure a DigSig envelope. The DigSig envelope structure contains the DigSig certificate identifier, the digital signature and the timestamp. 
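The following is a minimal, illustrative sketch (in Python, using the third-party cryptography package) of how such an envelope could be built and checked. It mirrors the certificate identifier ("cid"), signature and timestamp fields described here, but it is not the normative, compact binary encoding defined by ISO/IEC 20248; the JSON layout and helper names are assumptions made purely for illustration.

# Illustrative sketch only -- not the normative ISO/IEC 20248 bit-level encoding.
# The JSON layout and helper names are assumptions; a real DigSig is a compact
# binary structure and the verification key is distributed via an X.509 DigSig certificate.
import json, time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

issuer_key = ec.generate_private_key(ec.SECP256R1())   # issuer's signing key
verify_key = issuer_key.public_key()                    # obtained by verifiers from the DigSig certificate

def make_envelope(cid, fields):
    body = {"cid": cid, "timestamp": int(time.time()), "fields": fields}
    payload = json.dumps(body, sort_keys=True).encode()          # stand-in for the standard's canonical form
    body["signature"] = issuer_key.sign(payload, ec.ECDSA(hashes.SHA256())).hex()
    return body

def verify_envelope(envelope):
    body = {k: v for k, v in envelope.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    # Raises cryptography.exceptions.InvalidSignature if the data was tampered with.
    verify_key.verify(bytes.fromhex(envelope["signature"]), payload, ec.ECDSA(hashes.SHA256()))
    return body

envelope = make_envelope(cid=42, fields={"name": "A. Graduate", "degree": "BSc"})
print(verify_envelope(envelope))

Once the verifier holds the DigSig certificate (and thus the public key), a check of this kind needs no network connection, which is the offline property the standard is built around.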
Fields can be contained in a DigSig envelope in three ways. Consider the envelope DigSig{a, b, c}, which contains field sets a, b and c.
a fields are signed and included in the DigSig envelope. All the information is available to verify when the data structure is read from the AIDC (barcode and/or RFID), since both the signed field value and the field value itself are stored on the AIDC.
b fields are signed but NOT included in the DigSig envelope - only the signed field value is stored on the AIDC. Therefore, the value of a b field must be collected by the verifier before verification can be performed. This is useful to link a physical object with a barcode and/or RFID tag to be used as an anti-counterfeiting measure; for example, the seal number of a bottle of wine may be a b field. The verifier needs to enter the seal number for a successful verification since it is not stored in the barcode on the bottle. When the seal is broken, the seal number may also be destroyed and rendered unreadable; the verification can therefore not take place since it requires the seal number. A replacement seal must display the same seal number; using holograms and other techniques may make the generation of a new, copied seal number unviable. Similarly, the unique tag ID, also known as the TID in ISO/IEC 18000, can be used in this manner to prove that the data is stored on the correct tag. In this case the TID is a b field. The interrogator will read the DigSig envelope from the changeable tag memory and then read the non-changeable unique TID to allow for the verification. If the data was copied from one tag to another, then the verification process of the signed TID, as stored in the DigSig envelope, will reject the TID of the copied tag.
c fields are NOT signed but included in the DigSig envelope - only the field value is stored on the AIDC. A c field can therefore NOT be verified, only extracted from the AIDC. This field value may be changed without affecting the integrity of the signed fields. The DigSig Data Path Typically, data stored in a DigSig originates as structured data such as JSON or XML. The structured data field names map directly onto the DigSig Data Description (DDD). This allows the DigSig Generator to digitally sign the data, store it in the DigSig envelope and compact the DigSig envelope to fit in the smallest bit size possible. The DigSig envelope is then programmed into an RFID tag or printed within a barcode symbology. The DigSig Verifier reads the DigSig envelope from the barcode or RFID tag. It then identifies the relevant DigSig certificate, which it uses to extract the fields from the DigSig envelope and obtain the external fields. The Verifier then performs the verification and makes the fields available as structured data, for example JSON or XML. Examples QR example The following education certificate examples use the URI-RAW DigSig envelope format. The URI format allows a generic barcode reader to read the DigSig, after which it can be verified online using the URI of the trusted issuer of the DigSig. Often the ISO/IEC 20248 compliant smartphone application (App) will be available on this website for download, after which the DigSig can be verified offline. Note that a compliant App must be able to verify DigSigs from any trusted DigSig issuer. The university certificate example illustrates the multi-language support of SANS 1368. RFID and QR Example In this example, a vehicle registration plate is fitted with an ISO/IEC 18000-63 (Type 6C) RFID tag and printed with a QR barcode. 
The plate is offline verifiable both using a smartphone, when the vehicle is stopped, and using an RFID reader, when the vehicle drives past the reader. Note the three DigSig envelope formats: RAW, URI-RAW and URI-TEXT. The DigSig stored in the RFID tag is typically in the RAW envelope format to reduce the size compared to the URI envelope formats. Barcodes will typically use the URI-RAW format to allow generic barcode readers to perform an online verification. The RAW format is the most compact, but it can only be verified with a SANS 1368 compliant application. The DigSig stored in the RFID tag will also contain the TID (Unique Tag Identifier) within the signature part. A DigSig Verifier will therefore be able to detect data copied onto another tag. QR with External data example The following QR barcode is attached to a computer or smartphone to prove it belongs to a specific person. It uses a b type field, described above, to contain a secure personal identification number (PIN) remembered by the owner of the device. The DigSig Verifier will ask for the PIN to be entered before the verification can take place. The verification will be negative if the PIN is incorrect. The PIN for the example is "123456". The DigSig Data Description for the above DigSig is as follows:
{
  "defManagementFields": {
    "mediasize": "50000",
    "specificationversion": 1,
    "country": "ZAR",
    "DAURI": "https://www.idoctrust.com/",
    "verificationURI": "http://sbox.idoctrust.com/verify/",
    "revocationURI": "https://sbox.idoctrust.com/chkrevocation/",
    "optionalManagementFields": {}
  },
  "defDigSigFields": [
    { "fieldid": "cid", "type": "unsignedInt", "benvelope": false },
    { "fieldid": "signature", "type": "bstring", "binaryformat": "{160}", "bsign": false },
    { "fieldid": "timestamp", "type": "date", "binaryformat": "Tepoch" },
    { "fieldid": "name", "fieldname": {"eng": "Name"}, "type": "string", "range": "[a-zA-Z ]", "nullable": false },
    { "fieldid": "idnumber", "fieldname": {"eng": "Employee ID Number"}, "type": "string", "range": "[0-9 ]" },
    { "fieldid": "sn", "fieldname": {"eng": "Asset Serial Number"}, "type": "string", "range": "[0-9a-zA-Z ]" },
    { "fieldid": "PIN", "fieldname": {"eng": "6 number PIN"}, "type": "string", "binaryformat": "{6}", "range": "[0-9]", "benvelope": false, "pragma": "enterText" }
  ]
}
References SANS 1368, Automatic identification and data capture techniques — Data structures — Digital Signature meta structure FIPS PUB 186-4, Digital Signature Standard (DSS) – Computer security – Cryptography IETF RFC 3076, Canonical XML Version 1.0 IETF RFC 4627, The application/JSON media type for JavaScript Object Notation (JSON) IETF RFC 3275, (Extensible Markup Language) XML-Signature syntax and processing IETF RFC 5280, Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile ISO 7498-2, Information processing systems – Open systems interconnection – Basic reference model – Part 2: Security architecture ISO/IEC 9594-8 (ITU X.509), Information technology – Open Systems Interconnection – The Directory: Public-key and attribute certificate frameworks ISO/IEC 10181-4, Information technology – Open Systems Interconnection – Security frameworks for open systems: Non-repudiation framework ISO/IEC 11770-3, Information technology – Security techniques – Key management – Part 3: Mechanisms using asymmetric techniques ISO/IEC 11889 (all parts), Information technology – Trusted Platform Module ISO/IEC 15415, Information technology – Automatic identification and data capture techniques – Bar code print quality test specification – Two-dimensional 
symbols ISO/IEC 15419, Information technology – Automatic identification and data capture techniques – Bar code digital imaging and printing performance testing ISO/IEC 15423, Information technology – Automatic identification and data capture techniques – Bar code scanner and decoder performance testing ISO/IEC 15424, Information technology – Automatic identification and data capture techniques – Data Carrier Identifiers (including Symbology Identifiers) ISO/IEC 15963, Information technology – Radio frequency identification for item management – Unique identification for RF tags ISO/IEC 16022, Information technology – Automatic identification and data capture techniques – Data Matrix bar code symbology specification ISO/IEC 16023, Information technology – International symbology specification – MaxiCode ISO/IEC 18000 (all parts), Information technology – Radio frequency identification for item management ISO/IEC 18004, Information technology – Automatic identification and data capture techniques – QR Code 2005 bar code symbology specification ISO/IEC TR 14516, Information technology – Security techniques – Guidelines for the use and management of Trusted Third Party services ISO/IEC TR 19782, Information technology – Automatic identification and data capture techniques– Effects of gloss and low substrate opacity on reading of bar code symbols ISO/IEC TR 19791, Information technology – Security techniques – Security assessment of operational systems ISO/IEC TR 29162, Information technology – Guidelines for using data structures in AIDC media ISO/IEC TR 29172, Information technology – Mobile item identification and management –Reference architecture for Mobile AIDC services External links http://csrc.nist.gov http://www.ietf.org https://web.archive.org/web/20141217133239/http://idoctrust.com/ http://www.iso.org http://www.itu.int http://www.sabs.co.za Barcodes Radio-frequency identification
33040948
https://en.wikipedia.org/wiki/Inedito
Inedito
Inedito (English: All New or Unpublished) is the eleventh studio album by Italian singer Laura Pausini, released by Atlantic Records in November 2011. This is Pausini's comeback album, after two years of silence. The name of the album was confirmed at dawn on 10 September 2011. The album was previewed with the release of the single "Benvenuto", which debuted at number one on the Italian Singles Chart. The second single from the album was "Non ho mai smesso", followed by "Bastava", released on 20 January 2012. The Spanish-language version of the album, titled Inédito, was released on 11 November 2011 as a digital download in Spain, and on 15 November 2011 as a CD both in Spain and in Latin America. In December 2011, Pausini embarked on the Inedito World Tour to promote the album, first in Italy, then coming to Latin America. In March 2012 Pausini returned to Italy, then she continued with a European leg until August 2012. A return to Latin America, North America and Australia was planned and the tour was originally going to end in December 2012 with a new set of concerts in Italy, but on 15 September 2012 Pausini announced her pregnancy and cancelled the remaining shows of the tour. The album has sold 1,000,000 copies worldwide. Background After announcing in late 2009 that she would be out of the spotlight for two years, Pausini broke her silence in early January 2011, when her website reported that Pausini's eleventh studio album would be released in late 2011. Starting from 11 January 2011, Pausini's website was updated monthly, adding information about her new studio album and the Inedito World Tour. On 10 September 2011, Pausini revealed the artwork and the track list of her album. The following day, the first single from Inedito, "Benvenuto", was made available for streaming through Pausini's official website. The song was officially released on 12 September 2011. On 9 November 2011, via her official Facebook page, Pausini confirmed "Non ho mai smesso", released in Italy on 11 November, as the second single from the album. In March 2012, Pausini announced that the title song, "Inedito", would never be released as a single, since the song's co-singer, Gianna Nannini, was on vacation at the time and was not promoting it. Thus, "Mi tengo" was released as the fourth single, on 23 March 2012. In May 2012, Pausini announced that the fifth single from the album would be "Le cose che non mi aspetto", which was released on 25 May 2012. The last single from the album was announced as "Celeste" (in Italy) and "Las cosas que no me espero" (in Spain and Latin America, featuring Carlos Baute), both to be released on 5 November 2012. Originally, "Troppo tempo" was going to be chosen as the last single (its music video had already been recorded), but when Pausini discovered she was pregnant, she changed her mind and chose "Celeste" instead. Writing, composition and recording In October 2011, during a promotional tour in Mexico, Pausini explained that the title Inedito refers to the creative process of the album, described as very different from the one that led to her previous studio sets, because for the first time she conceived and wrote the whole album without any pressure, in the privacy of her home, instead of working on it in airports and hotel rooms. Describing the musical style of the album, Pausini claimed that it would include songs with a wide range of influences: "It's a complete album from the musical point of view, because there are a lot of differences between each song. 
It is influenced by rock, but it also features ballads and melodies with music by Italian and English orchestras." The album features guest appearances by Italian singer-songwriters Gianna Nannini, with whom Pausini duets in the title-track, and Ivano Fossati, who performs a guitar solo in "Troppo tempo" / "Hace tiempo". The track "Tutto non fa te" / "Lo que tú me das" is dedicated to Pausini's mother, while "Nel primo sguardo" / "A simple vista" was written by Pausini thinking about her younger sister, Silvia, with whom she duets in the Italian version of the song. "Nel primo sguardo" is the only song Pausini has ever recorded and released in four different languages: Spanish, Italian, Portuguese and French. A 10-second snippet of an English version of the song was sung live by Pausini during a press conference. Also, after the album was released, during a radio broadcast, Pausini sang a snippet of an English version of "Celeste". It was later confirmed that "Nel primo sguardo" was first written in English, under the name Beautiful, but Pausini did not find words for the chorus at first. Then, when her friend Carolina Leal translated the original version into Portuguese, Pausini knew the song would be good. The track "Ti dico ciao" / "Te digo adiós" is dedicated to Pausini's late friend, Giuseppe. Indeed, she planted a tree in front of her house, alluding to the lyrics of the song. The track "Celeste" / "Así Celeste" was written by Pausini after the media had repeatedly claimed she was pregnant when she was not. As she declared, the song's lyrics are exactly what she would say to her baby when she actually has one. Promotion On 11 November 2011, Pausini promoted her album by appearing on the first episode of the new TV show by Italian presenter Piero Chiambretti, the Chiambretti Muzic Show, during which she sang a few tracks from Inedito. The tour to promote the album started on 22 December 2011 from Milan. The Inedito World Tour reached Latin America in January and February 2012. In March 2012, Pausini returned to Italy for a second Italian leg of the tour, while in April and May she gave concerts throughout the rest of Europe. Track listing Inedito Inédito Deluxe version The deluxe edition of Inedito includes both the Italian-language and the Spanish-language versions of the album, and also features 5 additional tracks: Special edition The special editions of Inedito and Inédito contain the two original CDs plus a live bonus track recorded on December 31, 2011, and a live DVD of the Inedito World Tour. Personnel Credits adapted from Inedito liner notes. Production credits Massimo Aluzzi – engineer Marco Barusso – engineer Simone Bertolotti – producer Marco Borsatti – engineer, mixing Andy Bradfield – mixing Jason Carmer – engineer Renato Cantele – engineer, mixing Enrico Capalbo – assistant Paolo Carta – engineer, producer Luca Chiaravalli – pre-production Fiona Cruickshank – Pro Tools Luigi De Maio – engineer Samuele Dessì – engineer Nicola Fantozzi – engineer, assistant Mo Hausler – assistant Jake Jackson – engineer Davide Palmiotto – assistant Angelo Paracchini – assistant Laura Pausini – producer Corrado Rustici – producer Giuseppe Salvadori – assistant Celso Valli – mixing, producer Daniel Vuletic – pre-production, producer Music credits Leo Abrahams – guitar Niccolò Agliardi – backing vocals, composer Prisca Amori – orchestra leader Dave Arch – piano B.I.M. 
Orchestra – orchestra Emiliano Bassi – drums, percussions Matteo Bassi – bass, composer Simone Bertolotti – additional keyboards, piano, Rhodes piano, glockenspiel, composer, arrangements, orchestra conductor C.V. Ensamble orchestra – orchestra Paolo Carta – guitar, computer programming, harmonica, backing vocals, composer, arrangements, orchestra conductor Luca Chiaravalli – backing vocals, additional computer programming, composer Cesare Chiodo – bass Valentino Corvino – leader Beppe Dati – composer Edoardo De Angelis – orchestra leader Samuele Dessì – acoustic guitar, electric guitar, computer programming Nathan East – bass Niccolò Fabi – composer Gianluigi Fazio – backing vocals Steve Ferrone – drums, percussions Ivano Fossati – electric guitar, vocals Elvezio Fortunato – electric guitar Marzia Gonzo – backing vocals Nick Ingman – arrangements, orchestra conductor Frank Martin – piano Gianna Nannini – vocals Everton Nelson – orchestra leader Andy Pask – bass Nicola Oliva – acoustic guitar Orchestra Edodea Ensemble – orchestra Goffredo Orlandi – composer Laura Pausini – vocals, composer, backing vocals Silvia Pausini – vocals Massimiliano Pelan – composer Kaveh Rastegar – bass Andrea Rigonat – electric guitar Royal Philharmonic Orchestra – orchestra Tommy Ruggero – percussion Corrado Rustici – guitar, keyboards, treatments, beats, arrangements, orchestra conductor Rosaria Sindona – backing vocals Solis String Quartet – orchestra Ian Thomas – drums Giuseppe Tortora – contractor Michael Urbano – drums Celso Valli – piano, Hammond organ, keyboards, armonium, orchestra conductor Paolo Valli – drums, arrangements Massimo Varini – acoustic guitar, electric guitar Daniel Vuletic – composer, arrangements Paolo Zampini – flute Bruno Zucchetti – piano, Hammond organ, keyboards, computer programming Charts Weekly charts Year-end charts Certifications and sales Release history References 2011 albums Laura Pausini albums Atlantic Records albums Italian-language albums Spanish-language albums
15333015
https://en.wikipedia.org/wiki/Zoner%20Photo%20Studio
Zoner Photo Studio
Zoner Photo Studio is a software application developed by the Czech-founded company Zoner Software. It combines a bitmap editor and an image file manager and is used for editing and organizing digital photographs, both in its country of origin and around the world. As of 2021, this software is available only for the Windows operating system. History In 2004, Zoner Media Explorer was renamed Zoner Photo Studio because the product focus switched to strictly digital photography. A new version has been published annually throughout the software's history. Key changes brought by recent versions Version 12 of Zoner Photo Studio introduced the program's division into "modules": a Manager, Viewer, Editor, and raw module, to increase the ease of working with photo management/editing/etc. simultaneously. This was also the first version with a default charcoal gray interface, intended to ease photo viewing. Version 13 brought support for dual monitors and 64-bit versions of Windows. Version 14 brought support for batch upload to the developer's Zonerama web gallery service and for GPU acceleration via CUDA and OpenCL. Version 15 added the new Import module. Version 16 completed the Editor's switch to using the Side Panel. Version 17 revamped the raw module and made the Catalog more central to the Manager. Version 18 brought a major interface change. The five modules were distilled into three: Manager, Develop, and Editor. Develop enables non-destructive edits to raw files and other supported files. The newest version, Zoner Photo Studio X, is subscription-based and brings fine-tuned retouching tools and support for layer-based editing. Interface The Photo Studio software is all-in-one, with features covering a photo's entire workflow rather than, e.g., only editing. The interface is divided into what the company calls "modules," each addressing a certain major task in that workflow, such as viewing, editing, managing, or "importing" (downloading) photos. Import Module The program's Import Module is used for copying or moving photos from a camera, a card, or other external media onto a computer. It offers various automatic actions during download, such as assigning keywords and other metadata. Manager Module The program's Manager Module is for organizing photos. It can display "metadata," that is, information about pictures, stored in the Exif, IPTC, and/or XMP standard. This metadata includes titles, descriptions, author info, keywords, audio notes, digital signatures, GPS data, ratings, colored labels, etc. Metadata can also be edited here, or used for filtering. The Manager supports batch work. The program works with image files on disk directly—they are not imported in and later exported out. However, it does have a Catalog database that serves as a "card catalog" for metadata. The Manager has several different views—besides the standard "Browser," these are "Preview," "Map," and "Compare." Their purposes match their names. Viewer The Viewer is for quick full-size picture browsing. Editor The Editor is used for photo editing. This is a bitmap editor built for retouching purposes only and thus without full layer support. This editor offers selections and a selection mask. Raw module The program's raw module enables conversion of raw-format data into files in standard bitmap formats. The native support does not cover all the newest cameras at any given moment, but can be expanded to support them by integrating a free DNG converter. 
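To give a sense of what this raw "development" step generally involves (demosaicing, white balancing and gamma correction into a standard bitmap), here is a rough sketch using the third-party Python packages rawpy and imageio as stand-ins; it illustrates the general idea only, is unrelated to Zoner's own implementation, and the file names are placeholders.

# Rough illustration of raw development in general -- not Zoner's code.
# "photo.nef" and "photo.jpg" are placeholder file names.
import rawpy
import imageio

with rawpy.imread("photo.nef") as raw:            # load the camera's raw sensor data
    rgb = raw.postprocess(use_camera_wb=True)     # demosaic, white-balance and gamma-correct to 8-bit RGB
imageio.imwrite("photo.jpg", rgb)                 # save in a standard bitmap format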
Version 17 of the program added support for automatic lens defect corrections during development using LCP profiles. Photo Studio 15 Zoner Photo Studio was released in October 2012. Features Added in Version 15 Import Module—photo downloading The photo-download process was completely redesigned for version 15. Side Panel introduced The Side Panel was first added to the Editor in this version, where it already replaced the major editing dialogs (for e.g. colors, brightness, white balance, etc.). Version 15 is thus also when full-screen previewing first replaced small preview panes for most edits. This version saw the addition of "Quick Filters," designed for quickly applying multiple edits at once to give a certain look (e.g. "Polaroid," "Cross Process," "Lomo"). Automatic Backups of Original Version Each photo is automatically backed up to a local database the moment a user first edits that photo in the program. The program offers a "Restore Original" function powered by this database. This function is optional, but on by default. Synchronization A tool that synchronizes photo collections in multiple locations against each other. Tilt-shift This feature works like a Tilt-shift lens, that is, it blurs a picture's background (with controls for where and how much to blur). Zonerama Zonerama is a web gallery service run by the developer of Zoner Photo Studio. Zoner Inc. provides it for free to both users and non-users of the program. Albums can have configurable privacy, visibility, etc. Version 15 introduced integration between the program and this service, although version 15's integration is outdated today. This integration includes e.g. editing of Zonerama pictures directly from the program. Zoner Photo Studio 16 This version was released in October 2013. Features added in version 16 Improved touch-controls support Version 16 improves the program's usability on tablet PCs and with touch monitors. Manager This version saw the Manager's main area split into several views—the traditional Browser, plus Preview for quick full-size viewing, Map for work with pictures’ GPS coordinates, and Compare for visually comparing photos side-by-side. Version 16 added a miniature map to the Manager's Side Panel, along with support for drag-and-drop of GPS coordinates. Editor This version saw the vertical toolbar brought from the left side to the right side, and absolutely all edits moved to the Side Panel. Editor features added in this version included a "Content-aware Resize" that preserves the size of important content during shrinking. Photo Studio 17 Zoner Photo Studio 17 was published in September 2014. Features added in version 17 Import This version added the ability to assign keywords and rename pictures during import. Manager In this version the Manager's Side Panel was expanded to include a small picture preview. Photo ratings were changed from numbers to stars, and the ability to batch-keyword-tag photos in the Information pane was added. As of this version, the Catalog indexes pictures in the background (i.e. it is non-blocking) and auto-indexes the Windows Pictures library, and the Catalog section holds a more prominent place in the folder list on the left. The Catalog section also contains the controls for the DLNA capabilities added in this version. A tool for finding and deleting duplicate files was added in this version. 
Raw This version saw a full reworking of the raw module's interface, with some elements removed and some added; one addition is an Automatic button that auto-suggests settings. The most important newly adjustable development settings here are for histogram curves. As of this version, Manager thumbnails immediately reflect changes in the raw module. This version also adds support for lens defect correction using LCP profiles during raw development. Zoner Photo Studio 18 The English version of Zoner Photo Studio 18 was released on October 1, 2015. Features added in version 18 Interface Zoner Photo Studio 18 is divided into three sections – Manager, Develop and Editor. It replaces the former Viewer section with a standalone viewer, while Import is replaced by a button at the bottom left. The three sections share one structure – a "Navigator" on the left, a picture or pictures in the middle, and a histogram and tools on a panel on the right. Catalog The Catalog enables filtering by folders, keywords, and the date when the photos were created. As of version 18, it also supports work with external disks. Develop The Develop section enables non-destructive edits to raw, JPEG, PNG, and other bitmap file formats. Changes are saved to separate files, with the original photos remaining unchanged. HD Video Presentations Sets of photos can be turned into video presentations with music and animated transitions. Sharing Photos and albums can be shared to Facebook, Twitter, and E-mail; Zoner Postcards can be sent straight from inside the program. Zoner Photo Studio X Zoner Photo Studio X was released on September 19, 2016. ZPS X brings a subscription-based license model. This enables a fast update cycle, with new features being released on a quarterly basis. Features added in version X User interface ZPS X is divided into four modules, each focused on a particular part of the photo editing workflow: Manager, Develop, Editor and Create. Each module incorporates a unified layout. On the left side is the navigator, the selected photo is in the middle, and a panel with the histogram and tools is on the right side. The catalog The catalog enables filtering by folders, keywords, and the date when the photos were created. Storing photos on external drives is supported, and there is also integrated access to photos on cloud storage. Develop module The Develop section enables non-destructive edits to raw, JPEG, PNG, and other bitmap file formats. Changes are saved to separate files, with the original photos remaining unchanged. Editor module ZPS X incorporates editing in layers. All advanced features for modifying images can also be found here. Create module This module enables the user to create printed materials from photos. Users can design their own calendar, framed picture, collage, or postcard, which will be automatically shipped. Creating an HD video presentation of photos is also possible from this module. Sharing options Photos can be exported or shared directly from ZPS X. Users can upload photos to the Zonerama gallery or use the Pixbuf integration, which enables uploading pictures to multiple social networks simultaneously. Updates Spring 2017: Refine feature The improved selection tool allows effectively replacing the background behind a subject with fuzzy fur or frizzy hair. Autumn 2017: Smoothing brush, structure cloning and HEIF image format support Both retouching features utilise the frequency separation technique. The smoothing brush evens out color tones and softens gradients, making it ideal for retouching portraits. Structure cloning is a valuable tool for removing stray hairs. 
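For readers unfamiliar with the frequency separation technique mentioned above, the following is a minimal NumPy/SciPy sketch of the general idea: splitting an image into a blurred low-frequency layer of tones and a high-frequency layer of texture. It illustrates the technique in the abstract only and is not Zoner's implementation; the sigma value is an arbitrary example.

# Minimal sketch of generic frequency separation -- illustrative only, not Zoner's implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_frequencies(image, sigma=5.0):
    """Split an H x W x 3 uint8 image into low- and high-frequency layers."""
    low = gaussian_filter(image.astype(np.float32), sigma=(sigma, sigma, 0))  # smooth tones and colours
    high = image.astype(np.float32) - low                                     # fine texture and detail
    return low, high

def recombine(low, high):
    return np.clip(low + high, 0, 255).astype(np.uint8)

# A smoothing brush mainly edits the low layer (evening out tones), while
# structure cloning copies patches of the high layer (texture such as stray hairs).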
Zoner Photo Studio X was the first editor to bring HEIF image support to Windows. Spring 2018: Face recognition, 4 new groups of preset filters, monthly payment for subscription The software now scans for the presence of faces in a photo, which is taken into account during automatic adjustments. New presets are reflected in the settings sliders, so they can be fine-tuned manually. Since April, users have been able to purchase a monthly licence of ZPS X with an ongoing payment of $4.99 per month, as an alternative to the $49 annual subscription. Summer 2018: Zoner Photo Cloud Zoner Photo Cloud is integrated as a cloud storage option. It allows family members or co-workers to work simultaneously. ZPS X comes with a basic 5 GB plan free of charge. Autumn 2018: Improved video editor ZPS X introduces a redesign of its video creator with 4K Ultra HD output. It is possible to add multiple image and video tracks and perform advanced edits. In this version, users are able to organise, import and export presets. Spring 2019: Faster work with RAW files and facial recognition The spring update significantly improves working with RAW files and simplifies working with RAW + JPG files. Zoner Photo Studio now has liquify and facial recognition tools powered by AI – Face-Aware Liquify. Fall 2019: New export and precise colors The new export window can produce multiple exports at once as well as export photos directly to Zonerama or Zoner Photo Cloud. Individual settings for export have also been improved, making it several times faster than before. Zoner Photo Studio’s color profile management has been reworked to guarantee precise colors in every module of ZPS X. Spring 2020: Rebuilt Color Shift Zoner Photo Studio X now comes with a smart eyedropper that makes it possible to edit colors by clicking and dragging directly in the image. ZPS X is the only photo software on the market that gives the option to define an exact slice of the color wheel and shift its hue by up to 360 degrees by moving a single slider. The suite of color editing tools has been expanded to include the option to Shift Primary Colors in addition to Split Toning to get the popular cinematic color grading. Spring Special Update 2020: New photo books Creating photo books is now simpler and more intuitive thanks to new automatic features. You can upload photos to the photo book with a single click using drag and drop. The software selects the proper page layout or auto-aligns the photos for you. ZPS X has expanded its selection of photo books to include HD photo books with lay-flat bindings. There are also a number of new colors and styles of borders to choose from. Fall 2020: Local edits, the new Luma curve, and file variants The fall update focused mostly on the Develop module. Local editing tools were enhanced with the Tone Curve as well as Color Shift, allowing you to edit only a part of the photo you are working with. ZPS X comes complete with a new type of curve – the Luma curve, which better preserves the original colors’ saturation. You’ll save space on your computer with file variants, which act as a virtual copy of the photo. Spring 2021: Improved video functions and more efficient photo editing In addition to trimming videos, it’s possible to adjust the image settings of the video clip with tools you know from the Develop module. You can also copy edits from clip to clip, compare before and after versions, or apply edits to multiple clips at once. In the Develop module, you can copy and paste edits quickly and easily for a large number of images. 
You can also choose the best photo in a series thanks to zooming in and comparing details of multiple photos at once. Zoner Photo Studio now supports Apple iCloud storage. System requirements OS: Microsoft Windows 10 (64 bit) - version 1809 or newer Processor: Intel or AMD that supports SSE2   Memory: 4GB RAM Hard Drive: 480MB Resolution: 1280×800 or more Sources References External links Zoner Photo Studio on zoner.cz Zoner Photo Studio 11 on fotografovani.cz Zoner Photo Studio 12 on fotografovani.cz Zoner Photo Cloud service on photographyblog.com HEIF image format integration on dpreview.com Comparison with other photo editors on toptenreviews.com This article uses a translation of Zoner Photo Studio from Czech Wikipedia. Image viewers Image organizers
1922113
https://en.wikipedia.org/wiki/MacProject
MacProject
MacProject was a project management and scheduling business application released along with the first Apple Macintosh systems in 1984. MacProject was one of the first major business tools for the Macintosh which enabled users to calculate the "critical path" to completion and estimate costs in money and time. If a project deadline was missed or if available resources changed, MacProject recalculated everything automatically. MacProject was written by Debra Willrett at Solosoft, and was published and distributed by Apple Computer to promote the original Macintosh personal computer. It was developed from an earlier application written by Debra Willrett for Apple's Lisa computer, LisaProject. This was the first graphical user interface (GUI) for project management. There were many other project management applications on the market at the time, but LisaProject was the first to simplify the process by allowing the user to interactively draw their project on the computer in the form of a PERT chart. Constraints could be entered for each task, and the relationships between tasks would show which ones had to be completed before a task could begin. Given the task constraints and relationships, a "critical path", schedule and budget could be calculated dynamically using heuristic methods. One of the early proponents of MacProject was James Halcomb, a well known expert in the use of the Critical Path Method. Having supervised hand-drawn network diagrams for countless complex projects, Halcomb immediately recognized the promise of the WYSIWYG graphical interface and computerized calculation of the critical path. Using a Lisa computer housed in a case designed to fit under an airplane seat, Mr. Halcomb traveled the United States demonstrating this new technology in his CPM courses. In consultation with the software's developers he authored the book Planning Big with MacProject, which introduced a generation of Mac users to PERT and CPM. In December 1987, an updated version of MacProject, called MacProject II, was introduced as a part of Claris's move to update its suite of Mac office applications. In 1991, Microsoft Project was ported to the Macintosh from Microsoft Windows and became MacProject's main competitor. However, after the release of version 3.0 of Microsoft Project in 1993, Microsoft terminated support of the Macintosh release. MacProject 1.0 is not Y2K-compliant as it cannot schedule tasks past 1999. See also MacPaint MacWrite MacDraw List of project management software References TidBits reviews September 1993 External links MacProject on Mac512.com 1984 software Classic Mac OS-only software made by Apple Inc. Project management software Discontinued software
24960699
https://en.wikipedia.org/wiki/MariaDB
MariaDB
MariaDB is a community-developed, commercially supported fork of the MySQL relational database management system (RDBMS), intended to remain free and open-source software under the GNU General Public License. Development is led by some of the original developers of MySQL, who forked it due to concerns over its acquisition by Oracle Corporation in 2009. MariaDB is intended to maintain high compatibility with MySQL, ensuring a drop-in replacement capability with library binary parity and exact matching with MySQL APIs and commands. However, new features are diverging. It includes new storage engines like Aria, ColumnStore, and MyRocks. Its lead developer/CTO is Michael "Monty" Widenius, one of the founders of MySQL AB and the founder of Monty Program AB. On 16 January 2008, MySQL AB announced that it had agreed to be acquired by Sun Microsystems for approximately $1 billion. The acquisition completed on 26 February 2008. Sun was then bought the following year by Oracle Corporation. MariaDB is named after Widenius' younger daughter, Maria. (MySQL is named after his other daughter, My.) MariaDB Server Versioning MariaDB version numbers follow MySQL's numbering scheme up to version 5.5. Thus, MariaDB 5.5 offers all of the MySQL 5.5 features. There exists a gap in MySQL versions between 5.1 and 5.5, while MariaDB issued 5.2 and 5.3 point releases. Since specific new features have been developed in MariaDB, the developers decided that a major version number change was necessary. Licensing The MariaDB Foundation mentions:MariaDB Server will remain Free and Open Source Software licensed under GPLv2, independent of any commercial entities. Third-party software MariaDB's API and protocol are compatible with those used by MySQL, plus some features to support native non-blocking operations and progress reporting. This means that all connectors, libraries and applications which work with MySQL should also work on MariaDB—whether or not they support its native features. On this basis, Fedora developers replaced MySQL with MariaDB in Fedora 19, out of concerns that Oracle was making MySQL a more closed software project. OpenBSD likewise in April 2013 dropped MySQL for MariaDB 5.5. However, for recent MySQL features, MariaDB either has no equivalent yet (like geographic function) or deliberately chose not to be 100% compatible (like GTID, JSON). The list of incompatibilities grows longer with each version. Prominent users MariaDB is used at ServiceNow, DBS Bank, Google, Mozilla, and, since 2013, the Wikimedia Foundation. Several Linux distributions and BSD operating systems include MariaDB. Some default to MariaDB, such as Arch Linux, Manjaro, Debian (from Debian 9), Fedora (from Fedora 19), Red Hat Enterprise Linux (from RHEL 7 in June 2014), CentOS (from CentOS 7), Mageia (from Mageia 2), openSUSE (from openSUSE 12.3 Dartmouth), SUSE Linux Enterprise Server (from SLES 12), OpenBSD (from 5.7), and FreeBSD. MariaDB Foundation The MariaDB Foundation was founded in 2012 to oversee the development of MariaDB. The current CEO of the MariaDB Foundation is Kaj Arnö since February 2019. The Foundation describes its mission as the following:The cornerstones of the MariaDB Foundation mission are Openness, Adoption, and Continuity. We ensure the MariaDB Server code base remains open for usage and contributions on technical merits. We strive to increase adoption by users and across use cases, platforms and means of deployment. 
We provide continuity to the MariaDB Server ecosystem, independent of any commercial entities. Notable sponsors of MariaDB Foundation The most notable sponsors of MariaDB Foundation are Alibaba Cloud, Tencent Cloud, Microsoft, MariaDB Corporation Ab, Servicenow, Schaffhausen Institute of Technology, IBM, and DBS Bank. The Foundation also works with technology partners, e.g. Google tasked one of its engineers to work at the MariaDB Foundation in 2013. History of MariaDB Foundation In December 2012 Michael Widenius, David Axmark, and Allan Larsson announced the formation of a foundation that would oversee the development of MariaDB. At the time of founding in 2013 the Foundation wished to create a governance model similar to that used by the Eclipse Foundation. The Board appointed the Eclipse Foundation's Executive Director Mike Milinkovich as an advisor to lead the transition. The MariaDB Foundation's first sponsor and member was MariaDB Corporation Ab that joined in 2014 after initial agreements on the division of ownership and roles between the MariaDB Foundation and MariaDB Corporation. E.g. MariaDB is a registered trademark of MariaDB Corporation Ab, used under license by the MariaDB Foundation. MariaDB Corporation Ab was originally founded in 2010 as SkySQL Corporation Ab, but changed name in 2014 to reflect its role as the main driving force behind the development of MariaDB server and the biggest support-provider for it. Foundation CEO at the time, Simon Phipps quit in 2014 on the sale of the MariaDB trademark to SkySQL. He later said: "I quit as soon as it was obvious the company was not going to allow an independent foundation." Simon Phipps was CEO of the Foundation from April 2013 to 2014. Otto Kekäläinen was the CEO from January 2015 to September 2018. Arjen Lentz was appointed CEO of the Foundation in October 2018 and resigned in December 2018. Kaj Arnö joined as the CEO on 1 February 2019. Eric Herman is the current Chairman of the Board. MariaDB Corporation Ab Initially, the development activities around MariaDB were based entirely on open source and non-commercial. To build a global business, MariaDB Corporation Ab was founded in 2010 by Patrik Backman, Ralf Wahlsten, Kaj Arnö, Max Mether, Ulf Sandberg, Mick Carney and Michael "Monty" Widenius. The current CEO of MariaDB Corporation is Michael Howard. MariaDB Corporation Ab was formed after a merger between SkySQL Corporation Ab and Monty Program on 23 April 2013. Subsequently the name was changed on 1 October 2014 to reflect the company’s role as the main driving force behind the development of MariaDB Server and the largest support-provider for it. MariaDB Corporation Ab announced in February 2022 its intention to become a publicly listed company on the New York Stock Exchange (NYSE). Products of MariaDB Corporation Ab MariaDB Corporation Ab is a contributor to the MariaDB Server, develops the MariaDB database connectors (C, C++, Java 7, Java 8, Node.js, ODBC, Python, R2DBC) as well as the MariaDB Enterprise Platform, including the MariaDB Enterprise Server, optimized for production deployments. The MariaDB Enterprise Platform includes MariaDB MaxScale, an advanced database proxy, MariaDB ColumnStore, a columnar storage engine for interactive ad hoc analytics, MariaDB Xpand, a distributed SQL storage engine for massive transactional scalability, and MariaDB Enterprise Server, an enhanced, hardened and secured version of the community server. 
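As an illustration of the Python connector mentioned in the list above, a minimal usage sketch follows; the host, credentials, database and table names are placeholders invented for the example, not real endpoints.

# Minimal sketch of using MariaDB Connector/Python; all connection details below are placeholders.
import mariadb

conn = mariadb.connect(
    user="app", password="secret",
    host="db.example.com", port=3306,
    database="inventory",
)
cur = conn.cursor()
cur.execute("SELECT id, name FROM products WHERE price > ?", (10,))  # qmark-style parameters
for product_id, name in cur.fetchall():
    print(product_id, name)
conn.close()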
MariaDB Corporation offers the MariaDB Enterprise Platform in the cloud under the name SkySQL, a database-as-a-service. SkySQL SkySQL general availability was announced on March 31, 2020. This database-as-a-service offering from MariaDB is a managed cloud service on Google Cloud Platform. SkySQL is a hybrid database offering that includes a column family store, an object store, and a distributed SQL database with both a transactional and an analytical query engine. The combination allows developers to use a single database for multiple use cases and avoid a proliferation of databases. The benefits of this offering over the MariaDB services offered by Amazon RDS or Microsoft Azure Database are versioning (SkySQL ensures users are on the most recent product release) as well as combined analytical and transactional support. Investors in MariaDB Corporation Ab MariaDB Corporation has been funded with a total of $123M combined across its A-series funding round in 2012, B-series in 2013-2016 and C-series in 2017-2022. It is undergoing a D-series round in 2022 aiming at an additional $104M in combination with its intention to become a listed company on the New York Stock Exchange (NYSE). Some of the initial A-series investors in MariaDB Corporation Ab were e.g. OpenOcean and Tesi (Finnish Industry Investment Ltd). The B-series round was led by Intel in 2013, which itself invested $20M. In 2017, Alibaba led the C-series with a $27M investment into MariaDB in addition to a €25M investment by the European Investment Bank. See also Comparison of relational database management systems Multi-master replication References Further reading External links MariaDB Foundation website MariaDB Corporation website 2009 software Client-server database management systems Cross-platform software Free database management systems MySQL RDBMS software for Linux Software forks Software using the GPL license
2146459
https://en.wikipedia.org/wiki/Apple%20Interactive%20Television%20Box
Apple Interactive Television Box
The Apple Interactive Television Box (AITB) is a television set-top box developed by Apple Computer (now Apple Inc.) in partnership with a number of global telecommunications firms, including British Telecom and Belgacom. Prototypes of the unit were deployed at large test markets in parts of the United States and Europe in 1994 and 1995, but the product was canceled shortly thereafter, and was never mass-produced or marketed. Overview The AITB was designed as an interface between a consumer and an interactive television service. The unit's remote control would allow a user to choose what content would be shown on a connected television, and to seek with fast forward and rewind. In this regard it is similar to a modern satellite receiver or TiVo unit. The box would only pass along the user's choices to a central content server for streaming instead of issuing content itself. There were also plans for game shows, educational material for children, and other forms of content made possible by the interactive qualities of the device. Early conceptual prototypes have an unfinished feel. Near-completion units have a high production quality, the internal components often lack prototype indicators, and some units have FCC approval stickers. These facts, along with a full online manual suggest the product was very near completion before being canceled. Infrastructure Because the machine was designed to be part of a subscription data service, the AITB units are mostly inoperable. The ROM contains only what is required to continue booting from an external hard drive or from its Ethernet connection. Many of the prototypes do not appear to even attempt to boot. This is likely dependent on changes in the ROM. The ROM itself contains parts of a downsized Mac OS 7.1 enabling it to establish a network connection to the media servers provided by Oracle. The Oracle Media Server (OMS) initially ran on hardware produced by Larry Ellison's nCube Systems company, but was later also made available by Oracle on SGI, Alpha, Sun, SCO, Netware, Windows NT, and AIX systems. These servers also provided the parts of the OS not implemented in ROM of the AITB via the OMS Boot Service. Therefore, an AITB must establish a network connection successfully in order to finish the boot process. Using a command key combination and a PowerBook SCSI adapter, it is possible to get the AITB to boot into a preinstalled System 7.1 through an external SCSI hard drive. In July 2016, images were published on a video game forum that appear to show a Super Nintendo Entertainment System cartridge designed to work with the British Telecom variant of the AITB. The cartridge is labeled "BT GameCart" and includes an 8-pin serial connector designed to connect to the Apple System/Peripheral 8 port on the rear of the box. A BT promotional film for the service trial discusses a way users could download and play Nintendo video games via the system. Specifications The Apple Interactive Television Box is based upon the Macintosh Quadra 605 or LC 475. Because the box was never marketed, not all specifications have been stated by Apple. It supports MPEG-2 Transport containing ISO11172 (MPEG-1) bit streams, Apple Desktop Bus, RF in and out, S-Video out, RCA audiovideo out, RJ-45 connector for either E1 data stream on PAL devices or T1 data stream on NTSC devices, serial port, and HDI-30 SCSI. 
Apple intended to offer the AITB with a matching black ADB mouse, keyboard, Apple 300e CD-ROM drive, StyleWriter printer, and one of several styles of remote controls. The hard drive contains parts of a regular North American Mac OS 7.1.1 with Finder, several sockets for network connection protocols, and customized MPEG-1 decoding components for the QuickTime Player software. History A few units contain a special boot ROM which allows the device to boot locally from a SCSI hard drive that has the OS and applications contained within the box; these devices were used primarily by developers inside Apple and Oracle, and for limited demonstration purposes. In normal network use, content and program code were served to the box by Oracle OMS over the network to implement the box's interactivity. A few hundred to a few thousand units were deployed at Disneyland California hotels and provided in-room shopping and park navigation. Approximately 2500 units were installed and used in consumer homes in England during the second interactive television trial conducted by British Telecom and Oracle, which was in Ipswich, UK. The set-top applications were developed using Oracle's Oracle Media Objects (OMO) product, which is somewhat similar to HyperCard, but was enhanced significantly to operate in a network-based interactive TV environment. See also Apple TV Macintosh TV Apple Bandai Pippin IPTV (Internet Protocol Television) References External links Apple Interactive Television History - Computer Town Patent filed by Apple over the set-top box Apple Inc. hardware Set-top box Television technology
2813270
https://en.wikipedia.org/wiki/Hook%20%28video%20game%29
Hook (video game)
There have been several video games based on the 1991 film Hook. A side-scrolling platform game for the Nintendo Entertainment System (NES) and Game Boy was released in the United States in February 1992. Subsequent side-scrolling platform games were released for the Commodore 64 and the Super Nintendo Entertainment System (SNES) later in 1992, followed by versions for the Sega CD, Sega Genesis, and Sega's handheld Game Gear console in 1993. An arcade game was also released in 1993. A graphic adventure point-and-click game, developed and published by Ocean Software, was released for Amiga, Atari ST, and DOS in 1992. Gameplay In each version of the game, the player plays as Peter Pan, who must go through Neverland to rescue his children after they are kidnapped by Captain Hook. Each version of the game is set in Neverland, and concludes with a sword fight between Peter Pan and Captain Hook. Arcade version The arcade version is a side-scrolling beat 'em up that supports up to four players. The player chooses to play as either Peter Pan or one of the Lost Boys: Ace, Pockets, Rufio, or Thudbutt. The game is played across six stages. PC version The version for PC is a graphic adventure point-and-click game. As Peter Pan, the player must solve puzzles and problems to progress through the game. Each large problem cannot be solved without first solving several smaller problems first. Puzzles are solved by talking to characters and finding useful objects. Five icons are featured at the bottom of the screen, each one representing a different action that the player can take: "look at", "talk to", "pick up", "use", and "give". An inventory window, showing all the items the player has accumulated, is also located at the bottom of the screen. Also located at the bottom of the screen are two separate images, one depicting Captain Hook while the other shows Peter Pan. The characters' facial expressions change depending on the player's progress. Tinker Bell accompanies Peter Pan to provide hints and clues. The game has three main sections: Pirate Town, an encounter that Peter Pan has with the Lost Boys, and the confrontation with Hook. Sega and SNES versions This version is a side-scrolling platform game. The Sega CD version features identical gameplay to the Genesis and SNES versions. The Game Gear version has eight levels and the Genesis version features 11 levels, while the SNES and Sega CD versions have 12 levels. Each version features various locations that include caves, forests, lagoons, and snowy mountains. Throughout the game, the player must defend against Hook's pirate henchmen, as well as spiders, snakes, and skeletons. Peter Pan's primary weapon is a dagger. After completing the first level, the player receives the golden sword as a weapon, capable of shooting balls of energy. If the player is attacked, Peter Pan drops the sword and must use the dagger, while the golden sword can sometimes be retrieved in the following level. The player's health meter is measured as leaves. The player begins with two leaves, and loses one each time an enemy attacks. The player can collect additional leaves throughout the game to increase the health meter, for a maximum total of four leaves. Fruits that are scattered throughout each level can be collected to refill the player's health meter. After collecting pixie dust, Peter Pan has the ability to fly for short periods of time, until the Fly Meter becomes empty. Tinker Bell appears throughout the game to refill the Fly Meter. 
The game does not include a password feature. The film's musical score was adapted for use in the Sega CD version, which also includes digitized graphical sequences from the film, and voice acting. Additionally, the Sega CD version includes a computer-generated scan of Captain Hook's ship, which is featured during the game's introduction. Commodore 64/NES/Game Boy version This version is a side-scrolling platform action game, in which Peter Pan can fly and swim. Enemies include Hook's henchmen, as well as ghosts, zombies, and monkeys that throw bananas at the player. A map of each level is provided to the player. The player must collect items in order to proceed to the next level. Instructions are provided to the player before each level, and Tinker Bell appears so she can provide the player with hints. Tinker Bell also has the ability to revive the player if all health is lost. The game includes a two-player option. The NES and Game Boy versions are nearly identical to each other. The NES version has 16 levels, while the Game Boy version has 27 levels. Development and release The Super Nintendo version was in early development in January 1992. Ocean Software began working on the graphic adventure version in January 1992. For the graphic adventure game, the creative team read the film's script and were required to have the gameplay closely follow the film's story. It was Ocean Software's first graphic adventure game. The NES and Game Boy versions, developed by Ocean Software, were the first versions to be released; they were published by Sony Imagesoft, and were released in February 1992. The Amiga version had been published in Europe by July 1992. The SNES version, developed by Ukiyotei and published by Sony Imagesoft, had been released in the United States by September 1992. Ocean Software developed and published the Commodore 64 version, also released in 1992. By March 1993, Irem had released its arcade version of the game in the United States. The Sega CD and Genesis versions were developed by Core Design, while the Game Gear version was developed by Spidersoft; each version was published by Sony Imagesoft. The Sega CD version includes voice acting, but not from the film's actors, as licensing their voices was deemed too costly. In the United States, the Sega CD version was released in March or April 1993, while the Genesis and Game Gear versions were released in July 1993. In Europe, the Mega Drive version was released in November 1993. By December 1993, the Amiga version had been re-released in Europe by publisher Hit Squad. Reception Nintendo Power considered the NES and Game Boy versions to be nearly identical, and criticized them for being "an average running and jumping game with a pretty weak character and sluggish play control. The movie is good, but the game falls short." N-Force criticized the music of the NES version and wrote that the film "doesn't translate very well to console. You occasionally get one that does the platform adventure game extremely well–but Hook just isn't one of them." Steve Jarratt of Total! praised the graphics of the NES version but wrote that the in-game music "is a bit annoying after a while". Andy Dyer of Total! praised the Game Boy version for its music and graphics, and wrote that it was "much faster to play" than the NES version and "therefore more fun", while noting that it was also harder than the NES version. 
GamePro praised the music of the NES version, but wrote that Peter Pan's "limited range of sword swinging motion and lethargic forward movement make gameplay a bit of a drag." GamePro reviewed the Game Boy version and wrote that it had an "enticing musical repertoire and superbly detailed graphics, although they are tiny and a bit eye straining. Overall, this is a fun Game Boy cart". Marc Camron of Electronic Games praised the graphics of the SNES version and wrote, "What makes this game different from most games based on movie licenses is that this game is good!" N-Force praised the graphics and music of the SNES version, but criticized the standard gameplay. Nintendo Power praised the SNES version for its graphics and considered it better than the NES and Game Boy versions, but noted the occasionally slow response times for the controls. Jason Brookes of Super Play praised the colorful graphics of the SNES version, but criticized its short length and slow-moving gameplay. Mean Machines Sega praised the graphics, music and "well planned" levels of the Sega CD version, and awarded it a 72% rating, but criticized the slow controls. The magazine concluded that the game was "a real waste" of the Sega CD's "enormous potential," stating, "Visually and aurally Hook is tremendous, but underneath there is a very average game bursting to get out." Camron, who gave the Sega CD version an 89% rating, praised the music, graphics, and gameplay, but criticized the quality of featured footage from the film and the limited amount of voice acting. Sega Visions, reviewing the Sega CD version, noted that the "outstanding quality of the music will give your gaming a lift." Sega Visions wrote, "With the exception of the sound and music, the Genesis version of the Sega CD hit […] is every bit as good as the original." Sega Visions wrote about the Game Gear version: "The translation to Game Gear is superb. From great color to terrific game play and bouncy tunes, Hook Game Gear is a blast." GamePro wrote that the Genesis version does not have as good graphics or high quality sound as the preceding versions for the SNES and Sega CD, but "it's just as fun to play." Mean Machines Sega praised the graphics and music of the Genesis version, but criticized its difficulty, while calling it, "Sort of reasonably playable, in a way." The magazine concluded, "Another mediocre film becomes a mediocre platform game. Hook isn't terrible, but it's not loaded with fun either." In a retrospective review of the Genesis version, Brett Alan Weiss of AllGame noted that Peter Pan "moves along at a dreadfully slow pace, even when jumping or running in wide open spaces. He can jump high and far and can even fly and swim, but the slow motion routine gets old almost as soon at begins." He praised the graphics despite occasional glitches, but wished that the game contained hidden items or areas. He concluded, "Hook is a flawed, but fun platformer that will keep your interest at least until you beat it." James Leach of Commodore Format reviewed the Commodore 64 version. Leach praised the sound effects and music, the large levels, and the various gameplay styles, but criticized its main character for looking "a bit pasty." Leach also believed that the game was too easy, and criticized it for "Tons of boring loading" times. 
Commodore Format reviewed the game again in 1993, criticizing the game's repetitive gameplay and concluding, "It's got probably the most irritating multiload system in the history of gaming, making you wait while it loads a subscreen, then wait again while it loads the main level." Commodore Force praised the graphics but wrote, "Hook's multiload is possibly one of the worst I've come across", further stating, "It's a shame (and also ironic) that Hook's incredible amount of detail is also its downfall: all those admirable extras extend loading time." The magazine concluded, "It's a fun game to play, with lots to do and see, but can you stand the waiting? Basically, if you hate multiloads, avoid Hook like the plague." Electronic Games nominated the SNES version for its 1993 Electronic Gaming Awards, in the category of Best Electronic Game Graphics. The magazine stated, "Some of the finest game graphics can be found in Hook," writing that the game had a "unified visual appearance like no other game on the market." PC version Amiga Action praised the graphics and music, but criticized the graphics. Tony Jones of Amiga Mania considered the game to be better than the film, and noted that it had a "much clearer storyline." Rik Haynes of CU Amiga wrote, "Sadly, despite aspiring to the heights achieved by Monkey Island, Hook has none of the finesse of rival productions from Virgin Games or Delphine." Maff Evans of Amiga Format called the game a "tedious graphic adventure" and criticized its story and characters, writing that they "don't seem to evolve at all, leaving everything seeming rather flat." Evans also criticized the control system for being "far too limited and unwieldy," and wrote, "Occasionally nice graphics, but a bit too cartoon-like for this style of game." Andy Hutchinson of ST Format criticized the Atari ST version, calling it "terribly reminiscent of Monkey Island. However, where that game is hysterical and innovative, Hook is slightly amusing and derivative." Hutchinson concluded, "A polished but ultimately unsatisfying game. Buy Hook only if you're a massive fan of graphic adventures or have pleasant childhood memories of Peter Pan. Then expect to be disappointed." Amiga User International praised the music and graphics, but wrote, "The only disappointments are that it is too short by far, and the puzzles are not really very tough. The game is pretty linear and will not let you stray very far off track." Mark Ramshaw of Amiga Power praised the music and sound effects, but criticized the game's puzzle aspect, calling it "occasionally a little predictable, sometimes a bit on the obtuse side, and just a tad too linear." The One praised the music and graphics, but criticized the short length. Several publications reviewed the game again in December 1993, after it was re-released by Hit Squad. Cam Winstanley of Amiga Power praised the graphics but criticized the difficulty of the puzzles. Paul Roundell of Amiga Action wrote, "The graphics are colourful, but average, and the interface and interaction, while workable, are certainly no breakthrough, and as always in games of this kind, the humour is dire." CU Amiga praised the music and graphics, but criticized it for occasionally illogical puzzles, as well as confusing text responses given to the player out of order as the result of poor coding. Amiga Format criticized the game's repetitive character interactions.
In 1995, Matt Broughton of The One Amiga reviewed the game and wrote that it "offers enough locations and graphical treats to keep most people happy. The control system breaks no new ground, but why fix something that ain't broke?" Entertainment Weekly gave the game a B- and wrote that "Peter Pan tries to rediscover his inner child by hacking his way through the usual assortment of bad guys. One plus: gorgeous green-and-gold backgrounds that are truer to real life than the movie's overstuffed sets." References External links Hook (Amiga/Atari ST/DOS) at MobyGames Hook (Commodore 64/Game Boy/NES) at MobyGames Hook (all other versions) at MobyGames 1992 video games Adventure games Amiga games Arcade video games Atari ST games Commodore 64 games Core Design games DOS games Game Boy games Game Gear games Irem games Nintendo Entertainment System games Ocean Software games Epic/Sony Records games Peter Pan video games Point-and-click adventure games Sega Genesis games Sega CD games Super Nintendo Entertainment System games Ukiyotei games Video games based on films Video games based on adaptations Video games developed in Japan Video games developed in the United Kingdom
23816198
https://en.wikipedia.org/wiki/1980%20USC%20Trojans%20football%20team
1980 USC Trojans football team
The 1980 USC Trojans football team represented the University of Southern California (USC) in the 1980 NCAA Division I-A football season. In their fifth year under head coach John Robinson, the Trojans compiled an 8–2–1 record (4–2–1 against conference opponents), finished in third place in the Pacific-10 Conference (Pac-10), and outscored their opponents by a combined total of 265 to 134. Quarterback Gordon Adams led the team in passing, completing 104 of 179 passes for 1,237 yards with seven touchdowns and seven interceptions. Marcus Allen led the team in rushing with 354 carries for 1,563 yards and 14 touchdowns. Hoby Brenner led the team in receiving with 26 catches for 315 yards and no touchdowns. Schedule Personnel Game summaries Minnesota Marcus Allen 42 rushes, 216 yards Arizona St Marcus Allen 36 rushes, 133 yards Oregon Washington Marcus Allen 30 rushes, 216 yards vs. UCLA Notre Dame Team players drafted into the NFL Ronnie Lott, 1st round, San Francisco 49ers Keith Van Horne, 1st round, Chicago Bears Dennis Smith, 1st round, Denver Broncos Ray Butler, 4th round, Baltimore Colts Kevin Williams, 7th round, New Orleans Saints Jeff Fisher, 7th round, Chicago Bears Steve Busick, 7th round, Denver Broncos James Hunter, 9th round, Pittsburgh Steelers Eric Scoggins, 12th round, Baltimore Colts Awards and honors Former USC Trojans player Tay Brown was inducted into the College Football Hall of Fame References USC USC Trojans football seasons USC Trojans football
80584
https://en.wikipedia.org/wiki/Calchas
Calchas
Calchas (Kalkhas) is an Argive mantis, or "seer," dated to the Age of Legend, an aspect of Greek mythology. Calchas appears in the opening scenes of the Iliad, which is believed to have been based on a war conducted by the Achaeans against the powerful city of Troy in the Late Bronze Age. Calchas, a seer in the service of the army before Troy, is portrayed as a skilled augur, Greek oionopólos ('bird-savant'): "as an augur, Calchas had no rival in the camp." He received knowledge of the past, present, and future from the god Apollo. He had other mantic skills as well: interpreting the entrails of the enemy during the tide of battle. His mantosune, as it is called in the Iliad, is the hereditary occupation of his family, which accounts for the most credible etymology of his name: "the dark one" in the sense of "ponderer," based on the resemblance of pondering to melancholy, or being "blue." Calchas has a long literary history after Homer. His appearance in the Iliad is no sort of "first" except for the chronological sequence of literature. In the legendary time of the Iliad, seers and divination are already long-standing. Family Calchas was the son of Polymele and Thestor; grandson of the seer Idmon; and brother of Leucippe, Theonoe, and Theoclymenus. Career It was Calchas who prophesied that in order to gain a favourable wind to deploy the Greek ships mustered in Aulis on their way to Troy, Agamemnon would need to sacrifice his daughter, Iphigeneia, to appease Artemis, whom Agamemnon had offended. The episode was related at length in the lost Cypria, of the Epic Cycle. He also states that Troy will be sacked in the tenth year of the war. In Sophocles' Ajax, Calchas delivers a prophecy to Teucer suggesting that the protagonist will die if he leaves his tent before the day is out. Iliad In the Iliad, Calchas is cast as the apostle of divine truth. His most powerful skeptic is Agamemnon himself. Before the events of the Iliad, at the beginning of the expedition, Agamemnon had to sacrifice his daughter Iphigenia to receive favorable sailing winds. At the beginning of the Iliad, Calchas delivers another blow to him. In open assembly Calchas prophesied that the captive Chryseis, a spoil of war awarded to Agamemnon, must be returned to her father Chryses in order to propitiate Apollo into lifting the plague he sent as punishment for Agamemnon's disrespect of Chryses, Apollo's priest. Agamemnon exploded in anger and called the prophet a "visionary of hell" (Fitzgerald translation) and accused Calchas of rendering unfair prophecies. Fearing Agamemnon, Calchas had already secured a champion in Achilles, who spoke against Agamemnon in heated terms in assembly. Agamemnon grudgingly accepted the edict of Apollo (supported by the Assembly) that he give up his prize, but, as an insult to Achilles, threatened to take Achilles' own female prize as recompense. There follows "the wrath of Achilles," part righteous anger, part galling resentment over the unjustified overreaching of Agamemnon, part love for his war bride. This dispute is a central focus of the epic. Later in the story, Poseidon assumes the form of Calchas in order to rouse and empower the Greek forces while Zeus is not observing the battle. Posthomerica Calchas also plays a role in Quintus of Smyrna's Posthomerica. Calchas said that if they were brief, they could convince Achilles to fight.
It is he rather than Helenus (as suggested in Sophocles' Philoctetes) who predicts that Troy will only fall once the Argives are able to recruit Philoctetes. It is by his advice that they halt the battle, even though Neoptolemus is slaughtering the Trojans. He also tells the Argives that the city is more easily taken by strategy than by force. He endorses Odysseus' suggestion that the Trojan Horse will effectively infiltrate the Trojans. He also foresees that Aeneas will survive the battle and found the city, and tells the Argives that they will not kill him. He did not join the Argives when they boarded the ships, as he foresaw the impending doom at the Kapherean Rocks. Death Calchas died of shame at Colophon in Asia Minor shortly after the Trojan War (as told in the Cyclic Nostoi and Melampodia): the prophet Mopsus beat him in a contest of soothsaying, although Strabo placed an oracle of Calchas on Monte Gargano in Magna Graecia. It is also said that Calchas died of laughter when he thought another seer had incorrectly predicted his death. This seer had foretold Calchas would never drink from the wine produced from vines he had planted himself; Calchas made the wine, but, holding the cup, he died of laughter before he could drink it. In medieval and later versions of the myth, Calchas is portrayed as a Trojan defector and the father of Chryseis, now called Cressida. Calchas is a character in William Shakespeare's play Troilus and Cressida. References Achaean Leaders Mythological Greek seers Metamorphoses characters Argive characters in Greek mythology Characters in Greek mythology
9001554
https://en.wikipedia.org/wiki/Yali
Yali
Yali may refer to: Cyclone Yali, a cyclone that occurred during the 1998 South Pacific cyclone season Yalı (residence), a water's edge house or mansion in Turkey Yali (mythology), a Hindu mythical creature with the body of a lion and some elephant features Yali (volcano), a Greek volcanic island Yali, Antioquia, a municipality in Colombia Yali people, a tribe of Western New Guinea Yali language, a language spoken by the Yali people Yale-China Association, known as Yali in Chinese Yali High School, Changsha, China El Yali, a Ramsar site in Chile Yali (politician), a New Guinean religious leader, politician and cargo cult leader YALI (Linux) (Yet Another Linux Installer), an installer used by some Linux distributions such as Pardus Yali Falls Dam
970009
https://en.wikipedia.org/wiki/Helen%20%28play%29
Helen (play)
Helen (Helenē) is a drama by Euripides about Helen, first produced in 412 BC for the Dionysia in a trilogy that also contained Euripides' lost Andromeda. The play has much in common with Iphigenia in Tauris, which is believed to have been performed around the same time period. Historical frame Helen was written soon after the Sicilian Expedition, in which Athens had suffered a massive defeat. Concurrently, the sophists – a movement of teachers who incorporated philosophy and rhetoric into their occupation – were beginning to question traditional values and religious beliefs. Within the play's framework, Euripides starkly condemns war, deeming it to be the root of all evil. Background About thirty years before this play, Herodotus argued in his Histories that Helen had never in fact arrived at Troy, but was in Egypt during the entire Trojan War. The Archaic lyric poet Stesichorus had made the same assertion in his "Palinode" (itself a correction to an earlier poem corroborating the traditional characterization that made Helen out to be a woman of ill repute). The play Helen tells a variant of this story, beginning under the premise that rather than running off to Troy with Paris, Helen was actually whisked away to Egypt by the gods. The Helen who escaped with Paris, betraying her husband and her country and initiating the ten-year conflict, was actually an eidolon, a phantom look-alike. After Paris was promised the most beautiful woman in the world by Aphrodite and he judged her fairer than her fellow goddesses Athena and Hera, Hera ordered Hermes to replace Helen, Paris' assumed prize, with a fake. Thus, the real Helen has been languishing in Egypt for years, while the Greeks and Trojans alike curse her for her supposed infidelity. In Egypt, King Proteus, who had protected Helen, has died. His son Theoclymenus, the new king with a penchant for killing Greeks, intends to marry Helen, who after all these years remains loyal to her husband Menelaus. Plot Helen receives word from the exiled Greek Teucer that Menelaus never returned to Greece from Troy, and is presumed dead, putting her in the perilous position of being available for Theoclymenus to marry, and she consults the prophetess Theonoe, sister to Theoclymenus, to find out Menelaus' fate. Her fears are allayed when a stranger arrives in Egypt and turns out to be Menelaus himself, and the long-separated couple recognize each other. At first, Menelaus does not believe that she is the real Helen, since he has hidden the Helen he won in Troy in a cave. However, the woman he was shipwrecked with was, in reality, only a phantom of the real Helen. Before the Trojan War even began, a judgement took place in which Paris was involved: he gave the goddess Aphrodite the award of the fairest since she bribed him with Helen as a bride. To take their revenge on Paris, the remaining goddesses, Athena and Hera, replaced the real Helen with a phantom. Menelaus, however, does not know this, but luckily one of his sailors steps in to inform him that the false Helen has disappeared into thin air. The couple still must figure out how to escape from Egypt, but the rumor that Menelaus has died is still in circulation. Thus, Helen tells Theoclymenus that the stranger who came ashore was a messenger there to tell her that her husband was truly dead. She informs the king that she may marry him as soon as she has performed a ritual burial at sea, thus freeing her symbolically from her first wedding vows.
The king agrees to this, and Helen and Menelaus use this opportunity to escape on the boat given to them for the ceremony. Theoclymenus is furious when he learns of the trick and nearly murders his sister Theonoe for not telling him that Menelaus is still alive. However, he is prevented by the miraculous intervention of the demi-gods Castor and Polydeuces, brothers of Helen and the sons of Zeus and Leda. Themes Virtue and Oaths: in the Helen, Euripides emphasizes the importance of virtue and oaths. Awaiting the return of her husband Menelaus for 17 years — the ten of the Trojan War and another seven for the search — Helen remains faithful to Menelaus and the promises she has made him: Helen made two oaths, one to the Spartan river Eurotas and another on the head of Menelaus himself as sanctifying object. Menelaus also swears fidelity to Helen: so seriously do husband and wife take their vows that they agree to commit suicide and never marry another if their plans fail. Such emphasis on oath-keeping is consonant with general practice during the time period (Torrance, 2009). With these oaths, Helen and Menelaus declare their love for each other and their desire to live only with the other. These oaths prove their devotion and exemplify the importance of oaths. Given the play's humor and Euripides' general challenging of norms and values, it remains uncertain what the playwright's own views are. Identity and Reputation: Throughout all the different permutations of the story of Helen and the Trojan War, what makes the Trojan War distinctive is the fact that it is always caused, somehow, by Helen as the supreme embodiment of female beauty, whether she is or is not physically in Troy and whether she acts as an enthusiastic partner of Paris or as a reluctant victim of his unwanted rape. Euripides expands more on this idea by presenting his play largely from Helen's point of view, revealing how she truly feels about being the symbolic villain of the Trojan War. Helen's character in the play is deeply affected by the losses of the people who have died fighting to bring her back to her homeland and husband, and she expresses this guilt frequently: "The wrecked city of Ilium / is given up to the teeth of fire, / all through me and the deaths I caused, / all for my name of affliction" (lines 196-198). Despite this guilt, she also feels anger for being made into a symbol that people can project their hate on, even though they do not know her: "I have done nothing wrong and yet my reputation / is bad, and worse than a true evil is it to bear / the burden of faults that are not truly yours" (lines 270-272). Although she spends a lot of the beginning of the play feeling pity for the men who have died, and for herself as well, Euripides' Helen is independent, confident, and intelligent. She displays her ability to think on her feet as she formulates a workable plan to return home and as she rejects her husband Menelaus' cockamamy plans. Therefore, Euripides in his play portrays a living and breathing Helen filled with compassion and wit, not at all similar to the blameworthy person others believe her to be. Translations Edward P. Coleridge, 1891 – prose: full text Arthur S.
Way, 1912 – verse Philip Vellacott, 1954 – prose and verse Richmond Lattimore, 1956 – verse Frank McGuinness, 2008 – for Shakespeare's Globe George Theodoridis, 2011 – prose: full text Emily Wilson, 2016 - verse See also Norma Jeane Baker of Troy, 2019 play Richard Strauss's opera Die ägyptische Helena, the libretto for which was adapted by Hugo von Hofmannsthal from the play by Euripides References Torrance, Isabelle. “On Your Head be it Sworn: Oath and Virtue in Euripides' Helen." The Classical Quarterly, vol. 59, no. 1, 2009, pp. 1-7. External links Plays by Euripides Trojan War literature Laconian mythology Egypt in Greek mythology Plays set in ancient Egypt Plays set in ancient Greece Cultural depictions of Helen of Troy Plays adapted into operas
209537
https://en.wikipedia.org/wiki/DBase
DBase
dBase (also stylized dBASE) was one of the first database management systems for microcomputers and the most successful in its day. The dBase system includes the core database engine, a query system, a forms engine, and a programming language that ties all of these components together. dBase's underlying file format, the .dbf file, is widely used in applications needing a simple format to store structured data. dBase was originally released as Vulcan for PTDOS in 1978; the CP/M port caught the attention of Ashton-Tate in 1980. They licensed it and re-released it as dBASE II, and later ported it to Apple II and IBM PC computers running DOS. On the PC platform, in particular, dBase became one of the best-selling software titles for a number of years. A major upgrade was released as dBase III, and ported to a wider variety of platforms, adding UNIX and VMS. By the mid-1980s, Ashton-Tate was one of the "big three" software publishers in the early business software market, the others being Lotus Development and WordPerfect. Starting in the mid-1980s, several companies produced their own variations on the dBase product and especially the dBase programming language. These included FoxBASE+ (later renamed FoxPro), Clipper, and other so-called xBase products. Many of these were technically stronger than dBase, but could not push it aside in the market. This changed with the poor reception of dBase IV, whose design and stability were so lacking that many users switched to other products. At the same time, database products increasingly used the IBM-invented SQL (Structured Query Language). Another factor was user adoption of Microsoft Windows on desktop computers. The shift toward SQL and Windows put pressure on the makers of xBase products to invest in major redesign to provide new capabilities. In the early 1990s, xBase products constituted the leading database platform for implementing business applications. The size and impact of the xBase market did not go unnoticed, and within one year, the three top xBase firms were acquired by larger software companies: Borland purchased Ashton-Tate; Microsoft bought Fox Software; and Computer Associates acquired Nantucket. By the opening decade of the 21st century, most of the original xBase products had faded from prominence and many disappeared entirely. Products known as dBase still exist, owned by dBase LLC. History Origins In the late 1960s, Fred Thompson at the Jet Propulsion Laboratory (JPL) was using a Tymshare product named RETRIEVE to manage a database of electronic calculators, which were at that time very expensive products. In 1971, Thompson collaborated with Jack Hatfield, a programmer at JPL, to write an enhanced version of RETRIEVE which became the JPLDIS project. JPLDIS was written in FORTRAN on the UNIVAC 1108 mainframe, and was presented publicly in 1973. When Hatfield left JPL in 1974, Jeb Long took over his role. While working at JPL as a contractor, C. Wayne Ratliff entered the office football pool. He had no interest in the game as such, but felt he could win the pool by processing the post-game statistics found in newspapers. In order to do this, he turned his attention to a database system and, by chance, came across the documentation for JPLDIS. He used this as the basis for a port to PTDOS on his kit-built IMSAI 8080 microcomputer, and called the resulting system Vulcan (after Mr. Spock on Star Trek).
Ashton-Tate George Tate and Hal Lashlee had built two successful start-up companies: Discount Software, which was one of the first to sell PC software programs through the mail to consumers, and Software Distributors, which was one of the first wholesale distributors of PC software in the world. They entered into an agreement with Ratliff to market Vulcan, and formed Ashton-Tate (the name Ashton chosen purely for marketing reasons) to do so. Ratliff ported Vulcan from PTDOS to CP/M. Hal Pawluk, who handled marketing for the nascent company, decided to change the name to the more business-like "dBase". Pawluk devised the use of lower case "d" and all-caps "BASE" to create a distinctive name. Pawluk suggested calling the new product version two ("II") to suggest it was less buggy than an initial release. dBase II was the result and became a standard CP/M application along with WordStar and SuperCalc. In 1981, IBM commissioned a port of dBase for the then-in-development PC. The resultant program was one of the initial pieces of software available when the IBM PC went on sale in the fall of 1981. dBase was one of a very few "professional" programs on the platform at that time, and became a huge success. The customer base included not only end-users, but an increasing number of "value added resellers", or VARs, who purchased dBase, wrote applications with it, and sold the completed systems to their customers. The May 1983 release of dBase II RunTime further entrenched dBase in the VAR market by allowing the VARs to deploy their products using the lower-cost RunTime system. Although some critics stated that dBase was difficult to learn, its success created many opportunities for third parties. By 1984, more than 1,000 companies offered dBase-related application development, libraries of code to add functionality, applications using dBase II Runtime, consulting, training, and how-to books. A company in San Diego (today known as Advisor Media) premiered a magazine devoted to professional use of dBase, Data Based Advisor; its circulation exceeded 35,000 after eight months. All of these activities fueled the rapid rise of dBase as the leading product of its type. dBase III As platforms and operating systems proliferated in the early 1980s, the company found it difficult to port the assembly language-based dBase to target systems. This led to a re-write of the platform in the C programming language, using automated code conversion tools. The resulting code worked, but was essentially undocumented and inhuman in syntax, a problem that would prove to be serious in the future. In May 1984, the rewritten dBase III was released. Although reviewers widely panned its lowered performance, the product was otherwise well reviewed. After a few rapid upgrades, the system stabilized and was once again a best-seller throughout the 1980s, and formed the famous "application trio" of PC compatibles (dBase, Lotus 1-2-3, and WordPerfect). By the fall of 1984, the company had over 500 employees and was taking in US$40 million a year in sales, the vast majority from dBase products. dBase IV Introduced in 1988, after delays, dBase IV had "more than 300 new or improved features". By then, FoxPro had made inroads, and even dBase IV's support for Query by Example and SQL was not enough. Along the way, Borland, which had bought Ashton-Tate, brought out a revised dBase IV in 1992 but with a focus described as "designed for programmers" rather than "for ordinary users".
Recent version history dBase / xBase programming language For handling data, dBase provided detailed procedural commands and functions to open and traverse records in data files (e.g., USE, SKIP, GO TOP, GO BOTTOM, and GO recno), manipulate field values (REPLACE and STORE), and manipulate text strings (e.g., STR() and SUBSTR()), numbers, and dates. dBase is an application development language and integrated navigational database management system which Ashton-Tate labeled as "relational", but it did not meet the criteria defined by Dr. Edgar F. Codd's relational model. It used a runtime interpreter architecture, which allowed the user to execute commands by typing them at a command-line "dot prompt". Similarly, program scripts (text files with PRG extensions) ran in the interpreter (with the DO command). dBase programs were easy to write and test; a business person with no programming experience could develop applications. Over time, Ashton-Tate's competitors introduced so-called clone products and compilers that had more robust programming features such as user-defined functions (UDFs) and arrays for complex data handling. Ashton-Tate and its competitors also began to incorporate SQL, the ANSI/ISO standard language for creating, modifying, and retrieving data stored in relational database management systems. Eventually, it became clear that the dBase world had expanded far beyond Ashton-Tate. A "third-party" community formed, consisting of Fox Software, Nantucket, Alpha Software, Data Based Advisor Magazine, SBT and other application development firms, and major developer groups. Paperback Software launched the flexible and fast VP-Info with a unique built-in compiler. The community of dBase variants sought to create a dBase language standard, supported by IEEE committee X3J19 and initiative IEEE 1192. They used the term "xBase" to distinguish it from the Ashton-Tate product. Ashton-Tate saw the rise of xBase as an illegal threat to its proprietary technology. In 1988 they filed suit against Fox Software and Santa Cruz Operation (SCO) for copying dBase's "structure and sequence" in FoxBase+ (SCO marketed XENIX and UNIX versions of the Fox products). In December 1990, U.S. District Judge Terry Hatter Jr. dismissed Ashton-Tate's lawsuit and invalidated Ashton-Tate's copyrights for not disclosing that dBase had been based, in part, on the public domain JPLDIS. In October 1991, while the case was still under appeal, Borland International acquired Ashton-Tate, and as one of the merger's provisions the U.S. Justice Department required Borland to end the lawsuit against Fox and allow other companies to use the dBase/xBase language without the threat of legal action. By the end of 1992, major software companies raised the stakes by acquiring the leading xBase products. Borland acquired Ashton-Tate's dBase products (and later WordTech's xBase products), Microsoft acquired Fox Software's FoxBASE+ and FoxPro products, and Computer Associates acquired Nantucket's Clipper products. Advisor Media built on its Data Based Advisor magazine by launching FoxPro Advisor and Clipper Advisor (and other) developer magazines and journals, and live conferences for developers. However, a planned dBase Advisor Magazine was aborted due to the market failure of dBase IV. By the year 2000, the xBase market had faded as developers shifted to new database systems and programming languages. Computer Associates (later known as CA) eventually dropped Clipper. Borland restructured and sold dBase.
Of the major acquirers, Microsoft stuck with xBase the longest, evolving FoxPro into Visual FoxPro, but the product is no longer offered. In 2006 Advisor Media stopped its last-surviving xBase magazine, FoxPro Advisor. The era of xBase dominance has ended, but there are still xBase products. The dBase product line is now owned by dBase LLC, which currently sells dBASE PLUS 12.3 and a DOS-based dBASE CLASSIC (dbDOS to run it on 64-bit Windows). Some open source implementations are available, such as Harbour, xHarbour, and Clip. In 2015, a new member of the xBase family was born: the XSharp (X#) language, maintained as an open source project with a compiler, its own IDE, and Microsoft Visual Studio integration. The XSharp product was originally created by a group of four enthusiasts who had worked on the Vulcan.NET project in the past. The compiler is built on top of the Roslyn compiler code, the code behind the C# and VB compilers from Microsoft. Programming examples Today, implementations of the dBase language have expanded to include many features targeted for business applications, including object-oriented programming, manipulation of remote and distributed data via SQL, Internet functionality, and interaction with modern devices. The following example opens an employee table ("empl"), gives every manager who supervises 1 or more employees a 10-percent raise, and then prints the names and salaries. USE empl REPLACE ALL salary WITH salary * 1.1 FOR supervisors > 0 LIST ALL fname, lname, salary TO PRINT * (comment: reserved words shown in CAPITALS for illustration purposes) Note how one does not have to keep mentioning the table name. The assumed ("current") table stays the same until told otherwise. Because of its origins as an interpreted interactive language, dBase used a variety of contextual techniques to reduce the amount of typing needed. This facilitated incremental, interactive development but also made larger-scale modular programming difficult. A tenet of modular programming is that the correct execution of a program module must not be affected by external factors such as the state of memory variables or tables being manipulated in other program modules. Because dBase was not designed with this in mind, developers had to be careful about porting (borrowing) programming code that assumed a certain context, and this made writing larger-scale modular code difficult. Work-area-specific references were still possible using the arrow notation ("B->customer") so that multiple tables could be manipulated at the same time. In addition, if the developer had the foresight to name their tables appropriately, they could clearly refer to a large number of tables open at the same time by notation such as "employee->salary" and "vacation->start_date". Alternatively, the alias command could be appended to the initial opening of a table statement, which made referencing a table field unambiguous and simple. For example, one can open a table and assign an alias to it in this fashion, "use EMP alias Employee", and henceforth, refer to table variables as "Employee->Name". Another notable feature is the re-use of the same clauses for different commands. For example, the FOR clause limits the scope of a given command. (It is somewhat comparable to SQL's WHERE clause.) Different commands such as LIST, DELETE, REPLACE, BROWSE, etc. could all accept a FOR clause to limit (filter) the scope of their activity.
This simplifies the learning of the language. dBase was also one of the first business-oriented languages to implement string evaluation. i = 2 myMacro = "i + 10" i = &myMacro * comment: i now has the value 12 Here the "&" tells the interpreter to evaluate the string stored in "myMacro" as if it were programming code. This is an example of a feature that made dBase programming flexible and dynamic, sometimes called "meta ability" in the profession. This could allow programming expressions to be placed inside tables, somewhat reminiscent of formulas in spreadsheet software. However, it could also be problematic for pre-compiling and for making programming code secure from hacking. But, dBase tended to be used for custom internal applications for small and medium companies where the lack of protection against copying, as compared to compiled software, was often less of an issue. Interactivity In addition to the dot-prompt, dBase III, III+, and IV came packaged with an ASSIST application to manipulate data and queries, as well as an APPSGEN application which allowed the user to generate applications without resorting to code writing, like a 4GL. The dBase IV APPSGEN tool was based largely on portions of an early CP/M product named Personal Pearl. Niches Although the language has fallen out of favor as a primary business language, some find dBase an excellent interactive ad hoc data manipulation tool. Whereas SQL retrieves data sets from a relational database (RDBMS), with dBase one can more easily manipulate, format, analyze and perform calculations on individual records, strings, numbers, and so on in a step-by-step imperative (procedural) way instead of trying to figure out how to use SQL's declarative operations. Its granularity of operations is generally smaller than SQL, making it easier to split querying and table processing into easy-to-understand and easy-to-test parts. For example, one could insert a BROWSE operation between the filtering and the aggregation step to study the intermediate table or view (applied filter) before the aggregation step is applied. As an application development platform, dBase fills a gap between lower-level languages such as C, C++, and Java, and high-level proprietary 4GLs (fourth generation languages) and purely visual tools, providing relative ease-of-use for business people with less formal programming skill and high productivity for professional developers willing to trade off the low-level control. dBase remained a popular teaching tool even after sales slowed because the text-oriented commands were easier to present in printed training material than the mouse-oriented competitors. Mouse-oriented commands were added to the product over time, but the command language remained a popular de facto standard, while mousing commands tended to be vendor-specific. File formats A major legacy of dBase is its file format, which has been adopted in a number of other applications. For example, the shapefile format, developed by ESRI for spatial data in its PC ArcInfo geographic information system, uses .dbf files to store feature attribute data. Microsoft recommends saving a Microsoft Works database file in the dBase file format so that it can be read by Microsoft Excel. A package is available for Emacs to read xbase files. LibreOffice and OpenOffice Calc can read and write all generic dbf files. dBase's database system was one of the first to provide a header section for describing the structure of the data in the file. 
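As a rough illustration of how such a self-describing header can be read from any language, the following is a minimal Python sketch; the byte layout follows commonly published descriptions of dBase III-style .dbf files, and the sample file name empl.dbf is hypothetical (it echoes the "empl" table used in the example above), so treat it as an assumption rather than a definitive implementation.

import struct

def read_dbf_structure(path):
    # Read the fixed 32-byte header and the field descriptors of a dBase III-style .dbf file.
    with open(path, "rb") as f:
        header = f.read(32)
        # Bytes 4-7: record count (uint32 LE); bytes 8-9: header size; bytes 10-11: record size
        record_count, header_size, record_size = struct.unpack("<IHH", header[4:12])
        fields = []
        while True:
            descriptor = f.read(32)
            if not descriptor or descriptor[0:1] == b"\r":  # a 0x0D byte terminates the field list
                break
            name = descriptor[0:11].split(b"\x00")[0].decode("ascii")
            field_type = chr(descriptor[11])                # e.g. C, N, D, L, M
            length = descriptor[16]
            fields.append((name, field_type, length))
    return record_count, header_size, record_size, fields

# Hypothetical usage: print the structure stored inside the file itself
print(read_dbf_structure("empl.dbf"))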
This meant that the program no longer required advance knowledge of the data structure, but rather could ask the data file how it was structured. There are several variations on the .dbf file structure, and not all dBase-related products and .dbf file structures are compatible. VP-Info is unique in that it can read all variants of the dbf file structure. A second filetype is the .dbt file format for memo fields. While character fields are limited to 254 characters each, a memo field is a 10-byte pointer into a .dbt file which can include a much larger text field. dBase was very limited in its ability to process memo fields, but some other xBase languages such as Clipper treated memo fields as strings just like character fields for all purposes except permanent storage. dBase uses .ndx files for single indexes, and .mdx (multiple-index) files for holding between 1 and 48 indexes. Some xBase languages such as VP-Info include compatibility with these dBase index files, while others use different file formats, such as the .ntx format used by Clipper and the .cdx format used by FoxPro or FlagShip. Later iterations of Clipper included drivers for additional index formats. Reception Jerry Pournelle in July 1980 called Vulcan "infuriatingly excellent" because the software was powerful but the documentation was poor. He praised its speed and sophisticated queries, but said that "we do a lot of pounding at the table and screaming in rage at the documentation". References External links xBase (and dBase) File Format Description 1979 software Borland software CP/M software Database-related software for Linux Desktop database application development tools DOS software Microcomputer software Proprietary database management systems Assembly language software XBase programming language family
20051952
https://en.wikipedia.org/wiki/NIST%20hash%20function%20competition
NIST hash function competition
The NIST hash function competition was an open competition held by the US National Institute of Standards and Technology (NIST) to develop a new hash function called SHA-3 to complement the older SHA-1 and SHA-2. The competition was formally announced in the Federal Register on November 2, 2007. "NIST is initiating an effort to develop one or more additional hash algorithms through a public competition, similar to the development process for the Advanced Encryption Standard (AES)." The competition ended on October 2, 2012 when NIST announced that Keccak would be the new SHA-3 hash algorithm. The winning hash function has been published as NIST FIPS 202 the "SHA-3 Standard", to complement FIPS 180-4, the Secure Hash Standard. The NIST competition has inspired other competitions such as the Password Hashing Competition. Process Submissions were due October 31, 2008 and the list of candidates accepted for the first round was published on December 9, 2008. NIST held a conference in late February 2009 where submitters presented their algorithms and NIST officials discussed criteria for narrowing down the field of candidates for Round 2. The list of 14 candidates accepted to Round 2 was published on July 24, 2009. Another conference was held on August 23–24, 2010 (after CRYPTO 2010) at the University of California, Santa Barbara, where the second-round candidates were discussed. The announcement of the final round candidates occurred on December 10, 2010. On October 2, 2012, NIST announced its winner, choosing Keccak, created by Guido Bertoni, Joan Daemen, and Gilles Van Assche of STMicroelectronics and Michaël Peeters of NXP. Entrants This is an incomplete list of known submissions. NIST selected 51 entries for round 1. 14 of them advanced to round 2, from which 5 finalists were selected. Winner The winner was announced to be Keccak on October 2, 2012. Finalists NIST selected five SHA-3 candidate algorithms to advance to the third (and final) round: BLAKE (Aumasson et al.) Grøstl (Knudsen et al.) JH (Hongjun Wu) Keccak (Keccak team, Daemen et al.) Skein (Schneier et al.) NIST noted some factors that figured into its selection as it announced the finalists: Performance: "A couple of algorithms were wounded or eliminated by very large [hardware gate] area requirement – it seemed that the area they required precluded their use in too much of the potential application space." Security: "We preferred to be conservative about security, and in some cases did not select algorithms with exceptional performance, largely because something about them made us 'nervous,' even though we knew of no clear attack against the full algorithm." Analysis: "NIST eliminated several algorithms because of the extent of their second-round tweaks or because of a relative lack of reported cryptanalysis – either tended to create the suspicion that the design might not yet be fully tested and mature." Diversity: The finalists included hashes based on different modes of operation, including the HAIFA and sponge function constructions, and with different internal structures, including ones based on AES, bitslicing, and alternating XOR with addition. NIST has released a report explaining its evaluation algorithm-by-algorithm. Did not pass to Final Round The following hash function submissions were accepted for Round Two, but did not make it to the final round. As noted in the announcement of the finalists, "none of these candidates was clearly broken". 
Blue Midnight Wish CubeHash (Bernstein) ECHO (France Telecom) Fugue (IBM) Hamsi Luffa Shabal SHAvite-3 SIMD Did not pass to Round Two The following hash function submissions were accepted for Round One but did not pass to Round Two. They have neither been conceded by the submitters nor have had substantial cryptographic weaknesses. However, most of them have some weaknesses in the design components, or performance issues. ARIRANG (CIST – Korea University) CHI CRUNCH FSB Lane Lesamnta MD6 (Rivest et al.) SANDstorm (Sandia National Laboratories) Sarmal SWIFFTX TIB3 Entrants with substantial weaknesses The following non-conceded Round One entrants have had substantial cryptographic weaknesses announced: AURORA (Sony and Nagoya University) Blender Cheetah Dynamic SHA Dynamic SHA2 ECOH Edon-R EnRUPT ESSENCE LUX MCSSHA-3 NaSHA Sgàil Spectral Hash Twister Vortex Conceded entrants The following Round One entrants have been officially retracted from the competition by their submitters; they are considered broken according to the NIST official Round One Candidates web site. As such, they are withdrawn from the competition. Abacus Boole DCH Khichidi-1 MeshHash SHAMATA StreamHash Tangle WaMM Waterfall Rejected entrants Several submissions received by NIST were not accepted as First Round Candidates, following an internal review by NIST. In general, NIST gave no details as to why each was rejected. NIST also has not given a comprehensive list of rejected algorithms; there are known to be 13, but only the following are public. HASH 2X Maraca MIXIT NKS 2D Ponic ZK-Crypt See also Advanced Encryption Standard process CAESAR Competition – Competition to design authenticated encryption schemes Post-Quantum Cryptography Standardization References External links NIST website for competition Official list of second round candidates Official list of first round candidates SHA-3 Zoo Classification of the SHA-3 Candidates Hash Function Lounge VHDL source code developed by the Cryptographic Engineering Research Group (CERG) at George Mason University FIPS 202 – The SHA-3 Standard Cryptographic hash functions Cryptography contests National Institute of Standards and Technology
36544924
https://en.wikipedia.org/wiki/Adobe%20Prelude
Adobe Prelude
Adobe Prelude is a discontinued ingest and logging software application for tagging media with metadata for searching, post-production workflows, and footage lifecycle management. Adobe Prelude is also made to work closely with Adobe Premiere Pro. It is part of the Adobe Creative Cloud and is geared towards professional video editing alone or with a group. The software also offers features like rough cut creation. A speech transcription feature was removed in December 2014. History Adobe announced that, on April 23, 2012, Adobe OnLocation would be shut down and Adobe Prelude would launch on May 7, 2012. Adobe stated OnLocation's production was stopping because of the growing trend in the industry toward tapeless, native workflows; Adobe stressed that Adobe Prelude is not a direct replacement for OnLocation. Adobe OnLocation was available in CS5 but not in CS6, and Adobe Prelude is only available in CS6. Adobe still offers technical support for OnLocation. In 2021, Adobe announced they would be discontinuing Adobe Prelude, starting by removing it from their website on September 8, 2021. Support for existing users will continue through September 8, 2024. Features Prelude is used to tag media, log data, create and export metadata, and generate rough cuts that can be sent to Adobe Premiere Pro. A user can add a tag to a piece of media that will show up in Premiere Pro or when another user opens that media with Prelude. Ingest Footage Prelude can ingest all kinds of file types. Once ingested, Prelude can duplicate, transcode and verify the files. Log Footage Prelude can log data using only the keyboard. Create Rough Cuts Prelude is able to generate Rough Cuts. Rough Cuts are a combination of sub clips that will hold any metadata a user feeds into them. Rough cuts can hold metadata such as markers and comments, and this metadata will stay with the footage. Workflow Accessibility Prelude is an XMP-based open platform that allows for custom integration into many video editing platforms. Features from OnLocation Many features from Adobe OnLocation went to Adobe Prelude or Adobe Premiere Pro. Adobe OnLocation thrived on tape-based cameras and setting up a shot before shooting it; with the change in the industry, this problem is irrelevant in post-production. Adobe OnLocation also allowed the user to add tags and scripting metadata that would carry over to Premiere Pro. OnLocation also had a Media Browser pane, which is the standard for any Adobe program today; Prelude has this Media Browser as well. Prelude Live Logger Prelude Live Logger is an application integrated with Prelude CC. Prelude Live Logger is designed to capture notes to use during video logging and editing while you shoot footage on an iPad's camera. Editors can import and combine this metadata with footage from Prelude throughout editing to facilitate various tasks. See also Creative Cloud controversy References Prelude Windows multimedia software MacOS multimedia software Shareware 2012 software Proprietary software Video editing software
10056009
https://en.wikipedia.org/wiki/Wisconsin%20International%20University%20College
Wisconsin International University College
Wisconsin International University College, Ghana is one of the earliest established private universities in Ghana. It is located at Agbogba Junction near Kwabenya in the Greater Accra Region of Ghana. It was established in January 2000 and is accredited by the National Accreditation Board as a university college and affiliated to the University of Ghana, University of Cape Coast, the Kwame Nkrumah University of Science and Technology and University for Development Studies. History In 1992, Wisconsin International University was established. The first campus was established in Tallinn, Estonia and was called: Concordia International University Estonia. In 1997, Wisconsin International University Ukraine was founded in Kyiv. Campuses There are currently three campuses: Accra Campus at Agbogba, North Legon Kanda (City Campus, Greater Accra Region) Kumasi Campus at Feyiase - Atonsu - Lake Road Organization There are currently Five Schools and two faculties Programs Undergraduate Postgraduate Certificate (Short Courses) Undergraduate Program Wisconsin Business School Department of General Business Studies BA Business Studies, General Business Department of Management Studies BA Business Studies, Human Resource Management BA Business Studies, Marketing Department of Accounting, Finance and Banking BA Business Studies, Banking and Finance BA Business Studies, Accounting BSc. Accounting School of Computing Technology Department of Business Computing BA Computer Science and Management BSc. Management and Computer Studies Department of Information Technology BSc. Information Technology Diploma - Information Technology School of Nursing Department of Nursing BSc Nursing BSc Midwifery BSc Community Health Nursing School of Communication BA Communication Studies - Specializations in Journalism (Broadcast, Print and Online) Faculty of Law Bachelor of Laws (LL.B) Faculty of Humanities and Social Sciences Department of Language, Arts and Communication Studies Department of Social Sciences BA Development and Environmental Studies BSc. Economics Postgraduate Program: School of Research and Graduate Studies MA Adult Education - Options in Rural and Community Development/Human Resource Development MBA - Options in Finance/ Project Management/ Human Resource Management/ Marketing/ Accounting/ Management Information Systems MSc - Environmental Sustainability and Management Certificate/Short Courses Professional Diploma in Functional and Advanced Investigations Diploma in Information Technology Certificate in Paralegal Studies Executive Certificate in Security Management, Forensics and Investigative Management Advanced Executive Certificate in Security Management, Forensics and Investigative Management Certificate in Music Certificate in Sign Language Certificate in Occupational Health and Safety Management Certificate in Christian Formation Leadership. Library The University currently has the following libraries: Main Campus Library Faculty of Law Library Nursing Library Kumasi Campus Library City Campus Library. Affiliations Wisconsin International University Homepage University of Cape Coast, Cape Coast University of Ghana Kwame Nkrumah University of Science and Technology University for Development Studies Nationalities at Wisconsin Currently the institution can boast of students of over 30 nationalities and speaking 20 languages from across Africa, Asia and America. 
See also List of universities in Ghana Notes External links National Accreditation Board Wisconsin International University College Universities in Ghana Educational institutions established in 2000 Education in Accra 2000 establishments in Ghana
49072483
https://en.wikipedia.org/wiki/2017%20Rose%20Bowl
2017 Rose Bowl
The 2017 Rose Bowl was a college football bowl game played on January 2, 2017 at the Rose Bowl stadium in Pasadena, California. This 103rd Rose Bowl Game matched the Big Ten Conference champions Penn State Nittany Lions against the USC Trojans of the Pac-12 Conference, a rematch of the 1923 and 2009 Rose Bowls, the former the first appearance for either team in the bowl and the latter the most recent appearance for either team. It was one of the 2016–17 bowl games that concluded the 2016 FBS football season. Sponsored by the Northwestern Mutual financial services organization, the game was officially known as the Rose Bowl Game presented by Northwestern Mutual. USC received the Lathrop K. Leishman trophy for winning the game. The contest, played on January 2 in keeping with the game's standard practice when New Year's Day falls on a Sunday, was televised on ESPN with a radio broadcast on ESPN Radio and XM Satellite Radio, which began at 1:30 PM (PST) with kickoff at 2:10 PM (PST). The Pasadena Tournament of Roses Association was the organizer of the game. The Rose Bowl Game was a contractual sell-out, with 64,500 tickets allocated to the participating teams and conferences. The remaining tickets were distributed to the Tournament of Roses members, sponsors, City of Pasadena residents, and the general public. Ticket prices were $150 and $210. The bowl game was preceded by the 2017 Rose Parade, the 128th annual Rose Parade which began at 8:00 a.m. (PST) on game day with a theme of "Echoes of Success." Pre-game activities The game was presided over by the 2017 Rose Queen, the Royal Court, Tournament of Roses President Brad Ratliff, and the grand marshals Janet Evans, Allyson Felix, and Greg Louganis. After the teams' arrival in Southern California, the teams participated in the traditional Lawry's Beef Bowl in Beverly Hills and the Disney Media Day at the Disneyland Resort in nearby Anaheim. The Rose Bowl Hall of Fame ceremony luncheon was held prior to the game at the Rose Bowl, where outstanding former players and participants were inducted into the hall. This year's honorees were Bobby Bell, from the University of Minnesota; Ricky Ervins, University of Southern California; Tommy Prothro, Oregon State University and UCLA; and Art Spander, award-winning sportswriter. The bands and cheerleaders from both schools participated in the pre-game Rose Parade on Colorado Boulevard in Pasadena along with the floats. Teams The teams playing in the Rose Bowl game were the highest ranking teams from the Pac-12 Conference and Big Ten Conference that were not selected to play in a College Football Playoff semifinal game. The teams were officially selected by the football committee of the Pasadena Tournament of Roses Association on Selection Sunday on December 4, 2016, based on the final rankings by the CFP committee. #9 USC Trojans The Trojans started the year with a dismal 1–3 record, but after a Week 4 loss at No. 24 Utah the Trojans reeled off an eight-game winning streak, including an upset win over No. 4 Washington to break them into the Top 25 where they have remained since. USC was led by freshman quarterback Sam Darnold, 1,000-yard rusher Ronald Jones II, receivers JuJu Smith-Schuster, Darreus Rogers and Deontay Burnett, defensive end Porter Gustin, and all-purpose player Adoree Jackson. They were coached by Clay Helton, who led them to a 9–3 season (after going 5–4 in 9 games as interim head coach the season before). The team wore its white jerseys and used the east bench during the game. 
#5 Penn State Nittany Lions The Nittany Lions started the season off 2–2 after losses to Pitt and No. 4 Michigan but finished the regular season on a 9–game winning streak, including a pivotal 4th-quarter comeback victory over #2 Ohio State that led to Penn State receiving a Top-25 national ranking for the first time since 2011. Penn State added a Big Ten Championship with a comeback win over #6 Wisconsin to close out the season. The Nittany Lions were led by sophomore duo quarterback Trace McSorley and 1,000+ yard rusher Saquon Barkley and junior wide receiver Chris Godwin. They came into the game 11–2 (8–1 Big Ten), a big improvement from the last two seasons under head coach James Franklin (finished 7–6 both seasons). The team wore its dark jerseys and used the west bench on game day. Other The Nittany Lions and Trojans had previously played nine times, with USC leading the series 5–4, and had played twice in the Rose Bowl in (1923 and 2009) with USC winning both games. The 2017 match was the highest scoring game in the bowl's history, with a total of 101 points, breaking the record set five years earlier at the 2012 Rose Bowl game. This record was broken in the 2018 Rose Bowl game with 102 points scored by both Oklahoma and Georgia. Game summary Scoring summary Statistics Game notes Weather: ; Partly Cloudy, wind 5-10 mph E Rose Bowl records In this game, a number of new Rose Bowl records were set. USC quarterback Sam Darnold tied or set a number of individual records: Five touchdown passes tied a record for most scores, and set a new record for most passing touchdowns. 36 points became a new individual record (five touchdowns and a two-point conversion). 473 yards in total offense became a new individual record. Penn State quarterback Trace McSorely also tied Rose Bowl records: Four touchdown passes and a running touchdown also tied the record for most scores. In the third quarter, first Darnold, then McSorley tied the existing Rose Bowl record for touchdown passes. Three interceptions tied a record for most passes intercepted, shared with nine other Rose Bowl quarterbacks. USC receiver Deontay Burnett tied an individual record for most touchdown receptions with three. USC kicker Matt Boermeester tied a Rose Bowl record with three field goals made (including the game-winner as time expired). The Trojans and Nittany Lions combined for 101 points. The previous record of 83 was set by Oregon and Wisconsin in the Ducks’ 45–38 victory in the 2012 Rose Bowl. Penn State’s 49 points became a new record for a losing team. Penn State’s 28 points in the third quarter are the most scored by a team in a single quarter. USC set a Rose Bowl comeback record by overcoming a 14-point deficit in the fourth quarter. Related events Selection Sunday, December 4, 2016 Disneyland Resort Press Conference Lawry's Beef Bowl Hall of Fame ceremony, Rose Bowl, January 1, 2017, 12:00 noon Rose Bowl Game Public Tailgate, January 2, 2017 Post-game Ratings The 2017 Rose Bowl drew more than 16 million viewers, making it the most-watched non-semifinal New Year’s Six game ever. And one of the most viewed college football games in history. Viewership was up 17% from the prior year’s 2016 Rose Bowl game between Stanford and Iowa. The telecast peaked at 19,656,000 viewers in the final minutes of the fourth quarter, which included USC’s game-winning field goal. 
Locally, Philadelphia (16.7) set a market record for the game, while in Pittsburgh (17.0) and Los Angeles (14.9) it was the second highest-rated bowl game ever on ESPN. These figures do not include the roughly 300,000 viewers who streamed coverage on WatchESPN or other international streaming websites; had they been added to the final rating, total viewership would have approached 19 million. Countries in which the game was streamed or carried on national television broadcasts include Canada (1.3 million), France (200,000) and Australia (98,000). References 2016–17 NCAA football bowl games 2017 2017 Rose Bowl 2017 in sports in California January 2017 sports events in the United States 21st century in Pasadena, California
12576291
https://en.wikipedia.org/wiki/Kernel%20debugger
Kernel debugger
A kernel debugger is a debugger present in some operating system kernels to ease debugging and kernel development by kernel developers. A kernel debugger might be a stub implementing low-level operations, with a full-blown debugger such as GNU Debugger (gdb) running on another machine and sending commands to the stub over a serial line or a network connection, or it might provide a command line that can be used directly on the machine being debugged. Operating systems and operating system kernels that contain a kernel debugger: The Windows NT family includes a kernel debugger named KD, which can act as a local debugger with limited capabilities (reading and writing kernel memory, and setting breakpoints) and can attach to a remote machine over a serial line, IEEE 1394 connection, USB 2.0 or USB 3.0 connection. The WinDbg GUI debugger can also be used to debug kernels on local and remote machines. BeOS and Haiku include a kernel debugger usable with either an on-screen console or over a serial line. It features various commands to inspect memory, threads, and other kernel structures. DragonFly BSD Linux kernel; no kernel debugger was included in the mainline Linux tree prior to version 2.6.26-rc1 because Linus Torvalds did not want a kernel debugger in the kernel. KDB (local) KGDB (remote) MDB (local/remote) NetBSD (DDB for local, KGDB for remote) macOS, Darwin which runs the XNU kernel using the Mach component OpenBSD includes ddb, whose syntax is similar to that of the GNU Debugger. References Debuggers Operating system kernels
1333305
https://en.wikipedia.org/wiki/Apache%20Maven
Apache Maven
Maven is a build automation tool used primarily for Java projects. Maven can also be used to build and manage projects written in C#, Ruby, Scala, and other languages. The Maven project is hosted by the Apache Software Foundation, where it was formerly part of the Jakarta Project. Maven addresses two aspects of building software: how software is built, and its dependencies. Unlike earlier tools like Apache Ant, it uses conventions for the build procedure. Only exceptions need to be specified. An XML file describes the software project being built, its dependencies on other external modules and components, the build order, directories, and required plug-ins. It comes with pre-defined targets for performing certain well-defined tasks such as compilation of code and its packaging. Maven dynamically downloads Java libraries and Maven plug-ins from one or more repositories such as the Maven 2 Central Repository, and stores them in a local cache. This local cache of downloaded artifacts can also be updated with artifacts created by local projects. Public repositories can also be updated. Maven is built using a plugin-based architecture that allows it to make use of any application controllable through standard input. A C/C++ native plugin is maintained for Maven 2. Alternative technologies like Gradle and sbt as build tools do not rely on XML, but keep the key concepts Maven introduced. With Apache Ivy, a dedicated dependency manager was developed as well that also supports Maven repositories. Apache Maven has support for reproducible builds. History Maven, created by Jason van Zyl, began as a sub-project of Apache Turbine in 2002. In 2003, it was voted on and accepted as a top level Apache Software Foundation project. In July 2004, Maven's release was the critical first milestone, v1.0. Maven 2 was declared v2.0 in October 2005 after about six months in beta cycles. Maven 3.0 was released in October 2010 being mostly backwards compatible with Maven 2. Maven 3.0 information began trickling out in 2008. After eight alpha releases, the first beta version of Maven 3.0 was released in April 2010. Maven 3.0 has reworked the core Project Builder infrastructure resulting in the POM's file-based representation being decoupled from its in-memory object representation. This has expanded the possibility for Maven 3.0 add-ons to leverage non-XML based project definition files. Languages suggested include Ruby (already in private prototype by Jason van Zyl), YAML, and Groovy. Special attention was given to ensuring backward compatibility of Maven 3 to Maven 2. For most projects, upgrading to Maven 3 will not require any adjustments of their project structure. The first beta of Maven 3 saw the introduction of a parallel build feature which leverages a configurable number of cores on a multi-core machine and is especially suited for large multi-module projects. Syntax Maven projects are configured using a Project Object Model (POM), which is stored in a pom.xml-file. An example file looks like: <project> <!-- model version is always 4.0.0 for Maven 2.x POMs --> <modelVersion>4.0.0</modelVersion> <!-- project coordinates, i.e. 
a group of values which uniquely identify this project --> <groupId>com.mycompany.app</groupId> <artifactId>my-app</artifactId> <version>1.0</version> <!-- library dependencies --> <dependencies> <dependency> <!-- coordinates of the required library --> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.8.1</version> <!-- this dependency is only used for running and compiling tests --> <scope>test</scope> </dependency> </dependencies> </project> This POM only defines a unique identifier for the project (coordinates) and its dependency on the JUnit framework. However, that is already enough for building the project and running the unit tests associated with the project. Maven accomplishes this by embracing the idea of Convention over Configuration, that is, Maven provides default values for the project's configuration. The directory structure of a normal idiomatic Maven project has the following directory entries: The command mvn package will compile all the Java files, run any tests, and package the deliverable code and resources into target/my-app-1.0.jar (assuming the artifactId is my-app and the version is 1.0.) Using Maven, the user provides only configuration for the project, while the configurable plug-ins do the actual work of compiling the project, cleaning target directories, running unit tests, generating API documentation and so on. In general, users should not have to write plugins themselves. Contrast this with Ant and make, in which one writes imperative procedures for doing the aforementioned tasks. Design Project Object Model A Project Object Model (POM) provides all the configuration for a single project. General configuration covers the project's name, its owner and its dependencies on other projects. One can also configure individual phases of the build process, which are implemented as plugins. For example, one can configure the compiler-plugin to use Java version 1.5 for compilation, or specify packaging the project even if some unit tests fail. Larger projects should be divided into several modules, or sub-projects, each with its own POM. One can then write a root POM through which one can compile all the modules with a single command. POMs can also inherit configuration from other POMs. All POMs inherit from the Super POM by default. The Super POM provides default configuration, such as default source directories, default plugins, and so on. Plug-ins Most of Maven's functionality is in plug-ins. A plugin provides a set of goals that can be executed using the command mvn [plugin-name]:[goal-name]. For example, a Java project can be compiled with the compiler-plugin's compile-goal by running mvn compiler:compile. There are Maven plugins for building, testing, source control management, running a web server, generating Eclipse project files, and much more. Plugins are introduced and configured in a <plugins>-section of a pom.xml file. Some basic plugins are included in every project by default, and they have sensible default settings. However, it would be cumbersome if the archetypal build sequence of building, testing and packaging a software project required running each respective goal manually: mvn compiler:compile mvn surefire:test mvn jar:jar Maven's lifecycle concept handles this issue. Plugins are the primary way to extend Maven. Developing a Maven plugin can be done by extending the org.apache.maven.plugin.AbstractMojo class. 
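A minimal, hedged sketch of what such a plugin class might look like is shown below; the class name, goal name and log message are illustrative, and a real plugin would also declare the maven-plugin packaging and the plugin annotations dependency in its own POM.

import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;
import org.apache.maven.plugins.annotations.Mojo;

// Exposes a goal that, once the plugin is installed, is invoked in the
// mvn [plugin-name]:[goal-name] style described above (illustrative name).
@Mojo(name = "greet")
public class GreetMojo extends AbstractMojo {
    @Override
    public void execute() throws MojoExecutionException {
        // AbstractMojo supplies getLog() for writing to the build output
        getLog().info("Hello from a custom Maven plugin");
    }
}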
Example code and explanation for a Maven plugin to create a cloud-based virtual machine running an application server is given in the article Automate development and management of cloud virtual machines. Build lifecycles The build lifecycle is a list of named phases that can be used to give order to goal execution. One of Maven's standard lifecycles is the default lifecycle, which includes the following phases, in this order: validate generate-sources process-sources generate-resources process-resources compile process-test-sources process-test-resources test-compile test package install deploy Goals provided by plugins can be associated with different phases of the lifecycle. For example, by default, the goal "compiler:compile" is associated with the "compile" phase, while the goal "surefire:test" is associated with the "test" phase. When the mvn test command is executed, Maven runs all goals associated with each of the phases up to and including the "test" phase. In such a case, Maven runs the "resources:resources" goal associated with the "process-resources" phase, then "compiler:compile", and so on until it finally runs the "surefire:test" goal. Maven also has standard phases for cleaning the project and for generating a project site. If cleaning were part of the default lifecycle, the project would be cleaned every time it was built. This is clearly undesirable, so cleaning has been given its own lifecycle. Standard lifecycles enable users new to a project the ability to accurately build, test and install every Maven project by issuing the single command mvn install. By default, Maven packages the POM file in generated JAR and WAR files. Tools like diet4j can use this information to recursively resolve and run Maven modules at run-time without requiring an "uber"-jar that contains all project code. Dependencies A central feature in Maven is dependency management. Maven's dependency-handling mechanism is organized around a coordinate system identifying individual artifacts such as software libraries or modules. The POM example above references the JUnit coordinates as a direct dependency of the project. A project that needs, say, the Hibernate library simply has to declare Hibernate's project coordinates in its POM. Maven will automatically download the dependency and the dependencies that Hibernate itself needs (called transitive dependencies) and store them in the user's local repository. Maven 2 Central Repository is used by default to search for libraries, but one can configure the repositories to be used (e.g., company-private repositories) within the POM. The fundamental difference between Maven and Ant is that Maven's design regards all projects as having a certain structure and a set of supported task work-flows (e.g., getting resources from source control, compiling the project, unit testing, etc.). While most software projects in effect support these operations and actually do have a well-defined structure, Maven requires that this structure and the operation implementation details be defined in the POM file. Thus, Maven relies on a convention on how to define projects and on the list of work-flows that are generally supported in all projects. There are search engines such as The Central Repository Search Engine, which can be used to find out coordinates for different open-source libraries and frameworks. Projects developed on a single machine can depend on each other through the local repository. 
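Returning to the Hibernate example above, a hedged sketch of the corresponding declaration in the consuming project's POM might look like the following; the coordinates and version shown are illustrative.

<dependencies>
  <dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>5.6.15.Final</version>
  </dependency>
</dependencies>

Maven would then fetch hibernate-core, together with its transitive dependencies, from the configured repositories into the local repository.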
The local repository is a simple folder structure that acts both as a cache for downloaded dependencies and as a centralized storage place for locally built artifacts. The Maven command mvn install builds a project and places its binaries in the local repository. Then, other projects can utilize this project by specifying its coordinates in their POMs. Interoperability Add-ons to several popular integrated development environments (IDE) targeting the Java programming language exist to provide integration of Maven with the IDE's build mechanism and source editing tools, allowing Maven to compile projects from within the IDE, and also to set the classpath for code completion, highlighting compiler errors, etc. Examples of popular IDEs supporting development with Maven include: Eclipse NetBeans IntelliJ IDEA JBuilder JDeveloper (version 11.1.2) MyEclipse Visual Studio Code These add-ons also provide the ability to edit the POM or use the POM to determine a project's complete set of dependencies directly within the IDE. Some built-in features of IDEs are forfeited when the IDE no longer performs compilation. For example, Eclipse's JDT has the ability to recompile a single Java source file after it has been edited. Many IDEs work with a flat set of projects instead of the hierarchy of folders preferred by Maven. This complicates the use of SCM systems in IDEs when using Maven. See also Apache Continuum Apache Jelly Hudson Jenkins List of build automation software References Further reading External links Maven in 5 minutes Compiling tools Java development tools Maven Maven Software using the Apache license
6145986
https://en.wikipedia.org/wiki/Hebrew%20keyboard
Hebrew keyboard
A Hebrew keyboard (Hebrew: mikledet ivrit) comes in two different keyboard layouts. Most Hebrew keyboards are bilingual, with Latin characters, usually in a US Qwerty layout. Trilingual keyboard options also exist, with the third script being Arabic or Russian, due to the sizable Arabic- and Russian-speaking populations in Israel. Layouts Standard Hebrew keyboard Standard Hebrew keyboards have a 101/104-key layout. Like the standard English keyboard layout, QWERTY, the Hebrew layout was derived from the order of letters on Hebrew typewriters. The layout is codified in SI-1452 by SII. The latest revision, from 2013, mostly modified the location of the diacritic points. One noteworthy feature is that in the standard layout, paired delimiters, namely parentheses (), brackets [], braces {}, and angle brackets (less/greater than) <>, have the opposite logical representation from the standard in left-to-right languages. This gets flipped again by the rendering engine's BiDi mirroring algorithm, resulting in the same visual representation as in Latin keyboards. Key mappings follow the logical rather than the physical representation. For instance, whether on a right-to-left or left-to-right keyboard, Shift-9 always produces a logical "open parenthesis". On a right-to-left keyboard, this is written as the Unicode character U+0029, "right parenthesis": ). This is true on Arabic keyboards as well. On a left-to-right keyboard, this is written as the Unicode character U+0028, "left parenthesis": (. In a 102/105-key layout of this form, there would be an additional key to the right of the left shift key. This would be an additional backslash key. Keyboards with 102 keys are not sold as standard, except by certain manufacturers who mistakenly group Israel into Europe, where 102-key keyboards are the norm (most notable of the latter group are Logitech and Apple). On computers running Windows, Alt-Shift switches between keyboard layouts. Holding down a Shift key (or pressing Caps Lock) in Windows produces the uppercase Latin letter without the need to switch layouts. Hebrew on standard Latin-based keyboards There are a variety of layouts that, for the most part, follow the phonology of the letters on a Latin-character keyboard such as the QWERTY or AZERTY. Where no phonology mapping is possible, or where multiple Hebrew letters map to a single Latin letter, a similarity in shape or other characteristic may be chosen. For instance, if ס (samech) is assigned to the S key, ש (shin/sin) may be assigned to the W key, which it arguably resembles. The shift key is often used to access the five Hebrew letters that have final forms (sofit) used at the end of words. These layouts are commonly known as "Hebrew-QWERTY" or "French AZERTY-Hebrew" layouts. While Hebrew layouts for Latin-based keyboards are not well standardized, macOS comes with a Hebrew-QWERTY variant, and software layouts for Microsoft Windows can be found on the Internet. Tools such as the Microsoft Keyboard Layout Creator can also be used to produce custom layouts. While uncommon, manufacturers are beginning to produce Hebrew-QWERTY stickers and printed keyboards, useful for those who do not wish to memorize the positions of the Hebrew characters. Niqqud History SI-1452 in its pre-2013 version made an error in the definition. Originally, it tried to assign Niqqud to the upper row of the keyboard.
Due to an ambiguity in the standard's language, however, anyone reasonably reading the standard would conclude that pressing shift+the upper row keys would produce both Niqqud and the standard signs available in the US keyboard. Faced with this ambiguity, most manufacturers developed a de facto standard where pressing Shift+upper row key produces the same result as with the US mapping (except the reversal of the open and close brackets). Niqqud was delegated to a more complicated process. Typically, that would be pressing the caps-lock, and then using shift+the keys. This combination was obscure enough, in combination with the relatively rare use of Niqqud in modern Hebrew, that most people did not even know of its existence. Even those who did know would rarely memorize the quite arbitrary locations of the specific marks. Most people who needed it would use virtual graphical keyboards available on the World Wide Web, or methods integrated into particular operating systems. The 2013 revision of SI-1452 sought to rectify both of those problems. For compatibility reasons, it was decided not to touch the first two shifting layers of the layout (i.e. with no shift key pressed and with the Shift key pressed). Niqqud and other marks were added mostly to layer 3, with AltGr pressed. Notes: [1] The letter "ס" represents any Hebrew consonant. [2] For sin-dot and shin-dot, the letter "ש" (sin/shin) is used. [3] The dagesh, mappiq, and shuruk have different uses, but the same graphical representation, and hence are input in the same manner. [4] For shuruk, the letter "ו" (vav) is used since it can only be used with that letter. A rafe can be input by inserting the corresponding Unicode character, either explicitly or via a customized keyboard layout. SIL International has developed another standard, which is based on Tiro, but adds the Niqqud along the home keys. Linux comes with "Israel - Biblical Hebrew (Tiro)" as a standard layout. With this layout, niqqud can be typed without pressing the Caps Lock key. Current Layout The new layout (SI-1452, 2013 revision) was influenced by the Linux Lyx layout, which uses the first letter of the Niqqud mark's name as the position for the mark. Letters where collisions happened were decided based on frequency of use, and were located in places that should still be memorable. For example, the Holam mark conflicted with Hirik, so it was placed on the Vav letter, where Holam is usually placed in Hebrew. Likewise, the Qubutz mark, which looks like three diagonal points, conflicted with the much more useful Qamatz mark, so it was placed on the backslash key, which bears a visual resemblance to it. The new revision also introduced some symbols that were deemed useful. For example, it introduced the LRM and RLM invisible control characters (placed on the right and left brackets) to allow better formatting of complex BiDi text. Windows has supported SI-1452 since Windows 8, which was actually shipped prior to the standard's acceptance. This is due to Microsoft's membership of the SI committee. Their implementation was based on one of the final drafts, but that draft ended up almost identical to the final standard. Linux switched to using SI-1452 once it was released, and in the process deprecated the Lyx layout, which no longer offered any added value.
Paragraph Directionality Since Hebrew is read and written right-to-left, as opposed to the left-to-right system in English, the cursor keys and delete keys work backwards when Hebrew text is entered in left-to-right directionality mode. Because of the differences between left-to-right and right-to-left, some difficulties arise in punctuation marks that are common to the two languages, such as periods and commas. When using standard left-to-right input, pressing the "period" key at the end of a sentence displays the mark on the wrong side of the sentence. However, when the next sentence is started, the period moves to the correct location. This is due to the operating system defaulting to its standard text directionality when a typed character (such as a punctuation mark) does not have a specified directionality. There are several ways to force right-to-left directionality. When typing, a Unicode right-to-left mark can be inserted where necessary (such as after a punctuation mark); a short illustrative sketch of this appears at the end of this section. In Notepad, or any Windows standard text box, it can be done via the context menu item Insert Unicode control character. With the Windows Hebrew keyboard, RLM can be generated by pressing . In Microsoft Word, the Format -> Paragraph menu can be used to change the paragraph's default direction to right-to-left. A similar setting is available in the Gmail composer. There are also ways to choose the way the text is displayed, without changing the text itself. In Internet Explorer, right-to-left display can be forced by right-clicking a webpage and selecting Encoding -> Right-To-Left Document. In Notepad, or any Windows standard text box, directionality can be changed by right-clicking and selecting Right to left Reading order. The same effect can be achieved by pressing . You can switch back to Left to right Reading order by unselecting the check box or pressing . Note that this only affects the presentation of the text. The next time you open the same text in Notepad, you will need to perform the same direction switch again. Access through the Ctrl key Direction marks As described above, the Hebrew keyboard setting in Microsoft Windows has a shortcut to insert the Unicode right-to-left mark. The same effect can be achieved with . The shortcuts for the left-to-right mark are and . Separators The shortcut for the 'Unit Separator' control code (caret notation ) is . The shortcut for the 'Record Separator' control code (caret notation ) is . Note that in Notepad, or any Windows standard text box, these characters can be easily inserted via the context menu Insert Unicode control character. For Linux, Ubuntu, Debian and ChromeOS, the sequence is followed by the control code value, then or . Access through the AltGr key Sheqel symbol The symbol "₪", which represents the sheqel sign, can be typed into Windows, Linux and Chrome OS with the Hebrew keyboard layout set, using . On Mac OS X, it can be typed as . If a US or EU layout is in use, the sequence is + for some Windows applications and on Unix heritage systems. Euro symbol For a Euro sign, one would press the (ק). Rafe The rafe is a niqqud that is essentially no longer used in Hebrew. However, it is used in Yiddish spelling (according to YIVO standards). It is accessed differently from other nequddot. On macOS, the rafe is input by pressing the desired letter (ב or פ) and then the backslash \: בֿ, פֿ. On Windows, the rafe is input by pressing the AltGr key and the "-" key: Note: The letter "O" represents whatever Hebrew letter is used.
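Tying together the invisible direction marks and the explicit insertion of Unicode characters described above, the following is a minimal, hedged Java sketch; the sample strings, class name and variable names are purely illustrative.

public class DirectionMarks {
    public static void main(String[] args) {
        // U+200F RIGHT-TO-LEFT MARK, as discussed above
        final char RLM = '\u200F';
        // A Hebrew word followed by a period; appending an RLM after the period
        // keeps it rendered at the end of the right-to-left sentence in LTR text.
        String sentence = "\u05E9\u05DC\u05D5\u05DD." + RLM;
        // Explicit insertion of a combining mark: bet (U+05D1) followed by the
        // rafe point (U+05BF), for cases with no convenient keyboard shortcut.
        String betWithRafe = "\u05D1\u05BF";
        System.out.println(sentence + " " + betWithRafe);
    }
}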
Yiddish digraphs These Yiddish digraphs are not used in Hebrew; if one wanted two vavs, a vav-yud, or two yuds in Hebrew, one would enter the desired keys independently. Inaccessible punctuation Certain Hebrew punctuation, such as the geresh, gershayim, maqaf, pesiq, sof pasuq, and cantillation marks, are not accessible through the standard Hebrew keyboard layout. As a result, similar looking punctuation is often used instead. For example, a quotation mark is often used for a gershayim, an apostrophe for a geresh, a hyphen for a maqaf, a comma for a pesiq, and a colon for a sof pasuq, though this depends on the platform. On iOS devices, the geresh and gershayim are actually part of the system keyboard. See also Hebrew punctuation Keyboard layout Hebrew alphabet References External links Hebrew language Israeli culture Keyboard layouts
12884
https://en.wikipedia.org/wiki/GCHQ
GCHQ
Government Communications Headquarters, commonly known as GCHQ, is an intelligence and security organisation responsible for providing signals intelligence (SIGINT) and information assurance (IA) to the government and armed forces of the United Kingdom. Based at "The Doughnut" in the suburbs of Cheltenham, GCHQ is the responsibility of the country's Secretary of State for Foreign and Commonwealth Affairs (Foreign Secretary), but it is not a part of the Foreign Office and its Director ranks as a Permanent Secretary. GCHQ was originally established after the First World War as the Government Code and Cypher School (GC&CS) and was known under that name until 1946. During the Second World War it was located at Bletchley Park, where it was responsible for breaking the German Enigma codes. There are two main components of the GCHQ, the Composite Signals Organisation (CSO), which is responsible for gathering information, and the National Cyber Security Centre (NCSC), which is responsible for securing the UK's own communications. The Joint Technical Language Service (JTLS) is a small department and cross-government resource responsible for mainly technical language support and translation and interpreting services across government departments. It is co-located with GCHQ for administrative purposes. In 2013, GCHQ received considerable media attention when the former National Security Agency contractor Edward Snowden revealed that the agency was in the process of collecting all online and telephone data in the UK via the Tempora programme. Snowden's revelations began a spate of ongoing disclosures of global surveillance. The Guardian newspaper was then forced to destroy all files Snowden had given them because of the threats of a lawsuit under the Official Secrets Act. Structure GCHQ is led by the Director of GCHQ, Jeremy Fleming, and a Corporate Board, made up of executive and non-executive directors. Reporting to the Corporate Board are: Sigint missions: comprising maths and cryptanalysis, IT and computer systems, linguistics and translation, and the intelligence analysis unit Enterprise: comprising applied research and emerging technologies, corporate knowledge and information systems, commercial supplier relationships, and biometrics Corporate management: enterprise resource planning, human resources, internal audit, and architecture National Cyber Security Centre (NCSC). History Government Code and Cypher School (GC&CS) During the First World War, the British Army and Royal Navy had separate signals intelligence agencies, MI1b and NID25 (initially known as Room 40) respectively. In 1919, the Cabinet's Secret Service Committee, chaired by Lord Curzon, recommended that a peacetime codebreaking agency should be created, a task which was given to the Director of Naval Intelligence, Hugh Sinclair. Sinclair merged staff from NID25 and MI1b into the new organisation, which initially consisted of around 25–30 officers and a similar number of clerical staff. It was titled the "Government Code and Cypher School" (GC&CS), a cover-name which was chosen by Victor Forbes of the Foreign Office. Alastair Denniston, who had been a member of NID25, was appointed as its operational head. It was initially under the control of the Admiralty and located in Watergate House, Adelphi, London. 
Its public function was "to advise as to the security of codes and cyphers used by all Government departments and to assist in their provision", but also had a secret directive to "study the methods of cypher communications used by foreign powers". GC&CS officially formed on 1 November 1919, and produced its first decrypt prior to that date, on 19 October. Before the Second World War, GC&CS was a relatively small department. By 1922, the main focus of GC&CS was on diplomatic traffic, with "no service traffic ever worth circulating" and so, at the initiative of Lord Curzon, it was transferred from the Admiralty to the Foreign Office. GC&CS came under the supervision of Hugh Sinclair, who by 1923 was both the Chief of SIS and Director of GC&CS. In 1925, both organisations were co-located on different floors of Broadway Buildings, opposite St. James's Park. Messages decrypted by GC&CS were distributed in blue-jacketed files that became known as "BJs". In the 1920s, GC&CS was successfully reading Soviet Union diplomatic cyphers. However, in May 1927, during a row over clandestine Soviet support for the General Strike and the distribution of subversive propaganda, Prime Minister Stanley Baldwin made details from the decrypts public. During the Second World War, GC&CS was based largely at Bletchley Park, in present-day Milton Keynes, working on understanding the German Enigma machine and Lorenz ciphers. In 1940, GC&CS was working on the diplomatic codes and ciphers of 26 countries, tackling over 150 diplomatic cryptosystems. Senior staff included Alastair Denniston, Oliver Strachey, Dilly Knox, John Tiltman, Edward Travis, Ernst Fetterlein, Josh Cooper, Donald Michie, Alan Turing, Gordon Welchman, Joan Clarke, Max Newman, William Tutte, I. J. (Jack) Good, Peter Calvocoressi and Hugh Foss. An outstation in the Far East, the Far East Combined Bureau was set up in Hong Kong in 1935 and moved to Singapore in 1939. Subsequently, with the Japanese advance down the Malay Peninsula, the Army and RAF codebreakers went to the Wireless Experimental Centre in Delhi, India. The Navy codebreakers in FECB went to Colombo, Ceylon, then to Kilindini, near Mombasa, Kenya. Post Second World War GC&CS was renamed the Government Communications Headquarters (GCHQ) in June 1946. The organisation was at first based in Eastcote in northwest London, then in 1951 moved to the outskirts of Cheltenham, setting up two sites at Oakley and Benhall. One of the major reasons for selecting Cheltenham was that the town had been the location of the headquarters of the United States Army Services of Supply for the European Theater during the War, which built up a telecommunications infrastructure in the region to carry out its logistics tasks. Following the Second World War, US and British intelligence have shared information as part of the UKUSA Agreement. The principal aspect of this is that GCHQ and its US equivalent, the National Security Agency (NSA), share technologies, infrastructure and information. GCHQ ran many signals intelligence (SIGINT) monitoring stations abroad. During the early Cold War, the remnants of the British Empire provided a global network of ground stations which were a major contribution to the UKUSA Agreement; the US regarded RAF Little Sai Wan in Hong Kong as the most valuable of these. The monitoring stations were largely run by inexpensive National Service recruits, but when this ended in the early 1960s, the increased cost of civilian employees caused budgetary problems. 
In 1965 a Foreign Office review found that 11,500 staff were involved in SIGINT collection (8,000 GCHQ staff and 3,500 military personnel), exceeding the size of the Diplomatic Service. Reaction to the Suez War led to the eviction of GCHQ from several of its best foreign SIGINT collection sites, including the new Perkar, Ceylon site and RAF Habbaniya, Iraq. The staff largely moved to tented encampments on military bases in Cyprus, which later became the Sovereign Base Area. Duncan Campbell and Mark Hosenball revealed the existence of GCHQ in 1976 in an article for Time Out; as a result, Hosenball was deported from the UK. GCHQ had a very low profile in the media until 1983 when the trial of Geoffrey Prime, a KGB mole within it, created considerable media interest. Trade union disputes In 1984, GCHQ was the centre of a political row when, in the wake of strikes which affected Sigint collection, the Conservative government of Margaret Thatcher prohibited its employees from belonging to a trade union. Following the breakdown of talks and the failure to negotiate a no-strike agreement, it was believed that membership of a union would be in conflict with national security. A number of mass national one-day strikes were held to protest this decision, claimed by some as the first step to wider bans on trade unions. Appeals to British Courts and European Commission of Human Rights were unsuccessful. The government offered a sum of money to each employee who agreed to give up their union membership. Appeal to the ILO resulted in a decision that government's actions were in violation of Freedom of Association and Protection of the Right to Organise Convention. A no-strike agreement was eventually negotiated and the ban lifted by the incoming Labour government in 1997, with the Government Communications Group of the Public and Commercial Services Union (PCS) being formed to represent interested employees at all grades. In 2000, a group of 14 former GCHQ employees, who had been dismissed after refusing to give up their union membership, were offered re-employment, which three of them accepted. Post Cold War 1990s: Post-Cold War restructuring The Intelligence Services Act 1994 formalised the activities of the intelligence agencies for the first time, defining their purpose, and the British Parliament's Intelligence and Security Committee was given a remit to examine the expenditure, administration and policy of the three intelligence agencies. The objectives of GCHQ were defined as working as "in the interests of national security, with particular reference to the defence and foreign policies of Her Majesty's government; in the interests of the economic wellbeing of the United Kingdom; and in support of the prevention and the detection of serious crime". During the introduction of the Intelligence Agency Act in late 1993, the former Prime Minister Jim Callaghan had described GCHQ as a "full-blown bureaucracy", adding that future bodies created to provide oversight of the intelligence agencies should "investigate whether all the functions that GCHQ carries out today are still necessary." In late 1993 civil servant Michael Quinlan advised a deep review of the work of GCHQ following the conclusion of his "Review of Intelligence Requirements and Resources", which had imposed a 3% cut on the agency. The Chief Secretary to the Treasury, Jonathan Aitken, subsequently held face to face discussions with the intelligence agency directors to assess further savings in the wake of Quinlan's review. 
Aldrich (2010) suggests that Sir John Adye, the then Director of GCHQ, performed badly in meetings with Aitken, leading Aitken to conclude that GCHQ was "suffering from out-of-date methods of management and out-of-date methods for assessing priorities". GCHQ's budget was £850 million in 1993, compared to £125 million for the Security Service and SIS (MI5 and MI6). In December 1994 the businessman Roger Hurn was commissioned to begin a review of GCHQ, which was concluded in March 1995. Hurn's report recommended a cut of £100 million in GCHQ's budget; such a large reduction had not been suffered by any British intelligence agency since the end of World War II. The J Division of GCHQ, which had collected SIGINT on Russia, disappeared as a result of the cuts. The cuts had been mostly reversed by 2000 in the wake of threats from violent non-state actors, and risks from increased terrorism, organised crime and illegal access to nuclear, chemical and biological weapons. David Omand became the Director of GCHQ in 1996, and greatly restructured the agency in the face of new and changing targets and rapid technological change. Omand introduced the concept of "Sinews" (or "SIGINT New Systems") which allowed more flexible working methods, avoiding overlaps in work by creating fourteen domains, each with a well-defined working scope. The tenure of Omand also saw the construction of a modern new headquarters, intended to consolidate the two old sites at Oakley and Benhall into a single, more open-plan work environment. Located on a 176-acre site in Benhall, it would be the largest building constructed for secret intelligence operations outside the United States. Operations at GCHQ's Chung Hom Kok listening station in Hong Kong ended in 1994. GCHQ's Hong Kong operations were extremely important to its relationship with the NSA, which contributed investment and equipment to the station. In anticipation of the transfer of Hong Kong to the Chinese government in 1997, the Hong Kong station's operations were moved to the Australian Defence Satellite Communications Station in Geraldton in Western Australia. Operations that used GCHQ's intelligence-gathering capabilities in the 1990s included the monitoring of communications of Iraqi soldiers in the Gulf War, of dissident republican terrorists and the Real IRA, of the various factions involved in the Yugoslav Wars, and of the criminal Kenneth Noye. In the mid-1990s GCHQ began to assist in the investigation of cybercrime. 2000s: Coping with the Internet At the end of 2003, GCHQ moved into its new building. Built on a circular plan around a large central courtyard, it quickly became known as the Doughnut. At the time, it was one of the largest public-sector building projects in Europe, with an estimated cost of £337 million. The new building, which was designed by Gensler and constructed by Carillion, became the base for all of GCHQ's Cheltenham operations. The public spotlight fell on GCHQ in late 2003 and early 2004 following the sacking of Katharine Gun after she leaked to The Observer a confidential email from agents at the United States' National Security Agency addressed to GCHQ agents about the wiretapping of UN delegates in the run-up to the 2003 Iraq war. GCHQ gains its intelligence by monitoring a wide variety of communications and other electronic signals. For this, a number of stations have been established in the UK and overseas.
The listening stations are at Cheltenham itself, Bude, Scarborough, Ascension Island, and with the United States at Menwith Hill. Ayios Nikolaos Station in Cyprus is run by the British Army for GCHQ. In March 2010, GCHQ was criticised by the Intelligence and Security Committee for problems with its IT security practices and failing to meet its targets for work targeted against cyber attacks. As revealed by Edward Snowden in The Guardian, GCHQ spied on foreign politicians visiting the 2009 G-20 London Summit by eavesdropping phonecalls and emails and monitoring their computers, and in some cases even ongoing after the summit via keyloggers that had been installed during the summit. According to Edward Snowden, at that time GCHQ had two principal umbrella programs for collecting communications: "Mastering the Internet" (MTI) for Internet traffic, which is extracted from fibre-optic cables and can be searched by using the Tempora computer system. "Global Telecoms Exploitation" (GTE) for telephone traffic. GCHQ has also had access to the US internet monitoring programme PRISM from at least as far back as June 2010. PRISM is said to give the National Security Agency and FBI easy access to the systems of nine of the world's top internet companies, including Google, Facebook, Microsoft, Apple, Yahoo, and Skype. From 2013, GCHQ realised that public attitudes to Sigint had changed and its former unquestioned secrecy was no longer appropriate or acceptable. The growing use of the Internet, together with its inherent insecurities, meant that the communications traffic of private citizens were becoming inextricably mixed with those of their targets and openness in the handling of this issue was becoming essential to their credibility as an organisation. The Internet had become a "cyber commons", with its dominance creating a "second age of Sigint". GCHQ transformed itself accordingly, including greatly expanded Public Relations and Legal departments, and adopting public education in cyber security as an important part of its remit. 2010s In February 2014, The Guardian, based on documents provided by Snowden, revealed that GCHQ had indiscriminately collected 1.8 million private Yahoo webcam images from users across the world. In the same month NBC and The Intercept, based on documents released by Snowden, revealed the Joint Threat Research Intelligence Group and the Computer Network Exploitation units within GCHQ. Their mission was cyber operations based on "dirty tricks" to shut down enemy communications, discredit, and plant misinformation on enemies. These operations were 5% of all GCHQ operations according to a conference slideshow presented by the GCHQ. Soon after becoming Director of GCHQ in 2014, Robert Hannigan wrote an article in the Financial Times on the topic of internet surveillance, stating that "however much [large US technology companies] may dislike it, they have become the command and control networks of choice for terrorists and criminals" and that GCHQ and its sister agencies "cannot tackle these challenges at scale without greater support from the private sector", arguing that most internet users "would be comfortable with a better and more sustainable relationship between the [intelligence] agencies and the tech companies". Since the 2013 global surveillance disclosures, large US technology companies have improved security and become less co-operative with foreign intelligence agencies, including those of the UK, generally requiring a US court order before disclosing data. 
However the head of the UK technology industry group techUK rejected these claims, stating that they understood the issues but that disclosure obligations "must be based upon a clear and transparent legal framework and effective oversight rather than, as suggested, a deal between the industry and government". In 2015, documents obtained by The Intercept from US National Security Agency whistleblower Edward Snowden revealed that GCHQ had carried out a mass-surveillance operation, codenamed KARMA POLICE, since about 2008. The operation swept up the IP address of Internet users visiting websites, and was established with no public scrutiny or oversight. KARMA POLICE is a powerful spying tool in conjunction with other GCHQ programs because IP addresses could be cross-referenced with other data. The goal of the program, according to the documents, was "either (a) a web browsing profile for every visible user on the internet, or (b) a user profile for every visible website on the internet." In 2015, GCHQ admitted for the first time in court that it conducts computer hacking. In 2017, US Press Secretary Sean Spicer alleged that GCHQ had conducted surveillance on US President Donald Trump, basing the allegation on statements made by a media commentator during a Fox News segment. The US government formally apologised for the allegations and promised they would not be repeated. However, surveillance of Russian agents did pick up contacts made by Trump's campaign team in the run-up to his election, which were passed on to US agencies. On 31 October 2018, GCHQ joined Instagram. Security mission As well as a mission to gather intelligence, GCHQ has for a long-time had a corresponding mission to assist in the protection of the British government's own communications. When the Government Code and Cypher School (GC&CS) was created in 1919, its overt task was providing security advice. GC&CS's Security section was located in Mansfield College, Oxford during the Second World War. In April 1946, GC&CS became GCHQ, and the now GCHQ Security section moved from Oxford to join the rest of the organisation at Eastcote later that year. LCSA From 1952 to 1954, the intelligence mission of GCHQ relocated to Cheltenham; the Security section remained at Eastcote, and in March 1954 became a separate, independent organisation: the London Communications Security Agency (LCSA), which in 1958 was renamed to the London Communications-Electronic Security Agency (LCESA). In April 1965, GPO and MOD units merged with LCESA to become the Communications-Electronic Security Department (CESD). CESG In October 1969, CESD was merged into GCHQ and becoming Communications-Electronic Security Group (CESG). In 1977 CESG relocated from Eastcote to Cheltenham. CESG continued as the UK National Technical Authority for information assurance, including cryptography. CESG did not manufacture security equipment, but worked with industry to ensure the availability of suitable products and services, while GCHQ itself funded research into such areas, for example to the Centre for Quantum Computation at Oxford University and the Heilbronn Institute for Mathematical Research at the University of Bristol. In the 21st century, CESG ran a number of assurance schemes such as CHECK, CLAS, Commercial Product Assurance (CPA) and CESG Assisted Products Service (CAPS). Public key encryption In late 1969 the concept for public-key encryption was developed and proven by James H. Ellis, who had worked for CESG (and before it, CESD) since 1965. 
Ellis lacked the number theory expertise necessary to build a workable system. Subsequently, a feasible implementation scheme via an asymmetric key algorithm was invented by another staff member Clifford Cocks, a mathematics graduate. This fact was kept secret until 1997. NCSC In 2016, the National Cyber Security Centre was established under GCHQ but located in London, as the UK's authority on cybersecurity. It absorbed and replaced CESG as well as activities that had previously existed outside GCHQ: the Centre for Cyber Assessment (CCA), Computer Emergency Response Team UK (CERT UK) and the cyber-related responsibilities of the Centre for the Protection of National Infrastructure (CPNI). Joint Technical Language Service The Joint Technical Language Service (JTLS) was established in 1955, drawing on members of the small Ministry of Defence technical language team and others, initially to provide standard English translations for organisational expressions in any foreign language, discover the correct English equivalents of technical terms in foreign languages and discover the correct expansions of abbreviations in any language. The remit of the JTLS has expanded in the ensuing years to cover technical language support and interpreting and translation services across the UK Government and to local public sector services in Gloucestershire and surrounding counties. The JTLS also produces and publishes foreign language working aids under crown copyright and conducts research into machine translation and on-line dictionaries and glossaries. The JTLS is co-located with GCHQ for administrative purposes. International relationships GCHQ operates in partnership with equivalent agencies worldwide in a number of bi-lateral and multi-lateral relationships. The principal of these is with the United States (National Security Agency), Canada (Communications Security Establishment), Australia (Australian Signals Directorate) and New Zealand (Government Communications Security Bureau), through the mechanism of the UK-US Security Agreement, a broad intelligence-sharing agreement encompassing a range of intelligence collection methods. Relationships are alleged to include shared collection methods, such as the system described in the popular media as ECHELON, as well as analysed product. Legal basis GCHQ's legal basis is enshrined in the Intelligence Services Act 1994 Section 3 as follows: Activities that involve interception of communications are permitted under the Regulation of Investigatory Powers Act 2000; this kind of interception can only be carried out after a warrant has been issued by a Secretary of State. The Human Rights Act 1998 requires the intelligence agencies, including GCHQ, to respect citizens' rights as described in the European Convention on Human Rights. Oversight The Prime Minister nominates cross-party Members of Parliament to an Intelligence and Security Committee. The remit of the Committee includes oversight of intelligence and security activities and reports are made directly to Parliament. Its functions were increased under the Justice and Security Act 2013 to provide for further access and investigatory powers. Judicial oversight of GCHQ's conduct is exercised by the Investigatory Powers Tribunal. The UK also has an independent Intelligence Services Commissioner and Interception of Communications Commissioner, both of whom are former senior judges. 
The Investigatory Powers Tribunal ruled in December 2014 that GCHQ does not breach the European Convention on Human Rights, and that its activities are compliant with Articles 8 (right to privacy) and 10 (freedom of expression) of the Convention. However, the Tribunal stated in February 2015 that one particular aspect, the data-sharing arrangement that allowed UK intelligence services to request data from the US surveillance programmes Prism and Upstream, had been in contravention of human rights law until two paragraphs of additional information, providing details about the procedures and safeguards, were disclosed to the public in December 2014.
Furthermore, the IPT ruled that the legislative framework in the United Kingdom does not permit mass surveillance and that, while GCHQ collects and analyses data in bulk, it does not practise mass surveillance. This complements independent reports by the Interception of Communications Commissioner and a special report made by the Intelligence and Security Committee of Parliament, although several shortcomings and potential improvements to both oversight and the legislative framework were highlighted.
Abuses
Despite the inherent secrecy around much of GCHQ's work, investigations carried out by the UK government after the Snowden disclosures have admitted various abuses by the security services. A report by the Intelligence and Security Committee (ISC) in 2015 revealed that a small number of staff at UK intelligence agencies had been found to misuse their surveillance powers, in one case leading to the dismissal of a member of staff at GCHQ, although there were no laws in place at the time to make these abuses a criminal offence.
Later that year, a ruling by the Investigatory Powers Tribunal found that GCHQ acted unlawfully in conducting surveillance on two human rights organisations. The closed hearing found the government in breach of its internal surveillance policies in accessing and retaining the communications of the Egyptian Initiative for Personal Rights and the Legal Resources Centre in South Africa. This was only the second time in the IPT's history that it had made a positive determination in favour of applicants after a closed session.
At another IPT case in 2015, GCHQ conceded that "from January 2010, the regime for the interception/obtaining, analysis, use, disclosure and destruction of legally privileged material has not been in accordance with the law for the purposes of Article 8(2) of the European convention on human rights and was accordingly unlawful". This admission was made in connection with a case brought against them by Abdelhakim Belhaj, a Libyan opponent of the former Gaddafi regime, and his wife Fatima Bouchard. The couple accused British ministers and officials of participating in their unlawful abduction, kidnapping and removal to Libya in March 2004, while Gaddafi was still in power.
On 25 May 2021, the European Court of Human Rights (ECHR) ruled that GCHQ had violated data privacy rules through its bulk interception of communications, and that it did not provide sufficient protections for confidential journalistic material because it gathered communications in bulk.
Surveillance of parliamentarians
In 2015 there was a complaint by Green Party MP Caroline Lucas that British intelligence services, including GCHQ, had been spying on MPs allegedly "in defiance of laws prohibiting it."
Then-Home Secretary Theresa May had told Parliament in 2014 that:
The Investigatory Powers Tribunal investigated the complaint, and ruled that, contrary to the allegation, there was no law that gave the communications of Parliament any special protection. The Wilson Doctrine merely acts as a political convention.
Constitutional legal case
A controversial GCHQ case determined the scope of judicial review of prerogative powers (the Crown's residual powers under common law). This was Council of Civil Service Unions v Minister for the Civil Service [1985] AC 374 (often known simply as the "GCHQ case"). In this case, a prerogative Order in Council had been used by the prime minister (who is the Minister for the Civil Service) to ban trade union activities by civil servants working at GCHQ. This order was issued without consultation. The House of Lords had to decide whether this was reviewable by judicial review. It was held that executive action is not immune from judicial review simply because it uses powers derived from common law rather than statute (thus the prerogative is reviewable).
Leadership
The following is a list of the heads and operational heads of GCHQ and GC&CS:
Sir Hugh Sinclair KCB (1919–1939) (Founder)
Alastair Denniston CMG CBE (1921 – February 1942) (Operational Head)
Sir Edward Travis KCMG CBE (February 1942 – 1952)
Sir Eric Jones KCMG CB CBE (April 1952 – 1960)
Sir Clive Loehnis KCMG (1960–1964)
Sir Leonard Hooper KCMG CBE (1965–1973)
Sir Arthur Bonsall KCMG CBE (1973–1978)
Sir Brian John Maynard Tovey KCMG (1978–1983)
Sir Peter Marychurch KCMG (1983–1989)
Sir John Anthony Adye KCMG (1989–1996)
Sir David Omand GCB (1996–1997)
Sir Kevin Tebbit KCB CMG (1998)
Sir Francis Richards KCMG CVO DL (1998–2003)
Sir David Pepper KCMG (2003–2008)
Sir Iain Lobban KCMG CB (2008–2014)
Robert Hannigan CMG (2014–2017)
Sir Jeremy Fleming KCMG CB (2017–present)
Stations and former stations
The following are stations and former stations that have operated since the Cold War.
Current
United Kingdom
GCHQ Bude, Cornwall
GCHQ Cheltenham, Gloucestershire (Headquarters)
GCHQ London
GCHQ Manchester
GCHQ Scarborough, North Yorkshire
RAF Digby, Lincolnshire
RAF Menwith Hill, North Yorkshire
Overseas
GCHQ Ascension Island
GCHQ Cyprus
Former
United Kingdom
GCHQ Brora, Sutherland
GCHQ Cheadle, Staffordshire
GCHQ Culmhead, Somerset
GCHQ Hawklaw, Fife
Overseas
GCHQ Hong Kong
GCHQ Certified Training
The GCHQ Certified Training (GCT) scheme was established to certify two main levels of cybersecurity training; there are also degree and master's-level courses. These are:
Awareness Level Training: giving an understanding of and a foundation in cybersecurity concepts; and
Application Level Training: a more in-depth course.
The GCT scheme was designed to help organisations find the right training that also met GCHQ's exacting standards, and to assure high-quality cybersecurity training courses where the training provider had also undergone rigorous quality checks. The GCT process is carried out by APMG as the independent certification body. The scheme is part of the National Cyber Security Programme established by the Government to develop knowledge, skills and capability in all aspects of cybersecurity in the UK, and is based on the IISP Skills Framework.
In popular culture
The historical drama film The Imitation Game (2014) featured Benedict Cumberbatch portraying Alan Turing's efforts to break the Enigma code while employed by the Government Code and Cypher School.
GCHQ have set a number of cryptic online challenges to the public, used to attract interest and for recruitment, starting in late 1999. The response to the 2004 challenge was described as "excellent", and the challenge set in 2015 had over 600,000 attempts. It also published the GCHQ puzzle book in 2016 which sold more than 300,000 copies, with the proceeds going to charity. A second book was published in October 2018. GCHQ appeared on the Doctor Who 2019 special "Resolution" where the Reconnaissance Scout Dalek storms the facility and exterminates the staff in order to use the organisation's resources to summon a Dalek fleet. GCHQ is the setting of the 2020 Sky One sitcom Intelligence, featuring David Schwimmer as an incompetent American NSA officer liaising with GCHQ's Cyber Crimes unit. See also GCHQ units: Joint Operations Cell National Cyber Security Centre GCHQ specifics: Capenhurst – said to be home to a GCHQ monitoring site in the 1990s Hugh Alexander – head of the cryptanalysis division at GCHQ from 1949 to 1971 Operation Socialist, a 2010–13 operation in Belgium Zircon, the 1980s cancelled GCHQ satellite project UK agencies: British intelligence agencies Joint Forces Intelligence Group RAF Intelligence UK cyber security community Elsewhere: Signals intelligence by alliances, nations and industries NSA – equivalent United States organisation Notes and references Bibliography External links Her Majesty's Government Communications Centre GovCertUK GCHQ: Britain's Most Secret Intelligence Agency BBC: A final look at GCHQ's top secret Oakley site in Cheltenham INCENSER, or how NSA and GCHQ are tapping internet cables 1919 establishments in the United Kingdom British intelligence agencies Computer security organizations Cryptography organizations Foreign relations of the United Kingdom Government agencies established in 1919 Organisations based in Cheltenham Signals intelligence agencies Foreign Office during World War II Organizations associated with Russian interference in the 2016 United States elections Headquarters in the United Kingdom
2844806
https://en.wikipedia.org/wiki/Gnash%20%28software%29
Gnash (software)
Gnash is a media player for playing SWF files. It is available both as a standalone player for desktop computers and embedded devices and as a plugin for several browsers. It is part of the GNU Project and is a free and open-source alternative to Adobe Flash Player. It was developed from the gameswf project.
Gnash was first announced in late 2005 by software developer John Gilmore; the project's maintainer is Rob Savoye. The main developer's web site for Gnash is located on the Free Software Foundation's GNU Savannah project support server.
Gnash supports most SWF v7 features and some SWF v8 and v9 features; SWF v10 is not supported.
History
Writing a free software Flash player has been a priority of the GNU Project for some time. Prior to the launch of Gnash, the GNU Project had asked for people to assist the GPLFlash project. The majority of the previous GPLFlash developers have since moved to the Gnash project, and the existing GPLFlash codebase will be refocused towards supporting embedded systems.
The primary distribution terms for Gnash are those of the GNU GPL. However, since Gnash was started using the codebase of the gameswf project, which is in the public domain, code developed by the Gnash project which might be useful in gameswf is placed in the public domain.
Technical details
Architecture
Adobe only provides an outdated version (11.2) of its official player for Linux on IA-32, and an AMD64 developer preview release, in binary-only form. Gnash, however, can be compiled and executed on many architectures, including x86, ARM, MIPS, and PowerPC. It also supports BSD-based operating systems. An early port for RISC OS, which has never had Macromedia/Adobe Flash support beyond Flash 3, does exist, as well as an early port for BeOS, where Flash support terminated at version 4. Development of a port to AmigaOS 4.1 has also recently begun. A port to the Haiku operating system also exists.
Gnash requires one of AGG, Cairo, or OpenGL for rendering. In contrast to most GNU projects, which are typically written in C, Gnash is written in the C++ programming language because of its gameswf heritage.
Flash compatibility
Gnash can play SWF files up to version 7, and supports about 80% of ActionScript 2.0. The goal of the Gnash developers is to be as compatible as possible with the proprietary player (including its behaviour on bad ActionScript code). However, Gnash offers some special features not available in the Adobe player, such as the ability to extend the ActionScript classes via shared libraries: sample extensions include MySQL support, file system access and more. For security reasons the extension mechanism must be compiled in explicitly and enabled via configuration files.
Video support
Gnash supports playback of FLV videos and allows playing some FLV files from YouTube, Myspace, ShowMeDo and other similar websites (older files play with sound; newer files play without sound). FLV support requires FFmpeg or GStreamer to be installed on the system. Some other free-software programs, such as MPlayer, VLC media player, or players for Windows based on the ffdshow DirectShow codecs, can play back the FLV format if the file is downloaded separately or piped to them.
Version 0.8.8 was released on 22 August 2010. Rob Savoye announced that Gnash should now work with all YouTube videos. Version 0.8.8 has GPU support, which pushed it ahead of the proprietary Adobe Flash Player on Linux until Flash 10.2 came out with hardware acceleration built in. Gnash still suffers from high CPU usage.
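For readers who want to try the playback paths described above, here is a minimal sketch in Python, assuming the standalone gnash player and MPlayer are installed and on the PATH and that each accepts a plain file path as its argument; exact command-line options vary between releases, so treat this as an illustration rather than a reference invocation.

import shutil
import subprocess
import sys

def play(path):
    # Illustrative helper: choose a player based on the file extension.
    # Assumes the "gnash" standalone player and "mplayer" binaries exist;
    # only a file path is passed, with no version-specific flags.
    if path.lower().endswith(".swf") and shutil.which("gnash"):
        cmd = ["gnash", path]      # standalone SWF playback
    elif path.lower().endswith(".flv") and shutil.which("mplayer"):
        cmd = ["mplayer", path]    # MPlayer can handle a separately downloaded FLV file
    else:
        raise RuntimeError("no suitable player found for " + path)
    return subprocess.call(cmd)

if __name__ == "__main__":
    sys.exit(play(sys.argv[1]))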
A Flashblock plugin can be installed by the user, turning on Flash support on a case-by-case, as-needed basis. YouTube video controls and full-screen mode are functional, although version 0.8.8 has a bug that can cause YouTube to display "Invalid parameters". Many popular Flash games do not work with Gnash 0.8.8.
Cygnal
Cygnal is the Gnash Project's Flash Media Server-compatible audio and video server. It handles negotiating the copyright metadata exchange, as well as streaming the content. It will need to handle many thousands of simultaneous network connections and support running on large Linux clusters. It should support handling multiple streams with differing content, as well as a multicast stream with a single data source.
Due to the patent issues surrounding MP3, and the fact that FLV and ON2 are closed formats, one of the main goals of the project is to support free codecs and free protocols as the primary way of doing things. There is optional support for MP3, FLV, and ON2 (VP6 and VP7) when playing existing Flash content. Both FLV and the VP6 and VP7 codecs are included in FFmpeg; users can use the FFmpeg plugin for GStreamer 0.10 to use these proprietary codecs.
Platform availability
Gnash has successfully run on Microsoft Windows, Darwin (OS X), IRIX, Solaris, BeOS, OS/2, and Haiku. Gnash has also run on the following 64-bit systems: PowerPC, Itanium, UltraSPARC and AMD64.
Microsoft Windows
Gnash has been ported to Windows; the plugin works best with Firefox 1.0.4 or newer and should work in any Mozilla-based browser. However, in newer browsers the plugin may become unstable or inoperative. Newer Gnash binaries for Windows do not include a plugin, and currently there is no newer working Gnash plugin on Windows.
Financial support
The project was financially supported by a commercial company, Lulu.com, until July 2010. As of March 2012, the lead developer reported that donations were barely enough to pay for hosting the project on the web.
Adobe Flash Player End-User License Agreement
One problem for the project is the difficulty of finding developers. The current developers have never installed Adobe's Flash player, because they fear that anyone who has ever installed the Adobe Flash Player has at the same time accepted an agreement not to modify or reverse-engineer it. Therefore, the Gnash project had only about six active developers as of November 2010. Such generic clauses, however, may be against national anticompetition laws when used in normal software license agreements.
On May 2, 2012, the Court of Justice of the European Union ruled in case C-406/10, SAS Institute Inc v World Programming Ltd, that the functionality of a computer program is not covered by copyright in the European Union and that contractual provisions are null and void if they forbid observing, studying and testing a computer program in order to reproduce its behavior in a second program. This holds as long as no source code or object code was copied.
See also Free Software Lightspark Shumway (software) Swfdec Ruffle (software) Notes References External links Primary Gnash website (Internet Archive copy) Project's official wiki (archived) Gnash at GNU Project Gnash's Savannah Page FSF/GNU Press Release: FSF announces GNU Gnash – Flash Movie Player An interview with Gnash project leader about the future of the product Gnash unofficial and unsupported Windows port Free software programmed in C++ Interpreters (computing) Free media players GNU Project software Adobe Flash High-priority free software projects 2005 software
245719
https://en.wikipedia.org/wiki/Creative%20Commons%20license
Creative Commons license
A Creative Commons (CC) license is one of several public copyright licenses that enable the free distribution of an otherwise copyrighted "work". A CC license is used when an author wants to give other people the right to share, use, and build upon a work that the author has created. CC provides an author flexibility (for example, they might choose to allow only non-commercial uses of a given work) and protects the people who use or redistribute an author's work from concerns of copyright infringement as long as they abide by the conditions that are specified in the license by which the author distributes the work. There are several types of Creative Commons licenses. Each license differs by several combinations that condition the terms of distribution. They were initially released on December 16, 2002, by Creative Commons, a U.S. non-profit corporation founded in 2001. There have also been five versions of the suite of licenses, numbered 1.0 through 4.0. Released in November 2013, the 4.0 license suite is the most current. While the Creative Commons license was originally grounded in the American legal system, there are now several Creative Commons jurisdiction ports which accommodate international laws. In October 2014, the Open Knowledge Foundation approved the Creative Commons CC BY, CC BY-SA and CC0 licenses as conformant with the "Open Definition" for content and data. History and international use Lawrence Lessig and Eric Eldred designed the Creative Commons License (CCL) in 2001 because they saw a need for a license between the existing modes of copyright and public domain status. Version 1.0 of the licenses was officially released on 16 December 2002. Origins The CCL allows inventors to keep the rights to their innovations while also allowing for some external use of the invention. The CCL emerged as a reaction to the decision in Eldred v. Ashcroft, in which the United States Supreme Court ruled constitutional provisions of the Copyright Term Extension Act that extended the copyright term of works to be the last living author's lifespan plus an additional 70 years. License porting The original non-localized Creative Commons licenses were written with the U.S. legal system in mind; therefore, the wording may be incompatible with local legislation in other jurisdictions, rendering the licenses unenforceable there. To address this issue, Creative Commons asked its affiliates to translate the various licenses to reflect local laws in a process called "porting." As of July 2011, Creative Commons licenses have been ported to over 50 jurisdictions worldwide. Chinese use of the Creative Commons license Working with Creative Commons, the Chinese government adapted the Creative Commons License to the Chinese context, replacing the individual monetary compensation of U.S. copyright law with incentives to Chinese innovators to innovate as a social contribution. In China, the resources of society are thought to enable an individual's innovations; the continued betterment of society serves as its own reward. Chinese law heavily prioritizes the eventual contributions that an invention will have towards society’s growth, resulting in initial laws placing limits on the length of patents and very stringent conditions regarding the use and qualifications of inventions. "Info-communism" An idea sometimes called "info-communism" found traction in the Western world after researchers at MIT grew frustrated over having aspects of their code withheld from the public. 
Modern copyright law roots itself in motivating innovation by rewarding innovators for socially valuable inventions. Western patent law assumes that (1) there is a right to use an invention for commerce and (2) it is up to the patentee's discretion to limit that right. The MIT researchers, led by Richard Stallman, argued for the more open proliferation of their software's use for two primary reasons: the moral obligation of altruism and collaboration, and the unfairness of restricting the freedoms of other users by depriving them of non-scarce resources. As a result, they developed the General Public License (GPL), a precursor to the Creative Commons License based on existing American copyright and patent law. The GPL allowed the economy around a piece of software to remain capitalist by allowing programmers to commercialize products that use the software, but it also ensured that no single person had complete and exclusive rights to the use of an innovation. Since then, info-communism has gained traction, with some scholars arguing in 2014 that Wikipedia itself is a manifestation of the info-communist movement.
Applicable works
Work licensed under a Creative Commons license is governed by applicable copyright law. This allows Creative Commons licenses to be applied to all work falling under copyright, including: books, plays, movies, music, articles, photographs, blogs, and websites.
Software
While software is also governed by copyright law and CC licenses are applicable, Creative Commons recommends against using them for software, specifically due to backward-compatibility limitations with existing commonly used software licenses. Instead, developers may turn to more software-friendly free and open-source software licenses. Outside the FOSS licensing use case, there are several examples of CC licenses being used to specify a "freeware" license model; examples are The White Chamber, Mari0 and Assault Cube. Despite the status of CC0 as the most free copyright license, the Free Software Foundation does not recommend releasing software into the public domain using CC0.
However, application of a Creative Commons license may not modify the rights allowed by fair use or fair dealing or exert restrictions which violate copyright exceptions. Furthermore, Creative Commons licenses are non-exclusive and non-revocable. Any work or copies of the work obtained under a Creative Commons license may continue to be used under that license. In the case of works protected by multiple Creative Commons licenses, the user may choose any of them.
Preconditions
The author, or the licensor if the author has made a contractual transfer of rights, needs to have the exclusive rights to the work. If the work has already been published under a public license, it can be uploaded once more by any third party on another platform, using a compatible license and making reference and attribution to the original license (e.g. by referring to the URL of the original license).
Consequences
The license is non-exclusive and royalty-free, and is unrestricted in terms of territory and duration, so it is irrevocable unless a new license is granted by the author after the work has been significantly modified. Any use of the work that is not covered by other copyright rules triggers the public license. Upon activation of the license, the licensee must adhere to all of its conditions; otherwise the license agreement is void and the licensee commits copyright infringement.
The author, or the licensor as a proxy, has the legal rights to act upon any copyright infringement. The licensee has a limited period to correct any non-compliance. Types of licenses Four rights The CC licenses all grant "baseline rights", such as the right to distribute the copyrighted work worldwide for non-commercial purposes and without modification. In addition, different versions of license prescribe different rights, as shown in this table: The last two clauses are not free content licenses, according to definitions such as DFSG or the Free Software Foundation's standards, and cannot be used in contexts that require these freedoms, such as Wikipedia. For software, Creative Commons includes three free licenses created by other institutions: the BSD License, the GNU LGPL, and the GNU GPL. Mixing and matching these conditions produces sixteen possible combinations, of which eleven are valid Creative Commons licenses and five are not. Of the five invalid combinations, four include both the "nd" and "sa" clauses, which are mutually exclusive; and one includes none of the clauses. Of the eleven valid combinations, the five that lack the "by" clause have been retired because 98% of licensors requested attribution, though they do remain available for reference on the website. This leaves six regularly used licenses plus the CC0 public domain declaration. Six regularly used licenses The six licenses in most frequent use are shown in the following table. Among them, those accepted by the Wikimedia Foundation – the public domain dedication and two attribution (BY and BY-SA) licenses – allow the sharing and remixing (creating derivative works), including for commercial use, so long as attribution is given. Zero / public domain Besides copyright licenses, Creative Commons also offers CC0, a tool for relinquishing copyright and releasing material into the public domain. CC0 is a legal tool for waiving as many rights as legally possible. Or, when not legally possible, CC0 acts as fallback as public domain equivalent license. Development of CC0 began in 2007 and it was released in 2009. A major target of the license was the scientific data community. In 2010, Creative Commons announced its Public Domain Mark, a tool for labeling works already in the public domain. Together, CC0 and the Public Domain Mark replace the Public Domain Dedication and Certification, which took a U.S.-centric approach and co-mingled distinct operations. In 2011, the Free Software Foundation added CC0 to its free software licenses. However, despite CC0 being the most free and open copyright license, the Free Software Foundation currently does not recommend using CC0 to release software into the public domain. In February 2012, CC0 was submitted to Open Source Initiative (OSI) for their approval. However, controversy arose over its clause which excluded from the scope of the license any relevant patents held by the copyright holder. This clause was added with scientific data in mind rather than software, but some members of the OSI believed it could weaken users' defenses against software patents. As a result, Creative Commons withdrew their submission, and the license is not currently approved by the OSI. From 2013 to 2017, the stock photography website Unsplash used the CC0 license, distributing several million free photos a month. Lawrence Lessig, the founder of Creative Commons, has contributed to the site. 
Unsplash moved from using the CC0 license to their own similar license in June 2017, but with a restriction added on using the photos to make a competing service which made it incompatible with the CC0 license. In October 2014, the Open Knowledge Foundation approved the Creative Commons CC0 as conformant with the Open Definition and recommend the license to dedicate content to the public domain. Retired licenses Due to either disuse or criticism, a number of previously offered Creative Commons licenses have since been retired, and are no longer recommended for new works. The retired licenses include all licenses lacking the Attribution element other than CC0, as well as the following four licenses: Developing Nations License: a license which only applies to developing countries deemed to be "non-high-income economies" by the World Bank. Full copyright restrictions apply to people in other countries. Sampling: parts of the work can be used for any purpose other than advertising, but the whole work cannot be copied or modified Sampling Plus: parts of the work can be copied and modified for any purpose other than advertising, and the entire work can be copied for noncommercial purposes NonCommercial Sampling Plus: the whole work or parts of the work can be copied and modified for non-commercial purposes Version 4.0 The latest version 4.0 of the Creative Commons licenses, released on November 25, 2013, are generic licenses that are applicable to most jurisdictions and do not usually require ports. No new ports have been implemented in version 4.0 of the license. Version 4.0 discourages using ported versions and instead acts as a single global license. Rights and obligations Attribution Since 2004, all current licenses other than the CC0 variant require attribution of the original author, as signified by the BY component (as in the preposition "by"). The attribution must be given to "the best of [one's] ability using the information available". Creative Commons suggests the mnemonic "TASL": title -- author -- source [web link] -- [CC] licence. Generally this implies the following: Include any copyright notices (if applicable). If the work itself contains any copyright notices placed there by the copyright holder, those notices must be left intact, or reproduced in a way that is reasonable to the medium in which the work is being re-published. Cite the author's name, screen name, or user ID, etc. If the work is being published on the Internet, it is nice to link that name to the person's profile page, if such a page exists. Cite the work's title or name (if applicable), if such a thing exists. If the work is being published on the Internet, it is nice to link the name or title directly to the original work. Cite the specific CC license the work is under. If the work is being published on the Internet, it is nice if the license citation links to the license on the CC website. Mention if the work is a derivative work or adaptation. In addition to the above, one needs to identify that their work is a derivative work, e.g., "This is a Finnish translation of [original work] by [author]." or "Screenplay based on [original work] by [author]." Non-commercial licenses The "non-commercial" option included in some Creative Commons licenses is controversial in definition, as it is sometimes unclear what can be considered a non-commercial setting, and application, since its restrictions differ from the principles of open content promoted by other permissive licenses. 
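Returning to the counting argument made under "Four rights" above, the following short, illustrative sketch (not part of any Creative Commons tooling; the clause names are simply labels) enumerates the sixteen possible clause combinations, discards the five invalid ones, and separates the retired non-attribution variants from the six licenses in regular use.

from itertools import combinations

CLAUSES = ["BY", "NC", "ND", "SA"]

def is_valid(combo):
    # "ND" (no derivatives) and "SA" (share-alike) are mutually exclusive,
    # and a license with no clauses at all is not offered.
    return not ("ND" in combo and "SA" in combo) and len(combo) > 0

all_combos = [frozenset(c) for r in range(len(CLAUSES) + 1)
              for c in combinations(CLAUSES, r)]
valid = [c for c in all_combos if is_valid(c)]
retired = [c for c in valid if "BY" not in c]   # retired: lack the attribution clause
current = [c for c in valid if "BY" in c]       # the six licenses in regular use

print(len(all_combos), len(valid), len(retired), len(current))   # prints: 16 11 5 6

The six current combinations correspond to CC BY, CC BY-SA, CC BY-NC, CC BY-ND, CC BY-NC-SA and CC BY-NC-ND.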
In 2014 Wikimedia Deutschland published a guide to using Creative Commons licenses as wiki pages for translations and as PDF. Adaptability Rights in an adaptation can be expressed by a CC license that is compatible with the status or licensing of the original work or works on which the adaptation is based. Legal aspects The legal implications of large numbers of works having Creative Commons licensing are difficult to predict, and there is speculation that media creators often lack insight to be able to choose the license which best meets their intent in applying it. Some works licensed using Creative Commons licenses have been involved in several court cases. Creative Commons itself was not a party to any of these cases; they only involved licensors or licensees of Creative Commons licenses. When the cases went as far as decisions by judges (that is, they were not dismissed for lack of jurisdiction or were not settled privately out of court), they have all validated the legal robustness of Creative Commons public licenses. Dutch tabloid In early 2006, podcaster Adam Curry sued a Dutch tabloid who published photos from Curry's Flickr page without Curry's permission. The photos were licensed under the Creative Commons Non-Commercial license. While the verdict was in favor of Curry, the tabloid avoided having to pay restitution to him as long as they did not repeat the offense. Professor Bernt Hugenholtz, main creator of the Dutch CC license and director of the Institute for Information Law of the University of Amsterdam, commented, "The Dutch Court's decision is especially noteworthy because it confirms that the conditions of a Creative Commons license automatically apply to the content licensed under it, and binds users of such content even without expressly agreeing to, or having knowledge of, the conditions of the license." Virgin Mobile In 2007, Virgin Mobile Australia launched an advertising campaign promoting their cellphone text messaging service using the work of amateur photographers who uploaded their work to Flickr using a Creative Commons-BY (Attribution) license. Users licensing their images this way freed their work for use by any other entity, as long as the original creator was attributed credit, without any other compensation required. Virgin upheld this single restriction by printing a URL leading to the photographer's Flickr page on each of their ads. However, one picture, depicting 15-year-old Alison Chang at a fund-raising carwash for her church, caused some controversy when she sued Virgin Mobile. The photo was taken by Alison's church youth counselor, Justin Ho-Wee Wong, who uploaded the image to Flickr under the Creative Commons license. In 2008, the case (concerning personality rights rather than copyright as such) was thrown out of a Texas court for lack of jurisdiction. SGAE vs Fernández In the fall of 2006, the collecting society Sociedad General de Autores y Editores (SGAE) in Spain sued Ricardo Andrés Utrera Fernández, owner of a disco bar located in Badajoz who played CC-licensed music. SGAE argued that Fernández should pay royalties for public performance of the music between November 2002 and August 2005. The Lower Court rejected the collecting society's claims because the owner of the bar proved that the music he was using was not managed by the society. In February 2006, the Cultural Association Ladinamo (based in Madrid, and represented by Javier de la Cueva) was granted the use of copyleft music in their public activities. 
The judgment said:
GateHouse Media, Inc. v. That's Great News, LLC
On June 30, 2010, GateHouse Media filed a lawsuit against That's Great News. GateHouse Media owns a number of local newspapers, including the Rockford Register Star, which is based in Rockford, Illinois. That's Great News makes plaques out of newspaper articles and sells them to the people featured in the articles. GateHouse sued That's Great News for copyright infringement and breach of contract, claiming that TGN violated the non-commercial and no-derivative-works restrictions on GateHouse's Creative Commons-licensed work when TGN published the material on its website. The case was settled on August 17, 2010, though the settlement was not made public.
Drauglis v. Kappa Map Group, LLC
The plaintiff was photographer Art Drauglis, who uploaded several pictures to the photo-sharing website Flickr using the Creative Commons Attribution-ShareAlike 2.0 Generic License (CC BY-SA), including one entitled "Swain's Lock, Montgomery Co., MD.". The defendant was Kappa Map Group, a map-making company, which downloaded the image and used it in a compilation entitled "Montgomery Co. Maryland Street Atlas". Though there was nothing on the cover that indicated the origin of the picture, the text "Photo: Swain's Lock, Montgomery Co., MD Photographer: Carly Lesser & Art Drauglis, Creative Commons, CC-BY-SA-2.0" appeared at the bottom of the back cover.
The validity of CC BY-SA 2.0 as a license was not in dispute. CC BY-SA 2.0 requires the licensee to use nothing less restrictive than the CC BY-SA 2.0 terms. The atlas was sold commercially and not for free reuse by others. The dispute was whether Drauglis' license terms that would apply to "derivative works" applied to the entire atlas.
Drauglis sued the defendants in June 2014 for copyright infringement and license breach, seeking declaratory and injunctive relief, damages, fees, and costs. Drauglis asserted, among other things, that Kappa Map Group "exceeded the scope of the License because defendant did not publish the Atlas under a license with the same or similar terms as those under which the Photograph was originally licensed." The judge dismissed the case on that count, ruling that the atlas was not a derivative work of the photograph in the sense of the license, but rather a collective work. Since the atlas was not a derivative work of the photograph, Kappa Map Group did not need to license the entire atlas under the CC BY-SA 2.0 license. The judge also determined that the work had been properly attributed. In particular, the judge determined that it was sufficient to credit the author of the photo as prominently as authors of similar authorship (such as the authors of individual maps contained in the book) and that the name "CC-BY-SA-2.0" is sufficiently precise to locate the correct license on the internet and can be considered a valid URI of the license.
Verband zum Schutz geistigen Eigentums im Internet (VGSE)
In July 2016, the German computer magazine LinuxUser reported that German blogger Christoph Langner had used two licensed photographs by Berlin photographer Dennis Skley on his private blog Linuxundich. Langner duly mentioned the author and the license and added a link to the original.
Langner was later contacted by the Verband zum Schutz geistigen Eigentums im Internet (VGSE) (Association for the Protection of Intellectual Property in the Internet) with a demand for €2300 for failing to provide the full name of the work, the full name of the author, the license text, and a source link, as is required by the fine print in the license. Of this sum, €40 goes to the photographer, and the remainder is retained by VGSE. The Higher Regional Court of Köln dismissed the claim in May 2019. Works with a Creative Commons license Creative Commons maintains a content directory wiki of organizations and projects using Creative Commons licenses. On its website CC also provides case studies of projects using CC licenses across the world. CC licensed content can also be accessed through a number of content directories and search engines (see Creative Commons-licensed content directories). Unicode symbols After being proposed by Creative Commons in 2017, Creative Commons license symbols were added to Unicode with version 13.0 in 2020. The circle with an equal sign (meaning no derivatives) is present in older versions of Unicode, unlike all the other symbols. These symbols can be used in succession to indicate a particular Creative Commons license, for example, CC-BY-SA (CC-Attribution-ShareAlike) can be expressed with Unicode symbols CIRCLED CC, CIRCLED HUMAN FIGURE and CIRCLED ANTICLOCKWISE ARROW placed next to each other: 🅭🅯🄎 Case law database In December2020, the Creative Commons organization launched an online database covering licensing case law and legal scholarship. See also Free culture movement Free music Free software Non-commercial educational Notes References External links Full selection of licenses Licenses. Overview of free licenses. freedomdefined.org WHAT IS CREATIVE COMMONS LICENSE. – THE COMPLETE DEFINITIVE GUIDE Web-friendly formatted summary of CC BY-SA 3.0 Computer law Copyleft Free content licenses Intellectual property activism Intellectual property law Public copyright licenses Articles containing video clips Copyleft software licenses
20584690
https://en.wikipedia.org/wiki/The%20Girl%20Who%20Played%20with%20Fire
The Girl Who Played with Fire
The Girl Who Played with Fire () is the second novel in the best-selling Millennium series by Swedish writer Stieg Larsson. It was published posthumously in Swedish in 2006 and in English in January 2009. The book features many of the characters who appeared in The Girl with the Dragon Tattoo (2005), among them the title character, Lisbeth Salander, a brilliant computer hacker and social misfit, and Mikael Blomkvist, an investigative journalist and publisher of Millennium magazine. Widely seen as a critical success, The Girl Who Played with Fire was also (according to The Bookseller magazine) the first and only translated novel to be number one in the UK hardback chart. Synopsis The novel is formally divided into a prologue followed by four parts. The prologue of the book opens with a girl captured and restrained inside a dark room by an unidentified male. To cope with being captured, she mentally replays a past episode when she threw a milk carton filled with gasoline onto another man inside a car and tossed an ignited match onto him. Part 1 – Irregular Equations After finishing the job on the Wennerström affair (described in The Girl with the Dragon Tattoo), Lisbeth Salander disappeared from Sweden and traveled throughout Europe. The novel opens with her on the shores of the Caribbean in St George's, the capital of Grenada. Salander has become interested in Fermat's Last Theorem and mathematics, an interest that resounds with the opening page of each Part in this novel. From within her room in her hotel she observes on several occasions that her neighbor, Dr Forbes, an American tourist from Texas, physically abuses his wife, in the next room to Salander's. Salander also befriends George Bland, a 16-year-old orphaned student living in a small shack and begins tutoring him in mathematics. Salander finds Bland's company relaxing and enjoyable because Bland does not ask her personal questions, and the two develop a sexual relationship. Salander uses her connections among the hackers' network to investigate Dr Forbes and learns that he was once accused of mishandling funds in his faith-based foundation. Currently he has no assets, but his wife is the heiress to a fortune worth $40 million. As a hurricane hits Grenada, concerns for the safety of the residents at the hotel cause the hotel management to begin ushering them into a cellar. Salander remembers Bland, and braves the strong wind and rain to collect him. As the two reach the hotel entrance, Salander sees Dr Forbes on the beach with his wife and realizes that he is attempting to kill her for her inheritance. Salander attacks Forbes with the leg of a chair and abandons him to the elements. Salander, Bland, and Mrs Forbes retreat to the cellar and receive medical care; Dr Forbes is later confirmed as the only fatality of the storm. Part 2 – From Russia with Love Lisbeth Salander returns to Stockholm after more than a year's absence. Immediately before the Wennerström affair became public knowledge, Salander had laundered a sum of three billion kronor (the equivalent of about half a billion $US) into a disguised bank account. With this sum she purchases a new upscale apartment outside Mosebacke Torg and moves out of her old apartment in Lundagatan (SV). Salander allows her current sex partner, Miriam Wu, to move into her old apartment, for the price of 1 krona and the condition that Wu forward all of Salander's mail. 
Salander also re-establishes contact with Dragan Armansky, her former boss at Milton Securities, and her former legal guardian Holger Palmgren, who fell victim to a stroke at the beginning of Dragon Tattoo. Nils Bjurman, Palmgren's replacement, continues to nurture a growing hatred for his ward after the events of Dragon Tattoo. His fury has caused him to diminish his practice down to a single client (Salander) and focus his attention on capturing her and destroying the film she made of him raping her. He scrutinizes Salander's medical records, and thus identifies an incident named "All the Evil" as well as a person from her past as his strongest ally. In the meantime, Mikael Blomkvist, the publisher of Millennium magazine, has lost contact with Salander, who has refused even to open his letters. He is therefore surprised, shortly after her return, while he is walking past Salander's apartment in the vain hope of running into her, to see her being attacked by a ponytailed man with a beer gut, a member of the Svavelsjö outlaw motorcycle club. Blomkvist attempts to help, to Salander's astonishment, and their joint efforts enable her to elude her attacker. Millennium is approached by a couple: Dag Svensson, a young journalist, and Mia Johansson, a doctoral student. They have put together a meticulously researched report, ironically titled "From Russia with Love", about sex trafficking in Sweden and the abuse of underage girls by high-ranking figures; this is the subject of Johansson's doctoral thesis and Svensson wants Millennium to publish his exposé in book form. Whilst the research is mostly complete, Svensson, Johansson, and the Millennium staff are intrigued by recurring mentions of "Zala", a shadowy figure heavily involved in Sweden's sex-trafficking industry. Salander, hacking Mikael Blomkvist's computer, is taken aback by the mention of Zala, and visits Svensson and Johansson to ask questions. Part 3 – Absurd Equations Later the same night, Blomkvist calls on the couple, and finds them both shot dead in their apartment, the killer having apparently left the building only seconds before. Blomkvist notifies Erika Berger, the Millenniums editor-in-chief and his lover, of the double murder, and the magazine's management team holds an emergency meeting at which they decide to postpone the publication of Svensson's book and the associated magazine special. They decide to backtrack Svensson's research to ensure the accuracy of the material, and to comb through it for possible murder motives, while Blomkvist is tasked with finishing Svensson's mostly-completed book. Prosecutor Richard Ekström assembles an investigative team, led by Inspector Jan Bublanski, who selects Sonja Modig for inclusion in the team because of her sensitivity to women's issues. The team identifies Salander's fingerprints on the murder weapon, and her formal record establishes her as a violent, unstable, psychotic young woman with a history of prostitution. Armansky, Blomkvist, and Berger all vouch for Salander's intelligence and moral fiber; neither Blomkvist nor Berger was even aware of her psychiatric history. While investigating Salander's social circle, Modig finds Bjurman shot dead in his apartment with his own revolver, the same weapon used on Svensson and Johansson; Salander remains the prime suspect. In light of this new evidence, Ekström holds a press conference and discloses Salander's name and psychiatric history to the press, describing her as a danger to others and herself. 
Blomkvist enlists the help of managing editor Malin Eriksson to investigate the murders, during which investigation Blomkvist realizes that Salander has hacked into his notebook computer. He leaves her notes on his desktop, and her replies point him to "Zala". Blomkvist confronts Gunnar Björck, a policeman on sick leave and one of the high-ranking abusers identified by Dag and Mia, who agrees to disclose information about Zala if Blomkvist leaves him out of Millennium's exposé. Armansky realises that Milton Security should become involved in the investigation and sends two of his employees, Hedström and Bohman, to aid the formal police investigation. Miriam Wu returns from a Paris trip to find herself taken to the police station, and she confirms Salander's intelligence and moral character. However, Hedström, who carries an old grudge against Salander, leaks Wu's identity to the press, who publish stories about Wu's involvement in a Gay Pride Festival and Salander's prior friendship with a female rock group; both Wu and Salander are sensationalized in the media as members of a "lesbian Satanist gang". The press also publishes information about Salander's past. Part 4 – Terminator Part 4 begins with Salander's wondering why the press's inside source has chosen not to publicize "All the Evil", the events which dominated the gap in her biography, information she knows would swing public opinion even further against her. Blomkvist is approached by Paolo Roberto, a boxing champion and Salander's former coach. Blomkvist asks Roberto to help by finding Miriam Wu, who, released by the police, has been avoiding all contact from the press, including Blomkvist. In the meantime, at Salander's suggestion, Blomkvist focuses on Zala as the key connection among the three murders and the sex trafficking. As the police continue the investigation, Blomkvist's team also notices the three-year gap in Salander's biography. Blomkvist decides to confront Björck and trade his anonymity for information on Zala. Roberto, staking out Salander's former apartment in the hopes of catching Wu, witnesses her being kidnapped into a van by a paunchy man with a ponytail (Salander's earlier attacker) and a "blond giant". He follows the van to a warehouse south of Nykvarn, where he attempts to rescue Wu by boxing with the giant. He finds his opponent unusually muscular and totally insensitive to pain, and only through applications of massive blunt trauma can he and Wu stun the giant enough to escape. The giant recovers and sets the warehouse on fire to destroy the evidence. However, Roberto is able to direct the police to the site, where they find three buried and dismembered bodies. Visiting Bjurman's summer cabin, Salander finds a classified Swedish Security Service file written about "All The Evil", and begins to make the connection between Bjurman and Zala, whose real name is Alexander Zalachenko. By coincidence, two members of Svavelsjö MC, Carl-Magnus Lundin (the paunchy ponytailed man) and Sonny Nieminen, have been dispatched to burn the place down. Salander physically incapacitates them, leaving more suspects for Bublanski to find. She returns to her apartment and, having no choice, decides to find Zalachenko and kill him. Salander discovers the blond giant's identity ("Ronald Niedermann") and his connection to a post office box in Göteborg, and she goes there to find him and Zalachenko. In his apartment, Blomkvist finds Salander's keys, which he had picked up after her escape from Lundin. 
He manages to find her new, upscale apartment as well as the DVD revealing Bjurman's crime. With information from Björck and Salander's former guardian, Holger Palmgren, Blomkvist is able to piece together the entire story: Zalachenko is a former Soviet defector under secret Swedish protection, whose very existence is kept classified by Säpo; Bjurman and Björck knew about him only because they happened to be the junior officers on duty the day Zalachenko went into a police station and demanded political asylum. Zalachenko, initially a source of vital information on the USSR's intelligence operations to Säpo, began to traffic in sex slaves on the side. He became the partner of a 17-year-old woman who became pregnant with twins, Lisbeth and Camilla. Zalachenko was an itinerant father who physically and emotionally abused his partner when he was home. The cycle of violence culminated in Lisbeth Salander's deliberately setting his car alight with gasoline while her father was in it. This is the event Salander refers to as "All the Evil", since the authorities, instead of listening to her pleas on behalf of her mother, imprisoned Salander and declared her insane. Salander's mother was left with the first of a series of cerebral hemorrhages which consigned her to nursing homes and ultimately caused her death. Salander realised that the government would never acknowledge Zalachenko's crimes, which would require them to admit his existence. Zalachenko was allowed to walk away, but suffered serious injuries and had to have his foot amputated. Niedermann had killed Svensson and Johansson on Zalachenko's orders: when Salander visited them, she asked whether Bjurman had ever appeared on their list of high-ranking abusers, and they called him immediately after she left. Bjurman then called Zalachenko in a panic, leading not only to their deaths but to his own, as well. Blomkvist does not share all of his findings with Bublanski, out of respect for Salander's privacy, but between his testimony, the various character witnesses, and the additional accomplices piling up, the police are forced to admit that their original suspicions of Salander as a psychotic murderer may have been wrong. Milton Security are ejected from the investigation when it becomes clear that Hedström is the inside source who has been leaking sensational details to the press. Armansky is satisfied, as his true goal in aiding the investigation—ensuring Salander is not simply condemned as a murderer out of hand—has been achieved. Finally, Blomkvist finds Niedermann's Göteborg address, and sets off for the farm where Niedermann and Zalachenko await. He has deduced that Salander has entered what Roberto and his boxing friends called "Terminator Mode", where she attacks without restraint to defend her life and those she cares about. Salander enters the farmhouse and is captured as a result of secret cameras and alarms Zalachenko had installed. Zalachenko tells Salander that Niedermann is her half-brother. When Salander attempts to escape, Zalachenko shoots her in the hip, shoulder, and head, and Niedermann buries her, not realising she is still alive. Battling through immense pain, Salander slowly digs herself out and again attempts to kill Zalachenko with an axe, noting that Zalachenko's use of a Browning .22 firearm is the only reason she survived. On his way to Göteborg, Blomkvist sees Niedermann trying to hitch a ride, captures him at gunpoint, and ties him against a signpost by the road. 
The book ends as Blomkvist finds Salander and calls emergency services. Characters Main characters Mikael Blomkvist – A journalist and publisher at Millennium magazine Lisbeth Salander – A private investigator, hacker, and accused triple-murderer Alexander Zalachenko (Zala) a.k.a. Karl Axel Bodin – A former Soviet spy who turns out to be deeply involved in Salander's dark past Ronald Niedermann a.k.a. The Giant – Zalachenko's henchman who is connected to Salander in a way which she does not realise Carl-Magnus Lundin – The President of Svavelsjö Motorcycle Club (Svavelsjö MC) who sells drugs and is commissioned to kidnap Salander for Zala Related to Millennium magazine Erika Berger – Editor in chief of Millennium magazine and Blomkvist's on–off lover Harriet Vanger – Majority investor in Millennium Malin Eriksson – Managing editor of Millennium magazine Christer Malm – Art director and designer of Millennium magazine Dag Svensson – A journalist who is writing an exposé on the Swedish sex trade and Mia’s boyfriend Mia Johansson – A doctoral student in criminology and Dag’s girlfriend Henry Cortez – Part-time journalist at Millennium magazine Lotta Karim – Part-time journalist at Millennium magazine Monika Nillson – Journalist at Millennium magazine Related to Milton Security Dragan Armansky – Salander's former boss and director of Milton Security Sonny Bohman – A former policeman and part of the team Armansky assigns to support the police investigation Johan Fräklund – Chief of Operations at Milton Security and assigned to support police investigation Niklas Hedström – Works for Milton Security and is assigned to support police investigation but sabotages it. A heart problem kept him from becoming a police man. He hates Salander since she caught him blackmailing a client Related to police investigation Jan Bublanski – A police officer who is in charge of Salander's case, nicknamed Officer Bubble Sonja Modig – A detective in Bublanski's team Richard Ekström – A prosecutor of Salander's case Hans Faste – Working in Bublanski's team, causing trouble with his sexually discriminating attitude Curt Andersson – Police officer in Bublanski's team Jerker Holmberg – Police officer in Bublanski's team Other characters Annika Gianinni – Blomkvist's sister and an attorney Miriam "Mimmi" Wu – A kickboxer, university student and Salander's on and off girlfriend Nils Bjurman – An attorney and Salander's current guardian since Palmgren's stroke Paolo Roberto – A former professional boxer and Salander's boxing instructor. The character is based on the real boxer Paolo Roberto. Gunnar Björk – A Swedish Security Police officer and former punter abusing women. He is also the lead source for Blomkvist on Zalachenko. Holger Palmgren – Lisbeth Salander's former guardian; she visits him in a rehabilitation home and they play a game of chess together. In her memoir "There Are Things I Want You to Know" About Stieg Larsson and Me, Eva Gabrielsson tells readers that this chess game was inspired by her brother Björn who Stieg Larsson used to play the game with and with whom he was very close. 
Greger Beckman – Erika Berger's husband George Bland – Black teenage boy whom Salander has an affair with in Grenada Richard Forbes – Reverend and Salander's hotel room neighbour in Grenada Geraldine Forbes – A millionaire heiress and battered wife of Richard Forbes Sonny Nieminen – Part of Svavelsjö MC and involved in trying to kidnap Salander Reception The English version was published in January 2009 and immediately became a number 1 bestseller. It received generally positive reviews from most of the major UK newspapers. Many reviewers agreed with Joan Smith at The Sunday Times that this novel was “even more gripping and astonishing than the first”. Most of the reviewers concentrated mainly on the character of Lisbeth Salander, with Mark Lawson at the Guardian saying that "the huge pleasure of these books is Salander, a fascinating creation with a complete and complex psychology." Boyd Tonkin in The Independent said: "the spiky and sassy Lisbeth Salander – punkish wild child, traumatised survivor of the 'care' system, sexual adventurer and computer hacker of genius" was "the most original heroine to emerge in crime fiction for many years". Michiko Kakutani at The New York Times wrote that "Salander and Blomkvist, transcend their genre and insinuate themselves in the reader’s mind through their oddball individuality, their professional competence and, surprisingly, their emotional vulnerability." Cultural notes The character of Paolo Roberto is an actual person. He is a former boxer and television chef who has also dabbled in politics. He played himself in the 2009 film adaptation of the book. In the first part of the book, Salander is exploring Dimensions in Mathematics apparently written by L. C. Parnault and published by Harvard University Press in 1999. On February 9, 2009, Harvard University Press announced on their website that this book and the author are purely fictitious. The mysterious Karl Axel Bodin, in whose house Salander finds Zalachenko and Niedermann, is a historical name. Bodin was born in Karlstad and later moved to Sundsvall. He went to Norway to join the Waffen-SS; at the end of World War II, he was attached to the country's branch of the Gestapo. At the war's end, Bodin and another Swedish volunteer stole a car in an attempted escape to Sweden. The car's owner saw the theft, and soon a gunfight erupted in which the car owner and Bodin's friend were shot. Bodin left his friend behind and crossed the border. Film and television adaptations The Girl Who Played with Fire, a 2009 Swedish film directed by Daniel Alfredson. Millennium, a Swedish six-part television miniseries based on the film adaptations of Stieg Larsson's series of the same name, was broadcast on SVT1 from 20 March 2010 to 24 April 2010. The series was produced by Yellow Bird in cooperation with several production companies, including SVT, Nordisk Film, Film i Väst, and ZDF Enterprises. Dragon Tattoo Trilogy: Extended Edition is the title of the TV miniseries release on DVD, Blu-ray, and video on demand in the US. This version of the miniseries comprises nine hours of story content, including over two hours of additional footage not seen in the theatrical versions of the original Swedish films. The four-disc set includes two hour special features and extended editions of The Girl with the Dragon Tattoo, The Girl Who Played with Fire and The Girl Who Kicked the Hornet's Nest. 
References External links IMDb The Girl Who Played With Fire Current edition of The Girl Who Played With Fire on Amazon UK Fan site on WordPress UK publisher website for Stieg Larsson's The Girl Who Played with Fire 2006 Swedish novels Swedish crime novels Swedish mystery novels Millennium (novel series) Novels published posthumously Novels set in Sweden Works about human trafficking Swedish novels adapted into films Human trafficking in Sweden Norstedts förlag books Swedish-language novels Novels with bisexual themes
2462946
https://en.wikipedia.org/wiki/Mike%20Scully
Mike Scully
Mike Scully (born October 2, 1956) is an American television writer and producer. He is known for his work as executive producer and showrunner of the animated sitcom The Simpsons from 1997 to 2001. Scully grew up in West Springfield, Massachusetts and long had an interest in writing. He was an underachiever at school and dropped out of college, going on to work in a series of jobs. Eventually, in 1986, he moved to Los Angeles where he worked as a stand-up comic and wrote for Yakov Smirnoff. Scully went on to write for several television sitcoms before 1993, when he was hired to write for The Simpsons. There, he wrote twelve episodes, including "Lisa on Ice" and "Team Homer", and served as showrunner from seasons 9 to 12. Scully won three Primetime Emmy Awards for his work on the series, with many publications praising his episodes, but others criticizing his tenure as a period of decline in the show's quality. Scully still works on the show and also co-wrote 2007's The Simpsons Movie. More recently, Scully co-created The Pitts and Complete Savages as well as working on Everybody Loves Raymond and Parks and Recreation. He co-developed the short-lived animated television version of Napoleon Dynamite, as well as co-creating Duncanville with his wife, Julie Thacker, and Amy Poehler. Scully is married to fellow writer Julie Thacker. Early life Scully was born October 2, 1956 at Springfield Hospital in Springfield, Massachusetts and grew up in the Merrick section of West Springfield. His father, Richard, was a salesman and owned a dry cleaning business, while his mother, Geraldine (d. 1985), worked for the Baystate Medical Center once Scully and his brothers were old enough to be left at home alone. Scully is of Irish ancestry. As a child Scully "hoped to be a musician or a hockey player." At Main Street Elementary School, with the encouragement of his teacher James Doyle, he developed an interest in writing, serving as editor for his school newspaper. He graduated from West Springfield High School in 1974, having been voted "Most Likely Not to Live Up to Potential" by his classmates, and dropped out of Holyoke Community College after one day, undecided about what he wanted to do with his life. He took up work in the clothing department at Steiger's department store, as a janitor at the Baystate Medical Center and also as a driving instructor. He commented: "I think if I had actually succeeded at college and gotten a degree in accounting or something, I might have given up too quickly on writing. Having no marketable job skills was a tremendous incentive to keep trying to succeed as a writer." He realized "there probably wasn't going to be a career in riding around with my friends listening to Foghat," so Scully decided he "definitely wanted to break into comedy" even though he "really had no reason to believe [he] could succeed." Regardless, he moved to Los Angeles, California in 1982. Career Early career In California, Scully worked in a tuxedo store. He also got a job writing jokes for comedian Yakov Smirnoff and developed his joke writing skills by performing himself at amateur stand-up comedy nights. He purchased scripts from a variety of half-hour comedy shows, including Taxi, to train himself to write them and had numerous speculative scripts rejected. He started "bouncing around Hollywood working on some of the lousiest sitcoms in history." 
He served on the writing staff of The Royal Family, Out of This World, Top of the Heap and What a Country!, where he did audience warm-up, a role he also performed on Grand. The Simpsons In 1993, David Mirkin hired Scully to write for The Simpsons, as a replacement for the departing Conan O'Brien, after reading some of his sample scripts. He began as a writer and producer for the show during its fifth season and wrote the episodes "Lisa's Rival", "Two Dozen and One Greyhounds" and "Lisa on Ice", which aired in season six. "Lisa's Rival" was his first episode; he wrote the script, but the original concept had been conceived by O'Brien. Similarly, he wrote the script for "Two Dozen and One Greyhounds", which was based on an idea by Al Jean and Mike Reiss. "Lisa on Ice" was inspired by Scully's love of ice hockey and featured many experiences from his childhood, as was "Marge Be Not Proud" (which he wrote for season seven), which was based on "one of the most traumatic moments" of his life, when he was caught shoplifting at age 12. He jokingly told Variety that "It's great to be paid for reliving the horrors of your life." He also wrote "Team Homer" and "Lisa's Date with Density". Scully noted: "I wrote a lot of Lisa's shows. I have five daughters, so I like Lisa a lot. I like Homer, too. Homer comes very naturally to me: I don't know if that's a good or a bad thing. A lot of my favorite episodes are the ones when Homer and Lisa are in conflict with each other ... They're very human, I think that's their appeal." Scully became showrunner of The Simpsons in 1997, during its ninth season. As showrunner and executive producer, Scully said his aim was to "not wreck the show", and he headed up the writing staff and oversaw all aspects of the show's production. During his time as showrunner he was credited with writing or co-writing five episodes: "Treehouse of Horror VIII" ("The HΩmega Man" segment), "Sunday, Cruddy Sunday", "Beyond Blunderdome", "Behind the Laughter" and "The Parent Rap". Scully was popular with the staff members, many of whom praised his organization and management skills. Writer Tom Martin said he was "quite possibly the best boss I've ever worked for" and "a great manager of people," while Don Payne commented that for Scully "it was really important that we kept decent hours". Scully served as showrunner until 2001, during season 12, making him the first person to run the show for more than two seasons. He returned in season 14 to write and executive produce the episode "How I Spent My Strummer Vacation", and co-wrote and co-produced The Simpsons Movie in 2007. Scully's tenure as showrunner of The Simpsons has been the subject of criticism from some of the show's fans. John Ortved wrote "Scully's episodes excel when compared to what The Simpsons airs nowadays, but he was the man at the helm when the ship turned towards the iceberg." The BBC noted "the common consensus is that The Simpsons' golden era ended after season nine", while an op-ed in Slate by Chris Suellentrop argued The Simpsons changed from a realistic show about family life into a typical cartoon during Scully's years: "Under Scully's tenure, The Simpsons became, well, a cartoon. ... Episodes that once would have ended with Homer and Marge bicycling into the sunset (perhaps while Bart gagged in the background) now end with Homer blowing a tranquilizer dart into Marge's neck." 
The Simpsons under Scully has been negatively labelled as a "gag-heavy, Homer-centric incarnation" by Jon Bonné of MSNBC, while some fans have bemoaned the transformation in Homer's character during the era, from dumb yet well-meaning to "a boorish, self-aggrandizing oaf", dubbing him "Jerkass Homer". Despite this, much of Scully's work on the show also received critical praise. Scully won five Primetime Emmy Awards for his work on The Simpsons, while Entertainment Weekly cited "How I Spent My Strummer Vacation" as the show's 22nd best episode. Robert Canning of IGN also gave the episode a positive review, something he also did for "Behind the Laughter" and "Trilogy of Error", which aired during season 12. He called the latter "one extremely enjoyable misadventure. The Simpsons may have peaked in the '90s, but that doesn't mean the eight years since haven't delivered their share of quality episodes. This was one of them." Tom Martin said that he does not understand the criticism against Scully, and that he thinks the criticism "bothered [him], and still bothers him, but he managed to not get worked up over it." Ortved noted in his book that blaming a single showrunner for what some perceive as the declining quality of the show "is unfair." When asked in 2007 how the series' longevity is sustained, Scully joked, "Lower your quality standards. Once you've done that you can go on forever." Further career Scully was a writer and co-executive producer on Everybody Loves Raymond for part of season seven and all of season eight, winning an Emmy for his work. Scully co-created (with wife Julie Thacker) The Pitts for Fox and Complete Savages for ABC, which was produced by Mel Gibson. The Pitts was a sitcom about a family suffering from bad luck. Thacker stated the show was designed "as a companion piece for The Simpsons. It had a very cartoony feel to it. We always knew the initial audience for the show would be 12-year-olds to start, and then when families saw that the writing was very Simpsons-like, because many of the writers were from The Simpsons, [we thought] families would start to watch it together." It was canceled after six episodes; Scully and Thacker laid the blame for this on the show's timeslot, 9:30 P.M., which was too late for the target audience. Complete Savages, which Thacker and Scully wrote with the "Simpsons sensibility" of layered jokes, was canceled in January 2005 due to low ratings and network anger at Scully and Thacker's decision to write to TV critics in what the Hartford Courant labelled "unsanctioned promoting". A fan of NRBQ, Scully produced, with Thacker, a documentary about the band in 2003 entitled NRBQ: Rock 'n' Roll's Best Kept Secret; Scully employed the group as the "unofficial house band" of The Simpsons during his tenure as showrunner. Scully also created a pilot for Fox called Schimmel in 2000, starring Robert Schimmel, which was dropped after Schimmel was diagnosed with cancer. Scully served as a consulting producer on the NBC series Parks and Recreation, and wrote the episodes "Ron and Tammy" in 2009, and "The Possum" in 2010. Scully also had cameo roles in the episodes "Eagleton" and "Soda Tax" as a speaker at the Pawnee community meeting. In 2012, Scully co-produced and co-wrote an animated TV version of the film Napoleon Dynamite, which was canceled after six episodes. That May, Scully signed a seven-figure, multi-year overall deal with 20th Century Fox Television to develop several projects. 
He served as co-executive producer on the single-season NBC sitcom The New Normal (2012–2013), alongside Allison Adler and Ryan Murphy. Scully held the same title for Fox's Dads (which debuted in 2013). In 2018, he signed an overall deal with 20th Century Fox Television. Personal life He is married to writer Julie Thacker; the couple have five daughters. His brother Brian Scully is also a comedy writer and he has a second brother, Neil, who is an ice hockey writer. His mother died in 1985. Scully was awarded an honorary doctorate in fine arts from Westfield State University in 2008. He walked the picket line during the 2007–2008 WGA strike while on crutches. Scully received a lifetime achievement award by the WGA West in 2010. Credits Episodes listed are those Scully has been credited as writing or co-writing What a Country! (1986–1987) – writer Out of This World (1987–1991) – supervising producer, writer "Baby Talk" "Mosquito Man: The Motion Picture" "Blast from the Past" "Old Flame" "Evie's Two Dads" "Evie Goes to Hollywood" "Whose House Is It, Anyway?" "Evie's Driver's License" "The Rocks That Couldn't Roll" "My Mother the Con" "Goodbye, Mr. Chris" "New Kid on the Block" "Come Fly with Evie" "Would You Buy a Used Car from This Dude?" "Mayor Evie" Grand – writer "Lady Luck" Top of the Heap (1991) – writer "The Agony and the Agony" "The Marrying Guy" The Royal Family (1992) – writer "Cocoa in Charge" The Simpsons (1993–present) – writer, producer, executive producer, showrunner, consulting producer "Lisa's Rival" (1994) "Lisa on Ice" (1994) "Two Dozen and One Greyhounds" (1995) "Marge Be Not Proud" (1995) "Team Homer" (1996) "Lisa's Date with Density" (1996) "Treehouse of Horror VIII" ("The HΩmega Man") (1997) "Sunday, Cruddy Sunday" (with Tom Martin, George Meyer and Brian Scully) (1999) "Beyond Blunderdome" (1999) "Behind the Laughter" (with Tim Long, George Meyer and Matt Selman) (2000) "The Parent Rap" (with George Meyer) (2001) "How I Spent My Strummer Vacation" (2002) The Preston Episodes (1995) – writer "The Halloween Episode" (with Julie Thacker) Schimmel (2000) – creator, producer The Pitts (2003) – creator, executive producer, writer Everybody Loves Raymond (2003–2004) – co-executive producer, writer "Fun with Debra" "Party Dress" "Blabbermouths" "Angry Sex" Complete Savages (2004–2005) – creator, executive producer, writer "Pilot" "Free Lily" "Thanksgiving with the Savages" "Saving Old Lady Riley" The Simpsons Movie (2007) – producer and writer Parks and Recreation (2009–2012) – consulting producer and writer "Ron and Tammy" (2009) "The Possum" (2010) "The Comeback Kid" (2012) Napoleon Dynamite (2012) – co-developer, producer, writer "FFA" The New Normal (2012–2013) – co-executive producer, writer "The Godparent Trap" "Dog Children" 70th Golden Globe Awards (2013) – special material Dads (2013-2014) – co-executive producer Weird Loners (2015) – co-executive producer The Carmichael Show (2015–2017) – co-executive producer, writer "Gender" "Fallen Heroes" "Man's World" "Support The Troops" Rel (2018–19) – co-executive producer, writer "Kids Visit First" Duncanville (2020–present) – creator, writer "Pilot" "Jack's Pipe Dream" – teleplay by "Free Range Children" – teleplay by "Wolf Mother" – teleplay by "Das Banana Boot" – story by "Duncan's New Word" "Who's Vrooming Who?" 
– story by References Footnotes Bibliography External links 1956 births Living people Television producers from Massachusetts American television writers American male television writers People from West Springfield, Massachusetts American comedy writers American people of Irish descent Screenwriters from Massachusetts
59830621
https://en.wikipedia.org/wiki/Anne%20E.%20Carpenter
Anne E. Carpenter
Anne E. Carpenter is an American scientist in the field of image analysis for cell biology and artificial intelligence for drug discovery. She is the co-creator of CellProfiler, open-source software for high-throughput biological image analysis, and a co-inventor of the Cell Painting assay, a method for image-based profiling. She is a PI and senior director of the Imaging Platform at the Broad Institute. Education & early career Undergraduate training Carpenter received her B.Sc. in Biological Sciences in 1997 from Purdue University, West Lafayette. During this time, she spent a summer in 1996 as an HHMI Undergraduate Research Fellow in the laboratory of Robert E. Malone at the University of Iowa, working on the control of recombination in yeast. Following her graduation, she spent a summer working on enhancers in Drosophila neural development as a research assistant in the laboratory of Chris Q. Doe, then at the University of Illinois, Urbana-Champaign. Graduate training Carpenter carried out research for her Ph.D. in the laboratory of Andrew S. Belmont at the University of Illinois, Urbana-Champaign. There, she developed molecular biology and automated imaging systems to rapidly assess the effects of transcriptional activators on large-scale chromatin structure using fluorescence microscopy. This work laid the foundation for studies of engineered regions of the genome, the movement of genes within the nucleus upon gene activation, and chromatin-related high-throughput screens. She received her PhD in cell biology in May 2003. Post-doctoral training and creation of CellProfiler software Carpenter trained in the laboratory of David M. Sabatini at the Whitehead Institute for Biomedical Research, Cambridge MA, during her postdoctoral work (July 2003 to December 2006). Through co-mentoring by Polina Golland, professor at MIT Computer Science and Artificial Intelligence Laboratory, Carpenter transitioned into a computational researcher during this time. Her research focused on high-throughput microscopy and living cell microarrays to reveal gene function. This required new image analysis methods, so Carpenter and collaborator Thouis Jones designed and in 2005 released the first open-source high-throughput cell image analysis software, CellProfiler, which was first published in 2006. Using this new tool, she led a team of 5 researchers to develop advanced data mining methods to systematically examine the necessity of proteins for a variety of biological processes. Research and impact In January 2007, Carpenter founded her laboratory at the Broad Institute of Harvard and MIT, as the Director of the Imaging Platform. Her first NIH R01 grant was awarded in 2010, at the age of 33, far earlier than the average. In 2017, she became a Broad Institute Scientist. The Carpenter group develops novel strategies and tools to analyse biological images, particularly microscopy images from high-throughput experiments. Her computer scientists and biologists develop free open-source image analysis and data exploration methods such as CellProfiler and CellProfiler Analyst. Their software work has contributed to open source applications and libraries, including ImageJ, TensorFlow, scikit-image, and scikit-learn. The lab collaborates with biologists, generating discoveries across fields of study and disease areas. Their software enables high-throughput screening in challenging model systems such as C. elegans, 3D cell cultures, and time-lapse video of growing cells. 
The focus of the Carpenter lab turned towards machine learning by 2009, and later deep learning, to identify biological structures of interest and to identify patterns resulting from chemical or genetic perturbations to identify cures for diseases. She was an early pioneer of the new field of image-based profiling, which is related to gene expression profiling but uses microscopy images as the data source. Together with Stuart Schreiber, the Carpenter laboratory invented the Cell Painting assay, which is the most widely used for this purpose. Carpenter's CellProfiler software and Cell Painting assay formed the initial scientific platform for Recursion Pharmaceuticals. Dr. Carpenter is a member of the Scientific and Technical Advisory Board for Recursion Pharmaceuticals, in addition to that of Bio-Rad Laboratories. Carpenter has given more than 150 invited lectures and has chaired several conferences and workshops. She has authored over 120 scientific publications. She is known for efforts that organize the scientific community: she was an early Board member for the Society for Biomolecular Imaging and Informatics (SBI2), she founded the CytoData Society, and she led the 2018 Data Science Bowl via Kaggle. Since 2007, Carpenter has supervised over 50 researchers and students, from postdoctoral to high-school level and is known for her informal mentoring as well. Awards & honors 2021 Honorary Fellow of the Royal Microscopical Society 2019 Named in Top 100 AI Leaders in Drug Discovery and Advanced Healthcare 2019 Merkin Institute Fellow 2018 Outstanding Young Alumni Award from the University of Illinois at Urbana-Champaign 2017 Maximizing Investigators' Research Award (MIRA), NIH NIGMS 2017 Elected Fellow, SLAS (Society for Laboratory Automation and Screening) 2014 Broad Institute Next Generation Award 2012 Awarded NSF CAREER grant 2011 Named Young Leader of the French-American Foundation 2008 Elected fellow of the Massachusetts Academy of Sciences 2008 Featured in PBS special, “Bold Visions: Women in Science & Technology” 2007 Named a “Rising Young Investigator” by Genome Technology magazine References External links 1976 births Living people American biologists Purdue University alumni University of Illinois at Urbana–Champaign alumni
67170063
https://en.wikipedia.org/wiki/Ariel%20Shamir
Ariel Shamir
Ariel Shamir is an Israeli professor of computer science. He serves as dean of the Efi Arazi School of Computer Science at the IDC Herzliya and is one of the developers of seam carving. Biography Shamir received a bachelor's and master's degree in mathematics and computer science from the Hebrew University of Jerusalem and a doctorate in computer science in 2000. He has specialized mainly in image and video processing, imaging, and machine learning. He did his postdoctoral fellowship at the Center for Computational Imaging at the University of Texas at Austin and then did research at Mitsubishi Electric's research laboratories in Cambridge, Disney Research and MIT. In 2017, he started serving as dean of the Efi Arazi School of Computer Science at the IDC Herzliya. Shamir has published dozens of publications on his research topics, and in 2014 he was named one of the most cited and influential researchers in the field of computer science. Since 2017, Shamir has been an associate editor of several journals, including IEEE Transactions on Visualization and Computer Graphics and ACM Transactions on Graphics. As of 2021, Shamir's research has been cited over 11,000 times in academic papers worldwide. References External links The resume of Professor Shamir 1966 births Living people Israeli scientists Hebrew University of Jerusalem alumni People from Jerusalem
32220262
https://en.wikipedia.org/wiki/Bronto%20Software
Bronto Software
Oracle Bronto provides a cloud-based commerce marketing automation platform to mid-market and enterprise organizations. History In 2002, Joe Colopy and Chaz Felix founded Bronto Software in Durham, North Carolina. Bronto Software is based in the American Tobacco District of Durham, North Carolina. The company was named after Joe Colopy's childhood interest in dinosaurs. In 2011, Bronto Software significantly expanded its office space in order to accommodate for business growth. In March 2012, Bronto opened an office in London, UK to serve continued growth in Europe. In 2011, Bronto marked record-breaking growth with a 54% increase in revenue and team growth of 40%, finishing the year with 118 employees. In September 2012, Bronto was named the leading self-service email provider, and the second overall leading email service provider, to the Internet Retailer Top 1000. At the conclusion of 2012, Bronto had 152 employees, offices in Durham, NC, US and London, UK. Bronto Software expanded operations in February 2014 by opening an office in Sydney, Australia. On August 13, 2014, Bronto announced that they had doubled their world headquarters at the American Tobacco Campus in Durham, NC, adding 47,000 square feet for a total of 80,000 square feet. According to Bronto CEO, Joe Colopy, "The move reflects our focus on high growth and further supports our goal of remaining one of the preeminent places to work in the Triangle and being a center for software innovation." In April 2015, NetSuite (now Oracle Netsuite, after Oracle acquired Netsuite in 2016) signed an agreement to acquire Bronto Software for $200 million. On March 2, 2021 Oracle NetSuite sent an email to some customers with the news that the company has assigned the core product suite, Bronto Marketing Platform, to “end of life” status and that the last date of service will be May 31, 2022. Products and services The Bronto Marketing Platform includes the ability to build flowcharts for campaigns, automate the campaigns, and report on the results. The platform lets customers integrate their email, SMS, Twitter, and Facebook campaigns. The platform also includes a complete API for custom integrations as well as standard integrations with partners like Magento, Omniture, Google, Demandware, MarketLive and other web analytics and e-commerce providers. Bronto Software hosts an annual multi-day user conference to discuss e-commerce marketing through email, mobile, social campaigns. This event is usually in mid-April in Chapel Hill, North Carolina. The 2011 event featured keynote presentations from CEO Joe Colopy, COO Chaz Felix, and Sucharita Mulpuru, a principal analyst with Forrester Research. Bronto Summit 2012 featured retail expert Lauren Freedman (The e-tailing group), Anne Holland (Anne Holland Ventures, WhichTestWon), and journalist Ken Magill. Converge on Commerce, Bronto Summit 2013, featured keynotes by CEO Joe Colopy, Bryan Eisenberg and Donna Iucolano. Bronto Summit 2014 brought together speakers including Gary Vaynerchuk, Jamie Clarke, CEO and co-founder of LiveOutThere.com, and entrepreneurs Tom Lotrecchiano and Joe Schmidt, founders of CanvasOnDemand and other companies. Recognition Inc Magazine has ranked Bronto Software on their Inc 5000 list for six consecutive years (2009-2014). It was also marked as one of the 100 fastest growing software companies in North America. 
NCTA awarded Bronto their Software Company of the Year Award in 2011 and the Triangle Business Journal ranked Bronto #18 on their Fast 50, a list of the fastest growing privately owned companies in the Triangle. Bronto Software won the Stevie Award for Best Customer Service Department from the American Business Awards in 2009 and 2010. Bronto was named one of the Best Places to Work by the Triangle Business Journal in 2010, 2011, 2012 and 2014. Bronto Software was a SIIA CODiE Award finalist for Best Marketing Solution in 2011 and 2012, and finalist for Best Marketing Automation Solution in 2014, 2015 and 2016. References Software companies based in North Carolina Companies based in Durham, North Carolina Companies established in 2002 2002 establishments in North Carolina Software companies of the United States
18933234
https://en.wikipedia.org/wiki/Emacs
Emacs
Emacs or EMACS (Editor MACroS) is a family of text editors that are characterized by their extensibility. The manual for the most widely used variant, GNU Emacs, describes it as "the extensible, customizable, self-documenting, real-time display editor". Development of the first Emacs began in the mid-1970s, and work on its direct descendant, GNU Emacs, continues actively . Emacs has over 10,000 built-in commands and its user interface allows the user to combine these commands into macros to automate work. Implementations of Emacs typically feature a dialect of the Lisp programming language that provides a deep extension capability, allowing users and developers to write new commands and applications for the editor. Extensions have been written to manage email, files, outlines, and RSS feeds, as well as clones of ELIZA, Pong, Conway's Life, Snake and Tetris. The original EMACS was written in 1976 by David A. Moon and Guy L. Steele Jr. as a set of Editor MACroS for the TECO editor. It was inspired by the ideas of the TECO-macro editors TECMAC and TMACS. The most popular, and most ported, version of Emacs is GNU Emacs, which was created by Richard Stallman for the GNU Project. XEmacs is a variant that branched from GNU Emacs in 1991. GNU Emacs and XEmacs use similar Lisp dialects and are, for the most part, compatible with each other. XEmacs development is inactive. Emacs is, along with vi, one of the two main contenders in the traditional editor wars of Unix culture. Emacs is among the oldest free and open source projects still under development. History Emacs development began during the 1970s at the MIT AI Lab, whose PDP-6 and PDP-10 computers used the Incompatible Timesharing System (ITS) operating system that featured a default line editor known as Tape Editor and Corrector (TECO). Unlike most modern text editors, TECO used separate modes in which the user would either add text, edit existing text, or display the document. One could not place characters directly into a document by typing them into TECO, but would instead enter a character ('i') in the TECO command language telling it to switch to input mode, enter the required characters, during which time the edited text was not displayed on the screen, and finally enter a character (<esc>) to switch the editor back to command mode. (A similar technique was used to allow overtyping.) This behavior is similar to that of the program ed. Richard Stallman visited the Stanford AI Lab in 1972 or 1974 and saw the lab's E editor, written by Fred Wright. He was impressed by the editor's intuitive WYSIWYG (What You See Is What You Get) behavior, which has since become the default behavior of most modern text editors. He returned to MIT where Carl Mikkelsen, a hacker at the AI Lab, had added to TECO a combined display/editing mode called Control-R that allowed the screen display to be updated each time the user entered a keystroke. Stallman reimplemented this mode to run efficiently and then added a macro feature to the TECO display-editing mode that allowed the user to redefine any keystroke to run a TECO program. E had another feature that TECO lacked: random-access editing. TECO was a page-sequential editor that was designed for editing paper tape on the PDP-1 and typically allowed editing on only one page at a time, in the order of the pages in the file. 
Instead of adopting E's approach of structuring the file for page-random access on disk, Stallman modified TECO to handle large buffers more efficiently and changed its file-management method to read, edit, and write the entire file as a single buffer. Almost all modern editors use this approach. The new version of TECO quickly became popular at the AI Lab and soon accumulated a large collection of custom macros whose names often ended in MAC or MACS, which stood for macro. Two years later, Guy Steele took on the project of unifying the diverse macros into a single set. Steele and Stallman's finished implementation included facilities for extending and documenting the new macro set. The resulting system was called EMACS, which stood for Editing MACroS or, alternatively, E with MACroS. Stallman picked the name Emacs "because <E> was not in use as an abbreviation on ITS at the time." An apocryphal hacker koan alleges that the program was named after Emack & Bolio's, a popular Cambridge ice cream store. The first operational EMACS system existed in late 1976. Stallman saw a problem in too much customization and de facto forking, and set certain conditions for usage. The original Emacs, like TECO, ran only on the PDP-10 running ITS. Its behavior was sufficiently different from that of TECO that it could be considered a text editor in its own right, and it quickly became the standard editing program on ITS. Mike McMahon ported Emacs from ITS to the TENEX and TOPS-20 operating systems. Other contributors to early versions of Emacs include Kent Pitman, Earl Killian, and Eugene Ciccarelli. By 1979, Emacs was the main editor used in MIT's AI lab and its Laboratory for Computer Science. Implementations Early implementations In the following years, programmers wrote a variety of Emacs-like editors for other computer systems. These included EINE (EINE Is Not EMACS) and ZWEI (ZWEI Was EINE Initially), which were written for the Lisp machine by Mike McMahon and Daniel Weinreb, and Sine (Sine Is Not Eine), which was written by Owen Theodore Anderson. Weinreb's EINE was the first Emacs written in Lisp. In 1978, Bernard Greenberg wrote Multics Emacs almost entirely in Multics Lisp at Honeywell's Cambridge Information Systems Lab. Multics Emacs was later maintained by Richard Soley, who went on to develop the NILE Emacs-like editor for the NIL Project, and by Barry Margolin. Many versions of Emacs, including GNU Emacs, would later adopt Lisp as an extension language. James Gosling, who would later invent NeWS and the Java programming language, wrote Gosling Emacs in 1981. The first Emacs-like editor to run on Unix, Gosling Emacs was written in C and used Mocklisp, a language with Lisp-like syntax, as an extension language. Early ads for Computer Corporation of America's CCA EMACS, written by Steve Zimmerman, appeared in 1984. When GNU Emacs came out in 1985, comparisons noted that it was free while CCA EMACS cost $2,400. GNU Emacs Richard Stallman began work on GNU Emacs in 1984 to produce a free software alternative to the proprietary Gosling Emacs. GNU Emacs was initially based on Gosling Emacs, but Stallman's replacement of its Mocklisp interpreter with a true Lisp interpreter required that nearly all of its code be rewritten. This became the first program released by the nascent GNU Project. GNU Emacs is written in C and provides Emacs Lisp, also implemented in C, as an extension language. Version 13, the first public release, was made on March 20, 1985. 
The first widely distributed version of GNU Emacs was version 15.34, released later in 1985. Early versions of GNU Emacs were numbered as 1.x.x, with the initial digit denoting the version of the C core. The 1 was dropped after version 1.12, as it was thought that the major number would never change, and thus the numbering skipped from 1 to 13. In September 2014, it was announced on the GNU emacs-devel mailing list that GNU Emacs would adopt a rapid release strategy and version numbers would increment more quickly in the future. GNU Emacs was later ported to Unix. It offered more features than Gosling Emacs, in particular a full-featured Lisp as its extension language, and soon replaced Gosling Emacs as the de facto Unix Emacs editor. Markus Hess exploited a security flaw in GNU Emacs' email subsystem in his 1986 cracking spree in which he gained superuser access to Unix computers. Most of GNU Emacs functionality is implemented through a scripting language called Emacs Lisp. Because about 70% of GNU Emacs is written in the Elisp extension language, one only needs to port the C core which implements the Elisp interpreter. This makes porting Emacs to a new platform considerably less difficult than porting an equivalent project consisting of native code only. GNU Emacs development was relatively closed until 1999 and was used as an example of the Cathedral development style in The Cathedral and the Bazaar. The project has since adopted a public development mailing list and anonymous CVS access. Development took place in a single CVS trunk until 2008 and was then switched to the Bazaar DVCS. On November 11, 2014, development was moved to Git. Richard Stallman has remained the principal maintainer of GNU Emacs, but he has stepped back from the role at times. Stefan Monnier and Chong Yidong were maintainers from 2008 to 2015. John Wiegley was named maintainer in 2015 after a meeting with Stallman at MIT. As of early 2014, GNU Emacs has had 579 individual committers throughout its history. XEmacs Lucid Emacs, based on an early alpha version of GNU Emacs 19, was developed beginning in 1991 by Jamie Zawinski and others at Lucid Inc. One of the best-known early forks in free software development occurred when the codebases of the two Emacs versions diverged and the separate development teams ceased efforts to merge them back into a single program. Lucid Emacs has since been renamed XEmacs. Its development is currently inactive, with the most recent stable version 21.4.22 released in January 2009 (while a beta was released in 2013), while GNU Emacs has implemented many formerly XEmacs-only features. Other forks of GNU Emacs Other notable forks include: Aquamacs – based on GNU Emacs (Aquamacs 3.2 is based on GNU Emacs version 24 and Aquamacs 3.3 is based on GNU Emacs version 25) which focuses on integrating with the Apple Macintosh user interface Meadow – a Japanese version for Microsoft Windows SXEmacs – Steve Youngs' fork of XEmacs Various Emacs editors In the past, projects aimed at producing small versions of Emacs proliferated. GNU Emacs was initially targeted at computers with a 32-bit flat address space and at least 1 MiB of RAM. Such computers were high end workstations and minicomputers in the 1980s, and this left a need for smaller reimplementations that would run on common personal computer hardware. 
Today's computers have more than enough power and capacity to eliminate these restrictions, but small clones have more recently been designed to fit on software installation disks or for use on less capable hardware. Other projects aim to implement Emacs in a different dialect of Lisp or a different programming language altogether. Although not all are still actively maintained, these clones include: MicroEMACS, which was originally written by Dave Conroy and further developed by Daniel Lawrence and which exists in many variations. mg, originally called MicroGNUEmacs and, later, mg2a, a public-domain offshoot of MicroEMACS intended to more closely resemble GNU Emacs. Now installed by default on OpenBSD. JOVE (Jonathan's Own Version of Emacs), Jonathan Payne's non-programmable Emacs implementation for UNIX-like systems. MINCE (MINCE Is Not Complete Emacs), a version for CP/M and later DOS, from Mark of the Unicorn. MINCE evolved into Final Word, which eventually became the Borland Sprint word processor. Perfect Writer, a CP/M implementation derived from MINCE that was included circa 1982 as the default word processor with the very earliest releases of the Kaypro II and Kaypro IV. It was later provided with the Kaypro 10 as an alternative to WordStar. Freemacs, a DOS version that uses an extension language based on text macro expansion and fits within the original 64 KiB flat memory limit. Zile. Zile was a recursive acronym for Zile Is Lossy Emacs, but the project was rewritten in Lua and now gives the expansion as Zile Implements Lua Editors. The new Zile still includes an implementation of Emacs in Lua called Zemacs. There is also an implementation of vi called Zi. Zmacs, for the MIT Lisp Machine and its descendants, implemented in ZetaLisp. Climacs, a Zmacs-influenced variant implemented in Common Lisp. Epsilon, an Emacs clone by Lugaru Software. Versions for DOS, Windows, Linux, FreeBSD, Mac OS X and OS/2 are bundled in the release. It uses a non-Lisp extension language with C syntax and used a very early concurrent command shell buffer implementation under single-tasking MS-DOS. PceEmacs is the Emacs-based editor for SWI-Prolog. Amacs, an Apple II ProDOS version of Emacs implemented in 6502 assembly by Brian Fox. Hemlock, originally written in Spice Lisp, then Common Lisp. A part of CMU Common Lisp. Influenced by Zmacs. Later forked by Lucid Common Lisp (as Helix), LispWorks and Clozure CL projects. There is also a Portable Hemlock project, which aims to provide a Hemlock that runs on several Common Lisp implementations. umacs, an implementation under OS-9. edwin, an Emacs-like text editor included with MIT/GNU Scheme. Editors with Emacs emulation The Cocoa text system uses some of the same terminology and understands many Emacs navigation bindings. This is possible because the native UI uses the Command key (equivalent to Super) instead of the Control key. Eclipse (IDE) provides a set of Emacs keybindings. Epsilon (text editor) defaults to Emacs emulation and supports a vi mode. GNOME Builder has an emulation mode for Emacs. GNU Readline is a line editor that understands the standard Emacs navigation keybindings. It also has a vi emulation mode. IntelliJ IDEA provides a set of Emacs keybindings. JED has an emulation mode for Emacs. Joe's Own Editor emulates Emacs keybindings when invoked as jmacs. MATLAB provides Emacs keybindings for its editor. KornShell has an Emacs line editing mode that predates GNU Readline. Visual Studio Code provides an extension to emulate Emacs keybindings. 
Features Emacs is primarily a text editor and is designed for manipulating pieces of text, although it is capable of formatting and printing documents like a word processor by interfacing with external programs such as LaTeX, Ghostscript or a web browser. Emacs provides commands to manipulate and differentially display semantic units of text such as words, sentences, paragraphs and source code constructs such as functions. It also features keyboard macros for performing user-defined batches of editing commands. GNU Emacs is a real-time display editor, as its edits are displayed onscreen as they occur. This is standard behavior for modern text editors, but EMACS was among the earliest to implement it. The alternative is having to issue a distinct command to display text (e.g. after modifying it), as is done in line editors such as ed (Unix), ED (CP/M), and Edlin (MS-DOS). General architecture Almost all of the functionality in Emacs, including basic editing operations such as the insertion of characters into a document, is achieved through functions written in a dialect of the Lisp programming language. The dialect used in GNU Emacs is known as Emacs Lisp (ELisp). The ELisp layer sits atop a stable core of basic services and platform abstraction written in the C programming language. In this Lisp environment, variables and functions can be modified with no need to recompile or restart Emacs. Most configuration is stored in variables, and can be changed simply by changing variable values. The main text editing data structure is called a buffer, a block of text with additional attributes; the most important of these are the point (cursor location) and the mark (another location, delimiting the selected region together with the point), the name of the file the buffer is visiting (if applicable) and the local values of ELisp variables specific to the buffer. Such local values specify in particular the set of active modes: exactly one major mode, typically adapting the editor to the content type of the buffer (such as ELisp, C or HTML), and any number of minor modes controlling other editor behaviors independent of content type. Any interaction with the editor (like key presses or clicking a mouse button) is realized by executing Elisp code, typically a command, which is a function explicitly designed for interactive use. Keys can be arbitrarily redefined and commands can also be accessed by name; some commands evaluate arbitrary Elisp code from buffers (e.g. eval-region or eval-buffer). Buffers are displayed in windows, which are tiled portions of the terminal screen or the GUI window (called a frame in Emacs terms; multiple frames are possible). Depending on configuration, windows include scroll bars, line numbers, sometimes a 'header line' typically to ease navigation, and a mode line at the bottom (usually displaying the buffer name, the active modes and the point position of the buffer, among others). The bottom of every frame is used for messages (where it is called the 'echo area') and for text input for commands (where it is called the 'minibuffer'). Multiple windows can be opened onto the same buffer, for example to see different parts of a long text, and multiple buffers can share the same text, for example to take advantage of different major modes in a mixed-language file. The major mode can also be changed manually as needed with M-x <mode name>. 
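As a minimal illustration of this architecture, the following hypothetical Emacs Lisp snippet (the command name and key choice are illustrative examples, not part of the standard distribution) defines a new interactive command and binds it to a key:

;; Define a new command. The docstring becomes part of Emacs'
;; self-documenting help system and can be viewed with C-h f.
(defun my-insert-timestamp ()
  "Insert the current date and time at point."
  (interactive)
  (insert (format-time-string "%Y-%m-%d %H:%M")))

;; Bind the command to the key sequence C-c t in all buffers.
(global-set-key (kbd "C-c t") #'my-insert-timestamp)

Evaluating these forms (for example with eval-buffer) makes the command available immediately, without recompiling or restarting Emacs; it can then be run either with the new key binding or by name with M-x my-insert-timestamp.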
Customizability Keystrokes can be recorded into macros and replayed to automate complex, repetitive tasks. This is often done on an ad-hoc basis, with each macro discarded after use, although macros can be saved and invoked later. At startup, Emacs executes an Emacs Lisp script, traditionally named ~/.emacs (recent versions also look for ~/.emacs.el, ~/.emacs.d/init.el, and ~/.config/emacs/init.el; Emacs will execute the first one it finds, ignoring the rest). This personal customization file can be arbitrarily long and complex, but typical content includes: setting global variables or invoking functions to customize Emacs behaviour; key bindings to override standard ones and to add shortcuts for commands that the user finds convenient but that don't have a key binding by default; loading, enabling and initializing extensions (Emacs comes with many extensions, but only a few are loaded by default); configuring event hooks to run arbitrary code at specific times, for example to automatically recompile source code after saving a buffer; and executing arbitrary files, usually to split an overly long configuration file into manageable and homogeneous parts. An example of such a file is sketched below. The customize extension allows the user to set configuration properties such as the color scheme interactively, from within Emacs, in a more user-friendly way than by setting variables in the init file: it offers search, descriptions and help text, multiple choice inputs, reverting to defaults, modification of the running Emacs instance without reloading, and other conveniences similar to the preferences functionality of other programs. The customized values are saved in the user's init file (or another designated file) automatically. Themes, affecting the choice of fonts and colours, are defined as elisp files and chosen through the customize extension. Modes support editing a range of programming languages (e.g., emacs-lisp-mode, c-mode, java-mode, ESS for R) by changing fonts to highlight the code and by adapting keybindings (forward-function vs. forward-page). Other modes include ones that support editing spreadsheets (dismal) and structured text. 
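As a concrete sketch of the kind of init file described above (a hypothetical example; the particular variables, commands and hook are illustrative choices rather than required ones), a short personal configuration might contain:

;; ~/.emacs.d/init.el -- a small, illustrative personal configuration.

;; Set global variables to change default behaviour.
(setq inhibit-startup-screen t)         ; skip the welcome screen
(setq-default indent-tabs-mode nil)     ; indent with spaces, not tabs

;; Override a standard key binding with a preferred command.
(global-set-key (kbd "C-x C-b") #'ibuffer)

;; Enable an extension that is not turned on by default.
(show-paren-mode 1)                     ; highlight matching parentheses

;; Use a hook to run code at a specific time: remove trailing
;; whitespace every time a buffer is saved.
(add-hook 'before-save-hook #'delete-trailing-whitespace)

Each form corresponds to one of the categories of typical init-file content listed above, and the same kinds of settings can also be made interactively through the customize interface.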
Self-documenting The first Emacs contained a help library that included documentation for every command, variable and internal function. Because of this, Emacs proponents described the software as self-documenting in that it presents the user with information on its normal features and its current state. Each function includes a documentation string that is displayed to the user on request, a practice that subsequently spread to programming languages including Lisp, Java, Perl, and Python. This help system can take users to the actual code for each function, whether from a built-in library or an added third-party library. Emacs also has a built-in tutorial. Emacs displays instructions for performing simple editing commands and invoking the tutorial when it is launched with no file to edit. The tutorial is by Stuart Cracraft and Richard Stallman. Culture Church of Emacs The Church of Emacs, formed by Richard Stallman, is a parody religion created for Emacs users. While it refers to vi as the editor of the beast (vi-vi-vi being 6-6-6 in Roman numerals), it does not oppose the use of vi; rather, it considers proprietary software anathema. ("Using a free version of vi is not a sin but a penance.") The Church of Emacs has its own newsgroup, alt.religion.emacs, that has posts purporting to support this parody religion. Supporters of vi have created an opposing Cult of vi. Stallman has jokingly referred to himself as St. IGNUcius, a saint in the Church of Emacs. Emacs pinky There is folklore attributing a repetitive strain injury colloquially called Emacs pinky to Emacs' strong dependence on modifier keys, although there have not been any studies done to show that Emacs causes more such problems than other keyboard-heavy computer programs. Users have addressed this through various approaches. Some users recommend simply using the two Control keys on typical PC keyboards like Shift keys while touch typing, to avoid overly straining the left pinky; proper use of the keyboard in this way reduces the risk of RSI. Software-side methods include: Customizing the key layout so that the Caps Lock key is transposed with the Control key. Similar techniques include defining the Caps Lock key as an additional Control key or transposing the Control and Meta keys. Software, such as xwrits or the built-in type-break mode in Emacs, that reminds the user to take regularly scheduled breaks. Using the ErgoEmacs keybindings (with the minor mode ergoemacs-mode). Customizing the whole keyboard layout to move statistically frequent Emacs keys to more appropriate places. Packages such as ace-jump-mode, or elisp extensions that provide similar tiered-navigation functionality, which first ask for a character and then replace occurrences of that character with access keys for cursor movement. evil-mode, an advanced Vim emulation layer. god-mode, which provides an approach similar to vim's, with a mode for entering Emacs commands without modifier keys. Using the customized key layout offered by Spacemacs, a project in which the Space key is used as the main key for initiating control sequences. The project also heavily incorporates both evil-mode and god-mode. StickyKeys, which turns key combinations into key sequences. Emacs' built-in viper-mode, which allows use of the vi key layout for basic text editing and the Emacs scheme for more advanced features. Giving a dual role to a more comfortably accessed key such as the space bar, so that it functions as a Control key when pressed in combination with other keys. Ergonomic keyboards or keyboards with a greater number of keys adjacent to the space bar, such as Japanese keyboards, allow thumb control of other modifier keys such as Meta or Shift. Using a limited ergonomic subset of keybindings, and accessing other functionality by typing M-x <command-name>; M-x itself can also be rebound. Driving Emacs through voice input. Hardware solutions include special keyboards such as Kinesis's Contoured Keyboard, which places the modifier keys where they can easily be operated by the thumb, or the Microsoft Natural keyboard, whose large modifier keys are placed symmetrically on both sides of the keyboard and can be pressed with the palm of the hand. Foot pedals can also be used. The Emacs pinky is a relatively recent development. The Space-cadet keyboard on which Emacs was developed had oversized Control keys that were adjacent to the space bar and were easy to reach with the thumb. Terminology The word emacs is sometimes pluralized as emacsen, by phonetic analogy with boxen and VAXen, referring to different varieties of Emacs. 
See also Comparison of text editors Conkeror GNU TeXmacs List of text editors List of Unix commands Integrated development environment References Bibliography PDF PDF HTML External links List of Emacs implementations Architectural overview 1970s in computing Computer-related introductions in 1976 1976 software Free file comparison tools Free integrated development environments Free software programmed in C Free software programmed in Lisp Free text editors GNU Project software Hex editors Linux integrated development environments Linux text editors MacOS text editors OpenVMS text editors Text editors Unix text editors Windows text editors
32678435
https://en.wikipedia.org/wiki/Them%27s%20Fightin%27%20Herds
Them's Fightin' Herds
Them's Fightin' Herds is an indie fighting game developed by Mane6 and published by Maximum Games. Atypical of most side-scrolling fighting games, it features a cast of all-female ungulate characters fighting each other to find a champion worthy of gaining a magical key that will protect their world from predators. It was first released into early access in February 2018; the full release followed on April 30, 2020 for Microsoft Windows, with a Linux version on March 25, 2021 and a beta macOS version on October 27, 2021. The project is a spiritual successor to Mane6's original planned fighting game Fighting Is Magic, based on the animated television show My Little Pony: Friendship Is Magic. Mane6 was a team of nine developers and part of the adult fandom of the show. Fighting Is Magic featured the six main pony characters from that show. Early versions of this game were released in 2012, drawing attention both from players at the Evolution Championship Series, due to the unique moves associated with non-bipedal characters in fighting games, and from Hasbro, which owned the intellectual property to My Little Pony. After Hasbro sent Mane6 a cease and desist letter, Mane6 discarded the assets tied to the show, while keeping some of the fundamental gameplay factors to create the new title Them's Fightin' Herds. The creator of My Little Pony: Friendship Is Magic, Lauren Faust, offered to help with designing the new characters for the game. The development of the game was completed with crowdfunding through Indiegogo. A separate effort created by fans not associated with the Mane6 team released their Fighting Is Magic: Tribute Edition of the original Mane6 My Little Pony-inspired game in early 2014. This game was made from various beta assets of the original that Mane6 had developed in the first two years and that were later leaked by other parties. Gameplay The game uses a four-button fighting system: a button each for light, medium, heavy, and magic attacks, and includes staple fighting game maneuvers such as launchers, pushblocks, and cross-ups. The game supports both local and online multiplayer via a near-isometric pixel art lobby system. Players who also own BlazBlue: Central Fiction, Guilty Gear Xrd Rev 2 or Skullgirls on Steam will unlock special lobby avatars inspired by characters from those games. Synopsis Setting and characters Them's Fightin' Herds is a fighting game based on sapient four-legged hoofed creatures from the world of Fœnum, which is being threatened by the return of carnivorous beasts known as the Predators. The Predators were locked away in a separate realm, but they have found a way to escape it. To put an end to the threat, selected champions of the various Fœnum races are chosen as "Key Seekers" by their tribes to find the key that will lock the Predators away again. The Key Seekers must face each other in a friendly competition to determine which one will be the Key Keeper who will face the champion of the Predators. There are six playable characters—Arizona the cow (voiced by Tara Strong), Velvet the reindeer (Tia Ballard), Paprika the alpaca (Marieve Herington), Oleander the unicorn (Alexa Kahn) who can summon a demonic being known as "Fred" (Keith Ferguson), Pom the sheep (Allie Moreno) and Tianhuo the longma (Kay Bess)—each with different fighting move sets and unique movement options such as flight, short hops, double jumps, or air dashes. Jessica Gee voices the game's announcer. 
A seventh playable character, pirate goat Shanty (Afi Ekulona), was also released as downloadable content due to the crowdfunding campaign reaching its stretch goals. Storyline The game's story mode begins with Arizona making her way from the prairie to Reine City, sent on her quest by her parents, as the declared Champion of the Prairie. Along the way, she passes through a system of caves that used to be a salt mine, which are now filled with shadowy predators. In Reine City, Arizona heads to the museum to find some clues about the Prophet's Key. When Velvet shows up and learns that Arizona is the Champion of the Prairie, she challenges Arizona to a fight on the steps in front of the museum. Arizona defeats her, and Velvet's ice sprites carry her away. Desperate to get Arizona out of the city, a citizen gives her his boat tickets and sends her off to the Alpake Highlands. In the Highlands, Arizona climbs up a mountain while fighting off birds of prey. At the top, exhausted, she is rescued by members of the local Alpake tribe. After waking up, she talks to their leader, Adobo, who warns her of the "Terror of the Foggy Mountaintops". Arizona is pursued by a mysterious figure in the fog, but the figure always stays hidden a distance away while laughing. At a clearing in the fog, Arizona examines a stone only to see it rumble; Paprika bursts out of it and proceeds to "fight" Arizona through a series of hugs, kisses, and fourth-wall-breaking stunts. Arizona prevails, leaving Paprika dazed and out cold. In the Temple of the Ancestors, which Arizona learned about at the Reine City Museum and from Adobo, Arizona fights more predators while solving some puzzles. Finally, she reaches the Hall of the Monolith. Before she can inspect the monolith, Oleander shows up, and like with Velvet, realizes that she and Arizona are rivals. Oleander and Arizona begin fighting, with Fhtng th§ ¿nsp§kbl (or "Fred") helping her out as the battle progresses. Arizona eventually prevails, but Fred knocks her out and rants about how she is ruining his plans. Fred then wakes Oleander up to tell her that she won. While Oleander copies down the inscriptions on the monolith, Fred loots Arizona for all of the items she found along her journey. After the duo leave, Arizona wakes up, severely dazed, and at a loss for what to do next. If the player loses during the third phase of Oleander's fight, the same thing happens, but Fred does not knock Arizona out and talk about how she is ruining his plans. My Little Pony: Fighting Is Magic Development and gameplay The My Little Pony: Friendship Is Magic series, while aimed at young girls and their parents, has drawn a large number of adult fans from 15–35, typically male, who are often referred to as "bronies". These fans were drawn in by the creativity of Lauren Faust and her team, who wrote the show to appeal across generations. The show's characters, Flash animations, adventure-themed stories, and occasional pop cultural references are considered other draws for the older audience. Many members of the brony fandom are technology-savvy, a common activity in the fandom being the creation of images of the show's ponies, parodying other commercial works including video games. Fighting Is Magic grew out of a set of images for a hypothetical "Marevel vs. Clopcom" game, parodying the Marvel vs. Capcom fighting game series, created by Anukan, who would later become one of the Mane6 developers. 
Anukan did not expect anything to come from these images, but found that on discussion boards, fans were postulating how the various pony characters would translate into fighting games, such as what sorts of moves they would use. One of these users, Nappy, recognized the potential in realizing a complete game, and began the formation of Mane6, including Anukan, Jay Wright, Lucas Ellinghaus, James Workman, and Prominence. The team decided on using the Fighter Maker 2D game engine, despite having no prior experience with the software. After getting the basics of having characters hit one another in place, they discovered that they could get the engine to include wall bounces—the rebounding of a character from walls at the edges of the screen—which, according to Ellinghaus, showed "the potential for both the game and the team". Much of the development work was spent in trying to achieve certain effects within the Fighter Maker engine, referred to by the team as "taming" the engine. The game was initially developed as a three-button fighter, keeping it simple for players to pick up while still offering a variety of combinations of moves, and limiting the number of animations needed for the various moves for all characters. The three buttons were designed to mimic the light, medium, and heavy attacks of the Marvel vs. Capcom series. However, the development team also wanted to include an EX system like the one in Street Fighter IV, where pressing two attack buttons at the same time executes a special move. Within the initial game engine, Fighter Maker, the game would only register two simultaneous button presses if they were within the same processing time frame, which would hinder gameplay. To work around this, the team designed a fourth button (a "magic" button, as described by Mane6), used to have the character remain still while doing a specific activity that would build up an EX meter, such as Twilight Sparkle reading a book. With a full EX meter, the player would then be able to execute special moves with any of the other three buttons. Mane6 focused initial efforts on building up the six main characters from the show as the initial fighters, but stated that an expanded roster of up to seventeen characters would be in their planned final version. The game was to be downloadable and free-to-play, with local and online multiplayer modes as well as a story mode. Character-specific moves were to be present in-game. The individual movesets for each character are based not only on how they are represented in the show, but also on other fighting game characters used to fill out their fighting style. Twilight Sparkle, in the show, is a unicorn with powerful magic abilities, which the Mane6 matched with Akuma from the Street Fighter series, while Rainbow Dash, an aggressive pegasus, was compared with Magneto's playstyle in the Marvel vs. Capcom series. Fluttershy, a timid character within the show, does not fight directly, but instead her animal companions fight for her, creating a playstyle similar to Eddie from Guilty Gear XX or Phoenix Wright in Ultimate Marvel vs. Capcom 3. In another case, Pinkie Pie, a hyperactive pony who is shown to have some fourth-wall-breaking, reality-warping powers in the show, allowed the team to experiment with a wide range of haphazard moves. They had designed one move where Pinkie would use her "party cannon" to launch a present at the opponent, and she would then pop out of the present at close range.
As they were developing the game, Persona 4 Arena was released, in which the character of Kuma/Teddie had a similar move. They realized they were thinking along the same lines as the professional developers and continue to work more of Pinkie's moves based on Teddie's moveset. While these other characters helped to inspire additional moves, the Mane6 team made sure to stay true to the characterization on the show and not introduce moves that would be outside of this, such as Fluttershy herself making an aggressive attack. After each character's moveset was tested and refined based on testing feedback, the team then began to animate each character, first by creating Flash-based animations and then transforming these to sprites needed for Fighter Maker. The team noted that the pony shape of the characters proved an additional challenge both visually and for the engine. With most fighting games, players can easily identify heads, arms, and legs, and know where to watch for attacks, but the same was not true for the ponies. They proceeded to add effects like sparks on the attacking character and opponent responses to help players recognize attacks. In terms of the engine, the hitboxes for the ponies were more horizontal than vertical as would be the case with humanoid fighter characters, and they had to work around this in the engine to accurately model attacks. Additionally the more horizontal shapes of the characters limited how much of the fighting stage space they could use; they overcame this within the game by using a 3/4ths view of the characters that shortened their on-screen lengths giving them more space to work with. Release The team had released early pre-alpha gameplay footage as they added the main six characters to the game. Though the team had expected the game to be popular within the brony community, the game has been noticed by other fighting game players through these videos. The team was invited to demonstrate their game at the July 2012 Evolution Championship Series by one of its founders, Joey "MrWizard" Cuellar, as part of other indie fighting games. For the 2013 Evolution series, the game was one of seventeen nominees for the "Player's Choice" slot in the main competition. Though the game is an unlicensed work of Hasbro's My Little Pony franchise, the Mane6 team did not receive any cease and desist notices from the company until February 8, 2013. Like much of the rest of the Internet phenomenon surrounding Friendship is Magic, Hasbro had been mostly tolerant; allowing episodes of the show along with parodies and mashups of the works to be redistributed freely across the Internet. This helped to create a participatory culture that has drawn a broader audience to the show, even going as far as to say they have no intentions of ever filing takedown notices as they see this as "Free Advertising and spreading". The Mane6 had taken no monetary donations for their work and planned to keep the game as a free release. Further, while a fighting game, they did not show any characters getting wounded, or show any signs of blood, as to keep with the generally nonviolent theme of the show. The Mane6 had stated that even if the project was shut down, they had learned much from the effort to apply towards their next project with original characters, which they were already planning. Cease and desist from Hasbro An unfinished version of the game was leaked to 4chan's /mlp/ board on August 2, 2012. 
Mane6 responded by terminating their QA program and pushing the project into a closed development cycle. In February 2013, shortly after the 2013 EVO voting selection, Hasbro's lawyers sent a cease and desist letter to the Mane6 team; This was only a few weeks before they had expected to be completed with the initial version of their game. They obeyed the cease and desist letter, halting all production and removing all assets from their website, while Mane6 attempted to enter legal negotiations with Hasbro. Artist Elosande resigned from the team. The team also sought legal advice to fight the cease and desist but were told it would be an expensive battle. Mane6 were unable to come to agreements with Hasbro and started to redo the game using new artwork assets. They subsequently renamed the new game to simply Fighting Is Magic. On February 28, 2014, a fandom news site, Equestria Daily, announced the release of a "finished" version of the game. Using a combination of the Mane6 team's unfinished build and fan-made contributions, this version features all main six characters of the television show as playable, as well as new stages and multiplayer capability. Now known as Fighting is Magic: Tribute Edition, the game is openly available for download and play. Them's Fightin' Herds With no legal option to continue to use the My Little Pony characters, the Mane6 team opted to keep most of their work to date and reworked the game with new art assets, retaining the theme of four-legged creatures. Faust herself supported the fan effort, understanding the "irony" of a fighting game based on a show about friendship, but appreciated that "the original version of the game was that they made the Ponies fight in character" without resorting to typical fighter elements like weapons, and praising the animations that the team has already built. On hearing of the cease and desist, Faust contacted the team, offering to provide some of her time to create new characters for their game, and her official involvement with the project was announced at the end of February. The team accepted her offer; developer Jay Wright noted that "you can't copyright Lauren's distinctive style", and that while the game will still be unique, it will likely still carry the spirit of My Little Pony. According to Faust, she was happy to provide "my little part to help Mane6 finish up this game in a way that stays true to the spirit of the original—but in a way that can freely be shared". Faust also helped to develop the story and setting for the game. She noted that the common story concept for fighting games, where the characters would be fighting to be the champions of a tourney, was overused, and instead designed one around where the individual characters have already been determined to be champions from their individual tribes, and now are fighting each other for the key, each believing it is their destiny to obtain and use the key against the Predators. The characters in the game remain four-legged as with the My Little Pony version, which Mane6 developer Francisco Copado believes is a first for a fighting game. Silhouette teaser images released by the Mane6 team showed a trio of four-legged non-pony characters as preliminary designs for the new game, while as of April 2013, three additional characters are still in development. The Mane6 also gained contributions from Lab Zero Games, developers of Skullgirls. Lab Zero had developed a fighting game engine, named Z-Engine, from scratch for their own title. 
While others had contacted Lab Zero to use the Z-Engine for other fighting games, the studio was drawn to the work of the Mane6 who, according to Lab Zero's Mike Zaimont, had shown a high degree of competence in what made a good fighting game compared to other efforts, and wanted to support their work. Lab Zero used an Indiegogo crowdfunding effort to gain development funds; having readily cleared their initial target of $150,000, and alongside additional stretch goals of new characters and content for their own game, they included a $725,000 target that would allow Mane6 to use and distribute the Z-Engine for free as part of their new game, and challenged the brony community to help towards that via online donation. The goal was met on the final day of the funding campaign, March 27, 2013. The Z-Engine allowed the Mane6 team to expand beyond the limitations of Fighter Maker, though they kept some of the conventions learned from working with Fighter Maker, such as the use of 3/4 views to reduce the horizontal lengths of the four-legged characters. The Skullgirls engine also brought in the open-source networking code GGPO ("Good Game Peace Out"), designed specifically for overcoming known limitations of playing fighting games online. GGPO uses a system called "rollback" that delays the game's response to the user's input slightly, masked by character animations, coupled with predictive behavior to appear to give zero-latency gameplay for players against online opponents. In August 2015, the Mane6 revealed the revamped title, Them's Fightin' Herds, which was suggested by Craig McCracken. They announced that they would be starting an Indiegogo campaign on September 21, 2015, seeking to raise to complete the game. The funding was successful with a final funding amount of , meeting the stretch goals to offer the game on OS X and Linux computers alongside Microsoft Windows, and the introduction of a seventh playable character, a goat, along with stages and stories based on that character. By early February 2018, the Mane6 affirmed the game's early access release on February 22, 2018. They had gained support from Humble Bundle for publishing; in turn, Humble Bundle had reached out to the publishers of other fighting games to bring themed assets into Them's Fightin' Herds, including Guilty Gear Xrd, BlazBlue: Central Fiction, and Skullgirls. On December 1, 2021, the partnership between Mane6 and Humble Bundle ended. On January 20, 2022, Mane6 announced that they had been purchased by and entered a publishing deal with Maximum Games. The game was released on April 30, 2020. Only the first chapter of the game's story mode was available at launch, though all six characters were fully playable in the game's competitive modes. Mane6 planned to continue supporting the game and to add additional chapters to the story mode following release. Art and gameplay for Shanty were showcased by Mane6 around August 2020, though they had no planned date for when the character would be released. Shanty was released on March 25, 2021, alongside the Linux version of the game.
Reception
IGN gave the game a score of 8/10.
One of the challenges that the Mane6 developers have stated with getting people to play the game is the stigma of its basis on the My Little Pony foundations and association with the brony fandom; however, Mane6 president Aaron Stavely says that once they have been able to get fighting game players to try out the game, they have generally seen them impressed with the game and walk away with positive feedback. Them's Fightin' Herds was supposed to be one of four games used for the open tournaments in the revamped online version of Evo 2020, which was cancelled. Both Them's Fightin' Herds and Skullgirls were used for the online events due to their capable forms of online play enabled by GGPO, along with Mortal Kombat 11 and Killer Instinct, which were both developed with internal versions of rollback netcodes. References External links Official developer Twitter feed 2020 video games Crowdfunded video games Early access video games Fangames Fantasy video games Fighting games Indie video games Indiegogo projects Fiction about unicorns Linux games MacOS games Multiplayer and single-player video games Multiplayer online games Retro-style video games Video games about animals Video games developed in the United States Video games featuring female protagonists Video games set on fictional planets Windows games Humble Games games
60600
https://en.wikipedia.org/wiki/Barcode
Barcode
A barcode or bar code is a method of representing data in a visual, machine-readable form. Initially, barcodes represented data by varying the widths and spacings of parallel lines. These barcodes, now commonly referred to as linear or one-dimensional (1D), can be scanned by special optical scanners, called barcode readers, of which there are several types. Later, two-dimensional (2D) variants were developed, using rectangles, dots, hexagons and other patterns, called matrix codes or 2D barcodes, although they do not use bars as such. 2D barcodes can be read using purpose-built 2D optical scanners, which exist in a few different forms. 2D barcodes can also be read by a digital camera connected to a microcomputer running software that takes a photographic image of the barcode and analyzes the image to deconstruct and decode the 2D barcode. A mobile device with an inbuilt camera, such as a smartphone, can function as the latter type of 2D barcode reader using specialized application software (the same sort of mobile device could also read 1D barcodes, depending on the application software). The barcode was invented by Norman Joseph Woodland and Bernard Silver and patented in the US in 1952. The invention was based on Morse code that was extended to thin and thick bars. However, it took over twenty years before this invention became commercially successful. The UK magazine 'Modern Railways' (December 1962, pages 387-389) records how British Railways had already perfected a barcode-reading system capable of correctly reading rolling stock travelling at with no mistakes, but the system was abandoned when privatisation of the railways took place. An early use of one type of barcode in an industrial context was sponsored by the Association of American Railroads in the late 1960s. Developed by General Telephone and Electronics (GTE) and called KarTrak ACI (Automatic Car Identification), this scheme involved placing colored stripes in various combinations on steel plates which were affixed to the sides of railroad rolling stock. Two plates were used per car, one on each side, with the arrangement of the colored stripes encoding information such as ownership, type of equipment, and identification number. The plates were read by a trackside scanner located, for instance, at the entrance to a classification yard, while the car was moving past. The project was abandoned after about ten years because the system proved unreliable after long-term use. Barcodes became commercially successful when they were used to automate supermarket checkout systems, a task for which they have become almost universal. The Uniform Grocery Product Code Council had chosen, in 1973, the barcode design developed by George Laurer. Laurer's barcode, with vertical bars, printed better than the circular barcode developed by Woodland and Silver. Their use has spread to many other tasks that are generically referred to as automatic identification and data capture (AIDC). The first scanning of the now-ubiquitous Universal Product Code (UPC) barcode was on a pack of Wrigley Company chewing gum in June 1974 at a Marsh supermarket in Troy, Ohio, using a scanner produced by Photographic Sciences Corporation. QR codes, a specific type of 2D barcode, have recently become very popular due to the growth in smartphone ownership.
Other systems have made inroads in the AIDC market, but the simplicity, universality and low cost of barcodes has limited the role of these other systems, particularly before technologies such as radio-frequency identification (RFID) became available after 1995. History In 1948 Bernard Silver, a graduate student at Drexel Institute of Technology in Philadelphia, Pennsylvania, US overheard the president of the local food chain, Food Fair, asking one of the deans to research a system to automatically read product information during checkout. Silver told his friend Norman Joseph Woodland about the request, and they started working on a variety of systems. Their first working system used ultraviolet ink, but the ink faded too easily and was expensive. Convinced that the system was workable with further development, Woodland left Drexel, moved into his father's apartment in Florida, and continued working on the system. His next inspiration came from Morse code, and he formed his first barcode from sand on the beach. "I just extended the dots and dashes downwards and made narrow lines and wide lines out of them." To read them, he adapted technology from optical soundtracks in movies, using a 500-watt incandescent light bulb shining through the paper onto an RCA935 photomultiplier tube (from a movie projector) on the far side. He later decided that the system would work better if it were printed as a circle instead of a line, allowing it to be scanned in any direction. On 20 October 1949, Woodland and Silver filed a patent application for "Classifying Apparatus and Method", in which they described both the linear and bull's eye printing patterns, as well as the mechanical and electronic systems needed to read the code. The patent was issued on 7 October 1952 as US Patent 2,612,994. In 1951, Woodland moved to IBM and continually tried to interest IBM in developing the system. The company eventually commissioned a report on the idea, which concluded that it was both feasible and interesting, but that processing the resulting information would require equipment that was some time off in the future. IBM offered to buy the patent, but the offer was not accepted. Philco purchased the patent in 1962 and then sold it to RCA sometime later. Collins at Sylvania During his time as an undergraduate, David Jarrett Collins worked at the Pennsylvania Railroad and became aware of the need to automatically identify railroad cars. Immediately after receiving his master's degree from MIT in 1959, he started work at GTE Sylvania and began addressing the problem. He developed a system called KarTrak using blue and red reflective stripes attached to the side of the cars, encoding a six-digit company identifier and a four-digit car number. Light reflected off the colored stripes was read by photomultiplier vacuum tubes. The Boston and Maine Railroad tested the KarTrak system on their gravel cars in 1961. The tests continued until 1967, when the Association of American Railroads (AAR) selected it as a standard, Automatic Car Identification, across the entire North American fleet. The installations began on 10 October 1967. However, the economic downturn and rash of bankruptcies in the industry in the early 1970s greatly slowed the rollout, and it was not until 1974 that 95% of the fleet was labeled. To add to its woes, the system was found to be easily fooled by dirt in certain applications, which greatly affected accuracy. 
The AAR abandoned the system in the late 1970s, and it was not until the mid-1980s that they introduced a similar system, this time based on radio tags. The railway project had failed, but a toll bridge in New Jersey requested a similar system so that it could quickly scan for cars that had purchased a monthly pass. Then the U.S. Post Office requested a system to track trucks entering and leaving their facilities. These applications required special retroreflector labels. Finally, Kal Kan asked the Sylvania team for a simpler (and cheaper) version which they could put on cases of pet food for inventory control.
Computer Identics Corporation
In 1967, with the railway system maturing, Collins went to management looking for funding for a project to develop a black-and-white version of the code for other industries. They declined, saying that the railway project was large enough, and they saw no need to branch out so quickly. Collins then quit Sylvania and formed the Computer Identics Corporation. As its first innovations, Computer Identics moved from using incandescent light bulbs in its systems, replacing them with helium–neon lasers, and incorporated a mirror as well, making it capable of locating a barcode up to several feet in front of the scanner. This made the entire process much simpler and more reliable, and typically enabled these devices to deal with damaged labels, as well, by recognizing and reading the intact portions. Computer Identics Corporation installed one of its first two scanning systems in the spring of 1969 at a General Motors (Buick) factory in Flint, Michigan. The system was used to identify a dozen types of transmissions moving on an overhead conveyor from production to shipping. The other scanning system was installed at General Trading Company's distribution center in Carlstadt, New Jersey to direct shipments to the proper loading bay.
Universal Product Code
In 1966, the National Association of Food Chains (NAFC) held a meeting on the idea of automated checkout systems. RCA, which had purchased the rights to the original Woodland patent, attended the meeting and initiated an internal project to develop a system based on the bullseye code. The Kroger grocery chain volunteered to test it. In mid-1970, the NAFC established the Ad-Hoc Committee for U.S. Supermarkets on a Uniform Grocery-Product Code to set guidelines for barcode development. In addition, it created a symbol-selection subcommittee to help standardize the approach. In cooperation with the consulting firm McKinsey & Co., they developed a standardized 11-digit code for identifying products. The committee then sent out a contract tender to develop a barcode system to print and read the code. The request went to Singer, National Cash Register (NCR), Litton Industries, RCA, Pitney-Bowes, IBM and many others. A wide variety of barcode approaches was studied, including linear codes, RCA's bullseye concentric circle code, starburst patterns and others. In the spring of 1971, RCA demonstrated their bullseye code at another industry meeting. IBM executives at the meeting noticed the crowds at the RCA booth and immediately developed their own system. IBM marketing specialist Alec Jablonover remembered that the company still employed Woodland, and he established a new facility in Raleigh-Durham Research Triangle Park to lead development. In July 1972, RCA began an 18-month test in a Kroger store in Cincinnati.
Barcodes were printed on small pieces of adhesive paper, and attached by hand by store employees when they were adding price tags. The code proved to have a serious problem; the printers would sometimes smear ink, rendering the code unreadable in most orientations. However, a linear code, like the one being developed by Woodland at IBM, was printed in the direction of the stripes, so extra ink would simply make the code "taller" while remaining readable. So on 3 April 1973, the IBM UPC was selected as the NAFC standard. IBM had designed five versions of UPC symbology for future industry requirements: UPC A, B, C, D, and E. NCR installed a testbed system at Marsh's Supermarket in Troy, Ohio, near the factory that was producing the equipment. On 26 June 1974, Clyde Dawson pulled a 10-pack of Wrigley's Juicy Fruit gum out of his basket and it was scanned by Sharon Buchanan at 8:01 am. The pack of gum and the receipt are now on display in the Smithsonian Institution. It was the first commercial appearance of the UPC. In 1971, an IBM team was assembled for an intensive planning session, threshing out, 12 to 18 hours a day, how the technology would be deployed and operate cohesively across the system, and scheduling a roll-out plan. By 1973, the team were meeting with grocery manufacturers to introduce the symbol that would need to be printed on the packaging or labels of all of their products. There were no cost savings for a grocery to use it, unless at least 70% of the grocery's products had the barcode printed on the product by the manufacturer. IBM projected that 75% would be needed in 1975. Yet, although this was achieved, there were still scanning machines in fewer than 200 grocery stores by 1977. Economic studies conducted for the grocery industry committee projected over $40 million in savings to the industry from scanning by the mid-1970s. Those numbers were not achieved in that time-frame and some predicted the demise of barcode scanning. The usefulness of the barcode required the adoption of expensive scanners by a critical mass of retailers while manufacturers simultaneously adopted barcode labels. Neither wanted to move first and results were not promising for the first couple of years, with Business Week proclaiming "The Supermarket Scanner That Failed" in a 1976 article. On the other hand, experience with barcode scanning in those stores revealed additional benefits. The detailed sales information acquired by the new systems allowed greater responsiveness to customer habits, needs and preferences. This was reflected in the fact that about 5 weeks after installing barcode scanners, sales in grocery stores typically started climbing and eventually leveled off at a 10–12% increase in sales that never dropped off. There was also a 1–2% decrease in operating cost for those stores, and this enabled them to lower prices and thereby to increase market share. It was shown in the field that the return on investment for a barcode scanner was 41.5%. By 1980, 8,000 stores per year were converting. Sims Supermarkets were the first location in Australia to use barcodes, starting in 1979. Industrial adoption In 1981, the United States Department of Defense adopted the use of Code 39 for marking all products sold to the United States military. This system, Logistics Applications of Automated Marking and Reading Symbols (LOGMARS), is still used by DoD and is widely viewed as the catalyst for widespread adoption of barcoding in industrial uses. 
Use Barcodes are widely used around the world in many contexts. In stores, UPC barcodes are pre-printed on most items other than fresh produce from a grocery store. This speeds up processing at check-outs and helps track items and also reduces instances of shoplifting involving price tag swapping, although shoplifters can now print their own barcodes. Barcodes that encode a book's ISBN are also widely pre-printed on books, journals and other printed materials. In addition, retail chain membership cards use barcodes to identify customers, allowing for customized marketing and greater understanding of individual consumer shopping patterns. At the point of sale, shoppers can get product discounts or special marketing offers through the address or e-mail address provided at registration. Barcodes are widely used in the healthcare and hospital settings, ranging from patient identification (to access patient data, including medical history, drug allergies, etc.) to creating SOAP Notes with barcodes to medication management. They are also used to facilitate the separation and indexing of documents that have been imaged in batch scanning applications, track the organization of species in biology, and integrate with in-motion checkweighers to identify the item being weighed in a conveyor line for data collection. They can also be used to keep track of objects and people; they are used to keep track of rental cars, airline luggage, nuclear waste, registered mail, express mail and parcels. Barcoded tickets (which may be printed by the customer on their home printer, or stored on their mobile device) allow the holder to enter sports arenas, cinemas, theatres, fairgrounds, and transportation, and are used to record the arrival and departure of vehicles from rental facilities etc. This can allow proprietors to identify duplicate or fraudulent tickets more easily. Barcodes are widely used in shop floor control applications software where employees can scan work orders and track the time spent on a job. Barcodes are also used in some kinds of non-contact 1D and 2D position sensors. A series of barcodes are used in some kinds of absolute 1D linear encoder. The barcodes are packed close enough together that the reader always has one or two barcodes in its field of view. As a kind of fiducial marker, the relative position of the barcode in the field of view of the reader gives incremental precise positioning, in some cases with sub-pixel resolution. The data decoded from the barcode gives the absolute coarse position. An "address carpet", such as Howell's binary pattern and the Anoto dot pattern, is a 2D barcode designed so that a reader, even though only a tiny portion of the complete carpet is in the field of view of the reader, can find its absolute X,Y position and rotation in the carpet. 2D barcodes can embed a hyperlink to a web page. A mobile device with an inbuilt camera might be used to read the pattern and browse the linked website, which can help a shopper find the best price for an item in the vicinity. Since 2005, airlines use an IATA-standard 2D barcode on boarding passes (Bar Coded Boarding Pass (BCBP)), and since 2008 2D barcodes sent to mobile phones enable electronic boarding passes. Some applications for barcodes have fallen out of use. 
In the 1970s and 1980s, software source code was occasionally encoded in a barcode and printed on paper (Cauzin Softstrip and Paperbyte are barcode symbologies specifically designed for this application), and the 1991 Barcode Battler computer game system used any standard barcode to generate combat statistics. Artists have used barcodes in art, such as Scott Blake's Barcode Jesus, as part of the post-modernism movement. Symbologies The mapping between messages and barcodes is called a symbology. The specification of a symbology includes the encoding of the message into bars and spaces, any required start and stop markers, the size of the quiet zone required to be before and after the barcode, and the computation of a checksum. Linear symbologies can be classified mainly by two properties: Continuous vs. discrete Characters in discrete symbologies are composed of n bars and n − 1 spaces. There is an additional space between characters, but it does not convey information, and may have any width as long as it is not confused with the end of the code. Characters in continuous symbologies are composed of n bars and n spaces, and usually abut, with one character ending with a space and the next beginning with a bar, or vice versa. A special end pattern that has bars on both ends is required to end the code. Two-width vs. many-width A two-width, also called a binary bar code, contains bars and spaces of two widths, "wide" and "narrow". The precise width of the wide bars and spaces is not critical; typically it is permitted to be anywhere between 2 and 3 times the width of the narrow equivalents. Some other symbologies use bars of two different heights (POSTNET), or the presence or absence of bars (CPC Binary Barcode). These are normally also considered binary bar codes. Bars and spaces in many-width symbologies are all multiples of a basic width called the module; most such codes use four widths of 1, 2, 3 and 4 modules. Some symbologies use interleaving. The first character is encoded using black bars of varying width. The second character is then encoded by varying the width of the white spaces between these bars. Thus characters are encoded in pairs over the same section of the barcode. Interleaved 2 of 5 is an example of this. Stacked symbologies repeat a given linear symbology vertically. The most common among the many 2D symbologies are matrix codes, which feature square or dot-shaped modules arranged on a grid pattern. 2D symbologies also come in circular and other patterns and may employ steganography, hiding modules within an image (for example, DataGlyphs). Linear symbologies are optimized for laser scanners, which sweep a light beam across the barcode in a straight line, reading a slice of the barcode light-dark patterns. Scanning at an angle makes the modules appear wider, but does not change the width ratios. Stacked symbologies are also optimized for laser scanning, with the laser making multiple passes across the barcode. In the 1990s development of charge-coupled device (CCD) imagers to read barcodes was pioneered by Welch Allyn. Imaging does not require moving parts, as a laser scanner does. In 2007, linear imaging had begun to supplant laser scanning as the preferred scan engine for its performance and durability. 2D symbologies cannot be read by a laser, as there is typically no sweep pattern that can encompass the entire symbol. They must be scanned by an image-based scanner employing a CCD or other digital camera sensor technology. 
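The checksum mentioned above is computed differently for each symbology. As one widely documented illustration, the UPC-A symbology appends a twelfth check digit to its eleven data digits: the digits in the odd positions (counting from the left) are summed and multiplied by three, the digits in the even positions are added to that, and the check digit is whatever value brings the total up to a multiple of ten. The short Python sketch below shows the calculation; the function name and sample digits are arbitrary and serve only as an example.

    def upc_a_check_digit(data_digits):
        # data_digits: a string containing the 11 data digits of a UPC-A code
        digits = [int(d) for d in data_digits]
        odd_sum = sum(digits[0::2])   # 1st, 3rd, 5th, ... digits (odd positions)
        even_sum = sum(digits[1::2])  # 2nd, 4th, 6th, ... digits (even positions)
        return (10 - (3 * odd_sum + even_sum) % 10) % 10

    # Example: the data digits 03600029145 yield check digit 2,
    # so the complete 12-digit UPC-A code is 036000291452.
    print(upc_a_check_digit("03600029145"))

A reader that decodes a symbol can recompute this digit and reject the scan if it does not match, which allows the checksum to catch single-digit misreads and many transposition errors.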
Barcode readers The earliest, and still the cheapest, barcode scanners are built from a fixed light and a single photosensor that is manually moved across the barcode. Barcode scanners can be classified into three categories based on their connection to the computer. The older type is the RS-232 barcode scanner. This type requires special programming for transferring the input data to the application program. Keyboard interface scanners connect to a computer using a PS/2 or AT keyboard–compatible adaptor cable (a "keyboard wedge"). The barcode's data is sent to the computer as if it had been typed on the keyboard. Like the keyboard interface scanner, USB scanners do not need custom code for transferring input data to the application program. On PCs running Windows the human interface device emulates the data merging action of a hardware "keyboard wedge", and the scanner automatically behaves like an additional keyboard. Most modern smartphones are able to decode barcode using their built-in camera. Google's mobile Android operating system can use their own Google Lens application to scan QR codes, or third-party apps like Barcode Scanner to read both one-dimensional barcodes and QR codes. Nokia's Symbian operating system featured a barcode scanner, while mbarcode is a QR code reader for the Maemo operating system. In Apple iOS 11, the native camera app can decode QR codes and can link to URLs, join wireless networks, or perform other operations depending on the QR Code contents. Other paid and free apps are available with scanning capabilities for other symbologies or for earlier iOS versions. With BlackBerry devices, the App World application can natively scan barcodes and load any recognized Web URLs on the device's Web browser. Windows Phone 7.5 is able to scan barcodes through the Bing search app. However, these devices are not designed specifically for the capturing of barcodes. As a result, they do not decode nearly as quickly or accurately as a dedicated barcode scanner or portable data terminal. Quality control and verification It is common for producers and users of bar codes to have a quality management system which includes verification and validation of bar codes. Barcode verification examines scanability and the quality of the barcode in comparison to industry standards and specifications. Barcode verifiers are primarily used by businesses that print and use barcodes. Any trading partner in the supply chain can test barcode quality. It is important to verify a barcode to ensure that any reader in the supply chain can successfully interpret a barcode with a low error rate. Retailers levy large penalties for non-compliant barcodes. These chargebacks can reduce a manufacturer's revenue by 2% to 10%. A barcode verifier works the way a reader does, but instead of simply decoding a barcode, a verifier performs a series of tests. For linear barcodes these tests are: Edge contrast (EC) The difference between the space reflectance (Rs) and adjoining bar reflectance (Rb). EC=Rs-Rb Minimum bar reflectance (Rb) The smallest reflectance value in a bar. Minimum space reflectance (Rs) The smallest reflectance value in a space. Symbol contrast (SC) Symbol Contrast is the difference in reflectance values of the lightest space (including the quiet zone) and the darkest bar of the symbol. The greater the difference, the higher the grade. The parameter is graded as either A, B, C, D, or F. 
SC=Rmax-Rmin Minimum edge contrast (ECmin) The difference between the space reflectance (Rs) and adjoining bar reflectance (Rb). EC=Rs-Rb Modulation (MOD) The parameter is graded either A, B, C, D, or F. This grade is based on the relationship between minimum edge contrast (ECmin) and symbol contrast (SC). MOD=ECmin/SC The greater the difference between minimum edge contrast and symbol contrast, the lower the grade. Scanners and verifiers perceive the narrower bars and spaces to have less intensity than wider bars and spaces; the comparison of the lesser intensity of narrow elements to the wide elements is called modulation. This condition is affected by aperture size. Inter-character gap In discrete barcodes, the space that disconnects the two contiguous characters. When present, inter-character gaps are considered spaces (elements) for purposes of edge determination and reflectance parameter grades. Defects Decode Extracting the information which has been encoded in a bar code symbol. Decodability Can be graded as A, B, C, D, or F. The Decodability grade indicates the amount of error in the width of the most deviant element in the symbol. The less deviation in the symbology, the higher the grade. Decodability is a measure of print accuracy using the symbology reference decode algorithm. 2D matrix symbols look at the parameters: Symbol contrast Modulation Decode Unused error correction Fixed (finder) pattern damage Grid non-uniformity Axial non-uniformity Depending on the parameter, each ANSI test is graded from 0.0 to 4.0 (F to A), or given a pass or fail mark. Each grade is determined by analyzing the scan reflectance profile (SRP), an analog graph of a single scan line across the entire symbol. The lowest of the 8 grades is the scan grade, and the overall ISO symbol grade is the average of the individual scan grades. For most applications a 2.5 (C) is the minimal acceptable symbol grade. Compared with a reader, a verifier measures a barcode's optical characteristics to international and industry standards. The measurement must be repeatable and consistent. Doing so requires constant conditions such as distance, illumination angle, sensor angle and verifier aperture. Based on the verification results, the production process can be adjusted to print higher quality barcodes that will scan down the supply chain. Bar code validation may include evaluations after use (and abuse) testing such as sunlight, abrasion, impact, moisture, etc. Barcode verifier standards Barcode verifier standards are defined by the International Organization for Standardization (ISO), in ISO/IEC 15426-1 (linear) or ISO/IEC 15426-2 (2D). The current international barcode quality specification is ISO/IEC 15416 (linear) and ISO/IEC 15415 (2D). The European Standard EN 1635 has been withdrawn and replaced by ISO/IEC 15416. The original U.S. barcode quality specification was ANSI X3.182. (UPCs used in the US – ANSI/UCC5). As of 2011 the ISO workgroup JTC1 SC31 was developing a Direct Part Marking (DPM) quality standard: ISO/IEC TR 29158. Benefits In point-of-sale management, barcode systems can provide detailed up-to-date information on the business, accelerating decisions and with more confidence. For example: Fast-selling items can be identified quickly and automatically reordered. Slow-selling items can be identified, preventing inventory build-up. The effects of merchandising changes can be monitored, allowing fast-moving, more profitable items to occupy the best space. 
Historical data can be used to predict seasonal fluctuations very accurately. Items may be repriced on the shelf to reflect both sale prices and price increases. This technology also enables the profiling of individual consumers, typically through a voluntary registration of discount cards. While pitched as a benefit to the consumer, this practice is considered to be potentially dangerous by privacy advocates. Besides sales and inventory tracking, barcodes are very useful in logistics and supply chain management. When a manufacturer packs a box for shipment, a Unique Identifying Number (UID) can be assigned to the box. A database can link the UID to relevant information about the box; such as order number, items packed, quantity packed, destination, etc. The information can be transmitted through a communication system such as Electronic Data Interchange (EDI) so the retailer has the information about a shipment before it arrives. Shipments that are sent to a Distribution Center (DC) are tracked before forwarding. When the shipment reaches its final destination, the UID gets scanned, so the store knows the shipment's source, contents, and cost. Barcode scanners are relatively low cost and extremely accurate compared to key-entry, with only about 1 substitution error in 15,000 to 36 trillion characters entered. The exact error rate depends on the type of barcode. Types of barcodes Linear barcodes A first generation, "one dimensional" barcode that is made up of lines and spaces of various widths that create specific patterns. Matrix (2D) barcodes A matrix code, also termed a 2D barcode or simply a 2D code, is a two-dimensional way to represent information. It is similar to a linear (1-dimensional) barcode, but can represent more data per unit area. Example images In popular culture In architecture, a building in Lingang New City by German architects Gerkan, Marg and Partners incorporates a barcode design, as does a shopping mall called Shtrikh-kod (Russian for barcode) in Narodnaya ulitsa ("People's Street") in the Nevskiy district of St. Petersburg, Russia. In media, in 2011, the National Film Board of Canada and ARTE France launched a web documentary entitled Barcode.tv, which allows users to view films about everyday objects by scanning the product's barcode with their iPhone camera. In professional wrestling, the WWE stable D-Generation X incorporated a barcode into their entrance video, as well as on a T-shirt. In the TV series Dark Angel, the protagonist and the other transgenics in the Manticore X-series have barcodes on the back of their necks. In video games, the protagonist of the Hitman video game series has a barcode tattoo on the back of his head. Also, QR codes can be scanned for an extra mission on Watch Dogs. In the films Back to the Future Part II and The Handmaid's Tale, cars in the future are depicted with barcode licence plates. In the Terminator films, Skynet burns barcodes onto the inside surface of the wrists of captive humans (in a similar location to the WW2 concentration camp tattoos) as a unique identifier. In music, Dave Davies of The Kinks released a solo album in 1980, AFL1-3603, which featured a giant barcode on the front cover in place of the musician's head. The album's name was also the barcode number. The April 1978 issue of Mad Magazine featured a giant barcode on the cover, with the blurb "[Mad] Hopes this issue jams up every computer in the country...for forcing us to deface our covers with this yecchy UPC symbol from now on!" 
The 2018 videogame Judgment features QR Codes that protagonist Takayuki Yagami can photograph with his phone camera. These are mostly to unlock parts for Yagami's Drone. Interactive Textbooks were first published by Harcourt College Publishers to Expand Education Technology with Interactive Textbooks. Designed barcodes Some brands integrate custom designs into barcodes (while keeping them readable) on their consumer products. Hoaxes about barcodes There was minor skepticism from conspiracy theorists, who considered barcodes to be an intrusive surveillance technology, and from some Christians, pioneered by a 1982 book The New Money System 666 by Mary Stewart Relfe, who thought the codes hid the number 666, representing the "Number of the Beast". Old Believers, a separation of the Russian Orthodox Church, believe barcodes are the stamp of the Antichrist. Television host Phil Donahue described barcodes as a "corporate plot against consumers". See also Automated identification and data capture (AIDC) Barcode printer European Article Numbering-Uniform Code Council Global Trade Item Number Identifier Inventory control system Object hyperlinking Semacode SMS barcode SPARQCode (QR code) List of GS1 country codes References Further reading Automating Management Information Systems: Barcode Engineering and Implementation – Harry E. Burke, Thomson Learning, Automating Management Information Systems: Principles of Barcode Applications – Harry E. Burke, Thomson Learning, The Bar Code Book – Roger C. Palmer, Helmers Publishing, , 386 pages The Bar Code Manual – Eugene F. Brighan, Thompson Learning, Handbook of Bar Coding Systems – Harry E. Burke, Van Nostrand Reinhold Company, , 219 pages Information Technology for Retail:Automatic Identification & Data Capture Systems – Girdhar Joshi, Oxford University Press, , 416 pages Lines of Communication – Craig K. Harmon, Helmers Publishing, , 425 pages Punched Cards to Bar Codes – Benjamin Nelson, Helmers Publishing, , 434 pages Revolution at the Checkout Counter: The Explosion of the Bar Code – Stephen A. Brown, Harvard University Press, Reading Between The Lines – Craig K. Harmon and Russ Adams, Helmers Publishing, , 297 pages The Black and White Solution: Bar Code and the IBM PC – Russ Adams and Joyce Lane, Helmers Publishing, , 169 pages Sourcebook of Automatic Identification and Data Collection – Russ Adams, Van Nostrand Reinhold, , 298 pages Inside Out: The Wonders of Modern Technology – Carol J. Amato, Smithmark Pub, , 1993 External links Barcode Glossary of Terms Pros and cons and relative popularity of different 1D and 2D barcode codes. Barcodes comparison chart, limits of each barcode type. Encodings Automatic identification and data capture 1952 introductions American inventions Records management technology
8077602
https://en.wikipedia.org/wiki/Single-user%20mode
Single-user mode
Single-user mode is a mode in which a multiuser computer operating system boots into a single superuser. It is mainly used for maintenance of multi-user environments such as network servers. Some tasks may require exclusive access to shared resources, for example running fsck on a network share. This mode can also be used for security purposes: network services are not run, eliminating the possibility of outside interference. On some systems a lost superuser password can be changed by switching to single-user mode, but not asking for the password in such circumstances is viewed as a security vulnerability.
Unix family
Unix-like operating systems provide single-user mode functionality either through the System V-style runlevels, BSD-style boot-loader options, or other boot-time options. The run-level is usually changed using the init command; runlevel 1 or S will boot into single-user mode. Boot-loader options can be changed during startup before the execution of the kernel. In FreeBSD and DragonFly BSD it can be changed before rebooting the system with the command nextboot -o "-s" -k kernel, and its bootloader offers the option on bootup to start in single-user mode. In Solaris the command reboot -- -s will cause a reboot into single-user mode. macOS users can accomplish this by holding down Command-S after powering on the system. The user may be required to enter a password set in the firmware. In OS X El Capitan and later releases of macOS, the userspace can be rebooted into single-user mode with the command sudo launchctl reboot userspace -s in Terminal, and the system can be fully rebooted into single-user mode with the command sudo launchctl reboot system -s. Single-user mode is different from a safe mode boot in that the system goes directly to the console instead of starting up the core elements of macOS (items in /System/Library/, ignoring /Library/, ~/Library/, et al.). From there users are encouraged by a prompt to run fsck or other command line utilities as needed (or installed).
Microsoft Windows
Microsoft Windows provides Recovery Console, Last Known Good Configuration, Safe Mode and, more recently, Windows Recovery Environment as standard recovery means. Also, bootable BartPE-based third-party recovery discs are available. Recovery Console and recovery discs are different from single-user modes in other operating systems because they are independent of the maintained operating system. This works more like chrooting into another environment with another kernel in Linux.
References
Operating system technology Booting
27312585
https://en.wikipedia.org/wiki/The%20Odyssey%20%28Smith%29
The Odyssey (Smith)
The Odyssey Symphony is Robert W. Smith's second symphonic band symphony. Smith had studied both the Odyssey and Dante's Divine Comedy at Troy University. The symphony contains four movements in total, each noted for having intricate and imaginative percussive and wind effects. They are as follows:
Movement One: "The Iliad"
Subtitled "...in the 10th Year of the Trojan War", this piece retells the story of the incredible victory of the Greeks against the Trojans, using the famous "Trojan Horse". The movement opens with a call-and-response horn duet and a motif that is prominent in both this and the fourth movement. This quickly broadens into a majestic fanfare, another recurring theme in the piece, which in reality serves as a sort of theme for Odysseus. The final sustained note of the fanfare decrescendos into yet another motif: a flute/horn duet backed by a harp (usually on synthesizer), playing their own call-and-response/echo theme. The full band returns with the fanfare before entering an aggressive section: the woodwinds play rapid alternating triplet patterns while the brass re-enter with an entirely new, even more menacing theme. This new theme reaches its climax and quickly repeats its first part before a rapid woodwind descent which sets the tone for the second part of the movement, "The Trojan Horse". As in many of his pieces, Smith has used unusual percussion instruments and effects to achieve a certain mood and image. In this piece, he has instructed the cymbal players to grind the edge of one cymbal into the inner dome of the other, producing the sound of a squeaky wheel. While the Greeks wheel the horse into the city, the flute/horn duet melody returns briefly, highlighted by an ominous clarinet choir. The music eventually fades away, and a second effect begins: the "Fire" effect involves members of the band crinkling paper gently while brake drums evoke a sword fight. The "fire" quickly spreads across the band, eventually coming to the crescendo which reintroduces the "Aggressive" theme, albeit with a more triumphant feel. This slightly-modified theme brings the band to its final, victorious climax. If the band is transitioning to movement II, "The Winds of Poseidon", an optional fine is supplied, in which the flute/horn duet is repeated one more time with a different ending.
Movement Two: "The Winds of Poseidon"
Movement Three: "The Isle of Calypso"
This movement picks up with Odysseus lamenting as he is stranded on the strange island belonging to the goddess Calypso. Here, he can have anything he wants, even immortality, but he is never truly happy, as he remembers that he promised his beloved Penelope that they would grow old and die together. After a full year, Zeus and Hermes finally persuade the saddened Calypso to let Odysseus go free, so that he can once more rule Ithaca. This song captures the hero's woes during his time on the island. This lyrical piece, the emotional climax of the symphony, opens with a special "Clock" effect, which can be achieved in various ways (knocking pieces of wood back and forth against each other, amplifying the sounds of a real antique clock, etc.). A prominent cymbal scrape leads to the entrance of an ocean drum, while the piano begins the background theme. A mournful English Horn solo introduces the main theme of the piece, and is soon joined by a euphonium duet and the rest of the winds. The song reaches a false climax, before descending back into the original English Horn melody.
A flashback to the flute/horn duet in the first movement is featured, and this leads into the buildup of the band. Finally, the climax is reached, with "soaring" woodwind lines coupled by the brass/saxophone solo. The band joins together for a final melancholy re-statement of the English horn solo, which resumes after a dramatic fermata. Finally, this, too, lets go, and all that is left is the waves lapping on the shore (the ocean drum) the clock ticking away (the "Clock" effect) and the tolls of the clock bells (these can either be made by tubular bells, handbells, a synthesizer, or cut helium tanks). Movement Four: "Ithaca" The final movement of Symphony No. 2 sharply contrasts "The Isle of Calypso" in various ways, bringing about a conclusion to the work. The piece opens with a tense, suspenseful piano/chimes/triangle trio, interrupted at certain points by the return of the English Horn from movement 3. After a few seconds, the tense mood subsides as the ocean drums enter and a horn duo repeat the motif that opened the symphony itself. However, this familiarity subsides almost as quickly as the tenseness of the opening, giving way to a dramatic brass melody. The horns continue this melody, accentuated by blasts from the rest of the band, and then all parts crescendo into the upbeat, adventurous first section. This section is made even more epic by the fact that various parts pass the melodies between them, from the horns and saxophones to the oboes to the euphoniums. The entire section is constantly punctuated by "biting" brass lines and unique flute/piccolo melodies which soar over the rest of the melodies and draw them back into the original dramatic theme. Finished off by a triumphant, very short fanfare, the sections decrescendo much like they did in movements 1 and 2 as the piece enters its second section. Smith uses a spring drum, wind wands, and wind whistles to simulate the sound of a bow being strung and arrows being released, all topped off by repetitions of "Odysseus' Theme" (the horn duet). As the winds whistles and wind wands continue playing, bodhráns and brake drums simulate the sounds of battle, which lead into the third section. The third section begins with a repeat of the "Aggressive" theme, once again modified, from the first movement. Although the brass play a melody reminiscent of the first part of the song, a series of chromatic triplets lead the band back into the "Victorious" theme from movement 1. As the chimes mimic the sound of "all the bells of Ithaca", the roaring fanfare that originally opened the symphony returns to close it, with a modified ending in which the whole band brings the song to a roaring conclusion. Notes Like many of Smith's compositions, three of the four movements follow a distinct pattern: an opening solo, followed by a fast theme, a slow theme, and another fast theme (similar to an overture). All four movements feature at least one distinct percussive effect that gives the piece added emotion. All four movements also begin and end in either the key of B-flat or its relative minor, with few key changes during the piece. In his programming notes, Mr. Smith acknowledges that the continuity of the piece compared to Homer's original epic has been altered slightly, with "The Winds of Poseidon" coming before "The Isle of Calypso". 
He notes that he did this to provide further contrast between the mournful third movement and the action-filled fourth movement, and states that the order of the two middle movements may be switched should the conductor desire to do so. Percussive effects The Iliad The "Groaning and Squeaky Wheel" effect In order to simulate the giant Trojan Horse being wheeled into the city, two cymbals are placed in a perpendicular arrangement while the percussionist grinds the edge of one into the inner dome of the other. Two pairs of cymbals are scored in order to produce the sound of wheels on either end of the stage. The "Fire" effect As the city of Troy burns, the band softly crumples pieces of paper, "sweeping across the band" as necessary to produce a realistic "fire". The Winds of Poseidon "Lightning" effect Smith advises that an extra-large thundersheet be used during the first part of the second movement in order to enhance the effect of deep, rumbling thunder and flashing lightning. "Siren" effect When Odysseus hears the singing of the sirens, the percussion section uses toy "spinning tubes" cut to produce the B-flat, E-flat and F pitches in order to create an eerie effect. The Isle of Calypso "Clock" effect Used to symbolize the passing of time on Calypso's island, there are various ways of producing this effect. Smith suggests knocking a piece of wood between two wooden boxes with holes cut in them, or amplifying the ticking of an antique clock. The tolling of the bells may be produced by a set of chimes, a synthesizer, or even two helium tanks cut to sound a third apart. Ithaca "Odysseus and the Arrow" To produce the sound of arrows being released/flying by, the percussion section strikes a spring drum with a large triangle beater and bends a timpani pitch up as the bow is strung. When the arrows fly by, wind whistles and wind wands play in rapid succession, creating the illusion of arrows whizzing by at high speeds. Compositions by Robert W. Smith Smith, Robert W. 2 Concert band pieces Smith, Robert W. 2 Works based on the Iliad Works based on the Odyssey Music based on poems Music based on works by Homer
20306216
https://en.wikipedia.org/wiki/Mobile%20Web%20Server
Mobile Web Server
A Mobile Web Server is software that lets a modern smartphone host a personal web server, using open-source software such as i-jetty, an open-source web container based on Jetty. I-jetty serves Java-based web content such as servlets and JSPs. Jetty is written in Java and its API is available as a set of JARs. Developers can instantiate a Jetty container as an object, instantly adding network and web connectivity to a stand-alone Java application. Jetty is built for scalable performance, allowing tens of thousands of HTTP connections and hundreds of thousands of simultaneous WebSocket connections. Jetty is also known for its small memory footprint, which increases scalability and performance. Nokia is one of the few phone manufacturers that brought the Apache HTTP Server to its handsets, running the Symbian OS S60 mobile software platform. The S60 Mobile Web Server enables HTTP traffic to reach a mobile device from the Internet. Its components include a gateway application that runs on a computer with Internet access and a connector application that runs on the remote mobile device. Together with a valid DNS configuration, the gateway and connector applications can provide a mobile device with a global web address (URL). However, as of January 2010, the project has been discontinued by Nokia. Examples The Mobile Web Server application gives mobile devices a means of hosting personal web applications, including web pages and server-side controls. The most commonly used HTTP servers and servlet containers currently available are Jetty, Tomcat, GlassFish and Resin. Features Personal information manager (PIM) Manage phone's address book Helix multimedia player Send SMS messages using a web browser Browse phone's calendar Browse camera phone's image gallery via computer View received and missed calls Get instant messages sent to your phone screen. Maintain a blog Share presence status Online chat Manage access rights Start mobile site from the web or Settings Share mobile site content via RSS feeds See also Python for S60 Apache Tomcat, alternative open source web server and servlet container ApacheBench, a program for measuring the performance of HTTP web servers References External links Official links Nokia Research - Mobile Web Server Nokia Wiki - Mobile Web Server Nokia Forum - Mobile Web Server Documentation SourceForge - Mobile Web Server All About Symbian - Previewing Nokia's Mobile Web Server Nokia services Free web server software Free software programmed in C Free software programmed in C++ Free software programmed in Java (programming language) Mobile software S60 (software platform) Symbian software
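To illustrate the embedded use of Jetty described above, the following is a minimal, hypothetical sketch, not taken from i-jetty or the S60 Mobile Web Server, of a stand-alone Java application instantiating a Jetty server object and serving a trivial page; it assumes the Jetty 9.x org.eclipse.jetty API and the servlet API on the classpath.

```java
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

// Minimal embedded-Jetty sketch: a stand-alone app that becomes a web server.
public class TinyMobileServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);          // listen on port 8080
        server.setHandler(new AbstractHandler() {  // respond to every request
            @Override
            public void handle(String target, Request baseRequest,
                               HttpServletRequest request,
                               HttpServletResponse response) throws IOException {
                response.setContentType("text/html;charset=utf-8");
                response.setStatus(HttpServletResponse.SC_OK);
                response.getWriter().println("<h1>Hello from a personal web server</h1>");
                baseRequest.setHandled(true);
            }
        });
        server.start();  // the container is just an object inside this JVM
        server.join();   // block until the server is stopped
    }
}
```

Once started, the JVM itself is the web server listening on port 8080; i-jetty packages the same idea for a phone.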
37860770
https://en.wikipedia.org/wiki/Audiveris
Audiveris
Audiveris is an open-source tool for optical music recognition (OMR). It allows a user to import scanned music scores and export them to the MusicXML format for use in other applications, e.g. music notation programs or page-turning software for digital sheet music. Using Tesseract, it can also recognize text in scores. Audiveris is written in Java and published as free software. Audiveris V4 was published on 26 November 2013, delivered via Java Web Start, under the terms of the GNU General Public License (GNU GPLv2). The source code of legacy versions as well as current development has moved to GitHub and is available under the terms of the GNU Affero General Public License (GNU AGPLv3). References External links Music OCR software Cross-platform free software Free software programmed in Java (programming language) Free music software
56189
https://en.wikipedia.org/wiki/Interlaced%20video
Interlaced video
Interlaced video (also known as interlaced scan) is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. The interlaced signal contains two fields of a video frame captured consecutively. This enhances motion perception to the viewer, and reduces flicker by taking advantage of the phi phenomenon. This effectively doubles the time resolution (also called temporal resolution) as compared to non-interlaced footage (for frame rates equal to field rates). Interlaced signals require a display that is natively capable of showing the individual fields in a sequential order. CRT displays and ALiS plasma displays are made for displaying interlaced signals. Interlaced scan refers to one of two common methods for "painting" a video image on an electronic display screen (the other being progressive scan) by scanning or displaying each line or row of pixels. This technique uses two fields to create a frame. One field contains all odd-numbered lines in the image; the other contains all even-numbered lines. A Phase Alternating Line (PAL)-based television set display, for example, scans 50 fields every second (25 odd and 25 even). The two sets of 25 fields work together to create a full frame every 1/25 of a second (or 25 frames per second), but with interlacing create a new half frame every 1/50 of a second (or 50 fields per second). To display interlaced video on progressive scan displays, playback applies deinterlacing to the video signal (which adds input lag). The European Broadcasting Union has argued against interlaced video in production and broadcasting. They recommend 720p 50 fps (frames per second) for the current production format—and are working with the industry to introduce 1080p 50 as a future-proof production standard. 1080p 50 offers higher vertical resolution, better quality at lower bitrates, and easier conversion to other formats, such as 720p 50 and 1080i 50. The main argument is that no matter how complex the deinterlacing algorithm may be, the artifacts in the interlaced signal cannot be completely eliminated because some information is lost between frames. Despite arguments against it, television standards organizations continue to support interlacing. It is still included in digital video transmission formats such as DV, DVB, and ATSC. New video compression standards like High Efficiency Video Coding are optimized for progressive scan video, but sometimes do support interlaced video. Description Progressive scan captures, transmits, and displays an image in a path similar to text on a page—line by line, top to bottom. The interlaced scan pattern in a standard definition CRT display also completes such a scan, but in two passes (two fields). The first pass displays the first and all odd numbered lines, from the top left corner to the bottom right corner. The second pass displays the second and all even numbered lines, filling in the gaps in the first scan. This scan of alternate lines is called interlacing. A field is an image that contains only half of the lines needed to make a complete picture. Persistence of vision makes the eye perceive the two fields as a continuous image. In the days of CRT displays, the afterglow of the display's phosphor aided this effect. Interlacing provides full vertical detail with the same bandwidth that would be required for a full progressive scan, but with twice the perceived frame rate and refresh rate. To prevent flicker, all analog broadcast television systems used interlacing. 
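As a concrete illustration of the two-field structure described above, the following sketch (illustrative only, not part of the original article) uses a plain Java 2D array as a stand-in for a frame of samples: it extracts the field of even-numbered lines and the field of odd-numbered lines, and shows the simplest way a single field can be stretched back to full height by repeating each line, a point taken up again in the deinterlacing discussion below.

```java
// Illustrative sketch of interlaced fields: a frame is split into the field of
// even-numbered lines and the field of odd-numbered lines; lineDouble() then
// rebuilds a full-height picture from one field by repeating each line (the
// simplest line-doubling reconstruction, at half the vertical resolution).
public class InterlaceDemo {
    // parity 0 = even-numbered lines, parity 1 = odd-numbered lines
    static int[][] extractField(int[][] frame, int parity) {
        int h = frame.length, w = frame[0].length;
        int rows = (h - parity + 1) / 2;
        int[][] field = new int[rows][w];
        for (int y = parity, i = 0; y < h; y += 2, i++) {
            System.arraycopy(frame[y], 0, field[i], 0, w);
        }
        return field;
    }

    // Stretch one field back to full frame height by repeating each field line.
    static int[][] lineDouble(int[][] field, int fullHeight) {
        int w = field[0].length;
        int[][] out = new int[fullHeight][w];
        for (int y = 0; y < fullHeight; y++) {
            int src = Math.min(y / 2, field.length - 1);
            System.arraycopy(field[src], 0, out[y], 0, w);
        }
        return out;
    }
}
```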
Format identifiers like 576i50 and 720p50 specify the frame rate for progressive scan formats, but for interlaced formats they typically specify the field rate (which is twice the frame rate). This can lead to confusion, because industry-standard SMPTE timecode formats always deal with frame rate, not field rate. To avoid confusion, SMPTE and EBU always use frame rate to specify interlaced formats, e.g., 480i60 is 480i/30, 576i50 is 576i/25, and 1080i50 is 1080i/25. This convention assumes that one complete frame in an interlaced signal consists of two fields in sequence. Benefits of interlacing One of the most important factors in analog television is signal bandwidth, measured in megahertz. The greater the bandwidth, the more expensive and complex the entire production and broadcasting chain. This includes cameras, storage systems, broadcast systems—and reception systems: terrestrial, cable, satellite, Internet, and end-user displays (TVs and computer monitors). For a fixed bandwidth, interlace provides a video signal with twice the display refresh rate for a given line count (versus progressive scan video at a similar frame rate—for instance 1080i at 60 half-frames per second, vs. 1080p at 30 full frames per second). The higher refresh rate improves the appearance of an object in motion, because it updates its position on the display more often, and when an object is stationary, human vision combines information from multiple similar half-frames to produce the same perceived resolution as that provided by a progressive full frame. This technique is only useful though, if source material is available in higher refresh rates. Cinema movies are typically recorded at 24fps, and therefore don't benefit from interlacing, a solution which reduces the maximum video bandwidth to 5MHz without reducing the effective picture scan rate of 60 Hz. Given a fixed bandwidth and high refresh rate, interlaced video can also provide a higher spatial resolution than progressive scan. For instance, 1920×1080 pixel resolution interlaced HDTV with a 60 Hz field rate (known as 1080i60 or 1080i/30) has a similar bandwidth to 1280×720 pixel progressive scan HDTV with a 60 Hz frame rate (720p60 or 720p/60), but achieves approximately twice the spatial resolution for low-motion scenes. However, bandwidth benefits only apply to an analog or uncompressed digital video signal. With digital video compression, as used in all current digital TV standards, interlacing introduces additional inefficiencies. EBU has performed tests that show that the bandwidth savings of interlaced video over progressive video is minimal, even with twice the frame rate. I.e., 1080p50 signal produces roughly the same bit rate as 1080i50 (aka 1080i/25) signal, and 1080p50 actually requires less bandwidth to be perceived as subjectively better than its 1080i/25 (1080i50) equivalent when encoding a "sports-type" scene. Interlacing can be exploited to produce 3D TV programming, especially with a CRT display and especially for color filtered glasses by transmitting the color keyed picture for each eye in the alternating fields. This does not require significant alterations to existing equipment. Shutter glasses can be adopted as well, obviously with the requirement of achieving synchronisation. If a progressive scan display is used to view such programming, any attempt to deinterlace the picture will render the effect useless. 
For color filtered glasses the picture has to be either buffered and shown as if it was progressive with alternating color keyed lines, or each field has to be line-doubled and displayed as discrete frames. The latter procedure is the only way to suit shutter glasses on a progressive display. Interlacing problems Interlaced video is designed to be captured, stored, transmitted, and displayed in the same interlaced format. Because each interlaced video frame is two fields captured at different moments in time, interlaced video frames can exhibit motion artifacts known as interlacing effects, or combing, if recorded objects move fast enough to be in different positions when each individual field is captured. These artifacts may be more visible when interlaced video is displayed at a slower speed than it was captured, or in still frames. While there are simple methods to produce somewhat satisfactory progressive frames from the interlaced image, for example by doubling the lines of one field and omitting the other (halving vertical resolution), or anti-aliasing the image in the vertical axis to hide some of the combing, there are sometimes methods of producing results far superior to these. If there is only sideways (X axis) motion between the two fields and this motion is even throughout the full frame, it is possible to align the scanlines and crop the left and right ends that exceed the frame area to produce a visually satisfactory image. Minor Y axis motion can be corrected similarly by aligning the scanlines in a different sequence and cropping the excess at the top and bottom. Often the middle of the picture is the most necessary area to put into check, and whether there is only X or Y axis alignment correction, or both are applied, most artifacts will occur towards the edges of the picture. However, even these simple procedures require motion tracking between the fields, and a rotating or tilting object, or one that moves in the Z axis (away from or towards the camera) will still produce combing, possibly even looking worse than if the fields were joined in a simpler method. Some deinterlacing processes can analyze each frame individually and decide the best method. The best and only perfect conversion in these cases is to treat each frame as a separate image, but that may not always be possible. For framerate conversions and zooming it would mostly be ideal to line-double each field to produce a double rate of progressive frames, resample the frames to the desired resolution and then re-scan the stream at the desired rate, either in progressive or interlaced mode. Interline twitter Interlace introduces a potential problem called interline twitter, a form of moiré. This aliasing effect only shows up under certain circumstances—when the subject contains vertical detail that approaches the horizontal resolution of the video format. For instance, a finely striped jacket on a news anchor may produce a shimmering effect. This is twittering. Television professionals avoid wearing clothing with fine striped patterns for this reason. Professional video cameras or computer-generated imagery systems apply a low-pass filter to the vertical resolution of the signal to prevent interline twitter. Interline twitter is the primary reason that interlacing is less suited for computer displays. Each scanline on a high-resolution computer monitor typically displays discrete pixels, each of which does not span the scanline above or below. 
When the overall interlaced framerate is 60 frames per second, a pixel (or more critically for e.g. windowing systems or underlined text, a horizontal line) that spans only one scanline in height is visible for the 1/60 of a second that would be expected of a 60 Hz progressive display - but is then followed by 1/60 of a second of darkness (whilst the opposite field is scanned), reducing the per-line/per-pixel refresh rate to 30 frames per second with quite obvious flicker. To avoid this, standard interlaced television sets typically do not display sharp detail. When computer graphics appear on a standard television set, the screen is either treated as if it were half the resolution of what it actually is (or even lower), or rendered at full resolution and then subjected to a low-pass filter in the vertical direction (e.g. a "motion blur" type with a 1-pixel distance, which blends each line 50% with the next, maintaining a degree of the full positional resolution and preventing the obvious "blockiness" of simple line doubling whilst actually reducing flicker to less than what the simpler approach would achieve). If text is displayed, it is large enough so that any horizontal lines are at least two scanlines high. Most fonts for television programming have wide, fat strokes, and do not include fine-detail serifs that would make the twittering more visible; in addition, modern character generators apply a degree of anti-aliasing that has a similar line-spanning effect to the aforementioned full-frame low-pass filter. Deinterlacing ALiS plasma panels and the old CRTs can display interlaced video directly, but modern computer video displays and TV sets are mostly based on LCD technology, which mostly use progressive scanning. Displaying interlaced video on a progressive scan display requires a process called deinterlacing. This is an imperfect technique, and generally lowers resolution and causes various artifacts—particularly in areas with objects in motion. Providing the best picture quality for interlaced video signals requires expensive and complex devices and algorithms. For television displays, deinterlacing systems are integrated into progressive scan TV sets that accept interlaced signal, such as broadcast SDTV signal. Most modern computer monitors do not support interlaced video, besides some legacy medium-resolution modes (and possibly 1080i as an adjunct to 1080p), and support for standard-definition video (480/576i or 240/288p) is particularly rare given its much lower line-scanning frequency vs typical "VGA"-or-higher analog computer video modes. Playing back interlaced video from a DVD, digital file or analog capture card on a computer display instead requires some form of deinterlacing in the player software and/or graphics hardware, which often uses very simple methods to deinterlace. This means that interlaced video often has visible artifacts on computer systems. Computer systems may be used to edit interlaced video, but the disparity between computer video display systems and interlaced television signal formats means that the video content being edited cannot be viewed properly without separate video display hardware. Current manufacture TV sets employ a system of intelligently extrapolating the extra information that would be present in a progressive signal entirely from an interlaced original. In theory: this should simply be a problem of applying the appropriate algorithms to the interlaced signal, as all information should be present in that signal. 
In practice, results are currently variable, and depend on the quality of the input signal and amount of processing power applied to the conversion. The biggest impediment, at present, is artifacts in the lower quality interlaced signals (generally broadcast video), as these are not consistent from field to field. On the other hand, high bit rate interlaced signals such as from HD camcorders operating in their highest bit rate mode work well. Deinterlacing algorithms temporarily store a few frames of interlaced images and then extrapolate extra frame data to make a smooth flicker-free image. This frame storage and processing results in a slight display lag that is visible in business showrooms with a large number of different models on display. Unlike the old unprocessed NTSC signal, the screens do not all follow motion in perfect synchrony. Some models appear to update slightly faster or slower than others. Similarly, the audio can have an echo effect due to different processing delays. History When motion picture film was developed, the movie screen had to be illuminated at a high rate to prevent visible flicker. The exact rate necessary varies by brightness — 50 Hz is (barely) acceptable for small, low brightness displays in dimly lit rooms, whilst 80 Hz or more may be necessary for bright displays that extend into peripheral vision. The film solution was to project each frame of film three times using a three-bladed shutter: a movie shot at 16 frames per second illuminated the screen 48 times per second. Later, when sound film became available, the higher projection speed of 24 frames per second enabled a two bladed shutter to produce 48 times per second illumination—but only in projectors incapable of projecting at the lower speed. This solution could not be used for television. To store a full video frame and display it twice requires a frame buffer—electronic memory (RAM)—sufficient to store a video frame. This method did not become feasible until the late 1980s and with digital technology. In addition, avoiding on-screen interference patterns caused by studio lighting and the limits of vacuum tube technology required that CRTs for TV be scanned at AC line frequency. (This was 60 Hz in the US, 50 Hz Europe.) In the domain of mechanical television, Léon Theremin demonstrated the concept of interlacing. He had been developing a mirror drum-based television, starting with 16 lines resolution in 1925, then 32 lines and eventually 64 using interlacing in 1926. As part of his thesis, on May 7, 1926, he electrically transmitted and projected near-simultaneous moving images on a five-foot square screen. In 1930, German Telefunken engineer Fritz Schröter first formulated and patented the concept of breaking a single video frame into interlaced lines. In the USA, RCA engineer Randall C. Ballard patented the same idea in 1932. Commercial implementation began in 1934 as cathode ray tube screens became brighter, increasing the level of flicker caused by progressive (sequential) scanning. In 1936, when the UK was setting analog standards, early thermionic valve based CRT drive electronics could only scan at around 200 lines in 1/50 of a second (i.e. approximately a 10kHz repetition rate for the sawtooth horizontal deflection waveform). Using interlace, a pair of 202.5-line fields could be superimposed to become a sharper 405 line frame (with around 377 used for the actual image, and yet fewer visible within the screen bezel; in modern parlance, the standard would be "377i"). 
The vertical scan frequency remained 50 Hz, but visible detail was noticeably improved. As a result, this system supplanted John Logie Baird's 240 line mechanical progressive scan system that was also being trialled at the time. From the 1940s onward, improvements in technology allowed the US and the rest of Europe to adopt systems using progressively higher line-scan frequencies and more radio signal bandwidth to produce higher line counts at the same frame rate, thus achieving better picture quality. However the fundamentals of interlaced scanning were at the heart of all of these systems. The US adopted the 525 line system, later incorporating the composite color standard known as NTSC, Europe adopted the 625 line system, and the UK switched from its idiosyncratic 405 line system to (the much more US-like) 625 to avoid having to develop a (wholly) unique method of color TV. France switched from its similarly unique 819 line monochrome system to the more European standard of 625. Europe in general, including the UK, then adopted the PAL color encoding standard, which was essentially based on NTSC, but inverted the color carrier phase with each line (and frame) in order to cancel out the hue-distorting phase shifts that dogged NTSC broadcasts. France instead adopted its own unique, twin-FM-carrier based SECAM system, which offered improved quality at the cost of greater electronic complexity, and was also used by some other countries, notably Russia and its satellite states. Though the color standards are often used as synonyms for the underlying video standard - NTSC for 525i/60, PAL/SECAM for 625i/50 - there are several cases of inversions or other modifications; e.g. PAL color is used on otherwise "NTSC" (that is, 525i/60) broadcasts in Brazil, as well as vice versa elsewhere, along with cases of PAL bandwidth being squeezed to 3.58MHz to fit in the broadcast waveband allocation of NTSC, or NTSC being expanded to take up PAL's 4.43MHz. Interlacing was ubiquitous in displays until the 1970s, when the needs of computer monitors resulted in the reintroduction of progressive scan, including on regular TVs or simple monitors based on the same circuitry; most CRT based displays are entirely capable of displaying both progressive and interlace regardless of their original intended use, so long as the horizontal and vertical frequencies match, as the technical difference is simply that of either starting/ending the vertical sync cycle halfway along a scanline every other frame (interlace), or always synchronising right at the start/end of a line (progressive). Interlace is still used for most standard definition TVs, and the 1080i HDTV broadcast standard, but not for LCD, micromirror (DLP), or most plasma displays; these displays do not use a raster scan to create an image (their panels may still be updated in a left-to-right, top-to-bottom scanning fashion, but always in a progressive fashion, and not necessarily at the same rate as the input signal), and so cannot benefit from interlacing (where older LCDs use a "dual scan" system to provide higher resolution with slower-updating technology, the panel is instead divided into two adjacent halves that are updated simultaneously): in practice, they have to be driven with a progressive scan signal. The deinterlacing circuitry to get progressive scan from a normal interlaced broadcast television signal can add to the cost of a television set using such displays. Currently, progressive displays dominate the HDTV market. 
Interlace and computers In the 1970s, computers and home video game systems began using TV sets as display devices. At that point, a 480-line NTSC signal was well beyond the graphics abilities of low cost computers, so these systems used a simplified video signal that made each video field scan directly on top of the previous one, rather than each line between two lines of the previous field, along with relatively low horizontal pixel counts. This marked the return of progressive scanning not seen since the 1920s. Since each field became a complete frame on its own, modern terminology would call this 240p on NTSC sets, and 288p on PAL. While consumer devices were permitted to create such signals, broadcast regulations prohibited TV stations from transmitting video like this. Computer monitor standards such as the TTL-RGB mode available on the CGA and e.g. BBC Micro were further simplifications to NTSC, which improved picture quality by omitting modulation of color, and allowing a more direct connection between the computer's graphics system and the CRT. By the mid-1980s, computers had outgrown these video systems and needed better displays. Most home and basic office computers suffered from the use of the old scanning method, with the highest display resolution being around 640x200 (or sometimes 640x256 in 625-line/50 Hz regions), resulting in a severely distorted tall narrow pixel shape, making the display of high resolution text alongside realistic proportioned images difficult (logical "square pixel" modes were possible but only at low resolutions of 320x200 or less). Solutions from various companies varied widely. Because PC monitor signals did not need to be broadcast, they could consume far more than the 6, 7 and 8 MHz of bandwidth that NTSC and PAL signals were confined to. IBM's Monochrome Display Adapter and Enhanced Graphics Adapter as well as the Hercules Graphics Card and the original Macintosh computer generated video signals of 342 to 350p, at 50 to 60 Hz, with approximately 16MHz of bandwidth, some enhanced PC clones such as the AT&T 6300 (aka Olivetti M24) as well as computers made for the Japanese home market managed 400p instead at around 24MHz, and the Atari ST pushed that to 71Hz with 32MHz bandwidth - all of which required dedicated high-frequency (and usually single-mode, i.e. not "video"-compatible) monitors due to their increased line rates. The Commodore Amiga instead created a true interlaced 480i60/576i50 RGB signal at broadcast video rates (and with a 7 or 14MHz bandwidth), suitable for NTSC/PAL encoding (where it was smoothly decimated to 3.5~4.5MHz). This ability (plus built-in genlocking) resulted in the Amiga dominating the video production field until the mid-1990s, but the interlaced display mode caused flicker problems for more traditional PC applications where single-pixel detail is required, with "flicker-fixer" scan-doubler peripherals plus high-frequency RGB monitors (or Commodore's own specialist scan-conversion A2024 monitor) being popular, if expensive, purchases amongst power users. 1987 saw the introduction of VGA, on which PCs soon standardized, as well as Apple's Macintosh II range which offered displays of similar, then superior resolution and color depth, with rivalry between the two standards (and later PC quasi-standards such as XGA and SVGA) rapidly pushing up the quality of display available to both professional and home users. 
In the late 1980s and early 1990s, monitor and graphics card manufacturers introduced newer high resolution standards that once again included interlace. These monitors ran at higher scanning frequencies, typically allowing a 75 to 90 Hz field rate (i.e. 37 to 45Hz frame rate), and tended to use longer-persistence phosphors in their CRTs, all of which was intended to alleviate flicker and shimmer problems. Such monitors proved generally unpopular, outside of specialist ultra-high-resolution applications such as CAD and DTP which demanded as many pixels as possible, with interlace being a necessary evil and better than trying to use the progressive-scan equivalents. Whilst flicker was often not immediately obvious on these displays, eyestrain and lack of focus nevertheless became a serious problem, and the trade-off for a longer afterglow was reduced brightness and poor response to moving images, leaving visible and often off-colored trails behind. These colored trails were a minor annoyance for monochrome displays, and the generally slower-updating screens used for design or database-query purposes, but much more troublesome for color displays and the faster motions inherent in the increasingly popular window-based operating systems, as well as the full-screen scrolling in WYSIWYG word-processors, spreadsheets, and of course for high-action games. Additionally, the regular, thin horizontal lines common to early GUIs, combined with low color depth that meant window elements were generally high-contrast (indeed, frequently stark black-and-white), made shimmer even more obvious than with otherwise lower fieldrate video applications. As rapid technological advancement made it practical and affordable, barely a decade after the first ultra-high-resolution interlaced upgrades appeared for the IBM PC, to provide sufficiently high pixel clocks and horizontal scan rates for hi-rez progressive-scan modes in first professional and then consumer-grade displays, the practice was soon abandoned. For the rest of the 1990s, monitors and graphics cards instead made great play of their highest stated resolutions being "non-interlaced", even where the overall framerate was barely any higher than what it had been for the interlaced modes (e.g. SVGA at 56p versus 43i to 47i), and usually including a top mode technically exceeding the CRT's actual resolution (number of color-phosphor triads) which meant there was no additional image clarity to be gained through interlacing and/or increasing the signal bandwidth still further. This experience is why the PC industry today remains against interlace in HDTV, and lobbied for the 720p standard, and continues to push for the adoption of 1080p (at 60 Hz for NTSC legacy countries, and 50 Hz for PAL); however, 1080i remains the most common HD broadcast resolution, if only for reasons of backward compatibility with older HDTV hardware that cannot support 1080p - and sometimes not even 720p - without the addition of an external scaler, similar to how and why most SD-focussed digital broadcasting still relies on the otherwise obsolete MPEG2 standard embedded into e.g. DVB-T. See also Field (video): In interlaced video, one of the many still images displayed sequentially to create the illusion of motion on the screen. 
480i: standard-definition interlaced video usually used in traditionally NTSC countries (North and parts of South America, Japan) 576i: standard-definition interlaced video usually used in traditionally PAL and SECAM countries 1080i: high-definition television (HDTV) digitally broadcast in 16:9 (widescreen) aspect ratio standard Progressive scan: the opposite of interlacing; the image is displayed line by line. Deinterlacing: converting an interlaced video signal into a non-interlaced one Progressive segmented frame: a scheme designed to acquire, store, modify, and distribute progressive-scan video using interlaced equipment and media Telecine: a method for converting film frame rates to television frame rates using interlacing Federal Standard 1037C: defines interlaced scanning Moving image formats Wobulation: a variation of interlacing used in DLP displays Screen tearing References External links Fields: Why Video Is Crucially Different from Graphics – An article that describes field-based, interlaced, digitized video and its relation to frame-based computer graphics with many illustrations Digital Video and Field Order - An article that explains with diagrams how the field order of PAL and NTSC has arisen, and how PAL and NTSC is digitized 100FPS.COM* – Video Interlacing/Deinterlacing Interlace / Progressive Scanning - Computer vs. Video Sampling theory and synthesis of interlaced video Interlaced versus progressive Film and video technology Television technology Video formats 1925 introductions
24281777
https://en.wikipedia.org/wiki/Brain%20Fuck%20Scheduler
Brain Fuck Scheduler
The Brain Fuck Scheduler (BFS) is a process scheduler designed for the Linux kernel in August 2009 as an alternative to the Completely Fair Scheduler (CFS) and the O(1) scheduler. BFS was created by veteran kernel programmer Con Kolivas. The objective of BFS, compared to other schedulers, is to provide a scheduler with a simpler algorithm that does not require adjustment of heuristics or tuning parameters to tailor performance to a specific type of computational workload. Kolivas asserted that these tunable parameters were difficult for the average user to understand, especially in terms of interactions of multiple parameters with each other, and claimed that the use of such tuning parameters could often result in improved performance in a specific targeted type of computation, at the cost of worse performance in the general case. BFS has been reported to improve responsiveness on Linux desktop computers with fewer than 16 cores. Shortly following its introduction, the new scheduler made headlines within the Linux community, appearing on Slashdot, with reviews in Linux Magazine and Linux Pro Magazine. Although there have been varied reviews of improved performance and responsiveness, Con Kolivas did not intend for BFS to be integrated into the mainline kernel. Theoretical design and efficiency BFS, as introduced in 2009, originally used a doubly linked list data structure, but the data structure is treated like a queue. Task insertion is O(1). The search for the next task to execute is O(n) worst case. It uses a single global run queue which all CPUs use. Tasks with higher scheduling priorities get executed first. Tasks are ordered (or distributed) and chosen based on the virtual deadline formula in all policies except for the realtime and Isochronous priority classes. The execution behavior is still a weighted variation of the round-robin scheduler, especially when tasks have the same priority below the Isochronous policy. The user-tuneable round robin interval (time slice) is 6 milliseconds by default, which was chosen as the minimal jitter just below that detectable by humans. Kolivas claimed that anything below 6 ms was pointless and anything above 300 ms for the round robin timeslice was fruitless in terms of throughput. This important tuneable can tailor the round robin scheduler as a trade-off between throughput and latency. All tasks get the same time slice, with the exception of realtime FIFO tasks, which are assumed to have an infinite time slice. Kolivas explained that he chose the doubly linked list mono-runqueue over the per-CPU multi-runqueue (round robin) priority array used in his RSDL scheduler in order to ease fairness in the multiple-CPU scenario and to remove the complexity of each runqueue in a multi-runqueue scenario having to maintain its own latencies and [task] fairness. He claimed that deterministic latencies were guaranteed with BFS in his later iteration of MuQSS. He also recognized a possible lock contention problem (related to the altering, removal and creation of task node data) with increasing CPUs, and the overhead of the O(log n) next-task-for-execution lookup. MuQSS tried to resolve those problems. Kolivas later changed the design to a skip list in the v0.480 release of BFS in 2016. This change altered the efficiency of the scheduler. He noted O(log n) task insertion, O(1) task lookup; O(k), with k<=16, for task removal.
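The queue behaviour just described, O(1) insertion into one global run queue and an O(n) scan to pick the next task by priority tier and earliest virtual deadline, can be modelled with a short sketch. This is an assumption-laden toy model in Java, not the kernel's C code; the task fields and tie-breaking are simplified.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of BFS's next-task selection: one global run queue, O(1) insertion
// at the tail, O(n) scan for the runnable task with the best (lowest) prio,
// breaking ties by the earliest virtual deadline.
class Task {
    String name;
    int prio;             // lower value = more important policy/priority tier
    long virtualDeadline; // in niffies (nanosecond jiffies)
    Task(String name, int prio, long vd) { this.name = name; this.prio = prio; this.virtualDeadline = vd; }
}

class GlobalRunQueue {
    private final List<Task> queue = new ArrayList<>();

    void enqueue(Task t) { queue.add(t); }           // O(1)

    Task pickNext() {                                 // O(n) worst case
        Task best = null;
        for (Task t : queue) {
            if (best == null
                    || t.prio < best.prio
                    || (t.prio == best.prio && t.virtualDeadline < best.virtualDeadline)) {
                best = t;
            }
        }
        if (best != null) queue.remove(best);
        return best;
    }
}
```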
Virtual deadline The virtual deadline formula is a future deadline time that is the scaled round robin timeslice based on the nice level, offset by the current time (in niffy units or nanosecond jiffies, an internal kernel time counter). The virtual deadline only suggests the order but does not guarantee that a task will run exactly on the future scheduled niffy. First a prio ratios lookup table is created. It is based on a recursive sequence that increases 10% each nice level. It follows a parabolic pattern if graphed, and the niced tasks are distributed as a moving squared function with 0 to 39 (corresponding from highest to lowest nice priority) as the domain and 128 to 5089 as the range. The moving part comes from the current-time offset in the virtual deadline formula that Kolivas hinted at. The task's nice-to-index mapping function maps nice −20...19 to index 0...39, to be used as the input to the prio ratio lookup table. This mapping function is a macro in sched.h in the kernel header. The internal kernel implementation differs slightly, with a static priority range between 100 and 140, but users will see it as −20...19 nice. The virtual deadline is based on this exact formula: $VD(nice, T) = T + \rho(g(nice)) \cdot rr \cdot \frac{2^{20}}{128}$. Alternatively, $VD(nice, T) = T + \frac{\rho(g(nice))}{128} \cdot rr \cdot 2^{20}$, where $VD$ is the virtual deadline in u64 integer nanoseconds as a function of nice and $T$, which is the current time in niffies, $\rho$ is the prio ratio table lookup as a function of index, $g$ is the task's nice-to-index mapping function, $rr$ is the round robin timeslice in milliseconds, and $2^{20}$ is a constant of 1 millisecond in terms of nanoseconds, a latency-reducing approximation of the exact conversion factor of $10^6$; Kolivas uses a base 2 constant with approximately that scale. Smaller values of $\rho(g(nice))$ mean that the virtual deadline is earlier, corresponding to negative nice values. Larger values of $\rho(g(nice))$ indicate the virtual deadline is pushed back later, corresponding to positive nice values. It uses this formula whenever the timeslice expires. 128 in base 2 corresponds to 100 in base 10 and possibly a "pseudo 100". 115 in base 2 corresponds to 90 in base 10. Kolivas uses 128 for "fast shifts", as in division is right shift base 2.
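A small numerical sketch of the virtual deadline machinery above may help; it is an illustration only, assuming the 10%-per-nice-level integer recurrence starting at 128 for the prio ratio table and the 2^20/128 scaling described in the text, and is not a copy of the scheduler's code.

```java
// Sketch of the prio-ratio table and virtual-deadline calculation described above.
// Assumes: ratio[0] = 128, each following entry is the previous one plus 10%
// (integer arithmetic), and deadline = now + ratio * rr_interval_ms * 2^20 / 128.
public class VirtualDeadline {
    static final long[] PRIO_RATIOS = new long[40];
    static {
        PRIO_RATIOS[0] = 128;
        for (int i = 1; i < 40; i++) {
            PRIO_RATIOS[i] = PRIO_RATIOS[i - 1] * 11 / 10;  // +10% per nice level
        }
        // PRIO_RATIOS[39] works out to 5089, matching the range quoted in the text.
    }

    static int niceToIndex(int nice) { return nice + 20; }   // nice -20..19 -> index 0..39

    // now is in niffies (nanoseconds); rrIntervalMs is the round-robin timeslice.
    static long virtualDeadline(long now, int nice, int rrIntervalMs) {
        return now + PRIO_RATIOS[niceToIndex(nice)] * rrIntervalMs * ((1L << 20) / 128);
    }

    public static void main(String[] args) {
        long now = 1_000_000_000L;  // an arbitrary current time in niffies
        System.out.println(virtualDeadline(now, -20, 6)); // earliest deadline
        System.out.println(virtualDeadline(now,   0, 6));
        System.out.println(virtualDeadline(now,  19, 6)); // latest deadline
    }
}
```

Under these assumptions, with the default 6 ms timeslice a nice −20 task is pushed about 6 ms into the future while a nice 19 task is pushed about 250 ms into the future, which is how niceness translates into execution order.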
Scheduling policies BFS uses scheduling policies to determine how much CPU time tasks may use. BFS uses 4 scheduling tiers (called scheduling policies or scheduling classes) ordered from best to worst, which determine how tasks are selected, with the ones on top being executed first. Each task has a special value called a prio. In the v0.462 edition (used in the -ck 4.0 kernel patchset), there are a total of 103 "priority queues" (aka prio) or allowed values that it can take. No special data structure was used as the priority queue, only the doubly linked list runqueue itself. A lower prio value means the task is more important and gets executed first. Realtime policy The realtime policy was designed for realtime tasks. This policy implies that running tasks cannot be interrupted (i.e. preempted) by a task with a lower prio or from a lower priority policy tier. Priority classes considered under the realtime policy by the scheduler are those marked SCHED_RR and SCHED_FIFO. The scheduler treats realtime round robin (SCHED_RR) and realtime FIFO (SCHED_FIFO) differently. The design laid out the first 100 static priority queues. The task chosen for execution is the available task with the lowest prio value of the 100 queues, scheduled FIFO. On forks, the process priority will be demoted to normal policy. On unprivileged use (i.e. by a non-root user) of sched_setscheduler called with a request for a realtime policy class, the scheduler will demote the task to Isochronous policy. Isochronous policy The Isochronous policy was designed for near-realtime performance for non-root users. The design laid out one priority queue whose tasks by default ran as pseudo-realtime tasks, but which can be tuned to a degree of realtime. The policy allows a task to be demoted to normal policy when it exceeds a tuneable resource handling percentage (70% by default) of 5 seconds scaled to the number of online CPUs and the timer resolution, plus 1 tick. The formula was altered in MuQSS due to the multi-runqueue design. The exact formulas are: $T = (5 \cdot F \cdot N) + 1$, and demotion occurs once the consumed isochronous ticks exceed $T \cdot \frac{P}{100}$, where $T$ is the total number of isochronous ticks, $F$ is the timer frequency, $N$ is the number of online CPUs, and $P$ is the tuneable resource handling percentage, not in decimal but as a whole number. The timer frequency is set to 250 by default and editable in the kernel, but usually tuned to 100 Hz for servers and 1000 Hz for interactive desktops. 250 is the balanced value. Setting the percentage to 100 made tasks behave as realtime, 0 made them not pseudo-realtime, and anything in between was pseudo-realtime. The task with the earliest virtual deadline is chosen for execution, but when multiple Isochronous tasks exist, they are scheduled round robin, each allowed to run for the tuneable round robin value (with 6 ms as the default) one after another with a fair, equal chance, without considering the nice level. This behavior of the Isochronous policy is unique to BFS and MuQSS and may not be implemented in other CPU schedulers. Normal policy The normal policy was designed for regular use and is the default policy. Newly created tasks are typically marked normal. The design laid out one priority queue, and tasks are chosen to be executed first based on the earliest virtual deadline. Idle priority policy The idle priority policy was designed for background processes such as distributed programs and transcoders, so that foreground processes or those above this scheduling policy can run uninterrupted. The design laid out one priority queue, and tasks can be promoted to normal policy automatically to prevent an indefinite resource hold. Among tasks residing in the Idle priority policy, the next task to execute is selected by the earliest virtual deadline. Preemption Preemption can occur when a newly ready task with a higher priority policy (i.e. higher prio) has an earlier virtual deadline than the currently running task - which will be descheduled and put at the back of the queue. Descheduled means that its virtual deadline is updated. A task's time is refilled to the maximum round robin quantum when it has used up all its time. If the scheduler finds a task at a higher prio with the earliest virtual deadline, it will execute in place of the less important currently running task only if all logical CPUs (including hyperthreaded cores / SMT threads) are busy. The scheduler will delay preemption as long as possible if there are unused logical CPUs. If a task is marked with the idle priority policy, it cannot preempt at all, not even other idle-policy tasks, and instead relies on cooperative multitasking.
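Returning to the Isochronous budget described earlier in this section, a small worked example (a sketch under the reconstructed formula above, not kernel code) shows the default numbers:

```java
// Worked example of the Isochronous budget: with the default 250 Hz timer,
// 4 online CPUs and the default 70% setting, the period is 5*250*4 + 1 = 5001
// ticks and Isochronous tasks are demoted once roughly 70% of it is consumed.
public class IsoBudget {
    static long isoPeriodTicks(int timerHz, int onlineCpus) {
        return 5L * timerHz * onlineCpus + 1;   // "5 seconds scaled to CPUs, plus 1 tick"
    }

    static long demotionThresholdTicks(int timerHz, int onlineCpus, int percent) {
        return isoPeriodTicks(timerHz, onlineCpus) * percent / 100;
    }

    public static void main(String[] args) {
        System.out.println(isoPeriodTicks(250, 4));             // 5001
        System.out.println(demotionThresholdTicks(250, 4, 70)); // 3500
    }
}
```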
The scheduler favors most the idle hyperthreaded cores (or idle SMT threads) first on the same CPU that the task executed on, then the other idle core of a multicore CPU, then the other CPUs on the same NUMA node, then all busy hyperthreaded cores / SMT threads / logical CPUs to be preempted on the same NUMA node, then the other (remote) NUMA node and is ranked on a preference list. This special scan exists to minimize latency overhead resulting of migrating the task. The preemption order is similar to the above paragraph. The preemption order is hyperthreaded core / SMT units on the same multicore first, then the other core in the multicore, then the other CPU on the same NUMA node. When it goes scanning for a task to preempt in the other remote NUMA node, the preemption is just any busy threads with lower to equal prio or later virtual deadline assuming that all logical CPUs (including hyperthreaded core / SMT threads) in the machine are all busy. The scheduler will have to scan for a suitable task with a lower or maybe equal priority policy task (with a later virtual deadline if necessary) to preempt and avoid logical CPUs with a task with a higher priority policy which it cannot preempt. Local preemption has a higher rank than scanning for a remote idle NUMA unit. When a task is involuntary preempted at the time the CPU is slowed down as a result of kernel mediated CPU frequency scaling (aka CPU frequency governor), the task is specially marked "sticky" except those marked as realtime policy. Marked sticky indicates that the task still has unused time and the task is restricted executing to the same CPU. The task will be marked sticky whenever the CPU scaling governor has scaled the CPU at a slower speed. The idled stickied task will return to either executing at full Ghz speed by chance or to be rescheduled to execute on the best idle CPU that is not the same CPU that the task ran on. It is not desirable to migrate the task to other places but make it idle instead because of increased latency brought about of overhead to migrating the task to another CPU or NUMA node. This sticky feature was removed in the last iteration of BFS (v0.512) corresponding to Kolivas' patchset 4.8-ck1 and did not exist in MuQSS. schedtool A privileged user can change the priority policy of a process with the schedtool program or it is done by a program itself. The priority class can be manipulated at the code level with a syscall like sched_setscheduler only available to root, which schedtool uses. Benchmarks In a contemporary study, the author compared the BFS to the CFS using the Linux kernel v3.6.2 and several performance-based endpoints. The purpose of this study was to evaluate the Completely Fair Scheduler (CFS) in the vanilla Linux kernel and the BFS in the corresponding kernel patched with the ck1 patchset. Seven different machines were used to see if differences exist and, to what degree they scale using performance based metrics. Number of logical CPUs ranged from 1 to 16. These end-points were never factors in the primary design goals of the BFS. The results were encouraging. Kernels patched with the ck1 patch set including the BFS outperformed the vanilla kernel using the CFS at nearly all the performance-based benchmarks tested. 
Further study with a larger test set could be conducted, but based on the small test set of 7 PCs evaluated, these increases in process queuing, efficiency/speed are, on the whole, independent of CPU type (mono, dual, quad, hyperthreaded, etc.), CPU architecture (32-bit and 64-bit) and of CPU multiplicity (mono or dual socket). Moreover, on several "modern" CPUs, such as the Intel Core 2 Duo and Core i7, that represent common workstations and laptops, BFS consistently outperformed the CFS in the vanilla kernel at all benchmarks. Efficiency and speed gains were small to moderate. Adoption BFS is the default scheduler for the following desktop Linux distributions: NimbleX and Sabayon Linux 7 PCLinuxOS 2010 Zenwalk 6.4 GalliumOS 2.1 Additionally, BFS has been added to an experimental branch of Google's Android development repository. It was not included in the Froyo release after blind testing did not show an improved user experience. MuQSS BFS has been retired in favour of MuQSS, known formally as the Multiple Queue Skiplist Scheduler, a rewritten implementation of the same concept. Theoretical design and efficiency MuQSS uses a bidirectional static arrayed 8-level skip list, and tasks are ordered by static priority [queues] (referring to the scheduling policy) and a virtual deadline. 8 was chosen to fit the array in the cacheline. The doubly linked data structure design was chosen to speed up task removal. Removing a task takes only O(1) with a doubly linked skip list, versus the original design by William Pugh, which takes O(n) worst case. Task insertion is O(log n). The next-task-for-execution lookup is O(k), where k is the number of CPUs. The next task for execution is O(1) per runqueue, but the scheduler examines every other runqueue to maintain task fairness among CPUs, for latency or balancing (to maximize CPU usage and cache coherency on the same NUMA node over those that access across NUMA nodes), so ultimately O(k). The maximum number of tasks it can handle is 64k tasks per runqueue per CPU. It uses multiple task runqueues, in some configurations one runqueue per CPU, whereas its predecessor BFS used only one task runqueue for all CPUs. Tasks are ordered as a gradient in the skip list in such a way that realtime policy priority comes first and idle policy priority comes last. Normal and idle priority policies still get sorted by virtual deadline, which uses nice values. Realtime and Isochronous policy tasks are run in FIFO order, ignoring nice values. New tasks with the same key are placed in FIFO order, meaning that newer tasks get placed at the end of the list (i.e. the topmost node vertically), and tasks at the 0th level or at the front-bottom get executed first, before those nearest to the top vertically and those furthest away from the head node. The key used for insertion sorting is either the static priority or the virtual deadline. The user can choose to share runqueues among multiple cores or have one runqueue per logical CPU. Sharing runqueues was speculated to reduce latency, with a trade-off in throughput. A new behavior introduced by MuQSS was the use of a high-resolution timer for below-millisecond accuracy when timeslices are used up, resulting in the rescheduling of tasks. See also Fair-share scheduling References External links Brain Fuck Scheduler FAQ Free software Linux kernel process schedulers
63600120
https://en.wikipedia.org/wiki/Govt.%20Degree%20College%20Phool%20Nagar%2C%20Kasur%20%28Boys%29
Govt. Degree College Phool Nagar, Kasur (Boys)
Government Associate College Phool Nagar is located in Phool Nagar, Punjab, Pakistan. It was established on 1 September 1974 and offers courses in mathematics, sciences, computer science, languages and history. A library was established at the college on 1 September 1989. The college was nationalized during the government of Pakistan Prime Minister Zulfikar Ali Bhutto. Principal and vice principal The current principal of the college is Rao Abdul Waheed Tabish (2017 onwards), and the vice principal is Abdul Ghani. Departments The following are the departments of Govt. Degree College Phool Nagar. Department Of Mathematics CTI Department Of Statistics Professor Mudassar Hussain (M.Phil Statistics) Department Of Chemistry Professor Usman Arshad (M.Phil Chemistry) Department Of Biology CTI Department Of Physics Professor Munir Ahmed (M.Phil Physics) Department Of Psychology Professor Abdul Waheed Tabish (M.Phil Psychology) Department Of Islamiat Professor Muhammad Zikriya M.Phil (Islamiat), PhD Scholar, MA (Education), MA (Arabic) Department Of Urdu Professor Rao Akbar Ali M.Phil (Urdu) Department Of Education CTI Department Of English Professor Habibullah Naveed Ahmed Department Of History Professor Saleem Akhtar MA (Political Science) Department Of Political Science CTI Department Of Sociology CTI Department Of Arabic Professor Dr. Hafiz Abdul Ghani PhD (Islamiat) Department Of Physical Education Ali Yousaf Department Of Economics Professor Atif Javaid Rao M.Phil (Economics) Professor Fahad Iqbal MA (Economics) Department Of Library Science Janaab Zahid Nawab MA (Library Science) Teaching Subjects For Examination Of Intermediate Compulsory Subjects:- 1- Urdu 2- English 3- Islamiat (Compulsory) 4- Pakistan Studies Elective Subjects:- Pre-Medical Group:- 1- Chemistry 2- Physics 3- Biology Pre-Engineering Group:- 1- Chemistry 2- Physics 3- Mathematics ICS:- You have to select one of the following groups G-1 Physics, Computer Science, Mathematics G-2 Economics, Computer Science, Mathematics G-3 Statistics, Computer Science, Mathematics General Science Group:- You have to select one of the following groups G-1 Economics, Statistics, Mathematics G-2 Physics, Statistics, Mathematics I.Com:- Accounting, Principles of Economics, Principles of Commerce, Business Mathematics Humanities Group:- You have to select one of the following groups G-1 Psychology, Sociology, Economics, History of Pakistan G-2 Physical Education, Civics, Education G-3 Library Science, Arabic, Islamiat (Elective) See also Superior Group of Colleges Punjab Group of Colleges References https://m.facebook.com/pages/category/Education/Government-Degree-College-for-Boys-Phool-Nagar-Kasur-949829955165596/ https://www.google.com/maps/place/Govt+Boys+College+Phool+Nagar,+Lahore,+Kasur,+Punjab,+Pakistan/@31.2158845,73.9357567,15z/data=!4m2!3m1!1s0x39185c7603b0b351:0x95f8b2703f6fc526?gl=pk Schools in Punjab, Pakistan
20946870
https://en.wikipedia.org/wiki/Alistair%20Sinclair
Alistair Sinclair
Alistair Sinclair (born 1960) is a British computer scientist and computational theorist. Sinclair received his B.A. in mathematics from St John's College, Cambridge in 1979, and his Ph.D. in computer science from the University of Edinburgh in 1988 under the supervision of Mark Jerrum. He is a professor in the Computer Science division at the University of California, Berkeley, and has held faculty positions at the University of Edinburgh and visiting positions at DIMACS and the International Computer Science Institute in Berkeley. Sinclair's research interests include the design and analysis of randomized algorithms, computational applications of stochastic processes and nonlinear dynamical systems, Monte Carlo methods in statistical physics, and combinatorial optimization. With his advisor Mark Jerrum, Sinclair investigated the mixing behaviour of Markov chains to construct approximation algorithms for counting problems such as computing the permanent, with applications in diverse fields such as matching algorithms, geometric algorithms, mathematical programming, statistics, physics-inspired applications and dynamical systems. This work has been highly influential in theoretical computer science and was recognised with the Gödel Prize in 1996. A refinement of these methods led to a fully polynomial-time randomised approximation algorithm for computing the permanent, for which Sinclair and his co-authors received the Fulkerson Prize in 2006. Sinclair's initial forms part of the name of the GNRS conjecture on metric embeddings of minor-closed graph families.
References
British computer scientists
Theoretical computer scientists
Gödel Prize laureates
Alumni of the University of Edinburgh
Living people
UC Berkeley College of Engineering faculty
1960 births
Alumni of St John's College, Cambridge
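To give a concrete sense of what "approximating the permanent" means, the toy Monte Carlo estimator below uses the identity that, for a 0-1 matrix, perm(A) equals n! times the probability that a uniformly random permutation sigma hits only 1-entries. This naive estimator is only an illustration and is emphatically not the Jerrum–Sinclair (or Jerrum–Sinclair–Vigoda) algorithm, which instead relies on carefully designed Markov chains over matchings with provably rapid mixing; the function names here are my own.

```python
import itertools
import math
import random

def permanent_exact(a):
    """Brute-force permanent; feasible only for very small matrices."""
    n = len(a)
    return sum(
        math.prod(a[i][j] for i, j in enumerate(sigma))
        for sigma in itertools.permutations(range(n))
    )

def permanent_naive_mc(a, samples=100_000):
    """Naive estimator for a 0-1 matrix: perm(A) = n! * Pr[all A[i, sigma(i)] = 1]."""
    n = len(a)
    cols = list(range(n))
    hits = 0
    for _ in range(samples):
        random.shuffle(cols)                          # uniform random permutation
        if all(a[i][cols[i]] == 1 for i in range(n)):
            hits += 1
    return math.factorial(n) * hits / samples

a = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1]]
print(permanent_exact(a))               # 3
print(round(permanent_naive_mc(a), 2))  # close to 3, up to sampling error
```

The naive estimator's relative error blows up when the permanent is tiny compared with n!, which is exactly the difficulty the Markov-chain approach was designed to overcome.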
39228340
https://en.wikipedia.org/wiki/Information%20security%20indicators
Information security indicators
In information technology, benchmarking of computer security requires measurements for comparing both different IT systems and single IT systems in dedicated situations. The technical approach is a pre-defined catalog of security events (security incidents and vulnerabilities) together with corresponding formulas for the calculation of security indicators that are accepted and comprehensive. Information security indicators have been standardized by the ETSI Industrial Specification Group (ISG) ISI. These indicators provide the basis to switch from a qualitative to a quantitative culture in IT security.
Scope of measurements: external and internal threats (attempt and success), users' deviant behaviours, nonconformities and/or vulnerabilities (software, configuration, behavioural, general security framework).
In 2019 the ISG ISI was terminated, and the related standards are maintained via the ETSI TC CYBER.
The list of information security indicators belongs to the ISI framework, which consists of the following eight closely linked work items:
ISI Indicators (ISI-001-1 and Guide ISI-001-2): a powerful way to assess the level of enforcement and effectiveness of security controls (+ benchmarking)
ISI Event Model (ISI-002): a comprehensive security event classification model (taxonomy + representation)
ISI Maturity (ISI-003): necessary to assess the maturity level regarding overall SIEM capabilities (technology/people/process) and to weigh event detection results; a methodology complemented by ISI-005 (which is a more detailed and case-by-case approach)
ISI Guidelines for event detection implementation (ISI-004): demonstrates through examples how to produce indicators and how to detect the related events with various means and methods (with classification of use cases/symptoms)
ISI Event Stimulation (ISI-005): proposes a way to produce security events and to test the effectiveness of existing detection means (for major types of events)
An ISI-compliant Measurement and Event Management Architecture for Cyber Security and Safety (ISI-006): this work item focuses on designing a cybersecurity language to model threat intelligence information and enable detection tool interoperability
ISI Guidelines for building and operating a secured SOC (ISI-007): a set of requirements to build and operate a secured SOC (Security Operations Center), addressing technical, human and process aspects
ISI Description of a whole organization-wide SIEM approach (ISI-008): a whole SIEM (CERT/SOC-based) approach positioning all ISI aspects and specifications
Preliminary work on information security indicators was done by the French Club R2GS. The first public set of the ISI standards (the security indicators list and the event model) was released in April 2013.
References
External links
ETSI ISG ISI members
ETSI TC CYBER (responsible for ISI maintenance)
ETSI ISI flyer
ISI Quick Reference Card
ISI events Quick Reference Card
Club R2GS portal
Data security
Security
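A rough, hypothetical illustration of the "catalog of events plus calculation formulas" idea described above is sketched below in Python. The event names and the two formulas are invented for the example and do not correspond to any specific indicator defined in the ETSI ISI specifications.

```python
from dataclasses import dataclass

@dataclass
class EventCounts:
    """Raw security event counts gathered over a reporting period (hypothetical)."""
    intrusion_attempts: int
    intrusions_succeeded: int
    vulnerabilities_found: int
    vulnerabilities_patched: int

def security_indicators(events: EventCounts) -> dict:
    """Derive simple quantitative indicators from the raw event counts."""
    return {
        # share of observed attack attempts that actually succeeded
        "attack_success_rate": events.intrusions_succeeded / max(events.intrusion_attempts, 1),
        # share of known vulnerabilities that have been remediated
        "patch_coverage": events.vulnerabilities_patched / max(events.vulnerabilities_found, 1),
    }

print(security_indicators(EventCounts(
    intrusion_attempts=1200, intrusions_succeeded=3,
    vulnerabilities_found=40, vulnerabilities_patched=34)))
```

Indicators computed with the same formulas across systems or reporting periods are what make the benchmarking described above quantitative rather than qualitative.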
3876776
https://en.wikipedia.org/wiki/Lane%20Kiffin
Lane Kiffin
Lane Monte Kiffin (born May 9, 1975) is an American football coach who is currently the head football coach at the University of Mississippi (Ole Miss). Kiffin formerly served as the offensive coordinator for the USC Trojans football team from 2005 to 2006, head coach of the National Football League's Oakland Raiders from 2007 to 2008, head coach of the University of Tennessee Volunteers college football team in 2009, and head coach of the Trojans from 2010 to 2013. He was the youngest head coach in modern NFL history at the time when he joined the Raiders (until in 2017 when Sean McVay joined the Rams), and, for a time, was the youngest head coach of a BCS Conference team in college football. Kiffin was the offensive coordinator at the University of Alabama from 2014 until 2016, when he was hired to be the head coach at Florida Atlantic, a position he held until December 2019, when he became the head coach at Ole Miss. Kiffin is the son of longtime NFL defensive coordinator Monte Kiffin. Playing career Kiffin graduated from Bloomington Jefferson High School in Minnesota in 1994, and committed to Fresno State University to play college football. He played backup quarterback for the Bulldogs, giving up his senior season to become a Student Assistant Coach for position coach Jeff Tedford, who would later become the head coach at Cal in 2002. Kiffin graduated from Fresno State in 1998. Coaching career Early positions Kiffin worked as a graduate assistant for one year at Colorado State University. In 1999, while he was working with the offensive line, the Rams played in the Liberty Bowl. Kiffin secured a job with the Jacksonville Jaguars of the NFL as a quality control assistant for one year. He was then hired by head coach Pete Carroll as a tight ends coach at USC. USC Trojans assistant coach Kiffin began working with the University of Southern California (USC) staff during the 2001 season and became the wide receivers coach prior to the 2002 season. For the 2004 season, he added the duties of passing game coordinator, and he was promoted to offensive coordinator along with Steve Sarkisian for the 2005 season after Norm Chow left USC for the same position with the NFL's Tennessee Titans. In addition to his duties as offensive coordinator, Kiffin took the reins as recruiting coordinator that year. Along with these duties, Kiffin continued as the wide receivers coach. Under Kiffin and Sarkisian, the 2005 USC offense produced numerous school records, averaging 49.1 points and 579 yards per game and becoming the first in NCAA history to have a 3,000 yard passer (Matt Leinart), two 1,000 yard rushers (Reggie Bush and LenDale White), and a 1,000 yard receiver (Dwayne Jarrett). Steve Smith fell a few yards short of also surpassing 1,000 yards in receiving. In Kiffin's three years as recruiting coordinator at USC, the Trojans had the top ranked recruiting class in college football every year. The Trojans finished first in the Pac-10 in passing efficiency by averaging 142.8 passer rating, produced two, 1,000-yard receivers – Dwayne Jarrett (1,105) and Steve Smith (1,083) – and a 3,000-yard passer John David Booty, with 3,347 yards. The team produced top 20 statistics in most NCAA offensive categories and concluded with a 32–18 win over the then #3 ranked team the University of Michigan in the Rose Bowl. 
Kiffin helped guide USC to a 23–3 record during his tenure as offensive coordinator, an 88.5% win percentage; however, in June 2010, the NCAA retroactively declared Reggie Bush ineligible for the entire 2005 season, and forced USC to vacate all of its 2005 wins. Litigation by former coach Todd McNair, contesting his defamation and seeking to overturn the vacated wins, went on for ten years before the defamation suit was finally settled through mediation in July 2021. The wins remained vacated, as announced by the NCAA two days later.
Oakland Raiders
Raiders' owner Al Davis hired the 31-year-old Kiffin on January 23, 2007, making him the youngest head coach in Oakland Raiders history, and signed Kiffin to a two-year contract worth about $4 million with a team option for 2009. Pro Football Hall of Fame coach John Madden was 32 when he was elevated to the head post by Davis in 1969. Davis had been known to select young, up-and-coming coaches in their thirties; those hires who fared well include John Madden, Mike Shanahan, and Jon Gruden. All have won Super Bowls, though Madden is the only one of the three to win a championship with the Raiders. Aged 31 at the time of his hiring by the Raiders (32 when he coached his first game), Kiffin became the youngest head coach in modern NFL history (i.e. since 1946); he also surpassed the New York Jets' Eric Mangini and the Pittsburgh Steelers' Mike Tomlin as the youngest head coach since the AFL–NFL merger in 1970. On August 12, 2007, in his NFL head coaching debut, Kiffin and the Raiders won their preseason opener 27–23 over the Arizona Cardinals. Kiffin vehemently opposed the selection of LSU quarterback JaMarcus Russell, who would eventually be regarded as one of the biggest draft busts in NFL history, in the 2007 NFL Draft. Russell held out until September 12, and did not make his first start until December 2, long after the season was effectively over. Kiffin recorded his first regular season win as an NFL head coach on September 23, 2007; the Raiders defeated the Cleveland Browns by a score of 26–24 when defensive lineman Tommy Kelly blocked a late Cleveland field goal. At his end-of-the-season press conference, Kiffin told the media and his players that he had many plans and changes he was going to make in the 2008 offseason. When asked by his players about rumors that Kiffin was interested in open coaching positions in college football, he told them he never thought the rumors were important enough to address because he was never planning to leave.
Departure from the Raiders
On January 25, 2008, ESPN NFL analyst Chris Mortensen reported that Davis, who was not known for being patient with his coaches, had tried to force Kiffin to resign after his first season ended with a 4–12 record. A source allegedly close to Kiffin told Mortensen that Kiffin would not resign, and would not sign the letter of resignation that would cause him to forfeit his $2 million salary for the remaining guaranteed year of his contract. However, the Raiders denied the story, while Kiffin refused to comment. On September 15, 2008, NBC Sports reported Davis was unhappy with Kiffin, and intended to fire him as soon as the following Monday or Tuesday. On September 30, 2008, Davis fired Kiffin over the telephone. At the televised news conference announcing the firing, Davis called Kiffin "a flat-out liar" and said he was guilty of "bringing disgrace to the organization".
The Raiders said the move was made for cause, meaning they would terminate his contract immediately without paying the $2.6 million that was left on it for 2008. Kiffin later added in an interview with ESPN that he was not proud to be associated with Davis's accusations and was actually more embarrassed for Davis than himself. The Raiders subsequently released a letter Davis sent to Kiffin on September 12 that warned him that he was on the verge of being fired for "conduct detrimental to the Raiders," including repeated instances of making excuses and outright lies. Kiffin's post-firing press conference was canceled. Kiffin filed a grievance against the Raiders, claiming that he was fired without cause, but on November 15, 2010, an arbitrator ruled that Davis did indeed have cause to fire Kiffin. Kiffin's short tenure as the Raiders' head coach ended with a 5–15 record. Offensive line coach Tom Cable was given interim head coaching duties for the remainder of the 2008 season and was later made their permanent head coach on February 4, 2009. Several of his former Raider staff expressed interest as Kiffin began assembling his new staff at the University of Tennessee. On December 15, 2008, Raiders head coach Tom Cable lashed out at Kiffin for hiring one of his assistants, James Cregg, with two weeks remaining in the NFL season. Cable called the timing of Cregg's departure "wrong in the business of coaching" and indicated he had lost respect for Kiffin and planned to confront him about it. Nothing further was said publicly regarding the incident. Despite the animosity between the Raiders and Kiffin, he released a statement following Al Davis's death in October 2011 stating that although their relationship had not ended well, he appreciated the opportunity Davis had given him and had "nothing but the greatest respect" for the late Raiders owner. Tennessee Volunteers On November 28, 2008, multiple media outlets reported that Kiffin would be the next head football coach for the University of Tennessee Volunteers in 2009, replacing head coach Phillip Fulmer, who was fired. Tennessee formally introduced Kiffin as the school's 21st head football coach on December 1, 2008 in a 2:00 p.m. news conference. At the age of 33, Kiffin was hired by Tennessee and became the youngest active head coach in Division I FBS, surpassing Northwestern's Pat Fitzgerald. Kiffin signed a memorandum of understanding with the University of Tennessee on November 30, 2008. The deal included $2 million in 2009, with additional performance bonuses, including a $300,000 bonus if Tennessee was to compete for the national championship. His salary was set to increase over the six-year-deal, reaching a high of $2.75 million in 2014. The average salary of the deal was $2.375 million. If Kiffin had been fired in 2009 or 2010, the school would have to pay him $7.5 million under a buyout clause; after the 2012 season, the buyout clause decreased to $5 million. Kiffin's contract stated that if he resigned, he would have to pay UT $1 million in 2009, with the sum decreasing by $200,000 each year of his contract. Kiffin led the Vols to a 7–6 record in 2009, an improvement from their 5–7 record in 2008. The Vols increased their offensive output by more than 60 percent in 2009 with Kiffin calling the offensive plays. Highlights included wins over South Carolina, Georgia, and Kentucky. 
However, the season was marred by losses to UCLA, Florida, Auburn, Alabama, and Ole Miss, as well as a 23-point blow-out loss to Virginia Tech in the Chick-fil-A Bowl. After one season as coach, Kiffin left the Vols during the 2010 recruiting season to accept the head coaching job at the University of Southern California after Pete Carroll left to go to the Seattle Seahawks. Remarks and accusations On February 5, 2009, during a Tennessee booster breakfast at the Knoxville Convention Center, Kiffin accused Urban Meyer, then head coach of the Florida Gators and subsequently head coach of the Ohio State Buckeyes, of violating NCAA recruiting rules. While Kiffin accused Meyer of violating NCAA rules, he incidentally violated a Southeastern Conference rule that prevented coaches from mentioning a recruit by name. Kiffin's accusations against Meyer were mistaken. Southeastern Conference commissioner Mike Slive issued a public reprimand to Kiffin because of the comment. In addition to the SEC's public reprimand, Florida Athletic Director Jeremy Foley demanded a public apology from Kiffin. Kiffin issued a public apology one day after making the comment. In a statement released by the University of Tennessee, Kiffin wrote, "In my enthusiasm for our recruiting class, I made some statements that were meant solely to excite those at the breakfast. If I offended anyone at the University of Florida, including Mr. Foley and Urban Meyer, I sincerely apologize. That was not my intention." Kiffin generated further controversy when he told wide receiver recruit Alshon Jeffery that if Jeffery chose the Gamecocks, "he would end up pumping gas for the rest of his life like all the other players from that state who had gone to South Carolina." Jeffery went on to sign with the University of South Carolina Gamecocks, became the second round, 45th pick overall in the 2012 NFL Draft by the Chicago Bears, and, subsequently, a member of the Super Bowl LII champions, the Philadelphia Eagles. Kiffin denied making the statement, however the incident was corroborated by Jeffery's high school coach Walter Wilson, who was listening to Kiffin's remarks on speakerphone. Departure from Tennessee Kiffin's departure for USC in 2010 after just one season as head coach of the Volunteers upset many students and fans of the University of Tennessee. When Tennessee athletic director Mike Hamilton was asked for an assessment of Kiffin's tenure coaching the Volunteers, he responded with just one word: "Brief." Hundreds of students rioted on campus at the news of Kiffin's departure. Knoxville police and fire department were brought in after students blocked the exit from the Neyland Thompson Sports Center and started several small fires. USC Trojans On January 12, 2010, Kiffin returned to USC to become the Trojans' head coach. This came following Pete Carroll's departure from USC to become the head coach of the Seattle Seahawks. In June 2010, after a prolonged four-year investigation into whether former USC running back Reggie Bush and his family had accepted financial benefits and housing from two sports agents in San Diego while he was a student athlete at USC, the NCAA imposed sanctions against the Trojan football program for a "lack of institutional control," including a two-year postseason ban, the loss of 30 scholarships over three years, and the vacation of all wins in which Bush participated as an "ineligible" player, including the 2005 Orange Bowl, in which the Trojans won the BCS National Championship. 
The severity of these sanctions has been criticized by some NCAA football writers, including ESPN's Ted Miller, who wrote, "It's become an accepted fact among informed college football observers that the NCAA sanctions against USC were a travesty of justice, and the NCAA's refusal to revisit that travesty are a massive act of cowardice on the part of the organization." Kiffin's tenure at USC was widely considered a disappointment. Questionable coaching calls and the restrictions of sanctions contributed to a sense of missed opportunity for Kiffin and the Trojans. 2010 season In 2010, his first season at USC, Kiffin's Trojans team finished the season with an 8–5 record but were ineligible for post-season play due to the NCAA sanctions. After the NCAA issued a guideline allowing current USC juniors and seniors to automatically transfer from USC without having to sit out a year, several USC players left before the start of the 2010 season, including Malik Jackson and Byron Moore to Tennessee, Travon Patterson to Colorado, D.J. Shoemate to Connecticut, Uona Kaveinga to BYU, and Blake Ayles to Miami, among others. Seantrel Henderson, who had signed a letter of intent to USC, was granted a release by Kiffin and immediately enrolled at Miami. Both Kiffin and former head coach Pete Carroll publicly referred to this NCAA-transfer exception as "free agency" because it allowed current USC players to be targeted for transfer opportunities and granted them immediate eligibility at their transfer destination. USC played the 2010 season with just 71 scholarship players, some of whom were redshirt candidates who did not play, instead of the normal NCAA allowance of 85 scholarship players. Season highlights included a 48–14 win over the California Golden Bears in which quarterback Matt Barkley tied the USC record for touchdown passes in a game by completing five in just the first half to put the Trojans up 42–0 at halftime. After losing to rival Notre Dame for the first time in eight years, USC bounced back to close their season with a win over cross-town rival UCLA to retain the Victory Bell. Quarterback Matt Barkley returned after missing the previous week and threw one of the team's two touchdown passes. Allen Bradford led the Trojans by gaining 212 yards rushing and catching a 47-yard touchdown pass. 2011 season In 2011, Kiffin coached the Trojans to a 10–2 record (7–2 in the Pac-12), despite being ineligible for post-season play for the second consecutive season. On May 26, 2011, the NCAA's Appeals Committee upheld the sanctions against USC, after ruling that the use of precedent was not allowed under NCAA Bylaws, so the USC football team could not participate in the Pac-12 Football Championship Game (although they held the best record in the South division) or play in a bowl game during the 2011–12 season. The BCS announced June 6, 2011, that it had stripped USC of the 2004 title, though USC still retains the 2003 and 2004 AP National Championships. Season highlights included road wins against the California Golden Bears, Notre Dame Fighting Irish, and Oregon Ducks. Kiffin's Trojans lost in triple overtime to the Stanford Cardinal, who were led by quarterback Andrew Luck, but they bounced back by winning their last four games and defeating the UCLA Bruins 50–0 at the Los Angeles Memorial Coliseum, which extended the Trojans' victory streak against the Bruins to five. 
USC ended the season with two thousand-yard receivers (Robert Woods and Marqise Lee), a thousand-yard rusher (Curtis McNeal), and a 3,000-yard passer (Matt Barkley) for the first time since the 2005 season, when Kiffin was the offensive coordinator. 2012 season Kiffin for the first time became a voting member of the USA Today Coaches' Poll, but he resigned after just one vote amidst controversy over his preseason selection of USC as No. 1. After being informed that Arizona coach Rich Rodriguez had voted the Trojans as the top team, Kiffin told reporters, "I would not vote USC No. 1, I can tell you that much." However, USA Today, citing the need to "protect the poll's integrity", revealed that Kiffin had voted his team for the top spot. Kiffin apologized and explained that his comments were from the perspective of an opposing coach voting for USC. The Trojans finished the season with a 7–6 record overall and a 5–4 record in Pac-12 conference play. The Trojans were ranked #1 in both major polls at the start of the season, but a lackluster season (including a .500 record in conference play and a loss to archrival UCLA) left them unranked by the end of the season. Prior to 2012, the last time a team that was the pre-season ranked #1 finished the season unranked was USC in 1963. 2013 season and firing The Trojans lost their first two conference games of the 2013 season against Washington State and Arizona State, making Kiffin's record 4–7 in his last eleven games. During the Washington State opener, USC fans began filling the Coliseum with boos and, late in the game, chants to “fire Kiffin.” On September 28, 2013, after the 62–41 loss to Arizona State, USC Athletics Director Pat Haden fired Kiffin hours after the game, when the team arrived back in Los Angeles at 3 a.m. Kiffin was called off the team bus that was preparing to head back to campus from Los Angeles International Airport and taken to a small room inside the terminal where Haden told Kiffin he was being dismissed. After the meeting, Haden rejoined the team bus, and they headed back to campus with Kiffin’s bags, leaving Kiffin behind at the airport. Haden supposedly met with USC president Max Nikias in the third quarter and they decided Kiffin should be terminated. Haden formally announced the decision to dismiss Kiffin the next day. Assistant coach Ed Orgeron took over for Kiffin and led the team to a 6–2 finish, including an upset win against Stanford at the Coliseum. USC won the 2013 Las Vegas Bowl under interim head coach Clay Helton against Fresno State. Former USC associate head coach and Washington head coach Steve Sarkisian was hired by Haden following the season. Alabama Crimson Tide In December 2013, Kiffin spent eight days in Tuscaloosa, Alabama reviewing the Alabama Crimson Tide football team's offense. On January 9, 2014, after Michigan hired Alabama offensive coordinator Doug Nussmeier, Kiffin interviewed for the vacant coordinator job. Kiffin was offered the job as offensive coordinator at Alabama and accepted on January 10. In 2014, Kiffin was a finalist for the Broyles Award, given annually to the nation's top college football assistant coach. On January 2, 2017, 3 weeks after having accepted the head coaching job at Florida Atlantic, but electing to remain as the Alabama Offensive Coordinator through the playoffs, Kiffin was instead relieved of his duties as OC. 
He was replaced by another former USC head coach and his successor at that job, Steve Sarkisian, for the 2017 College Football Playoff National Championship, a 35–31 loss against Clemson, and for the upcoming season.
Florida Atlantic Owls
On December 12, 2016, Kiffin accepted the head coaching position at Florida Atlantic University. After a 1–3 start, the FAU Owls reeled off ten straight wins, culminating in the Conference USA (C-USA) football championship against the University of North Texas, 41–17, on their home field. FAU was slated to play the University of Akron in the Boca Raton Bowl on December 19, 2017. Before the game against Akron, it was announced on ESPN that Kiffin and FAU had agreed to a new deal that would keep him on for the next ten years, through the 2027 season. John Kelly, the president of FAU, was quoted in the article as saying, "This is further proof of FAU's unbridled ambition . . . I thought we could be a Top-25 program and we need a coach who can do that, he's on the verge of doing that. We're obviously looking toward keeping Lane long term." Kiffin led the Owls to a 50–3 victory over Akron in the Boca Raton Bowl, culminating in an 11–3 season in his first year. The 11–3 season was the first season over .500 for the FAU Owls since 2008, and the first time they had achieved more than ten wins while competing in Division I football. The 2017 season was only the fourth time in school history that FAU had a winning record in Division I football. In 2019, Kiffin once again led FAU to a 10-win season and a second C-USA championship.
Ole Miss Rebels
On December 6, 2019, it was reported that Kiffin was close to accepting the head coaching position at Ole Miss. On December 7, following FAU's blowout win in the C-USA championship game over UAB (49–6), it was confirmed by Ole Miss AD Keith Carter that Kiffin would be the next head coach at Ole Miss. On December 9, Kiffin was officially introduced as the 39th head football coach at Ole Miss. Kiffin's four-year contract totaled $16.2 million and would pay him $3.9 million in 2020, with a $100,000 yearly increment thereafter. Kiffin won his first game at Ole Miss in the second game of the 2020 season at Kentucky, a 42–41 win in overtime. In Kiffin's first season Ole Miss finished 5–5, with a 4–5 record in the SEC, leading to a 2021 Outback Bowl invitation. After winning the Outback Bowl, Kiffin was given a one-year contract extension by Ole Miss (the maximum extension Ole Miss could offer, since Mississippi state law only allows four-year total contracts for university employees); however, financial details were not immediately released. New contract details were released in August 2021 and amounted to $21 million in base pay through 2024, with $4.5 million paid out in 2021 and over $5 million in each of the remaining three seasons. Prior to the 2021 season, Kiffin led Ole Miss to become the first NCAA football team 100% vaccinated against COVID-19. This was particularly notable because, at the time of Ole Miss' announcement, the state of Mississippi ranked 49th out of the 50 states in COVID-19 vaccination rate. Kiffin tested positive for COVID-19 two days before the 2021 Ole Miss opener in the Chick-fil-A Kickoff Game against Louisville and did not make the trip with the team. With a 31–21 victory in the Egg Bowl, the Rebels finished the 2021 regular season 10–2. This was the first time in Ole Miss school history that they finished the regular season with 10 wins.
Personal life
Lane is the son of Monte Kiffin, a longtime defensive coordinator in the National Football League, most notably for the Tampa Bay Buccaneers. In 2013, Monte resigned as the defensive coordinator on Lane's staff at USC and was hired for the same position with the Dallas Cowboys. Lane and his ex-wife Layla, who is a University of Florida alumna, have three children. Kiffin's brother, Chris, was a defensive lineman at Colorado State University and is the current defensive line coach for the Cleveland Browns. Kiffin's former father-in-law, John Reaves, was a starting NFL and USFL quarterback who played college football for the Florida Gators. On February 28, 2016, Lane and Layla announced that they were separating and had mutually decided to divorce.
Head coaching record
NFL
College
‡ Ineligible for Pac-12 title, bowl game and Coaches' Poll due to NCAA sanctions.
* Kiffin was fired on September 29, 2013.
** Departed Florida Atlantic for Ole Miss before bowl game.
References
External links
Ole Miss profile
1975 births
Living people
American football quarterbacks
Alabama Crimson Tide football coaches
Colorado State Rams football coaches
Fresno State Bulldogs football coaches
Fresno State Bulldogs football players
Jacksonville Jaguars coaches
Oakland Raiders head coaches
Tennessee Volunteers football coaches
USC Trojans football coaches
Florida Atlantic Owls football coaches
Ole Miss Rebels football coaches
Colorado State University alumni
Sportspeople from Lincoln, Nebraska
Sportspeople from Bloomington, Minnesota
1356771
https://en.wikipedia.org/wiki/Outline%20of%20video%20games
Outline of video games
The following outline is provided as an overview of and topical guide to video games: Video game – an electronic game that involves interaction with a user interface to generate visual feedback on a video device. The word video in video game traditionally referred to a raster display device, but following popularization of the term "video game", it now implies any type of display device. Video game genres Video game genres (list) – categories of video games based on their gameplay interaction and set of gameplay challenges, rather than visual or narrative differences. Action game Action game – a video game genre that emphasizes physical challenges, including hand–eye coordination and reaction-time. Beat 'em up – a video game genre featuring melee combat between the protagonist and a large number of underpowered antagonists. Fighting game – a genre where the player controls an on-screen character and engages in close combat with an opponent. Platform game – requires the player to control a character to jump to and from suspended platforms or over obstacles (jumping puzzles). Shooter game – wide subgenre that focuses on using some sort of weapon often testing the player's speed and reaction time. First-person shooter – a video game genre that centers the gameplay on gun and projectile weapon-based combat through first-person perspective; i.e., the player experiences the action through the eyes of a protagonist. Light gun shooter – a genre in which the primary design element is aiming and shooting with a gun-shaped controller. Shoot 'em up – a genre where the player controls a lone character, often in a spacecraft or aircraft, shooting large numbers of enemies while dodging their attacks. Third-person shooter – a genre of 3D action games in which the player character is visible on-screen, and the gameplay consists primarily of shooting. Hero shooter – multiplayer first- or third-person shooters that strongly encourage cooperative play between players on a single team through the use of pre-designed "hero" characters that each possess unique attributes, skills, weapons, and other activated abilities. Tactical shooter – includes both first-person shooters and third-person shooters and simulates realistic combat, thus making tactics and caution more important than quick reflexes in other action games. Survival game – a genre that is set in a hostile, intense, open-world environment, where players generally begin with minimal equipment and are required to collect resources, craft tools, weapons, and shelter, and survive as long as possible. Battle royale game – a subgenre that blends the survival, exploration, and scavenging elements of a survival game with last-man-standing gameplay. Action-adventure game Action-adventure game – a video game genre that combines elements of both the adventure game and the action game genres. Open world – a type of video game level design where a player can roam freely through a virtual world and is given considerable freedom in choosing how to approach objectives. Grand Theft Auto clone – a type of open world design where the player is given a simulated environment and, optionally, exterminate the local inhabitants. See also Grand Theft Auto. Metroidvania – a type of level design where a player is in a more restrictive environment and tasked with an end goal objective, usually with an emphasis on gathering powering ups from exploring the environment. Stealth game – a type of game where the objective is to remain undetected from hostile opponents. 
Survival horror – a type of game where fear is a primary factor in play, usually by restricting useful or power-up items in a dark, claustrophobic environment. Adventure game Adventure game – a video game in which the player assumes the role of protagonist in an interactive story driven by exploration and puzzle-solving instead of physical challenge. Graphic adventure game – any adventure game that relies on graphical imagery, rather than being primarily text-based. Escape the room – a subgenre of adventure game which requires a player to escape from imprisonment by exploiting their surroundings. Interactive fiction – games in which players input text commands to control characters and influence the environment. In some interactive fiction, text descriptions are the primary or only way the simulated environment is communicated to the player. Interactive movie – a type of video game that features highly cinematic presentation and heavy use of scripting, often through the use of full motion video of either animated or live-action footage. Visual novel – a type of adventure game featuring text accompanied by mostly static graphics, usually with anime-style art, or occasionally live-action stills or video footage Role-playing video game Role-playing video game (RPG): a video game genre with origins in pen-and-paper role-playing games such as Dungeons & Dragons, using much of the same terminology, settings and game mechanics. The player in RPGs controls one character, or several adventuring party members, fulfilling one or many quests. Action role-playing game – a loosely defined subgenre of role-playing video games that incorporate elements of action or action-adventure games, emphasizing real-time action where the player has direct control over characters, instead of turn-based or menu-based combat. Hack and slash – a type of gameplay that emphasizes combat. Role-playing shooter – a subgenre, featuring elements of both shooter games and action RPGs. Dungeon crawl – a type of scenario in fantasy role-playing games in which heroes navigate a labyrinthine environment, battling various monsters, and looting any treasure they may find. Roguelike – a subgenre of role-playing video games, characterized by randomization for replayability, permanent death, and turn-based movement. MUD – a multiplayer real-time virtual world, with the term usually referring to text-based instances of these. Massively multiplayer online role-playing game – a genre of role-playing video games in which a very large number of players interact with one another within a persistent virtual world. Tactical role-playing game – a subgenre of role-playing video games that incorporate elements of strategy video games Simulation video game Simulation video game – a diverse super-category of video games, generally designed to closely simulate aspects of a real or fictional reality. Construction and management simulation – a type of simulation game in which players build, expand or manage fictional communities or projects with limited resources. Business simulation game – games that focus on the management of economic processes, usually in the form of a business. City-building game – games in which players act as the overall planner and leader of a city, looking down on it from above, and being responsible for its growth and management. Government simulation game – a game genre that attempts to simulate the government and politics of all or part of a nation. 
Life simulation game – simulation video games in which the player lives or controls one or more virtual lifeforms. Digital pet – a type of artificial human companion, usually kept for companionship or enjoyment. Digital pets are distinct in that they have no concrete physical form other than the computer they run on. God game – a type of life simulation game that casts the player in the position of controlling the game on a large scale, as an entity with divine/supernatural powers, as a great leader, or with no specified character, and places them in charge of a game setting containing autonomous characters to guard and influence. Social simulation game – a subgenre of life simulation games that explores social interactions between multiple artificial lives. Dating sim – a subgenre of social simulation games that focuses on romantic relationships. Sports game – games that simulate the practice of traditional sports. Strategy video game Strategy video game – a genre that emphasizes skillful thinking and planning to achieve victory. They emphasize strategic, tactical, and sometimes logistical challenges. Many games also offer economic challenges and exploration. 4X game – a genre in which players control an empire and "explore, expand, exploit, and exterminate". Artillery game – the generic name for either early two- or three-player (usually turn-based) computer games involving tanks fighting each other in combat or similar derivative games. Real-time strategy (RTS) – a subgenre of strategy video game which does not progress incrementally in turns. Tower defense – a genre where the goal of the game is to try to stop enemies from crossing a map by building towers which shoot at them as they pass. Real-time tactics – a subgenre of tactical wargames played in real-time simulating the considerations and circumstances of operational warfare and military tactics, differentiated from real-time strategy gameplay by the lack of resource micromanagement and base or unit building, as well as the greater importance of individual units and a focus on complex battlefield tactics. Multiplayer online battle arena (MOBA) – a hybrid of real-time strategy, role-playing and action video games where the objective is for the player's team to destroy the opposing side's main structure with the help of periodically spawned computer-controlled units that march towards the enemy's main structure. Tactical role-playing game – a type of video game which incorporates elements of traditional role-playing video games and strategy games. Turn-based strategy – a strategy game (usually some type of wargame, especially a strategic-level wargame) where players take turns when playing. Turn-based tactics – a genre of strategy video games that through stop-action simulates the considerations and circumstances of operational warfare and military tactics in generally small-scale confrontations as opposed to more strategic considerations of turn-based strategy (TBS) games. Wargame – a subgenre that emphasize strategic or tactical warfare on a map, as well as historical (or near-historical) accuracy. Vehicle simulation game Vehicle simulation game – games in which the objective is to operate a manual or motor powered transport. Flight simulator – a game where flying vehicles is the primary mode of operation. Amateur flight simulation – an aircraft trainer with realistic controls. Combat flight simulator – a type of game where players simulate the handling of military aircraft and their operations. 
Racing game – a type of game where the player is in a racing competition. Driving simulator – a type of game where the player is tasked with using a vehicle as if it were real. Sim racing – a type of game where the player is tasked with using a realistic vehicle inside a racing competition. Space flight simulator game – a type of game meant to emulate the experience of space flight. Submarine simulator – a type of game where the player commands a submarine. Train simulator – a simulation of rail transport operations. Vehicular combat game – a type of game where vehicles with weapons are placed inside of an arena to battle. Other genres Adult game – a game which has significant sexual content (like an adult movie), and are therefore intended for an adult audience. Eroge – a Japanese video game that features erotic content, usually in the form of anime-style artwork. Advergame – the practice of using video games to advertise a product, organization or viewpoint. Art game – a video game that is designed in such a way as to emphasize art or whose structure is intended to produce some kind of reaction in its audience. Audio game – an interactive electronic game wherein the only feedback device is audible rather than visual. Christian video game – any video game centered around Christianity or Christian themes. Educational game – video games that have been specifically designed to teach people about a certain subject, expand concepts, reinforce development, understand an historical event or culture, or assist them in learning a skill as they play. Exergaming – video games that are also a form of exercise and rely on technology that tracks body movement or reaction. Maze video games – video game genre description first used by journalists during the 1980s to describe any game in which the entire playing field was a maze. Music video game – a video game where the gameplay is meaningfully and often almost entirely oriented around the player's interactions with a musical score or individual songs. Rhythm game – games that challenge the player's sense of rhythm and focus on dance or the simulated performance of musical instruments, and require players to press buttons in a sequence dictated on the screen. Party video games: games commonly designed as a collection of simple minigames, designed to be intuitive and easy to control and to be played in multiplayer. Puzzle video game – video games that emphasize puzzle solving, including logic, strategy, pattern recognition, sequence solving, and word completion. Serious game – a video game designed for a primary purpose other than pure entertainment, generally referring to products used by industries like defense, education, scientific exploration, health care, emergency management, city planning, engineering, religion, and politics. Other types of video games Casual game – a game of any genre that is targeted for a mass audience of casual gamers. Casual games typically have simple rules and require no long-term time commitment or special skills to play. Indie game – games created by individuals or small teams without video game publisher financial support as well as often focus on innovation and rely on digital distribution. Minigame – a short or more simplistic video game often contained within another video game. Non-game – software that lies on the border between video games, toys and applications, with the main difference between non-games and traditional video games being the apparent lack of goals, objectives and challenges. 
Programming game – a game where the player has no direct influence on the course of the game, instead a computer program or script is written that controls the actions of the characters. Video game hardware platforms Arcade game – a coin-operated entertainment machine, usually installed in public businesses such as restaurants, bars, and amusement arcades. Arcade games include video games, pinball machines, electro-mechanical games, redemption games, and merchandisers (such as claw cranes). Video game arcade cabinet – the housing within which a video arcade game's hardware resides. List of arcade games List of pinball machines Video game console – a consumer entertainment device consisting of a customized computer system designed to run video games. Console game – a video game played on a video game console. List of video game consoles List of best-selling game consoles Dedicated console – a video game console that is dedicated to a built in game or games, and is not equipped for additional games, via cartridges or other media. Console wars – a term used to refer to periods of intense competition for market share between video game console manufacturers. Handheld game console – a lightweight, portable consumer electronic device with a built-in screen, game controls and speakers. Handheld video game – a video game played on a handheld game console. Mobile game – a video game played on a mobile phone, smartphone, PDA, tablet computer or portable media player. Online game – a game played over some form of computer network. Browser game – a video game that is played over the Internet using a web browser. Massively multiplayer online game (MMO): a multiplayer video game which is capable of supporting hundreds or thousands of players simultaneously. PC game – a video game played on a personal computer, rather than on a video game console or arcade machine. 
Gameplay Gameplay Gamer Single-player Multiplayer game Cooperative gameplay Cheating in online games Cheating Difficulty level Gaming computer Speedrun Ludonarrative Strategy guide Specific video games Lists of video games List of best-selling video games List of best-selling video game franchises List of video games considered the best List of video games notable for negative reception Controversial video game List of cult video games List of arcade games List of video games based on comics List of video games based on DC Comics List of video games based on Marvel Comics List of video games based on anime or manga List of video games based on cartoons List of Disney video games List of Hanna-Barbera video games List of Looney Tunes video games Video game industry Video game industry List of video game companies List of commercial failures in video games List of video game developers List of video game publishers List of indie game developers List of video game industry people List of video game franchises Game studies Video game packaging Nintendo Seal of Quality Video game award Video game journalism Video games by country Africa Video games in Kenya Video games in Nigeria Video games in South Africa Asia Video games in Bangladesh Video games in China Video games in India Video games in Japan Video games in Malaysia Video games in Russia Video games in South Korea Video games in Thailand Europe Video games in Belgium Video games in France Video games in Germany Video games in Lithuania Video games in the Czech Republic Video games in the Netherlands Video games in the Republic of Ireland Video games in the United Kingdom Video games in Ukraine North America Video games in Canada Video games in the United States Oceania Video games in Australia Video games in New Zealand South America Video games in Brazil Video games in Colombia Video game development Video game development – the software development process by which a video game is developed and video game developer is a software developer (a business or an individual) that creates video games. Video game developer – a software developer (a business or an individual) that creates video games. Independent video game development – the process of creating indie video games without the financial support of a video game publisher, usually designed by an individual or a small team. Game art design – a process of creating 2D and 3D game art for a video game, such as concept art, item sprites, character models, etc. Video game artists: an artist who creates art for one or more types of games and are responsible for all of the aspects of game development that call for visual art. Video game graphics – variety of individual computer graphic techniques that have evolved over time, primarily due to hardware advances and restrictions. Video game art – the use of patched or modified video games or the repurposing of existing games or game structures. Concept art – a form of illustration where the main goal is to convey a visual representation of a design, idea, and/or mood before it is put into the final product. Procedural texture – a form of illustration where the main goal is to convey a visual representation of a design, idea, and/or mood for use in films, video games, animation, or comic books before it is put into the final product. 2D computer graphics – computer-based generation of digital images—mostly from two-dimensional models (such as 2D geometric models, text, and digital images) and by techniques specific to them. 
3D computer graphics – graphics that use a three-dimensional representation of geometric data (often Cartesian) that is stored in the computer for the purposes of performing calculations and rendering 2D images. Game modification – are made by the general public or a developer, and can be entirely new games in themselves, but mods are not standalone software and require the user to have the original release in order to run. Game music – musical pieces or soundtracks and background musics found in video games ranging from a primitive synthesizer tune to an orchestral pieces and complex soundtracks. Game producer – the person in charge of overseeing development of a video game. Game programming – the programming of computer, console or arcade games. Game programmer – a software engineer, programmer, or computer scientist who primarily develops codebase for video games or related software, such as game development tools. Game engine – a system designed for the creation and development of video games. Game Artificial intelligence – techniques used in computer and video games to produce the illusion of intelligence in the behavior of non-player characters (NPCs). Game publisher – a company that publishes video games that they have either developed internally or have had developed by a video game developer. Game studies – the discipline of studying games, their design, players, and their role in society and culture more broadly. Game testing – a software testing process for quality control of video games, primary function being the discovery and documentation of software defects (aka bugs). Game journalism – a branch of journalism concerned with the reporting and discussion of video games. Level design – a discipline of game development involving creation of video game levels—locales, stages, or missions. Level editor (tool) – a software tool used to design levels, maps, campaigns, etc. and virtual worlds for a video game. Video game design – the process of designing the content and rules of a game in the pre-production stage and design of gameplay, environment, storyline, and characters during production stage. 
Other concepts Interaction design Expansion pack Video game remake Fan translation Fangame Abandonware XGameStation History of video games History of video games By period Early history of video games First video game Pong History of first generation video game consoles (1972–1977) History of second generation video game consoles (1976–1984) History of third generation video game consoles (1983–1992) History of fourth generation video game consoles (1987–1996) History of fifth generation video game consoles (1993–2006) History of sixth generation video game consoles (1998–) History of seventh generation video game consoles (2005–) History of eighth generation video game consoles (2011–) By decade 1980s in video games 1990s in video games 2000s in video games 2010s in video games By year Prior to 1972 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 By platform History of arcade games Golden age of video arcade games Timeline of video arcade game history Chronology of console role-playing games History of video game consoles Video game crash of 1983 History of personal computer games By genre History of action games History of action-adventure games History of adventure games History of role-playing video games History of sports games History of strategy video games Culture of video games Video game culture Video game art Video game collecting Game studies Gamers Outreach Foundation List of books about video games List of books based on video games List of films based on video games List of television programs based on video games List of anime based on video games List of video game webcomics List of video game websites Gender representation in video games Video game magazines Psychology Video game addiction Video game behavioral effects Gamers Outreach Foundation People influential in video games List of video game industry people List of electronic sports players See also Game classification ROM Texture artist Unlockable game Video game console emulator Video game etiquette Non-game Strategy guide Lists List of home computers by video hardware References External links The Virtual Museum of Computing (VMoC) Interactive video game history timeline Video Game Museum in Paris outline Video games Video games Video games
6593158
https://en.wikipedia.org/wiki/ROLM
ROLM
ROLM Corporation was a technology company founded in Silicon Valley in 1969. IBM Corp. partnered with the company, and ROLM Mil-Spec was sold to Loral Corporation and later, in 1996, to Lockheed Martin as Tactical Defense Systems. IBM's ROLM division was later half sold to Siemens AG in 1989, whereupon the manufacturing and development became wholly owned by Siemens and called ROLM Systems, while marketing and service became a joint venture of IBM with Siemens, called ROLM Company. After nearly 30 years, phone products with the name "Rolm" were discontinued in the late 1990s, as sales dropped in markets dominated by newer technology from other products or other companies.
Products
The ROLM corporation had two distinct operations, depending on the application of the associated hardware, with a cross-blending of technologies from one division to the other.
Military hardware
The company first produced rugged mil-spec (military specification) computer systems running Data General software. The company divisionalized in 1978, becoming both Rolm Mil-Spec Computers and Rolm Telecom. The Telecom division spent much of the considerable profit realized by the Mil-Spec Computer division over the ensuing 1980s trying to penetrate the convoluted phone-interconnect business. The first computer system was the 1601 Ruggednova Processor, announced at the 1969 Fall Joint Computer Conference with deliveries beginning in March 1970. In the military it was designated the AN/UYK-12(V). It was a licensed implementation of the Data General Nova architecture. It consisted of a 5-board processor card set and core or read-only MOS memory in 4K increments up to 32K, in a standard ATR box which contained the power supply and 14 card slots. The 1601 was a popular machine with RCA TIPI. The processor was developed into a smaller-form card set as the ALR-62 and ultimately into a single-card version as the ALR-46A, both sold to Dalmo Victor. The Models 1602 and 1603 soon followed with greater capability and more memory; the ROLM 1602 was used on the AN/MLQ-34 TACJAM jamming system as the primary system computer and controller. The newer 1606 was leveraged into the Raytheon (Goleta) AN/SLQ-32 naval shipboard electronic warfare system for signal identification purposes and into units sold to Singer Librascope. Bob Maxfield and Alan Foster were responsible for the design of the early processor chassis until Art Wellman from Sylvania was brought in to take the computers to their next level mechanically. Both half-ATR and full-ATR-sized chassis were developed for a wide array of defense applications. The 1602B and 2150 I/O boxes were developed and standardized expressly for the Army ILS program and were top sellers at the time. The 1666 was leveraged into the GLCM (Ground Launched Cruise Missile) and SLCM (Surface Launched Cruise Missile) hardware for McDonnell Douglas (MDAC), St. Louis, and the follow-on 1666B was incorporated into MDAC's Tomahawk Weapons Control System (TWCS). Although most products were developed with Rolm's own money, the substantial increase in military sales in the 1980s caused the loss of the commercial exemption enjoyed in the early years. This required all product pricing to be negotiated directly with the DoD, so margins eroded somewhat. Some 32-bit machines (versus 16-bit) were developed into the Hawk/32 computer and sold well.
Engineering in the latter years scrambled to come up with a new product line as the military was enticed into ruggedized commercial computer systems by Rugged Digital, and Rolm worked on a militarized version of Mercury Computer's Digital Signal Processor. Brisk sales of the DG-based computers continued up to the time the ROLM Mil-Spec Computer division was closed in June 1998. Commercial hardware The Telecom division leveraged the 1603 processor into the heart of its original CBX. Over time, the company began to focus on digital voice, and produced some of the earliest examples of all-digital voice equipment, including Computerized Branch Exchanges (CBXs) and digital phones. Two of the most popular telecom systems were the ROLM CBX and ROLM Redwood (PBX and Key Systems Unit (KSU) models, respectively). The CBX was meant to directly compete with Northern Telecom's SL-1, AT&T Dimension telephone systems and other computerized digital-voice systems being developed at the time. By 1980, ROLM had shot past AT&T in number of systems deployed to become the #2 PBX in North America. The Redwood, often called the "Deadwood" by many ROLM techs because it never caught on, was intended to compete with the Nortel Norstar Key System. When Siemens bought ROLM from IBM and introduced their "newer" models, which were renamed Siemens switches, the early ROLM phone switches were widely pressed into service as old technology (though a number of 8000 and 9751-9005 CBXs remain online at some companies), but the digital phone handsets were quite valuable for those expanding their phone networks. The later ROLM 9200 (actually a Siemens HCM200 Hybrid system renamed) was more competition for the leading Key Systems as the 9200 had intensive Least Call Routing software, which the Redwood did not. The company also produced one of the first commercially successful voicemail systems, PhoneMail. Digital ROLM telephones, called ROLMphones, were unique from other telephones in many ways, one of which was a lack of a physical switchhook button. Instead, the handset contains a small magnet which triggers a switch in the phone base. The opening or closing of this switch lets the phone and system know if the phone is on hook (not in use) or off-hook (in use). History The company name "ROLM" was formed from the first letters of the founders names: Gene Richeson, Ken Oshman, Walter Loewenstern, and Robert Maxfield. The four men had studied electrical engineering at Rice University and earned graduate degrees at Stanford University. At Rice, Oshman and Loewenstern were members of Wiess College. Not an original founder, Leo Chamberlain was hired and became very much the soul of ROLM, advancing progressive workplace ideas such as GPW (Great Place to Work). The Old Ironsides Drive campus (ROLM Campus-Santa Clara, CA) was equipped with a swimming pool, openspace park areas, a cafeteria and recreation center. ROLM originally made flight computers for the military and heavy commercial industries such as oil exploration (Halliburton). Beginning in the early 1970s, International Paper Company bought a significant number of the 1602 series computers. These became the environmentally-hardened base for that company's in-house-developed process control system, which informally became known as the dual-ROLM. 
Later, in an attempt at diversification, ROLM themselves branched off into energy management by buying a company producing an early version of such a system and the telecom industry by designing the CBX, internally running a 1603 computer. It quickly outsold AT&T, who at the time had not come out with a digital PBX, and became #2 behind the Nortel SL-1 switch by 1980. At one point, ROLM was poised to overtake Nortel as the leader in PBX sales in North America. In May 1982 IBM purchased 15% of Rolm. IBM partnered with and in 1984 acquired ROLM Corporation in Santa Clara, California. The Mil-Spec Computer portion of the business was sold to Loral Corporation when IBM's Federal Systems Division was determined by government regulatory agencies to be already too large and dominant in military markets to retain ROLM Mil-Spec. Ultimately the Mil-Spec group ended up in the hands of Lockheed Martin as Tactical Defense Systems. In the phone markets, ROLM started to lose pace with Nortel, due to product issues, and they never recovered. The 9751 CBX, which has IBM's name on it, was initially a successful product; but when ISDN service became more affordable, IBM never really updated the 9751 to integrate correctly with ISDN. Nortel leaped ahead on that issue alone; AT&T (now Avaya) and others gained ground and started to overtake ROLM. IBM's ROLM division was later half sold to Siemens AG in 1989, whereupon manufacturing and development became wholly owned by Siemens and called ROLM Systems, while marketing and service became a joint venture of IBM with Siemens, called ROLM Company. By 1992, Siemens bought out IBM's share in ROLM and later changed its name to SiemensROLM Communications. However, the die was cast, and the downturn (across the telecom sector) continued. The ROLM name was eventually dropped in the late 1990s, though Siemens still retained copyright of it. Currently, secondary vendors offer support for ROLM phone systems, including repair services for broken phones and sales of refurbished units and Phonemail systems. Many systems have remained in use in large-scale universities, institutions and some corporations (Entergy, School of the Art Institute of Chicago, Huntsman, The Southern Company, the Santa Fe railroad (now part of BNSF, etc.), which were large-scale ROLM users from the early days. These older systems are still known for being very reliable, though Siemens no longer offers updates or new models of the CBX. Siemens still offers some technical support, however, most real ROLM systems quietly keep running, and unless they suffered a lightning strike or an IBM hard-drive failure (in the 9751s), no support was really needed. The Great America Campus was leveled and is now a parking lot for the adjacent Levi's Stadium. The River Oaks campus was leveled and is now high density housing. The Zanker campus remains as Broadcom. CBX (Computer Branch eXchange) technical information The original CBX were not named except for the software release (i.e., "Release 5" or "Release 6"), but then they changed with the release of the 7000 CBX, later becoming the 8000 (8000-8004 series, which had more memory and newer CPU cards as well as offering redundant critical electronics, power supplies, etc.). 
The models under the CBX and later CBXII product line were the VS (Very Small; one CPU and no redundant electronics and one half of a normal cabinet of the larger models), S (Small: similar to the VS but normal size cabinet and could be upgraded; offered power supply redundancy), M (redundant CPUs and electronics and power supply options) and L (multi-cabinet with total redundancy). The CBXII 8004 Mdump 18a was the last release of the original series. In the early 80s, ROLM introduced the CBXII VL9000 ('VL' for Very Large). Multi-node capable, it could have up to 15 nodes with over 20,000 stations. The nodes could be connected via T1s or fiber. The box and a lot of hardware was similar or the same as the 8000 series, but the main bus and software were totally different. The 9000 could offer many newer features the 8000 could not. The 1st 9000VL was going to Georgia Power/Southern Company but was delayed in its delivery, while SN002 was delivered to Gulf States Utilities HQ in Beaumont, Texas and installed by GSU's own telecom group ahead of SN001 being delivered to Georgia. GSU, now part of Entergy, retired the VL9000 in the late 1990s, and it was replaced with a SiemensROLM 9006i (actually sold by Siemens overseas 1st as the HiCoM (HCM) 300 and was nothing like a real ROLM). Georgia Power ran their VLCBX in tandem with an existing multi-cabinet 8000 and each extension had a switch to select either of the two CBXs in case of a malfunction until the reliability of the VL model was up to acceptable standards. NASA's Johnson Space Center (JSC) in Clear Lake, TX was the push behind the 9000 series, with JSC eventually having a 13 node 9000VLCBX on its campus (replaced by a Siemens 9006i and later a Siemens HiPath switch). The various models of IBM produced ROLM 9751 CBX are 10, 20, 40, 50 & 70. PhoneMail (succeeded by eXpressions470 in later VoIP offerings from Siemens but using the same command structure and female "Silicon Sally" voice). However, IBM did not keep up with telecom standards on the Central Office as well as it should have; which kept IBM/ROLM from delivering an ISDN PRI solution for the 9751 until late in the game. By then, Nortel, the old AT&T (later Lucent and now Avaya) as well as others had pulled ahead and ROLM never regained ground. ROLM 9751 The Model 10 cannot use Cornet hardware (RPDN card); CORNet is a proprietary networking software (an extension of ISDN PRI protocols) for Siemens PBXs and the original 9751-9005 model. Also the cabinet is a different design from the other models (the Model 20 through 70 use the same cabinet design, etc.). In the early 1990s, Siemens came out with new "9751-9006i" models called the Model 30 and Model 80, respectively. They were nothing like the original ROLM systems. The only devices that were kept from the older models were the RolmPhones and PhoneMail. The Mod 30/80 9006i series was a disaster for Siemens, and this caused a lot of old ROLM customers to jump ship to another vendor like Nortel or Avaya. The 9006i models were really HiCoM (HCM) 300 models sold overseas. Eventually, Siemens changed the name back to the HCM name, ending production in the late 1990s with Version 6.6 (original release was 6.1 or 9006 release 1). Further reading References NYTimes.com; September 27, 1984: "At ROLM, an Independent Style" NYTimes.com; December 14, 1988: "I.B.M. 
to Sell Rolm to Siemens" 1969 establishments in California 1998 disestablishments in California American companies established in 1969 American companies disestablished in 1998 Companies based in Santa Clara, California Computer companies established in 1969 Computer companies disestablished in 1998 Defunct companies based in the San Francisco Bay Area Defunct computer companies of the United States IBM acquisitions Siemens Technology companies based in the San Francisco Bay Area Technology companies established in 1969 Technology companies disestablished in 1998 Telecommunications companies established in 1969 Telecommunications companies disestablished in 1998 Telephony equipment
419925
https://en.wikipedia.org/wiki/UNIVAC%20490
UNIVAC 490
The UNIVAC 490 was a 30-bit word magnetic-core memory machine made by UNIVAC, with 16K or 32K words of memory and a 4.8 microsecond cycle time. Seymour Cray designed this system before he left UNIVAC to join the early Control Data Corporation. It was a commercial derivative of a computer Univac Federal Systems developed for the United States Navy. That system was the heart of the Naval Tactical Data System, which pioneered the use of shipboard computers for air defense. The military version went by a variety of names: UNIVAC 1232, AN/USQ-20, MIL-1206 and CP642.
Overview
At least 47 of these machines were made (serial numbers run from 101 to 147). Six were installed at NASA and played important roles in the Gemini and Apollo missions. The U490 had complete control of most or all of the data readout screens in Houston Mission Control. The USAF had two installed, as did Lockheed. Airlines using the 490 Real-Time system included Eastern and Northwest Orient, principally airline reservations systems at Eastern Air Lines (1963) and British European Airways (BEACON, 1964). Other commercial installations of the 490 Real-Time included two at Westinghouse, and one each at Alcoa, U.S. Steel, Bethlehem Steel and General Motors. The only surviving, nearly complete, original, civilian version of the 490 Real Time System is on display at System Source in Hunt Valley, Maryland. It has six banks of memory cores. System Source also has a nearly complete set of original documentation for the machine, including original blueprints and troubleshooting data. This includes the Boss and Wilen document. The standard operating system was REX (RealTime Exec), except at Eastern and B.E.A., where a custom operating system was developed for airline reservations (CONTORTS, CONTrol Of Real Time System). CONTORTS was the origin of Univac's subsequent real-time operating systems for the 494 (STARS), later converted to the 1100 Series (RTOS).
Architecture
The instruction word format:
f - Function code designator (6 bits)
j - Branch condition designator (3 bits)
k - Operand-interpretation designator (3 bits)
b - Operand address modification designator (3 bits)
y - Operand designator (15 bits)
Numbers were represented in ones' complement. The machine provided the programmer with the following registers:
Seven B-registers (address-modifying index registers), 15 bits each
One A-register (accumulator), 30 bits
One Q-register (auxiliary arithmetic register), 30 bits
Hardware
Construction (arithmetic unit only):
Type | Quantity
Diodes, all types | 37,543
Transistors, all types | 13,819
See also
AN/USQ-17
List of UNIVAC products
History of computing hardware
References
External links
UNIVAC 490 Real-Time System
UNIVAC 490 manuals
UNIVAC 490 at the System Source Computer Museum
490
4265743
https://en.wikipedia.org/wiki/ATI%20Avivo
ATI Avivo
ATI Avivo is a set of hardware and low level software features present on the ATI Radeon R520 family of GPUs and all later ATI Radeon products. ATI Avivo was designed to offload video decoding, encoding, and post-processing from a computer's CPU to a compatible GPU. ATI Avivo compatible GPUs have lower CPU usage when a player and decoder software that support ATI Avivo is used. ATI Avivo has been long superseded by Unified Video Decoder (UVD) and Video Coding Engine (VCE). Background The GPU wars between ATI and NVIDIA have resulted in GPUs with ever-increasing processing power since early 2000s. To parallel this increase in speed and power, both GPU makers needed to increase video quality as well, in 3D graphics applications the focus in increasing quality has mainly fallen on anti-aliasing and anisotropic filtering. However it has dawned upon both companies that video quality on the PC would need improvement as well and the current APIs provided by both companies have not seen many improvements over a few generations of GPUs. Therefore, ATI decided to revamp its GPU's video processing capability with ATI Avivo, in order to compete with NVIDIA PureVideo API. In the time of release of the latest generation Radeon HD series, the successor, the ATI Avivo HD was announced, and was presented on every Radeon HD 2600 and 2400 video cards to be available July, 2007 after NVIDIA announced similar hardware acceleration solution, PureVideo HD. In 2011 Avivo is renamed to AMD Media Codec Package, an optional component of the AMD Catalyst software. The last version is released in August 2012. As of 2013, the package is no longer offered by AMD. Features ATI Avivo During capturing, ATI Avivo amplifies the source, automatically adjust its brightness and contrast. ATI Avivo implements 12-bit transform to reduce data loss during conversion; it also utilizes motion adaptive 3D comb filter, automatic color control, automatic gain control, hardware noise reduction and edge enhancement technologies for better video playback quality. In decoding, the GPU core supports hardware decoding of H.264, VC-1, WMV9, and MPEG-2 videos to lower CPU utilization (the bitstream processing/entropy decoding still requires CPU processing). ATI Avivo supports vector adaptive de-interlacing and video scaling to reduce jaggies, and spatial/temporal dithering, which attempts to simulate 10-bit color quality on 8-bit and 6-bit displays during process stage. ATI Avivo HD The successor of ATI Avivo is the ATI Avivo HD, which consists of several parts: integrated 5.1 surround sound HDMI audio controller, dual integrated HDCP encryption key for each DVI port (to reduce license costs), the Theater 200 chip for VIVO capabilities, the Xilleon chip for TV overscan and underscan correction, the Theater 200 chip as well as the originally-presented ATI Avivo Video Converter. However, most of the important hardware decoding functions of ATI Avivo HD are provided by the accompanied Unified Video Decoder (UVD) and the Advanced Video Processor (AVP) which supports hardware decoding of H.264/AVC and VC-1 videos (and included bitstream processing/entropy decoding which was absent in last generation ATI Avivo). For MPEG-1, MPEG-2, and MPEG-4/DivX videos, motion compensation and iDCT (inverse discrete cosine transform) will be done instead. The AVP retrieves the video from memory; handles scaling, de-interlacing and colour correction; and writes it back to memory. 
The AVP also uses a 12-bit transform to reduce data loss during conversion, the same as the previous-generation ATI Avivo. HDMI supports the transfer of video together with 8-channel 96 kHz 24-bit digital audio (and optionally Dolby TrueHD and DTS-HD Master Audio streams for external decoding by AV receivers, since HDMI 1.3). Integration of an audio controller in the GPU core capable of surround sound output eliminates the need for an S/PDIF connection from the motherboard or sound card to the video card, for synchronous video and audio output via an HDMI cable. The Radeon HD 2900 series lacked the UVD feature, but was still given the ATI Avivo HD label.
ATI Avivo Video Converter
ATI also released transcoding software dubbed "ATI Avivo Video Converter", which supports transcoding between the H.264, VC-1, WMV9, WMV9 PMC, MPEG-2, MPEG-4 and DivX video formats, as well as formats used on the iPod and PSP. Earlier versions of this software used only the CPU for transcoding, but were locked for exclusive use with the ATI X1000 series of GPUs. Software modifications made it possible to use version 1.12 of the converter on a wider range of graphics adapters. The ATI Avivo Video Converter for Windows Vista became available with the release of Catalyst 7.9 (September 2007 release, version 8.411). The ATI Avivo Video Converter with GPU transcoding acceleration is also available for use with HD 4800 and HD 4600 series graphics cards and is included with the Catalyst 8.12 drivers. Support for Vista x64 is available via a separate download starting with Catalyst 9.6 (9-6_vista32-64_xcode). The new software is faster than Badaboom, an encoder that uses NVIDIA's CUDA to accelerate encoding, but has a higher CPU utilization than Badaboom. One review reported visual problems with iPod and WMV playback using Catalyst version 8.12 and, although concluding there was no clear winner, said that if forced to choose it would go with the Avivo converter.
Software support
ArcSoft TotalMedia Theatre
Corel WinDVD
Media Player Classic Home Cinema
MediaPortal
Cyberlink PowerDVD
Microsoft Windows Vista internal MPEG-2 decoder
Nero (software suite)
All Linux players supporting Xv output (with AMD Catalyst 9.1 or newer)
See also
Unified Video Decoder (UVD)
Video Coding Engine (VCE)
References
External links
Beyond3D AVIVO preview
PC Perspective article
ATI Technologies
Video acceleration
59868
https://en.wikipedia.org/wiki/Interpreter%20%28computing%29
Interpreter (computing)
In computer science, an interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them previously to have been compiled into a machine language program. An interpreter generally uses one of the following strategies for program execution: Parse the source code and perform its behavior directly; Translate source code into some efficient intermediate representation or object code and immediately execute that; Explicitly execute stored precompiled bytecode made by a compiler and matched with the interpreter Virtual Machine. Early versions of Lisp programming language and minicomputer and microcomputer BASIC dialects would be examples of the first type. Perl, Raku, Python, MATLAB, and Ruby are examples of the second, while UCSD Pascal is an example of the third type. Source programs are compiled ahead of time and stored as machine independent code, which is then linked at run-time and executed by an interpreter and/or compiler (for JIT systems). Some systems, such as Smalltalk and contemporary versions of BASIC and Java may also combine two and three. Interpreters of various types have also been constructed for many languages traditionally associated with compilation, such as Algol, Fortran, Cobol, C and C++. While interpretation and compilation are the two main means by which programming languages are implemented, they are not mutually exclusive, as most interpreting systems also perform some translation work, just like compilers. The terms "interpreted language" or "compiled language" signify that the canonical implementation of that language is an interpreter or a compiler, respectively. A high-level language is ideally an abstraction independent of particular implementations. History Interpreters were used as early as 1952 to ease programming within the limitations of computers at the time (e.g. a shortage of program storage space, or no native support for floating point numbers). Interpreters were also used to translate between low-level machine languages, allowing code to be written for machines that were still under construction and tested on computers that already existed. The first interpreted high-level language was Lisp. Lisp was first implemented in 1958 by Steve Russell on an IBM 704 computer. Russell had read John McCarthy's paper, and realized (to McCarthy's surprise) that the Lisp eval function could be implemented in machine code. The result was a working Lisp interpreter which could be used to run Lisp programs, or more properly, "evaluate Lisp expressions". General operation An interpreter usually consists of a set of known commands it can execute, and a list of these commands in the order a programmer wishes to execute them. Each command (also known as an Instruction) contains the data the programmer wants to mutate, and information on how to mutate the data. For example, an interpreter might read ADD Wikipedia_Users, 5 and interpret it as a request to add five to the Wikipedia_Users variable. Interpreters have a wide variety of instructions which are specialized to perform different tasks, but you will commonly find interpreter instructions for basic mathematical operations, branching, and memory management, making most interpreters Turing complete. Many interpreters are also closely integrated with a garbage collector and debugger. 
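As a rough illustration of the first strategy above (parse the source and perform its behavior directly), the following Python sketch interprets a short list of commands in the style of the ADD Wikipedia_Users, 5 example. The command names (SET, ADD, PRINT) and the comma-separated operand syntax are illustrative assumptions rather than the instruction set of any real interpreter.

# Minimal sketch of a direct command interpreter; the command set and
# syntax are invented for illustration only.
def interpret(program):
    variables = {}                                  # named storage the commands mutate
    for line in program:
        op, _, rest = line.partition(" ")
        args = [a.strip() for a in rest.split(",")] if rest else []
        if op == "SET":                             # SET name, value
            variables[args[0]] = int(args[1])
        elif op == "ADD":                           # ADD name, value
            variables[args[0]] = variables.get(args[0], 0) + int(args[1])
        elif op == "PRINT":                         # PRINT name
            print(args[0], "=", variables[args[0]])
        else:
            raise ValueError("unknown command: " + op)
    return variables

interpret([
    "SET Wikipedia_Users, 10",
    "ADD Wikipedia_Users, 5",
    "PRINT Wikipedia_Users",                        # prints: Wikipedia_Users = 15
])

Each command is read, decoded and acted on immediately, which is why this style is simple to build but repeats the decoding work every time a statement is executed.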
Compilers versus interpreters Programs written in a high-level language are either directly executed by some kind of interpreter or converted into machine code by a compiler (and assembler and linker) for the CPU to execute. While compilers (and assemblers) generally produce machine code directly executable by computer hardware, they can often (optionally) produce an intermediate form called object code. This is basically the same machine specific code but augmented with a symbol table with names and tags to make executable blocks (or modules) identifiable and relocatable. Compiled programs will typically use building blocks (functions) kept in a library of such object code modules. A linker is used to combine (pre-made) library files with the object file(s) of the application to form a single executable file. The object files that are used to generate an executable file are thus often produced at different times, and sometimes even by different languages (capable of generating the same object format). A simple interpreter written in a low-level language (e.g. assembly) may have similar machine code blocks implementing functions of the high-level language stored, and executed when a function's entry in a look up table points to that code. However, an interpreter written in a high-level language typically uses another approach, such as generating and then walking a parse tree, or by generating and executing intermediate software-defined instructions, or both. Thus, both compilers and interpreters generally turn source code (text files) into tokens, both may (or may not) generate a parse tree, and both may generate immediate instructions (for a stack machine, quadruple code, or by other means). The basic difference is that a compiler system, including a (built in or separate) linker, generates a stand-alone machine code program, while an interpreter system instead performs the actions described by the high-level program. A compiler can thus make almost all the conversions from source code semantics to the machine level once and for all (i.e. until the program has to be changed) while an interpreter has to do some of this conversion work every time a statement or function is executed. However, in an efficient interpreter, much of the translation work (including analysis of types, and similar) is factored out and done only the first time a program, module, function, or even statement, is run, thus quite akin to how a compiler works. However, a compiled program still runs much faster, under most circumstances, in part because compilers are designed to optimize code, and may be given ample time for this. This is especially true for simpler high-level languages without (many) dynamic data structures, checks, or type checking. In traditional compilation, the executable output of the linkers (.exe files or .dll files or a library, see picture) is typically relocatable when run under a general operating system, much like the object code modules are but with the difference that this relocation is done dynamically at run time, i.e. when the program is loaded for execution. On the other hand, compiled and linked programs for small embedded systems are typically statically allocated, often hard coded in a NOR flash memory, as there is often no secondary storage and no operating system in this sense. Historically, most interpreter systems have had a self-contained editor built in. 
This is becoming more common also for compilers (then often called an IDE), although some programmers prefer to use an editor of their choice and run the compiler, linker and other tools manually. Historically, compilers predate interpreters because hardware at that time could not support both the interpreter and interpreted code and the typical batch environment of the time limited the advantages of interpretation. Development cycle During the software development cycle, programmers make frequent changes to source code. When using a compiler, each time a change is made to the source code, they must wait for the compiler to translate the altered source files and link all of the binary code files together before the program can be executed. The larger the program, the longer the wait. By contrast, a programmer using an interpreter does a lot less waiting, as the interpreter usually just needs to translate the code being worked on to an intermediate representation (or not translate it at all), thus requiring much less time before the changes can be tested. Effects are evident upon saving the source code and reloading the program. Compiled code is generally less readily debugged as editing, compiling, and linking are sequential processes that have to be conducted in the proper sequence with a proper set of commands. For this reason, many compilers also have an executive aid, known as a Make file and program. The Make file lists compiler and linker command lines and program source code files, but might take a simple command line menu input (e.g. "Make 3") which selects the third group (set) of instructions then issues the commands to the compiler, and linker feeding the specified source code files. Distribution A compiler converts source code into binary instruction for a specific processor's architecture, thus making it less portable. This conversion is made just once, on the developer's environment, and after that the same binary can be distributed to the user's machines where it can be executed without further translation. A cross compiler can generate binary code for the user machine even if it has a different processor than the machine where the code is compiled. An interpreted program can be distributed as source code. It needs to be translated in each final machine, which takes more time but makes the program distribution independent of the machine's architecture. However, the portability of interpreted source code is dependent on the target machine actually having a suitable interpreter. If the interpreter needs to be supplied along with the source, the overall installation process is more complex than delivery of a monolithic executable since the interpreter itself is part of what need be installed. The fact that interpreted code can easily be read and copied by humans can be of concern from the point of view of copyright. However, various systems of encryption and obfuscation exist. Delivery of intermediate code, such as bytecode, has a similar effect to obfuscation, but bytecode could be decoded with a decompiler or disassembler. Efficiency The main disadvantage of interpreters is that an interpreted program typically runs slower than if it had been compiled. The difference in speeds could be tiny or great; often an order of magnitude and sometimes more. It generally takes longer to run a program under an interpreter than to run the compiled code but it can take less time to interpret it than the total time required to compile and run it. 
This is especially important when prototyping and testing code when an edit-interpret-debug cycle can often be much shorter than an edit-compile-run-debug cycle. Interpreting code is slower than running the compiled code because the interpreter must analyze each statement in the program each time it is executed and then perform the desired action, whereas the compiled code just performs the action within a fixed context determined by the compilation. This run-time analysis is known as "interpretive overhead". Access to variables is also slower in an interpreter because the mapping of identifiers to storage locations must be done repeatedly at run-time rather than at compile time. There are various compromises between the development speed when using an interpreter and the execution speed when using a compiler. Some systems (such as some Lisps) allow interpreted and compiled code to call each other and to share variables. This means that once a routine has been tested and debugged under the interpreter it can be compiled and thus benefit from faster execution while other routines are being developed. Many interpreters do not execute the source code as it stands but convert it into some more compact internal form. Many BASIC interpreters replace keywords with single byte tokens which can be used to find the instruction in a jump table. A few interpreters, such as the PBASIC interpreter, achieve even higher levels of program compaction by using a bit-oriented rather than a byte-oriented program memory structure, where commands tokens occupy perhaps 5 bits, nominally "16-bit" constants are stored in a variable-length code requiring 3, 6, 10, or 18 bits, and address operands include a "bit offset". Many BASIC interpreters can store and read back their own tokenized internal representation. An interpreter might well use the same lexical analyzer and parser as the compiler and then interpret the resulting abstract syntax tree. Example data type definitions for the latter, and a toy interpreter for syntax trees obtained from C expressions are shown in the box. Regression Interpretation cannot be used as the sole method of execution: even though an interpreter can itself be interpreted and so on, a directly executed program is needed somewhere at the bottom of the stack because the code being interpreted is not, by definition, the same as the machine code that the CPU can execute. Variations Bytecode interpreters There is a spectrum of possibilities between interpreting and compiling, depending on the amount of analysis performed before the program is executed. For example, Emacs Lisp is compiled to bytecode, which is a highly compressed and optimized representation of the Lisp source, but is not machine code (and therefore not tied to any particular hardware). This "compiled" code is then interpreted by a bytecode interpreter (itself written in C). The compiled code in this case is machine code for a virtual machine, which is implemented not in hardware, but in the bytecode interpreter. Such compiling interpreters are sometimes also called compreters. In a bytecode interpreter each instruction starts with a byte, and therefore bytecode interpreters have up to 256 instructions, although not all may be used. Some bytecodes may take multiple bytes, and may be arbitrarily complicated. Control tables - that do not necessarily ever need to pass through a compiling phase - dictate appropriate algorithmic control flow via customized interpreters in similar fashion to bytecode interpreters. 
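To make the bytecode-dispatch idea concrete, here is a minimal Python sketch of a stack-based bytecode interpreter. The five opcodes, their numbering, and the one-byte operand encoding are invented for this example and do not correspond to any real bytecode format mentioned above (CPython, the JVM, Emacs Lisp and others each define their own).

# Minimal sketch of a bytecode dispatch loop for a tiny stack machine.
# Opcodes and encoding are illustrative, not a real bytecode format.
PUSH, ADD, MUL, PRINT, HALT = range(5)          # each opcode fits in one byte

def run(bytecode):
    stack, pc = [], 0
    while True:
        op = bytecode[pc]
        pc += 1
        if op == PUSH:                          # a one-byte literal operand follows
            stack.append(bytecode[pc])
            pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == PRINT:
            print(stack[-1])
        elif op == HALT:
            return

run(bytes([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT]))   # computes and prints (2 + 3) * 4

The fetch-decode-dispatch loop shown here is the part that threaded-code and template interpreters replace with cheaper dispatch, and that a just-in-time compiler removes altogether by emitting native code, as discussed below.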
Threaded code interpreters Threaded code interpreters are similar to bytecode interpreters but instead of bytes they use pointers. Each "instruction" is a word that points to a function or an instruction sequence, possibly followed by a parameter. The threaded code interpreter either loops fetching instructions and calling the functions they point to, or fetches the first instruction and jumps to it, and every instruction sequence ends with a fetch and jump to the next instruction. Unlike bytecode there is no effective limit on the number of different instructions other than available memory and address space. The classic example of threaded code is the Forth code used in Open Firmware systems: the source language is compiled into "F code" (a bytecode), which is then interpreted by a virtual machine. Abstract syntax tree interpreters In the spectrum between interpreting and compiling, another approach is to transform the source code into an optimized abstract syntax tree (AST), then execute the program following this tree structure, or use it to generate native code just-in-time. In this approach, each sentence needs to be parsed just once. As an advantage over bytecode, the AST keeps the global program structure and relations between statements (which is lost in a bytecode representation), and when compressed provides a more compact representation. Thus, using AST has been proposed as a better intermediate format for just-in-time compilers than bytecode. Also, it allows the system to perform better analysis during runtime. However, for interpreters, an AST causes more overhead than a bytecode interpreter, because of nodes related to syntax performing no useful work, of a less sequential representation (requiring traversal of more pointers) and of overhead visiting the tree. Just-in-time compilation Further blurring the distinction between interpreters, bytecode interpreters and compilation is just-in-time (JIT) compilation, a technique in which the intermediate representation is compiled to native machine code at runtime. This confers the efficiency of running native code, at the cost of startup time and increased memory use when the bytecode or AST is first compiled. The earliest published JIT compiler is generally attributed to work on LISP by John McCarthy in 1960. Adaptive optimization is a complementary technique in which the interpreter profiles the running program and compiles its most frequently executed parts into native code. The latter technique is a few decades old, appearing in languages such as Smalltalk in the 1980s. Just-in-time compilation has gained mainstream attention amongst language implementers in recent years, with Java, the .NET Framework, most modern JavaScript implementations, and Matlab now including JIT compilers. Template Interpreter Making the distinction between compilers and interpreters yet again even more vague is a special interpreter design known as a template interpreter. Rather than implement the execution of code by virtue of a large switch statement containing every possible bytecode possible, while operating on a software stack or a tree walk, a template interpreter maintains a large array of bytecode (or any efficient intermediate representation) mapped directly to corresponding native machine instructions that can be executed on the host hardware as key value pairs, known as a "Template". When the particular code segment is executed the interpreter simply loads the opcode mapping in the template and directly runs it on the hardware. 
Due to its design, the template interpreter very strongly resembles a just-in-time compiler rather than a traditional interpreter, however it is technically not a JIT due to the fact that it merely translates code from the language into native calls one opcode at a time rather than creating optimized sequences of CPU executable instructions from the entire code segment. Due to the interpreter's simple design of simply passing calls directly to the hardware rather than implementing them directly, it is much faster than every other type, even bytecode interpreters, and to an extent less prone to bugs, but as a tradeoff is more difficult to maintain due to the interpreter having to support translation to multiple different architectures instead of a platform independent virtual machine/stack. To date, the only template interpreter implementation of a language to exist is the interpreter within the HotSpot/OpenJDK Java Virtual Machine reference implementation. Self-interpreter A self-interpreter is a programming language interpreter written in a programming language which can interpret itself; an example is a BASIC interpreter written in BASIC. Self-interpreters are related to self-hosting compilers. If no compiler exists for the language to be interpreted, creating a self-interpreter requires the implementation of the language in a host language (which may be another programming language or assembler). By having a first interpreter such as this, the system is bootstrapped and new versions of the interpreter can be developed in the language itself. It was in this way that Donald Knuth developed the TANGLE interpreter for the language WEB of the industrial standard TeX typesetting system. Defining a computer language is usually done in relation to an abstract machine (so-called operational semantics) or as a mathematical function (denotational semantics). A language may also be defined by an interpreter in which the semantics of the host language is given. The definition of a language by a self-interpreter is not well-founded (it cannot define a language), but a self-interpreter tells a reader about the expressiveness and elegance of a language. It also enables the interpreter to interpret its source code, the first step towards reflective interpreting. An important design dimension in the implementation of a self-interpreter is whether a feature of the interpreted language is implemented with the same feature in the interpreter's host language. An example is whether a closure in a Lisp-like language is implemented using closures in the interpreter language or implemented "manually" with a data structure explicitly storing the environment. The more features implemented by the same feature in the host language, the less control the programmer of the interpreter has; a different behavior for dealing with number overflows cannot be realized if the arithmetic operations are delegated to corresponding operations in the host language. Some languages such as Lisp and Prolog have elegant self-interpreters. Much research on self-interpreters (particularly reflective interpreters) has been conducted in the Scheme programming language, a dialect of Lisp. In general, however, any Turing-complete language allows writing of its own interpreter. Lisp is such a language, because Lisp programs are lists of symbols and other lists. XSLT is such a language, because XSLT programs are written in XML. A sub-domain of metaprogramming is the writing of domain-specific languages (DSLs). 
Clive Gifford introduced a measure of the quality of a self-interpreter (the eigenratio), the limit of the ratio between computer time spent running a stack of N self-interpreters and time spent to run a stack of N-1 self-interpreters as N goes to infinity. This value does not depend on the program being run. The book Structure and Interpretation of Computer Programs presents examples of meta-circular interpretation for Scheme and its dialects. Other examples of languages with a self-interpreter are Forth and Pascal. Microcode Microcode is a very commonly used technique "that imposes an interpreter between the hardware and the architectural level of a computer". As such, the microcode is a layer of hardware-level instructions that implement higher-level machine code instructions or internal state machine sequencing in many digital processing elements. Microcode is used in general-purpose central processing units, as well as in more specialized processors such as microcontrollers, digital signal processors, channel controllers, disk controllers, network interface controllers, network processors, graphics processing units, and in other hardware. Microcode typically resides in special high-speed memory and translates machine instructions, state machine data or other input into sequences of detailed circuit-level operations. It separates the machine instructions from the underlying electronics so that instructions can be designed and altered more freely. It also facilitates the building of complex multi-step instructions, while reducing the complexity of computer circuits. Writing microcode is often called microprogramming, and the microcode in a particular processor implementation is sometimes called a microprogram. More extensive microcoding allows small and simple microarchitectures to emulate more powerful architectures with wider word length, more execution units and so on, which is a relatively simple way to achieve software compatibility between different products in a processor family. Computer processor Even a non-microcoding computer processor itself can be considered to be a parsing immediate-execution interpreter that is written in a general-purpose hardware description language such as VHDL to create a system that parses the machine code instructions and immediately executes them. Applications Interpreters are frequently used to execute command languages and glue languages, since each operator executed in a command language is usually an invocation of a complex routine such as an editor or compiler. Self-modifying code can easily be implemented in an interpreted language. This relates to the origins of interpretation in Lisp and artificial intelligence research. Virtualization: machine code intended for a hardware architecture can be run using a virtual machine. This is often used when the intended architecture is unavailable, or among other uses, for running multiple copies. Sandboxing: while some types of sandboxes rely on operating system protections, an interpreter or virtual machine is often used. The actual hardware architecture and the originally intended hardware architecture may or may not be the same. This may seem pointless, except that a sandbox is not compelled to actually execute all the instructions in the source code it is processing. In particular, it can refuse to execute code that violates any security constraints it is operating under. Emulators for running computer software written for obsolete and unavailable hardware on more modern equipment.
See also BASIC interpreter Command-line interpreter Compiled language Dynamic compilation Homoiconicity Meta-circular evaluator Partial evaluation References External links IBM Card Interpreters page at Columbia University Theoretical Foundations For Practical 'Totally Functional Programming' (Chapter 7 especially) Doctoral dissertation tackling the problem of formalising what is an interpreter Short animation explaining the key conceptual difference between interpreters and compilers Programming language implementation
57030143
https://en.wikipedia.org/wiki/Jim%20Melvin
Jim Melvin
James Walter Melvin is an American entrepreneur, investor, and philanthropist who founded and served as the CEO of several notable software companies in the foodservice and hospitality industries. His software products have been installed at more than 125,000 stores worldwide for companies such as McDonald's, YUM! Brands, Burger King, Wendy's, Disney, Darden Restaurants, K-Mart, Costco, FedEx, Walmart, Foot Locker and many others. Early life Melvin was an early fan of Dungeons & Dragons. At 15 years old, Melvin met Gary Gygax at a fan convention, where the two discussed an idea for automating the role of the Dungeon Master. At the urging of Gygax, Melvin created GameAssist using Z80 assembly on TRS-80 computers. GameAssist was a tool to assist Dungeon Masters in the Dungeons & Dragons game with tracking player statistics and inventory and in managing and generating creature encounters. Melvin sold the Intellectual property behind GameAssist in 1980 to a gaming supplies store after he left to attend college at Tulane University. Introduction to the foodservice industry Melvin's career of creating technology solutions for the foodservice industry began when he was 18 years old, when he was hired by a Wendy's franchisee to develop software for the franchisee's Wendy's stores. This software was later adopted by Wendy's corporate. In 1986, Melvin co-founded Techwerks. Techwerks became one of the largest global resellers of NCR retail hardware. While at Techwerks, Melvin developed touchscreen point of sale software called "Foodwerks" for fast food restaurants. Foodwerks was sold in April 1989 for almost $2 million to an investor group led by Melvin. Compris Technologies With support from Dave Thomas, and financial backing from IBM, Melvin founded Compris Technologies in late 1989. The primary product offered by Compris Technologies was a point of sale (POS) software application. IBM, in a partnership with Compris Technologies, created the first commercial IBM touchscreen POS units, IBM kiosk POS units and IBM handheld POS units. As a result, the Compris POS software application was first mainly installed on IBM 46xx Series hardware, but later supported by several other POS hardware manufacturers including NCR. Between 1989 and 1992, Compris quickly grew to become the dominant global provider of foodservice technology. The Compris POS software continued to increase in popularity throughout the late 1990s and early 2000s, winning several Microsoft Retail Application Developer (RAD) Awards. Apigent Solutions In 2000, Melvin founded Apigent Solutions. Other technology companies, such as Par Technology Corporation, partnered with Apigent Solutions to provide comprehensive technology offerings to foodservice chains within the quick service (QSR) segment. At the time, Apigent Solutions was the only Sun Microsystems SunTone Certified data center in the foodservice industry. In 2002, Apigent Solutions was sold to eMac Digital, a joint venture backed by McDonald's and KKR. SIVA Corporation On January 1, 2003, Melvin acquired SIVA Corporation, a provider of cloud-based POS software, from a private investor group. Melvin shut down the business for a year following the acquisition to rearchitect the underlying technology as an enterprise-focused system. Upon completion of the reengineering effort, Darden Restaurants became iSIVA's first customer in 2003. 
Over the next four years, the iSIVA product grew in popularity and was adopted by Luby's, Fuddruckers, Legal Sea Foods, CoCo's, Carrows, Miller's Alehouse and many others. Other ventures Following his tenure at Par Technology Corporation, Melvin served as an advisor and investor for several technology start-ups in the foodservice space. In 2014, Melvin joined the board of advisors for the House of Genius, a Startup accelerator. Melvin currently occupies the role of CEO for Intelligent Transactions, a strategic technology consultancy for the foodservice industry. Philanthropy In 1998, in a partnership with the National Restaurant Association, Melvin co-founded the technology pavilion for the association's annual trade show in Chicago to help promote innovation within the foodservice industry. Melvin's other philanthropic endeavors included serving on the FDA's Food Safety Technology Advisory Panel and participating as an advisory board member for Round It Up America. In 2013, Melvin co-founded the Food Service Educational Alliance, a non-profit organization dedicated to investigating and advancing educational opportunities for workers in the foodservice industry. References Year of birth missing (living people) Living people People from Oakland, California American food company founders American founders American philanthropists
41366035
https://en.wikipedia.org/wiki/Anusha%20Rahman
Anusha Rahman
Anusha Rahman Ahmad Khan (born 1 June 1968) is a Pakistani politician who served as Federal Minister for Information Technology and Telecommunication in the Abbasi cabinet from April 2018 to May 2018. Previously she served as the Minister of State for Information Technology and Telecommunication of Pakistan from June 2013 to July 2017 in the third Sharif ministry and again from August 2017 to April 2018 in the Abbasi ministry. She was a member of the National Assembly of Pakistan from 2008 to May 2018, representing the Pakistan Muslim League (N). Early life and education Anusha was born on 1 June 1968 and belongs to a pre-partition political family. In 1992, Rehman graduated with an LLB and received an LLM from University College London, specializing in the law and economics of regulated industries, networks and markets. Professional career In the early 1990s, Rehman started her legal career. She is a corporate lawyer by profession. According to Rehman, she had been working with the telecom sector since the 1990s. Political career Anusha began her political career in 2006 or 2007, when she was made senior vice president of the lawyers' wing of the PML-N. She played an active role in the lawyers' movement for the restoration of the judiciary following the 2007 Pakistani state of emergency. She was elected as a member of the National Assembly of Pakistan for the first time in the 2008 Pakistani general election on a reserved seat for women. She was a member of the National Assembly Standing Committee on Law and Justice during her tenure in the National Assembly. In 2009, Rehman was a key member of the PML-N's steering committee, which was tasked with dealing with legal matters. She was re-elected as a member of the National Assembly of Pakistan for the second time in the 2013 Pakistani general election on a reserved seat for women. In 2013, Rehman was appointed as the Minister of State for Information Technology and Telecommunication. She ceased to hold ministerial office in July 2017 when the federal cabinet was disbanded following the resignation of Prime Minister Nawaz Sharif after the Panama Papers case decision. Following the election of Shahid Khaqan Abbasi as Prime Minister of Pakistan in August 2017, she was inducted into the federal cabinet of Abbasi and was again appointed as the Minister of State for Information Technology and Telecommunication. In April 2018, she was elevated to federal minister and appointed as Federal Minister for Information Technology and Telecommunication in the cabinet of Prime Minister Shahid Khaqan Abbasi. Upon the dissolution of the National Assembly on the expiration of its term on 31 May 2018, Rehman ceased to hold office as Federal Minister for Information Technology and Telecommunication. Key Contributions & Achievements During her tenure, she undertook several policy and legislative initiatives to advance the vision of a knowledge-based economy. She spearheaded the formulation of the Telecom Sector Policy 2015 to address emerging trends in the sector. She envisioned the availability of universal, affordable and quality telecommunication services provided through open and competitive markets for the benefit of the people. In 2017, Anusha Rahman received the "Government Leadership Award" for the flagship policy project "Telecom Policy 2015".
She pioneered the e-Governance initiative in Pakistan with an objective of increasing overall public administration efficiency, enhancing transparency, improving citizens’ access to services and enhancing their participation in democratic governance processes. Her leadership and apt policy interventions have also led Pakistan to be among top five nations in terms of freelancing in IT products globally. The export of IT and ITES increased by over 380% during five years of her tenure. In 2015, Rehman was awarded "GEM-TECH Global Achievers 2015" award by UN Women and the International Telecommunication Union in recognition of her work to empower women through technology. Mobile Broadband, Pakistan was awarded “Spectrum for Mobile Broadband Award 2015 “at the prestigious Mobile World Congress 2015 in Barcelona under the leadership of Anusha Rahman. Pakistan was awarded for successfully auctioning spectrum for 3G/4G services in the 850 MHz, 1800 MHz and 2100 MHz bands in 2014, and thereafter the rapid uptake of 3G /4Gservices in the country where broadband which was less than 3% in 2013 went up to over 40 % today. She added that “The award of GSMA Spectrum for Mobile Broadband Award for 2015 to Pakistan is an indication of the global community reposing its trust in the Telecommunication Sector policy practices of the Government of Pakistan” On Dec 2015, Anusha Rahman being the first women ICT Minister of Pakistan, introduced the program for girls referred as “ICTs for Girls” for promoting inclusiveness and empowerment of girls/ young women to enable them contribute to and benefit from the value chain of ICTs. This program was designed to provide access to ICT infrastructure and tools, customized ICT education for specific skill development and the job market. As part of this program, tens of thousands of girls and women from disadvantaged segments of society have been provided digital infrastructure and digital skills with state of art machines in fully broadband supported environments. Microsoft has partnered to design, develop and train teachers on the “4 Cs” for skills education including: coding, computing, coaching and communication. In July 2016, Pakistan’s first largest National Incubation Center was envisioned and launched under the leadership of Anusha through a public-private partnership of Ministry of Information Technology & Telecom, Ignite (Formerly National ICT R&D Fund) and Teamup. The NIC was structured to provide startups with a free of cost workspace, incubation, acceleration – Jazz xlr8 program – seed funds, and access to 50 million plus mobile 2016 customers. On July 20, 2016 at the launch, Anusha Rehman, Minister of State for IT & Telecom – Pakistan said, “The launch of the National Incubation Center is an important milestone in the Government of Pakistan’s digital agenda. I would like to appreciate the efforts of all our partners who have made this possible. I wish all the startups joining the National Innovation Center the very best in achieving their dreams, they are our future.” She led this ecosystem of entrepreneurship and expanded National Incubation Centers across Pakistan; Peshawar, Karachi, Lahore and Quetta. In order to prepare the youth and the future workforce which is equipped with fourth industrial revolution, Anusha Rahman launched a "DigiSkills" Program in Feb 2018 to Prepare One Million Freelancers in Pakistan. The programme was aimed at equipping the youth, freelancers, students, professionals, etc. 
with knowledge, skills, tools & techniques necessary to seize opportunities available internationally in online job market places and also locally. Prime Minister Shahid Khaqan Abbasi inaugurated the Digi Skills program aimed at training one million of the youth through online modules which successfully completed its scope. Later on February 15, 2019, The Commonwealth Telecommunications Organisation (CTO) has appointed Anusha Rahman Khan, as the regional advisor to the secretary-general and the CTO for the East and South Asia region. References Living people Pakistan Muslim League (N) politicians Nawaz Sharif administration Pakistani lawyers Pakistani women lawyers Pakistani MNAs 2008–2013 Pakistani MNAs 2013–2018 1968 births Women members of the National Assembly of Pakistan Federal ministers of Pakistan Women federal ministers of Pakistan
34049942
https://en.wikipedia.org/wiki/IT%20chargeback%20and%20showback
IT chargeback and showback
IT chargeback and IT showback (memo-back) are two policies used by information technology (IT) departments to allocate and/or bill the costs associated with each department's or division's usage. Chargeback The need to understand the components of the costs of IT, and to fund the IT organization in the face of unexpected demands from user departments, led to the development of chargeback mechanisms, in which a requesting department gets an internal bill (or "cross-charge") for the costs that are directly associated with the infrastructure, data transfer, application licenses, training, etc., that it generates. The purposes of chargeback include:
Making departments responsible for their usage, e.g., refraining from asking for resources they are not going to use
Providing visibility to the head of IT and to senior management on the reasons behind the costs of IT
Allowing the IT department to respond to unexpected customer demand by saying "yes, we can do it, but you will have to pay for it" instead of saying "no, we cannot do this because it's not in the budget."
As of 2011, chargeback mechanisms are often controversial in organizations. Departments rarely pay directly for their own electricity bill, janitorial services, etc.; these costs are allocated to departments on the basis of the number of employees or the square footage they occupy. Similarly, departments may expect to pay a fixed allocation for IT and get a flexible set of services that meet their needs in return. While discussions about such allocations are always difficult, seeing actual variable charges arrive on a monthly basis for specific levels of usage can create conflict both between IT and its internal customers, and between a department manager and the users who caused resource consumption to increase and therefore costs to rise. The rise of subscription-based computing services (cloud computing) may make chargeback mechanisms more palatable. Showback Around 2010, the concept of IT showback emerged to keep the advantages of chargeback without some of its drawbacks. Showback consists of providing IT management, departments, and corporate management with an analysis of the IT costs attributable to each department, without actually cross-charging those costs. The pressure on departments to limit their usage is less direct, but awareness of the costs usually causes department heads and senior management to question why one department is "spending" more on IT than another.
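The distinction can be illustrated with a minimal sketch of a usage-based allocation. The department names, usage figures and unit cost below are hypothetical, and real tools typically meter many more cost drivers (storage, licenses, support tickets, etc.); under showback the allocation is only reported, while under chargeback the same figures are issued as internal invoices.

    # Hypothetical monthly usage (VM-hours) per department and an assumed blended unit cost.
    usage_vm_hours = {"Sales": 1200, "Engineering": 4300, "Finance": 500}
    cost_per_vm_hour = 0.12

    def allocate(usage, unit_cost):
        # Attribute IT cost to each department in proportion to its metered usage.
        return {dept: round(hours * unit_cost, 2) for dept, hours in usage.items()}

    allocation = allocate(usage_vm_hours, cost_per_vm_hour)

    # Showback: report the figures to departments and management; no money changes hands.
    for dept, cost in allocation.items():
        print(f"Showback report - {dept}: ${cost:.2f}")

    # Chargeback: the same allocation is raised as internal bills (cross-charges).
    internal_invoices = [{"department": dept, "amount": cost} for dept, cost in allocation.items()]

Whatever the allocation key (VM-hours, storage, headcount), the difference between the two policies lies only in whether an invoice is actually issued.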
List of IT chargeback and showback software systems
DellEMC, Storage Resource Manager (SRM)
VMware, vRealize Business
Red Hat, Red Hat CloudForms, Red Hat Insights Cost Management
IBM, SmartCloud Cost Management
LeanIX
Talligent, Openbook Platform
Embotics, vCommander Cloud Management Platform
meshcloud, Cloud Foundation Platform with Cloud Cost Management capabilities
MagicOrange, Profitability and Cost Transparency made simple
Apptio, Technology Business Management Software and Services
ClearCost Software, Integrated Service Financial Management Software
CloudBolt, Hybrid cloud management platform developed by CloudBolt Software
Cube Billing, Software-as-a-Service IT Cost Allocation and Chargeback/Showback Software
Cloud Cruiser, Microsoft Private Cloud integration tool to provide charge & showback
Open iT, Open iT for Software Asset Management and Chargeback
NetApp: OnCommand Insight (OCI)
BMC Software, Cloud LifeCycle Management / TrueSight Capacity Optimization
Linkbynet, SelfDeploy (Cloud Management Platform)
MicroFocus, Asset Manager
CloudZero, Cloud Cost Management
Exivity, Hybrid Cloud metering for IT chargeback/showback and customer billing
Nordcloud, Klarity multi cloud management platform
Visual Storage Intelligence, Storage Management Software-as-a-Service
See also Virtual chargeback Financial management for IT services (ITSM)#Charging
References
Information technology management Costs
195809
https://en.wikipedia.org/wiki/PayPal
PayPal
PayPal Holdings, Inc. is an American multinational financial technology company that operates an online payments system in the majority of countries that support online money transfers and serves as an electronic alternative to traditional paper methods such as checks and money orders. The company operates as a payment processor for online vendors, auction sites and many other commercial users, for which it charges a fee. Established in 1998 as Confinity, PayPal went public through an IPO in 2002. It became a wholly owned subsidiary of eBay later that year, valued at $1.5 billion. In 2015, eBay spun off PayPal to eBay's shareholders and PayPal became an independent company again. The company was ranked 134th on the 2021 Fortune 500 of the largest United States corporations by revenue. History Early history PayPal was originally established by Peter Thiel, Luke Nosek and Max Levchin in December 1998 as Confinity, a company that developed security software for handheld devices. It had no success with that business model, however, and switched its focus to a digital wallet. The first version of the PayPal electronic payments system was launched in 1999. In March 2000, Confinity merged with X.com, an online financial services company founded in March 1999 by Elon Musk. Musk was optimistic about the future success of the money transfer business Confinity was developing, but Bill Harris, then president and CEO of X.com, disagreed about its potential and left the company in May 2000. In October of that year, Musk decided that X.com would terminate its other internet banking operations and focus on PayPal. That same month, Musk was replaced by Peter Thiel as CEO of X.com, which was renamed PayPal in 2001 and went public in 2002. PayPal's IPO listed under the ticker PYPL at $13 per share and generated over $61 million. eBay subsidiary (2002–2014) Shortly after PayPal's IPO, the company was acquired by eBay on October 3, 2002, for $1.5 billion. More than 70 percent of all eBay auctions accepted PayPal payments, and roughly 1 in 4 closed auction listings were transacted via PayPal. PayPal became the default payment method used by the majority of eBay users, and the service competed with eBay's subsidiary Billpoint, as well as Citibank's c2it, Yahoo!'s PayDirect, and Google Checkout. In 2005, PayPal acquired the VeriSign payment solution to provide added security support. In 2007, PayPal announced a partnership with MasterCard, which led to the development and launch of the PayPal Secure Card service, software that allows customers to make payments on websites that do not accept PayPal directly. By the end of 2007, the company generated $1.8 billion in revenue. In January 2008, PayPal acquired Fraud Sciences, a privately held Israeli start-up that developed online risk tools, for $169 million. In November 2008, the company acquired Bill Me Later, an online transactional credit company. By 2010, PayPal had over 100 million active user accounts in 190 markets through 25 different currencies. In July 2011, fourteen alleged members of the Anonymous hacktivist group were charged with attempting to disrupt PayPal's operations. The denial-of-service attacks occurred in December 2010, after PayPal stopped processing donations to WikiLeaks. On December 5, 2013, 13 of the PayPal 14 pleaded guilty to misdemeanor and felony charges related to the attacks. 
The company continued to build its Merchant Services division, providing e-payments for retailers on eBay. In 2011, PayPal announced that it would begin moving its business offline so that customers could make payments via PayPal in stores. In August 2012, the company announced its partnership with Discover Card to allow PayPal payments to be made at any of the 7 million stores in Discover Card's network. By the end of 2012, PayPal accounted for 40% of eBay's revenue in the third quarter of 2012. In 2013, PayPal acquired IronPearl, a Palo Alto startup offering engagement software, and Braintree, a Chicago-based payment gateway, to further product development and mobile services. In June 2014, David Marcus announced he was leaving his role as PayPal president. Marcus had joined PayPal in August 2011 after its acquisition of Zong, of which he was the founder and CEO, and had succeeded Scott Thompson as president when Thompson left the role to join Yahoo. PayPal announced that Marcus would be succeeded by Dan Schulman, who previously served as CEO of Virgin Mobile and executive vice president of American Express. Spin-off from eBay (2014–present) It was announced on September 30, 2014, that eBay would spin off PayPal into a separate publicly traded company, a move demanded in 2013 by activist hedge fund magnate Carl Icahn. The spin-off was completed on July 18, 2015. Dan Schulman is the current president and CEO, with former eBay CEO John Donahoe serving as chairman. On January 31, 2018, eBay announced that "After the existing eBay-PayPal agreement ends in 2020, PayPal will remain a payment option for shoppers on eBay, but it won't be prominently featured ahead of debit and credit card options as it is today. PayPal will cease to process card payments for eBay at that time." The company would "instead begin working with Amsterdam-based Adyen". On July 1, 2015, PayPal announced that it was acquiring digital money transfer company Xoom Corporation. PayPal spent $25 a share in cash to acquire the publicly traded Xoom, or about $1.09 billion. The deal closed in the fourth quarter of 2015. The move strengthened PayPal's international business, giving it access to Xoom's 1.3 million active U.S. customers, who sent about $7 billion in the 12 months ending March 31 to people in 37 countries. On September 1, 2015, PayPal launched its peer-to-peer payment platform "PayPal.Me", a service that allows users to send a custom link to request funds via text, email, or other messaging platforms. Custom links are structured as PayPal.me/username followed by the amount requested. PayPal.Me was launched in 18 countries including the United States, United Kingdom, Germany, Australia, Canada, Russia, Turkey, France, Italy, Spain, Poland, Sweden, Belgium, Norway, Denmark, Netherlands, Austria and Switzerland. As of September 2015, PayPal had 170 million users, and the focus of PayPal.Me was to create a mobile-first user experience that enables faster payment sharing than PayPal's traditional tools. On May 17, 2018, PayPal agreed to purchase Swedish payment processor iZettle for $2.2 billion. This was PayPal's largest acquisition until late November 2019; the company said iZettle's in-store expertise and digital marketing strength would complement its own online and mobile payment services. On March 19, 2019, PayPal announced its partnership with Instagram as part of the company's new checkout feature, "Checkout on Instagram". 
In June 2019, PayPal reported that Chief Operating Officer Bill Ready would be leaving the company at the end of the year. In December 2019, Google announced that Ready would become its new commerce chief. On January 6, 2020, PayPal acquired Honey for over $4 billion. This is PayPal's largest acquisition to date. More recently, the company signed a deal with NBCUniversal. In January 2021, PayPal became the first foreign operator with 100% control of a payment platform in China, gaining an advanced position in the local online payment market. In an international survey conducted in March 2021 by Morning Consult, PayPal was found to be the second most trusted brand globally. On October 20, 2021, Bloomberg reported that PayPal was interested in acquiring Pinterest at a potential price of around $70 a share, although there was no certainty the talks would lead to an agreement. Acquisitions Finances PayPal's fiscal year runs from January 1 to December 31. For fiscal year 2019, PayPal reported earnings of US$2.459 billion, with annual revenue of $17.772 billion, an increase of 15% over the previous fiscal cycle. PayPal's shares traded at over $108 per share, and its market capitalization was valued at over $127.58 billion in December 2019. The COVID-19 pandemic accelerated the growth of digital payment platforms, including PayPal, at the expense of the traditional banking sector. As a result, PayPal's stock had risen by as much as 78% in 2020 as of October. In addition, total payment volume increased 29% to $220 billion, strengthening positive investor sentiment. Offices PayPal's corporate headquarters are located in the North San Jose Innovation District of San Jose, California, at its North First Street campus. The company's operations center in La Vista, Nebraska, opened in 1999. Since July 2007, PayPal has operated across the European Union as a Luxembourg-based bank. The PayPal European headquarters are located in Luxembourg and the international headquarters are in Singapore. PayPal opened a technology center in Scottsdale, Arizona in 2006, and a software development center in Chennai, India in 2007. In October 2007, PayPal opened a data service office on the north side of Austin, Texas, and also opened a second operations center in La Vista, Nebraska that same year. In 2011, PayPal opened a second customer support center in Kuala Lumpur, Malaysia, and began hiring, joining similar customer support operations located in Berlin, Germany; Chandler, Arizona; Dublin and Dundalk, Ireland; Omaha, Nebraska; and Shanghai, China. In 2014, PayPal opened a new global center of operations in Kuala Lumpur. Services PayPal's services allow people to make financial transactions online by granting the ability to transfer funds electronically between individuals and businesses. Through PayPal, users can send or receive payments for online auctions on websites like eBay, purchase or sell goods and services, or donate money or receive donations. It is not necessary to have a PayPal account to use the company's services. PayPal account holders can set a currency conversion option in account settings. The PayPal app is available online or at the iTunes App Store and Google Play. One year after acquiring Braintree, PayPal introduced its "One Touch" service, which allows users to pay with a one-touch option on participating merchants' websites or apps. 
In 2008, PayPal acquired the online credit product Bill Me Later, Inc., which has since been rebranded as PayPal Credit and provides services for Comenity Capital Bank, the lender of PayPal Credit accounts. Founded in 2000, Bill Me Later is headquartered in Timonium, Maryland. PayPal Credit offers shoppers access to an instant online revolving line of credit at thousands of vendors that accept PayPal, subject to credit approval. PayPal Credit allows consumers to shop online in much the same way as they would with a traditional credit card. The rebranding of Bill Me Later as PayPal Credit also means that consumers can use PayPal Credit to fund transactions virtually anywhere PayPal is accepted. In 2015, PayPal agreed that PayPal Credit would pay a $25 million fine to settle a complaint filed in federal court by the Consumer Financial Protection Bureau. From 2009 to 2016, PayPal operated Student Accounts, allowing parents to set up a student account, transfer money into it, and obtain a debit card for student use. The program provided tools to teach how to spend money wisely and take responsibility for actions. PayPal discontinued Student Accounts in August 2016. In November 2009, PayPal partially opened its platform, allowing other services to access more APIs and to use its infrastructure to enable peer-to-peer online transactions. On November 28, 2011, PayPal reported that Black Friday brought record mobile engagement, including a 538% increase in global mobile payment volume compared with Black Friday 2010. In 2012, the company launched "PayPal Here", a small business mobile payment system that includes a combination of a free mobile app and a small card reader that plugs into a smartphone. PayPal launched an updated app for iOS and Android in 2013 that expanded its mobile app capabilities by allowing users to search for local shops and restaurants that accept PayPal payments, order ahead at participating venues, and access their PayPal Credit accounts (formerly known as Bill Me Later). On October 21, 2020, PayPal announced a new service allowing customers to use cryptocurrencies to shop at 26 million merchants on the network starting in 2021. PayPal has been using Paxos Trust to provide the back-end infrastructure allowing users to manage and trade cryptocurrencies in accordance with data privacy rules and financial regulations. Paxos has been in charge of acquiring the necessary regulatory approvals for PayPal to facilitate cryptocurrency assets. As part of the announcement, PayPal secured the first conditional cryptocurrency license from the New York State Department of Financial Services, which will allow customers to purchase cryptocurrencies such as Bitcoin, Litecoin, Ethereum, and Bitcoin Cash. PayPal operates in 202 markets and has 426 million active, registered accounts. PayPal allows customers to send, receive, and hold funds in 25 currencies worldwide. Business model evolution PayPal's success in users and volumes was the product of a three-phase strategy described by former eBay CEO Meg Whitman: "First, PayPal focused on expanding its service among eBay users in the US. Second, we began expanding PayPal to eBay's international sites. And third, we started to build PayPal's business off eBay." Phase 1 In the first phase, payment volumes were coming mostly from the eBay auction website. The system was very attractive to auction sellers, most of whom were individuals or small businesses unable to accept credit cards, and it was attractive to consumers as well. 
In fact, many sellers could not qualify for a credit card merchant account because they lacked a commercial credit history. The service also appealed to auction buyers because they could fund PayPal accounts using credit cards or bank account balances, without divulging credit card numbers to unknown sellers. PayPal employed an aggressive marketing campaign to accelerate its growth, depositing $10 in new users' PayPal accounts. Phase 2 Until 2000, PayPal's strategy was to earn interest on funds in PayPal accounts. However, most recipients of PayPal credits withdrew funds immediately. Also, many senders funded their payments using credit cards, which cost PayPal roughly 2% of payment value per transaction. To solve this problem, PayPal tailored its product to cater more to business accounts. Instead of relying on interest earned from deposited funds, PayPal started relying on earnings from service charges. It offered seller protection to PayPal account holders, provided that they complied with reimbursement policies. For example, PayPal merchants are required either to retain a traceable proof of shipping to a confirmed address or to provide a signed receipt for items valued over $750. Phase 3 After fine-tuning PayPal's business model and increasing its domestic and international penetration on eBay, PayPal started its off-eBay strategy. This was based on developing stronger growth in active users by adding users across multiple platforms, despite the slowdown in on-eBay growth and low-single-digit user growth on the eBay site. A late 2003 reorganization created a new business unit within PayPal—Merchant Services—to provide payment solutions to small and large e-commerce merchants outside the eBay auction community. Starting in the second half of 2004, PayPal Merchant Services unveiled several initiatives to enroll online merchants outside the eBay auction community, including:
Lowering its transaction fee for high-volume merchants from 2.2% to 1.9% (while increasing the monthly transaction volume required to qualify for the lowest fee to $100,000)
Encouraging its users to recruit non-eBay merchants by increasing its referral bonus to a maximum of $1,000 (versus the previous $100 cap)
Persuading credit card gateway providers, including CyberSource and Retail Decisions USA, to include PayPal among their offerings to online merchants
Hiring a new sales force to acquire large merchants such as Dell, Apple's iTunes, and Yahoo! Stores, which hosted thousands of online merchants
Reducing fees for online music purchases and other "micropayments"
Launching PayPal Mobile, which allowed users to make payments using text messaging on their cell phones
Global reach PayPal can be used in more than 200 countries/regions. Different countries have different conditions: Send only (the package allows sending payments only; valid in 97 countries), PayPal Zero (users can register, send funds and withdraw funds in a foreign currency, but cannot hold a PayPal balance; 18 countries), SRW Send–Receive–Withdrawal (users can register, send and receive funds, hold a PayPal balance in the account currency and transfer it to a card as they see fit; 41 countries) and Local Currency (SRW plus the ability to conduct transactions in the local currency; 21 countries). 
China In July 2017, PayPal announced a partnership with Baidu to allow the Chinese firm's 100 million mobile wallet users to make payments to PayPal's 17 million merchants through the Baidu service. Crimea In January 2015, PayPal ceased operations in Crimea in compliance with international sanctions against Russia and Crimea. India In March 2011, PayPal made changes to its user agreement for Indian users to comply with Reserve Bank of India regulations. The per-transaction limit had been set at US$3,000 since October 14, 2011. On July 29, 2013, PayPal increased the per-transaction limit to US$10,000, bringing the limit for India in line with the restrictions imposed by PayPal in most other countries. PayPal has disabled sending and receiving personal payments in India, thus forcing all recipients to pay a transaction fee. PayPal plans to make India an incubation center for the company's employee engagement policies. In 2012, PayPal hired 120 people for its offices in Chennai and Bengaluru. On 8 November 2017, PayPal launched domestic operations under PayPal Payments Private Limited and now provides digital payment solutions for merchants and customers in India. As of 2020, PayPal supports the domestic card system RuPay and is planning to further integrate the Unified Payments Interface (UPI) in collaboration with the National Payments Corporation of India (NPCI). PayPal's largest engineering team outside the US is in India, spread across Bengaluru, Chennai and Hyderabad. Israel and Palestinian Territories PayPal is available in Israel but is not available in the Palestinian territories; Palestinians working in the West Bank or Gaza cannot access it, although Israelis living in settlements in the West Bank can use PayPal. This decision has prompted Palestinian tech companies to seek a policy change from PayPal. Japan In late March 2010, new Japanese banking regulations forced PayPal Japan to suspend the ability of personal account holders registered in Japan to send or receive money between individuals; as a result, such account holders are now subject to PayPal's business fees on all transactions. Pakistan In Pakistan, users can use Xoom, a money transfer service owned by PayPal. In October 2018, Pakistan's government used Xoom to help crowdsource funds for the purpose of building two dams. The government of Pakistan has tried to convince PayPal to launch its service in the country, but PayPal is not ready to introduce its services there. Turkey Eight years after the company first started operating in the country, PayPal ceased operations in Turkey on 6 June 2016 when the Turkish financial regulator BDDK denied it a payment license. The regulators had demanded that PayPal's data centers be located inside Turkey to facilitate compliance with government and court orders to block content and to generate tax revenue. PayPal said that the closure would affect tens of thousands of businesses and hundreds of thousands of consumers in Turkey. Sri Lanka In January 2017, a PayPal team was scheduled to visit Sri Lanka in mid-January to re-establish links, but as of 2021, PayPal still does not operate in the country. PayPal Giving Fund PayPal Giving Fund is a registered charity supported by PayPal that streamlines donations to non-profit organizations. Digital marketing with PayPal PayPal runs marketing activities across various channels and emphasizes that consumers can use it in different ways. 
PayPal's marketing includes TV commercials, outdoor advertising, Facebook, and display advertising. PayPal provides merchants with free analytics on the ways consumers use online payments. Through this free tracking service, PayPal assists merchants in targeting consumers: PayPal's tracking code, which can be installed on a merchant's website, gathers the consumer information. Both PayPal and merchants benefit from the free service. PayPal partners with Synchrony Financial to provide the PayPal Cashback Mastercard, which offers 2% cash back to customers who use the card to make purchases both online and in physical stores; the cashback offer helps attract potential customers. Apple allows PayPal as a mode of payment for the App Store, Apple Music, iTunes, and Apple Books; customers can use PayPal for these purchases by linking their PayPal accounts to their Apple IDs. PayPal can increase usage of Apple platforms and, in addition, receives revenue from Apple services, especially from the App Store. Regulation Thiel, a founder of PayPal, has stated that PayPal is not a bank because it does not engage in fractional-reserve banking. Rather, PayPal's funds that have not been disbursed are kept in commercial interest-bearing checking accounts. In the United States, PayPal is licensed as a money transmitter on a state-by-state basis, but state laws vary, as do their definitions of banks, narrow banks, money services businesses, and money transmitters. Although PayPal is not classified as a bank, the company is subject to some of the rules and regulations governing the financial industry, including Regulation E consumer protections and the USA PATRIOT Act. The most analogous regulatory source of law for PayPal transactions comes from peer-to-peer (P2P) payments using credit and debit cards. Ordinarily, a credit card transaction, specifically the relationship between the issuing bank and the cardholder, is governed by the Truth in Lending Act (TILA) 15 U.S.C. §§ 1601-1667f as implemented by Regulation Z, 12 C.F.R. 226 (TILA/Z). TILA/Z requires specific procedures for billing errors and dispute resolution, and limits cardholder liability for unauthorized charges. Similarly, the legal relationship between a debit cardholder and the issuing bank is regulated by the Electronic Funds Transfer Act (EFTA) 15 U.S.C. §§ 1693-1693r, as implemented by Regulation E, 12 C.F.R. 205 (EFTA/E). EFTA/E is directed at consumer protection and provides strict error resolution procedures. However, because PayPal is a payment intermediary and not otherwise regulated directly, TILA/Z and EFTA/E do not operate exactly as written once the credit/debit card transaction occurs via PayPal. In practice, unless a PayPal transaction is funded with a credit card, the consumer has no recourse in the event of fraud by the seller. In 2008, PayPal Europe was granted a Luxembourg banking license, which, under European Union law, allows it to conduct banking business throughout the EU. It is therefore regulated as a bank by Luxembourg's banking supervisory authority, the Commission de Surveillance du Secteur Financier (CSSF). All of the company's European accounts were transferred to PayPal's bank in Luxembourg in July 2007. Prior to this move, PayPal had been registered in the United Kingdom as PayPal (Europe) Ltd, an entity which was licensed as an Electronic Money Issuer with the UK's Financial Services Authority (FSA) from 2004. This ceased in 2007, when the company moved to Luxembourg. 
In India, as of January 2010, PayPal had no cross-border money transfer authorization. In The New York Times article "India's Central Bank Stops Some PayPal Services", Reserve Bank of India spokesperson Alpana Killawalla stated: "Providers of cross-border money transfer service need prior authorization from the Reserve Bank under the Payment and Settlement Systems Act, PayPal does not have our authorization." PayPal is not listed in the "Certificates of Authorisation issued by the Reserve Bank of India under the Payment and Settlement Systems Act, 2007 for Setting up and Operating Payment System in India". PaisaPay, also owned by eBay, is an Indian sister service to PayPal; it makes possible payments from abroad by PayPal account holders to Indian sellers on eBay.in. In Australia, PayPal is licensed as an authorised deposit-taking institution (ADI) and is thus subject to Australian banking laws and regulations. In Singapore, PayPal is the holder of a stored value facility that does not require the approval of the Monetary Authority of Singapore. Safety and protection policies The PayPal Buyer Protection Policy states that the customer may file a buyer complaint if he or she did not receive an item or if the item he or she purchased was significantly not as described. The customer can open a dispute within 180 days from the date of payment and escalate it to a claim within 20 days from opening the dispute. If the buyer used a credit card, he or she might get a refund via chargeback from his or her credit-card company. However, in the UK, where such a purchaser is entitled to specific statutory protections under Section 75 of the Consumer Credit Act 1974 (the credit card company is a second party to the purchase and is therefore equally liable in law if the other party defaults or goes into liquidation), the purchaser loses this legal protection if the card payment is processed via PayPal. Also, the position of the UK's Financial Ombudsman Service is that section 75 protection does not apply where PayPal or any eMoney service becomes involved in the credit card transaction. This leaves consumers with no recourse to pursue their complaint with the Financial Ombudsman Service. They would only have recourse in the courts, but in any event they cannot, because PayPal is incorporated in Luxembourg and, since the UK has left the EU, is no longer within the jurisdiction of any UK courts. The key issues that determine the applicability of section 75 are identified clearly in Office of Fair Trading v Lloyds TSB Bank Plc and others [2006] EWCA Civ 268 and Bank of Scotland v Alfred Truman (a firm) [2005] EWHC 583 (QB). These cases are legal authority that section 75 protection does exist where one has paid on a credit card for a product via an eMoney service. According to PayPal, it protects sellers in a limited fashion via the Seller Protection Policy. In general, the Seller Protection Policy is intended to protect the seller from certain kinds of chargebacks or complaints if the seller meets certain conditions, including proof of delivery to the buyer. PayPal states the Seller Protection Policy is "designed to protect sellers against claims by buyers of unauthorized payments and against claims of non-receipt of any merchandise". The policy includes a list of "Exclusions" which itself includes "Intangible goods", "Claims for receipt of goods 'not as described'", and "Total reversals over the annual limit". 
There are also other restrictions in terms of the sale itself, the payment method and the destination country the item is shipped to (simply having a tracking mechanism is not sufficient to guarantee the Seller Protection Policy is in effect). The PayPal Seller Protection Policy does not provide the additional consumer protection afforded by UK consumer legislation (most notably the Consumer Rights Act 2015) and in addition, it cannot be enforced in the Courts because PayPal operates from Luxembourg, outside all three of the UK legal jurisdictions. Security Security token In early 2006, PayPal introduced an optional security key as an additional precaution against fraud. A user account tied to a security key has a modified login process. Account-holders enter their login ID and password as normal but are then prompted to enter a six-digit code provided by a credit card sized hardware security key or a text message sent to the account holder's mobile phone. For convenience, users may append the code generated by the hardware key to their password in the login screen. This way they are not prompted for it on another page. This method is required for some services, such as when using PayPal through the eBay application on iPhone. This two-factor authentication is intended to make it difficult for an account to be compromised by a malicious third party without access to the physical security key, although it does not prevent the so-called Man in the Browser (MITB) attacks. However, the user (or malicious third party) can alternatively authenticate by providing the credit card or bank account number listed on their account. Thus the PayPal implementation does not offer the security of true two-factor authentication. MTAN It is also possible to use a mobile phone to receive an mTAN (Mobile Transaction Authentication Number) via SMS. Use of a security code that is sent to the account holder's mobile phone is currently free. Fraud As early as 2001, PayPal had substantial problems with online fraud, especially international hackers who were hacking into PayPal accounts and transferring small amounts of money out of multiple accounts. Standard solutions for merchant and banking fraud might use government criminal sanctions to pursue the fraudsters. But with PayPal losing millions of dollars each month to fraud while experiencing difficulties with using the FBI to pursue cases of international fraud, PayPal developed a fraud monitoring system to detect potentially fraudulent transactions. This development of fraud monitoring software at PayPal led Peter Thiel to create Palantir, a big-data security company whose original mission was to "reduce terrorism while preserving civil liberties." 150,000 PayPal cards frozen In 2015, 150,000 Spanish card holders had their funds frozen in an apparent fraud case involving a PayPal service provider, Younique Money, which was the de facto administrator of the cards. Previously, PayPal had charged €15 to all its card users without authorization (150,000 users). As of March 2015 most funds had not been returned. Criticism and controversies In 2003, PayPal voluntarily ceased serving as a payment intermediary between gambling websites and their online customers. At the time of this cessation, it was the largest payment processor for online gambling transactions. In 2010, PayPal resumed accepting such transactions, but only in those countries where online gambling is legal, and only for sites which are properly licensed to operate in said jurisdictions. 
Since at least 2005, PayPal has maintained an Acceptable Use policy which disallows "transactions involving ... items that are considered obscene ... [or] certain sexually oriented materials or services." Their enforcement of this policy has been a constant source of controversy between PayPal and people within or related to the sex industry. In 2014, PayPal notified subscription service provider Patreon that it was moving to cease integration with Patreon as a platform as the result of Patreon permitting "adult content" on their platform. Patreon subsequently removed access to PayPal services for creators who produced sexual content. If an account is subject to fraud or unauthorized use, PayPal puts the "Limited Access" designation on the account. PayPal has had several notable cases in which the company has frozen the account of users such as Richard Kyanka, owner of the website Something Awful, in September 2005, Cryptome in March 2010, or April Winchell, the owner of Regretsy, in December 2011. The account was reinstated, and PayPal apologized and donated to her cause. In September 2010, PayPal froze the account of a Minecraft developer, Markus Persson. Persson stated publicly that he had not received a clear explanation of why the account was frozen, and that PayPal was threatening to keep the money if they found anything wrong. His account contained around €600,000. PayPal's partner MasterCard ceased taking donations to WikiLeaks in 2010, and PayPal also suspended, and later permanently restricted, payments to the website after the U.S. State Department deemed WikiLeaks activities illegal. Online supporters and activists retaliated by subjecting PayPal and MasterCard, along with other companies, to coordinated cyber attacks. In February 2011, PayPal unbanned the account of a website that supports Iraq War resisters after it had enough information to fulfill its know your customer guidelines. The Chelsea Manning Support Network claimed the backdown was a reaction to a petition to the company to reinstate the account. In May 2013, PayPal declined to pay a reward offered in its Bug Bounty Program to a 17-year-old German student who had reported a cross-site scripting flaw on its site. The company wrote that the vulnerability had been previously reported, and chastised the youth for disclosing the issue to the public, but, uniquely, sent him a "Letter of recognition" for the discovery. In August 2013, entrepreneurs who had used PayPal to collect the funds they raised on crowdfunding platforms like Kickstarter and Indiegogo reported difficulty in being able to withdraw the money. Victims included Ouya, GlassUp (a rival to Google Glass), and Mailpile. In May 2014, PayPal blocked the account of a Russian human rights organisation "RosUznik", which supported political prisoners arrested at Bolotnaya Square. As of January 2015, a class-action lawsuit against PayPal has been filed in Israel, claiming that they arbitrarily froze accounts and held funds for up to 180 days without paying interest and thereby directly profited from it. The lawsuit requests that PayPal be declared a monopoly and thus regulated accordingly. In April 2015, The Guardian reported that PayPal had blocked the account of London-based human rights group Justice for Iran. In May 2015, PayPal blocked an account intended to raise money for the distribution of Boris Nemtsov's report "Putin. War". 
The explanation by PayPal was that "PayPal does not offer the opportunity to use its system for collecting funds to finance the activities of political parties or for political aims in Russia", though PayPal's Acceptable Use Policy does not mention financing for political goals. The non-governmental organization Freedom House issued a statement that "PayPal should immediately lift this ban, to help, rather than hinder, press freedom in Russia." By 2016, ConsumerAffairs had received over 1,200 consumer reviews of PayPal, resulting in an overall satisfaction rating of one star out of five for the company. Consumers have also launched numerous anti-PayPal Facebook pages and Twitter accounts to air their complaints. In February 2017, PayPal froze the account of News Media Canada, a Canadian trade association, in response to a payment from The Reminder, a Flin Flon, Manitoba community newspaper, intended to cover the fee for the Reminder's submission of articles for consideration in a nationwide journalism contest run by News Media Canada, including one discussing Syrian refugees. PayPal cited United States regulations as a reason for flagging the transaction between Canadian entities. In September 2018, PayPal banned radio host Alex Jones and his website InfoWars, claiming that his site had content that was hateful and discriminatory against certain religious groups. PayPal discontinued payments to Pornhub models on November 14, 2019, alleging that "Pornhub has made certain business payments through PayPal without seeking our permission". Pornhub criticized the decision as one that affected "over a hundred thousand performers who rely on them for their livelihoods", and steered its payees toward other payment options. In September 2020, PayPal issued new terms of service which introduced a fee for inactive accounts in 19 countries. PayPal sent its clients an e-mail about the updated terms, but did not mention that it was introducing such a fee. PayPal has faced criticism over its policies for changing the name on a user's account. Critics cited the complicated process involved in changing names, which requires legal and government-issued identification. This system has been seen as affecting transgender users, who have trouble using their preferred names and pronouns. In July 2021, PayPal announced a plan to collaborate with the Anti-Defamation League's Center on Extremism, the League of United Latin American Citizens, and several other nonprofits to analyze its users' transactions in order to investigate the finances of extremist and hate groups in the United States and share the results with law enforcement, policymakers, and other financial corporations; the ADL's CEO, Jonathan Greenblatt, stated that this initiative is meant to help "mitigat[e] extremist threats" and to "help disrupt those activities." Litigation In March 2002, two PayPal account holders separately sued the company for alleged violations of the Electronic Funds Transfer Act (EFTA) and California law. Most of the allegations concerned PayPal's dispute resolution procedures. The two lawsuits were merged into one class-action lawsuit (In re: PayPal litigation). An informal settlement was reached in November 2003, and a formal settlement was signed on June 11, 2004. The settlement required PayPal to change its business practices (including changing its dispute resolution procedures to make them EFTA-compliant) and to make a US$9.25 million payment to members of the class. PayPal denied any wrongdoing. 
In June 2003, Stamps.com filed a lawsuit against PayPal and eBay claiming breach of contract, breach of the implied covenants of good faith and fair dealing, and interference with contract, among other claims. In a 2002 license agreement, Stamps.com and PayPal agreed that Stamps.com technology would be made available to allow PayPal users to buy and print postage online from their PayPal accounts. Stamps.com claimed that PayPal did not live up to its contractual obligations and accused eBay of interfering with PayPal and Stamps.com's agreement, hence Stamps.com's reasoning for including eBay in the suit. Craig Comb and two others filed a class action against PayPal in Craig Comb, et al. v. PayPal, Inc. They alleged illegal misappropriation of customer accounts and detailed their customer service experiences, including PayPal freezing deposited funds for up to 180 days until disputes were resolved. PayPal argued that the plaintiffs were required to arbitrate their disputes under the American Arbitration Association's Commercial Arbitration Rules. The court ruled against PayPal, stating that "the User Agreement and arbitration clause are substantively unconscionable under California law." PayPal agreed to pay $9.25 million as a result of the case. In September 2002, Bank One Corporation sued PayPal for allegedly infringing its cardless payment system patents. The following year, PayPal countersued, claiming that Bank One's online bill-payment system infringed PayPal's online bill-payment patent, issued in 1998. The two companies agreed on a settlement in October 2003. In November 2003, AT&T Corporation filed suit against eBay and PayPal claiming that their payment systems infringed an AT&T patent, filed in 1991 and granted in 1994. The case was settled out of court the following month, with the terms of the settlement undisclosed. In June 2011, PayPal and Israel Credit Cards-Cal Ltd. were sued for NIS 16 million. The claimants accused PayPal of deliberately failing to notify its customers that ICC-Cal was illegally charging them for currency conversion fees. A class-action lawsuit filed in 2010, in which the plaintiffs contested PayPal's "holds" on funds, was settled in 2016. PayPal has proposed a settlement of $3.2 million in Zepeda v. PayPal, which has yet to be ratified. As part of the settlement, the company agreed to change some of its policies. CFPB consent On 21 May 2015, PayPal agreed that PayPal Credit would pay a $25 million fine to settle a complaint filed in federal court by the Consumer Financial Protection Bureau. The complaint alleged that consumers using PayPal were signed up for PayPal Credit accounts without their knowledge or consent, that PayPal had promised discounts and payment options the consumers never received, and that users trying to sign up for regular, non-credit PayPal accounts were signed up for credit accounts instead. The complaint was filed in the United States District Court for the District of Maryland, which ordered PayPal Credit to refund $15 million to consumers and to pay a $10 million fine. 
See also Billpoint E-commerce payment system Electronic money Interchange fee List of online payment service providers Micropayment Payment service provider PayPal Mafia Paytm Google Pay Stripe (company) References External links 2002 mergers and acquisitions 2002 initial public offerings Companies based in San Jose, California Corporate spin-offs EBay Electronic funds transfer Financial services companies established in 1998 Financial technology companies Foreign exchange companies Mobile payments Online payments Payment systems Cryptocurrencies Payment service providers American companies established in 1998 1998 establishments in California Internet properties established in 1998
349811
https://en.wikipedia.org/wiki/SPIM
SPIM
SPIM is a MIPS processor simulator, designed to run assembly language code for this architecture. The program simulates R2000 and R3000 processors, and was written by James R. Larus while a professor at the University of Wisconsin–Madison. The MIPS machine language is often taught in college-level assembly courses, especially those using the textbook Computer Organization and Design: The Hardware/Software Interface by David A. Patterson and John L. Hennessy. The name of the simulator is a reversal of the letters "MIPS". SPIM simulators are available for Windows (PCSpim), Mac OS X and Unix/Linux-based (xspim) operating systems. As of release 8.0 in January 2010, the simulator is licensed under the standard BSD license. In January 2011, the major release version 9.0 introduced QtSpim, which has a new user interface built on the cross-platform Qt UI framework and runs on Windows, Linux, and macOS. With this version, the project was also moved to SourceForge for better maintenance. Precompiled versions of QtSpim for Linux (32-bit), Windows, and Mac OS X, as well as PCSpim for Windows, are provided. The SPIM operating system The SPIM simulator comes with a rudimentary operating system, which gives the programmer convenient access to commonly used functions. Such functions are invoked by the syscall instruction; the OS then acts depending on the values of specific registers, in particular the service code placed in register $v0 (a short example is given at the end of this article). The SPIM OS expects a label named main as the handover point from the OS preamble. SPIM Alternatives/Competitors MARS (MIPS Assembler and Runtime Simulator) is a Java-based IDE for the MIPS assembly programming language and an alternative to SPIM. Its initial release was in 2005, and it is under active development. Imperas is a suite of embedded software development tools for the MIPS architecture which uses just-in-time compilation emulation and simulation technology. The simulator was initially released in 2008 and is under active development. There are over 30 open-source models of MIPS 32-bit and 64-bit cores. Another alternative to SPIM for educational purposes is the CREATOR simulator. CREATOR is portable (it can be executed in current web browsers) and allows students to learn several assembly languages of different processors at the same time (CREATOR includes examples of MIPS32 and RISC-V instructions). See also GXemul (formerly known as mips64emul), another MIPS emulator. Unlike SPIM, which focuses on emulating a bare MIPS implementation, GXemul is written to emulate full computer systems based on MIPS microprocessors—for example, GXemul can emulate a DECstation 5000 Model 200 workstation. OVPsim also emulates MIPS; all of its MIPS models are verified by MIPS Technologies. QEMU also emulates MIPS MIPS architecture References External links Project site at SourceForge Former official site at Larus's website Web version of SPIM Introductory slides on MIPS programming using SPIM An introduction to SPIM simulator Emulation software MIPS architecture Software using the BSD license
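To illustrate the SPIM OS convention described above, the following minimal program is a sketch using SPIM's standard service codes (4 = print_string, 1 = print_int, 10 = exit), selected through register $v0; the label main is the entry point expected by SPIM's startup code.

            .data
    msg:    .asciiz "The answer is "   # string printed by the print_string service
            .text
            .globl main
    main:
            li   $v0, 4                # service 4: print_string
            la   $a0, msg              # $a0 = address of the string
            syscall
            li   $v0, 1                # service 1: print_int
            li   $a0, 42               # $a0 = integer to print
            syscall
            li   $v0, 10               # service 10: exit
            syscall

Loading this file in QtSpim or xspim and running it prints "The answer is 42" on the simulator console.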
20297
https://en.wikipedia.org/wiki/MOS%20Technology%206502
MOS Technology 6502
The MOS Technology 6502 (typically pronounced "sixty-five-oh-two" or "six-five-oh-two") is an 8-bit microprocessor that was designed by a small team led by Chuck Peddle for MOS Technology. The design team had formerly worked at Motorola on the Motorola 6800 project; the 6502 is essentially a simplified, less expensive and faster version of that design. When it was introduced in 1975, the 6502 was the least expensive microprocessor on the market by a considerable margin. It initially sold for less than one-sixth the cost of competing designs from larger companies, such as the 6800 or Intel 8080. Its introduction caused rapid decreases in pricing across the entire processor market. Along with the Zilog Z80, it sparked a series of projects that resulted in the home computer revolution of the early 1980s. Popular video game consoles and computers, such as the Atari 2600, Atari 8-bit family, Apple II, Nintendo Entertainment System, Commodore 64, Atari Lynx, BBC Micro and others, use the 6502 or variations of the basic design. Soon after the 6502's introduction, MOS Technology was purchased outright by Commodore International, who continued to sell the microprocessor and licenses to other manufacturers. In the early days of the 6502, it was second-sourced by Rockwell and Synertek, and later licensed to other companies. In 1981, the Western Design Center started development of a CMOS version, the 65C02. This continues to be widely used in embedded systems, with estimated production volumes in the hundreds of millions. History and use Origins at Motorola The 6502 was designed by many of the same engineers that had designed the Motorola 6800 microprocessor family. Motorola started the 6800 microprocessor project in 1971 with Tom Bennett as the main architect. The chip layout began in late 1972, the first 6800 chips were fabricated in February 1974 and the full family was officially released in November 1974. John Buchanan was the designer of the 6800 chip and Rod Orgill, who later did the 6501, assisted Buchanan with circuit analyses and chip layout. Bill Mensch joined Motorola in June 1971 after graduating from the University of Arizona (at age 26). His first assignment was helping define the peripheral ICs for the 6800 family and later he was the principal designer of the 6820 Peripheral Interface Adapter (PIA). Motorola's engineers could run analog and digital simulations on an IBM 370-165 mainframe computer. Bennett hired Chuck Peddle in 1973 to do architectural support work on the 6800 family products already in progress. He contributed in many areas, including the design of the 6850 ACIA (serial interface). Motorola's target customers were established electronics companies such as Hewlett-Packard, Tektronix, TRW, and Chrysler. In May 1972, Motorola's engineers began visiting select customers and sharing the details of their proposed 8-bit microprocessor system with ROM, RAM, parallel and serial interfaces. In early 1974, they provided engineering samples of the chips so that customers could prototype their designs. Motorola's "total product family" strategy did not focus on the price of the microprocessor, but on reducing the customer's total design cost. They offered development software on a timeshare computer, the "EXORciser" debugging system, onsite training and field application engineer support. Both Intel and Motorola had initially announced a $360 price for a single microprocessor. The actual price for production quantities was much less. 
Motorola offered a design kit containing the 6800 with six support chips for $300. Peddle, who would accompany the salespeople on customer visits, found that customers were put off by the high cost of the microprocessor chips. At the same time, these visits invariably resulted in the engineers he presented to producing lists of required instructions that were much smaller than "all these fancy instructions" that had been included in the 6800. Peddle and other team members started outlining the design of an improved feature, reduced size microprocessor. At that time, Motorola's new semiconductor fabrication facility in Austin, Texas, was having difficulty producing MOS chips, and mid-1974 was the beginning of a year-long recession in the semiconductor industry. Also, many of the Mesa, Arizona employees were displeased with the upcoming relocation to Austin, Texas. Motorola's Semiconductor Products Division management was overwhelmed with problems and showed no interest in Peddle's low-cost microprocessor proposal. Eventually Peddle was given an official letter telling him to stop working on the system. Peddle responded to the order by informing Motorola that the letter represented an official declaration of "project abandonment", and as such, the intellectual property he had developed to that point was now his. In a November 1975 interview, Motorola's Chairman, Robert Galvin, ultimately agreed that Peddle's concept was a good one and that the division missed an opportunity, "We did not choose the right leaders in the Semiconductor Products division." The division was reorganized and the management replaced. The new group vice-president John Welty said, "The semiconductor sales organization lost its sensitivity to customer needs and couldn't make speedy decisions." Moving to MOS Technology Peddle began looking outside Motorola for a source of funding for this new project. He initially approached Mostek CEO L. J. Sevin, but he declined. Sevin later admitted this was because he was afraid Motorola would sue them. While Peddle was visiting Ford Motor Company on one of his sales trips, Bob Johnson, later head of Ford's engine automation division, mentioned that their former colleague John Paivinen had moved to General Instrument and taught himself semiconductor design. Paivinen then formed MOS Technology in Valley Forge, Pennsylvania in 1969 with two other executives from General Instrument, Mort Jaffe and Don McLaughlin. Allen-Bradley, a supplier of electronic components and industrial controls, acquired a majority interest in 1970. The company designed and fabricated custom ICs for customers and had developed a line of calculator chips. After the Mostek efforts fell through, Peddle approached Paivinen, who "immediately got it". On 19 August 1974, Chuck Peddle, Bill Mensch, Rod Orgill, Harry Bawcom, Ray Hirt, Terry Holdt, and Wil Mathys left Motorola to join MOS. Mike Janes joined later. Of the seventeen chip designers and layout people on the 6800 team, eight left. The goal of the team was to design and produce a low-cost microprocessor for embedded applications and to target as wide as possible a customer base. This would be possible only if the microprocessor was low cost, and the team set the price goal at $5 in volume. Mensch later stated the goal was not the processor price itself, but to create a set of chips that could sell at $20 to compete with the recently-introduced Intel 4040 that sold for $29 in a similar complete chipset. 
Chips are produced by printing multiple copies of the chip design on the surface of a "wafer", a thin disk of highly pure silicon. Smaller chips can be printed in greater numbers on the same wafer, decreasing their relative price. Additionally, wafers always include some number of tiny physical defects that are scattered across the surface. Any chip printed over one of these defects will fail and has to be discarded. Smaller chips mean any single copy is less likely to be printed on a defect. For both of these reasons, the cost of the final product is strongly dependent on the size of the chip design. The original 6800 chips came out larger than intended, with the completed layout occupying an area of 29.0 mm2. For the new design, the cost goal demanded a target area of 16.6 mm2. Several new techniques would be needed to hit this goal. Moving to NMOS Two significant advances arrived in the market just as the 6502 was being designed, and both provided major cost reductions. The first was the move to depletion-load NMOS. The 6800 used an early NMOS process that required three supply voltages, but one of the chip's features was an onboard voltage doubler that allowed a single +5 V supply to be used for +5, −5 and +12 V internally, as opposed to other chips of the era like the Intel 8080 that required three separate supply pins. While this feature reduced the complexity of the power supply and pin layout, it still required separate power rails to the various gates on the chip, driving up complexity and size. By moving to the new depletion-load design, a single +5 V supply was all that was needed, eliminating all of this complexity. A further practical advantage was that the clock signal for earlier CPUs had to be strong enough to survive all the dissipation as it traveled through the circuits, which almost always required a separate external chip that could supply a strong enough signal. With the reduced power requirements of NMOS, the clock could be moved onto the chip, simplifying the overall computer design. These changes greatly reduced complexity and the cost of implementing a complete system. Another change that was taking place was the introduction of projection masking. Previously, chips were patterned onto the surface of the wafer by placing a mask on the surface of the wafer and then shining a bright light on it. The masks often picked up tiny bits of dirt or photoresist as they were lifted off the chip, causing flaws in those locations on any subsequent masking. With complex designs like CPUs, 5 or 6 such masking steps would be used, and the chance that at least one of these steps would introduce a flaw was very high. In most cases, 90% of such designs were flawed, resulting in a 10% yield. The price of the working examples had to cover the production cost of the 90% that were thrown away. In 1973, Perkin-Elmer introduced the Micralign system, which projected an image of the mask on the wafer instead of requiring direct contact. Masks no longer picked up dirt from the wafers and lasted on the order of 100,000 uses rather than 10. This eliminated step-to-step failures and the high flaw rates formerly seen on complex designs. Yields on CPUs immediately jumped from 10% to 60 or 70%. This meant the price of the CPU declined roughly the same amount and the microprocessor suddenly became a commodity device. MOS Technology's existing fabrication lines were based on the older PMOS technology; they had not yet begun to work with NMOS when the team arrived. 
Paivinen promised to have an NMOS line up and running in time to begin the production of the new CPU. He delivered on the promise, the new line was ready by June 1975. Design notes Chuck Peddle, Rod Orgill, and Wil Mathys designed the initial architecture of the new processors. A September 1975 article in EDN magazine gives this summary of the design: The MOS Technology 650X family represents a conscious attempt of eight former Motorola employees who worked on the development of the 6800 system to put out a part that would replace and outperform the 6800, yet undersell it. With the benefit of hindsight gained on the 6800 project, the MOS Technology team headed by Chuck Peddle, made the following architectural changes in the Motorola CPU… The main change in terms of chip size was the elimination of the tri-state drivers from the address bus outputs. This had been included in the 6800 to allow it to work with other chips in direct memory access (DMA) and co-processing roles, at the cost of significant die space. In practice, using such a system required the other devices to be similarly complex, and designers instead tended to use off-chip systems to coordinate such access. The 6502 simply removed this feature, in keeping with its design as an inexpensive controller being used for specific tasks and communicating with simple devices. Peddle suggested that anyone that actually required this style of access could implement it with a single 74158. The next major difference was to simplify the registers. To start with, one of the two accumulators was removed. General-purpose registers like accumulators have to be accessed by many parts of the instruction decoder, and thus require significant amounts of wiring to move data to and from their storage. Two accumulators makes many coding tasks easier, but costs the chip design itself significant complexity. Further savings were made by reducing the stack register from 16 to 8 bits, meaning that the stack could only be 256 bytes long, which was enough for its intended role as a microcontroller. The 16-bit IX index register was split in two, becoming X and Y. More importantly, the style of access changed; in the 6800, IX held a 16-bit address, which was offset by an 8-bit number supplied with the instruction, the two were added to produce the final address. In the 6502 (and most other designs), the 16-bit base address was stored in the instruction, and the X or Y was added to it. Finally, the instruction set was simplified, freeing up room in the decoder and control logic. Of the original 72 instructions in the 6800, 56 were left. Among those removed were any instruction that moved data between the 6800's two accumulators, as well as a number of branch instructions inspired by the PDP-11, like the ability to directly compare two numeric values. The 6502 used a simpler system that handled comparisons by performing math on the accumulator and then examining result flags. The chip's high-level design had to be turned into drawings of transistors and interconnects. At MOS Technology, the "layout" was a very manual process done with color pencils and vellum paper. The layout consisted of thousands of polygon shapes on six different drawings; one for each layer of the fabrication process. Given the size limits, the entire chip design had to be constantly considered. Mensch and Paivinen worked on the instruction decoder while Mensch, Peddle and Orgill worked on the ALU and registers. 
A further advance, developed at a party, was a way to share some of the internal wiring to allow the ALU to be reduced in size. In spite of their best efforts, the final design ended up being 5 mils too wide; the first 6502 chips had a die area of 19.8 mm2. The rotate right instruction (ROR) did not work in the first silicon, so the instruction was temporarily omitted from the published documents, but the next iteration of the design shrank the chip and corrected the rotate right instruction, which was then included in revised documentation. Introducing the 6501 and 6502 MOS would introduce two microprocessors based on the same underlying design: the 6501 would plug into the same socket as the Motorola 6800, while the 6502 rearranged the pinout to support an on-chip clock oscillator. Both would work with other support chips designed for the 6800. They would not run 6800 software because they had a different instruction set, different registers, and mostly different addressing modes. Rod Orgill was responsible for the 6501 design; he had assisted John Buchanan at Motorola on the 6800. Bill Mensch did the 6502; he was the designer of the 6820 Peripheral Interface Adapter (PIA) at Motorola. Harry Bawcom, Mike Janes and Sydney-Anne Holt helped with the layout. MOS Technology's microprocessor introduction was different from the traditional months-long product launch. The first run of a new integrated circuit is normally used for internal testing and shared with select customers as "engineering samples". These chips often have a minor design defect or two that will be corrected before production begins. Chuck Peddle's goal was to sell the first-run 6501 and 6502 chips to the attendees at the Wescon trade show in San Francisco beginning on September 16, 1975. Peddle was a very effective spokesman and the MOS Technology microprocessors were extensively covered in the trade press. One of the earliest stories was a full-page piece on the MCS6501 and MCS6502 microprocessors in the July 24, 1975 issue of Electronics magazine. Stories also ran in EE Times (August 24, 1975), EDN (September 20, 1975), Electronic News (November 3, 1975), Byte (November 1975) and Microcomputer Digest (November 1975). Advertisements for the 6501 appeared in several publications the first week of August 1975. The 6501 would be for sale at Wescon for $20 each. In September 1975, the advertisements included both the 6501 and the 6502 microprocessors; the 6502 would cost only $25. When MOS Technology arrived at Wescon, they found that exhibitors were not permitted to sell anything on the show floor. They rented the MacArthur Suite at the St. Francis Hotel and directed customers there to purchase the processors. At the suite, the processors were stored in large jars to imply that the chips were in production and readily available. The customers did not know that the bottom half of each jar contained non-functional chips. The chips were $20 and $25 while the documentation package was an additional $10. Users were encouraged to make photocopies of the documents, an inexpensive way for MOS Technology to distribute product information. The preliminary data sheets listed just 55 instructions, excluding the rotate right (ROR) instruction, which did not work correctly on these early chips. The reviews in Byte and EDN noted the lack of the ROR instruction. The next revision of the layout fixed this problem and the May 1976 datasheet listed 56 instructions. 
Peddle wanted every interested engineer and hobbyist to have access to the chips and documentation; other semiconductor companies only wanted to deal with "serious" customers. For example, Signetics was introducing the 2650 microprocessor and its advertisements asked readers to write for information on their company letterhead. Motorola lawsuit The 6501/6502 introduction in print and at Wescon was an enormous success. The downside was that the extensive press coverage got Motorola's attention. In October 1975, Motorola reduced the price of a single 6800 microprocessor from $175 to $69. The $300 system design kit was reduced to $150 and it now came with a printed circuit board. On November 3, 1975, Motorola sought an injunction in Federal Court to stop MOS Technology from making and selling microprocessor products. They also filed a lawsuit claiming patent infringement and misappropriation of trade secrets. Motorola claimed that seven former employees joined MOS Technology to create that company's microprocessor products. Motorola was a billion-dollar company with a plausible case and lawyers. On October 30, 1974, Motorola had filed numerous patent applications on the microprocessor family and was granted twenty-five patents. The first was in June 1976 and the second was to Bill Mensch on July 6, 1976, for the 6820 PIA chip layout. These patents covered the 6800 bus and how the peripheral chips interfaced with the microprocessor. Motorola began making transistors in 1950 and had a portfolio of semiconductor patents. Allen-Bradley decided not to fight this case and sold their interest in MOS Technology back to the founders. Four of the former Motorola engineers were named in the suit: Chuck Peddle, Will Mathys, Bill Mensch and Rod Orgill. All were named inventors in the 6800 patent applications. During the discovery process, Motorola found that one engineer, Mike Janes, had ignored Peddle's instructions and brought his 6800 design documents to MOS Technology. In March 1976, the now independent MOS Technology was running out of money and had to settle the case. They agreed to drop the 6501 processor, pay Motorola $200,000 and return the documents that Motorola contended were confidential. Both companies agreed to cross-license microprocessor patents. That May, Motorola dropped the price of a single 6800 microprocessor to $35. By November, Commodore had acquired MOS Technology. Computers and games With legal troubles behind them, MOS was still left with the problem of getting developers to try their processor, prompting Chuck Peddle to design the MDT-650 ("microcomputer development terminal") single-board computer. Another group inside the company designed the KIM-1, which was sold semi-complete and could be turned into a usable system with the addition of a 3rd party computer terminal and compact cassette drive. Much to their amazement, the KIM-1 sold well to hobbyists and tinkerers, as well as to the engineers to which it had been targeted. The related Rockwell AIM 65 control/training/development system also did well. The software in the AIM 65 was based on that in the MDT. Another roughly similar product was the Synertek SYM-1. One of the first "public" uses for the design was the Apple I microcomputer, introduced in 1976. The 6502 was next used in the Commodore PET and the Apple II, both released in 1977. 
It was later used in the Atari 8-bit family and Acorn Atom home computers, the BBC Micro, Commodore VIC-20 and other designs both for home computers and business, such as Ohio Scientific and Oric. The 6510, a direct successor of the 6502 with a digital I/O port and a tri-state address bus, was the CPU utilized in the best-selling Commodore 64 home computer. 6502 or 6502-variant CPUs were used in all of Commodore's floppy disk drives for all of their 8-bit computers, from the PET line (some of which had two 6502-based CPUs) through the Commodore 128D, including the Commodore 64, and in all of Atari's disk drives for all of their 8-bit computer line, from the 400/800 through the XEGS. Another important use of the 6500 family was in video games. The first to make use of the processor design was the Atari VCS, later renamed the Atari 2600. The VCS used a variant of the 6502 called the 6507, which had fewer pins and, as a result, could address only 8 KB of memory. Millions of the Atari consoles would be sold, each with a MOS processor. Another significant use was by the Nintendo Entertainment System and Famicom. The 6502 used in the NES was a second-source version by Ricoh, a partial system-on-a-chip, which lacked the binary-coded decimal mode but added 22 memory-mapped registers and on-die hardware for sound generation, joypad reading, and sprite list DMA. Called 2A03 in NTSC consoles and 2A07 in PAL consoles (the difference being the memory divider ratio and a lookup table for audio sample rates), this processor was produced exclusively for Nintendo. The Atari Lynx used a 4 MHz version of the chip, the 65SC02. In the 1980s, the popular electronics magazine Elektor/Elektuur used the processor in its Junior Computer microprocessor development board. Technical description The 6502 is a little-endian 8-bit processor with a 16-bit address bus. The original versions were fabricated using NMOS process technology, with a total die area of 16.6 mm2. The internal logic runs at the same speed as the external clock rate, but despite the low clock speeds (typically in the neighborhood of 1 to 2 MHz), the 6502's performance was competitive with other contemporary CPUs using significantly faster clocks. This is partly due to a simple state machine implemented by combinational (clockless) logic to a greater extent than in many other designs; the two-phase clock (supplying two synchronizations per cycle) could thereby control the machine cycle directly. Typical instructions might take half as many cycles to complete on the 6502 as on contemporary designs. Like most simple CPUs of the era, the dynamic NMOS 6502 chip is not sequenced by a microcode ROM but uses a PLA (which occupied about 15% of the chip area) for instruction decoding and sequencing. As in most 8-bit microprocessors, the chip does some limited overlapping of fetching and execution. The low clock frequency moderated the speed requirement of memory and peripherals attached to the CPU, as only about 50% of the clock cycle was available for memory access (due to the asynchronous design, this fraction varied strongly among chip versions). This was critical at a time when affordable memory had access times of several hundred nanoseconds. Because the chip only accessed memory during certain parts of the clock cycle, and those cycles were indicated by the PHI2-low clock-out pin, other chips in a system could access memory during those times when the 6502 was off the bus. This was sometimes known as "hidden access". 
This technique was widely used by computer systems; they would use memory capable of access at 2 MHz, and then run the CPU at 1 MHz. This guaranteed that the CPU and video hardware could interleave their accesses, with a total performance matching that of the memory device. When faster memories became available in the 1980s, newer machines could run at higher clock rates, like the 2 MHz CPU in the BBC Micro, and still use the bus sharing techniques. Registers Like its precursor, the 6800, the 6502 has very few registers. The 6502's registers include one 8-bit accumulator register (A), two 8-bit index registers (X and Y), 7 processor status flag bits (P; from bit 7 to bit 0 these are the negative (N), overflow (V), reserved, break (B), decimal (D), interrupt disable (I), zero (Z) and carry (C) flag), an 8-bit stack pointer (S), and a 16-bit program counter (PC). This compares to a typical design of the same era, the Z80, which has eight general-purpose 8-bit registers, which can be combined into four 16-bit ones. The Z80 also had a complete set of alternate registers, which made a total of sixteen general-purpose registers. In order to make up somewhat for the lack of registers, the 6502 included a zero-page addressing mode that uses one address byte in the instruction instead of the two needed to address the full 64 KB of memory. This provides fast access to the first 256 bytes of RAM by using shorter instructions. Chuck Peddle has said in interviews that the specific intention was to allow these first 256 bytes of RAM to be used like registers. The stack address space is hardwired to memory page $01, i.e. the address range $0100–$01FF (256–511). Software access to the stack is done via four implied addressing mode instructions, whose functions are to push or pop (pull) the accumulator or the processor status register. The same stack is also used for subroutine calls via the JSR (jump to subroutine) and RTS (return from subroutine) instructions and for interrupt handling. Addressing The chip uses the index and stack registers effectively with several addressing modes, including a fast "direct page" or "zero page" mode, similar to that found on the PDP-8, that accesses memory locations from addresses 0 to 255 with a single 8-bit address (saving the cycle normally required to fetch the high-order byte of the address)—code for the 6502 uses the zero page much as code for other processors would use registers. On some 6502-based microcomputers with an operating system, the operating system uses most of zero page, leaving only a handful of locations for the user. Addressing modes also include implied (1-byte instructions); absolute (3 bytes); indexed absolute (3 bytes); indexed zero-page (2 bytes); relative (2 bytes); accumulator (1); indirect,x and indirect,y (2); and immediate (2). Absolute mode is a general-purpose mode. Branch instructions use a signed 8-bit offset relative to the instruction after the branch; the numerical range −128..127 therefore translates to 128 bytes backward and 127 bytes forward from the instruction following the branch (which is 126 bytes backward and 129 bytes forward from the start of the branch instruction). Accumulator mode uses the accumulator as an effective address and does not need any operand data. Immediate mode uses an 8-bit literal operand. Indirect addressing The indirect modes are useful for array processing and other looping. 
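As a minimal sketch of such a loop (written for illustration here with assumed zero-page locations and labels, not taken from MOS documentation), the post-indexed "(indirect),y" mode described in more detail in the next paragraph can copy a block of up to 256 bytes with very little code:

SRC = $FB             ; assumed zero-page pointer (two bytes) to the source array
DST = $FD             ; assumed zero-page pointer (two bytes) to the destination array
        LDY #$00      ; start at offset 0
COPY:   LDA (SRC),Y   ; fetch a byte from the source address plus Y
        STA (DST),Y   ; store it at the destination address plus Y
        INY           ; step the index
        BNE COPY      ; repeat until Y wraps to 0, i.e. after 256 bytes

Because the two pointers live in zero page, every instruction in the loop body is only one or two bytes long, and advancing the single Y index walks both arrays at once.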
With the 5/6 cycle "(indirect),y" mode, the 8-bit Y register is added to a 16-bit base address read from zero page, which is located by a single byte following the opcode. The Y register is therefore an index register in the sense that it is used to hold an actual index (as opposed to the X register in the 6800, where a base address was directly stored and to which an immediate offset could be added). Incrementing the index register to walk the array byte-wise takes only two additional cycles. With the less frequently used "(indirect,x)" mode, the effective address for the operation is found at the zero page address formed by adding the second byte of the instruction to the contents of the X register. Using the indexed modes, the zero page effectively acts as a set of up to 128 additional (though very slow) address registers. The 6502 is capable of performing addition and subtraction in binary or binary-coded decimal. Placing the CPU into BCD mode with the SED (set D flag) instruction results in decimal arithmetic, in which $99 + $01 would result in $00 and the carry (C) flag being set. In binary mode (CLD, clear D flag), the same operation would result in $9A and the carry flag being cleared. Other than Atari BASIC, BCD mode was seldom used in home-computer applications. See the Hello world! article for a simple but characteristic example of 6502 assembly language. Instructions and opcodes 6502 instruction operation codes (opcodes) are 8 bits long and have the general form AAABBBCC, where AAA and CC define the opcode, and BBB defines the addressing mode. For instance, consider the ORA instruction, which performs a bitwise OR on the bits in the accumulator with another value. The instruction opcode is of the form 000bbb01, where bbb may be 010 for an immediate mode value (constant), 001 for zero-page fixed address, 011 for an absolute address, and so on. This pattern is not absolute, and there are a number of exceptions. However, where it does apply, it allows one to easily deconstruct opcode values back to assembly mnemonics for the majority of instructions, handling the edge cases with special-purpose code. Of the 256 possible opcodes available using an 8-bit pattern, the original 6502 uses 151 of them, organized into 56 instructions with (possibly) multiple addressing modes. Depending on the instruction and addressing mode, the opcode may require zero, one or two additional bytes for operands. Hence 6502 machine instructions vary in length from one to three bytes. The operand is stored in the 6502's customary little-endian format. The 65C816, the 16-bit CMOS descendant of the 6502, also supports 24-bit addressing, which results in instructions being assembled with three-byte operands, also arranged in little-endian format. The remaining 105 opcodes are undefined. In the original design, instructions where the low-order 4 bits (nibble) were 3, 7, B or F were not used, providing room for future expansion. Likewise, the $x2 column (opcodes whose low-order nibble is 2) had only a single entry, LDX #constant. The remaining 25 empty slots were scattered elsewhere in the opcode map. Some of the empty slots were used in the 65C02 to provide both new instructions and variations on existing ones with new addressing modes. The $Fx instructions were initially left free to allow third-party vendors to add their own instructions, but later versions of the 65C02 standardized a set of bit-fiddling instructions developed by Rockwell Semiconductor. 
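To illustrate the pattern, the listing below (worked out by hand here rather than copied from a datasheet) shows how the eight BBB values map onto the addressing modes of ORA, whose AAA field is 000 and whose CC field is 01:

ORA ($80,X)   ; opcode $01 = 000 000 01  (zero page indirect,X)
ORA $80       ; opcode $05 = 000 001 01  (zero page)
ORA #$55      ; opcode $09 = 000 010 01  (immediate)
ORA $8000     ; opcode $0D = 000 011 01  (absolute)
ORA ($80),Y   ; opcode $11 = 000 100 01  (zero page indirect,Y)
ORA $80,X     ; opcode $15 = 000 101 01  (zero page,X)
ORA $8000,Y   ; opcode $19 = 000 110 01  (absolute,Y)
ORA $8000,X   ; opcode $1D = 000 111 01  (absolute,X)

Changing the AAA field to 001 produces the corresponding AND opcodes ($21, $25, $29 and so on), which is why much of the instruction set can be decoded by inspection.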
Assembly language A 6502 assembly language statement consists of a three-character instruction mnemonic, followed by any operands. Instructions that do not take a separate operand but target a single register based on the addressing mode combine the target register in the instruction mnemonic, so the assembler uses INX as opposed to INC X to increment the X register. Instruction table Detailed behavior The processor's non-maskable interrupt (NMI) input is edge sensitive, which means that the interrupt is triggered by the falling edge of the signal rather than its level. The implication of this feature is that a wired-OR interrupt circuit is not readily supported. However, this also prevents nested NMI interrupts from occurring until the hardware makes the NMI input inactive again, often under control of the NMI interrupt handler. The simultaneous assertion of the NMI and IRQ (maskable) hardware interrupt lines causes IRQ to be ignored. However, if the IRQ line remains asserted after the servicing of the NMI, the processor will immediately respond to IRQ, as IRQ is level sensitive. Thus a sort of built-in interrupt priority was established in the 6502 design. The B flag is set by the 6502's periodically sampling its NMI edge detector's output and its IRQ input. The IRQ signal being driven low is only recognized though if IRQs are allowed by the I flag. If in this way a NMI request or (maskable) IRQ is detected the B flag is set to zero and causes the processor to execute the BRK instruction next instead of executing the next instruction based on the program counter. The BRK instruction then pushes the processor status onto the stack, with the B flag bit set to zero. At the end of its execution the BRK instruction resets the B flag's value to one. This is the only way the B flag can be modified. If an instruction other than the BRK instruction pushes the B flag onto the stack as part of the processor status the B flag always has the value one. A high-to-low transition on the SO input pin will set the processor's overflow status bit. This can be used for fast response to external hardware. For example, a high-speed polling device driver can poll the hardware once in only three cycles using a Branch-on-oVerflow-Clear (BVC) instruction that branches to itself until overflow is set by an SO falling transition. The Commodore 1541 and other Commodore floppy disk drives use this technique to detect when the serializer is ready to transfer another byte of disk data. The system hardware and software design must ensure that an SO will not occur during arithmetic processing and disrupt calculations. Variations and derivatives There were numerous variants of the original NMOS 6502. 16-bit derivatives The Western Design Center designed and currently produces the W65C816S processor, a 16-bit, static-core successor to the 65C02, with greatly enhanced features. The W65C816S is a newer variant of the 65C816, which is the core of the Apple IIGS computer and is the basis of the Ricoh 5A22 processor that powers the Super Nintendo Entertainment System. The W65C816S incorporates minor improvements over the 65C816 that make the newer chip not an exact hardware-compatible replacement for the earlier one. Among these improvements was conversion to a static core, which makes it possible to stop the clock in either phase without the registers losing data. Available through electronics distributors, as of March 2020, the W65C816S is officially rated for 14 MHz operation. 
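For context, the switch out of the 6502-compatible state on these 16-bit parts is conventionally done with the short sequence sketched below (a hedged illustration rather than vendor sample code); the 65C802 described next uses the same mechanism:

CLC          ; clear the carry flag
XCE          ; exchange carry with the emulation bit: the processor enters native mode
             ; ... 16-bit code runs here ...
SEC          ; set the carry flag
XCE          ; exchange again: the processor returns to 6502 emulation mode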
The Western Design Center also designed and produced the 65C802, which was a 65C816 core with a 64-kilobyte address space in a 65(C)02 pin-compatible package. The 65C802 could be retrofitted to a 6502 board and would function as a 65C02 on power-up, operating in "emulation mode." As with the 65C816, a two-instruction sequence would switch the 65C802 to "native mode" operation, exposing its 16-bit accumulator and index registers, as well as other 65C816 enhanced features. The 65C802 was not widely used; new designs almost always were built around the 65C816, resulting in 65C802 production being discontinued. Example code The following 6502 assembly language source code is for a subroutine named TOLOWER, which copies a null-terminated character string from one location to another, converting upper-case letter characters to lower-case letters. The string being copied is the "source", and the string into which the converted source is stored is the "destination". Bugs and quirks The 6502 had several bugs and quirks, which had to be accounted for when programming it: The earliest revisions of the 6502, such as those shipped with some KIM-1 computers, had a severe bug in the ROR (rotate right memory or accumulator) instruction. The operation of ROR in these chips is effectively an ASL (arithmetic shift left) instruction that does not affect the carry bit in the status register. MOS left the instruction out of chip documentation entirely because of the defect, promising that ROR would appear on 6502 chips starting in 1976. The vast majority of 6502 chips in existence today do not exhibit this bug. The NMOS 6502 family has a variety of undocumented instructions, which vary from one chip manufacturer to another. The 6502 instruction decoding is implemented in a hardwired logic array (similar to a programmable logic array) that is only defined for 151 of the 256 available opcodes. The remaining 105 trigger strange and occasionally hard-to-predict actions, such as crashing the processor, performing two valid instructions consecutively, performing strange mixtures of two instructions, or simply doing nothing at all. Eastern House Software developed the "Trap65", a device that plugged between the processor and its socket to convert (trap) unimplemented opcodes into BRK (software interrupt) instructions. Some programmers utilized this feature to extend the 6502 instruction set by providing functionality for the unimplemented opcodes with specially written software intercepted at the BRK instruction's 0xFFFE vector. All of the undefined opcodes have been replaced with NOP instructions in the 65C02, an enhanced CMOS version of the 6502, although with varying byte sizes and execution times. In the 65C802/65C816, all 256 opcodes perform defined operations. The 6502's memory indirect jump instruction, JMP (<address>), is partly broken. If <address> is hex xxFF (i.e., any word ending in FF), the processor will not jump to the address stored in xxFF and xxFF+1 as expected, but rather the one defined by xxFF and xx00 (for example, JMP ($10FF) would jump to the address stored in 10FF and 1000, instead of the one stored in 10FF and 1100). This defect continued through the entire NMOS line, but was corrected in the CMOS derivatives. The NMOS 6502 indexed addressing across page boundaries will do an extra read of an invalid address. This characteristic may cause random issues by accessing hardware that acts on a read, such as clearing timer or IRQ flags, sending an I/O handshake, etc. 
This defect continued through the entire NMOS line, but was corrected in the CMOS derivatives, in which the processor does an extra read of the last instruction byte. The 6502 read-modify-write instructions perform one read and two write cycles. First, the unmodified data that was read is written back, and then the modified data is written. This characteristic may cause issues by twice accessing hardware that acts on a write. This anomaly continued through the entire NMOS line, but was fixed in the CMOS derivatives, in which the processor will do two reads and one write cycle. Defensive programming practice will generally avoid this problem by not executing read/modify/write instructions on hardware registers. The N (result negative), V (sign bit overflow) and Z (result zero) status flags are generally meaningless when performing arithmetic operations while the processor is in BCD mode, as these flags reflect the binary, not BCD, result. This limitation was removed in the CMOS derivatives. Therefore, this feature may be used to distinguish a CMOS processor from an NMOS version. If the 6502 happens to be in BCD mode when a hardware interrupt occurs, it will not revert to binary mode. This characteristic could result in obscure bugs in the interrupt service routine if it fails to clear BCD mode before performing any arithmetic operations. For example, the Commodore 64's KERNAL did not correctly handle this processor characteristic, requiring that IRQs be disabled or re-vectored during BCD math operations. This issue was addressed in the CMOS derivatives as well. The 6502 instruction set includes BRK (opcode $00), which is technically a software interrupt (similar in spirit to the SWI mnemonic of the Motorola 6800 and ARM processors). BRK is most often used to interrupt program execution and start a machine language monitor for testing and debugging during software development. BRK could also be used to route program execution using a simple jump table (analogous to the manner in which the Intel 8086 and derivatives handle software interrupts by number). However, if a hardware interrupt occurs when the processor is fetching a BRK instruction, the NMOS version of the processor will fail to execute BRK and instead proceed as if only a hardware interrupt had occurred. This fault was corrected in the CMOS implementation of the processor. When executing JSR (jump to subroutine) and RTS (return from subroutine) instructions, the return address pushed to the stack by JSR is that of the last byte of the JSR operand (that is, the most significant byte of the subroutine address), rather than the address of the following instruction. This is because the actual copy (from program counter to stack and then conversely) takes place before the automatic increment of the program counter that occurs at the end of every instruction. This characteristic would go unnoticed unless the code examined the return address in order to retrieve parameters in the code stream (a 6502 programming idiom documented in the ProDOS 8 Technical Reference Manual). It remains a characteristic of 6502 derivatives to this day. See also List of 6502 assemblers MOS Technology 6502-based home computers Interrupts in 65xx processors Transistor count Apple II accelerators cc65 – 6502 macro assembler and C compiler Notes References Citations Bibliography Interview with William Mensch Stanford and the Silicon Valley Project, October 9, 1995. 
Transcript Further reading Datasheets and manuals 6500 Series Datasheet; MOS Technology; 12 pages; 1976. 6500 Series Hardware Manual; 2nd Ed; MOS Technology; 182 pages; 1976. 6500 Series Programming Manual; 2nd Ed; MOS Technology; 262 pages; 1976. Books 6502 Applications Book; 1st Ed; Rodnay Zaks; Sybex; 281 pages; 1979. (archive) 6502 Assembly Language Programming; 2nd Ed; Lance Leventhal; Osborne/McGraw-Hill; 650 pages; 1986. (archive) 6502 Assembly Language Subroutines; 1st Ed; Lance Leventhal and Winthrop Saville; Osborne/McGraw-Hill; 550 pages; 1982. (archive) 6502 Games; 1st Ed; Rodnay Zaks; Sybex; 292 pages; 1980. (archive) 6502 User's Manual; 1st Ed; Joseph Carr; Reston; 288 pages; 1984. (archive) Advanced 6502 Programming; 1st Ed; Rodnay Zaks; John Wiley & Sons; 292 pages; 1982. (archive) Machine Language For Beginners - Personal Computer Machine Language Programming For Atari, VIC, Apple, C64, and PET Computers; 1st Ed; Richard Mansfield; Compute! Publications; 350 pages; 1983. (archive) Programming the 6502; 4th Ed; Rodnay Zaks; Sybex; 408 pages; 1983. (archive) Programming the 65816 - including the 6502, 65C02, 65802; 1st Ed; David Eyes and Ron Lichty; Prentice Hall; 636 pages; 1986. (archive) Reference cards 6502 Microprocessor Instant Reference Card; James Lewis; Micro Logic; 2 pages; 1980. (archive) External links 6502.org - the 6502 microprocessor resource – repository The Rise of MOS Technology & The 6502 - Commodore archive 650x information – Concise description, photos of MOS and second source chips; at cpu-collection.de mdfs.net – 6502 instruction set Simulators / Emulators Online 6502 compatible assembler and emulator, written in JavaScript List of 6502 software emulators – Zophar's Domain 6502 simulator for Windows – Atari Gaming Headquarters Visual Transistor-level Simulation of 6502 CPU MCL65 6502 CPU core - C code - MicroCore Labs GitHub Boards Grant's 7/8-chip 6502 board 6502 microprocessor training board Build your own KIM-1 training board - see KIM-1 6502 home computer PE6502 single board computer BE6502 single board computer - based on Ben Eater videos FPGA cpu6502_tc 6502 CPU core - VHDL source code - OpenCores ag_6502 6502 CPU core - Verilog source code - OpenCores M65C02 65C02 CPU core - Verilog source code - OpenCores MCL65 6502 CPU core - Verilog - MicroCore Labs GitHub MOS Technology microprocessors 65xx microprocessors Computer-related introductions in 1975 8-bit microprocessors
8198361
https://en.wikipedia.org/wiki/3596%20Meriones
3596 Meriones
3596 Meriones is a large Jupiter trojan from the Greek camp, approximately 75 kilometers in diameter. It was discovered on 14 November 1985, by Danish astronomers Poul Jensen and Karl Augustesen at the Brorfelde Observatory near Holbæk, Denmark. The assumed C-type asteroid belongs to the 50 largest Jupiter trojans and has a rotation period of 12.96 hours. It was named after the Cretan leader Meriones from Greek mythology. Orbit and classification Meriones is a dark Jovian asteroid orbiting in the leading Greek camp at Jupiter's L4 Lagrangian point, 60° ahead of the planet in its orbit in a 1:1 resonance (see Trojans in astronomy). It is also a non-family asteroid in the Jovian background population. It orbits the Sun at a distance of 4.8–5.5 AU once every 11 years and 9 months (4,293 days; semi-major axis of 5.17 AU). Its orbit has an eccentricity of 0.07 and an inclination of 24° with respect to the ecliptic. The body's observation arc begins with a precovery taken at Uccle Observatory in October 1950, or 35 years prior to its official discovery observation at Brorfelde. Naming This minor planet was named from Greek mythology after Meriones, who, together with the Greek hero Idomeneus, commanded the Cretan contingent in the Trojan War, where they slew many Trojans, especially in the Battle of the Ships. The official naming citation was published by the Minor Planet Center on 7 September 1987. Physical characteristics Meriones is an assumed carbonaceous C-type asteroid. Rotation period In 1991, a rotational lightcurve of Meriones was published by German and Italian astronomers. Lightcurve analysis of the photometric observations gave a rotation period of 12.96 hours with a brightness amplitude of 0.15 magnitude. Diameter and albedo According to the surveys carried out by the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Meriones measures 73.28 and 87.38 kilometers in diameter and its surface has an albedo of 0.064 and 0.048, respectively. The Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057, and derives a diameter of 75.09 kilometers based on an absolute magnitude of 9.35. References External links Asteroid Lightcurve Database (LCDB), query form Dictionary of Minor Planet Names, Google books Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center Asteroid 3596 Meriones at the Small Bodies Data Ferret 003596 Discoveries by Poul Jensen (astronomer) Discoveries by Karl Augustesen Minor planets named from Greek mythology Named minor planets 19851114
1225219
https://en.wikipedia.org/wiki/The%20Humble%20Guys
The Humble Guys
The Humble Guys (THG) were a cracking group for the IBM PC during the late 1980s founded by two friends known by the pseudonyms Candyman and Fabulous Furlough. The group was also noticed in the demoscene for some of their cracktros. THG was the first group to make use of the NFO file as a means for documenting their releases before packaging and distribution. The first release to contain an .NFO file was Nova Logic's remake of the arcade classic Bubble Bobble in 1989. This has since spawned an entire generation of ASCII artists devoted solely to creating artwork for the purpose of decorating NFO files for warez groups. To put things into perspective, there are now entire websites explicitly devoted to the collection and archival of NFO files, such as The iSONEWS. Software contributions THG was also one of the first groups to release an "intro tool" for the IBM PC demoscene: the THG IntroMaker, produced by their coding subsidiary, THG F/X. The THG IntroMaker allowed one to create a self-contained executable program which played music and displayed graphics on screen without the need for any knowledge of computer programming. A much more advanced and highly sophisticated extension of this today would be Farbrausch's .werkkzeug. Prior to THG's arrival on the warez scene, the IBM world did not have anything other than text-based intros, usually quoting song lyrics. THG members brought their experience from the C64 and Amiga warez scenes, bringing the first animated and graphical intros to the IBM scene. In December 1991, the "F/X division" of The Humble Guys released the first and only issue of an electronic magazine called "The Humble Review", featuring game reviews and articles. Writer and weblogger Justin Hall would have his first article published in the Humble Review: a film review of Akira by "Fusty". Members of THG also had their own custom BBS software, originally a "forum hack", called L.S.D. BBS (Lush Software Designs), which was first introduced on June 1, 1990, written by The Slavelord, Niteman and others. The original source code for this was Emulex/2, which was acquired by a THG member whose alias was Tripin Face. The source code was referred to as 'Jani' in some communities at the time. In the news On the evening of October 27, 1992, NBC television aired an episode about computer hackers on Dateline titled "Are Your Secrets Safe?". This show prominently displayed ads for several warez BBSes, including one for The Slave Den BBS, which was operated by a senior member and spokesperson of THG. As a result of this undesired exposure, The Slavelord voluntarily retired from his activities within the group. On September 5, 2006, David J. Francis, known by his username "Candyman", died of heart failure in his hometown of St. Louis, Missouri. On October 4, 2015, Pierre Barkett, known by his username "The PieMaN", died in Florida of heart failure. Competition THG redefined the manner in which the PC warez scene worked when they entered the scene in 1989. Prior to THG, warez releases were haphazard, with multiple groups releasing the same title, usually after the title had been available in retail stores for weeks. Often games were released to BBSes without being cracked. THG changed this by releasing titles days before the software made it to retail chains such as Babbage's. They did this by establishing relationships with the major wholesale software distributors, and ordering games with overnight shipping. 
For those cases where overnight shipping wasn't enough, THG found people who lived near the software companies, who could go to the company and buy the game the day it was released. This beat the overnight shipping method by 2 days in most cases. Another advantage they possessed was that most other warez groups were run by teens, who attended school during the day. THG was run by professional men, who were available each day "by 10:30" when FedEx or UPS delivered. The other groups had to "wait until they got home" in the afternoons. This was a decided advantage, considering most "cracks" were done in less than an hour and releases completed shortly thereafter. THG had members who worked for morning TV shows. Software companies, ever eager for free advertising, would send a box of new, or in some cases "about to be released", software to a TV show for just a simple phone call. They also understood the "progression" of software. Once a title was completed, the box, manual, and final version of the game were shipped to a "duplication house" to copy the software for sale in stores. THG had contacts in these duplication houses, where they could get the games weeks before they would show up at the store. Activision's F-14 Tomcat was one such title, along with all titles from Microprose. At the height of their power, THG had game suppliers in the US (country-wide), UK (Leeds), France, Germany, and many parts of Asia. THG introduced the concept of couriers in an effort to plaster their releases on their competitors' BBSes. The couriers were often told to make sure that the various groups received the latest crack on their HQ's BBS before other THG BBSes. The combination of using software wholesalers and couriers turned the PC warez scene upside down in 1990, but these are considered normal practice now. The fierce competition within the current warez and video scenes is directly descended from THG. As a result, the majority of older, well-established warez groups disappeared from the scene. Of the four or five groups that were around prior to THG's arrival in December 1989, the only group that remained was the International Network of Crackers (INC), which was one of THG's greatest competitors in the IBM PC cracking scene. The file header of the executable THG cracktro, READTHG.EXE, contains text which reads: "Cool Hand but fucks his dog and Phantom from INC" (sic), an insulting reference towards the vice-president and courier coordinator of their rival organization, INC. After Candyman shut down his BBS (Candyland, originally run on CNET BBS), setup, development, maintenance and unique customization were continued by The Maker, who was on hand from day one. After Candyman left the United States, Fabulous Furlough took over the reins of the group. After political infighting among the remaining members of the group led to problems within the organization, several of the newer members of THG splintered off and formed a new group called USA (United Software Association), which included several noteworthy members such as Niteman, Genesis and The Humble Babe (who changed her name to The Not So Humble Babe upon her departure from THG). USA released a few games, most of them coming from one of THG's suppliers in Illinois, whom USA had managed to "turn". After the bust of The Not So Humble Babe on credit card fraud charges in Michigan, USA teamed up with the European PC warez division of Fairlight and were cooperatively known as "USA/FLT". 
This inevitably led to the two groups, USA and THG, warring with each other. A year after the USA/FLT fiasco, several of the original members of The Humble Guys left the group in an effort to once again capture lightning in a bottle. However, by the fall of 1992 several other groups, such as Razor 1911, had joined the scene, and this new group, while having some brief success, was never as successful as THG. The new group fell apart shortly after The Slavelord shut down his BBS after the Dateline story. During 1992 through early 1994, many THG releases were cracked by the UK branch, which consisted of Hi.T. Moonweed, Bryn Rogers and Hydro, who struggled to keep the group together due to US burnout. The UK BBSes The Flying Teapot (active from 1991 to 1992) and The Demons Forge (run by Hi.T and Bryn respectively) became the UK's major landmarks. By 1994, most of the founding members of The Humble Guys were no longer involved with the warez scene. After the Pits BBS in New York was shut down by Novell in 1995, the group moved to IRC and had a presence on many different servers with the name #THG. They focused on distribution rather than cracking. The Humble Guys disbanded in the early 2000s when the last founding member left. References Warez groups Demogroups Organizations disestablished in 1994