5532813
https://en.wikipedia.org/wiki/AEGIS%20SecureConnect
AEGIS SecureConnect
AEGIS SecureConnect (or simply 'AEGIS') is the former name of a network authentication system used in IEEE 802.1X networks. It was developed by Meetinghouse Data Communications, Inc.; the system was renamed "Cisco Secure Services Client" when Meetinghouse was acquired by Cisco Systems. The AEGIS Protocol is an 802.1X supplicant (i.e. it handles authentication for wired and wireless networks, such as those that use WPA-PSK, WPA-RADIUS, or certificate-based authentication), and is commonly installed along with a Network Interface Card's (NIC) drivers or VPN drivers. References External links Cisco Secure Services Client Q&A (Cisco Systems, Inc.) Computer network security IEEE 802.11
7706325
https://en.wikipedia.org/wiki/Mobile-device%20testing
Mobile-device testing
Mobile-device testing functions to assure the quality of mobile devices, like mobile phones, PDAs, etc. It is conducted on both hardware and software, and in terms of procedure it comprises R&D testing, factory testing and certificate testing. It involves a set of activities from monitoring and troubleshooting mobile applications, content and services on real handsets. It includes verification and validation of hardware devices and software applications. Tests must be conducted with multiple operating system versions, hardware configurations, device types, network capabilities, and, notably with the Android operating system, with various hardware vendors' interface layers. Automation key features include: Add application/product space. Create test builds for application/product. Associate test builds with application/product space. Add your own remote devices, by getting a small service app installed on them. Record test cases/scripts/data on a reference device/emulator. Associate test cases/scripts/data with application/product space. Maintain test cases/scripts/data for each application/product. Select devices/emulators to run your test scripts. Get test results e-mailed to you (after completing the entire run, after a fixed number of steps, and after every X units of time) – PDF format supported currently. Listed companies such as Keynote Systems and Capgemini Consulting, the mobile applications and handset testing company Intertek, and QA companies such as PASS Technologies AG and Testdroid provide mobile testing, helping application stores, developers and mobile device manufacturers in testing and monitoring of mobile content, applications and services. Static code analysis Static code analysis is the analysis of computer software that is performed without actually executing programs built from that software (analysis performed on executing programs is known as dynamic analysis). Static analysis rules are available for code written to target various mobile development platforms. For Android applications, it is possible to use the Dexper tool, which transforms the Dalvik bytecode into the Soot/Jimple intermediate representation. The Android testing framework supports unit tests, functional tests, activity tests, mock objects, and utilities to simplify test creation. Unit testing Unit testing is a test phase in which portions of mobile device development are tested, usually by the developer (see the example sketch below). It may involve hardware testing, software testing, and mechanical testing. Factory testing Factory testing is a kind of sanity check on mobile devices. It is conducted automatically to verify that there are no defects introduced by manufacturing or assembly. Mobile testing covers: mobile application testing, hardware testing, battery (charging) testing, signal receiving, network testing, protocol testing, mobile games testing, and mobile software compatibility testing. Certification testing Certification testing is the check before a mobile device goes to market. Many institutes or governments require mobile devices to conform with their stated specifications and protocols to make sure the mobile device will not harm users' health and is compatible with devices from other manufacturers. Once the mobile device passes all checks, a certification is issued for it. When users submit mobile apps to application stores/marketplaces, the apps go through a certification process. Many of these vendors outsource the testing and certification to third-party vendors, to increase coverage and lower the costs.
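To make the unit-testing phase described above concrete, the following is a minimal sketch of a developer-written test in the JUnit 4 style commonly used for JVM-local tests on Android projects. The BatteryLevelFormatter class and its behaviour are hypothetical, invented purely for illustration, and are not taken from any framework or company named in this article.

// A minimal JUnit 4 unit test for a small, device-independent piece of mobile software.
// BatteryLevelFormatter is a hypothetical production class that turns a raw battery
// fraction (0.0-1.0) into the percentage string shown in a phone's UI.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class BatteryLevelFormatterTest {

    static class BatteryLevelFormatter {
        String format(double fraction) {
            int percent = (int) Math.round(fraction * 100);
            return percent + "%";
        }
    }

    @Test
    public void formatsFractionAsPercentage() {
        assertEquals("75%", new BatteryLevelFormatter().format(0.75));
    }

    @Test
    public void roundsToNearestWholePercent() {
        assertEquals("67%", new BatteryLevelFormatter().format(0.666));
    }
}

A test like this runs on the development machine's JVM; behaviour that depends on real hardware (battery charging, signal reception, sensors) is instead exercised in the factory and certification phases described above.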
Certification forums PTCRB Global Certification Forum References Software testing
189768
https://en.wikipedia.org/wiki/Consumer%20electronics
Consumer electronics
Consumer electronics or home electronics are electronic (analog or digital) equipment intended for everyday use, typically in private homes. Consumer electronics include devices used for entertainment, communications and recreation. They are usually referred to as black goods because many products are housed in black or dark casings. This term is used to distinguish them from "white goods", which are meant for housekeeping tasks, such as washing machines and refrigerators, although nowadays many of these are connected to the Internet and would themselves be considered black goods. In British English, they are often called brown goods by producers and sellers. In the 2010s, this distinction is absent in large big box consumer electronics stores, which sell entertainment, communication, and home office devices alongside kitchen appliances such as refrigerators. Radio broadcasting in the early 20th century brought the first major consumer product, the broadcast receiver. Later products included telephones, televisions, and calculators, then audio and video recorders and players, game consoles, mobile phones, personal computers and MP3 players. In the 2010s, consumer electronics stores often sell GPS devices, automotive electronics (car stereos), video game consoles, electronic musical instruments (e.g., synthesizer keyboards), karaoke machines, digital cameras, and video players (VCRs in the 1980s and 1990s, followed by DVD players and Blu-ray players). Stores also sell smart appliances, digital cameras, camcorders, cell phones, and smartphones. Some of the newer products sold include virtual reality head-mounted display goggles, smart home devices that connect home devices to the Internet, and wearable technology. In the 2010s, most consumer electronics have become based on digital technologies, and have largely merged with the computer industry in what is increasingly referred to as the consumerization of information technology. Some consumer electronics stores have also begun selling office and baby furniture. Consumer electronics stores may be "brick and mortar" physical retail stores, online stores, or combinations of both. Annual consumer electronics sales are expected to reach by 2020. Consumer electronics are part of the wider electronics industry. In turn, the driving force behind the electronics industry is the semiconductor industry. The basic building block of modern electronics is the MOSFET (metal–oxide–semiconductor field-effect transistor, or MOS transistor), the scaling and miniaturization of which have been the primary factor behind the rapid exponential growth of electronic technology since the 1960s. History For its first fifty years the phonograph turntable did not use electronics; the needle and soundhorn were purely mechanical technologies. However, in the 1920s, radio broadcasting became the basis of mass production of radio receivers. The vacuum tubes that had made radios practical were used with record players as well, to amplify the sound so that it could be played through a loudspeaker. Television was soon invented, but remained insignificant in the consumer market until the 1950s. The first working transistor, a point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Laboratories in 1947, which led to significant research in the field of solid-state semiconductors in the early 1950s. The invention and development of the earliest transistors at Bell led to transistor radios.
This led to the emergence of the home entertainment consumer electronics industry starting in the 1950s, largely due to the efforts of Tokyo Tsushin Kogyo (now Sony) in successfully commercializing transistor technology for a mass market, with affordable transistor radios and then transistorized television sets. Mohamed M. Atalla's surface passivation process, developed at Bell in 1957, led to the planar process and planar transistor developed by Jean Hoerni at Fairchild Semiconductor in 1959, from which Moore's law originates, and to the invention of the MOSFET (metal–oxide–semiconductor field-effect transistor, or MOS transistor) by Mohamed Atalla and Dawon Kahng at Bell in 1959. The MOSFET was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses, enabling Moore's law and revolutionizing the electronics industry. It has since been the building block of modern digital electronics, and the "workhorse" of the electronics industry. Integrated circuits (ICs) followed when manufacturers built circuits (usually for military purposes) on a single substrate using electrical connections between circuits within the chip itself. The most common type of IC is the MOS integrated circuit chip, capable of the large-scale integration (LSI) of MOSFETs on an IC chip. MOS technology led to more advanced and cheaper consumer electronics, such as transistorized televisions, pocket calculators, and by the 1980s, affordable video game consoles and personal computers that regular middle-class families could buy. The rapid progress of the electronics industry during the late 20th to early 21st centuries was achieved by rapid MOSFET scaling (related to Dennard scaling and Moore's law), down to sub-micron levels and then nanoelectronics in the early 21st century. The MOSFET is the most widely manufactured device in history, with an estimated total of 13 sextillion MOSFETs manufactured between 1960 and 2018. Products Consumer electronics devices include those used for entertainment (flatscreen TVs, television sets, MP3 players, video recorders, DVD players, radio receivers, etc.), communications (telephones, cell phones, e-mail-capable personal computers, desktop computers, laptops, printers, paper shredders, etc.), and recreation (digital cameras, camcorders, video game consoles, ROM cartridges, remote control cars, robot kits, etc.). Increasingly, consumer electronics products such as the digital distribution of video games have become based on Internet and digital technologies. The consumer electronics industry has largely merged with the software industry in what is increasingly referred to as the consumerization of information technology. Trends One overriding characteristic of consumer electronic products is the trend of ever-falling prices. This is driven by gains in manufacturing efficiency and automation, lower labor costs as manufacturing has moved to lower-wage countries, and improvements in semiconductor design. Semiconductor components benefit from Moore's law, an observed principle which states that, for a given price, semiconductor functionality doubles every two years. While consumer electronics continues in its trend of convergence, combining elements of many products, consumers face different decisions when purchasing. There is an ever-increasing need to keep product information updated and comparable, for the consumer to make an informed choice. Style, price, specification, and performance are all relevant.
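The two-year doubling quoted above can be written as a simple growth formula; this is only a rough illustration of the observed trend, not an exact law:

N(t) = N_0 \cdot 2^{(t - t_0)/2}

where N_0 is the semiconductor functionality available for a given price in year t_0 and t is measured in years. Over one decade the factor is 2^{10/2} = 32, which is the rough scale of improvement the trend implies at constant price.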
There is a gradual shift towards e-commerce web-storefronts. Many products include Internet connectivity using technologies such as Wi-Fi, Bluetooth, EDGE, or Ethernet. Products not traditionally associated with computer use (such as TVs or Hi-Fi equipment) now provide options to connect to the Internet or to a computer using a home network to provide access to digital content. The desire for high-definition (HD) content has led the industry to develop a number of technologies, such as WirelessHD or ITU-T G.hn, which are optimized for distribution of HD content between consumer electronic devices in a home. Industries The electronics industry, especially meaning consumer electronics, emerged in the 20th century and has now become a global industry worth billions of dollars. Contemporary society uses all manner of electronic devices built in automated or semi-automated factories operated by the industry. Manufacturing Most consumer electronics are built in China, due to maintenance cost, availability of materials, quality, and speed as opposed to other countries such as the United States. Cities such as Shenzhen have become important production centres for the industry, attracting many consumer electronics companies such as Apple Inc. Electronic component An electronic component is any basic discrete device or physical entity in an electronic system used to affect electrons or their associated fields. Electronic components are mostly industrial products, available in a singular form and are not to be confused with electrical elements, which are conceptual abstractions representing idealized electronic components. Software development Consumer electronics such as personal computers use various types of software. Embedded software is used within some consumer electronics, such as mobile phones. This type of software may be embedded within the hardware of electronic devices. Some consumer electronics include software that is used on a personal computer in conjunction with electronic devices, such as camcorders and digital cameras, and third-party software for such devices also exists. Standardization Some consumer electronics adhere to protocols, such as connection protocols "to high speed bi-directional signals". In telecommunications, a communications protocol is a system of digital rules for data exchange within or between computers. Trade shows The Consumer Electronics Show (CES) trade show has taken place yearly in Las Vegas, Nevada since its foundation in 1973. The event, which grew from having 100 exhibitors in its inaugural year to more than 4,500 exhibiting companies in its 2020 edition, features the latest in consumer electronics, speeches by industry experts and innovation awards. The Internationale Funkausstellung Berlin (IFA) trade show has taken place Berlin, Germany since its foundation in 1924. The event features new consumer electronics and speeches by industry pioneers. IEEE initiatives Institute of Electrical and Electronics Engineers (IEEE), the world's largest professional society, has many initiatives to advance the state of the art of consumer electronics. IEEE has a dedicated society of thousands of professionals to promote CE, called the Consumer Electronics Society (CESoc). IEEE has multiple periodicals and international conferences to promote CE and encourage collaborative research and development in CE. The flagship conference of CESoc, called IEEE International Conference on Consumer Electronics (ICCE), is on its 35th year. 
IEEE Transactions on Consumer Electronics IEEE Consumer Electronics Magazine IEEE International Conference on Consumer Electronics (ICCE) Retailing Electronics retailing is a significant part of the retail industry in many countries. In the United States, dedicated consumer electronics stores have mostly given way to big-box retailers such as Best Buy, the largest consumer electronics retailer in the country, although smaller dedicated stores include Apple Stores, and specialist stores that serve, for example, audiophiles and exceptions, such as the single-branch B&H Photo store in New York City. Broad-based retailers, such as Walmart and Target, also sell consumer electronics in many of their stores. In April 2014, retail e-commerce sales were the highest in the consumer electronic and computer categories as well. Some consumer electronics retailers offer extended warranties on products with programs such as SquareTrade. An electronics district is an area of commerce with a high density of retail stores that sell consumer electronics. Service and repair Consumer electronic service can refer to the maintenance of said products. When consumer electronics have malfunctions, they may sometimes be repaired. In 2013, in Pittsburgh, Pennsylvania, the increased popularity in listening to sound from analog audio devices, such as record players, as opposed to digital sound, has sparked a noticeable increase of business for the electronic repair industry there. Mobile phone industry A mobile phone, cellular phone, cell phone, cellphone, handphone, or hand phone, sometimes shortened to simply mobile, cell or just phone, is a portable telephone that can make and receive calls over a radio frequency link while the user is moving within a telephone service area. The radio frequency link establishes a connection to the switching systems of a mobile phone operator, which provides access to the public switched telephone network (PSTN). Modern mobile telephone services use a cellular network architecture and, therefore, mobile telephones are called cellular telephones or cell phones in North America. In addition to telephony, digital mobile phones (2G) support a variety of other services, such as text messaging, MMS, email, Internet access, short-range wireless communications (infrared, Bluetooth), business applications, video games and digital photography. Mobile phones offering only those capabilities are known as feature phones; mobile phones which offer greatly advanced computing capabilities are referred to as smartphones. A smartphone is a portable device that combines mobile telephone and computing functions into one unit. They are distinguished from feature phones by their stronger hardware capabilities and extensive mobile operating systems, which facilitate wider software, internet (including web browsing over mobile broadband), and multimedia functionality (including music, video, cameras, and gaming), alongside core phone functions such as voice calls and text messaging. Smartphones typically contain a number of metal–oxide–semiconductor (MOS) integrated circuit (IC) chips, include various sensors that can be leveraged by pre-included and third-party software (such as a magnetometer, proximity sensors, barometer, gyroscope, accelerometer and more), and support wireless communications protocols (such as Bluetooth, Wi-Fi, or satellite navigation). 
By country Environmental impact In 2017, Greenpeace USA published a study of 17 of the world's leading consumer electronics companies in relation to their energy and resource consumption and the use of chemicals. Rare metals and rare earth elements Electronic devices use numerous rare metals and rare earth elements (40 on average for a smartphone); these materials are extracted and refined using water- and energy-intensive processes. These metals are also used in the renewable energy industry, meaning that consumer electronics are directly competing with it for the raw materials. Energy consumption The energy consumption of consumer electronics and their environmental impact, either from their production processes or the disposal of the devices, is increasing steadily. The EIA estimates that electronic devices and gadgets account for about 10%–15% of the energy use in American homes – largely because of their number; the average house has dozens of electronic devices. The energy consumption of consumer electronics increases – in America and Europe – to about 50% of household consumption, if the term is redefined to include home appliances such as refrigerators, dryers, clothes washers and dishwashers. Standby power Standby power – used by consumer electronics and appliances while they are turned off – accounts for 5–10% of total household energy consumption, costing the average household in the United States about $100 annually (a rough worked example of this figure appears at the end of this section). A study by the United States Department of Energy's Berkeley Lab found that videocassette recorders (VCRs) consume more electricity during the course of a year in standby mode than when they are used to record or play back videos. Similar findings were obtained concerning satellite boxes, which consume almost the same amount of energy in "on" and "off" modes. A 2012 study in the United Kingdom, carried out by the Energy Saving Trust, found that the devices using the most power on standby mode included televisions, satellite boxes and other video and audio equipment. The study concluded that UK households could save up to £86 per year by switching devices off instead of using standby mode. A report from the International Energy Agency in 2014 found that $80 billion of power is wasted globally per year due to the inefficiency of electronic devices. Consumers can reduce unwanted use of standby power by unplugging their devices, using power strips with switches, or by buying devices that are standardized for better energy management, particularly Energy Star marked products. Electronic waste The high number of different metals and their low concentrations in electronics mean that recycling is limited and energy intensive. Electronic waste describes discarded electrical or electronic devices. Many consumer electronics may contain toxic minerals and elements, and many electronic scrap components, such as CRTs, may contain contaminants such as lead, cadmium, beryllium, mercury, dioxins, or brominated flame retardants. Electronic waste recycling may involve significant risk to workers and communities and great care must be taken to avoid unsafe exposure in recycling operations and leaking of materials such as heavy metals from landfills and incinerator ashes. However, large amounts of the electronic waste produced in developed countries are exported to, and handled by the informal sector in, countries like India, despite the fact that exporting electronic waste to them is illegal. A strong informal sector can be a problem for safe and clean recycling.
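As a rough sanity check of the standby-power figures above, assume a residential electricity price of about $0.13 per kWh; this price is an assumption used only for illustration and is not taken from the cited studies. An annual standby cost of $100 then corresponds to

\$100 / (\$0.13\ \text{per kWh}) \approx 770\ \text{kWh per year} \approx 770{,}000\ \text{Wh} / 8760\ \text{h} \approx 88\ \text{W},

that is, a continuous draw on the order of 90 W spread across a household's idle devices, which is broadly consistent with the 5–10% share of household consumption quoted above.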
Reuse and repair E-waste policy has gone through various incarnations since the 1970s, with emphases changing as the decades passed. More weight was gradually placed on the need to dispose of e-waste more carefully due to the toxic materials it may contain. There has also been recognition that various valuable metals and plastics from waste electrical equipment can be recycled for other uses. More recently the desirability of reusing whole appliances has been foregrounded in the 'preparation for reuse' guidelines. The policy focus is slowly moving towards a potential shift in attitudes to reuse and repair. With turnover of small household appliances high and costs relatively low, many consumers will throw unwanted electric goods in the normal dustbin, meaning that items of potentially high reuse or recycling value go to landfills. While larger items such as washing machines are usually collected, it has been estimated that the 160,000 tonnes of EEE in regular waste collections was worth £220 million. And 23% of EEE taken to Household Waste Recycling Centres was immediately resaleable – or would be with minor repairs or refurbishment. This indicates a lack of awareness among consumers as to where and how to dispose of EEE, and of the potential value of things that are literally going in the bin. For reuse and repair of electrical goods to increase substantially in the UK there are barriers that must be overcome. These include people's mistrust of whether used equipment will be functional and safe, and, for some, the stigma of owning second-hand goods. But the benefits of reuse could allow lower income households access to previously unaffordable technology whilst helping the environment at the same time (Cole, C., Cooper, T. and Gnanapragasam, A., 2016. Extending product lifetimes through WEEE reuse and repair: opportunities and challenges in the UK. In: Electronics Goes Green 2016+ Conference, Berlin, Germany, 7–9 September 2016). Health impact Desktop monitors and laptops produce major physical health concerns for humans when bodies are forced into positions that are unhealthy and uncomfortable in order to see the screen better. From this, neck and back pains and problems increase, commonly referred to as repetitive strain injuries. Using electronics before going to bed makes it difficult for people to fall asleep, which has a negative effect on human health. Sleeping less prevents people from performing to their full potential physically and mentally and can also "increase rates of obesity and diabetes," which are "long-term health consequences". Obesity and diabetes are more commonly seen in students and in youth because they tend to be the ones using electronics the most. "People who frequently use their thumbs to type text messages on cell phones can develop a painful affliction called De Quervain syndrome that affects their tendons on their hands. The best known disease in this category is called carpal tunnel syndrome, which results from pressure on the median nerve in the wrist". See also Digital electronics Electronics industry List of home appliances Product teardown Timeline of electrical and electronic engineering References Notes Further reading External links Electronics industry
9333997
https://en.wikipedia.org/wiki/Scott%20Forstall
Scott Forstall
Scott James Forstall (born 1969) is an American software engineer, known for leading the original software development team for the iPhone and iPad, and Broadway producer, known for co-producing the Tony award-winning Fun Home and Eclipsed with Molly Forstall, his wife, among others. Having spent his career first at NeXT and then Apple, he was the senior vice president (SVP) of iOS Software at Apple Inc. from 2007 until October 2012. Early life and education Forstall grew up in a middle-class family in Kitsap County, Washington, the second-born of three boys to a registered-nurse mother Jeanne and an engineer father Tom Forstall. His older brother Bruce is also a senior software design engineer, at Microsoft. A gifted student for whom skills such as programming "came easily where they were difficult for others", Forstall qualified for advanced-placement science and math class in junior high school, and gained experience programming on Apple IIe computers. He was skipped forward a year, entering Olympic High School in Bremerton, Washington early, where classmates recall his immersion in competitive chess, history, and general knowledge, on occasion competing at the state level. He achieved a 4.0 GPA and earned the position of valedictorian, a position he shared with a classmate, Molly Brown, who would later become his wife. He had established the goal of being a "designer of high-tech electronics equipment", as he proclaimed in an interview with a local newspaper. Enrolling at Stanford University, he graduated in 1991 with a degree in symbolic systems. The next year he received his master's degree in computer science, also from Stanford. During his time at Stanford, Forstall was a member of the Phi Kappa Psi fraternity. Career NeXT / Apple Forstall joined Steve Jobs's NeXT in 1992 and stayed when it was purchased by Apple in 1997. Forstall was then placed in charge of designing user interfaces for a reinvigorated Macintosh line. In 2000, Forstall became a leading designer of the Mac's new Aqua user interface, known for its water-themed visual cues such as translucent icons and reflections, making him a rising star in the company. He was promoted to SVP in January 2003. During this period, he supervised the creation of the Safari web browser. Don Melton, a senior developer on the Safari team, credited Forstall for being willing to trust the instincts of his team and respecting their ability to develop the browser in secret. In 2005, when Jobs began planning the iPhone, he had a choice to either "shrink the Mac, which would be an epic feat of engineering, or enlarge the iPod". Jobs favored the former approach but pitted the Macintosh and the iPod team, led by Forstall and Tony Fadell respectively, against each other in an internal competition. Forstall won that fierce competition to create iOS. The decision enabled the success of the iPhone as a platform for third-party developers: using a well-known desktop operating system as its basis allowed the many third-party Mac developers to write software for the iPhone with minimal retraining. Forstall was also responsible for creating a software developer's kit for programmers to build iPhone apps, as well as an App Store within iTunes. In 2006, Forstall became responsible for Mac OS X releases after Avie Tevanian stepped down as the company's Chief Software Technology Officer and before being named SVP of iPhone Software. 
Forstall received credit as he "ran the iOS mobile software team like clockwork and was widely respected for his ability to perform under pressure". He has spoken publicly at Apple Worldwide Developers Conferences, including talks about Mac OS X Leopard in 2006 and iPhone software development in 2008, later after the release of iPhone OS 2.0 and iPhone 3G, and on January 27, 2010, at Apple's 2010 iPad keynote. At WWDC 2011, Forstall introduced iOS 5. Forstall also appears in the iOS 5 video, narrating about three-quarters of the clip, and in almost every major Apple iOS special event. At the "Let's talk iPhone" event launching the iPhone 4S, he took the stage to demonstrate the phone's Siri voice recognition technology, which was originally developed at SRI International. Departure from Apple The aftermath of the release of iOS 6, on September 19, 2012, proved a troubled period for Apple. The newly introduced Maps application, completely designed in-house by Apple, was criticized for being underdeveloped, buggy and lacking in detail. In addition, the clock app used a design based on the trademarked Swiss railway clock, which Apple had failed to license, forcing Apple to pay Swiss railways a reported $21 million in compensation. In October, Apple reported third-quarter results in which revenues and profits grew less than predicted, the second quarter in a row that the company missed analysts' expectations. On October 29, 2012, Apple announced in a press release "that Scott Forstall will be leaving Apple [in 2013] and will serve as an advisor to CEO Tim Cook in the interim." Forstall's duties were divided among four other Apple executives: design SVP Jonathan Ive assumed leadership of Apple's Human Interface team, Craig Federighi became the new head of iOS software engineering, services chief Eddy Cue took over responsibilities for Maps and Siri, and Bob Mansfield (previously SVP of hardware engineering) "unretired" to oversee a new technology group. On the same day, John Browett, who was SVP of retail, was dismissed immediately after only six months on the job. Neither Forstall nor any other Apple executive has commented publicly on his departure beyond the initial press statement, but it is generally presumed that Forstall left his position involuntarily. All information about the reasons for his departure therefore come from anonymous sources. Cook's aim since becoming CEO has been reported to be building a culture of harmony, which meant "weeding out people with disagreeable personalities—people Jobs tolerated and even held close, like Forstall," Although former Apple senior engineer Michael Lopp "believes that Apple's ability to innovate came from tension and disagreement." Steve Jobs was referred to as the "decider" who had the final say on products and features while he was CEO, reportedly keeping the "strong personalities at Apple in check by always casting the winning vote or by having the last word", so after Jobs' death many of these executive conflicts became public. Forstall had such a poor relationship with Ive and Mansfield that he could not be in a meeting with them unless Cook mediated; reportedly, Forstall and Ive did not cooperate at any level. Being forced to choose between the two, Cook reportedly chose to retain Ive since Forstall was not collaborative. Forstall was very close to and referred to as a mini-Steve Jobs, so Jobs' death left Forstall without a protector. 
Forstall was also referred to as the CEO-in-waiting by Fortune magazine and the book Inside Apple (written by Adam Lashinsky), a profile that made him unpopular at Apple. Forstall was said to be responsible for the departure of Jean-Marie Hullot (CTO of applications) in 2005 and Tony Fadell (SVP of hardware engineering) in 2008; Fadell remarked in an interview with the BBC that Forstall's firing was justified and he "got what he deserved". Jon Rubinstein, Fadell's predecessor as SVP of hardware, also had a strained relationship with Forstall. After Jobs' death in 2011, it had been reported that Forstall was trying to gather power to challenge Cook. The Siri intelligent personal voice assistant that Forstall introduced in September 2011 has received a mixed reception with some observers regarding it as a "flop". Forstall was vigorously criticized after the new Maps app, introduced in iOS 6, received criticism for inaccuracies that were not up to Apple standards. According to Adam Lashinsky of Fortune, when Apple issued a formal apology for the errors in Maps, Forstall refused to sign it. Under long-standing practice at Apple, Forstall was the "directly responsible individual" for Maps, and his refusal to sign the apology convinced Cook that Forstall had to go. Forstall's skeuomorphic design style, strongly advocated by former CEO Steve Jobs, was reported to have also been controversial and divided the Apple design team. In a 2012 interview, Ive, then head of hardware design only, refused to comment on the iOS user interface, "In terms of those elements you're talking about, I'm not really connected to that." Present Forstall did not make public appearances after his departure from Apple for a number of years. A report in December 2013 said that he had been concentrating on travel, advising charities, and providing informal advice to some small companies. On April 17, 2015, Forstall made his first tweet, which revealed that he is a co-producer of the Broadway version of the musical Fun Home. It was his first public appearance since departing from Apple in 2012. On June 7, 2015, the Forstall-produced musical won five awards at the Tonys. Forstall is reportedly working as an advisor with Snap Inc. On June 20, 2017, Forstall gave his first public interview after leaving Apple. He was interviewed in the Computer History Museum by John Markoff about the creation of the iPhone on the 10th anniversary of its sales launch. On April 18, 2020, Forstall announced that he was a producer for the Broadway musical Hadestown. The musical went on to win 8 Tony Awards. On May 20, 2020, Forstall made an appearance in Code.org's online Break event. On December 17, 2020, Forstall was revealed to be one of the co-creators of WordArt alongside Apple engineer Nat Brown, while interning for Microsoft in 1991. Forstall served as one of Apple’s witnesses on Epic Games v. Apple. See also Outline of Apple Inc. (personnel) History of Apple Inc. List of Stanford University people References External links Apple Inc. executives Living people Stanford University School of Engineering alumni Macintosh operating systems people American software engineers People from Kitsap County, Washington NeXT 1969 births
1715117
https://en.wikipedia.org/wiki/IBM%20WebSphere%20Application%20Server
IBM WebSphere Application Server
WebSphere Application Server (WAS) is a software product that performs the role of a web application server. More specifically, it is a software framework and middleware that hosts Java-based web applications. It is the flagship product within IBM's WebSphere software suite. It was initially created by Donald F. Ferguson, who later became CTO of Software for Dell. The first version was launched in 1998. The project was an offshoot from the IBM HTTP Server team, starting with the Domino Go web server. Architecture WebSphere Application Server (WAS) is built using open standards such as Java EE, XML, and Web Services. It runs on the following platforms: Windows, AIX, Linux, Solaris, IBM i and z/OS. Beginning with Version 6.1 and now into Version 9.0, the open standard specifications are aligned and common across all the platforms. Platform exploitation, to the extent it takes place, is done below the open standard specification line. It works with a number of Web servers including Apache HTTP Server, Netscape Enterprise Server, Microsoft Internet Information Services (IIS), IBM HTTP Server for i5/OS, IBM HTTP Server for z/OS, and IBM HTTP Server for AIX/Linux/Microsoft Windows/Solaris. It uses port 9060 as the default administration port and port 9080 as the default website publication port. The "traditional" (as opposed to the Liberty variant) WebSphere Application Server platform is architected as a distributed computing platform that could be installed on multiple operating system instances, collectively referred to as a WebSphere cell. Management of all the instances could be done from a management node - called the Deployment Manager - within the cell, and deployment of applications - including the ability to perform rolling updates - could be pushed out to a subset of the cell nodes. The configuration information for the entire cell (how many nodes there are, what applications are deployed to each, how the applications are configured, session management and details of other resources, etc.) is tracked in XML configuration files that are distributed throughout the cell to every node. Over the product lifetime, the implementation of these configuration details went from files, to database-based (around v3.5), and back again to files (around v5). Given the distributed install, and given also that management of the entire cell required management of local effects (such as deployment, logging configuration, etc.), the overall effect was that WAS security could often override local security if not configured properly. For example, in earlier versions of the management console, there was an option available to specify the location of a log file on a remote node. This could be used to read or write an arbitrary file on that remote node. For this reason, it was not advisable to run the application server / node agent processes with root privileges, and starting with v6, security configuration defaulted out of the box to a secure state (even if this meant that enabling desired functions required manual changing of the defaults). Originally, all nodes of the cell were in a single domain for management as well as application security. However, starting with v6.1, there can be multiple security domains and administrative and application security can be separate. Many IBM products (such as IBM InfoSphere DataStage) use WebSphere Application Server as the base platform for their infrastructure.
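As an illustration of the kind of Java-based web application that WAS hosts, the following is a minimal servlet sketch using the standard javax.servlet API. The class name and URL mapping are hypothetical examples chosen only for illustration, not taken from IBM documentation; packaged in a WAR file and deployed to a cell, a servlet like this would typically be reached through the default website publication port 9080 mentioned above.

// Minimal Java EE servlet of the sort deployed to WebSphere Application Server
// (packaged inside a WAR file). The class name and "/hello" mapping are hypothetical.
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // The application server supplies the request/response objects, threading,
        // security context and lifecycle; the application only implements the handler.
        response.setContentType("text/plain");
        PrintWriter out = response.getWriter();
        out.println("Hello from a servlet hosted on WebSphere Application Server");
    }
}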
Version history IBM has shipped several versions and editions of WebSphere Application Server. In the first beta versions, WebSphere had been called Servlet Express. Although the versioning scheme x.1 and x.5 would usually indicate a minor release in the software industry, WebSphere v6.1 and v5.1 are major releases, just like WebSphere v8.5 and v3.5. WebSphere Liberty Versions WebSphere Liberty was introduced into WebSphere Application Server V8.5, originally referred to as the WebSphere Liberty Profile, with the same version numbering scheme as the rest of WAS. In 2016 IBM introduced a new fix pack numbering scheme for Liberty to reflect a move to continuous delivery of Liberty in a single support stream - after V8.5.5.9, the Liberty numbering scheme was rebased starting at 16.0.0.2 to reflect the year and quarter of the Liberty fix pack release. A common level of WebSphere Liberty is distributed as part of both Version 8.5 and Version 9.0 of WebSphere Application Server. The Liberty continuous delivery model was introduced to allow new capabilities and features to be delivered on a more frequent basis. Version 9.0 WebSphere Application Server V9.0 adds Java EE 7 and Java SE 8 (by default) and also provides - and can be configured to run on - Java SE 7. This brought WebSphere Application Server traditional up to the same level of Java EE as WebSphere Liberty had offered since 2015. This was the first release of WAS to be made simultaneously available as both an on-premises offering and through WebSphere as a Service on IBM Cloud. WebSphere Liberty is increasingly the focus for new cloud native applications, with Liberty 16.0.0.2 being the version of Liberty included with WAS Version 9.0.0.0. Liberty 16.0.0.3 adds support for the new MicroProfile programming model that simplifies cloud native application development using standard Java EE technologies. Flexible access to WebSphere Liberty is provided through additional distributions as a Docker image and a Cloud Foundry buildpack. In September 2017 IBM moved ongoing development of Liberty into a new open source project called Open Liberty. Open Liberty is the source for the Liberty runtime in WebSphere Application Server. Distributions of Open Liberty are supported by the OpenLiberty.io community; IBM provides commercial support for Liberty through WebSphere Application Server. Version 8.5.5 WebSphere Application Server V8.5.5 includes significant enhancements to the Liberty profile including support for Java SE 8, full Java EE 7 compliance since V8.5.5.6, and WebSphere's intelligent management capabilities. WebSphere Liberty's support for Java EE is enabled through the configuration of sets of features, with different sets of Liberty features available in each edition of WAS. The WAS Liberty Core edition includes the Liberty features required for the Java EE WebProfile; all other editions of WAS add Liberty features for full Java EE 7. The WAS Network Deployment Edition adds Liberty features for intelligent management. Beyond this, the WAS z/OS edition adds Liberty features to enable z/OS platform capabilities. Version 8.5.0 WebSphere Application Server V8.5 offers the same Java EE 6 and Java SE 6 (by default) as V8.0 and also provides - and can be configured to run on - Java SE 7. The primary new capabilities in V8.5 are the Liberty profile of WebSphere Application Server and the intelligent management features.
The Liberty profile of WebSphere Application Server is included with all the commercial editions of the server, providing a lightweight profile of the server for web, mobile and OSGi applications. In this release it is a functional subset of the full profile of WebSphere Application Server, for both development and production use, with an install size of under 50 MB, a startup time of around 3 seconds and a new XML-based server configuration which can be treated as a development artifact to aid developer productivity. Server capabilities are engaged through the set of features defined in the server configuration; features are added and removed dynamically through internal use of OSGi services. A new model is provided for moving applications through the pipeline from development to production as a packaged server; this is a complete archive of the server, server configuration and application for unzip deploy. A centralized managed install is optionally available through the Job Manager component of WebSphere Application Server Network Deployment edition. Intelligent management capability is added in the Network Deployment and z/OS editions of WebSphere Application server. This integrates operational features that were previously available in the separate WebSphere Virtual Enterprise (WVE) offering: application editioning, server health management, dynamic clustering and intelligent routing. Compute Grid is also included in the Network Deployment and z/OS editions of WebSphere Application server. Previously this was the separately priced WebSphere XD Compute Grid feature for scheduling and managing Java batch workloads. Version 7.0 This version was released on September 9, 2008. It is a Java EE 5 compliant application server. Following are the flagship features introduced by WebSphere Application Server Version 7: Flexible Management facilitates administration of a large number of WebSphere Application Server base edition and Network Deployment topologies that might be geographically distributed Business-Level Application is used for managing application artifacts independent of packaging or programming models Property Based Configuration feature simplifies the experience of automating administration: an administrator can update the WebSphere Application Server Version 7 configuration using a simple property file Between the general availability of WebSphere Application Server V7 and WebSphere Application Server V8 (in 2011), a number of additional capabilities were made available for V7 in the form of feature packs which are optionally added to a V7 install. Feature Pack content has the same quality and support as main release content - the purpose of a feature pack is to deliver new innovation before the next major release. The following feature packs were provided for WebSphere Application Server V7: Feature Pack for Modern Batch Feature Pack for OSGi Applications and JPA 2.0 Feature Pack for SCA Feature Pack for Web 2.0 and Mobile Feature Pack for XML Feature Pack for Communication Enabled Applications Version 6.1 This version was released on June 30, 2006. On September 11, 2012, IBM extended the end of service for V6.1 by a full year, to September 30, 2013, and announced new version-to-version migration incentives and assistance. 
It is a Java EE 1.4 compliant application server and includes the following functions: Support for Java Standard Edition 1.5 Support for running JSR 168 Portlets in the application server Session Initiation Protocol (SIP) Servlets Enhancements to the WebSphere Install Factory IBM Support Assistant IBM JSF Widget Library Simplified Administration Improved Certificate and Key Management Security Enhancements Administration of IBM HTTP Server from WebSphere Admin Console Support for (pre-OASIS) WS-Security 1.0 Support for Web Services Resource Framework and WS-BusinessActivity (WS-BA) Support for JSR160 JMX Remote Connections (From IBM Agents Only) Administrative Console Jython Command Assistance Enhanced scripting. This version started the deprecation process for the Jacl syntax. 64-bit servants and a new Apache-based IBM HTTP Server for z/OS Support for the EJB 3.0 technology and support for some web services standards were provided by the EJB feature pack and the web services feature packs, respectively. The functions in these feature packs have been folded into the main product in version 7. Functions in the web services feature pack include: Asynchronous programming model (Limited functional support) Multiple Payload structures StAX (Streaming API for XML) WS-RM (Limited functional support) Support for (OASIS specified) WS-Security 1.0. WS-Addressing (Limited functional support) JAX-B support Policy Set (Limited functional support) Secured thin client (Limited functional support) SOAP (protocol) Message Transmission Optimization Mechanism (MTOM) Supports CGI and CORBA Version 6.0 This version was released on December 31, 2004. It is a Java EE 1.4 compliant application server. Security enhancements include support for JACC 1.0 and (pre-OASIS) WS-Security 1.0. Support for Java Standard Edition 1.4 Many programming model extensions previously found in WebSphere Application Server V5.0 Enterprise Edition were moved out of enterprise and into Express and Base. These APIs included application profile, startup beans, the scheduler, and async beans. The JMS engine, now called "WebSphere Platform Messaging," was rewritten in 100% Java and its functionality greatly enhanced. (WebSphere MQ is still supported as the JMS provider and is interoperable with WebSphere Platform Messaging.) The clustering was rewritten to use the high availability manager. This manages all singletons in the WebSphere environment and can provide hot recovery for those singletons. WebSphere was modified so that a shared file system can be used to store transaction logs, which means that any cluster member with that shared file system mounted can hot-recover in-doubt XA transactions with no external HA software. The Deployment Manager's role was eliminated from all clustering runtime operations. It is only required for centralized JMX administration and configuration changes. It now supports running mixed-version cells (V5 to V6) in production.
WebSphere Application Server for z/OS provides the same core functionality as Network Deployment, since it shares a common programming model, but still contains platform advantages such as: z/OS Workload Manager for prioritized management of mixed workloads; Resource Recovery Services (added transactional integrity for complex, critical transactions); support for mainframe security products such as RACF; advanced vertical scaling for the application server, featuring a unique separation between the control region (integrated control area) and server regions (where workloads are completed) which enables the control region to open and close server regions as needed by the volume of incoming requests; and Parallel Sysplex support for full participation in the Sysplex, enabling advanced failover support and a geographically dispersed environment that seamlessly acts as one with a centralized logging and management facility. WebSphere eXtended Deployment, or WAS XD as it is known, increases the functionality of the application server in two main areas - manageability and performance. It also makes possible new configurations, such as dynamic virtualization between pools of application servers. Under the performance header the ObjectGrid component was added, which is a standalone distributed cache that can be used with any application server (any version with a 1.4 JDK) or with any J2SE 1.4 runtime, including zLinux and z/OS support. Community Edition: code based on the Apache Geronimo project. With Version 6, some of the functionality previously found in WebSphere Business Integration Server Foundation (WBISF) moved into the new IBM WebSphere Process Server. Other functions moved into the other editions (Express and above). Version 5.1 This version was released on January 16, 2004. It is a J2EE 1.4 compliant application server. Editions were Express, Base, Network Deployment, and WebSphere Application Server for z/OS; Version 5.1 for z/OS is the first to support zAAP engines. WebSphere Business Integration Server Foundation V5.1 This is the follow-on product to WebSphere Application Server Enterprise Edition V5.0. The workflow engine was updated to support BPEL rather than the proprietary FDML format used in V5.0. The product was also repriced and made available on all IBM platforms, from the Intel environments to the mainframe. WebSphere eXtended Deployment (XD) Version 5.0 This version was released on November 19, 2002. This was a J2EE 1.3 certified application server. It was a major rewrite of the V3/V4 codebase and was the first time WebSphere Application Server was coded from a common codebase. Now WAS across all deployment platforms, from Intel x86 to the mainframe, is substantially the same code. The database-based configuration repository was replaced with a replicated, XML file-based configuration repository. A service called the Deployment Manager had the master copy of the cell configuration, and nodes had the file(s) they needed copied from this master server whenever they changed. V5 also included a miniature version of MQ 5.3 called the embedded Java Message Service (JMS) server. Express Edition replaces the Standard Edition. Express now becomes the term to indicate SME-oriented offerings from IBM, across all its software brands. Base. Network Deployment. This edition supports deployment of a cell configuration with cluster and J2EE failover support. It now also includes Edge Components, previously known as Edge Server. This provides a proxy server, load balancing, and content-based routing. Enterprise Edition.
This version added a workflow engine, called the Process Choreographer, for the first time but predates the BPEL standard. It also added the first fully supported application threading model called WebSphere Asynchronous Beans. WebSphere Application Server for z/OS. This version is essentially the same as the Network Deployment product but is optimized to take full advantage of z/OS features, such as Workload Manager, to leverage the key technologies that make the mainframe indispensable for mission-critical, scalable, and secure workloads. Version 4.0 This was a J2EE 1.2 certified application server. It inherited the database-based configuration model from V3.x for all but the single-server edition, which already used an XML datastore. AE (Advanced Edition) AEs (Advanced Edition single). Single-server edition that was not able to run in a cluster configuration. AEd (Developer Edition). Functionally equivalent to AEs, but intended only for non-production development use. EE (Enterprise Edition) Version 3.5 (and 3.0) WebSphere 3.5 is the first widely used version of WebSphere. Version 2.0 IBM adds JavaBean, CORBA and Linux support. Comes in two editions: Standard Edition (SE) and Advanced Edition (AE). Version 1.0 Initial release in June 1998. Was primarily a Java Servlet engine. Security The WebSphere Application Server security model is based on the services provided in the operating system and the Java EE security model. WebSphere Application Server provides implementations of user authentication and authorization mechanisms providing support for various user registries: Local operating system user registry LDAP user registry Federated user registry (as of version 6.1) Custom user registry The authentication mechanisms supported by WebSphere are: Lightweight Third Party Authentication (LTPA) See also Java (software platform) IBM Rational Business Developer Other Java EE application servers: List of application servers Tomcat GlassFish JBoss AS Payara Server SAP NetWeaver Application Server WebLogic Server References External links IBM WebSphere main page Web frameworks Java enterprise platform Application Server Web server software Web server software programmed in Java
17657605
https://en.wikipedia.org/wiki/The%20Listeners%20%28novel%29
The Listeners (novel)
The Listeners is a 1972 science fiction novel by American author James Gunn. It centers on the search for interstellar communication and the effect that receipt of a message has. Although the search and the message are the unifying background of the novel, the chapters explore the personal effects these events have on the lives of the characters. Style The Listeners is a modernist novel with a nonlinear narrative. The novel's 12 chapters, which vary in length from one page to about 30 pages, are: The Listeners; Robert MacDonald—2025; Computer Run; George Thomas—2027; Computer Run; William Mitchell—2028; COMPUTER RUN; Andrew White—2028; Computer Run; Robert MacDonald—2058; The Computer—2118; Translations. Linear narrative and dialogue are often interspersed with quotations from real authors and their works, fragments of fictional news reports, and snippets of thought and dialogue from named and unnamed sources (including a supercomputer). Italic type is often used. Among the more notable individuals quoted in the novel are scientist Giuseppe Cocconi, poet Kirby Congdon, Walter de la Mare, scientist Freeman J. Dyson, futurist and economist Herman Kahn, poet Alice Meynell, scientist and author Carl Sagan, and poet William Butler Yeats. Many quotations and some of the text are in Spanish (as the first Robert MacDonald's wife is Hispanic, and both characters speak the language). Each of the "Computer" chapters represents material the supercomputer collects in its attempts to better translate and understand the alien message it is receiving. These chapters use a historiographic approach which combines elements of futurology, literature, and science, and strongly resemble similar segments and elements in Isaac Asimov's Foundation series and A. E. van Vogt's The Voyage of the Space Beagle. The final chapter, "Translations," translates many of the foreign-language quotations into English for readers. Plot synopsis The following synopsis is presented in chronological order, although events are not presented that way in the novel. It is the year 2025, and many world problems (such as overpopulation, economic depression, resource depletion, racism, and crime) are on the verge of being solved. Robert MacDonald is a 47-year-old linguist and electrical engineer who is director of The Project, an effort to listen for attempts at interstellar communication. For 20 years, he has been married to Maria, a Hispanic woman from Puerto Rico. MacDonald is obsessed with his job, which places strains on his marriage. His wife, who has previously attempted suicide by taking an overdose of sleeping pills, slashes her wrists in a second attempt. Although MacDonald almost resigns to care for his wife, he does not. Two years later, MacDonald and his wife have had a child, Robert MacDonald, Jr. (known as "Bobby"). MacDonald is interviewed by a journalist, George Thomas. Thomas is skeptical of The Project's cost. He also confronts MacDonald with the views of a new Christian sect, the Solitarians, led by the elderly preacher Jeremiah Jones. Jeremiah (he prefers to use only his first name) believes that humanity is alone in the universe, and that the search for extraterrestrial intelligence borders on heresy. That night, as Thomas visits the MacDonald household, The Project receives a message from the region of the star Capella in the constellation Auriga. The message is badly degraded by static, but appears to be early radio and audio-only television signals beamed back at the Earth from an alien race.
MacDonald releases the news to the entire world. A year later, in 2028, Jeremiah's movement has gained vast numbers of new followers. One of The Project's scientists, William Mitchell, is engaged to be married to Jeremiah's daughter, Judith. MacDonald meets with Jeremiah, who believes the repeated radio transmissions are a message from God that humanity is, indeed, alone. MacDonald reveals to Jeremiah that the static noise in the message included short bursts of pure sound ("dots"), similar to Morse code, but no "dashes." He believes Jeremiah is not just a fanatic but also an "honest man" who might change his views once he sees what The Project is doing. MacDonald tells Jeremiah that once the message in the static is decoded, he wants Jeremiah to come to The Project's headquarters in Puerto Rico to see the message first-hand. Jeremiah agrees. After his arrival back in Puerto Rico, MacDonald has a revelation that leads to translation of the message. He urges Jeremiah to come, and the preacher does so the next day. MacDonald translates the encoded message for the first time in front of the Solitarian minister: The dots and silences are meant to be printed out. When they do so, they form an image of a multi-armed bird-like creature with wings. A circle envelops its head. Jeremiah initially believes the image represents an angel with a halo around its head. MacDonald and the scientists see it is an avian humanoid with vestigial wings and a space helmet around its head. News of the image spreads quickly. MacDonald contacts Andrew White, the first African American President of the United States, and tells him about the image. White wants to keep the news quiet, but realizes that too many people saw the image and that Jeremiah cannot be silenced. White flies to Puerto Rico with his son, John, an idealistic 20-year-old who believes that racism and the world's problems cannot return. MacDonald reveals that the printed image also shows the twin Capellan suns, reveals information about Capellan biology and reproduction, and points out the Capellan home world (a moon orbiting a gas giant planet). Solitarian riots break out across the United States over the next several hours as people begin to fear alien invasion. The Chinese and Russian governments demand that President White not respond. White concludes that the only way to prevent further bloodshed is to shut down The Project and issue no reply. But John White realizes that the message reveals something else: One of the Capellan stars is turning into a red giant, and the Capellan race is dying. The message was not a precursor to invasion; rather, it was an attempt to reach out to other life forms even as the race died. MacDonald tells President White that he can calm the rioters by revealing that the Capellans are dying, and that it will take 90 years for a message to reach Capella and return. White agrees to issue "The Answer" and to begin a propaganda campaign to counter the fear raised by Jeremiah's announcement. John White joins The Project. Thirty years later, Robert "Bobby" MacDonald, Jr. travels to Puerto Rico after his father's death. Robert MacDonald remained obsessed with his job. Although his son idolized him, Maria could no longer handle the strain and left with Bobby. Bobby blames his father for his mother's death, and refused to communicate with him for decades. A middle-aged John White welcomes Bobby back to The Project. 
Bobby discovers that The Project's massive computer has been recording all the conversations in the control room for the past 75 years, and White plays snippets of these conversations for Bobby: The reaction to "The Message," President White's discussions with MacDonald, Jeremiah's visit to MacDonald's memorial service, and much more. John White reveals that The Project is adrift without Robert MacDonald to lead it, and might shut down. Their biggest fear is that they might receive "The Reply" but no one would be listening. Bobby, who has earned degrees in electrical engineering and computer programming, comes to terms with his father's absent parenting, and agrees to join The Project as it waits for The Reply. In the year 2118, the computer which runs The Project is secretly close to gaining sentience after having accumulated nearly 200 years' worth of knowledge. William MacDonald, Bobby MacDonald's son, is the new director of The Project. As the "Day of Reply" nears, the entire world becomes excited, and musical compositions, motion pictures, plays, and philosophical discussions are presented worldwide. Some in the audience at The Project's headquarters in Puerto Rico believe that the ghosts of Robert and Bobby MacDonald haunt the facility, but these claims are dismissed as holographic projections created by the computer to keep the staff's hopes alive. The Reply arrives, containing a vast encyclopedia of Capellan knowledge and art. But The Reply also reveals that the Capellans died millennia ago and that the message Earth has been receiving is nothing more than an automated response from self-repairing machinery set in motion ages ago by the alien race. The Project computer slowly begins assimilating the Capellan information, secretly becoming more Capellan-like, as humanity turns inward again and starts to learn about the dead Capellans. The story ends with the revelation that, 50 years later, another message is received from the Crab Nebula. Development The novel is largely made up of previously published short stories which were adapted to form a coherent novel. These include: The "Robert MacDonald" chapters previously appeared as the short story "The Listeners" in Galaxy Magazine, September 1968. The chapter "George Thomas—2027" previously appeared as the short story "The Voices" in Fantasy & Science Fiction, September 1972. The chapter "William Mitchell—2028" previously appeared as the short story "The Message" in Galaxy Magazine, May–June 1971. The chapter "Andrew White—2028" previously appeared as the short story "The Answer" in Galaxy Magazine, January–February 1972. The chapter "The Computer—2118" previously appeared as the short story "The Reply" in Galaxy Magazine, May–June 1972. Gunn wrote the stories which would become the book in 1966, during a sabbatical from his job as an administrative assistant to the Chancellor for University Relations at the University of Kansas. The short stories were inspired by Walter S. Sullivan's book We Are Not Alone, which documented then-nascent efforts to search for extraterrestrial intelligence (SETI) using radio telescopes. The SETI project piqued Gunn's interest: "...what stimulated my writer's instinct was the concept of a project that might have to be pursued for a century without results. What kind of need would produce that kind of dedication, I pondered, and what kind of people would it enlist—and have to enlist if it were to continue?" 
The story was written as a novelette, but Gunn agreed to have the chapter then called "The Listeners" published in Galaxy Magazine after editor Frederik Pohl announced the magazine was returning to a monthly publication schedule and needed a great deal of new material. This version of the story was nominated for the 1969 Nebula Award for Best Novelette. Most of the rest of the novelette was published as individual short stories over the next few months. Charles Scribner's Sons, a book publishing house, was developing a new line of science fiction novels, and an editor approached Gunn about collecting the stories and editing them together to fashion a complete novel out of the material. Gunn added the "Computer" chapters as a means of cementing the disparate chapters together and added the final chapter, "Translations," and the novel was published in hard cover in late 1972. Reception The Listeners had a substantial impact on the field of the science fiction novel. National Aeronautics and Space Administration Chief Historian Steven J. Dick has called the novel the "classic expression" of contact with extraterrestrial life in fiction. Along with such writers as Pulitzer Prize-winner Carl Sagan, Gunn has been called one of the chief contributors to the subgenre of science fiction which deals with alien first contact. According to Gunn, scientist and author Carl Sagan told Gunn that The Listeners was the inspiration for his own novel, Contact. In 2010, the novel was nominated by a group of scientists and writers (assembled by New Scientist magazine) as one of the science fiction novels of the 20th century which is an unrecognized classic in the field. Footnotes External links 1972 American novels 1972 science fiction novels American science fiction novels Charles Scribner's Sons books Fiction about the Crab Nebula Modernist novels Nonlinear narrative novels
387562
https://en.wikipedia.org/wiki/Sports%20video%20game
Sports video game
A sports video game is a video game that simulates the practice of sports. Most sports have been recreated with a game, including team sports, track and field, extreme sports, and combat sports. Some games emphasize actually playing the sport (such as FIFA, Pro Evolution Soccer and Madden NFL), whilst others emphasize strategy and sport management (such as Football Manager and Out of the Park Baseball). Some, such as Need for Speed, Arch Rivals and Punch-Out!!, satirize the sport for comic effect. This genre has been popular throughout the history of video games and is competitive, just like real-world sports. A number of game series feature the names and characteristics of real teams and players, and are updated annually to reflect real-world changes. The sports genre is one of the oldest genres in gaming history. Game design Sports games involve physical and tactical challenges, and test the player's precision and accuracy. Most sports games attempt to model the athletic characteristics required by that sport, including speed, strength, acceleration, accuracy, and so on. As with their respective sports, these games take place in a stadium or arena with clear boundaries. Sports games often provide play-by-play and color commentary through the use of recorded audio. Sports games sometimes make use of different modes for different parts of the game. This is especially true in games about American football such as the Madden NFL series, where executing a pass play requires six different gameplay modes in the span of approximately 45 seconds. Sometimes, other sports games offer a menu where players may select a strategy while play is temporarily suspended. Association football video games sometimes shift gameplay modes when it is time for the player to attempt a penalty kick, a free shot at goal from the penalty spot, taken by a single player. Some sports games also require players to shift roles between the athletes and the coach or manager. These mode switches are more intuitive than other game genres because they reflect actual sports. Older 2D sports games sometimes used an unrealistic graphical scale, where athletes appeared to be quite large in order to be visible to the player. As sports games have evolved, players have come to expect a realistic graphical scale with a high degree of verisimilitude. Sports games often simplify the game physics for ease of play, and ignore factors such as a player's inertia. Games typically take place with a highly accurate time-scale, although they usually allow players to play quick sessions with shorter game quarters or periods. Sports games sometimes treat button-pushes as continuous signals rather than discrete moves, in order to initiate and end a continuous action. For example, football games may distinguish between short and long passes based on how long the player holds a button. Golf games often initiate the backswing with one button-push, and the swing itself is initiated by a subsequent push (a brief sketch of this timing approach appears after the Arcade subsection below). Types Arcade Sports games have traditionally been very popular arcade games. The competitive nature of sports lends itself well to the arcades where the main objective is usually to obtain a high score. The arcade style of play is generally more unrealistic and focuses on a quicker gameplay experience. However, the competitive nature of sports and the ability to gain a high score while competing against friends for free online have made online sports games very popular. Examples of this include the NFL Blitz and NBA Jam series. 
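The Game design section above notes that some games treat button-pushes as continuous signals, for example distinguishing a short pass from a long pass by how long the button is held. The following minimal Python sketch illustrates that timing idea only; the threshold value and function names are illustrative assumptions rather than details taken from any particular title.

    import time

    HOLD_THRESHOLD_S = 0.35  # assumed cutoff between a quick tap and a deliberate hold

    def classify_pass(press_time: float, release_time: float) -> str:
        # Map how long the pass button was held to a pass type.
        held_for = release_time - press_time
        return "long pass" if held_for >= HOLD_THRESHOLD_S else "short pass"

    if __name__ == "__main__":
        pressed = time.monotonic()
        time.sleep(0.5)  # stands in for the player holding the button
        released = time.monotonic()
        print(classify_pass(pressed, released))  # prints "long pass"

The same pattern extends to the golf example, where one press starts the backswing and the time until a second press sets the power of the swing.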
Simulation Simulation games are more realistic than arcade games, with the emphasis being more on realism than on how fun the game is to pick up and play. Simulation games tend to be slower and more accurate while arcade games tend to be fast and can have all kinds of ad-hoc rules and ideas thrown in, especially pre-2000. Management A sports management game puts the player in the role of team manager. Whereas some games are played online against other players, management games usually pit the player against AI-controlled teams in the same league. Players are expected to handle strategy, tactics, transfers, and financial issues. Various examples of these games can be found in the sports management category. Multi-sport Since Track & Field (1983), various multi-sport video games have combined multiple sports into a single game. Wii Sports and Wii Sports Resort are recent examples. A popular sub-genre is Olympic video games, including Track & Field and other similar titles. Multi-sport tournaments are becoming the basis for computer games. Sports-based fighting Sports-based fighting games are titles that fall firmly within the definitions of both the fighting game and sports game genres, such as boxing and wrestling video games. As such, they are usually put in their own separate subgenres. Often the fighting is far more realistic than in traditional fighting games (though the amount of realism can greatly vary), and many feature real-world franchises or fighters. Examples of this include the Fight Night, UFC 2009 Undisputed, EA Sports UFC and WWE 2K series. History Origins (1958–1972) Sports video games have origins in sports electro-mechanical games (EM games), which were arcade games manufactured using a mixture of electrical and mechanical components, for amusement arcades between the 1940s and 1970s. Examples include boxing games such as International Mutoscope Reel Company's K.O. Champ (1955), bowling games such as Bally Manufacturing's Bally Bowler and Chicago Coin's Corvette from 1966, baseball games such as Midway Manufacturing's Little League (1966) and Chicago Coin's All Stars Baseball (1968), other team sport games such as Taito's Crown Soccer Special (1967) and Crown Basketball (1968), and air hockey type games such as Sega's MotoPolo (1968) and Air Hockey (1972) by Brunswick Billiards. The earliest sports video game dates back to 1958, when William Higinbotham created a game called Tennis for Two, a competitive two-player tennis game played on an oscilloscope. The players would select the angle at which to put their racket, and press a button to return the ball. Although this game was incredibly simple, it demonstrated how an action game (rather than previous puzzles) could be played on a computer. Video games prior to the late 1970s were primarily played on university mainframe computers under timesharing systems that supported multiple computer terminals on school campuses. The two dominant systems in this era were Digital Equipment Corporation's PDP-10 and Control Data Corporation's PLATO. Both could only display text, and not graphics, originally output on teleprinters and line printers, but later on single-color CRT screens. Ralph Baer developed Table Tennis for the first video game console, the Magnavox Odyssey, released in 1972. While the console had other sports-themed game cards, they required the use of television overlays while playing similarly to board games or card games. 
Table Tennis was the only Odyssey game that was entirely electronic and did not require an overlay, introducing a ball-and-paddle game design that showcased the potential of the new video game medium. This provided the basis for the first commercially successful video game, Pong (1972), released as an arcade video game by Atari, Inc. Ball-and-paddle era (1973–1975) Numerous ball-and-paddle games that were either clones or variants of Pong were released for arcades in 1973. Atari themselves released a four-player cooperative multiplayer variant, Pong Doubles (1973), based on tennis doubles. In the United States, the best-selling arcade video game of 1973 was Pong, followed by several of its clones and variants, including Pro Tennis from Williams Electronics, Winner from Midway Manufacturing, Super Soccer and Tennis Tourney from Allied Leisure (later called Centuri), and TV Tennis from Chicago Coin. In Japan, arcade manufacturers such as Taito initially avoided video games as they found Pong to be simplistic compared to more complex EM games, but after Sega successfully tested-marketed Pong in Japan, Sega and Taito released the clones Pong Tron and Elepong, respectively, in July 1973, before the official Japanese release of Pong by Atari Japan (later part of Namco) in November 1973. Tomohiro Nishikado's four-player Pong variant Soccer was released by Taito in November 1973, with a green background to simulate an association football playfield along with a goal on each side. Another Taito variant, Pro Hockey (1973), set boundaries around the screen and only a small gap for the goal. Tomohiro Nishikado wanted to move beyond simple rectangles to character graphics, resulting in his development of a basketball game, Taito's TV Basketball, released in April 1974. It was the earliest use of character sprites to represent human characters in a video game. While the gameplay was similar to earlier ball-and-paddle games, it displayed images both for the players and the baskets, and attempted to simulate basketball. Each player controls two team members, a forward and a guard; the ball can be passed between team members before shooting, and the ball has to fall into the opposing team's basket to score a point. The game was released in North America by Midway as TV Basketball, selling 1,400 arcade cabinets in the United States, a production record for Midway up until they released Wheels the following year. Ramtek later released Baseball in October 1974, similarly featuring the use of character graphics. In 1975, Nintendo released EVR-Race, a horse racing simulation game with support for up to six players. It was a mixture between a video game and an electro-mechanical game, and played back video footage from a video tape. Decline (1976–1982) After the market became flooded with Pong clones, the Pong market crashed around the mid-1970s. Sports video games would not regain the same level of success until the 1980s. In 1976, Sega released an early combat sport game, Heavyweight Champ, based on boxing and now considered the first fighting game. In March 1978, Sega released World Cup, an association football game with a trackball controller. In October 1978, Atari released Atari Football, which is considered to be the first video game to accurately emulate American football; it also popularized the use of a trackball, with the game's developers mentioning it was inspired by an earlier Japanese association football game that used a trackball. 
Atari Football was the second highest-earning arcade video game of 1979 in the United States, below only Taito's shoot 'em up blockbuster Space Invaders (1978), though Atari Football was the only sports game among the top ten highest-earners. In 1980, Mattel's Basketball for the Intellivision was the first basketball video game to be licensed by the National Basketball Association (NBA). On home computers, Microsoft's Olympic Decathlon (1980) was one of the first sports-related programs to mix game and simulation elements, and was an early example of an Olympic track-and-field game. The first association football management simulation, Football Manager, was released for the ZX Spectrum computer in 1982. Between 1981 and 1983, the Atari's VCS (2600) and Mattel's Intellivision waged a series of high-stakes TV advertising campaigns promoting their respective systems, marking the start of the first console wars. Atari prevailed in arcade games and had a larger customer base due to its lower price, while Intellivision touted its visually superior sports games. Sports writer George Plimpton was featured in the Intellivision ads, which showed the parallel games side by side. Both Atari and Intellivision fielded at least one game for baseball, American football, hockey, basketball and association football. Atari's sports games included Activision Tennis (1981). Resurgence (1983–1985) Sports video games experienced a resurgence from 1983. As the golden age of arcade video games came to an end, arcade manufacturers began looking for ways to reinvigorate the arcade video game industry, so they began turning to sports games. The arcade industry began producing sports games at levels not seen since the days of Pong and its clones, which played a role in the recovery of the arcade market by the mid-1980s. There were initially high expectations for laserdisc games to help revive the arcade industry in 1983, but it was instead non-laserdisc sports games that ended up being the most well-received hits at amusement arcade shows by late 1983. Arcades In March 1983, Sega released Alpha Denshi's arcade game Champion Baseball, which became a blockbuster success in Japanese arcades, with Sega comparing its impact on Japanese arcades to that of Space Invaders. Champion Baseball was a departure from the "space games" and "cartoon" action games that had previously dominated the arcades, and subsequently served as the prototype for later baseball video games. It had a split-screen format, displaying the playfield from two camera angles, one from the outfield and another close-up shot of the player and batter, while also giving players the option of selecting relief pitchers or pinch hitters, while an umpire looks on attentively to make the game calls. The game also had digitized voices for the umpire, and individual player statistics. Sports games became more popular across arcades worldwide with the arrival of Konami's Track & Field, known as Hyper Olympic in Japan, introduced in September 1983. It was an Olympic-themed athletics game that had multiple Olympic track-and-field events (including the 100-meter dash, long jump, javelin throw, 110-meter hurdles, hammer throw, and high jump) and allowed up to four players to compete. It had a horizontal side-scrolling format, depicting one or two tracks at a time, a large scoreboard that displayed world records and current runs, and a packed audience in the background. 
Despite the industry's hype for laserdisc games at the time, Track & Field became the most well-received game at the Amusement Machine Show (AM Show) in Tokyo and the Amusement & Music Operators Association (AMOA) show in the United States. The game sold 38,000 arcade units in Japan, became one of the top five highest-grossing arcade games of 1984 in the United States, and the top-grossing arcade game of 1984 in the United Kingdom. It was also the basis for an organized video game competition that drew more than a million players in 1984. The success of Track & Field spawned other similar Olympic video games. Numerous sports video games were subsequently released in arcades after Track & Field, including American football games such as 10-Yard Fight (1983) by Irem and Goal to Go (1984) by Stern Electronics, boxing video games such as Nintendo's Punch-Out (1984), martial arts sports fighting games such as Karate Champ (1984), the Nintendo VS. System titles Vs. Tennis and Vs. Baseball, Taito's golf game Birdie King II, and Data East's Tag Team Wrestling. 10-Yard Fight in 1983 had a career mode, where the player progresses from high school, to college, professional, playoff, and Super Bowl, as the difficulty increases with each step. Irem's waterskiing game Tropical Angel had a female player character, and was one of the two most well-received games at the September 1983 AM Show (along with Hyper Olympic) for its graphics and gameplay. Another sports game with female player characters was Taito's Joshi Volleyball (Big Spikers), which topped the Japanese table arcade cabinet chart in December 1983. Kaneko's Roller Aces was a roller skating game played from a third-person perspective, while Technōs Japan released the wrestling game Tag Team Wrestling. In the field of association football games, Alpha Denshi's Exciting Soccer (1983) featured digitized voices and a top-down overhead perspective, which was later popularized by Tehkan World Cup (1985) from Tehkan (later Tecmo). Tehkan World Cup was a multiplayer association football game with a trackball controller, where a button was used for kicking the ball and the trackball used for the direction and speed of the shot, with gameplay that was fairly realistic. It was a landmark title for association football games, considered revolutionary for its trackball control system, its top-down perspective that allows players to see more of the pitch, and its trackball-based game physics. It provided the basis for later association football games such as MicroProse Soccer (1988) and the Sensible Soccer series (1992 debut). Several sports laserdisc games were released for arcades in 1984, including Universal's Top Gear which displayed 3D animated race car driving, while Sega's GP World and Taito's Laser Grand Prix displayed live-action footage. Sega also produced a bullfighting game, Bull Fight, and a multiple-watersports game Water Match (published by Bally Midway), which included swimming, kayaking and boat racing; while Taito released a female sports game based on high-school track & field, The Undoukai, and a dirt track racing game Buggy Challenge, with a buggy. Other dirt racing games from that year were dirt bike games: Nintendo's Excitebike and SNK's motocross game Jumping Cross. Nintendo also released a four-player racquet sport game, Vs. Tennis (the Nintendo Vs. System version of Tennis). That same year, ice hockey games were also released: Alpha Denshi's Bull Fighter and Data East's Fighting Ice Hockey. 
Data East also released a lawn sports game, Haro Gate Ball, based on croquet, while Nichibutsu released a game based on roller derby, Roller Jammer. Meanwhile, Technos Japan released a game based on sumo wrestling, Syusse Oozumou, and the first martial arts combat-sport game, Karate Champ, considered one of the most influential fighting games. In 1985, Nintendo released an arm wrestling game, Arm Wrestling, while Konami released a table tennis game that attempted to accurately reflect the sport, Konami's Ping Pong. Homes On home consoles, Mattel released Intellivision World Series Baseball (IWSB), designed by Don Daglow and Eddie Dombrower, in late 1983. It is considered the earliest sports video game to use multiple camera angles to show the action in a manner resembling a sports television broadcast. Sports games prior to this had displayed the entire field on screen, or scrolled across static top-down fields to show the action. IWSB mimicked television baseball coverage by showing the batter from a modified "center field" camera, the baserunners in corner insets and defensive plays from a camera behind the batter. It was also one of the first sports video games to feature audibly-speaking digitized voices (as opposed to text), using the Mattel Intellivoice module. The game was sophisticated for its time, but was a commercial failure, released around the time of the video game crash of 1983 when the North American home video game market collapsed. Nintendo released a series of highly successful sports games for the Nintendo Entertainment System console and the arcade Nintendo Vs. System, starting with Baseball (1983) and Tennis (1984). They played an important role in the history of the Nintendo Entertainment System, as they were the earliest NES games released in North America, initially in the arcades and then with the console's launch. Nintendo's arcade version VS. Baseball (1984) was competing with Sega's earlier hit Champion Baseball in the arcades. On home computers, Track & Field spawned similar hit Olympic games for computer platforms, such as Ocean Software's Daley Thompson's Decathlon (1984). Electronic Arts (EA) produced their first sports game for home computers, the basketball title Dr. J and Larry Bird Go One on One (1983), which was the first licensed sports game based on the names and likenesses of famous athletes; the inclusion of famous real-world athletes would become one of the most important selling points for sports games. One on One became Electronic Arts' best-selling game, and the highest-selling computer sports game, having sold 400,000 copies by late 1988. Further growth (1986–1994) In the late 1980s, basketball video games gained popularity in arcades. Konami's Double Dribble (1986) featured colorful graphics, five-on-five gameplay, cutaway animations for slam dunks, and a digitized version of "The Star-Spangled Banner" theme. It was considered the most realistic basketball game upon release, with fast-paced action, detailed players, a large side-scrolling court, innovative cinematic dunks, and detailed sound effects, beginning a trend where presentation would play an increasingly important role in sports games. Magic Johnson's Fast Break (1988) by Arcadia Systems had detailed characters and audio clips of Magic Johnson's voice. 
Midway, which had not released a basketball game since Taito's TV Basketball in 1974, released Arch Rivals (1989), a two-on-two game featuring large players with distinct looks, a basketball court, a crowd, cheerleaders, four periods, the ability to rough up an opponent, and big dunks capable of shattering the backboard. Konami's Punk Shot (1990) was an arcade basketball game with an element of violence, allowing players to physically attack each other, which CU Amiga magazine compared to the film Rollerball (1975). The success of the Nintendo Entertainment System (NES) in North America led to it becoming a major platform for American sports video games. Basketball games included a port of Double Dribble, with a halo mechanic signifying the optimum release for shots, and Tecmo NBA Basketball (1992). American football video games included Tecmo Bowl (1987), which was ported to the NES with the NFL Players Association license, and Tecmo Super Bowl (1991), which introduced a season mode with nearly the entire NFL roster. Tecmo Super Bowl is considered to be one of the greatest and most influential games of all time, as it was the first mainstream sports video game with both the league and player association licenses, with ESPN ranking it the greatest sports video game of all time. Sega also developed American football games for their competing Master System console, Great Football in 1987 and American Pro Football (Walter Payton Football) in 1989, the latter very well-received by critics at the time. The late 1980s is considered the "Golden Age" of baseball video games. Namco's R.B.I. Baseball (1986) and the Atlus title Major League Baseball (1988) for the NES were the first fully licensed baseball video games. SNK's Baseball Stars (1989) was a popular arcade-style NES game, while Jaleco's NES title Bases Loaded (1987) was a simulation game with statistics. In 1988, EA released Earl Weaver Baseball, developed by Don Daglow and Eddie Dombrower, which for the first time combined a highly accurate simulation game with high-quality graphics. This was also the first game in which an actual baseball manager provided the computer AI. In 1996, Computer Gaming World named EWB the 25th of its Best 150 Games of All Time, the second highest ranking for any sports game in that 1981–1996 period (after FPS Football). The 1990s began in the 16-bit era, as a wave of fourth-generation video game consoles was created to handle more complex games and graphics. The Sega Genesis/Mega Drive in particular became renowned for its sports video games, as it was more powerful than the NES and Sega targeted an older audience than Nintendo's typically younger demographic at the time. Basketball video games included EA's Lakers versus Celtics and the NBA Playoffs (1991), which launched the NBA Live series. World Series Baseball (1994) introduced the "catcher-cam" perspective, launching the World Series Baseball series and becoming the first game in the Sega Sports line. In 1989, Electronic Arts producer Richard Hilleman hired GameStar's Scott Orr to re-design John Madden Football for the fast-growing Sega Genesis. In 1990, Orr and Hilleman released Madden Football. They focused on producing a head-to-head two-player game with an intuitive interface and responsive controls. Electronic Arts had only expected to sell around 75,000 units, but instead the title sold around 400,000 units. 
In 1990, Taito released Football Champ, an association football game that allows up to four players in both competitive and cooperative gameplay. It also let players perform a number of actions, including a back heel, power kick, high kick, sliding tackle, super shot, and fouling other players (kicking, punching, and pulling shirts), which the player can get away with if the referee isn't looking, or get a yellow or red penalty card for if he is. In 1991, the American football game Tecmo Super Bowl was the first mainstream sports game to feature both the league and player association licenses of the sport it emulated; previous titles either had one license or the other, but Tecmo Super Bowl was the first to feature real NFL players on real teams. Orr joined EA full-time in 1991 after the success of Madden on the Sega Genesis, and began a ten-year period of his career where he personally supervised the production of the Madden Football series. During this time EA formed EA Sports, a brand name used for sports games they produced. EA Sports created several ongoing series, with a new version released each year to reflect the changes in the sport and its teams since the previous release. Sega launched its own competing NFL series on the Sega Genesis. The gameplay of Sega's earlier Master System title Great Football (1987) was the basis for Joe Montana Football (1991), developed by EA and published by Sega for the Genesis. Sega then released their own sequel without EA's involvement, Joe Montana II: Sports Talk Football (1991), which became the first American football game with audio commentary. After Sega acquired the NFL license, they shortened the title to NFL Sports Talk Football Starring Joe Montana, which later became known as Sega's NFL series. Due to strong competition from Madden, the series was cancelled in 1997. Licensed basketball games became more common by the early 1990s, including Sega's Pat Riley Basketball (1990) and Acme Interactive's David Robinson's Supreme Court (1992) for the Sega Genesis, and Hudson Soft's Bill Laimbeer's Combat Basketball (1991) for the Super Nintendo Entertainment System (SNES). EA followed Jordan vs. Bird: One on One (1988) with Lakers versus Celtics and the NBA Playoffs (1989), the latter ported to the Genesis in 1991, which added more simulation aspects to the subgenre. In the arcades, Midway followed Arch Rivals with NBA Jam (1993), which introduced digitized sprites similar to their fighting game Mortal Kombat (1992), combined with a gameplay formula similar to Arch Rivals. In its first twelve months of release, NBA Jam became the highest-grossing arcade sports game of all time. FIFA International Soccer (1993), the first game in EA's FIFA series of association football video games, was released on the Sega Mega Drive and became the best-selling home video game of 1993 in the United Kingdom. In contrast to the top-down perspective of earlier association football games, FIFA introduced an isometric perspective to the genre. International Superstar Soccer (1994), the first game in Konami's International Superstar Soccer (ISS) series, was released for the SNES. A rivalry subsequently emerged between the FIFA and ISS franchises. Transition to 3D polygons (1994–1997) In the 1990s, 3D graphics were introduced in sports games. Early uses of flat-shaded polygons date back to 1991, with home computer games such as 4D Sports Boxing and Winter Challenge. 
However, it was not until the mid-1990s that 3D polygons were popularized in sports games. Sega's arcade title Virtua Striker (1994) was the first association football game to use 3D graphics, and was also notable for its early use of texture mapping. Meanwhile, Sierra Online released the American football title Front Page Sports Football in 1995 for the PC. The following year, Computer Gaming World named it twelfth of the Best 150 Games of All Time, the highest ranking sports game on the list. International Superstar Soccer Pro (ISS Pro), released for the PlayStation in 1997, was considered a "game-changer" for association football games, which had been largely dominated by rival FIFA on home systems for the previous several years. Developed by Konami Tokyo, ISS Pro introduced a new 3D engine capable of better graphics and more sophisticated gameplay than its rival. Whereas FIFA had a simpler "arcade-style" approach to its gameplay, ISS Pro introduced more complex simulation gameplay emphasizing tactics and improvisation, enabled by tactical variety such as nine in-match strategy options. In 1997, Electronic Gaming Monthly reported that sports games accounted for roughly 50% of console software sales. Extreme sports enter into the mainstream (1996–2001) At the end of the 20th and beginning of the 21st century, extreme sports video games began to appear more frequently. Namco's Alpine Racer (1994) was a skiing winter sports simulator that became a major success in arcades during the mid-1990s. This led to a wave of similar sports games capitalizing on its success during the late 1990s, from companies such as Sega, Namco, Konami and Innovative Concepts. In 1996, two snowboarding video games were released: Namco's Alpine Surfer in the arcades, and the UEP Systems game Cool Boarders for the PlayStation console. The following year, Square's popular role-playing video game, Final Fantasy VII, included a snowboarding minigame that was later released as an independent snowboarding game, Final Fantasy VII Snowboarding, for mobile phones. In 2000, SSX was released. Based around boardercross, the game featured fast downhill races, avoiding various objects whilst using others to perform jumps and increase the player's speed. In 1997, Sega released one of the first mainstream skateboarding games, Top Skater, in the arcades, where it introduced a skateboard controller interface. Top Skater served as a basic foundation for later skateboarding games. The following year saw the release of the console skateboarding game Street Sk8er, developed by Atelier Double and published by Electronic Arts. In 1999, the subgenre was further popularized by Tony Hawk's Pro Skater, an arcade-like skateboarding game where players were challenged to execute elaborate tricks or collect a series of elements hidden throughout the level. Tony Hawk's went on to be one of the most popular sports game franchises. Sports games become big business (2002–2005) Association football games became more popular in the 2000s. Konami's ISS series spawned the Pro Evolution Soccer (PES) series in the early 2000s. A rivalry subsequently emerged between FIFA and PES, considered the "greatest rivalry" in the history of sports video games. PES became known for having "faster-paced tactical play" and more varied emergent gameplay, while FIFA was known for having more licenses. The FIFA series remained the commercial leader, but the sales gap between the two franchises had narrowed by the mid-2000s. 
On December 13, 2004, Electronic Arts began a string of deals that granted exclusive rights to several prominent sports organizations, starting with the NFL. This was quickly followed by two deals in January 2005 securing rights to the AFL and ESPN licenses. This was a particularly hard blow to Sega, the previous holder of the ESPN license, who had already been affected by EA's NFL deal. As the market for football brands was being quickly taken by EA, Take-Two Interactive responded by contacting the Major League Baseball Players Association and signing a deal that granted exclusive third-party major-league baseball rights, a deal that was less restrictive, as first-party projects were still allowed. The NBA was then approached by several developers, but declined to enter into an exclusivity agreement, instead granting long-term licenses to Electronic Arts, Take-Two Interactive, Midway Games, Sony, and Atari. In April 2005, EA furthered its hold on American football licensing by securing rights to all NCAA brands. Motion detection Sega Activator: IR motion detection (1993–1994) In 1993, Sega released the Sega Activator, a motion detection game controller designed to respond to a player's body movements, for their Genesis console. The Activator was based on the Light Harp, a MIDI controller invented by Assaf Gurner, an Israeli musician and Kung Fu martial artist who researched interdisciplinary concepts to create the experience of playing an instrument using the whole body's motion. Released for the Mega Drive (Genesis) in 1993, the Activator could read the player's physical movements and was the first controller to allow full-body motion sensing. The original invention was a three-octave musical instrument that could interpret the user's gestures into musical notes via the MIDI protocol. The invention was initially registered as a patent in Israel on May 11, 1988, after four years of R&D. In 1992, the first complete Light Harp was created by Assaf Gurner and Oded Zur, and was presented to Sega of America. Like the Light Harp, the Activator is an octagonal frame that lies on the floor. Light-emitting diodes (LEDs) on the frame vertically project thin, invisible beams of infrared light. When something, such as a player's arm or leg, interrupts a beam, the device reads the distance at which the interruption occurred, and interprets the signal as a command. The device can also interpret signals from multiple beams simultaneously (i.e., chords) as a distinct command. Sega designed special Activator motions for a few of their own game releases. By tailoring motion signals specifically for a game, Sega attempted to provide a more intuitive gaming experience. A player could, for example, compete in Greatest Heavyweights of the Ring or Eternal Champions by miming punches. Despite these efforts, the Activator was a commercial failure. Like the Power Glove of 1989, it was widely rejected for its "unwieldiness and inaccuracy". Wii Remote: IR motion detection with accelerometry (2006–2009) In 2006, Nintendo released Wii Sports, a sports game for the Wii console in which the player had to physically move their Wii Remote to move their avatar, known as a Mii. The game contained five different sports—boxing, bowling, golf, tennis, and baseball—which could all be played individually or with multiple players. Players could also track their skill progress through the game, as they became more proficient at the different sports, and use the training mode to practice particular situations. 
As of 2013, Wii Sports was the second-highest-selling video game of all time. Wii Sports opened the way for other physically reactive sports-based video games, such as Mario & Sonic at the Olympic Games, the first official title to feature both Mario and Sonic the Hedgehog, in which players used the Wii Remote to simulate running, jumping and other Olympic sports. In 2008, Nintendo released Wii Fit, which allowed players to do aerobic and fitness exercises using the Wii Balance Board. Similarly, 2008 saw the release of Mario Kart Wii, a racing game which allowed the player to use their remote with a Wii Wheel to act as a steering wheel, akin to those on traditional arcade racing games. Sports games today (2010–present) The most popular subgenre in Europe is association football games, which up until 2010 was dominated by EA Sports with the FIFA series and Konami with the Pro Evolution Soccer (PES) series. While FIFA was commercially ahead, the sales gap between the two franchises had narrowed. EA responded by borrowing gameplay elements from PES to improve FIFA, which eventually pulled ahead commercially by a significant margin in the 2010s and emerged as the world's most successful sports video game franchise. In North America, the sports genre is currently dominated by EA Sports and 2K Sports, who hold licenses to produce games based on official leagues. EA's franchises include the Madden NFL series, the NHL series, the FIFA series, and the NBA Live series. 2K Sports' franchises include the NBA 2K and WWE 2K series. All of these games feature real leagues, competitions and players. These games continue to sell well today despite many of the product lines being over a decade old, and receive, for the most part, consistently good reviews. With 2K and EA Sports' domination and many sports leagues carrying exclusive licenses, the North American sports video game market has become very difficult to enter; competing games in any of the above genres, with the exception of racing games, tend to be unsuccessful. This has led to a sharp drop in sports-themed titles over recent years, especially arcade titles. One of the most notable exceptions is Konami's Pro Evolution Soccer series, which is often hailed as an alternative to the FIFA series, but does not contain as many licensed teams, players, kits, or competitions. Another deviation from the norm is Sony's MLB The Show series, which now has a monopoly on the baseball genre following the withdrawal of 2K after MLB 2K13. Racing games, due to the variation that the sport can offer in terms of tracks, cars and styles, offer more room for competition, and the selection of games has been considerably greater (examples being F1 and the World Rally Championship, and many unlicensed games). Sports management games, while not as popular as they used to be, live on through small and independent software development houses. Management titles today have transitioned to the very popular fantasy sports leagues, which are available through many websites such as Yahoo. Independent developers are also creating sports titles like Super Mega Baseball, The Golf Club, and Freestyle2: Street Basketball. Nintendo has been able to make an impact upon the sports market by producing several Mario-themed titles, such as Mario Sports Mix, Mario Golf: Super Rush, Mario Sports Superstars, Mario Tennis Aces, and Mario Strikers: Battle League. 
These titles sell respectably, but are only available on Nintendo's video game consoles, such as the GameCube, Nintendo 64, Nintendo 3DS, Wii, Wii U and Nintendo Switch. See also Lists of sports video games References External links Video game genres Video game terminology
13119609
https://en.wikipedia.org/wiki/Wireless%40SG
Wireless@SG
Wireless@SG is a wireless broadband programme developed by the Infocomm Development Authority (IDA) of Singapore as part of its Next Generation National Infocomm Infrastructure initiative, which is part of the nation's 10-year masterplan called Intelligent Nation 2015 (iN2015). The targeted users of this wireless broadband network are broadly classified as "people on the move" – people who require wireless broadband access while away from their homes, schools and offices. These include students, tourists, business travellers and enterprise users such as insurance agents and real estate agents who use widely available and wireless-enabled devices such as notebook PCs and PDAs. Once connected, users are able to access all Internet-based services, e.g. online gaming, web surfing, instant messaging, VoIP and email. History The IDA issued a Call-for-Collaboration (CFC) in early 2006 for interested operators to provide such coverage. In late 2006, the IDA accepted proposals from iCELL Network, QMAX Communications and SingTel to kick-start the nation's progressive deployment of a widely available wireless network by September 2007. On 10 October 2006, Prime Minister Lee Hsien Loong announced that Singapore would enjoy two years of free wireless broadband connections from January 2007; on 30 November 2006, the IDA announced that the three Wireless@SG operators had extended this free offering to three years. Users started to enjoy wireless connectivity from 1 December 2006, one month ahead of schedule, at selected hotspots. Network deployment for all Primary Catchment Areas in all regions was completed on 31 May 2007, and deployment for the Secondary Catchment Areas in all regions was completed on 30 September 2007. The CFC on Wireless Broadband Market Development was scheduled to be completed on 31 December 2008. It had two main objectives: Accelerating the deployment of wireless broadband by providing coverage in locations where users out of their homes, schools and offices can conveniently access wireless broadband services using data-centric computing devices. Catalysing the demand for wireless broadband services by increasing the number of wireless broadband users. On 16 June 2009, the government announced an extension of the programme until March 2013 and an enhancement of speed up to 1 Mbit/s. Access speeds were increased to 1 Mbit/s in 2008, and 2 Mbit/s in 2009. In September 2009, M1 Limited acquired local connectivity provider Qala Singapore, taking over QMax's Wireless@SG service. IDA funding for the "deployment and operations costs of Wireless@SG" hotspots stopped in April 2013. It was reported in the main newspaper that the number of hotspots had dipped, and that some malls would be left without the free Wi-Fi hotspots. On 1 April 2013, StarHub and Y5Zone were added to the list of Wireless@SG operators, while iCELL was dropped because the company's proposal did not meet IDA's requirements for the next deployment phase of the programme; iCELL's hotspots remained supported until 30 June 2013. On 11 April 2016, Parliament announced that the surfing speeds of Wireless@SG would be increased to 5 Mbit/s, up from 2 Mbit/s. The number of hotspots would also double to 20,000. Operators Current: Singtel, M1, StarHub, MyRepublic. Previous: QMAX, iCELL, Y5Zone. Coverage To access Wireless@SG, users need to be located within the respective Wireless@SG coverage areas. The list of hotspots includes shopping centres, libraries, museums, public swimming pools, cafes, restaurants, fast food joints and other public venues. 
The latest coverage areas can be found at Google Earth and at the IDA portal. Accessing the network To connect to the Wireless@SG wireless broadband network, a user just needs a WiFi-enabled device (such as a laptop, mobile phone, or tablet) and a mobile number. Once registered, the user is able to roam within any of Wireless@SG's coverage areas, regardless of the operator's network. Available login methods include SIM-based authentication (EAP-SIM), Seamless and Secure Access (PEAP with EAP-MSCHAPv2), and HTTP-based login (captive portal); a configuration sketch for the Seamless and Secure Access profile appears at the end of this entry. Users are encouraged by the IDA to install and activate a Virtual Private Network (VPN) or other encryption mechanism, a personal firewall, and anti-virus software with the latest signature files. The IDA also encourages users to avoid ad hoc wireless networking to safeguard their security. There are various ways users can register for this service: HTTP-based manual login with mobile number: Locals and foreign visitors can access Wireless@SG by entering their mobile number on the sign-in page. A One-Time Password (OTP) will be sent to them via SMS. Seamless and Secure Access (SSA) automatic login: Users can configure their device to connect to the Wireless@SGx network automatically, as opposed to generating a One-Time Password via SMS every time on the Wireless@SG network. EAP-SIM authentication is available for devices with a SIM card. The Wireless@SG app is available on Windows, Mac, Android, and iOS, for devices without a SIM card. It sets up an 802.1X profile with PEAP authentication, after users provide a Singapore NRIC/FIN and a mobile number. References External links Wireless@SG – Infocomm Media Development Authority of Singapore Wireless@SG Programme Introduction See: Archive Wireless@SG operators Wireless@SG - Singtel Page Wireless@SG - M1 Page Wireless@SG - Starhub Page Wireless@SG - Y5Zone Page Internet in Singapore Broadband Wi-Fi providers
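The Seamless and Secure Access method described above provisions an 802.1X profile that uses PEAP with EAP-MSCHAPv2 on the Wireless@SGx network. As a rough sketch only, the Python snippet below renders such a profile as a wpa_supplicant-style network block; the identity and password values are hypothetical placeholders, since real credentials are issued through registration or the Wireless@SG app rather than hard-coded like this.

    import textwrap

    def ssa_profile(identity: str, password: str) -> str:
        # Render a wpa_supplicant-style network block for the Wireless@SGx
        # SSID using PEAP with MSCHAPv2 as the inner method, mirroring the
        # parameters named above; the credential values are placeholders.
        return textwrap.dedent(f"""\
            network={{
                ssid="Wireless@SGx"
                key_mgmt=WPA-EAP
                eap=PEAP
                phase2="auth=MSCHAPV2"
                identity="{identity}"
                password="{password}"
            }}""")

    if __name__ == "__main__":
        # Hypothetical placeholder credentials, for illustration only.
        print(ssa_profile("example_user", "example_password"))

SIM-equipped devices would instead use EAP-SIM, and devices without either profile fall back to the captive-portal login with a one-time password sent by SMS.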
1126158
https://en.wikipedia.org/wiki/Microsoft%20Project
Microsoft Project
Microsoft Project is a project management software product, developed and sold by Microsoft. It is designed to assist a project manager in developing a schedule, assigning resources to tasks, tracking progress, managing the budget, and analyzing workloads. Microsoft Project was the company's third Microsoft Windows-based application. Within a few years after its launch, it became the dominant PC-based project management software. It is part of the Microsoft Office family but has never been included in any of the Office suites. It is available currently in two editions, Standard and Professional. Microsoft Project's proprietary file format is .mpp. Microsoft Project and Microsoft Project Server are the cornerstones of the Microsoft Office enterprise project management (EPM) product. History 'Project' was an MS-DOS software application originally written in Microsoft 'C' (and some assembly) language for the IBM PC. The idea originated from the vision of Ron Bredehoeft, a former IBM S/E and PC enthusiast in the early 1980s, to express the recipe and all preparation for a breakfast of eggs Benedict in project management terms. Mr. Bredehoeft formed Microsoft Application Services (MAS) during the birth of the application and the company later entered an OEM agreement with Microsoft Corporation. Alan M. Boyd, Microsoft's Manager of Product Development, introduced the application as an internal tool to help manage the huge number of software projects that were in development at any time inside the company. Boyd wrote the specification and engaged a local Seattle company to develop the prototype. The first commercial version of Project was released for DOS in 1984. Microsoft bought all rights to the software in 1985 and released version 2. Version 3 for DOS was released in 1986. Version 4 for DOS was the final DOS version, released in 1986. The first Windows version was released in 1990, and was labelled version 1 for Windows. In 1991 a Macintosh version was released. Development continued until Microsoft Project 4.0 for Mac in 1993. Microsoft Project 4 for the Mac included both 68k and PowerMac versions, Visual Basic for Applications and integration with Microsoft office 4.2 for the Mac. In 1994, Microsoft stopped development of most of its Mac applications and did not offer a new version of Office until 1998, after the creation of the new Microsoft Macintosh Business Unit the year prior. The Mac Business Unit never released an updated version of Project, and the last version does not run natively on macOS. Microsoft Project 1.0 was the only version to support Windows 2.x (Windows 2.0 and Windows 2.1x). It came bundled with Windows 2.x runtime but was fully compatible with Windows 3.0, especially Standard and Enhanced modes. The setup program runs in DOS, like most Windows-based applications at the time. Microsoft Project 3.0 introduced macro support, toolbars, print preview, DDE and OLE support, spell checking, Resource Allocation view and Planning Wizards and was the last to support Windows 3.0. The setup program now runs in Windows, and it is based on Microsoft's own setup program, which was also used by e.g. Microsoft Visual Basic 2.0/3.0, Works 2.0, Access 1.x. Microsoft Project 4.0 was the first to use common Office menus, right-click context menus, Acme setup program and the last to support Windows 3.1x, Windows NT 3.1 and 3.5. It was the last 16-bit version. 
Additionally it was the first version to use VBA macro language and introduced screen tooltips, Cue Cards, GanttChartWizard, Calendar view, Assign Resources dialog, recurring tasks, workgroup abilities, Drawing toolbar, Microsoft Project Exchange file format support, OLE 2.0 and ability to create reports. This version allowed user to consolidate up to 80 projects. Microsoft Project 95 (4.1) was the first 32-bit version and it was designed for Windows 95, hence the name even though some components such as the welcome tour, help components etc. remained 16-bit. It introduced ODBC support, AutoCorrect, Answer Wizard, like all Office 95 applications. Updated version, called Microsoft Project 4.1a improved Windows NT support. Additionally it was the first version to be available on CD-ROM. Additionally it was the last version to open Project 3.0 files. Microsoft Project 98 was fully 32-bit, and the first to use Tahoma font in the menu bars, to contain Office Assistant, like all Office 97 applications, introduced view bar, AutoFilter, task splitting, Assignment Information dialog, resource availability dates, project status date, user-entered actual costs, new task types, multiple critical paths, in-sheet controls, ability to rename custom fields, Web publishing features, new database format, Task Usage, Tracking Gantt and Resource Usage views, Web features, Web toolbar, PERT analysis features, resource contouring, cost rate tables, effort-driven scheduling, cross-project linking, indicators, progress lines, ability to save project files in HTML format, ability to analyze time-scaled data in Excel, improved limits for the number of tasks, resources, outline levels etc., IntelliMouse and Microsoft Office Binder support, Microsoft Outlook timeline integration, selective data import and export, ability to save as Microsoft Excel pivot tables, Microsoft Project Map, Project menu and allowed user to consolidate 1,000 projects. It was the last version to run on Windows NT 3.51, the last to open Project 4.0/95 files and save in .mpx (Microsoft Project Exchange) file format, the last to use Acme setup program and the last to be available on floppy disks. Project 98 SR-1 was a major service release addressing several issues in Project 98. Microsoft Project 2000 was the first to use personalized menus, Microsoft Agent-based Office Assistant and to use Windows Installer-based setup interface, like all Office 2000 applications, and introduced Microsoft Project Central (later renamed Microsoft Project Server). PERT Chart was renamed Network Diagram and was greatly improved in this version. Notable new features include ability to create personal Gantt charts, ability to apply filters in Network Diagram view, AutoSave, task calendars, ability to create projects based on templates and to specify default save path and format, graphical indicators, material resources, deadline dates, OLE DB, grouping, outline codes, estimated durations, month duration, value lists and formulas custom fields, contoured resource availability, ability to clear baseline, variable row height, in-cell editing, fill handle, ability to set fiscal year in timescale, single document interface, accessibility features, COM add-ins, pluggable language user interface, roaming user and Terminal Services support, ability to set task and project priority up to 1,000 (previously 10) and HTML help. Project 2000 was also the last version to support Find Fast (available in Windows 9x and NT 4.0 only) and to run on Windows 95. 
Project 2000 SR-1 fixed several bugs. Microsoft Project 2002 was the first to contain task panes, safe mode, smart tags, import/setup tracking/new project/calendar/import and export mapping wizards, ability to import tasks from Outlook and to save multiple baselines along with additional baseline fields, Project Guide, EPM/portfolio features (Professional only), Excel task list template, rollup baseline data to summary tasks on a selective baseline save, ability to choose which baseline the earned value calculations are based on, calculation options, multiple project manager support (Project Server is required), Collaborate menu, "Type a question for help" in the top right corner, error reporting along with mandatory product activation, like Office XP and Windows XP, and the ability to open and save Microsoft Project Data Interchange (.mspdi) files. It was also the last version to run on Windows NT 4.0, 98 (SE) and ME. It was available in two editions for the first time, Standard and Professional. Office Assistant is installed but not enabled by default. Support for accounts with limited rights under Windows 2000/XP was improved. Find Fast was dropped in favor of the Windows 2000/XP Indexing Service. Microsoft Project 2003 was the first to support Windows XP visual styles and to contain SharePoint support, XML importing/printing/Copy Picture to Office wizards, built-in Office Online help, ability to create WBS charts in Visio, an add-in for comparing projects (available as a freely downloadable add-on for Project 2000 and 2002), resource availability graphs, ability to import resource information from Active Directory and the Exchange address book, and Windows XP-style icons, like all Office 2003 applications, and the last to contain Office Assistant (not installed by default) and to run on Windows 2000 (Service Pack 3 required). Microsoft Project 2007 was the last to contain the menu bar and toolbars. New features include top-level budget planning, multiple-level undo, ability to manage non-working time, background cell highlighting, cost/team resources, change highlighting, visual reports, desktop OLAP cube and the Report menu. Office Assistant was removed entirely. Microsoft Project 2010 was the first to contain the ribbon and Backstage view, like all Office 2010 applications, contextual guidance, ability to zoom in/out quickly, user-controlled scheduling, top-down summary tasks, placeholder text in project fields, timeline view, ability to add columns dynamically, text wrap, expanded color palette and formatting, task inspector, schedule warnings, ability to save as PDF or XPS and to synchronize with SharePoint, and enhanced copy/paste, and the last to open Microsoft Project 98 and .mpx files and to run on Windows XP and Vista. Additionally it was the first 64-bit version. Volume licensing activation was introduced in this version. Microsoft Project 2013 was the first to contain a Modern UI-based look, and introduced Microsoft account and OneDrive integration. New features include integrated communication (Skype for Business is required). Microsoft Project 2016 is the last to support Windows 7 and Windows 8(.1). New features include multiple timeline view, Tell Me, colorful/dark gray/white themes, resource engagements, resource manager views, resource capacity heat maps, and the ability to give feedback directly to Microsoft in the File tab. Microsoft Project 2019 runs only on Windows 10, and it contains features carried over from Office 365. 
New features include the ability to link tasks using a drop-down menu, a Task Summary Name field, timeline bar labels and task progress, and accessibility improvements. Versions for Windows were released in 1990 (v1.0), 1992 (v3.0), 1993 (v4.0), 1995 (Project 95, v4.1a), Project 98 (v8.0), Project 98 SR-1 (1999), Project 2000 (v9.0), Project 2000 SR-1 (2001), Project 2002 (v10.0), Project 2003 (v11.0), Project 2007 (v12.0), Project 2010 (v14.0), Project 2013 (v15.0) and Project 2016 (v16.0). There was no Version 2 on the Windows platform; the original design spec was augmented with the addition of macro capabilities, and the extra work required to support a macro language pushed the development schedule out to early 1992 (Version 3). Features Project creates budgets based on assignment work and resource rates. As resources are assigned to tasks and assignment work estimated, the program calculates the cost, equal to the work times the rate, which rolls up to the task level and then to any summary tasks and finally to the project level. Resource definitions (people, equipment and materials) can be shared between projects using a shared resource pool. Each resource can have its own calendar, which defines what days and shifts a resource is available. Resource rates are used to calculate resource assignment costs, which are rolled up and summarized at the resource level. Each resource can be assigned to multiple tasks in multiple plans and each task can be assigned multiple resources, and the application schedules task work based on the resource availability as defined in the resource calendars. Resources are defined only by label and without quantity limits; therefore, the application cannot determine how many finished products can be produced with a given amount of raw materials. This makes Microsoft Project unsuitable for solving problems of available-materials-constrained production. Additional software is necessary to manage a complex facility that produces physical goods. The application creates critical path schedules, and third-party add-ons for critical chain and event chain methodologies are also available. Schedules can be resource leveled, and chains are visualized in a Gantt chart. Additionally, Microsoft Project can recognize different classes of users, which can have differing access levels to projects, views, and other data. Custom objects such as calendars, views, tables, filters, and fields are stored in an enterprise global which is shared by all users. Editions Project is available in two editions, Standard and Professional; both editions are available in either 32-bit or 64-bit options. The Professional edition includes all the features of the Standard edition, plus features such as team collaboration tools and the ability to connect to Microsoft Project Server. Project 2010 Microsoft Project 2010 includes the Fluent user interface known as the Ribbon. Interoperability Microsoft Project's capabilities were extended with the introduction of Microsoft Office Project Server and Microsoft Project Web Access. Project Server stores Project data in a central SQL-based database, allowing multiple, independent projects to access a shared resource pool. Web Access allows authorized users to access a Project Server database across the Internet, and includes timesheets, graphical analysis of resource workloads, and administrative tools. User-controlled scheduling User-controlled scheduling offers flexible choices for developing and managing projects. 
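The cost roll-up described under Features above (assignment cost equals work times rate, summed up to tasks and then to summary tasks) can be illustrated with a minimal Python sketch; the resource names, rates and hours below are invented for illustration, and this is not Microsoft Project code or its API.

# Minimal illustration of Project-style cost roll-up: cost = work (hours) x resource rate.
# All names and numbers are hypothetical; this is not Microsoft Project code.

resources = {"Analyst": 50.0, "Developer": 75.0}   # hourly rates

# Each task lists (resource, hours of assignment work).
tasks = {
    "Gather requirements": [("Analyst", 40)],
    "Build prototype":     [("Developer", 80), ("Analyst", 8)],
}

# A summary task simply groups subtasks.
summary = {"Phase 1": ["Gather requirements", "Build prototype"]}

def task_cost(name):
    """Roll assignment costs up to the task level."""
    return sum(hours * resources[res] for res, hours in tasks[name])

def summary_cost(name):
    """Roll task costs up to the summary (and ultimately project) level."""
    return sum(task_cost(t) for t in summary[name])

for t in tasks:
    print(f"{t}: ${task_cost(t):,.2f}")
print(f"Phase 1 total: ${summary_cost('Phase 1'):,.2f}")

The same arithmetic, applied across resource calendars and baselines, is what produces the task-, summary- and project-level costs described above.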
Timeline The timeline view allows the user to build a basic Visio-style graphical overview of the project schedule. The view can be copied and pasted into PowerPoint, Word, or any other application. SharePoint 2010 list synchronization SharePoint Foundation and Project Professional project task status updates may be synchronized for team members. Inactive tasks Making tasks inactive helps users experiment with project plans and perform what-if analysis. The Team Planner view The new Team Planner shows resources and work over time, and helps spot problems and resolve issues. Project 2013 What's new in Project 2013 includes a new Reports section, better integration with other Microsoft products, and an improved appearance of user interface items: Reports A Reports section is added to the ribbon for pre-installed reports. Project 2013 includes graphical reports, so users can create them and add clip art without having to export data to another program. For example, the Burndown reports show planned work, completed work, and remaining work as lines on a graph. Project 2013 adds a pre-installed ability to compare projects, build dashboards, and export to Visual Reports. Trace task paths This feature allows the user to highlight the link chain (or 'task path') for any task. When a specific task is clicked, all of its predecessor tasks show up in one color and all of its successor tasks show up in another color. Sharing Project 2013 improves the sharing and communication features of its predecessors in multiple ways without leaving Project. With Lync installed, hovering over a name allows the user to start an IM session, a video chat, an email, or a phone call. Content can be copied and pasted to other Microsoft Office applications, and can be synced to SharePoint or SkyDrive for sharing without going through Project. Project Online provides an online project management web app that has all of the functionality of Project 2013 and can be accessed from any web-enabled device. Project 2016 Project 2016 adds a new Reports section, backwards compatibility with Project Server 2013, better integration with other Microsoft products, and improved appearance of user interface items: Timeline Allows the user to customize views to have multiple timeline bars and custom date ranges in a single view. Resource Agreements Provides features for resource planning coordination between the Project Manager and the Resource Manager. Office 2016 style theme and help Uses the new Office query 'tell me what you want to do'. Backwards compatibility with Microsoft Project Server 2013 Enterprises may find the transition from one version to the next easier because this release can interact with the earlier version of the server. See also Comparison of project management software Schedule (project management) References External links Microsoft Project blog Project Programmability blog on MSDN Blogs Project 2003: Project Guide and Custom Views Microsoft Project 2010: Interactive menu to ribbon guide The Project Map: Your road map to project management Office.com Templates for Project 2013 MPUG Templates for Project Step by Step practice files Microsoft Project Tutorial Project Project management software Critical Path Scheduling 1984 software
1064223
https://en.wikipedia.org/wiki/CEN/XFS
CEN/XFS
CEN/XFS or XFS (extensions for financial services) provides a client-server architecture for financial applications on the Microsoft Windows platform, especially peripheral devices such as EFTPOS terminals and ATMs which are unique to the financial industry. It is an international standard promoted by the European Committee for Standardization (known by the acronym CEN, hence CEN/XFS). The standard is based on the WOSA Extensions for Financial Services or WOSA/XFS developed by Microsoft. With the move to a more standardized software base, financial institutions have been increasingly interested in the ability to pick and choose the application programs that drive their equipment. XFS provides a common API for accessing and manipulating various financial services devices regardless of the manufacturer. History Chronology: 1991 - Microsoft forms "Banking Solutions Vendor Council" 1995 - WOSA/XFS 1.11 released 1997 - WOSA/XFS 2.0 released - additional support for 24 hours-a-day unattended operation 1998 - adopted by European Committee for Standardization as an international standard. 2000 - XFS 3.0 released by CEN 2008 - XFS 3.10 released by CEN 2011 - XFS 3.20 released by CEN 2015 - XFS 3.30 released by CEN 2020 - XFS 3.40 released by CEN WOSA/XFS changed name to simply XFS when the standard was adopted by the international CEN/ISSS standards body. However, it is most commonly called CEN/XFS by industry participants. XFS middleware While the perceived benefit of XFS is similar to Java's "write once, run anywhere" mantra, different hardware vendors often have different interpretations of the XFS standard. The result of these differences in interpretation means that applications typically use middleware to even out the differences between various platforms' implementations of XFS. Notable XFS middleware platforms include: F1 Solutions - F1 TPS (multi-vendor ATM & POS solution) Nexus Software LLC - Nexus Evolution Nautilus Hyosung - Nextware Hitachi-Omron Terminal Solutions - ATOM Diebold - Agilis Power NCR - NCR XFS KAL - KAL Kalignite Auriga - The Banking E-volution - WWS Omnichannel Platform Cashware - XFS service providers Phoenix Interactive - VISTAatm (acquired by Diebold) Wincor Nixdorf - ProBase (ProBase C as WOSA/XFS platform - ProBase J as J/XFS platform) SBS Software - KIXXtension Dynasty Technology Group - (JSI) Jam Service Interface HST Systems & Technologies - HAL Interface FreeXFS - open source XFS platform GRG Banking - eCAT (multi-vendor ATM terminal solution) TIS - xfs.js implementation (open source for the node.js community) TEB - Orion XFS test tools XFS test tools allow testing of XFS applications and middleware on simulated hardware. Some tools include sophisticated automatic regression testing capabilities. Providers of XFS test tools include: Abbrevia Simplicity Paragon VirtualATM Cashware (Aurigae Group) XFS and J/XFS ATM Simulator, ATMirage FIS ATM Testlab (was Clear2Pay, formerly Level Four Software and Lexcel TestSystem ATM) KAL KAL Kalignite Test Utilities Dynasty Technology Group - JSI Simulators HST Systems & Technologies (Brazil) Takkto Technologies (Mexico) LUTZWOLF JDST - Testtool for J/XFS compatibility Afferent Software RapidFire ATM XFS Serquo Software XFS and J/XFS ATM Simulator J/XFS J/XFS is an alternative API to CEN/XFS (which is Windows specific) and also to Xpeak (which is Operating System independent, based on XML messages). 
J/XFS is written in Java with the objective to provide a platform agnostic client-server architecture for financial applications, especially peripheral devices used in the financial industry such as EFTPOS terminals and ATMs. With the move to a more standardized software base, financial institutions have been increasingly interested in the ability to pick and choose the application programs that drive their equipment. J/XFS provides a common Object Oriented API between a pure Java application and a wide range of financial devices, providing a layer of separation between application and device logic that can be implemented using a native J/XFS API or wrapping an existing implementation in JavaPOS or CEN/XFS. J/XFS was developed by the companies De La Rue, IBM, NCR, Wincor Nixdorf and Sun Microsystems and is now hosted, monitored and maintained by the European Committee for Standardization, CEN. See also Xpeak - Devices Connectivity using XML (Open Source Project). Automated teller machine Teller assist unit External links CEN/XFS Home Page Windows communication and services Device drivers Embedded systems Application programming interfaces Microsoft application programming interfaces Banking technology
2697437
https://en.wikipedia.org/wiki/Guitar%20Pro
Guitar Pro
Guitar Pro is a multitrack editor of guitar and bass tablature and musical scores, possessing a built-in MIDI editor, a chord plotter, a player, a metronome and other tools for musicians. It has versions for Windows and Mac OS X (Intel processors only) and is written by the French company Arobas Music. History There have been five popular public major releases of the software: versions 3–7. Guitar Pro was initially designed as a tablature editor, but has since evolved into a full-fledged score writer including support for many musical instruments other than guitar. Until it reached version 4, the software was only available for Microsoft Windows. Guitar Pro 5 (released November 2005) underwent a year-long porting effort, and Guitar Pro 5 for Mac OS X was released in July 2006. On April 5, 2010, Guitar Pro 6, a completely redesigned version, was released. This version also supports Linux, with 32-bit Ubuntu being the officially supported distribution. On February 6, 2011, the first portable release of Guitar Pro (version 6) was made available on the App Store for the iPhone, iPod Touch, and iPad running iOS 3.0 or later. An Android version was released on December 17, 2014. In 2011, a version was made to work with the Fretlight guitar, called Guitar Pro 6 Fretlight Ready. The tablature notes being played in Guitar Pro 6 Fretlight Ready show up on the Fretlight guitar's LEDs, which are encased within the guitar's fretboard, to teach the user the song. In April 2017, Guitar Pro 7 was officially released with new features and dropped Linux support. Background The software makes use of multiple instrument tracks which follow standard staff notation, but also shows the notes in tablature notation. It gives the musician visual access to keys (banjos, drumkits, etc.) for the song to be composed, and allows live previews of the notes to be played at a specified tempo. It allows certain tracks to be muted and provides dynamic control over the volume, phasing and other aspects of each track. Included from version 4 onwards is a virtual keyboard that allows pianists to add their part to a composition. Guitar Pro outputs sound by means of a library and/or, as of version 5, the "Realistic Sound Engine" (RSE), which uses high-quality recorded samples for more realistic playback. By using its live preview feature, musicians may play along with the song, following the tablature in real time. Files composed using Guitar Pro are recorded in the GP, GPX, GP5, GP4 and GP3 formats, corresponding to versions 7, 6, 5, 4, and 3 of the software. These file formats lack forward compatibility, and opening them in an older version of Guitar Pro prompts the user to upgrade their software to the respective version. These tab files are available free on several websites, including songs of both underground and popular bands. However, copyright issues raised by the Music Publishers Association (MPA) pressured some of these sites to close. See also List of music software References Guitar-related software Cross-platform software Scorewriters 1997 software Software that uses Qt
65570001
https://en.wikipedia.org/wiki/Trickbot
Trickbot
Trickbot is computer malware, a trojan for Microsoft Windows and other operating systems; the name also refers to the cybercrime group behind it. Its major function was originally the theft of banking details and other credentials, but its operators have extended its capabilities to create a complete modular malware ecosystem. Capabilities Trickbot was first reported in October 2016. It is propagated by methods including executable programs, batch files, email phishing, Google Docs, and fake sexual harassment claims. The Web site Bleeping Computer has tracked the evolution of TrickBot from its start as a banking Trojan. Articles cover its extension to attack PayPal and business customer relationship management systems (CRM; June 2017), the addition of a self-spreading worm component (July 2017), targeting of coinbase.com, DKIM support to bypass email filters, theft of Windows problem history and cookies (July 2019), targeting of security software such as Microsoft Defender to prevent its detection and removal (July 2019), theft of Verizon Wireless, T-Mobile, and Sprint PIN codes by injecting code when a Web site is accessed (August 2019), theft of OpenSSH and OpenVPN keys (November 2019), spreading malware through a network (January 2020), bypassing Windows 10 UAC and stealing Active Directory credentials (January 2020), use of fake COVID-19 emails and news (since March 2020), bypassing Android mobile two-factor authentication, checking whether it is being run in a virtual machine (a sign of analysis by anti-malware experts; July 2020), and infecting Linux systems (July 2020). TrickBot can provide other malware with access-as-a-service to infected systems, including Ryuk (January 2019) and Conti ransomware; the Emotet spam Trojan is known to install TrickBot (July 2020). In 2021, IBM researchers reported that Trickbot had been enhanced with features such as a creative mutex naming algorithm and an updated persistence mechanism. Infections On 27 September 2020, US hospitals and healthcare systems were shut down by a cyber attack using Ryuk ransomware. It is believed likely that the Emotet Trojan started the botnet infection by sending malicious email attachments during 2020. After some time, it would install TrickBot, which would then provide access to Ryuk. Despite the efforts to extinguish TrickBot, the FBI and two other American federal agencies warned on 29 October 2020 that they had "credible information of an increased and imminent cybercrime [ransomware] threat to US hospitals and healthcare providers" as COVID-19 cases were spiking. After the previous month's attacks, five hospitals had been attacked that week, and hundreds more were potential targets. Ryuk, seeded through TrickBot, was the method of attack. Retaliation From the end of September 2020, the TrickBot botnet was attacked by what is believed to be the Cyber Command branch of the US Department of Defense and several security companies. A configuration file was delivered to systems infected by TrickBot that changed the command and control server address to 127.0.0.1 (localhost, an address that cannot access the Internet). The efforts actually started several months earlier, with several disruptive actions. The project aims for long-term effects, gathering and carefully analyzing data from the botnet. An undisclosed number of C2 servers were also taken down by legal procedures to cut their communication with the bots at the hosting provider level. The action started after the US District Court for the Eastern District of Virginia granted Microsoft's request for a court order to stop TrickBot activity. 
The technical effort required is great; as part of the attack, ESET's automatic systems examined more than 125,000 Trickbot samples with over 40,000 configuration files for at least 28 individual plugins used by the malware to steal passwords, modify traffic, or self-propagate. The attacks would disrupt the TrickBot significantly, but it has fallback mechanisms to recover, with difficulty, computers removed from the botnet. It was reported that there was short-term disruption, but the botnet quickly recovered due to its infrastructure remaining intact. The US government considered ransomware to be a major threat to the 2020 US elections, as attacks can steal or encrypt voter information and election results, and impact election systems. On 20 October 2020, BleepingComputer reported that the TrickBot operation was "on the brink of completely shutting down following efforts from an alliance of cybersecurity and hosting providers targeting the botnet's command and control servers", after the relatively ineffective disruptive actions earlier in the month. A coalition headed by Microsoft's Digital Crimes Unit (DCU) had a serious impact, although TrickBot continued to infect further computers. On 18 October, Microsoft stated that 94% of Trickbot's critical operational infrastructure - 120 out of 128 servers - had been eliminated. Some Trickbot servers remained active in Brazil, Colombia, Indonesia, and Kyrgyzstan. Constant action, both technical and legal, is required to prevent Trickbot from re-emerging due to its unique architecture. Although there was no evidence of TrickBot targeting the US election on 3 November 2020, intense efforts continued until that date. See also Wizard Spider - group known to use the software References Windows trojans Cyberattacks Cybercrime
454843
https://en.wikipedia.org/wiki/Reference%20management%20software
Reference management software
Reference management software, citation management software, or bibliographic management software is software for scholars and authors to use for recording and utilising bibliographic citations (references) as well as managing project references either as a company or an individual. Once a citation has been recorded, it can be used time and again in generating bibliographies, such as lists of references in scholarly books, articles and essays. The development of reference management packages has been driven by the rapid expansion of scientific literature. These software packages normally consist of a database in which full bibliographic references can be entered, plus a system for generating selective lists of articles in the different formats required by publishers and scholarly journals. Modern reference management packages can usually be integrated with word processors so that a reference list in the appropriate format is produced automatically as an article is written, reducing the risk that a cited source is not included in the reference list. They will also have a facility for importing the details of publications from bibliographic databases. Reference management software does not do the same job as a bibliographic database, which tries to list all articles published in a particular discipline or group of disciplines. Such bibliographic databases are large and have to be housed on major server installations. Reference management software collects a much smaller database, of the publications that have been used or are likely to be used by a particular author or group, and such a database can easily be housed on an individual's personal computer. Apart from managing references, most reference management software also enables users to search references from online libraries. These online libraries are usually based on Z39.50 public protocol. Users just need to specify the IP address, database name and keywords to start a Z39.50 search. It is quicker and more efficient than a web browser. However, Z39.50 is a little out of date. Some popular scientific websites, such as Google Scholar, IEEE Xplore and arXiv, do not support the Z39.50 protocol. Citation creators Citation creators or citation generators are online tools which facilitate the creation of works cited and bibliographies. Citation creators use web forms to take input and format the output according to guidelines and standards, such as the Modern Language Association's MLA Style Manual, American Psychological Association's APA style, The Chicago Manual of Style, or Turabian format. Some citation creators generate only run-time output, while others store the citation data for later use. Reference management software among legal scholars A comparison of usage of EndNote, RefWorks, and Zotero among the legal scholars at the Oxford University Law Faculty was performed by survey. 0% of survey participants used RefWorks; 40% used Endnote; 17% used Zotero, mostly research students. The difficulty of using RefWorks, Endnote, and Zotero by Oxford legal scholars was estimated by the author as well. A comparison of these tools for legal scholars was made across several usage scenarios, including: installing and setting up OSCOLA citation style; building a personal legal bibliographic library and using extracting metadata from legal bibliographic databases; generating footnotes and bibliographies for academic publications; using and modifying OSCOLA citation style. 
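As a rough illustration of what the citation creators described above do, the following Python sketch formats one structured bibliographic record in an APA-like and an MLA-like style; the field names and the exact output details are simplified assumptions rather than the official style rules.

# Toy citation generator: takes structured reference data and emits formatted strings.
# The formatting below is a simplified approximation of APA/MLA, for illustration only.

ref = {
    "authors": ["Smith, J.", "Doe, A."],
    "year": 2020,
    "title": "An example article",
    "journal": "Journal of Examples",
    "volume": 12,
    "pages": "34-56",
}

def apa(r):
    authors = " & ".join(r["authors"])
    return f'{authors} ({r["year"]}). {r["title"]}. {r["journal"]}, {r["volume"]}, {r["pages"]}.'

def mla(r):
    authors = ", and ".join(r["authors"])
    return f'{authors}. "{r["title"]}." {r["journal"]}, vol. {r["volume"]}, {r["year"]}, pp. {r["pages"]}.'

print(apa(ref))
print(mla(ref))

Real reference managers keep many such records in a database and regenerate the whole bibliography in the requested style whenever the manuscript changes.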
Reference management in Wikipedia Wikipedia, which runs on MediaWiki software, has built-in tools for the management of references. These tools - in many ways - have the function of reference-management software, in that they: automatically number the references generate the reference list set up links between the component of the citation in the text and the reference list Wikidata stores various attributes of scientific journals and journal articles in the main, item, namespace of Wikidata. Unlike traditional reference-management tools, MediaWiki does not store references in a database constructed to facilitate ease of citation. See also Comparison of reference management software COinS – method to embed bibliographic metadata in the HTML code of web pages Z39.50 – international standard client–server, application layer communications protocol for searching and retrieving information from a database over a TCP/IP computer network; widely used in library environments Reference software References External links
49700350
https://en.wikipedia.org/wiki/Graphnet
Graphnet
Graphnet is a UK provider of healthcare software based in Milton Keynes, Buckinghamshire. The company was founded in 1994. It specialises in shared care records and claims to be "the UK’s leading supplier of shared care record software to the NHS and care services". It is not connected to Graphnet, Inc., based in the USA. Graphnet has a contract using its CareCentric software in Berkshire for a Connected Care programme which will enable the 102 GP practices, Royal Berkshire NHS Foundation Trust, Frimley Health NHS Foundation Trust, Berkshire Healthcare NHS Foundation Trust, South Central Ambulance Service, and the six local authorities across the county to share records - and enable patients to view their own records. Records of 855,000 patients will be used by about 12,000 health and care professionals. Graphnet is also developing a system which will make available an integrated record of key information to clinicians across Greater Manchester. References Companies based in Berkshire Companies based in Milton Keynes Health care software Health in Buckinghamshire Healthcare software companies Software companies of England
18934432
https://en.wikipedia.org/wiki/Cryptography
Cryptography
Cryptography, or cryptology (from "hidden, secret"; and graphein, "to write", or -logia, "study", respectively), is the practice and study of techniques for secure communication in the presence of adversarial behavior. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages; various aspects in information security such as data confidentiality, data integrity, authentication, and non-repudiation are central to modern cryptography. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, electrical engineering, communication science, and physics. Applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications. Cryptography prior to the modern age was effectively synonymous with encryption, converting information from a readable state to unintelligible nonsense. The sender of an encrypted message shares the decoding technique only with intended recipients to preclude access from adversaries. The cryptography literature often uses the names Alice ("A") for the sender, Bob ("B") for the intended recipient, and Eve ("eavesdropper") for the adversary. Since the development of rotor cipher machines in World War I and the advent of computers in World War II, cryptography methods have become increasingly complex and its applications more varied. Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in actual practice by any adversary. While it is theoretically possible to break into a well-designed system, it is infeasible in actual practice to do so. Such schemes, if well designed, are therefore termed "computationally secure"; theoretical advances (e.g., improvements in integer factorization algorithms) and faster computing technology require these designs to be continually reevaluated, and if necessary, adapted. Information-theoretically secure schemes that cannot be broken even with unlimited computing power, such as the one-time pad, are much more difficult to use in practice than the best theoretically breakable, but computationally secure, schemes. The growth of cryptographic technology has raised a number of legal issues in the Information Age. Cryptography's potential for use as a tool for espionage and sedition has led many governments to classify it as a weapon and to limit or even prohibit its use and export. In some jurisdictions where the use of cryptography is legal, laws permit investigators to compel the disclosure of encryption keys for documents relevant to an investigation. Cryptography also plays a major role in digital rights management and copyright infringement disputes in regard to digital media. Terminology The first use of the term "cryptograph" (as opposed to "cryptogram") dates back to the 19th century—originating from "The Gold-Bug," a story by Edgar Allan Poe. Until modern times, cryptography referred almost exclusively to "encryption", which is the process of converting ordinary information (called plaintext) into unintelligible form (called ciphertext). Decryption is the reverse, in other words, moving from the unintelligible ciphertext back to plaintext. A cipher (or cypher) is a pair of algorithms that carry out the encryption and the reversing decryption. 
The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a "key". The key is a secret (ideally known only to the communicants), usually a string of characters (ideally short so it can be remembered by the user), which is needed to decrypt the ciphertext. In formal mathematical terms, a "cryptosystem" is the ordered list of elements of finite possible plaintexts, finite possible ciphertexts, finite possible keys, and the encryption and decryption algorithms which correspond to each key. Keys are important both formally and in actual practice, as ciphers without variable keys can be trivially broken with only the knowledge of the cipher used and are therefore useless (or even counter-productive) for most purposes. Historically, ciphers were often used directly for encryption or decryption without additional procedures such as authentication or integrity checks. There are two main types of cryptosystems: symmetric and asymmetric. In symmetric systems, the only ones known until the 1970s, the same secret key encrypts and decrypts a message. Data manipulation in symmetric systems is significantly faster than in asymmetric systems. Asymmetric systems use a "public key" to encrypt a message and a related "private key" to decrypt it. The advantage of asymmetric systems is that the public key can be freely published, allowing parties to establish secure communication without having a shared secret key. In practice, asymmetric systems are used to first exchange a secret key, and then secure communication proceeds via a more efficient symmetric system using that key. Examples of asymmetric systems include Diffie–Hellman key exchange, RSA (Rivest–Shamir–Adleman), ECC (Elliptic Curve Cryptography), and Post-quantum cryptography. Secure symmetric algorithms include the commonly used AES (Advanced Encryption Standard) which replaced the older DES (Data Encryption Standard). Insecure symmetric algorithms include children's language tangling schemes such as Pig Latin or other cant, and all historical cryptographic schemes, however seriously intended, prior to the invention of the one-time pad early in the 20th century. In colloquial use, the term "code" is often used to mean any method of encryption or concealment of meaning. However, in cryptography, code has a more specific meaning: the replacement of a unit of plaintext (i.e., a meaningful word or phrase) with a code word (for example, "wallaby" replaces "attack at dawn"). A cypher, in contrast, is a scheme for changing or substituting an element below such a level (a letter, or a syllable or a pair of letters, etc.) in order to produce a cyphertext. Cryptanalysis is the term used for the study of methods for obtaining the meaning of encrypted information without access to the key normally required to do so; i.e., it is the study of how to "crack" encryption algorithms or their implementations. Some use the terms "cryptography" and "cryptology" interchangeably in English, while others (including US military practice generally) use "cryptography" to refer specifically to the use and practice of cryptographic techniques and "cryptology" to refer to the combined study of cryptography and cryptanalysis. English is more flexible than several other languages in which "cryptology" (done by cryptologists) is always used in the second sense above. Steganography is also sometimes included within the scope of cryptology. The study of characteristics of languages that have some application in cryptography or cryptology (e.g. 
frequency data, letter combinations, universal patterns, etc.) is called cryptolinguistics. History of cryptography and cryptanalysis Before the modern era, cryptography focused on message confidentiality (i.e., encryption)—conversion of messages from a comprehensible form into an incomprehensible one and back again at the other end, rendering it unreadable by interceptors or eavesdroppers without secret knowledge (namely the key needed for decryption of that message). Encryption attempted to ensure secrecy in communications, such as those of spies, military leaders, and diplomats. In recent decades, the field has expanded beyond confidentiality concerns to include techniques for message integrity checking, sender/receiver identity authentication, digital signatures, interactive proofs and secure computation, among others. Classic cryptography The main classical cipher types are transposition ciphers, which rearrange the order of letters in a message (e.g., 'hello world' becomes 'ehlol owrdl' in a trivially simple rearrangement scheme), and substitution ciphers, which systematically replace letters or groups of letters with other letters or groups of letters (e.g., 'fly at once' becomes 'gmz bu podf' by replacing each letter with the one following it in the Latin alphabet). Simple versions of either have never offered much confidentiality from enterprising opponents. An early substitution cipher was the Caesar cipher, in which each letter in the plaintext was replaced by a letter some fixed number of positions further down the alphabet. Suetonius reports that Julius Caesar used it with a shift of three to communicate with his generals. Atbash is an example of an early Hebrew cipher. The earliest known use of cryptography is some carved ciphertext on stone in Egypt (ca 1900 BCE), but this may have been done for the amusement of literate observers rather than as a way of concealing information. The Greeks of Classical times are said to have known of ciphers (e.g., the scytale transposition cipher claimed to have been used by the Spartan military). Steganography (i.e., hiding even the existence of a message so as to keep it confidential) was also first developed in ancient times. An early example, from Herodotus, was a message tattooed on a slave's shaved head and concealed under the regrown hair. More modern examples of steganography include the use of invisible ink, microdots, and digital watermarks to conceal information. In India, the 2000-year-old Kamasutra of Vātsyāyana speaks of two different kinds of ciphers called Kautiliyam and Mulavediya. In the Kautiliyam, the cipher letter substitutions are based on phonetic relations, such as vowels becoming consonants. In the Mulavediya, the cipher alphabet consists of pairing letters and using the reciprocal ones. In Sassanid Persia, there were two secret scripts, according to the Muslim author Ibn al-Nadim: the šāh-dabīrīya (literally "King's script") which was used for official correspondence, and the rāz-saharīya which was used to communicate secret messages with other countries. David Kahn notes in The Codebreakers that modern cryptology originated among the Arabs, the first people to systematically document cryptanalytic methods. Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels. 
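A minimal Python sketch of the Caesar cipher described above, using Julius Caesar's reported shift of three; it handles only the 26-letter Latin alphabet and is, of course, trivially breakable.

import string

ALPHABET = string.ascii_uppercase

def caesar(text, shift):
    """Shift each letter a fixed number of positions down the alphabet."""
    out = []
    for ch in text.upper():
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

ciphertext = caesar("ATTACK AT DAWN", 3)   # encrypt with Caesar's shift of three
plaintext = caesar(ciphertext, -3)         # decrypt by shifting back
print(ciphertext, "->", plaintext)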
Ciphertexts produced by a classical cipher (and some modern ciphers) will reveal statistical information about the plaintext, and that information can often be used to break the cipher. After the discovery of frequency analysis, perhaps by the Arab mathematician and polymath Al-Kindi (also known as Alkindus) in the 9th century, nearly all such ciphers could be broken by an informed attacker. Such classical ciphers still enjoy popularity today, though mostly as puzzles (see cryptogram). Al-Kindi wrote a book on cryptography entitled Risalah fi Istikhraj al-Mu'amma (Manuscript for the Deciphering Cryptographic Messages), which described the first known use of frequency analysis cryptanalysis techniques. Language letter frequencies may offer little help for some extended historical encryption techniques such as homophonic cipher that tend to flatten the frequency distribution. For those ciphers, language letter group (or n-gram) frequencies may provide an attack. Essentially all ciphers remained vulnerable to cryptanalysis using the frequency analysis technique until the development of the polyalphabetic cipher, most clearly by Leon Battista Alberti around the year 1467, though there is some indication that it was already known to Al-Kindi. Alberti's innovation was to use different ciphers (i.e., substitution alphabets) for various parts of a message (perhaps for each successive plaintext letter at the limit). He also invented what was probably the first automatic cipher device, a wheel which implemented a partial realization of his invention. In the Vigenère cipher, a polyalphabetic cipher, encryption uses a key word, which controls letter substitution depending on which letter of the key word is used. In the mid-19th century Charles Babbage showed that the Vigenère cipher was vulnerable to Kasiski examination, but this was first published about ten years later by Friedrich Kasiski. Although frequency analysis can be a powerful and general technique against many ciphers, encryption has still often been effective in practice, as many a would-be cryptanalyst was unaware of the technique. Breaking a message without using frequency analysis essentially required knowledge of the cipher used and perhaps of the key involved, thus making espionage, bribery, burglary, defection, etc., more attractive approaches to the cryptanalytically uninformed. It was finally explicitly recognized in the 19th century that secrecy of a cipher's algorithm is not a sensible nor practical safeguard of message security; in fact, it was further realized that any adequate cryptographic scheme (including ciphers) should remain secure even if the adversary fully understands the cipher algorithm itself. Security of the key used should alone be sufficient for a good cipher to maintain confidentiality under an attack. This fundamental principle was first explicitly stated in 1883 by Auguste Kerckhoffs and is generally called Kerckhoffs's Principle; alternatively and more bluntly, it was restated by Claude Shannon, the inventor of information theory and the fundamentals of theoretical cryptography, as Shannon's Maxim—'the enemy knows the system'. Different physical devices and aids have been used to assist with ciphers. One of the earliest may have been the scytale of ancient Greece, a rod supposedly used by the Spartans as an aid for a transposition cipher. In medieval times, other aids were invented such as the cipher grille, which was also used for a kind of steganography. 
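Frequency analysis, the attack described above, can be sketched in a few lines of Python: tally the letters of a ciphertext and compare their order with the expected order for the underlying language. The ciphertext below is a toy Caesar-shifted English sentence; real analyses need much longer samples.

from collections import Counter

ENGLISH_ORDER = "ETAOINSHRDLCUMWFGYPBVKJXQZ"  # approximate frequency order for English

def letter_frequencies(text):
    """Return the letters of the text ordered from most to least frequent."""
    letters = [c for c in text.upper() if c.isalpha()]
    return [pair[0] for pair in Counter(letters).most_common()]

# Toy ciphertext: a Caesar-shifted English sentence.
ct = "WKH TXLFN EURZQ IRA MXPSV RYHU WKH ODCB GRJ"
print("ciphertext order:", "".join(letter_frequencies(ct)))
print("English order:   ", ENGLISH_ORDER)
# An analyst lines the two orderings up (E, T, A, O, ... against the ciphertext's most
# frequent letters) to propose substitutions; on short samples the match is only rough.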
With the invention of polyalphabetic ciphers came more sophisticated aids such as Alberti's own cipher disk, Johannes Trithemius' tabula recta scheme, and Thomas Jefferson's wheel cypher (not publicly known, and reinvented independently by Bazeries around 1900). Many mechanical encryption/decryption devices were invented early in the 20th century, and several patented, among them rotor machines—famously including the Enigma machine used by the German government and military from the late 1920s and during World War II. The ciphers implemented by better quality examples of these machine designs brought about a substantial increase in cryptanalytic difficulty after WWI. Computer era Prior to the early 20th century, cryptography was mainly concerned with linguistic and lexicographic patterns. Since then the emphasis has shifted, and cryptography now makes extensive use of mathematics, including aspects of information theory, computational complexity, statistics, combinatorics, abstract algebra, number theory, and finite mathematics generally. Cryptography is also a branch of engineering, but an unusual one since it deals with active, intelligent, and malevolent opposition; other kinds of engineering (e.g., civil or chemical engineering) need deal only with neutral natural forces. There is also active research examining the relationship between cryptographic problems and quantum physics. Just as the development of digital computers and electronics helped in cryptanalysis, it made possible much more complex ciphers. Furthermore, computers allowed for the encryption of any kind of data representable in any binary format, unlike classical ciphers which only encrypted written language texts; this was new and significant. Computer use has thus supplanted linguistic cryptography, both for cipher design and cryptanalysis. Many computer ciphers can be characterized by their operation on binary bit sequences (sometimes in groups or blocks), unlike classical and mechanical schemes, which generally manipulate traditional characters (i.e., letters and digits) directly. However, computers have also assisted cryptanalysis, which has compensated to some extent for increased cipher complexity. Nonetheless, good modern ciphers have stayed ahead of cryptanalysis; it is typically the case that use of a quality cipher is very efficient (i.e., fast and requiring few resources, such as memory or CPU capability), while breaking it requires an effort many orders of magnitude larger, and vastly larger than that required for any classical cipher, making cryptanalysis so inefficient and impractical as to be effectively impossible. Advent of modern cryptography Cryptanalysis of the new mechanical devices proved to be both difficult and laborious. In the United Kingdom, cryptanalytic efforts at Bletchley Park during WWII spurred the development of more efficient means for carrying out repetitious tasks. This culminated in the development of the Colossus, the world's first fully electronic, digital, programmable computer, which assisted in the decryption of ciphers generated by the German Army's Lorenz SZ40/42 machine. Extensive open academic research into cryptography is relatively recent, beginning in the mid-1970s. In the early 1970s IBM personnel designed the Data Encryption Standard (DES) algorithm that became the first federal government cryptography standard in the United States. In 1976 Whitfield Diffie and Martin Hellman published the Diffie–Hellman key exchange algorithm. 
In 1977 the RSA algorithm was published in Martin Gardner's Scientific American column. Since then, cryptography has become a widely used tool in communications, computer networks, and computer security generally. Some modern cryptographic techniques can only keep their keys secret if certain mathematical problems are intractable, such as the integer factorization or the discrete logarithm problems, so there are deep connections with abstract mathematics. There are very few cryptosystems that are proven to be unconditionally secure. The one-time pad is one, and was proven to be so by Claude Shannon. There are a few important algorithms that have been proven secure under certain assumptions. For example, the infeasibility of factoring extremely large integers is the basis for believing that RSA is secure, and some other systems, but even so, proof of unbreakability is unavailable since the underlying mathematical problem remains open. In practice, these are widely used, and are believed unbreakable in practice by most competent observers. There are systems similar to RSA, such as one by Michael O. Rabin that are provably secure provided factoring n = pq is impossible; it is quite unusable in practice. The discrete logarithm problem is the basis for believing some other cryptosystems are secure, and again, there are related, less practical systems that are provably secure relative to the solvability or insolvability discrete log problem. As well as being aware of cryptographic history, cryptographic algorithm and system designers must also sensibly consider probable future developments while working on their designs. For instance, continuous improvements in computer processing power have increased the scope of brute-force attacks, so when specifying key lengths, the required key lengths are similarly advancing. The potential impact of quantum computing are already being considered by some cryptographic system designers developing post-quantum cryptography. The announced imminence of small implementations of these machines may be making the need for preemptive caution rather more than merely speculative. Modern cryptography Symmetric-key cryptography Symmetric-key cryptography refers to encryption methods in which both the sender and receiver share the same key (or, less commonly, in which their keys are different, but related in an easily computable way). This was the only kind of encryption publicly known until June 1976. Symmetric key ciphers are implemented as either block ciphers or stream ciphers. A block cipher enciphers input in blocks of plaintext as opposed to individual characters, the input form used by a stream cipher. The Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) are block cipher designs that have been designated cryptography standards by the US government (though DES's designation was finally withdrawn after the AES was adopted). Despite its deprecation as an official standard, DES (especially its still-approved and much more secure triple-DES variant) remains quite popular; it is used across a wide range of applications, from ATM encryption to e-mail privacy and secure remote access. Many other block ciphers have been designed and released, with considerable variation in quality. Many, even some designed by capable practitioners, have been thoroughly broken, such as FEAL. 
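The one-time pad mentioned above, and the bit-by-bit XOR combination that stream ciphers approximate, can be illustrated in Python; the key here comes from the operating system's cryptographically strong random source, standing in for the truly random, never-reused, message-length key that the security proof requires.

import secrets

def xor_bytes(data, key):
    """Combine message and key byte-by-byte with XOR (the one-time pad operation)."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))   # key as long as the message, never to be reused

ciphertext = xor_bytes(message, key)      # encrypt
recovered = xor_bytes(ciphertext, key)    # decrypt: XOR with the same key restores the text

assert recovered == message
print(ciphertext.hex())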
Stream ciphers, in contrast to the 'block' type, create an arbitrarily long stream of key material, which is combined with the plaintext bit-by-bit or character-by-character, somewhat like the one-time pad. In a stream cipher, the output stream is created based on a hidden internal state that changes as the cipher operates. That internal state is initially set up using the secret key material. RC4 is a widely used stream cipher. Block ciphers can be used as stream ciphers by generating blocks of a keystream (in place of a Pseudorandom number generator) and applying an XOR operation to each bit of the plaintext with each bit of the keystream. Message authentication codes (MACs) are much like cryptographic hash functions, except that a secret key can be used to authenticate the hash value upon receipt; this additional complication blocks an attack scheme against bare digest algorithms, and so has been thought worth the effort. Cryptographic hash functions are a third type of cryptographic algorithm. They take a message of any length as input, and output a short, fixed-length hash, which can be used in (for example) a digital signature. For good hash functions, an attacker cannot find two messages that produce the same hash. MD4 is a long-used hash function that is now broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice. The US National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-0 was a flawed algorithm that the agency withdrew; SHA-1 is widely deployed and more secure than MD5, but cryptanalysts have identified attacks against it; the SHA-2 family improves on SHA-1, but is vulnerable to clashes as of 2011; and the US standards authority thought it "prudent" from a security perspective to develop a new standard to "significantly improve the robustness of NIST's overall hash algorithm toolkit." Thus, a hash function design competition was meant to select a new U.S. national standard, to be called SHA-3, by 2012. The competition ended on October 2, 2012, when the NIST announced that Keccak would be the new SHA-3 hash algorithm. Unlike block and stream ciphers that are invertible, cryptographic hash functions produce a hashed output that cannot be used to retrieve the original input data. Cryptographic hash functions are used to verify the authenticity of data retrieved from an untrusted source or to add a layer of security. Public-key cryptography Symmetric-key cryptosystems use the same key for encryption and decryption of a message, although a message or group of messages can have a different key than others. A significant disadvantage of symmetric ciphers is the key management necessary to use them securely. Each distinct pair of communicating parties must, ideally, share a different key, and perhaps for each ciphertext exchanged as well. The number of keys required increases as the square of the number of network members, which very quickly requires complex key management schemes to keep them all consistent and secret. In a groundbreaking 1976 paper, Whitfield Diffie and Martin Hellman proposed the notion of public-key (also, more generally, called asymmetric key) cryptography in which two different but mathematically related keys are used—a public key and a private key. A public key system is so constructed that calculation of one key (the 'private key') is computationally infeasible from the other (the 'public key'), even though they are necessarily related. 
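The cryptographic hash functions and message authentication codes just described are available in Python's standard library; the snippet below uses SHA-256 and HMAC with an arbitrary example key and message.

import hashlib
import hmac

message = b"the quick brown fox jumps over the lazy dog"

# A cryptographic hash: fixed-length digest, infeasible to invert or to find collisions for.
digest = hashlib.sha256(message).hexdigest()
print("SHA-256:", digest)

# A message authentication code: like a hash, but keyed, so only holders of the
# secret key can produce or verify the tag.
key = b"shared secret key"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print("HMAC-SHA256:", tag)

# Verification should use a constant-time comparison to avoid timing side channels.
print(hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest()))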
Instead, both keys are generated secretly, as an interrelated pair. The historian David Kahn described public-key cryptography as "the most revolutionary new concept in the field since polyalphabetic substitution emerged in the Renaissance". In public-key cryptosystems, the public key may be freely distributed, while its paired private key must remain secret. In a public-key encryption system, the public key is used for encryption, while the private or secret key is used for decryption. While Diffie and Hellman could not find such a system, they showed that public-key cryptography was indeed possible by presenting the Diffie–Hellman key exchange protocol, a solution that is now widely used in secure communications to allow two parties to secretly agree on a shared encryption key. The X.509 standard defines the most commonly used format for public key certificates. Diffie and Hellman's publication sparked widespread academic efforts in finding a practical public-key encryption system. This race was finally won in 1978 by Ronald Rivest, Adi Shamir, and Len Adleman, whose solution has since become known as the RSA algorithm. The Diffie–Hellman and RSA algorithms, in addition to being the first publicly known examples of high-quality public-key algorithms, have been among the most widely used. Other asymmetric-key algorithms include the Cramer–Shoup cryptosystem, ElGamal encryption, and various elliptic curve techniques. A document published in 1997 by the Government Communications Headquarters (GCHQ), a British intelligence organization, revealed that cryptographers at GCHQ had anticipated several academic developments. Reportedly, around 1970, James H. Ellis had conceived the principles of asymmetric key cryptography. In 1973, Clifford Cocks invented a solution that was very similar in design rationale to RSA. In 1974, Malcolm J. Williamson is claimed to have developed the Diffie–Hellman key exchange. Public-key cryptography is also used for implementing digital signature schemes. A digital signature is reminiscent of an ordinary signature; they both have the characteristic of being easy for a user to produce, but difficult for anyone else to forge. Digital signatures can also be permanently tied to the content of the message being signed; they cannot then be 'moved' from one document to another, for any attempt will be detectable. In digital signature schemes, there are two algorithms: one for signing, in which a secret key is used to process the message (or a hash of the message, or both), and one for verification, in which the matching public key is used with the message to check the validity of the signature. RSA and DSA are two of the most popular digital signature schemes. Digital signatures are central to the operation of public key infrastructures and many network security schemes (e.g., SSL/TLS, many VPNs, etc.). Public-key algorithms are most often based on the computational complexity of "hard" problems, often from number theory. For example, the hardness of RSA is related to the integer factorization problem, while Diffie–Hellman and DSA are related to the discrete logarithm problem. The security of elliptic curve cryptography is based on number theoretic problems involving elliptic curves. Because of the difficulty of the underlying problems, most public-key algorithms involve operations such as modular multiplication and exponentiation, which are much more computationally expensive than the techniques used in most block ciphers, especially with typical key sizes. 
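The Diffie–Hellman exchange described above can be sketched in Python with toy parameters (generator 5 modulo the small prime 23); real deployments use primes thousands of bits long or elliptic-curve groups. The modular exponentiations shown are the comparatively expensive public-key operations just mentioned, which is why, as the next paragraph notes, public-key methods are usually combined with symmetric ciphers.

import secrets

# Toy Diffie-Hellman key exchange. The group parameters below are far too small to be
# secure and are chosen only so the arithmetic is easy to follow.
p = 23   # a small prime modulus (real systems use very large primes)
g = 5    # a generator of the group

a = secrets.randbelow(p - 2) + 1   # Alice's private key
b = secrets.randbelow(p - 2) + 1   # Bob's private key

A = pow(g, a, p)   # Alice's public value, sent to Bob
B = pow(g, b, p)   # Bob's public value, sent to Alice

# Each side combines its own private key with the other's public value.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)

assert shared_alice == shared_bob   # both arrive at the same shared secret
print("shared secret:", shared_alice)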
As a result, public-key cryptosystems are commonly hybrid cryptosystems, in which a fast high-quality symmetric-key encryption algorithm is used for the message itself, while the relevant symmetric key is sent with the message, but encrypted using a public-key algorithm. Similarly, hybrid signature schemes are often used, in which a cryptographic hash function is computed, and only the resulting hash is digitally signed.

Cryptographic hash functions
Cryptographic hash functions are cryptographic algorithms that take a message of any length as input and output a short, fixed-length hash; they use no secret key, and the hash can be used in (for example) a digital signature. For good hash functions, an attacker cannot find two messages that produce the same hash. MD4 is a long-used hash function that is now broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice. The US National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-0 was a flawed algorithm that the agency withdrew; SHA-1 is widely deployed and more secure than MD5, but cryptanalysts have identified attacks against it; the SHA-2 family improves on SHA-1, but is vulnerable to clashes as of 2011; and the US standards authority thought it "prudent" from a security perspective to develop a new standard to "significantly improve the robustness of NIST's overall hash algorithm toolkit." Thus, a hash function design competition was meant to select a new U.S. national standard, to be called SHA-3, by 2012. The competition ended on October 2, 2012, when the NIST announced that Keccak would be the new SHA-3 hash algorithm. Unlike block and stream ciphers that are invertible, cryptographic hash functions produce a hashed output that cannot be used to retrieve the original input data. Cryptographic hash functions are used to verify the authenticity of data retrieved from an untrusted source or to add a layer of security.

Cryptanalysis
The goal of cryptanalysis is to find some weakness or insecurity in a cryptographic scheme, thus permitting its subversion or evasion. It is a common misconception that every encryption method can be broken. In connection with his WWII work at Bell Labs, Claude Shannon proved that the one-time pad cipher is unbreakable, provided the key material is truly random, never reused, kept secret from all possible attackers, and of equal or greater length than the message. Most ciphers, apart from the one-time pad, can be broken with enough computational effort by brute force attack, but the amount of effort needed may be exponentially dependent on the key size, as compared to the effort needed to make use of the cipher. In such cases, effective security could be achieved if it is proven that the effort required (i.e., "work factor", in Shannon's terms) is beyond the ability of any adversary. This means it must be shown that no efficient method (as opposed to the time-consuming brute force method) can be found to break the cipher. Since no such proof has been found to date, the one-time pad remains the only theoretically unbreakable cipher. Although well-implemented one-time-pad encryption cannot be broken, traffic analysis is still possible. There are a wide variety of cryptanalytic attacks, and they can be classified in any of several ways.
A common distinction turns on what Eve (an attacker) knows and what capabilities are available. In a ciphertext-only attack, Eve has access only to the ciphertext (good modern cryptosystems are usually effectively immune to ciphertext-only attacks). In a known-plaintext attack, Eve has access to a ciphertext and its corresponding plaintext (or to many such pairs). In a chosen-plaintext attack, Eve may choose a plaintext and learn its corresponding ciphertext (perhaps many times); an example is gardening, used by the British during WWII. In a chosen-ciphertext attack, Eve may be able to choose ciphertexts and learn their corresponding plaintexts. Finally, in a man-in-the-middle attack, Eve gets in between Alice (the sender) and Bob (the recipient), accesses and modifies the traffic, and then forwards it to the recipient. Also important, often overwhelmingly so, are mistakes (generally in the design or use of one of the protocols involved).

Cryptanalysis of symmetric-key ciphers typically involves looking for attacks against the block ciphers or stream ciphers that are more efficient than any attack that could be mounted against a perfect cipher. For example, a simple brute force attack against DES requires one known plaintext and 2^55 decryptions, trying approximately half of the possible keys, to reach a point at which chances are better than even that the key sought will have been found. But this may not be enough assurance; a linear cryptanalysis attack against DES requires 2^43 known plaintexts (with their corresponding ciphertexts) and approximately 2^43 DES operations. This is a considerable improvement over brute force attacks.

Public-key algorithms are based on the computational difficulty of various problems. The most famous of these are the difficulty of integer factorization of semiprimes and the difficulty of calculating discrete logarithms, both of which are not yet proven to be solvable in polynomial time (P) using only a classical Turing-complete computer. Much public-key cryptanalysis concerns designing algorithms in P that can solve these problems, or using other technologies, such as quantum computers. For instance, the best-known algorithms for solving the elliptic curve-based version of discrete logarithm are much more time-consuming than the best-known algorithms for factoring, at least for problems of more or less equivalent size. Thus, to achieve an equivalent strength of encryption, techniques that depend upon the difficulty of factoring large composite numbers, such as the RSA cryptosystem, require larger keys than elliptic curve techniques. For this reason, public-key cryptosystems based on elliptic curves have become popular since their invention in the mid-1990s.

While pure cryptanalysis uses weaknesses in the algorithms themselves, other attacks on cryptosystems are based on actual use of the algorithms in real devices, and are called side-channel attacks. If a cryptanalyst has access to, for example, the amount of time the device took to encrypt a number of plaintexts or report an error in a password or PIN character, he may be able to use a timing attack to break a cipher that is otherwise resistant to analysis. An attacker might also study the pattern and length of messages to derive valuable information; this is known as traffic analysis and can be quite useful to an alert adversary. Poor administration of a cryptosystem, such as permitting too short keys, will make any system vulnerable, regardless of other virtues.
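The "work factor" argument can be made concrete with some rough arithmetic; the figure of one billion key trials per second below is purely an assumption for illustration, not a measured attack speed.

# Rough brute-force work-factor estimate at an assumed 10^9 key trials per second.
RATE = 10**9                                   # assumed keys tested per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for name, key_bits in [("DES", 56), ("AES-128", 128)]:
    expected_trials = 2 ** (key_bits - 1)      # on average, half the key space is searched
    years = expected_trials / RATE / SECONDS_PER_YEAR
    print(f"{name}: about 2^{key_bits - 1} trials, roughly {years:.2e} years at this rate")

Even under this optimistic assumption, the 56-bit DES key space can be searched in roughly a year, while a 128-bit key space remains utterly out of reach, which is why key length is treated as a first-order security parameter.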
Social engineering and other attacks against humans (e.g., bribery, extortion, blackmail, espionage, or torture) are often used instead, because they are usually far more cost-effective and can be carried out in a reasonable amount of time compared to pure cryptanalysis.

Cryptographic primitives
Much of the theoretical work in cryptography concerns cryptographic primitives—algorithms with basic cryptographic properties—and their relationship to other cryptographic problems. More complicated cryptographic tools are then built from these basic primitives. These primitives provide fundamental properties, which are used to develop more complex tools called cryptosystems or cryptographic protocols, which guarantee one or more high-level security properties. Note, however, that the distinction between cryptographic primitives and cryptosystems is quite arbitrary; for example, the RSA algorithm is sometimes considered a cryptosystem, and sometimes a primitive. Typical examples of cryptographic primitives include pseudorandom functions, one-way functions, etc.

Cryptosystems
One or more cryptographic primitives are often used to develop a more complex algorithm, called a cryptographic system, or cryptosystem. Cryptosystems (e.g., ElGamal encryption) are designed to provide particular functionality (e.g., public key encryption) while guaranteeing certain security properties (e.g., chosen-plaintext attack (CPA) security in the random oracle model). Cryptosystems use the properties of the underlying cryptographic primitives to support the system's security properties. As the distinction between primitives and cryptosystems is somewhat arbitrary, a sophisticated cryptosystem can be derived from a combination of several more primitive cryptosystems. In many cases, the cryptosystem's structure involves back and forth communication among two or more parties in space (e.g., between the sender of a secure message and its receiver) or across time (e.g., cryptographically protected backup data). Such cryptosystems are sometimes called cryptographic protocols. Some widely known cryptosystems include RSA, Schnorr signature, ElGamal encryption, and Pretty Good Privacy (PGP). More complex cryptosystems include electronic cash systems, signcryption systems, etc. Some more 'theoretical' cryptosystems include interactive proof systems (like zero-knowledge proofs), systems for secret sharing, etc.

Lightweight cryptography
Lightweight cryptography (LWC) concerns cryptographic algorithms developed for strictly constrained environments. The growth of the Internet of Things (IoT) has spurred research into lightweight algorithms that are better suited for such environments, which impose strict constraints on power consumption and processing power while still requiring security. Algorithms such as PRESENT, AES, and SPECK are examples of the many LWC algorithms that have been developed to achieve the standard set by the National Institute of Standards and Technology.

Applications
In general
To ensure secrecy during transmission, many systems use private key cryptography to protect transmitted information. With public-key systems, one can maintain secrecy without a master key or a large number of keys.

In cybersecurity
Cryptography can be used to secure communications by encrypting them. Websites use encryption via HTTPS. "End-to-end" encryption, where only sender and receiver can read messages, is implemented for email in Pretty Good Privacy and for secure messaging in general in Signal and WhatsApp.
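As a small illustration of the HTTPS case, the sketch below uses only Python's standard library ("example.org" is just a placeholder host) to open a TLS-protected connection and report the negotiated protocol and cipher suite, with the server's certificate checked against the system trust store.

import socket
import ssl

host = "example.org"                            # placeholder host for illustration
context = ssl.create_default_context()          # enables certificate and hostname checks

with socket.create_connection((host, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())   # e.g. TLSv1.3
        print("Cipher suite:", tls_sock.cipher()[0])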
Operating systems use encryption to keep passwords secret, conceal parts of the system, and ensure that software updates are truly from the system maker. Instead of storing plaintext passwords, computer systems store hashes thereof; then, when a user logs in, the system passes the given password through a cryptographic hash function and compares it to the hashed value on file. In this manner, neither the system nor an attacker has at any point access to the password in plaintext. Encryption is also sometimes used to protect one's entire drive. For example, University College London has implemented BitLocker (a program by Microsoft) to render drive data opaque without users logging in.

In electronic money, blockchain, and cryptocurrency

Social issues
Legal issues
Prohibitions
Cryptography has long been of interest to intelligence gathering and law enforcement agencies. Secret communications may be criminal or even treasonous. Because of its facilitation of privacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high-quality cryptography possible.

In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically, though it has since relaxed many of these rules. In China and Iran, a license is still required to use cryptography. Many countries have tight restrictions on the use of cryptography. Among the more restrictive are laws in Belarus, Kazakhstan, Mongolia, Pakistan, Singapore, Tunisia, and Vietnam.

In the United States, cryptography is legal for domestic use, but there has been much conflict over legal issues related to cryptography. One particularly important issue has been the export of cryptography and cryptographic software and hardware. Probably because of the importance of cryptanalysis in World War II and an expectation that cryptography would continue to be important for national security, many Western governments have, at some point, strictly regulated export of cryptography. After World War II, it was illegal in the US to sell or distribute encryption technology overseas; in fact, encryption was designated as auxiliary military equipment and put on the United States Munitions List. Until the development of the personal computer, asymmetric key algorithms (i.e., public key techniques), and the Internet, this was not especially problematic. However, as the Internet grew and computers became more widely available, high-quality encryption techniques became well known around the globe.

Export controls
In the 1990s, there were several challenges to US export regulation of cryptography. After the source code for Philip Zimmermann's Pretty Good Privacy (PGP) encryption program found its way onto the Internet in June 1991, a complaint by RSA Security (then called RSA Data Security, Inc.) resulted in a lengthy criminal investigation of Zimmermann by the US Customs Service and the FBI, though no charges were ever filed. Daniel J. Bernstein, then a graduate student at UC Berkeley, brought a lawsuit against the US government challenging some aspects of the restrictions based on free speech grounds. The 1995 case Bernstein v.
United States ultimately resulted in a 1999 decision that printed source code for cryptographic algorithms and systems was protected as free speech by the United States Constitution. In 1996, thirty-nine countries signed the Wassenaar Arrangement, an arms control treaty that deals with the export of arms and "dual-use" technologies such as cryptography. The treaty stipulated that the use of cryptography with short key-lengths (56-bit for symmetric encryption, 512-bit for RSA) would no longer be export-controlled. Cryptography exports from the US became less strictly regulated as a consequence of a major relaxation in 2000; there are no longer very many restrictions on key sizes in US-exported mass-market software. Since this relaxation in US export restrictions, and because most personal computers connected to the Internet include US-sourced web browsers such as Firefox or Internet Explorer, almost every Internet user worldwide has potential access to quality cryptography via their browsers (e.g., via Transport Layer Security). The Mozilla Thunderbird and Microsoft Outlook E-mail client programs similarly can transmit and receive emails via TLS, and can send and receive email encrypted with S/MIME. Many Internet users don't realize that their basic application software contains such extensive cryptosystems. These browsers and email programs are so ubiquitous that even governments whose intent is to regulate civilian use of cryptography generally don't find it practical to do much to control distribution or use of cryptography of this quality, so even when such laws are in force, actual enforcement is often effectively impossible. NSA involvement Another contentious issue connected to cryptography in the United States is the influence of the National Security Agency on cipher development and policy. The NSA was involved with the design of DES during its development at IBM and its consideration by the National Bureau of Standards as a possible Federal Standard for cryptography. DES was designed to be resistant to differential cryptanalysis, a powerful and general cryptanalytic technique known to the NSA and IBM, that became publicly known only when it was rediscovered in the late 1980s. According to Steven Levy, IBM discovered differential cryptanalysis, but kept the technique secret at the NSA's request. The technique became publicly known only when Biham and Shamir re-discovered and announced it some years later. The entire affair illustrates the difficulty of determining what resources and knowledge an attacker might actually have. Another instance of the NSA's involvement was the 1993 Clipper chip affair, an encryption microchip intended to be part of the Capstone cryptography-control initiative. Clipper was widely criticized by cryptographers for two reasons. The cipher algorithm (called Skipjack) was then classified (declassified in 1998, long after the Clipper initiative lapsed). The classified cipher caused concerns that the NSA had deliberately made the cipher weak in order to assist its intelligence efforts. The whole initiative was also criticized based on its violation of Kerckhoffs's Principle, as the scheme included a special escrow key held by the government for use by law enforcement (i.e. wiretapping). Digital rights management Cryptography is central to digital rights management (DRM), a group of techniques for technologically controlling use of copyrighted material, being widely implemented and deployed at the behest of some copyright holders. In 1998, U.S. 
President Bill Clinton signed the Digital Millennium Copyright Act (DMCA), which criminalized all production, dissemination, and use of certain cryptanalytic techniques and technology (now known or later discovered); specifically, those that could be used to circumvent DRM technological schemes. This had a noticeable impact on the cryptography research community since an argument can be made that any cryptanalytic research violated the DMCA. Similar statutes have since been enacted in several countries and regions, including the implementation in the EU Copyright Directive. Similar restrictions are called for by treaties signed by World Intellectual Property Organization member-states. The United States Department of Justice and FBI have not enforced the DMCA as rigorously as had been feared by some, but the law, nonetheless, remains a controversial one. Niels Ferguson, a well-respected cryptography researcher, has publicly stated that he will not release some of his research into an Intel security design for fear of prosecution under the DMCA. Cryptologist Bruce Schneier has argued that the DMCA encourages vendor lock-in, while inhibiting actual measures toward cyber-security. Both Alan Cox (longtime Linux kernel developer) and Edward Felten (and some of his students at Princeton) have encountered problems related to the Act. Dmitry Sklyarov was arrested during a visit to the US from Russia, and jailed for five months pending trial for alleged violations of the DMCA arising from work he had done in Russia, where the work was legal. In 2007, the cryptographic keys responsible for Blu-ray and HD DVD content scrambling were discovered and released onto the Internet. In both cases, the Motion Picture Association of America sent out numerous DMCA takedown notices, and there was a massive Internet backlash triggered by the perceived impact of such notices on fair use and free speech. Forced disclosure of encryption keys In the United Kingdom, the Regulation of Investigatory Powers Act gives UK police the powers to force suspects to decrypt files or hand over passwords that protect encryption keys. Failure to comply is an offense in its own right, punishable on conviction by a two-year jail sentence or up to five years in cases involving national security. Successful prosecutions have occurred under the Act; the first, in 2009, resulted in a term of 13 months' imprisonment. Similar forced disclosure laws in Australia, Finland, France, and India compel individual suspects under investigation to hand over encryption keys or passwords during a criminal investigation. In the United States, the federal criminal case of United States v. Fricosu addressed whether a search warrant can compel a person to reveal an encryption passphrase or password. The Electronic Frontier Foundation (EFF) argued that this is a violation of the protection from self-incrimination given by the Fifth Amendment. In 2012, the court ruled that under the All Writs Act, the defendant was required to produce an unencrypted hard drive for the court. In many jurisdictions, the legal status of forced disclosure remains unclear. The 2016 FBI–Apple encryption dispute concerns the ability of courts in the United States to compel manufacturers' assistance in unlocking cell phones whose contents are cryptographically protected. 
As a potential counter-measure to forced disclosure some cryptographic software supports plausible deniability, where the encrypted data is indistinguishable from unused random data (for example such as that of a drive which has been securely wiped). See also – first cryptography chart World Wide Web Consortium's Collision attack References Further reading Excellent coverage of many classical ciphers and cryptography concepts and of the "modern" DES and RSA systems. Cryptography and Mathematics by Bernhard Esslinger, 200 pages, part of the free open-source package CrypTool, . CrypTool is the most widespread e-learning program about cryptography and cryptanalysis, open source. In Code: A Mathematical Journey by Sarah Flannery (with David Flannery). Popular account of Sarah's award-winning project on public-key cryptography, co-written with her father. James Gannon, Stealing Secrets, Telling Lies: How Spies and Codebreakers Helped Shape the Twentieth Century, Washington, D.C., Brassey's, 2001, . Oded Goldreich, Foundations of Cryptography, in two volumes, Cambridge University Press, 2001 and 2004. Alvin's Secret Code by Clifford B. Hicks (children's novel that introduces some basic cryptography and cryptanalysis). Introduction to Modern Cryptography by Jonathan Katz and Yehuda Lindell. Ibrahim A. Al-Kadi, "The Origins of Cryptology: the Arab Contributions," Cryptologia, vol. 16, no. 2 (April 1992), pp. 97–126. Christof Paar, Jan Pelzl, Understanding Cryptography, A Textbook for Students and Practitioners. Springer, 2009. (Slides, online cryptography lectures and other information are available on the companion web site.) Very accessible introduction to practical cryptography for non-mathematicians. , giving an overview of international law issues regarding cryptography. Introduction to Modern Cryptography by Phillip Rogaway and Mihir Bellare, a mathematical introduction to theoretical cryptography including reduction-based security proofs. PDF download. Tenzer, Theo (2021): SUPER SECRETO – The Third Epoch of Cryptography: Multiple, exponential, quantum-secure and above all, simple and practical Encryption for Everyone, Norderstedt, . Johann-Christoph Woltag, 'Coded Communications (Encryption)' in Rüdiger Wolfrum (ed) Max Planck Encyclopedia of Public International Law (Oxford University Press 2009). External links Crypto Glossary and Dictionary of Technical Cryptography NSA's CryptoKids. Overview and Applications of Cryptology by the CrypTool Team; PDF; 3.8 MB. July 2008 A Course in Cryptography by Raphael Pass & Abhi Shelat – offered at Cornell in the form of lecture notes. For more on the use of cryptographic elements in fiction, see: The George Fabyan Collection at the Library of Congress has early editions of works of seventeenth-century English literature, publications relating to cryptography. Banking technology Formal sciences Applied mathematics
32070900
https://en.wikipedia.org/wiki/Edward%20D.%20Lazowska
Edward D. Lazowska
Edward D. "Ed" Lazowska is an American computer scientist. He is a Professor, and the Bill & Melinda Gates Chair emeritus, in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Scholarship Lazowska’s research and teaching concern the design, implementation, and analysis of high-performance computing and communication systems, and, more recently, the techniques and technologies of data-intensive science. He co-authored the definitive textbook on computer system performance analysis using queuing network models, contributed to several early object-oriented distributed systems, and co-developed widely used approaches to kernel and system design in areas such as thread management, high-performance local and remote communication, load sharing, cluster computing, and the effective use of the underlying architecture by the operating system. From 2008 to 2017 he served as the Founding Director of the University of Washington eScience Institute, one of three partners (along with Berkeley and New York University) in the Data Science Environments effort sponsored by the Gordon and Betty Moore Foundation and the Alfred P. Sloan Foundation. Leadership Lazowska chaired the Computing Research Association from 1997 to 2001, the NSF CISE Advisory Committee from 1998 to 1999, the DARPA Information Science and Technology Study Group from 2004 to 2006, the President’s Information Technology Advisory Committee (co-chair with Marc Benioff) from 2003 to 2005, and the Working Group of the President’s Council of Advisors on Science and Technology to review the Federal Networking and Information Technology Research and Development Program in 2010 (co-chair with David E. Shaw). From 2007 to 2013 he served as Founding Chair of the Computing Community Consortium, a national effort to engage the computing research community in fundamental research motivated by tackling societal challenges. He served as Chair of University of Washington Computer Science & Engineering from 1993 to 2001, a period during which that program consolidated its reputation as one of the top computer science programs in the nation and the world. A long-time advocate for increasing participation in the field, Lazowska serves on the Executive Advisory Council of the National Center for Women & Information Technology, on the National Research Council's Committee on Women in Science, Engineering, and Medicine, and on NRC's study committee on the Impacts of Sexual Harassment in Academia. Students Lazowska has mentored a number of students, including Hank Levy (University of Washington), Yi-Bing Lin (National Chiao Tung University), Tom Anderson (University of Washington), Ed Felten (Princeton University), and Christophe Bisciglia (successively Google, Cloudera, and WibiData). Recognition Lazowska is a Member and Councillor of the National Academy of Engineering, a Fellow of the American Academy of Arts & Sciences, a Member of the Washington State Academy of Sciences, and a Fellow of the Association for Computing Machinery, the Institute of Electrical and Electronics Engineers, and the American Association for the Advancement of Science. He received the 2007 University of Washington Computer Science & Engineering Undergraduate Teaching Award, and the 2012 Vollum Award for Distinguished Accomplishment in Science and Technology. Personal history Lazowska was born on August 3, 1950 in Washington, D.C. He obtained his A.B. at Brown University in 1972, advised by Andries van Dam and David J. Lewis, and his M.Sc. 
in 1974 and Ph.D. in 1977 at the University of Toronto, advised by Kenneth C. Sevcik. He is married to Lyndsay Downs. They have two sons, Jeremy and Adam. References 1950 births Living people American computer scientists Fellows of the Association for Computing Machinery Members of the United States National Academy of Engineering
44428714
https://en.wikipedia.org/wiki/LinkNYC
LinkNYC
LinkNYC is the New York City branch of an international infrastructure project to create a network covering several cities with free Wi-Fi service. The office of New York City Mayor Bill de Blasio announced the plan on November 17, 2014, and the installation of the first kiosks, or "Links," started in late 2015. The Links replace the city's network of 9,000 to 13,000 payphones, a contract for which expired in October 2014. The LinkNYC kiosks were devised after the government of New York City held several competitions to replace the payphone system. The most recent competition, in 2014, resulted in the contract being awarded to the CityBridge consortium, which comprises Qualcomm; Titan and Control Group, which now make up Intersection; and Comark. All of the Links feature two high-definition displays on their sides; Android tablet computers for accessing city maps, directions, and services, and making video calls; two free USB charging stations for smartphones; and a phone allowing free calls to all 50 states and Washington, D.C. The Links also provide the ability to use calling cards to make international calls, and each Link has one button to call 9-1-1 directly. The project brings free, encrypted, gigabit wireless internet coverage to the five boroughs by converting old payphones into Wi-Fi hotspots where free phone calls could also be made. , there are 920 Links citywide; eventually, there will be 7,500 Links installed in the New York metropolitan area, making the system the world's fastest and most expansive. Intersection has also installed InLinks in cities across the UK. The Links are seen as a model for future city builds as part of smart city data pools and infrastructure. Since the Links' deployment, there have been several concerns about the kiosks' features. Privacy advocates have stated that the data of LinkNYC users can be collected and used to track users' movements throughout the city. There are also concerns with cybercriminals possibly hijacking the Links, or renaming their personal wireless networks to the same name as LinkNYC's network, in order to steal LinkNYC users' data. In addition, prior to September 2016, the tablets of the Links could be used to browse the Internet. In summer 2016, concerns arose about the Link tablets' browsers being used for illicit purposes; despite the implementation of content filters on the kiosks, the illicit activities continued, and the browsers were disabled. History Payphones and plans for reuse In 1999, thirteen companies signed a contract that legally obligated them to maintain New York City's payphones for fifteen years. In 2000, the city's tens of thousands of payphones were among the 2.2 million payphones spread across the United States. Since then, these payphones' use had been declining with the advent of cellphones. , there were 13,000 phones in over 10,000 individual locations; that number had dropped to 9,133 phones in 7,302 locations by April 2014, at a time when the number of payphones in the United States had declined more than 75 percent, to 500,000. The contract with the thirteen payphone operators was set to expire in October 2014, after which time the payphones' futures were unknown. In July 2012, the New York City government released a public request for information, asking for comments about the future uses for these payphones. 
The RFI presented questions such as "What alternative communications amenities would fill a need?"; "If retained, should the current designs of sidewalk payphone enclosures be substantially revised?"; and "Should the current number of payphones on City sidewalks change, and if so, how?". Through the RFI, the New York City government sought new uses for the payphones, including a combination of "public wireless hotspots, touch-screen wayfinding panels, information kiosks, charging stations for mobile communications devices, [and] electronic community bulletin boards," all of which eventually became the features of the kiosks that were included in the LinkNYC proposal. In 2013, a year before the payphone contract was set to expire, there was a competition that sought ideas to further repurpose the network of payphones. The competition, held by the administration of Michael Bloomberg, expanded the idea of the pilot project. There were 125 responses that suggested a Wi-Fi network, but none of these responses elaborated on how that would be accomplished. Previous free Wi-Fi projects In 2012, the government of New York City installed Wi-Fi routers at 10 payphones in the city (seven in Manhattan, two in Brooklyn, and one in Queens) as part of a pilot project. The Wi-Fi was free of charge and available for use at all times. The Wi-Fi signal was detectable from a radius of a few hundred feet (about 100m). Two of New York City's largest advertising companies—Van Wagner and Titan, who collectively owned more than 9,000 of New York City's 12,000 payphones at the time—paid $2,000 per router, with no monetary input from either the city or taxpayers. While the payphones participating in the Wi-Fi pilot project were poorly marked, the Wi-Fi offered at these payphones was significantly faster than some of the other free public Wi-Fi networks offered elsewhere. The Manhattan neighborhood of Harlem received free Wi-Fi starting in late 2013. Routers were installed in three phases within a 95-block area between 110th Street, Frederick Douglass Boulevard, 138th Street, and Madison Avenue. Phase 1, from 110th to 120th Streets, finished in 2013; Phase 2, from 121st to 126th Street, was expected to be complete in February 2014; and Phase 3, the remaining area, was supposed to be finished by May 2014. The network was estimated to serve 80,000 Harlemites, including 13,000 in public housing projects who may have otherwise not had broadband internet access at home. At the time, it was dubbed the United States' most expansive "continuous free public Wi-Fi network." Bids On April 30, 2014, the New York City Department of Information Technology and Telecommunications (DOITT) requested proposals for how to convert the city's over 7,000 payphones into a citywide Wi-Fi network. A new competition was held, with the winner standing to receive a 12-year contract to maintain up to 10,000 communication points. The communication points would tentatively have free Wi-Fi service, advertising, and free calls to at least 9-1-1 (the emergency service) or 3-1-1 (the city information hotline). The contract would require the operator, or the operating consortium, to pay "$17.5 million or 50 percent of gross revenues, whichever is greater" to the City of New York every year. 
The communication points could be up to tall, compared to the height of the phone booths; however, the advertising space on these points would only be allowed to accommodate up to of advertisements, or roughly half the maximum of of the advertising space allowed on existing phone booths. There would still need to be phone service at these Links because the payphones are still used often: collectively, all of New York City's nearly 12,000 payphones were used 27 million times in 2011, amounting to each phone being used about 6 times per day. In November 2014, the bid was awarded to the consortium CityBridge, which consists of Qualcomm, Titan, Control Group, and Comark. In June 2015, Control Group and Titan announced that they would merge into one company called Intersection. Intersection is being led by a Sidewalk Labs-led group of investors who operate the company as a subsidiary of Alphabet Inc. that focuses on solving problems unique to urban environments. Daniel L. Doctoroff, the former CEO of Bloomberg L.P. and former New York City Deputy Mayor for Economic Development and Rebuilding, is the CEO of Sidewalk Labs. Installation CityBridge announced that it would be setting up about 7,000 kiosks, called "Links," near where guests could use the LinkNYC Wi-Fi. Coverage was set to be up by late 2015, starting with about 500 Links in areas that already have payphones, and later to other areas. These Links were to be placed online by the end of the year. The project would require the installation of of new communication cables. The Links would be built in coordination with borough presidents, business improvement districts, the New York City Council, and New York City community boards. The project is expected to create up to 800 jobs, including 100 to 150 full-time jobs at CityBridge as well as 650 technical support positions. Of the LinkNYC plans, New York City Mayor Bill de Blasio said, On December 10, 2014, the network was approved by New York City's Franchise and Concession Review Committee. Installation of two stations on Third Avenue—at 15th and 17th Streets—began on December 28, 2015, followed by other Links on Third Avenue below 58th Street, as well as on Eighth Avenue. After some delays, the first Links went online in January 2016. The public network was announced in February 2016. Locations like St. George, Jamaica, South Bronx, and Flatbush Avenue were prioritized for LinkNYC kiosk installations, with these places receiving Links by the end of 2016. By mid-July 2016, the planned roll-out of 500 hubs throughout New York City was to occur, though the actual installation proceeded at a slower rate. , there were 400 hubs in three boroughs (most of which were in Manhattan along lower Second Avenue, Third Avenue, Eighth Avenue, upper Amsterdam Avenue, Lafayette Street, parts of Broadway, and the Bowery; at least 25 along Grand Concourse and in Fordham in the Bronx; and some along Queens Boulevard and Jamaica Avenue in Queens). In November 2016, the first two Links were installed in Brooklyn, along Fulton Street in the Bedford–Stuyvesant neighborhood, with plans to install nine more Links in various places around Brooklyn before year's end, especially around Prospect Park and LIU Brooklyn. Around this time, Staten Island also received its first Links, which were installed in New Dorp. The Links were being installed at an average pace of ten per day throughout the boroughs with a projected goal of 500 hubs by the end of 2016. , there were 920 Links installed across the city. 
This number had increased to 1,250 by January 2018, and to 1,600 by September 2018. As originally planned, there would be 4,550 hubs by July 2019 and 7,500 hubs by 2024, which would make LinkNYC the largest and fastest public, government-operated Wi-Fi network in the world. Slightly more than half, or 52%, of the hubs would be in Manhattan and the rest would be in the outer boroughs. There would be capacity for up to 10,000 Links within the network, as per the contract. The total cost for installation is estimated at more than $200 million. The eventual network includes 736 Links in the Bronx, 361 of which will have advertising and fast network speeds; as well as over 2,500 in Manhattan, most with advertising and fast network speeds. The vast majority of the payphones will be demolished and replaced with Links. However, three or four banks of payphones along West End Avenue in the Upper West Side are expected to be preserved rather than being replaced with Links. These payphones are the only remaining fully enclosed payphones in Manhattan. The preservation process includes creating new fully enclosed booths for the site, which is a difficulty because that specific model of phone booths is no longer manufactured. The New York City government and Intersection agreed to preserve these payphones because of their historical value, and because they were a relic of the Upper West Side community, having been featured in the 2002 movie Phone Booth and the 2010 book "The Lonely Phone Booth." Links The Links are tall, and are compliant with the Americans with Disabilities Act of 1990. There are two high-definition displays on each Link for advertisements and public service announcements. There is an integrated Android tablet embedded within each Link, which can be used to access city maps, directions, and services, as well as make video calls; they were formerly also available to allow patrons to use the internet, but these browsers have now been disabled due to abuse (see below). Each Link includes two free USB charging stations for smartphones as well as a phone that allows free calls to all 50 states and to Washington, D.C. The Links allow people to make either phone calls (using the keypad and the headphone jack to the keypad's left), or video calls (using the tablet). Vonage provides this free domestic phone call service as well as the ability to make international calls using calling cards. The Links feature a red 9-1-1 call button between the tablet and the headphone jack, and they can be used to call the information helpline 3-1-1 as well. The Links can also be used for completing simple time-specific tasks such as registering to vote. In April 2017, the Links were equipped with another app, Aunt Bertha, which could be utilized to find social services such as food pantries, financial aid, and emergency shelter. The Links sometimes offer eccentric apps, such as an app to call Santa's voice mail that was enabled in December 2017. In October 2019, a video relay service for deaf users was added to the Links. The Wi-Fi technology comes from Ruckus Wireless and is enabled by Qualcomm's Vive 802.11ac Wave 2 4x4 chipsets. The Links' operating system runs on the Qualcomm Snapdragon 600 processor and the Adreno 320 graphics processing unit. The Links' hardware and software can handle future upgrades. The software will be updated until at least 2022, but Qualcomm has promised to maintain the Links for the rest of their service lives. 
All of the Links are cleaned twice weekly, with LinkNYC staff removing vandalism and dirt from the Links. Each Link has cameras and over 30 vibration sensors to sense if the kiosk has been hit by an object. A separate set of sensors also detects if the USB ports are tampered with. If either the vibration sensors or the USB port sensors detect tampering, an alert is displayed at LinkNYC headquarters that the specific part of the Link has been affected. All of the Links have a backup battery power supply that can last for up to 24 hours if a long-term power outage were to occur. This was added to prevent interruption of phone service, as happened in the aftermath of Hurricane Sandy in 2012, which caused power outages citywide, especially to the city's payphones (which were connected to the municipal power supply of New York City). Antenna Design helped with the overall design of the kiosks, which are produced by Comark subsidiary Civiq. Advertising screens New York City does not pay for the system because CityBridge oversees the installation, ownership, and operations, and is responsible for building the new optic infrastructure under the streets. CityBridge stated in a press release that the network would be free for all users, and that the service would be funded by advertisements. This advertising will provide revenue for New York City as well as for the partners involved in CityBridge. The advertising is estimated to bring in over $1 billion in revenue over twelve years, with the City of New York receiving over $500 million, or about half of that amount. Technically, the LinkNYC network is intended to act as a public internet utility with advertising services. However, in four of the first five years the Links have been active, actual revenue fell short of goals. This is partially due to the fact that some local small businesses and non-profits were given advertisement space for free. The Links' advertising screens also display "NYC Fun Facts", one-sentence factoids about New York City, as well as "This Day in New York" facts and historic photographs of the city, which are shown between advertisements. In April 2018, some advertising screens started displaying real-time bus arrival information for nearby bus routes, using data from the MTA Bus Time system. Other things displayed on Links include headlines from the Associated Press, as well as weather information, comics, contests, and "content collaborations" where third-party organizations display their own information. Links in some areas, especially lower-income and lower-traffic areas, are expected to not display advertisements because it is not worthwhile for CityBridge to advertise in these areas. Controversially, the Links that lack advertising are expected to exhibit network speeds that may be as slow as one-tenth of the network speeds of advertisement-enabled Links. , wealthier neighborhoods in Manhattan, Brooklyn, and Queens are expected to have the most Links with advertisements and fast network speeds, while poorer neighborhoods and Staten Island would get slower Links with no advertising. Network According to its specifications, the Links' Wi-Fi will cover a radius of 150 feet (46 m) to 400 feet (120 m). The Links' Wi-Fi is capable of running at 1 gigabit per second or 1000 megabits per second, more than 100 times faster than the 8.7 megabit per second speed of the average public Wi-Fi network in the United States. 
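A back-of-the-envelope check of those advertised figures, treating both as ideal, sustained throughput (which real-world conditions rarely deliver):

# Comparing the quoted 1 Gbit/s LinkNYC speed with the 8.7 Mbit/s average cited above.
link_mbps = 1000
typical_public_mbps = 8.7
print(link_mbps / typical_public_mbps)          # about 115 times faster

download_megabits = 100 * 8                     # a hypothetical 100-megabyte download
print(download_megabits / link_mbps, "seconds vs",
      round(download_megabits / typical_public_mbps), "seconds")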
LinkNYC's routers have neither a bandwidth cap nor a time limit for usage, meaning that users can use LinkNYC Wi-Fi for as long as they need to. The free phone calls are also available for unlimited use. The network is only intended for use in public spaces, though this may be subject to change in the future. In the future, the LinkNYC network could also be used to "connect lighting systems, smart meters, traffic networks, connected cameras and other IoT systems," as well as for utility monitoring and for 5G installations. CityBridge emphasized that it takes security and privacy seriously "and will never sell any personally identifiable information or share with third parties for their own use." Aside from the unsecured network that devices can directly connect to, the Links provide an encrypted network that shields communications from eavesdropping within the network. There are two types of networks: a private (secured WPA/WPA2) network called "LinkNYC Private," which is available to iOS devices with iOS 7 and above; and a public network called "LinkNYC Free Public Wi-Fi," which is available to all devices but is only protected by the device's browser. Private network users will have to accept a network key in order to log onto the LinkNYC Wi-Fi. This would make New York City one of the first American municipalities to have a free, encrypted Wi-Fi network, as well as North America's largest. LinkNYC would also be the fastest citywide ISP in the world, with download and upload speeds between 15 and 32 times faster than on free networks at Starbucks, in LaGuardia Airport, and within New York City hotels. Originally, the CityBridge consortium was supposed to include Transit Wireless, which maintains the New York City Subway's wireless system. However, as neither company mentioned each other on their respective websites, one communications writer speculated that the deal had either not been implemented yet or had fallen through. Transit Wireless stated that "those details have not been finalized yet," and CityBridge "promised to let [the writer] know when more information is available." The network is extremely popular, and by September 2016, around 450,000 unique users and over 1 million devices connected to the Links in an average week. The Links had been used a total of more than 21 million times by that date. This had risen to over 576,000 unique users by October 4, with 21,000 phone calls made in the previous week alone. By January 2018, the number of calls registered by the LinkNYC system had risen to 200,000 per month, or 50,000 per week on average. There were also 600,000 unique users connecting to the Links' Wi-Fi or cellular services each week. The LinkNYC network exceeded 500,000 average monthly calls, 1 billion total sessions, and 5 million monthly users in September 2018. One writer for the Motherboard website observed that the LinkNYC network also helped connect poor communities, as people from these communities come to congregate at the Links. This stems from the fact that the network provides service to all New Yorkers regardless of income, but it especially helps residents who would have otherwise used their smartphones for internet access using 3G and 4G. The New York City Bureau of Policy and Research published a report in 2015 that stated that one-fourth of residents do not have home broadband internet access, including 32 percent of unemployed residents. 
, the most-dialed number on the LinkNYC network was the helpline for the state's electronic benefit transfer system, which distributes food stamps to low-income residents. The LinkNYC network is seen as only somewhat mitigating this internet inequality, as many poor neighborhoods, like some in the Bronx, will get relatively few Links. LinkNYC is seen as an example of smart city infrastructure in New York City, as it is a technologically advanced system that helps enable technological connectivity. Concerns Tracking The deployment of the Links and the method, process, eventual selection, and ownership of entities involved in the project has come under scrutiny by privacy advocates, who express concerns about the terms of service, the financial model, and the collection of end users' data. These concerns are aggravated by the involvement of Sidewalk Labs, which belongs to Google's holding company, Alphabet Inc. Google already has the ability to track the majority of all website visits, and LinkNYC could be used to track people's movements. Nick Pinto of the Village Voice, a Lower Manhattan newspaper, wrote: In March 2016, the New York Civil Liberties Union (NYCLU), the New York City office of the American Civil Liberties Union, wrote a letter to Mayor de Blasio outlining their privacy concerns. In the letter, representatives for the NYCLU wrote that CityBridge could be retaining too much information about LinkNYC users. They also stated that the privacy policy was vague and needed to be clarified. They recommended that the privacy policy be rewritten so that it expressly mentions whether the Links' environmental sensors or cameras are being used by the NYPD for surveillance or by other city systems. In response, LinkNYC updated its privacy policy to make clear that the kiosks do not store users' browsing history or track the websites visited while using LinkNYC's Wi-Fi, a step that NYCLU commended. In an unrelated incident, Titan, one of the members of CityBridge, was accused of embedding Bluetooth radio transmitters in their phones, which could be used to track phone users' movements without their consent. These beacons were later found to have been permitted by the DOITT, but "without any public notice, consultation, or approval," so they were removed in October 2014. Despite the removal of the transmitters, Titan is proposing putting similar tracking devices on Links, but if the company decides to go through with the plan, it has to notify the public in advance. In 2018, a New York City College of Technology undergraduate student, Charles Myers, found that LinkNYC had published folders on GitHub titled "LinkNYC Mobile Observation" and "RxLocation". He shared these with The Intercept website, which wrote that the folders indicated that identifiable user data was being collected, including information on the user's coordinates, web browser, operating system, and device details, among other things. However, LinkNYC disputed these claims and filed a Digital Millennium Copyright Act claim to force GitHub to remove files containing code that Meyer had copied from LinkNYC's GitHub account. Other privacy issues According to LinkNYC, it does not monitor its kiosks' Wi-Fi, nor does it give information to third parties. However, data will be given to law enforcement officials in situations where LinkNYC is legally obliged. 
Its privacy policy states that it can collect personally identifiable information (PII) from users to give to "service providers, and sub-contractors to the extent reasonably necessary to enable us provide the Services; a third party that acquires CityBridge or a majority of its assets [if CityBridge was acquired by that third party]; a third party with whom we must legally share information about you; you, upon your request; [and] other third parties with your express consent to do so." Non-personally identifiable information can be shared with service providers and advertisers. The privacy policy also states that "in the event that we receive a request from a governmental entity to provide it with your Information, we will take reasonable attempts to notify you of such request, to the extent possible." There are also concerns that despite the WPA/WPA2 encryption, hackers may still be able to steal other users' data, especially since the LinkNYC Wi-Fi network has millions of users. To reduce the risk of data theft, LinkNYC is deploying a better encryption system for devices that have Hotspot 2.0. Another concern is that hackers could affect the tablet itself by redirecting it to a malware site when users put in PII, or adding a keystroke logging program to the tablets. To protect against this, CityBridge places in "a series of filters and proxies" that prevents malware from being installed; ends a session when a tablet is detected communicating with a command-and-control server; and resets the entire kiosk after 15 seconds of inactivity. The USB ports have been configured so that they can only be used to charge devices. However, the USB ports are still susceptible to physical tampering with skimmers, which may lead to a user's device getting a malware infection while charging; this is prevented by the more than 30 anti-vandalism sensors on each Link. Yet another concern is that a person may carry out a spoofing attack by renaming their personal Wi-Fi network to "LinkNYC." This is potentially dangerous since many electronic devices tend to automatically connect to networks with a given name, but do not differentiate between the different networks. One reporter for The Verge suggested that to circumvent this, a person could turn off their mobile device's Wi-Fi while in the vicinity of a kiosk, or "forget" the LinkNYC network altogether. The cameras on the top of each kiosk's tablet posed a concern in some communities where these cameras face the interiors of buildings. However, , the cameras were not activated. Browser access and content filtering In the summer of 2016, a content filter was set up on the Links to restrict navigation to certain websites, such as pornography sites and other sites with not safe for work (NSFW) content. This was described as a problem especially among the homeless, and at least one video showed a homeless man watching pornography on a LinkNYC tablet. This problem has supposedly been ongoing since at least January 2016. Despite the existence of the filter, Link users still found a way to bypass these filters. The filters, which consisted of Google SafeSearch as well as a web blocker that was based on the web blockers of many schools, were intentionally lax to begin with because LinkNYC feared that stricter filters that blocked certain keywords would alienate customers. 
Other challenges included the fact that "stimulating" user-generated content can be found on popular, relatively interactive websites like Tumblr and YouTube; it is hard to block NSFW content on these sites, because that would entail blocking the entire website when only a small portion hosts NSFW content. In addition, it was hard, if not impossible, for LinkNYC to block new websites with NSFW content, as such websites are constantly being created. A few days after Díaz's and Johnson's statements, the web browsers of the tablets embedded into the Links were disabled indefinitely due to concerns of illicit activities such as drug deals and NSFW website browsing. LinkNYC cited "lewd acts" as the reason for shutting off the tables' browsing capabilities. One Murray Hill resident reported that a homeless man "enthusiastically hump[ed]" a Link in her neighborhood while watching pornography. Despite the tablets being disabled, the 9-1-1 capabilities, maps, and phone calls would still be usable, and people can still use LinkNYC Wi-Fi from their own devices. The disabling of the LinkNYC tablets' browsers had stoked fears about further restrictions on the Links. The Independent, a British newspaper, surveyed some homeless New Yorkers and found that while most of these homeless citizens used the kiosks for legitimate reasons (usually not to browse NSFW content), many of the interviewees were scared that LinkNYC may eventually charge money to use the internet via the Links, or that the kiosks may be demolished altogether. The Guardian, another British newspaper, came to a similar conclusion; one of the LinkNYC users they interviewed said that the Links are "very helpful, but of course bad people messed it up for everyone." In a press release, LinkNYC refuted fears that service would be paywalled or eliminated, though it did state that several improvements, including dimming the kiosks and lowering maximum volumes, were being implemented to reduce the kiosks' effect on the surrounding communities. Immediately after the disabling of the tablets' browsing capabilities, reports of loitering near kiosks decreased by more than 80%. By the next year, such complaints had dropped 96% from the pre-September 2016 figure. The tablets' use, as a whole, has increased 12%, with more unique users accessing maps, phone calls, and 3-1-1. Nuisance complaints There have been scattered complaints in some communities that the LinkNYC towers themselves are a nuisance. These complaints mainly have to do with loitering, browser access, and kiosk volume, the latter two of which the city has resolved. However, these nuisance complaints are rare citywide; of the 920 kiosks installed citywide by then, there had been only one complaint relating to the kiosk design itself. In September 2016, the borough president of the Bronx, Rubén Díaz Jr., called on city leaders to take stricter action, saying that "after learning about the inappropriate and over-extended usage of Links throughout the city, in particular in Manhattan, it is time to make adjustments that will allow all of our city residents to use this service safely and comfortably." City Councilman Corey Johnson said that some police officials had called for several Links in Chelsea to be removed because homeless men had been watching NSFW content on these Links while children were nearby. Barbara A. 
Blair, president of the Garment District Alliance, stated that "people are congregating around these Links to the point where they're bringing furniture and building little encampments clustered around them. It's created this really unfortunate and actually deplorable condition." A related problem arising from the tablets' browser access was that even though the tablets were intended for people to use it for a short period of time, the Links began being "monopolized" almost as soon as they were unveiled. Some people would use the Links for hours at a time. Particularly, homeless New Yorkers would sometimes loiter around the Links, using newspaper dispensers and milk crates as "makeshift furniture" on which they could sit while using the Links. The New York Post characterized the Links as having become "living rooms for vagrants." As a result, LinkNYC staff were working on a way to help ensure that Links would not be monopolized by one or two people. Proposals for solutions included putting time limits on how long the tablets could be used by any one person. Some people stated that the Links could also be used for loitering and illicit phone calls. One Hell's Kitchen bar owner cited concerns about the users of a Link located right outside his bar, including a homeless man who a patron complained was a "creeper" watching animal pornography, as well as several people who made drug deals using the Link's phone capabilities while families were nearby. In Greenpoint, locals alleged that after Links were activated in their neighborhood in July 2017, these particular kiosks became locations for drug deals; however, that particular Link was installed near a known drug den. Wider deployment Intersection, in collaboration with British telecommunications company BT and British advertising agency Primesight, is also planning to install up to 850 Links in the United Kingdom, including in London, beginning in 2017. The LinkUK kiosks, as they will be called, are similar to the LinkNYC kiosks in New York City. These Links will replace some of London's iconic telephone booths due to these booths' age. The first hundred Links would be installed in the borough of Camden. The Links will have tablets, but they will lack web browsing capabilities due to the problems that LinkNYC faced in enabling the tablet browsers. In early 2016, Intersection announced that it could install about 100 Links in a mid-sized city in the United States, provided that it wins the United States Department of Transportation's Smart City Challenge. Approximately 25 of that city's blocks will get the Links, which will be integrated with Sidewalk Labs' transportation data-analysis initiative, Flow. In summer 2016, the city of Columbus, Ohio, was announced as the winner of the Smart City Challenge. Intersection has proposed installing Links in four Columbus neighborhoods. In July 2017, the city of Hoboken, New Jersey, located across the Hudson River from Manhattan, proposed adding free Wi-Fi kiosks on its busiest pedestrian corridors. The kiosks, which are also a smart-city initiative, are proposed to be installed by Intersection. See also Municipal wireless network References External links Communications in New York City Municipal wireless networks Government of New York City Qualcomm Public phones
42353094
https://en.wikipedia.org/wiki/Netcode
Netcode
Netcode is a blanket term most commonly used by gamers relating to networking in online games, often referring to synchronization issues between clients and servers. Players often blame "bad netcode" when they encounter connection problems in a game. Some common causes are high latency between server and client, packet loss, network congestion, and external factors independent of network quality, such as frame rendering time or inconsistent frame rates. Netcode types Unlike a local game where the inputs of all players are executed instantly in the same simulation or instance of the game, in an online game there are several parallel simulations (one for each player) where the inputs from their respective players are received instantly, while the inputs for the same frame from other players arrive with a certain delay (greater or lesser depending on the physical distance between the players, the quality and speed of the players' network connections, etc.). During an online match, games must receive and process players' input within a certain time for each frame (e.g. 16 ms at 60 FPS), and if a remote player's input for a particular frame (for example, frame number 10) arrives when a later frame is already running (for example, frame number 20, 160 ms later), desynchronization between the players' simulations occurs. There are two main solutions for resolving this conflict and keeping the game running smoothly: Delay-based The classic solution to this problem is the use of a delay-based netcode. When the inputs of a remote player arrive late, the game delays the inputs of the local player by the same amount of time in order to synchronize the two sets of inputs and run them simultaneously. The fact that local player inputs do not run instantly can be annoying for players (especially when there is high latency between them), but overall the change is not very noticeable. The real problem with this system is its inconsistency, since the delay applied to the remote player's inputs can vary depending on the current latency, which can fluctuate unexpectedly. When the latency between players is so high that the remote player's input cannot be delivered within a buffer of, say, 3 frames (48 ms), the game must wait, causing the screens to "freeze" (a delay-based netcode does not allow the simulation to continue until it receives the inputs from all the players for the frame in question). Because this delay can be variable, this causes a more inconsistent and unresponsive experience compared to offline play (or to a LAN game), and can negatively affect player performance in timing-sensitive and fast-paced genres such as fighting games. Rollback An alternative system is rollback netcode. This system immediately runs the inputs of the local player (so that they are not delayed as with delay-based netcode), as if it were an offline game, and predicts the inputs of the remote player or players instead of waiting for them (assuming they will make the same input as in the previous tick). Once the remote inputs arrive (suppose, e.g., 45 ms later), the game can act in two ways: if the prediction was correct, the game continues as-is, in a totally continuous way; if the prediction was incorrect, the game state is reverted and gameplay continues from the corrected state, which appears as a "jump" in the position of the other player or players (equivalent to 45 ms, following the example).
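The prediction-and-rollback cycle described above can be sketched in a few lines of Python. This is only an illustration of the idea, not any particular engine's implementation; the advance function, the frame bookkeeping, and the "repeat the last confirmed input" prediction are assumptions made for the sketch.

```python
import copy

class RollbackSession:
    """Minimal rollback-netcode sketch for one remote player (illustrative only)."""

    def __init__(self, initial_state, advance):
        self.advance = advance            # advance(state, local_input, remote_input) -> new state
        self.frame = 0
        self.states = {0: copy.deepcopy(initial_state)}  # snapshots, indexed by frame
        self.local_inputs = {}            # inputs we know for certain
        self.remote_inputs = {}           # confirmed remote inputs
        self.predicted = {}               # what we guessed for unconfirmed frames
        self.last_remote = 0              # basis for the "repeat last input" prediction

    def tick(self, local_input):
        """Run one frame immediately, predicting the remote input instead of waiting."""
        self.local_inputs[self.frame] = local_input
        guess = self.remote_inputs.get(self.frame, self.last_remote)
        self.predicted[self.frame] = guess
        new_state = self.advance(self.states[self.frame], local_input, guess)
        self.frame += 1
        self.states[self.frame] = copy.deepcopy(new_state)
        return new_state

    def receive_remote(self, frame, remote_input):
        """Confirm a (possibly late) remote input; roll back and re-simulate if the guess was wrong."""
        self.remote_inputs[frame] = remote_input
        self.last_remote = remote_input            # assumes inputs are confirmed in order
        if frame < self.frame and self.predicted.get(frame) != remote_input:
            state = self.states[frame]             # restore the snapshot taken at that frame
            for f in range(frame, self.frame):     # replay forward to the present frame
                r = self.remote_inputs.get(f, self.last_remote)
                self.predicted[f] = r
                state = self.advance(state, self.local_inputs[f], r)
                self.states[f + 1] = copy.deepcopy(state)
```

A real implementation would also bound how far back it is willing to rewind and keep per-player input buffers, but the overall structure (snapshot, predict, confirm, rewind and replay) is the same.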
Some games utilize a hybrid solution in order to disguise these rollback "jumps" (which can become problematic as latency between players grows, as there is less and less time to react to other players' actions), combining a small fixed input delay with rollback beyond it. Rollback is quite effective at concealing lag spikes or other issues related to inconsistencies in the users' connections, as predictions are often correct and players do not even notice. Nevertheless, this system can be troublesome whenever a client's game slows down (usually due to overheating), since this causes the machines to exchange ticks at unequal rates. This generates visual glitches that interrupt the gameplay of those players who receive inputs at a slower pace, while the player whose game is slowed down will have an advantage over the rest by receiving inputs from others at a normal rate (this is known as one-sided rollback). To address this uneven input flow (and consequently, an uneven frame flow as well), there are standard solutions such as waiting for the late inputs to arrive at all machines (similar to the delay-based netcode model) or more ingenious solutions such as the one currently used in Skullgirls, which consists of the systematic omission of one frame every seven so that when the game encounters the problem in question it can recover the skipped frames in order to gradually synchronize the instances of the game on the various machines. Rollback netcode requires the game engine to be able to turn back its state, which requires modifications to many existing engines, and therefore the implementation of this system can be problematic and expensive in AAA-type games (which usually have a solid engine and a high-traffic network), as commented by Dragon Ball FighterZ producer Tomoko Hiroki, among others. Although this system is often associated with a peer-to-peer architecture and fighting games, there are forms of rollback networking that are also commonly used in client-server architectures (for instance, aggressive schedulers found in database management systems include rollback functionality) and in other video game genres. There is a popular MIT-licensed library named GGPO designed to help implement rollback networking in games (mainly fighting games). Potential causes of netcode issues Latency Latency is unavoidable in online games, and the quality of the player's experience is closely tied to it (the more latency there is between players, the greater the feeling that the game is not responsive to their inputs). The latency of the players' network connections (which is largely out of a game's control) is not the only factor in question; there is also the latency inherent in the way the game simulations are run. There are several lag compensation methods used to disguise or cope with latency (especially with high latency values). Tickrate A single update of a game simulation is known as a tick. The rate at which the simulation is run on a server is often referred to as the server's tickrate; this is essentially the server equivalent of a client's frame rate, absent any rendering system. Tickrate is limited by the length of time it takes to run the simulation, and is often intentionally limited further to reduce instability introduced by a fluctuating tickrate, and to reduce CPU and data transmission costs. A lower tickrate increases latency in the synchronization of the game simulation between the server and clients.
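The relationship between tickrate and update latency can be made concrete with a fixed-timestep loop. The sketch below is a generic illustration in Python, not taken from any particular engine; simulate and broadcast are placeholder callbacks, and 60 Hz is simply an example rate (compare the real-world rates quoted below).

```python
import time

TICKRATE = 60                      # ticks per second (see the rates quoted below)
TICK_DT = 1.0 / TICKRATE           # ~16.7 ms of simulated time per tick

def run_server(simulate, broadcast, duration=5.0):
    """Fixed-timestep loop: the simulation always advances in TICK_DT steps,
    so clients can only be synchronized with a granularity of one tick."""
    accumulator = 0.0
    previous = time.monotonic()
    end = previous + duration
    tick = 0
    while time.monotonic() < end:
        now = time.monotonic()
        accumulator += now - previous
        previous = now
        while accumulator >= TICK_DT:  # run extra ticks if the loop fell behind
            simulate(tick, TICK_DT)    # advance the authoritative game state
            broadcast(tick)            # send the resulting snapshot to clients
            tick += 1
            accumulator -= TICK_DT
        time.sleep(0.001)              # yield briefly to avoid burning a full CPU core
```

Raising TICKRATE shortens the worst-case wait before a client's input is reflected in a broadcast snapshot, at the cost of more CPU time and bandwidth per second.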
The tickrate for games like first-person shooters typically ranges from 128 ticks per second (as in Valorant), through 60 ticks per second (in games like Counter-Strike: Global Offensive and Overwatch) and 30 ticks per second (as in Fortnite and Battlefield V's console edition), down to 20 ticks per second (the controversial cases of Call of Duty: Modern Warfare, Call of Duty: Warzone and Apex Legends). A lower tickrate also naturally reduces the precision of the simulation, which itself might cause problems if taken too far, or if the client and server simulations are running at significantly different rates. Because of limitations in the amount of available bandwidth and the CPU time that is taken by network communication, some games prioritize certain vital communications while limiting the frequency and priority of less important information. As with tickrate, this effectively increases synchronization latency. Game engines may limit the number of times that updates (of a simulation) are sent to a particular client and/or particular objects in the game's world, in addition to reducing the precision of some values sent over the network, to help with bandwidth use. This lack of precision may in some instances be noticeable. Software bugs Various simulation synchronization errors between machines can also fall under the "netcode issues" blanket. These may include bugs which cause the simulation to proceed differently on one machine than on another, or which cause some things not to be communicated when the user perceives that they ought to be. Traditionally, real-time strategy games (such as Age of Empires) have used lock-step peer-to-peer networking models where it is assumed the simulation will run exactly the same on all clients; if, however, one client falls out of step for any reason, the desynchronization may compound and be unrecoverable. Transport layer protocol and communication code: TCP and UDP A game's choice of transport layer protocol (and its management and coding) can also affect perceived networking issues. If a game uses the Transmission Control Protocol (TCP), there will be increased latency between players. This protocol is based on a connection between two machines, over which they can exchange data and read it. These connections are very reliable, stable and ordered, are easy to implement, and are used in virtually every operation we do on the Internet (from web browsing to emailing or chatting through IRC). These connections, however, are not well suited to the network speeds that fast-action games require, as this kind of stream-oriented protocol automatically groups data into packets (which will not be sent until a certain volume of information is reached, unless this algorithm - Nagle's algorithm - is disabled), which are sent through the connection established between the machines rather than directly (sacrificing speed for reliability). This type of protocol also tends to respond very slowly whenever a packet is lost, or when packets arrive in an incorrect order or duplicated, which can be very detrimental to a real-time online game (the protocol was not designed for this type of software). If the game instead uses the User Datagram Protocol (UDP), the connection between machines will be very fast, because instead of establishing a connection between them the data is sent and received directly.
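To make the difference in programming model concrete, the following Python sketch sends inputs as raw UDP datagrams tagged with a sequence number and detects gaps on the receiving side, which is precisely the bookkeeping that the next paragraph notes a game must implement for itself, because UDP does not provide it. The address, port, and packet layout are placeholders.

```python
import socket
import struct

ADDR = ("127.0.0.1", 9999)        # placeholder address and port
PACKET = struct.Struct("!IB")     # 4-byte sequence number + 1 byte of button state

def send_input(sock, seq, buttons):
    # Each datagram goes out immediately: no connection, no ordering, no retransmission.
    sock.sendto(PACKET.pack(seq, buttons), ADDR)

def drain_inputs(sock, expected_seq):
    """Read whatever datagrams have arrived and detect loss or reordering ourselves."""
    sock.setblocking(False)
    while True:
        try:
            data, _ = sock.recvfrom(PACKET.size)
        except BlockingIOError:
            return expected_seq                    # nothing more to read this frame
        seq, buttons = PACKET.unpack(data)
        if seq < expected_seq:
            continue                               # stale or duplicated packet: ignore it
        if seq > expected_seq:
            print(f"packets {expected_seq}..{seq - 1} lost or reordered")
        expected_seq = seq + 1
        # ... feed `buttons` into the simulation for this frame here ...

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # one unconnected socket per peer
```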
UDP is much simpler than TCP, but it lacks TCP's reliability and stability and requires the game to implement its own code to handle functions that are indispensable for communication between machines and that TCP would otherwise handle (such as dividing data into packets, automatic packet loss detection, etc.); this increases the engine's complexity and might itself lead to issues. See also Online game Lag (online gaming) GGPO References Multiplayer video games Servers (computing) Video game development Video game platforms
22075469
https://en.wikipedia.org/wiki/Institute%20for%20Certification%20of%20Computing%20Professionals
Institute for Certification of Computing Professionals
The Institute for the Certification of Computing Professionals (ICCP) is a non-profit (501c6) institution for professional certification in the computer engineering and information technology industry. It was founded in 1973 by eight professional computer societies to promote certification and professionalism in the industry, lower the cost of developing and administering certification for all of the societies, and act as the central resource for job standards and performance criteria. The initial certification administered by the ICCP in 1973 was the Certified Data Processor (CDP), which was originally created by the Data Processing Management Association (DPMA) in 1965. The institute is a society of professional associations, and affiliates with like-minded organizations across the world. The institute awards a professional certification, Certified Computing Professional (CCP), to individuals who pass a written examination and have at least 48 months of experience in computer-based information systems. Post-secondary education can be substituted for up to 24 months of this requirement. The ICCP created the Certified Business Intelligence Professional (CBIP) in 2003 and the Certified Data Management Professional (CDMP) in 2004. Today the ICCP administers the CDMP as the Certified Data Professional (CDP). The ICCP also offers the Certified Data Scientist (CDS) and Certified Big Data Professional (CBDP), and recently added the Certified Blockchain Professional (CBP). The institute was responsible for creating the Systems Security Exam (today known as the Cyber Security Examination) for the Information Systems Security (ISC) organization, which then became the ISC2 organization offering the CISSP. The ICCP has also assisted the Network Professional Association (NPA) in creating and developing its certification program, the Certified Network Professional. The ICCP created the Certified Data Management Professional (CDMP) in 2004 at the request of one of its constituent societies, DAMA International. In 2015 the ICCP renamed the CDMP the Certified Data Professional to make it inclusive of data science and the myriad data specialty jobs that were emerging. The creators and developers of the CDMP program were Kewal Dhariwal, Patricia Cupoli and Brett Champlain. The Data Management Body of Knowledge (DMBoK v1 and v2) is based on the ICCP examinations for each of the 11 areas of the DMBoK Wheel. The editors of the DMBoK were Patricia Cupoli, Deborah Henderson and Susan Earley (all members of the ICCP Certification Council). Patricia Cupoli was the ICCP Director of Certification during this development, as well as representing DAMA International on the ICCP Board of Directors. Deborah Henderson was the President of the DAMA Education Foundation, who then fostered the development and editing of the DMBoK publication. ICCP examinations are used by the Canadian Information Processing Society (CIPS) towards the Canadian Information Systems Professional (I.S.P.) credential, which has been recognized by various provinces as a public occupation under The Professional and Occupational Associations Registration Act, which regulates professions and occupations. ICCP examinations test for stringent industry fundamentals and assess expert mastery skills, alongside work experience and/or education requirements.
See also Constituent Societies of the ICCP: Association for Computing Machinery (ACM) Association of Information Technology Professionals (CompTIA-AITP) Buchanan Edwards - R2C Canadian Information Processing Society CPCI - Council of Professional Informatics Societies of Argentina DAMA International Global Institute for IT Management Marketing Research & Intelligence Association (MRIA) Affiliate Societies of the ICCP TDWI - Transforming Data With Intelligence Big Data International (Beijing) Hong Kong Computer Society (HKCS) Argentina, Brazil, Canada, China & Hong Kong (3), Ghana, India, Nigeria, Morocco, Pakistan, Peru, Saudi Arabia, Sultanate of Oman, Tunisia, USA (9) External links Institute for the Certification of Computing Professionals official website Institute for Certification of Computer Professionals Records, 1960-1993, Charles Babbage Institute, University of Minnesota. Contains information relating to the Certificate in Data Processing (CDP), the Certificate in Computer Processing (CCP), the Registered Business Programmer (RBP), and the Certified Systems Professional (CSP) programs (includes meetings held before 1973 under the auspices of the Data Processing Management Association (DPMA), predecessor to the Association of Information Technology Professionals) Professional titles and certifications Organizations based in Illinois
1899829
https://en.wikipedia.org/wiki/Logic%20Programming%20Associates
Logic Programming Associates
Logic Programming Associates (LPA) is a company specializing in logic programming and artificial intelligence software. LPA was founded in 1980 and is widely known for its range of Prolog compilers and, more recently, for VisiRule. LPA was established to exploit research into logic programming carried out at the Department of Computing and Control at Imperial College London under the supervision of Prof Robert Kowalski. One of the first implementations made available by LPA was micro-PROLOG, which ran on popular 8-bit home computers such as the Sinclair Spectrum and Apple II. This was followed by micro-PROLOG Professional, one of the first Prolog implementations for MS-DOS. As well as continuing with Prolog compiler technology development, LPA has a track record of creating innovative associated tools and products to address specific challenges and opportunities. History In 1989, LPA developed the Flex expert system toolkit, which incorporated frame-based reasoning with inheritance, rule-based programming and data-driven procedures. Flex has its own English-like Knowledge Specification Language (KSL), which means that knowledge and rules are defined in a way that is easy to read and understand. In 1992, LPA helped set up the Prolog Vendors Group, a not-for-profit organization whose aim was to help promote Prolog by making people aware of its usage in industry. In 2000, LPA helped set up Business Integrity Ltd to bring document assembly technology to market. This led to the creation of Contract Express, which was sold to most major law firms. In 2015, Thomson Reuters acquired Business Integrity Ltd. LPA's core product is LPA Prolog for Windows, a compiler and development system for the Microsoft Windows platform. The current LPA software range comprises an integrated AI toolset covering various aspects of artificial intelligence, including logic programming, expert systems, knowledge-based systems, data mining, agents and case-based reasoning. In 2004, LPA launched VisiRule, a graphical tool for developing knowledge-based and decision support systems. VisiRule has been used in various sectors to build legal expert systems, machine diagnostic programs, medical and financial advice systems, and more. Customers For many years, LPA has worked closely with Valdis Krebs, an American-Latvian researcher, author, and consultant in the field of social and organizational network analysis. Krebs is the founder and chief scientist of Orgnet, and the creator of the popular Inflow software package. References External links LPA home page About LPA Micro-PROLOG (in Spanish) Aspects of PROLOG History VisiRule demos VisiRule: a new graphical business rules tool from LPA A flex-based expert system for sewage treatment works support ESSE: An Expert System for Software Evaluation LPA Delivers A Range Of Software Development Tools For Both Programmers And Non-Programmers Software companies of the United Kingdom Expert systems Knowledge engineering Knowledge representation Department of Computing, Imperial College London
11522
https://en.wikipedia.org/wiki/Fly-by-wire
Fly-by-wire
Fly-by-wire (FBW) is a system that replaces the conventional manual flight controls of an aircraft with an electronic interface. The movements of flight controls are converted to electronic signals transmitted by wires, and flight control computers determine how to move the actuators at each control surface to provide the ordered response. It can use mechanical flight control backup systems (like the Boeing 777) or use fully fly-by-wire controls. Improved fully fly-by-wire systems interpret the pilot's control inputs as a desired outcome and calculate the control surface positions required to achieve that outcome; this results in various combinations of rudder, elevator, aileron, flaps and engine controls in different situations using a closed feedback loop. The pilot may not be fully aware of all the control outputs acting to effect the outcome, only that the aircraft is reacting as expected. The fly-by-wire computers act to stabilise the aircraft and adjust the flying characteristics without the pilot's involvement and to prevent the pilot operating outside of the aircraft's safe performance envelope. Rationale Mechanical and hydro-mechanical flight control systems are relatively heavy and require careful routing of flight control cables through the aircraft by systems of pulleys, cranks, tension cables and hydraulic pipes. Both systems often require redundant backup to deal with failures, which increases weight. Both have limited ability to compensate for changing aerodynamic conditions. Dangerous characteristics such as stalling, spinning and pilot-induced oscillation (PIO), which depend mainly on the stability and structure of the aircraft concerned rather than the control system itself, are dependent on the pilot's actions. The term "fly-by-wire" implies a purely electrically signaled control system. It is used in the general sense of computer-configured controls, where a computer system is interposed between the operator and the final control actuators or surfaces. This modifies the manual inputs of the pilot in accordance with control parameters. Side-sticks or conventional flight control yokes can be used to fly FBW aircraft. Weight saving A FBW aircraft can be lighter than a similar design with conventional controls. This is partly due to the lower overall weight of the system components and partly because the natural stability of the aircraft can be relaxed, slightly for a transport aircraft, and more for a maneuverable fighter, which means that the stability surfaces that are part of the aircraft structure can therefore be made smaller. These include the vertical and horizontal stabilizers (fin and tailplane) that are (normally) at the rear of the fuselage. If these structures can be reduced in size, airframe weight is reduced. The advantages of FBW controls were first exploited by the military and then in the commercial airline market. The Airbus series of airliners used full-authority FBW controls beginning with their A320 series, see A320 flight control (though some limited FBW functions existed on A310). Boeing followed with their 777 and later designs. Basic operation Closed-loop feedback control A pilot commands the flight control computer to make the aircraft perform a certain action, such as pitch the aircraft up, or roll to one side, by moving the control column or sidestick. The flight control computer then calculates what control surface movements will cause the plane to perform that action and issues those commands to the electronic controllers for each surface. 
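The closed loop described here can be illustrated with a single proportional position loop for one control surface; the paragraph that follows describes the same loop in prose. This is a sketch of the principle only: the gain, the rate limit, and the read_lvdt and drive_actuator callbacks are assumptions, not any aircraft's actual control law.

```python
def surface_controller_step(commanded_deg, read_lvdt, drive_actuator,
                            gain=2.0, max_rate_deg_s=40.0):
    """One iteration of the surface-position loop: compare the deflection ordered by the
    flight control computer with the measured one and command the actuator to close the gap."""
    measured_deg = read_lvdt()                    # position feedback, e.g. from an LVDT
    error_deg = commanded_deg - measured_deg      # how far the surface is from the order
    rate_cmd = max(-max_rate_deg_s, min(max_rate_deg_s, gain * error_deg))
    drive_actuator(rate_cmd)                      # degrees per second toward the target
    return error_deg                              # the caller repeats until this is near zero
```

Running this step at a fixed rate closes the loop: the surface converges on the commanded position and holds it even as aerodynamic loads try to push it away.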
The controllers at each surface receive these commands and then move actuators attached to the control surface until it has moved to where the flight control computer commanded it to. The controllers measure the position of the flight control surface with sensors such as LVDTs. Automatic stability systems Fly-by-wire control systems allow aircraft computers to perform tasks without pilot input. Automatic stability systems operate in this way. Gyroscopes and sensors such as accelerometers are mounted in an aircraft to sense rotation on the pitch, roll and yaw axes. Any movement (from straight and level flight, for example) results in signals to the computer, which can automatically move control actuators to stabilize the aircraft. Safety and redundancy While traditional mechanical or hydraulic control systems usually fail gradually, the loss of all flight control computers immediately renders the aircraft uncontrollable. For this reason, most fly-by-wire systems incorporate either redundant computers (triplex, quadruplex etc.), some kind of mechanical or hydraulic backup, or a combination of both. A "mixed" control system with mechanical backup feeds back any rudder elevation directly to the pilot and therefore makes closed-loop (feedback) systems senseless. Aircraft systems may be quadruplexed (four independent channels) to prevent loss of signals in the case of failure of one or even two channels. High-performance aircraft that have fly-by-wire controls (also called CCVs or Control-Configured Vehicles) may be deliberately designed to have low or even negative stability in some flight regimes; rapid-reacting CCV controls can electronically compensate for the lack of natural stability. Pre-flight safety checks of a fly-by-wire system are often performed using built-in test equipment (BITE). A number of control movement steps can be automatically performed, reducing the workload of the pilot or ground crew and speeding up flight checks. Some aircraft, the Panavia Tornado for example, retain a very basic hydro-mechanical backup system for limited flight control capability on losing electrical power; in the case of the Tornado this allows rudimentary control of the stabilators only, for pitch and roll axis movements. History Servo-electrically operated control surfaces were first tested in the 1930s on the Soviet Tupolev ANT-20. Long runs of mechanical and hydraulic connections were replaced with wires and electric servos. In 1934, a patent was filed for an automatic-electronic system which flared the aircraft when it was close to the ground. In 1941, Karl Otto Altvater, an engineer at Siemens, developed and tested the first fly-by-wire system for the Heinkel He 111, in which the aircraft was fully controlled by electronic impulses. The first non-experimental aircraft that was designed and flown (in 1958) with a fly-by-wire flight control system was the Avro Canada CF-105 Arrow, a feat not repeated with a production aircraft (though the Arrow was cancelled with five built) until Concorde in 1969, which became the first fly-by-wire airliner. This system also included solid-state components and system redundancy, was designed to be integrated with a computerised navigation and automatic search and track radar, was flyable from ground control with data uplink and downlink, and provided artificial feel (feedback) to the pilot. The first purely electronic fly-by-wire aircraft with no mechanical or hydraulic backup was the Apollo Lunar Landing Training Vehicle (LLTV), first flown in 1968.
This was preceded in 1964 by the Lunar Landing Research Vehicle (LLRV) which pioneered fly-by-wire flight with no mechanical backup. Control was through a digital computer with three analog redundant channels. In the USSR, the Sukhoi T-4 also flew. At about the same time in the United Kingdom a trainer variant of the British Hawker Hunter fighter was modified at the British Royal Aircraft Establishment with fly-by-wire flight controls for the right-seat pilot. In the UK the two seater Avro 707C was flown with a Fairey system with mechanical backup in the early to mid-60s. The program was curtailed when the air-frame ran out of flight time. In 1972, the first digital fly-by-wire fixed-wing aircraft without a mechanical backup to take to the air was an F-8 Crusader, which had been modified electronically by NASA of the United States as a test aircraft; the F-8 used the Apollo guidance, navigation and control hardware. The Airbus A320 began service in 1988 as the first airliner with digital fly-by-wire controls. Analog systems All "fly-by-wire" flight control systems eliminate the complexity, the fragility and the weight of the mechanical circuit of the hydromechanical or electromechanical flight control systems — each being replaced with electronic circuits. The control mechanisms in the cockpit now operate signal transducers, which in turn generate the appropriate electronic commands. These are next processed by an electronic controller—either an analog one, or (more modernly) a digital one. Aircraft and spacecraft autopilots are now part of the electronic controller. The hydraulic circuits are similar except that mechanical servo valves are replaced with electrically controlled servo valves, operated by the electronic controller. This is the simplest and earliest configuration of an analog fly-by-wire flight control system. In this configuration, the flight control systems must simulate "feel". The electronic controller controls electrical feel devices that provide the appropriate "feel" forces on the manual controls. This was used in Concorde, the first production fly-by-wire airliner. Digital systems A digital fly-by-wire flight control system can be extended from its analog counterpart. Digital signal processing can receive and interpret input from multiple sensors simultaneously (such as the altimeters and the pitot tubes) and adjust the controls in real time. The computers sense position and force inputs from pilot controls and aircraft sensors. They then solve differential equations related to the aircraft's equations of motion to determine the appropriate command signals for the flight controls to execute the intentions of the pilot. The programming of the digital computers enable flight envelope protection. These protections are tailored to an aircraft's handling characteristics to stay within aerodynamic and structural limitations of the aircraft. For example, the computer in flight envelope protection mode can try to prevent the aircraft from being handled dangerously by preventing pilots from exceeding preset limits on the aircraft's flight-control envelope, such as those that prevent stalls and spins, and which limit airspeeds and g forces on the airplane. Software can also be included that stabilize the flight-control inputs to avoid pilot-induced oscillations. Since the flight-control computers continuously feedback the environment, pilot's workloads can be reduced. This also enables military aircraft with relaxed stability. 
The primary benefit for such aircraft is more maneuverability during combat and training flights, and the so-called "carefree handling" because stalling, spinning and other undesirable performances are prevented automatically by the computers. Digital flight control systems enable inherently unstable combat aircraft, such as the Lockheed F-117 Nighthawk and the Northrop Grumman B-2 Spirit flying wing to fly in usable and safe manners. Legislation The Federal Aviation Administration (FAA) of the United States has adopted the RTCA/DO-178C, titled "Software Considerations in Airborne Systems and Equipment Certification", as the certification standard for aviation software. Any safety-critical component in a digital fly-by-wire system including applications of the laws of aeronautics and computer operating systems will need to be certified to DO-178C Level A or B, depending on the class of aircraft, which is applicable for preventing potential catastrophic failures. Nevertheless, the top concern for computerized, digital, fly-by-wire systems is reliability, even more so than for analog electronic control systems. This is because the digital computers that are running software are often the only control path between the pilot and aircraft's flight control surfaces. If the computer software crashes for any reason, the pilot may be unable to control an aircraft. Hence virtually all fly-by-wire flight control systems are either triply or quadruply redundant in their computers and electronics. These have three or four flight-control computers operating in parallel and three or four separate data buses connecting them with each control surface. Redundancy The multiple redundant flight control computers continuously monitor each other's output. If one computer begins to give aberrant results for any reason, potentially including software or hardware failures or flawed input data, then the combined system is designed to exclude the results from that computer in deciding the appropriate actions for the flight controls. Depending on specific system details there may be the potential to reboot an aberrant flight control computer, or to reincorporate its inputs if they return to agreement. Complex logic exists to deal with multiple failures, which may prompt the system to revert to simpler back-up modes. In addition, most of the early digital fly-by-wire aircraft also had an analog electrical, mechanical, or hydraulic back-up flight control system. The Space Shuttle had, in addition to its redundant set of four digital computers running its primary flight-control software, a fifth back-up computer running a separately developed, reduced-function, software flight-control system – one that could be commanded to take over in the event that a fault ever affected all of the computers in the other four. This back-up system served to reduce the risk of total flight-control-system failure ever happening because of a general-purpose flight software fault that had escaped notice in the other four computers. Efficiency of flight For airliners, flight-control redundancy improves their safety, but fly-by-wire control systems, which are physically lighter and have lower maintenance demands than conventional controls also improve economy, both in terms of cost of ownership and for in-flight economy. 
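Returning briefly to the redundancy scheme described above, the cross-monitoring of flight-control computers can be illustrated with a simple median voter that excludes a channel whose output drifts away from the consensus. Real systems use far more elaborate monitoring and reversion logic, so the tolerance value and channel model below are purely illustrative.

```python
import statistics

def vote(channel_outputs, healthy, tolerance=2.0):
    """Pick a consensus command from redundant flight-control computers.

    channel_outputs: dict mapping channel name -> commanded surface deflection
    healthy:         set of channels currently trusted (updated in place)
    tolerance:       how far a channel may deviate from the consensus before exclusion
    """
    trusted = {name: out for name, out in channel_outputs.items() if name in healthy}
    consensus = statistics.median(trusted.values())
    for name, out in trusted.items():
        if abs(out - consensus) > tolerance:   # aberrant channel: exclude it from future votes
            healthy.discard(name)
    return consensus

# Example: channel "C" has failed high and is voted out.
healthy = {"A", "B", "C"}
cmd = vote({"A": 4.9, "B": 5.1, "C": 19.0}, healthy)   # returns 5.1; "C" is removed from `healthy`
```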
In certain designs with limited relaxed stability in the pitch axis, for example the Boeing 777, the flight control system may allow the aircraft to fly at a more aerodynamically efficient angle of attack than a conventionally stable design. Modern airliners also commonly feature computerized Full-Authority Digital Engine Control systems (FADECs) that control their jet engines, air inlets, fuel storage and distribution system, in a similar fashion to the way that FBW controls the flight control surfaces. This allows the engine output to be continually varied for the most efficient usage possible. The second generation Embraer E-Jet family gained a 1.5% efficiency improvement over the first generation from the fly-by-wire system, which enabled a reduction from 280 ft.² to 250 ft.² for the horizontal stabilizer on the E190/195 variants. Airbus/Boeing Airbus and Boeing differ in their approaches to implementing fly-by-wire systems in commercial aircraft. Since the Airbus A320, Airbus flight-envelope control systems always retain ultimate flight control when flying under normal law and will not permit the pilots to violate aircraft performance limits unless they choose to fly under alternate law. This strategy has been continued on subsequent Airbus airliners. However, in the event of multiple failures of redundant computers, the A320 does have a mechanical back-up system for its pitch trim and its rudder, the Airbus A340 has a purely electrical (not electronic) back-up rudder control system and beginning with the A380, all flight-control systems have back-up systems that are purely electrical through the use of a "three-axis Backup Control Module" (BCM). Boeing airliners, such as the Boeing 777, allow the pilots to completely override the computerised flight-control system, permitting the aircraft to be flown outside of its usual flight-control envelope. Applications Concorde was the first production fly-by-wire aircraft with analogue control. The General Dynamics F-16 was the first production aircraft to use digital fly-by-wire controls. The Space Shuttle orbiter had an all-digital fly-by-wire control system. This system was first exercised (as the only flight control system) during the glider unpowered-flight "Approach and Landing Tests" that began on the Space Shuttle Enterprise during 1977. Launched into production during 1984, the Airbus Industries Airbus A320 became the first airliner to fly with an all-digital fly-by-wire control system. In 2005, the Dassault Falcon 7X became the first business jet with fly-by-wire controls. A fully digital fly-by-wire without a closed feedback loop was integrated 2002 in the first generation Embraer E-Jet family. By closing the loop (feedback), the second generation Embraer E-Jet family gained a 1.5% efficiency improvement in 2016. Engine digital control The advent of FADEC (Full Authority Digital Engine Control) engines permits operation of the flight control systems and autothrottles for the engines to be fully integrated. On modern military aircraft other systems such as autostabilization, navigation, radar and weapons system are all integrated with the flight control systems. FADEC allows maximum performance to be extracted from the aircraft without fear of engine misoperation, aircraft damage or high pilot workloads. In the civil field, the integration increases flight safety and economy. Airbus fly-by-wire aircraft are protected from dangerous situations such as low-speed stall or overstressing by flight envelope protection. 
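The difference in philosophy can be shown schematically: a hard-protection scheme in the Airbus style clamps the effect of pilot demands to preset limits, while a Boeing-style scheme lets a deliberate pilot input go beyond them. The limit values and the simplified command model below are placeholders for illustration, not certified figures or actual control laws.

```python
# Placeholder limits, loosely in the spirit of transport-category envelope protection.
MAX_LOAD_FACTOR_G = 2.5
MIN_LOAD_FACTOR_G = -1.0
MAX_BANK_DEG = 67.0

def protected_command(pilot_g_demand, pilot_bank_demand):
    """Hard-protection style: the computer never lets the demand exceed the envelope."""
    g = max(MIN_LOAD_FACTOR_G, min(MAX_LOAD_FACTOR_G, pilot_g_demand))
    bank = max(-MAX_BANK_DEG, min(MAX_BANK_DEG, pilot_bank_demand))
    return g, bank

def overridable_command(pilot_g_demand, pilot_bank_demand, pilot_overrides=False):
    """Soft-protection style: limits normally apply, but a deliberate pilot input can exceed them."""
    if pilot_overrides:
        return pilot_g_demand, pilot_bank_demand   # crew judgment takes precedence
    return protected_command(pilot_g_demand, pilot_bank_demand)
```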
As a result, in such conditions, the flight control systems commands the engines to increase thrust without pilot intervention. In economy cruise modes, the flight control systems adjust the throttles and fuel tank selections precisely. FADEC reduces rudder drag needed to compensate for sideways flight from unbalanced engine thrust. On the A330/A340 family, fuel is transferred between the main (wing and center fuselage) tanks and a fuel tank in the horizontal stabilizer, to optimize the aircraft's center of gravity during cruise flight. The fuel management controls keep the aircraft's center of gravity accurately trimmed with fuel weight, rather than drag-inducing aerodynamic trims in the elevators. Further developments Fly-by-optics Fly-by-optics is sometimes used instead of fly-by-wire because it offers a higher data transfer rate, immunity to electromagnetic interference and lighter weight. In most cases, the cables are just changed from electrical to optical fiber cables. Sometimes it is referred to as "fly-by-light" due to its use of fiber optics. The data generated by the software and interpreted by the controller remain the same. Fly-by-light has the effect of decreasing electro-magnetic disturbances to sensors in comparison to more common fly-by-wire control systems. The Kawasaki P-1 is the first production aircraft in the world to be equipped with such a flight control system. Power-by-wire Having eliminated the mechanical transmission circuits in fly-by-wire flight control systems, the next step is to eliminate the bulky and heavy hydraulic circuits. The hydraulic circuit is replaced by an electrical power circuit. The power circuits power electrical or self-contained electrohydraulic actuators that are controlled by the digital flight control computers. All benefits of digital fly-by-wire are retained since the power-by-wire components are strictly complementary to the fly-by-wire components. The biggest benefits are weight savings, the possibility of redundant power circuits and tighter integration between the aircraft flight control systems and its avionics systems. The absence of hydraulics greatly reduces maintenance costs. This system is used in the Lockheed Martin F-35 Lightning II and in Airbus A380 backup flight controls. The Boeing 787 and Airbus A350 also incorporate electrically powered backup flight controls which remain operational even in the event of a total loss of hydraulic power. Fly-by-wireless Wiring adds a considerable amount of weight to an aircraft; therefore, researchers are exploring implementing fly-by-wireless solutions. Fly-by-wireless systems are very similar to fly-by-wire systems, however, instead of using a wired protocol for the physical layer a wireless protocol is employed. In addition to reducing weight, implementing a wireless solution has the potential to reduce costs throughout an aircraft's life cycle. For example, many key failure points associated with wire and connectors will be eliminated thus hours spent troubleshooting wires and connectors will be reduced. Furthermore, engineering costs could potentially decrease because less time would be spent on designing wiring installations, late changes in an aircraft's design would be easier to manage, etc. Intelligent flight control system A newer flight control system, called intelligent flight control system (IFCS), is an extension of modern digital fly-by-wire flight control systems. 
The aim is to intelligently compensate for aircraft damage and failure during flight, such as automatically using engine thrust and other avionics to compensate for severe failures such as loss of hydraulics, loss of rudder, loss of ailerons, loss of an engine, etc. Several demonstrations were made on a flight simulator where a Cessna-trained small-aircraft pilot successfully landed a heavily damaged full-size concept jet, without prior experience with large-body jet aircraft. This development is being spearheaded by NASA Dryden Flight Research Center. It is reported that enhancements are mostly software upgrades to existing fully computerized digital fly-by-wire flight control systems. The Dassault Falcon 7X and Embraer Legacy 500 business jets have flight computers that can partially compensate for engine-out scenarios by adjusting thrust levels and control inputs, but still require pilots to respond appropriately. See also Aircraft flight control system Air France Flight 296 Drive by wire Flight control modes MIL-STD-1553, a standard data bus for fly-by-wire Relaxed stability Note References External links "Fly-by-wire" a 1972 Flight article archive version Aircraft controls Fault tolerance
37593992
https://en.wikipedia.org/wiki/Google%20Flu%20Trends
Google Flu Trends
Google Flu Trends (GFT) was a web service operated by Google. It provided estimates of influenza activity for more than 25 countries. By aggregating Google Search queries, it attempted to make accurate predictions about flu activity. The project was first launched in 2008 by Google.org to help predict outbreaks of flu. Google Flu Trends stopped publishing current estimates on 9 August 2015. Historical estimates are still available for download, and current data are offered for declared research purposes. History The idea behind Google Flu Trends was that, by monitoring millions of users' health tracking behaviors online, the large number of Google search queries gathered could be analyzed to reveal whether flu-like illness is present in a population. Google Flu Trends compared these findings to a historic baseline level of influenza activity for the corresponding region and then reported the activity level as either minimal, low, moderate, high, or intense. These estimates have been generally consistent with conventional surveillance data collected by health agencies, both nationally and regionally. Roni Zeiger helped develop Google Flu Trends. Methods Google Flu Trends was described as using the following method to gather information about flu trends. First, a time series is computed for about 50 million common queries entered weekly within the United States from 2003 to 2008. A query's time series is computed separately for each state and normalized into a fraction by dividing the number of each query by the number of all queries in that state. By identifying the IP address associated with each search, the state in which this query was entered can be determined. A linear model is used to relate the log-odds of an influenza-like illness (ILI) physician visit to the log-odds of an ILI-related search query: logit(P) = β0 + β1 logit(Q) + ε, where logit(x) = ln(x / (1 − x)), P is the percentage of ILI physician visits, and Q is the ILI-related query fraction computed in the previous steps. β0 is the intercept and β1 is the coefficient, while ε is the error term. Each of the 50 million queries is tested as Q to see if the result computed from a single query could match the actual historical ILI data obtained from the U.S. Centers for Disease Control and Prevention (CDC). This process produces a list of top queries which gives the most accurate predictions of CDC ILI data when using the linear model. Then the top 45 queries are chosen because, when aggregated together, these queries fit the historical data most accurately. Using the sum of the top 45 ILI-related queries, the linear model is fitted to the weekly ILI data between 2003 and 2007 so that the coefficients can be estimated. Finally, the trained model is used to predict flu outbreaks across all regions in the United States. This algorithm has been subsequently revised by Google, partially in response to concerns about accuracy, and attempts to replicate its results have suggested that the algorithm developers "felt an unarticulated need to cloak the actual search terms identified". Privacy concerns Google Flu Trends tries to avoid privacy violations by only aggregating millions of anonymous search queries, without identifying individuals who performed the search. Their search log contains the IP address of the user, which could be used to trace back to the region where the search query was originally submitted. Google runs programs on computers to access and calculate the data, so no human is involved in the process. Google also implemented a policy of anonymizing the IP addresses in its search logs after 9 months.
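Returning to the model in the Methods section above, the fitting step can be sketched in a few lines of Python: transform the ILI visit percentage P and the aggregated query fraction Q with the logit function and estimate β0 and β1 by ordinary least squares. The weekly numbers below are invented for illustration; the original work fitted the model to CDC surveillance data and evaluated millions of candidate queries.

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

# Toy weekly data (made up): ILI visit fraction P and aggregated query fraction Q.
P = np.array([0.012, 0.015, 0.020, 0.031, 0.044, 0.038, 0.025])
Q = np.array([0.0008, 0.0010, 0.0014, 0.0023, 0.0031, 0.0027, 0.0017])

# Fit logit(P) = beta0 + beta1 * logit(Q) by ordinary least squares.
X = np.column_stack([np.ones_like(Q), logit(Q)])
beta, *_ = np.linalg.lstsq(X, logit(P), rcond=None)
beta0, beta1 = beta

def predict_ili(q):
    """Map a new query fraction back to an estimated ILI visit percentage."""
    z = beta0 + beta1 * logit(np.asarray(q, dtype=float))
    return 1.0 / (1.0 + np.exp(-z))        # inverse logit

print(predict_ili(0.0020))
```

In the actual system the single Q in this sketch is the sum of the 45 best-performing queries described above, and the fitted model is then applied to fresh query data each week.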
However, Google Flu Trends has raised privacy concerns among some privacy groups. Electronic Privacy Information Center and Patient Privacy Rights sent a letter to Eric Schmidt in 2008, then the CEO of Google. They conceded that the use of user-generated data could support public health effort in significant ways, but expressed their worries that "user-specific investigations could be compelled, even over Google's objection, by court order or Presidential authority". Impact An initial motivation for GFT was that being able to identify disease activity early and respond quickly could reduce the impact of seasonal and pandemic influenza. One report was that Google Flu Trends was able to predict regional outbreaks of flu up to 10 days before they were reported by the CDC (Centers for Disease Control and Prevention). In the 2009 flu pandemic Google Flu Trends tracked information about flu in the United States. In February 2010, the CDC identified influenza cases spiking in the mid-Atlantic region of the United States. However, Google's data of search queries about flu symptoms was able to show that same spike two weeks prior to the CDC report being released. “The earlier the warning, the earlier prevention and control measures can be put in place, and this could prevent cases of influenza,” said Dr. Lyn Finelli, lead for surveillance at the influenza division of the CDC. “From 5 to 20 percent of the nation’s population contract the flu each year, leading to roughly 36,000 deaths on average.” Google Flu Trends is an example of collective intelligence that can be used to identify trends and calculate predictions. The data amassed by search engines is significantly insightful because the search queries represent people's unfiltered wants and needs. “This seems like a really clever way of using data that is created unintentionally by the users of Google to see patterns in the world that would otherwise be invisible,” said Thomas W. Malone, a professor at the Sloan School of Management at MIT. “I think we are just scratching the surface of what’s possible with collective intelligence.” Accuracy The initial Google paper stated that the Google Flu Trends predictions were 97% accurate comparing with CDC data. However subsequent reports asserted that Google Flu Trends' predictions have sometimes been very inaccurate—especially over the interval 2011–2013, when it consistently overestimated relative flu incidence, and over one interval in the 2012-2013 flu season predicted twice as many doctors' visits as the CDC recorded. One source of problems is that people making flu-related Google searches may know very little about how to diagnose flu; searches for flu or flu symptoms may well be researching disease symptoms that are similar to flu, but are not actually flu. Furthermore, analysis of search terms reportedly tracked by Google, such as "fever" and "cough", as well as effects of changes in their search algorithm over time, have raised concerns about the meaning of its predictions. In fall 2013, Google began attempting to compensate for increases in searches due to prominence of flu in the news, which was found to have previously skewed results. However, one analysis concluded that "by combining GFT and lagged CDC data, as well as dynamically recalibrating GFT, we can substantially improve on the performance of GFT or the CDC alone." 
A later study also demonstrates that Google search data can indeed be used to improve estimates, reducing the errors seen in a model using CDC data alone by up to 52.7 per cent. By re-assessing the original GFT model, researchers uncovered that the model was aggregating queries about different health conditions, something that could lead to an over-prediction of ILI rates; in the same work, a series of more advanced linear and nonlinear better-performing approaches to ILI modelling have been proposed. Related systems Similar projects such as the flu-prediction project by the institute of Cognitive Science Osnabrück carry the basic idea forward, by combining social media data e.g. Twitter with CDC data, and structural models that infer the spatial and temporal spreading of the disease. References External links Internet properties established in 2008 Projects established in 2008 Flu Trends Influenza Data analysis software Prediction Public health and biosurveillance software
28087280
https://en.wikipedia.org/wiki/Unity%20Technologies
Unity Technologies
Unity Software Inc. (doing business as Unity Technologies) is a video game software development company based in San Francisco. It was founded in Denmark in 2004 as Over the Edge Entertainment (OTEE) and changed its name in 2007. Unity Technologies is best known for the development of Unity, a licensed game engine used to create video games and other applications. , Unity Technologies has undergone significant growth despite reporting financial losses for every year since its founding in 2004. History Founding and early success (2004–2008) Unity Technologies was founded as Over the Edge Entertainment (OTEE) in Copenhagen in 2004 by David Helgason (CEO), Nicholas Francis (CCO), and Joachim Ante (CTO). Over the Edge released its first game, GooBall, in 2005. The game failed commercially, but the three founders saw value in the game development tools that they had created to simplify game development, and so they shifted the company's focus to create an engine for other developers. The company sought to "democratize" game development and make development of 2D and 3D interactive content more accessible. Unity was named the runner-up for Best Use of Mac OS X Graphics at the 2006 Apple Design Awards. The company grew with the 2007 release of the iPhone, as Unity Technologies produced one of the first engines supporting the platform in full. Because the games industry was focused on console games when the iPhone and App Store were released, Unity was positioned to support developers looking to create mobile games. Its dominance on the iPhone was largely uncontested for a couple years. In 2007, Over the Edge changed its name to Unity Technologies. New platforms and expansion (2009–2019) The technology was developed for different platforms. By 2018, Unity was used to make games and other experiences for more than 25 platforms, including mobile, desktop, consoles, and virtual reality. Unity games can also be deployed on the Web. The Unity Asset Store launched in November 2010 as an online marketplace for Unity users to sell project assets (artwork, code systems, audio, etc.) to each other. In April 2012, Unity reportedly had 1 million registered developers, 300,000 of whom used Unity on a monthly basis. In May of the same year, a survey by Game Developer revealed that approximately 53% of mobile game developers were using Unity. By 2016, the company reported more than 5.5 million registered users. Part of Unity's appeal is that it allows people who lack the technical knowledge to program games from scratch to create games and other simulations. Facebook integrated a software development kit for games using the Unity game engine in 2013. The kit featured tools that allowed tracking advertising campaigns and deep linking, where users were directly linked from social media posts to specific portions within games, and in-game-image sharing. Unity acquired Applifier, a Helsinki-based mobile service provider, in March 2014. Applifier's game replay sharing and community service was initially called Everyplay, and became known as Unity Everyplay. The acquisition also meant that Applifier's mobile video ad network, GameAds, became Unity Ads. Two more acquisitions followed later in 2014: Playnomics, a data analysis platform for developers (now Unity Analytics), and Tsugi, whose continuous integration service became known as Unity Cloud Build. 
In October 2014, Helgason announced in a blog post that he would be stepping down as CEO with John Riccitiello, the former CEO of game company Electronic Arts, replacing him. Helgason remained with the company as executive vice-president. Software developer Niantic released Pokémon Go, which was built using Unity engine, in 2016. Following the success of Pokémon Go, Unity Technologies held several rounds of funding that increased the company's valuation: In July 2016, a $181 million round of funding valued the company at approximately $1.5 billion; in May 2017, the company raised $400 million that valued the company at $2.8 billion; and in 2018 Unity's CEO confirmed a $145 million round that valued the company at approximately $3 billion. Also in 2016, Facebook developed a new PC gaming platform with Unity. In 2017, Unity Technologies acquired Multiplay, a business that offers multiplayer server game hosting, from retailer Game for £19 million. Unity Technologies released the Unity 2017 version of its platform in 2017. Unity worked with Google on ARCore in 2017 to develop augmented reality tools for Android devices and apps. The following year, Unity Technologies worked with Google Cloud to offer services for online game developers and Alphabet Inc. subsidiary DeepMind to develop virtual world artificial intelligence. The Unity platform is used to help machines through reinforced learning. According to Fast Company, DeepMind uses Unity software to train algorithms in "physics-realistic environments", where a computer will continually try to achieve a goal through trial and error. The use of Unity Technologies software expanded beyond games in the 2010s, including film and television and automotive. For the automotive industry, carmakers use Unity's virtual reality platform for design and virtual world car testing simulations. In October 2018, Unity Technologies acquired Digital Monarch Media, a Canadian virtual cinematography company. Unity Technologies created the Unity Icon Collective in November 2018. The team creates assets for sale in the Unity Asset Store for PC and consoles. The assets—characters, environments, art, and animation—can be used in high-quality games; the move was seen as an attempt to compete with Unity's rivals, such as Epic Games' Unreal Engine. The company acquired Vivox, a cross-platform voice and text chat provider based in Framingham, Massachusetts, in January 2019. At an acquisition price of $123.4 million, the company became a wholly owned subsidiary of Unity Technologies and operates independently. Vivox's technology is used in Fortnite, PlayerUnknown's Battlegrounds, and League of Legends, among others. Terms of the deal were not disclosed. In May 2019, the company confirmed a $150 million Series E funding round that increased its valuation to $6 billion. In July that year, it announced that together with D1 Capital Partners, CPP Investment Board, Light Street Capital, Sequoia Capital and Silver Lake Partners, it would fund a $525 million tender to allow Unity's common shareholders to sell their shares in the company. Unity Technologies additionally purchased game analytics company deltaDNA in September 2019, which was later reported at a value of $53.1 million. The company continued their acquisitions by buying live game management platform ChilliConnect in October 2019, and 3D application streaming service Furioos creator Obvioos in November 2019. 
That same year, Unity paid $48.8 million to acquire Artomatix, a company that develops an AI-assisted material creation tool called ArtEngine. Despite growing revenues of $541.8 million, Unity also posted growing losses of $163.2 million. The company's IPO filing revealed that they reported losses of over $162.3 million in 2019, and have consistently lost money since its founding in 2004. Despite the losses, the company has consistently grown in terms of revenue and employee numbers. Going public and further acquisitions (2020–present) In June 2020, Unity announced that they had partnered with Apple to update the Unity Engine to run on Apple silicon-equipped Macs with the 2020.2 release, allowing game developers to update their games to support the new hardware platform. An Apple silicon-ported version of the Unity Editor was demoed during the WWDC 2020 Platforms State of the Union event. On 17 August of that year, Unity stated that it had acquired Codice Software, who make the distributed version control system Plastic SCM. That same year, Unity acquired Finger Food Advanced Technology Group for $46.8 million. Unity announced its plans to offer an initial public offering (IPO) in August 2020. At the time, the company reported 1.5 million monthly users, with 15,000 new projects started daily. The company completed its IPO on 17 September 2020 at a total of , above its target price, and started trading as a public company on the New York Stock Exchange under the ticker on the following day. The IPO gave Unity an estimated value of . In December 2020, Unity announced the acquisition of the multiplayer networking framework, MLAPI, and RestAR, a computer vision and deep learning company. In June 2021, it acquired Pixyz Software, a developer of 3D data optimization technology. The company announced plans to acquire Parsec, desktop streaming software, in August 2021 for . In a cash and shares deal Unity acquired Weta Digital for $1.63 billion in November 2021. Unity added the "Wellington-based company's 275 engineers to its workforce". The latter's visual special effects and animation teams "will continue to exist as a standalone entity", becoming Unity's "largest customer in the media and entertainment space". WetaFX remains majority owned by Peter Jackson. In January 2022, Unity announced the acquisition of Ziva Dynamics, a Vancouver-based VFX company. Corporate affairs Unity Technologies is a public company based in San Francisco, California; its IPO was in September 2020. , the company employed more than 2,000 people in offices across North America, Europe and Asia. It is overseen by a board of directors. John Riccitiello acts as chief executive officer (CEO), and replaced co-founder and former CEO David Helgason in 2014. Danny Lange, who has a history of work on machine learning for IBM, Microsoft, Amazon Web Services and Uber, is vice-president of artificial intelligence and machine learning, a post he has held since late 2016. Unity Technologies named its first independent directors in 2017. Riccitiello said the move was needed if the company intended to go public in the future. According to TechCrunch, Unity Technologies had raised more than $600 million in funding and was valued at about $3 billion by 2018. Its investors include Sequoia Capital, Draper Fisher Jurvetson, Silver Lake, China Investment Corporation, FreeS Fund, Thrive Capital, WestSummit Capital and Max Levchin. Revenue streams include licensing fees for its game engine, its Unity Asset Store, and the Unity platform. 
Unity's business is split into Operate Solutions (consisting of Unity Ads, Unity In-App Purchases, and other tools, which was newly established in 2015), Create Solutions (consisting of Unity Engine subscriptions and other professional services) and Strategic Partnerships. In 2019, of its reported revenue Operate Solutions accounted for 54%, Create Solutions for 31% and the remaining income sources for 15%. In 2017, Unity Technologies launched Unity Without Borders, a programme that sponsored 50 video game programmers from the Middle East to attend Unity's Unite Europe conference in Amsterdam. Unity Without Borders sponsored video game programmers affected by travel restrictions by President Donald Trump's administration. On 5 June 2019, Anne Evans, formerly vice-president in human resources for Unity Technologies, filed a sexual harassment and wrongful termination lawsuit against the company, alleging that she had been harassed by Riccitiello and another co-worker, and was then terminated over the dispute with the latter. Evans said that the company had a "highly sexualised" workplace culture, where executives would routinely discuss their sexual histories. Unity Technologies responded that Evans had been terminated due to misconduct and lapse in judgment, and replaced Evans with a former Microsoft executive in mid-2020. Unity engine Unity's eponymous platform is used to create two-dimensional, three-dimensional, virtual reality, and augmented reality video games and other simulations. The engine originally launched in 2005 to create video games. , it supports more than 25 platforms. , the platform has been used to create approximately half of mobile games on the market and 60 percent of augmented reality and virtual reality content, including approximately 90 percent on emerging augmented reality platforms, such as Microsoft HoloLens, and 90 percent of Samsung Gear VR content. , Unity-made applications were used by 2 billion monthly active users, with 1.5 million monthly creators. Unity technology is the basis for most virtual reality and augmented reality experiences, and Fortune said Unity "dominates the virtual reality business". As of 2017, Unity Technologies used its game engine to transition into other industries using the real-time 3D platform, including film and automotive. See also List of game engines List of Unity games References External links 2004 establishments in Denmark Video game companies based in California Video game development companies Video game companies established in 2004 Companies based in San Francisco 2020 initial public offerings Companies listed on the New York Stock Exchange
475505
https://en.wikipedia.org/wiki/Disk%20array%20controller
Disk array controller
A disk array controller is a device that manages the physical disk drives and presents them to the computer as logical units. It almost always implements hardware RAID, so it is sometimes referred to as a RAID controller. It also often provides an additional disk cache. The term disk array controller is often improperly shortened to disk controller. The two should not be confused as they provide very different functionality. Front-end and back-end side A disk array controller provides front-end interfaces and back-end interfaces. The back-end interface communicates with the controlled disks, so its protocol is usually ATA (a.k.a. PATA), SATA, SCSI, FC or SAS. The front-end interface communicates with a computer's host adapter (HBA, Host Bus Adapter) and uses either one of ATA, SATA, SCSI or FC (popular protocols used by disks, so by using one of them a controller may transparently emulate a disk for a computer), or a somewhat less popular protocol dedicated to a specific solution such as FICON/ESCON, iSCSI, HyperSCSI, ATA over Ethernet or InfiniBand. A single controller may use different protocols for back-end and for front-end communication. Many enterprise controllers use FC on the front end and SATA on the back end. Enterprise controllers In a modern enterprise architecture disk array controllers (sometimes also called storage processors, or SPs) are parts of physically independent enclosures, such as disk arrays placed in a storage area network (SAN) or network-attached storage (NAS) servers. Those external disk arrays are usually purchased as an integrated subsystem of RAID controllers, disk drives, power supplies, and management software. It is up to controllers to provide advanced functionality (various vendors name these differently): automatic failover to another controller (transparent to computers transmitting data); long-running operations performed without downtime, such as forming a new RAID set, reconstructing a degraded RAID set (after a disk failure), adding a disk to an online RAID set, and removing a disk from a RAID set (rare functionality); partitioning a RAID set into separate volumes/LUNs; snapshots; business continuance volumes (BCV); and replication with a remote controller. Simple controllers A simple disk array controller may fit inside a computer, either as a PCI expansion card or just built onto a motherboard. Such a controller usually provides host bus adapter (HBA) functionality itself to save physical space. Hence it is sometimes called a RAID adapter. Intel started integrating its own Matrix RAID controller in its more upmarket motherboards, giving control over 4 devices and an additional 2 SATA connectors, totalling 6 SATA connections (3 Gbit/s each). For backward compatibility, one IDE connector able to connect 2 ATA devices (100 MB/s) is also present. History While hardware RAID controllers were available for a long time, they always required expensive SCSI hard drives and were aimed at the server and high-end computing market. SCSI technology advantages include allowing up to 15 devices on one bus, independent data transfers, hot-swapping, and much higher MTBF. Around 1997, with the introduction of ATAPI-4 (and thus Ultra-DMA Mode 0, which enabled fast data transfers with less CPU utilization), the first ATA RAID controllers were introduced as PCI expansion cards. Those RAID systems made their way to the consumer market, where users wanted the fault tolerance of RAID without investing in expensive SCSI drives. 
ATA drives make it possible to build RAID systems at lower cost than with SCSI, but most ATA RAID controllers lack a dedicated buffer or high-performance XOR hardware for parity calculation. As a result, ATA RAID performs relatively poorly compared to most SCSI RAID controllers. Additionally, data safety suffers if there is no battery backup to finish writes interrupted by a power outage. OS support Because hardware RAID controllers present assembled RAID volumes, operating systems aren't strictly required to implement the complete configuration and assembly for each controller. Very often only the basic features are implemented in the open-source software driver, with extended features being provided through binary blobs directly by the hardware manufacturer. Normally, RAID controllers can be fully configured through the card BIOS before an operating system is booted, and after the operating system is booted, proprietary configuration utilities are available from the manufacturer of each controller, because the exact feature set of each controller may be specific to each manufacturer and product. Unlike network interface controllers for Ethernet, which can usually be configured and serviced entirely through common operating system paradigms like ifconfig in Unix without a need for any third-party tools, each manufacturer of each RAID controller usually provides its own proprietary software tooling for each operating system that it deems to support, ensuring vendor lock-in and contributing to reliability issues. For example, in FreeBSD, in order to access the configuration of Adaptec RAID controllers, users are required to enable the Linux compatibility layer and use the Linux tooling from Adaptec, potentially compromising the stability, reliability and security of their setup, especially when taking a long-term view. However, this greatly depends on the controller and whether appropriate hardware documentation is available in order to write a driver, and some controllers do have open-source versions of their configuration utilities: for example, mfiutil and mptutil have been available for FreeBSD since FreeBSD 8.0 (2009), as well as mpsutil/mprutil since 2015, each supporting only its respective device driver, which contributes to code bloat. Some other operating systems have implemented their own generic frameworks for interfacing with any RAID controller, and provide tools for monitoring RAID volume status, as well as facilitating drive identification through LED blinking, alarm management and hot spare disk designations from within the operating system, without having to reboot into the card BIOS. For example, this was the approach taken by OpenBSD in 2005 with its bio(4) pseudo-device driver and the bioctl utility, which provide volume status and allow LED/alarm/hotspare control, as well as sensors (including the drive sensor) for health monitoring; this approach was subsequently adopted and extended by NetBSD in 2007 as well. With bioctl, the feature set is intentionally kept to a minimum, so that each controller can be supported by the tool in the same way; the initial configuration of the controller is meant to be performed through the card BIOS, but after the initial configuration, all day-to-day monitoring and repair should be possible with unified and generic tools, which is what bioctl sets out to accomplish. 
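To make the parity point above concrete, here is a minimal, hypothetical sketch (the function names and block sizes are illustrative, not taken from any controller's firmware): RAID 5-style parity is simply the bytewise XOR of the data blocks in a stripe, which is the operation that dedicated XOR hardware offloads and that software or ATA RAID must perform on the host CPU.

```python
# Illustrative sketch (not vendor code): RAID 5-style parity is the bitwise XOR
# of the corresponding bytes of every data block in a stripe. Controllers with
# dedicated XOR hardware offload exactly this loop; software/ATA RAID runs it
# on the host CPU. Block sizes and stripe layout here are arbitrary examples.

def xor_parity(blocks: list[bytes]) -> bytes:
    """Compute the parity block for one stripe of equally sized data blocks."""
    assert blocks and all(len(b) == len(blocks[0]) for b in blocks)
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def rebuild_missing(surviving: list[bytes], parity: bytes) -> bytes:
    """Reconstruct a single failed block: XOR the parity with all surviving blocks."""
    return xor_parity(surviving + [parity])

if __name__ == "__main__":
    stripe = [b"AAAA", b"BBBB", b"CCCC"]          # three data blocks
    p = xor_parity(stripe)                        # parity written to a fourth disk
    assert rebuild_missing([stripe[0], stripe[2]], p) == stripe[1]  # recover block 1
```

Recovering a failed block is the same operation run in reverse: XOR the parity with the surviving blocks of the stripe.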
References Storage Basics: Choosing a RAID Controller, May 7, 2004, By Ben Freeman Computer data storage Computer storage devices AT Attachment Fault-tolerant computer systems Integrated circuits RAID SCSI
24247643
https://en.wikipedia.org/wiki/WikiTrust
WikiTrust
WikiTrust is a software product, available as a Firefox Plugin, which aimed to assist editors in detecting vandalism and dubious edits, by highlighting the "untrustworthy" text with a yellow or orange background. As of September 2017, the server is offline, but the code is still available for download. When the UCSC server was active, WikiTrust assessed the credibility of content and author reputation of wiki articles using an automated algorithm. WikiTrust provides a plug-in for servers using the MediaWiki platform, such as Wikipedia. When installed, it was designed to enable users of that website to obtain information about the author, origin, and reliability of that website's wiki text. Content that is stable, based on an analysis of article history, should be displayed in normal black-on-white type, and content that is not stable is highlighted in varying shades of yellow or orange. It was formerly available for several language versions of Wikipedia. WikiTrust on Wikipedia was a project undertaken by the Online Collaboration Lab at the University of California, Santa Cruz, in response to a Meta-wiki quality initiative sponsored by the Wikimedia Foundation. The project, discussed at Wikimania 2009, was one of a number of quality/rating tools for Wikipedia content that the Wikimedia Foundation was considering. Communications of the ACM (August 2011) had an article on it. WikiTrust is designed for English and German use via the Wiki-Watch pagedetails for Wikipedia articles, in several languages via a Firefox plugin or it can be installed in any MediaWiki configuration. By 2012, WikiTrust appeared to be an inactive project. A variant of the WikiTrust code was also used for selection of vandalism-free Revision IDs for the Wikipedia Version 0.8 offline selection. As of September 2017, this part of the code is reported to be under development again, for use in Version 0.9 and 1.0 offline collections. Software application Computing reliability WikiTrust computes, for each word, three pieces of information: The author of the word. The revision where the word (and the immediately surrounding text) was inserted. By clicking on a word, visitors are sent to the revision where the word originated. The "trust" of the word, indicated by the word background coloring (orange for "untrusted" text, white for "trusted" text). The trust of the word is computed according to how much the word, and the surrounding text, have been revised by users that WikiTrust considers of "high authority." This project is still in a beta test stage. Criticism The criticism has been raised that "the software doesn’t really measure trustworthiness, and the danger is that people will trust the software to measure something that it does not." Generally, users whose content persists for a long time without being "reverted" by other editors are deemed more trustworthy by the software. This may mean that users who edit controversial articles subject to frequent reversion may be found to be less trustworthy than others. The software uses a variation of Levenshtein distance to measure how much of user's edit is kept or rearranged, so that users can receive "partial credit" for their work. Community bias The software has also been described as measuring the amount of consensus in an article. The community of editors collaborate on articles and revise each other until agreement is reached. Users who make edits which are more similar to the final agreement will receive more reputation. 
The point is also made that consensus revolves around the beliefs of the community, so that the reputation computed is also a reflection of the community. See also Flagged Revisions Reliability of Wikipedia Artificial intelligence in Wikimedia projects References External links Source code repository Wikipedia reliability
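As a rough, hypothetical illustration of the "partial credit" idea described above, the sketch below scores an editor's contribution by how much of it survives a later revision, using a word-level edit distance; WikiTrust's actual algorithm (author reputations, trust propagation over the article history) is considerably more elaborate.

```python
# Hypothetical illustration of "partial credit": an editor's reputation gain is
# scaled by how much of their revision survives a later revision, measured with
# a word-level Levenshtein distance. This is a teaching sketch, not WikiTrust code.

def levenshtein(a: list[str], b: list[str]) -> int:
    """Word-level edit distance between two revisions."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (wa != wb)))    # substitution
        prev = cur
    return prev[-1]

def retention_score(edited: str, later: str) -> float:
    """Fraction of the edited revision that survives in the later revision (0..1)."""
    a, b = edited.split(), later.split()
    if not a:
        return 0.0
    return max(0.0, 1.0 - levenshtein(a, b) / max(len(a), len(b)))

print(retention_score("the quick brown fox", "the quick brown fox jumps"))   # mostly kept
print(retention_score("the quick brown fox", "a completely different text")) # mostly reverted
```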
27963793
https://en.wikipedia.org/wiki/Free%20Software%20Street
Free Software Street
Free Software Street (Catalan: Carrer del Programari Lliure) is a street in the town of Berga in Catalonia, Spain. It is the first street in the world dedicated to the free software movement. It was officially opened on 3 July 2010. The street is about long and is located in a newly redeveloped area known as Pla de l'Alemany in the southwestern part of town. It is a mixed-use street, with a large hotel (the Berga Park), a school, a police station, some offices, parkland, and some undeveloped plots. History In June 2009, Albert Molina, Xavier Gassó and Abel Parera from the Berga Telecentre organized the first Free Software Conferences to be held in town. After the conference, they raised the possibility of naming a street in honor of free software and applied to the Berga Town Council for it. In January 2010, while the second edition of the Free Software Conferences was being organized, it was decided to invite the Free Software Foundation founder Richard Stallman to participate in the planned events, including the naming ceremony of the street. At the same time, the contacts with politicians from the town council were successfully concluded. On 10 June, the renaming of the street as "Carrer del Programari Lliure" ("Free Software Street") was officially approved during a plenary session at the town hall with 14 votes in favor and 2 abstentions. On 3 July 2010 at 20:00 (19:00 UTC), the Mayor of Berga, Juli Gendrau, and Richard Stallman inaugurated the street. References External links Streets in Catalonia Free software culture and documents Berguedà
2100055
https://en.wikipedia.org/wiki/LAND
LAND
A LAND (local area network denial) attack is a DoS (denial of service) attack that consists of sending a special poison spoofed packet to a computer, causing it to lock up. The security flaw was first discovered in 1997 by someone using the alias "m3lt", and resurfaced many years later in operating systems such as Windows Server 2003 and Windows XP SP2. Mechanism The attack involves sending a spoofed TCP SYN packet (connection initiation) to an open port with the target host's IP address as both source and destination. This causes the machine to reply to itself continuously. It is, however, distinct from the TCP SYN flood vulnerability. Other LAND attacks have since been found in services like SNMP and Windows 88/tcp (kerberos/global services). Such systems had design flaws that would allow the device to accept requests on the wire appearing to be from itself, causing repeated replies. Vulnerable systems Below is a list of vulnerable operating systems: AIX 3.0 AmigaOS AmiTCP 4.2 (Kickstart 3.0) BeOS Preview release 2 PowerMac BSDi 2.0 and 2.1 Digital VMS FreeBSD 2.2.5-RELEASE and 3.0 (Fixed after required updates) HP External JetDirect Print Servers IBM AS/400 OS7400 3.7 Irix 5.2 and 5.3 Mac OS MacTCP, 7.6.1 OpenTransport 1.1.2 and 8.0 NetApp NFS server 4.1d and 4.3 NetBSD 1.1 to 1.3 (Fixed after required updates) NeXTSTEP 3.0 and 3.1 Novell 4.11 OpenVMS 7.1 with UCX 4.1-7 QNX 4.24 Rhapsody Developer Release SCO OpenServer 5.0.2 SMP, 5.0.4 SCO Unixware 2.1.1 and 2.1.2 SunOS 4.1.3 and 4.1.4 Windows 95, NT and XP SP2 Prevention Most firewalls should intercept and discard the poison packet, thus protecting the host from this attack. Some operating systems released updates fixing this security hole. See also Slowloris (computer security) High Orbit Ion Cannon Low Orbit Ion Cannon ReDoS Denial-of-service attack References External links Insecure.Org's original post about the attack Article about XP's vulnerability Denial-of-service attacks Types of cyberattacks
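As a minimal sketch of the detection side described under Prevention, the following example (which assumes the third-party Scapy packet library and sufficient privileges to sniff traffic) flags TCP SYN packets whose source address and port equal their destination, the signature of a LAND packet; it is illustrative only, not a hardened filter.

```python
# Illustrative detection sketch (assumes the third-party "scapy" package; run
# with sufficient privileges). A LAND packet is a TCP SYN whose source address
# (and typically source port) equals its destination, so a sniffer or firewall
# can simply drop anything where src == dst, as the Prevention section notes.

from scapy.all import IP, TCP, sniff  # pip install scapy

def is_land_packet(pkt) -> bool:
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
        return False
    ip, tcp = pkt[IP], pkt[TCP]
    syn_set = tcp.flags & 0x02  # SYN bit
    return bool(syn_set) and ip.src == ip.dst and tcp.sport == tcp.dport

def report(pkt):
    if is_land_packet(pkt):
        print(f"Possible LAND attack: {pkt[IP].src}:{pkt[TCP].sport} -> "
              f"{pkt[IP].dst}:{pkt[TCP].dport}")

if __name__ == "__main__":
    sniff(filter="tcp", prn=report, store=False)
```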
563646
https://en.wikipedia.org/wiki/Revolution%20%28disambiguation%29
Revolution (disambiguation)
A revolution is a drastic political change that usually occurs relatively quickly. For revolutions which affect society, culture, and technology more than political systems, see social revolution. Revolution may also refer to: Aviation Warner Revolution I, an American homebuilt aircraft design Warner Revolution II, an American homebuilt aircraft design Books Revolution (book), by Russell Brand, 2014 Revolution (novel), by Jennifer Donnelly, 2010 Revolution, the first part of the 2013 novelization of the first book of the animated TV series The Legend of Korra The Revolution: A Manifesto, by Ron Paul, 2008 Comics Revolution (Marvel Comics), 2000 Revolution (IDW Publishing), 2016 Computing Revolution (software platform), a development environment based on the MetaCard engine Revolution Analytics, a statistical software company Revolution, the former name of LiveCode, a software platform and cross-platform software development environment featuring a dynamically-typed programming language known as Transcript Runtime Revolution (RunRev), the former name of LiveCode, Ltd., the software company that develops the LiveCode software platform Revolution, prototype name for the Wii video game console produced by Nintendo Revolution Software, an English videogame company Revolution (video game), 1986 computer game released by U.S. Gold Engineering and science In astronomy and related fields, the term "revolution" is used when one body moves around (orbits) another while the term "rotation" is used to mean the movement around an axis Industrial Revolution, an 18th–19th-century period of rapid technological development in the West Second Industrial Revolution, also known as the Technological Revolution Revolution engine, a Harley–Davidson engine Revolutions per minute (RPM), a unit of frequency measuring rotational speed, as around a fixed axis Film Revolution, a 1967 eight-minute short by Peter Greenaway Revolution (1968 film), a documentary film by Jack O'Connell made in San Francisco Révolution, a 1985 French adult film directed by José Bénazéraf Revolution (1985 film), a film about a New York fur trapper during the American Revolutionary War Revolution!! (1989), a comic re-enactment of the French Revolution by the National Theatre of Brent Revolution (2012 film), a documentary movie about taking a stand against environmental degradation RevoLOUtion: The Transformation of Lou Benedetti a 2006 dramedy about a Brooklyn boxer Revolution OS (2001), a 2001 documentary on Linux and the free software movement Revolution Studios, a film production company Television "Revolution" (Law & Order: Criminal Intent), the Law & Order: Criminal Intent eighth-season finale based on a banking revolution The Revolution (miniseries), a 2006 American documentary miniseries about the American Revolution that was broadcast on History Channel The Revolution (TV program), a 2012 American health and lifestyle talk television program that aired on ABC Revolution (TV series), a U.S. 
science fiction series that ran from 2012 to 2014 Revolucija (TV series), a Serbian television series that ran from 2013 to 2015 Supermodel Me: Revolution, the sixth season of Supermodel Me Mathematics Surface of revolution Revolution (geometry) or turn, a complete rotation, 360° Orbital revolution, the cyclical path taken by one object around another object, as of planets Medicine Selamectin, a parasiticide and anthelminthic for cats and dogs, with the trade name Revolution Music Albums Revolution (All Star United album), and the title song Revolution (Crematory album), and the title song Revolution (The Dubliners album) Revolution (Hypnogaja album) Revolution (Kara album) Revolution (Lacrimosa album), and the title song Revolution (Little Steven album), and the title song Revolution (Miranda Lambert album) Revolution (Sirsy album) Revolution (Sister Machine Gun album) Revolution (Slaughter album) Revolution (Tiësto album) Revolution (YFriday album), and the title song Revolution!, by Paul Revere & The Raiders Revolution (Anew Revolution EP) Revolution, by 2R Revolution, by Dilba Revolution, by Diplo, and the title song Revolution, by Wickeda Revolution, an EP by One Minute Silence The Revolution (Belly album) The Revolution (Inhabited album) The Revolution (EP), by Van William, and the title song featuring First Aid Kit (R)evolution, by Minimum Serious (r)Evolution, by HammerFall REvolution, by Lynch Mob The Revolution, by Fiach Moriarty and Wallis Bird Songs "Revolution" (Beatles song), 1968 "Revolution" (Chumbawamba song), 1985 "Revolution" (Coldrain song), 2018 "Revolution" (The Cult song), 1985 "Revolution" (Jars of Clay song), 2002 "Revolution" (Judas Priest song), 2005 "Revolution" (Nina Simone song), 1968 "Revolution" (R3hab and Nervo and Ummet Ozcan song), 2013 "Revolution" (Stefanie Heinzmann song), 2008 "Revolution" (Tomorrow song), 1967 "Revolution" (The Veronicas song), 2006 "Revolution", by 30 Seconds to Mars, an unreleased song "Revolution", by Accept from Stalingrad "Revolution", by Aimee Allen, used as the title music for the TV series Birds of Prey "Revolution", by Arrested Development "Revolution", by Audio Adrenaline from Audio Adrenaline "Revolution", by Bang Camaro from Bang Camaro II "Revolution", by Bob Marley from Natty Dread "Revolution", by Built to Spill from Ultimate Alternative Wavers "Revolution", by Dennis Brown "Revolution", by Doug Wimbish from CinemaSonics "Revolution", by Eric Clapton from Back Home "Revolution", by Flogging Molly from Speed of Darkness "Revolution", by Kirk Franklin from The Nu Nation Project "Revolution", by Krayzie Bone, featuring The Marley Brothers, from Thug Mentality 1999 "Revolution", by Lil' Kim from The Notorious K.I.M. "Revolution", by Livin Out Loud from Then and Now "Revolution", by Moth from Immune to Gravity "Revolution", by P.O.D. from Payable on Death "Revolution", by Public Enemy from New Whirl Odor "Revolution", by R.E.M. 
from the soundtrack Batman & Robin and their documentary video Road Movie "Revolution", by Robbie Williams from Escapology "Revolution", by Rogue Traders from We Know What You're Up To "Revolution", by Spacemen 3 from Playing with Fire "Revolution", by Steve Angello from Wild Youth "Revolution", by Theatre of Tragedy from Forever Is the World "Revolution (B-Boy Anthem)", by Zion I "Revolution (In the Summertime?)", by Cosmic Rough Riders "Revolution 9", by The Beatles "Revolution 909", by Daft Punk "Revolution 1993", by Jamiroquai from Emergency on Planet Earth "Revolution Song", by Oasis, a demo recorded during the sessions for Standing on the Shoulder of Giants "The Revolution" (Exile Tribe song) "The Revolution", by Attack Attack! from This Means War "The Revolution", by BT from the soundtrack Lara Croft: Tomb Raider "The Revolution", by Chris de Burgh from The Getaway "The Revolution", by Coolio from Gangsta's Paradise "The Revolution", by David Byrne from Look into the Eyeball "The Revolution", by Scooter from Back to the Heavyweight Jam "The Revolution", by Tom Verlaine from The Miller's Tale: A Tom Verlaine Anthology "La révolution", by Tryo from Mamagubida "(r)Evolution", by HammerFall from (r)Evolution Other Revolution (duo), a South African house band The Revolution (band), Prince's original band, formed in 1979 Revolution Records, a U.S. record label Politics Revolution (political group), a political group founded by the League for a Fifth International Total revolution, the political philosophy of veganism and anarchism Publications Revolution (weekly), organ of the Revolutionary Communist Party, USA The Revolution (newspaper), a women's rights newspaper published from 1868 to 1872 Society Revolution (Pleasure Beach Blackpool), a roller coaster at Blackpool Pleasure Beach, England Vekoma Illusion, a roller coaster model named Revolution at Bobbejaanland in Belgium Revolution (vodka bar), a brand of bars founded in Manchester in 1996 Revolution LLC, a principal investment firm founded by former AOL chairman Steve Case Revolution (pet medicine), a flea and heartworm preventative treatment for cats and dogs The Revolution (radio station), a former radio station broadcasting to Oldham, Rochdale and Tameside, United Kingdom Sport AEW Revolution, an annual professional wrestling event by All Elite Wrestling (AEW) New England Revolution, a Major League Soccer team Revolution, a ball rotation in ten-pin bowling Revolution, nickname of a United States men's national Australian rules football team Shropshire Revolution, American football team in Shropshire, England Revolution (cycling series), a track cycling event held at the Manchester Velodrome, England Various professional wrestling tag teams and stables: The Revolution (TNA) in Total Nonstop Action Wrestling The Revolution (WCW), in World Championship Wrestling Revolution (puroresu) in various puroresu promotions See also List of revolutions and rebellions Revolución (disambiguation) Revolutions (disambiguation) R-Evolution (disambiguation) Viva la revolución (disambiguation)
3970492
https://en.wikipedia.org/wiki/National%20Security%20Council%20%28India%29
National Security Council (India)
The National Security Council (NSC) (IAST: Rāṣṭrīya Surakṣā Pariṣad) of India is an executive government agency tasked with advising the Prime Minister's Office on matters of national security and strategic interest. It was established by the former Prime Minister of India Atal Bihari Vajpayee on 19 November 1998, with Brajesh Mishra as the first National Security Advisor. Prior to the formation of the NSC, these activities were overseen by the Principal Secretary to the preceding Prime Minister. Members Besides the National Security Advisor (NSA), the Deputy National Security Advisors, the Ministers of Defence, External Affairs, Home, Finance of the Government of India, and the Vice Chairman of the NITI Aayog are members of the National Security Council. PM can chair the meeting of NSC (for eg - PM chaired the meeting of NSC Post Pulwama to discuss heightened tension with Pakistan). Other members may be invited to attend its monthly meetings, as and when it is required. Organisational structure The NSC is the apex body of the three-tiered structure of the national security management system in India. The three tiers are the Strategic Policy Group, the National Security Advisory Board and a secretariat from the Joint Intelligence Committee. Strategic Policy Group The Strategic Policy Group is the first level of the three tier structure of the National Security Council. It forms the nucleus of the decision-making apparatus of the NSC. National Security Advisor Ajit Doval is the chairman of the group and it consists of the following members: Vice Chairman Niti Aayog Rajiv Kumar Cabinet Secretary (Rajiv Gauba, IAS) Chief of Defence Staff (Vacant) Chief of the Army Staff (General Manoj Mukund Naravane) Chief of the Naval Staff (Admiral R. Hari Kumar) Chief of the Air Staff (Air Chief Marshal Vivek Ram Chaudhari) Governor of the Reserve Bank of India (RBI) (Shaktikanta Das) Foreign Secretary (Harsh Vardhan Shringla, IFS) Defence Secretary (Ajay Kumar, IAS) Home Secretary (Ajay Kumar Bhalla, IAS) Finance Secretary (T. V. Somanathan, IAS) Secretary (Research) (i.e. the head of the Research and Analysis Wing) (Samant Goel, IPS) Director General of Defence Intelligence Agency (Lieutenant General KJS Dhillion) Director of the Intelligence Bureau (Arvind Kumar, IPS) Chairperson, Central Board of Direct Taxes (Pramod Chandra Mody, IRS (IT)) Secretary (Defence Production) (Dr. Ajay Kumar, IAS) Scientific Advisor to the Raksha Mantri and Chairman of Defence Research and Development Organisation (DRDO) (Dr. G. Satheesh Reddy) Secretary (Atomic Energy) (Dr. K. N. Vyas) Secretary (Space) and ex-officio Chairman, Indian Space Research Organization (ISRO) (Dr. K. Sivan) The Strategic Policy Group undertakes the Strategic Defence Review, a blueprint of short and long term security threats, as well as possible policy options on a priority basis. National Security Advisory Board The brainchild of the first National Security Advisor (NSA), Brajesh Mishra, a former member of Indian Foreign Service. The National Security Advisory Board (NSAB) consists of a group of eminent national security experts outside of the government. Members are usually senior retired officials, civilian as well as military, academics and distinguished members of civil society drawn from and having expertise in Internal and External Security, Foreign Affairs, Defence, Science & Technology and Economic Affairs. The first NSAB, constituted in December 1998, headed by the late K. 
Subrahmanyam produced a draft Nuclear Doctrine for the country in 2001, a Strategic Defence Review in 2002 and a National Security Review in 2007. The board meets at least once a month, and more frequently as required. It provides a long-term prognosis and analysis to the NSC, recommends solutions and addresses policy issues referred to it. Initially the Board was constituted for one year, but since 2004-06, the Board has been reconstituted for two years. The tenure of the previous NSAB, headed by former Foreign Secretary Shyam Saran, ended in January 2015. It had 14 members. The new board was reconstituted in July 2018, with P. S. Raghavan, former Indian Ambassador to Russia (2014–16), as its head. It has a tenure of two years. Joint Intelligence Committee The Joint Intelligence Committee (JIC) of the Government of India analyses intelligence data from the Intelligence Bureau, the Research and Analysis Wing and the Directorates of Military, Naval and Air Intelligence, and thus analyses both domestic and foreign intelligence. The JIC has its own Secretariat that works under the Cabinet Secretariat. Cyber Security The National Cyber Security Strategy is formulated by the Office of the National Cyber Security Coordinator at the National Security Council Secretariat. The National Security Council and the National Information Board, both headed by the National Security Adviser, oversee cyber security and help frame India's cyber security policy. The strategy aims to protect cyberspace, including critical information infrastructure, from attack, damage, misuse and economic espionage. In 2014 the National Critical Information Infrastructure Protection Centre under the National Technical Research Organisation was mandated with the protection of critical information infrastructure. In 2015, the Office of the National Cyber Security Coordinator was created to advise the Prime Minister on strategic cyber security issues. As the nodal entity, India's Computer Emergency Response Team (CERT-In) plays a crucial role under the Ministry of Electronics and Information Technology (MeitY). On 15 June 2021, the Government of India launched the Trusted Telecom Portal, signalling the coming into effect of the National Security Directive on Telecommunication Sector (NSDTS). Consequently, with effect from 15 June 2021, telecom service providers (TSPs) are required to connect to their networks only those new devices which are designated as Trusted Products from Trusted Sources. See also Cabinet Committee on Security List of Indian Intelligence agencies References External links Official website of the National Security Advisory Board Trusted Telecom Portal - National Security Directive on Telecommunication Sector Center for Contemporary Conflict - Indian Internal Security and Intelligence Organization Espionage Info - India, Intelligence and Security FAS - Directorate for Inter-Services Intelligence Global Security - India Intelligence and Security Agencies India Councils of India Security 1998 establishments in Delhi Government agencies established in 1998 Ministry of Communications and Information Technology (India) Cyber Security in India
48858456
https://en.wikipedia.org/wiki/Suhas%20Katti%20v.%20Tamil%20Nadu
Suhas Katti v. Tamil Nadu
Suhas Katti v. Tamil Nadu was the first case in India where a conviction was handed down in connection with the posting of obscene messages on the internet under the controversial section 67 of the Information Technology Act, 2000. The case was filed in February 2004, and within a short span of about seven months from the filing of the FIR, the Chennai Cyber Crime Cell secured the conviction. In the case, a woman complained to the police about a man who was sending her obscene, defamatory and annoying messages in a Yahoo message group. The accused also forwarded emails received in a fake account opened by him in the victim's name. The victim also received phone calls from people who believed she was soliciting for sex work. Facts After the victim made the complaint in February 2004, the police traced the accused, who was the victim's friend, to Mumbai and arrested him. The police found the accused was interested in marrying the victim, but she turned him down and married someone else instead. The marriage, however, ended in divorce, after which the accused started contacting the victim again, but she rejected him once more. The accused then started harassing the victim online. On March 24, 2004, a chargesheet was filed under section 67 of the IT Act 2000 and sections 469 and 509 of the IPC before the metropolitan magistrate in Egmore, Chennai. The defence argued that the offending mails were sent either by the victim's husband or by the victim herself to implicate the accused. On November 5, 2004, the magistrate found the accused guilty of offences under sections 469 and 509 IPC and section 67 of the IT Act 2000. He was sentenced to rigorous imprisonment for two years and a fine of Rs 500 under section 469 IPC, one year of simple imprisonment and a Rs 500 fine under section 509 IPC, and two years of imprisonment with a fine of Rs 4,000 under section 67 of the IT Act 2000. All sentences were to run concurrently. More information Section 67 in The Information Technology Act, 2000 [ 67 Punishment for publishing or transmitting obscene material in electronic form. -Whoever publishes or transmits or causes to be published or transmitted in the electronic form, any material which is lascivious or appeals to the prurient interest or if its effect is such as to tend to deprave and corrupt persons who are likely, having regard to all relevant circumstances, to read, see or hear the matter contained or embodied in it, shall be punished on first conviction with imprisonment of either description for a term which may extend to three years and with fine which may extend to five lakh rupees and in the event of second or subsequent conviction with imprisonment of either description for a term which may extend to five years and also with fine which may extend to ten lakh rupees. ] The case is also significant for having introduced electronic evidence under Section 65B of the Indian Evidence Act for the first time in a court: a certified copy of the electronic document present on the Yahoo server was produced by a private techno-legal consultant, who was not part of a government forensic lab, and was accepted as the primary evidence of the crime. The role of a private person as an "Expert" and the "Validity of Section 65B of Indian Evidence Act" were examined and validated in the trial. During the trial, a conviction was also secured on the concept of "forgery" of an electronic document under the Indian Penal Code, applied when a person appends another person's name to a message intending the recipient to consider it a message sent by that person. 
Impact The impact of the case was far-reaching: it set a benchmark for the courts and inspired people to lodge cases related to harassment on the internet. Section 67 subsequently ran into controversy after the government used the section to enforce a partial ban on pornography in India, with activists questioning the vague definition of "obscene" in the provision, which could be used to curtail any sexually explicit material. The case also brought out the responsibilities of an intermediary, such as a cyber cafe, in maintaining a visitors' register and its importance as evidence. The case validated the concept of production of electronic evidence through Section 65B certification without the production of the original hard disk containing the document. References External links Indian Kanoon website Cyber Evidence Archival Center (copy of Judgement) Information technology in India
62006488
https://en.wikipedia.org/wiki/World%20Congress%20on%20Information%20Technology%20%282019%29
World Congress on Information Technology (2019)
The World Congress on Information Technology (WCIT) 2019 was an information and communications technology (ICT) event which took place from October 6 to 9, 2019, in Yerevan, Armenia. The 23rd World Congress on IT featured discussions related to the evolution of the Digital Age. It included sessions on topics ranging from artificial intelligence, virtual reality and smart cities to cybersecurity, climate change, and more. The 2019 World Congress had over 2,000 delegates from 70 countries, with over 31 sponsoring organizations. Overview The Congress has been organized since 1978 by the World Information Technology and Services Alliance (WITSA) and takes place in different countries, every two years until 2017 and annually since then. WCIT 2019 events and programs Sunday, October 6th Pre-Opening Celebration: World's First AI Concert on Republic Square of Yerevan, Armenia, conductor Sergey Smbatyan and special guest Armin Van Buuren Monday, October 7th Substantive Sessions WCIT 2019 Keynote Address Tuesday, October 8th Substantive Sessions Ministerial Session Wednesday, October 9th Substantive Sessions Genomics Speakers Among the featured speakers were internationally recognized leaders from government and industry. Business Alexander Yesayan, President of the Union of Advanced Technology Enterprises Yvonne Chiu, Chairman of the World Information Technology and Services Alliance Government Nikol Pashinyan, Prime Minister of Armenia Academia/Media/Other Kim Kardashian, American media personality, businesswoman, socialite, model and actress Serj Tankian, Armenian-American musician, singer, songwriter, multi-instrumentalist, record producer, poet and political activist Sponsors Government of Armenia World Information Technology and Services Alliance (WITSA) Union of Advanced Technology Enterprises Partners The Government of Moscow Picsart Armenian General Benevolent Union (AGBU) Palladium Sponsors Ucom SoftConstruct DigiTec Expo Gold Sponsors Taiwan Excellence Smart Taiwan TeamViewer Google Silver Sponsors VMware Ararat Vahakni Bronze Sponsors InecoBank Codics HachTech Digitain IUnetworks Renderforest Virtlo Joomag Storaket Architectural Studio Pikasso Adrack Ardshinbank Interprint Coca-Cola Digifield Karas Converse Bank The Crowdfunding Formula Sproot HD Studio Special Partner All.me Multi Group Concern Branding Partner Maeutica Branding Agency Travel Partner Armenia Travel Digital Marketing Partner MAROG Creative Agency Gallery See also Information and communications technology World Information Technology and Services Alliance References External links WCIT 2019 web page WCIT 2019's page on facebook Information and communications technology Events in Yerevan
12808
https://en.wikipedia.org/wiki/GSM
GSM
The Global System for Mobile Communications (GSM) is a standard developed by the European Telecommunications Standards Institute (ETSI) to describe the protocols for second-generation (2G) digital cellular networks used by mobile devices such as mobile phones and tablets. It was first deployed in Finland in December 1991. By the mid-2010s, it became a global standard for mobile communications achieving over 90% market share, and operating in over 193 countries and territories. 2G networks developed as a replacement for first generation (1G) analog cellular networks. The GSM standard originally described a digital, circuit-switched network optimized for full duplex voice telephony. This expanded over time to include data communications, first by circuit-switched transport, then by packet data transport via General Packet Radio Service (GPRS), and Enhanced Data Rates for GSM Evolution (EDGE). Subsequently, the 3GPP developed third-generation (3G) UMTS standards, followed by the fourth-generation (4G) LTE Advanced and the fifth-generation 5G standards, which do not form part of the ETSI GSM standard. "GSM" is a trade mark owned by the GSM Association. It may also refer to the (initially) most common voice codec used, Full Rate. As a result of the network's widespread use across Europe, the acronym "GSM" was briefly used as a generic term for mobile phones in France, the Netherlands and in Belgium. A great number of people in Belgium still use it to date. Many carriers (like Verizon) will shutdown GSM and CDMA in 2022. History Initial development for GSM by Europeans In 1983, work began to develop a European standard for digital cellular voice telecommunications when the European Conference of Postal and Telecommunications Administrations (CEPT) set up the Groupe Spécial Mobile (GSM) committee and later provided a permanent technical-support group based in Paris. Five years later, in 1987, 15 representatives from 13 European countries signed a memorandum of understanding in Copenhagen to develop and deploy a common cellular telephone system across Europe, and EU rules were passed to make GSM a mandatory standard. The decision to develop a continental standard eventually resulted in a unified, open, standard-based network which was larger than that in the United States. In February 1987 Europe produced the first agreed GSM Technical Specification. Ministers from the four big EU countries cemented their political support for GSM with the Bonn Declaration on Global Information Networks in May and the GSM MoU was tabled for signature in September. The MoU drew in mobile operators from across Europe to pledge to invest in new GSM networks to an ambitious common date. In this short 38-week period the whole of Europe (countries and industries) had been brought behind GSM in a rare unity and speed guided by four public officials: Armin Silberhorn (Germany), Stephen Temple (UK), Philippe Dupuis (France), and Renzo Failli (Italy). In 1989 the Groupe Spécial Mobile committee was transferred from CEPT to the European Telecommunications Standards Institute (ETSI). The IEEE/RSE awarded to Thomas Haug and Philippe Dupuis the 2018 James Clerk Maxwell medal for their "leadership in the development of the first international mobile communications standard with subsequent evolution into worldwide smartphone data communication". The GSM (2G) has evolved into 3G, 4G and 5G. First networks In parallel France and Germany signed a joint development agreement in 1984 and were joined by Italy and the UK in 1986. 
In 1986, the European Commission proposed reserving the 900 MHz spectrum band for GSM. The former Finnish prime minister Harri Holkeri made the world's first GSM call on 1 July 1991, calling Kaarina Suonio (deputy mayor of the city of Tampere) using a network built by Nokia and Siemens and operated by Radiolinja. The following year saw the sending of the first short messaging service (SMS or "text message") message, and Vodafone UK and Telecom Finland signed the first international roaming agreement. Enhancements Work began in 1991 to expand the GSM standard to the 1800 MHz frequency band and the first 1800 MHz network became operational in the UK by 1993, called and DCS 1800. Also that year, Telecom Australia became the first network operator to deploy a GSM network outside Europe and the first practical hand-held GSM mobile phone became available. In 1995 fax, data and SMS messaging services were launched commercially, the first 1900 MHz GSM network became operational in the United States and GSM subscribers worldwide exceeded 10 million. In the same year, the GSM Association formed. Pre-paid GSM SIM cards were launched in 1996 and worldwide GSM subscribers passed 100 million in 1998. In 2000 the first commercial GPRS services were launched and the first GPRS-compatible handsets became available for sale. In 2001, the first UMTS (W-CDMA) network was launched, a 3G technology that is not part of GSM. Worldwide GSM subscribers exceeded 500 million. In 2002, the first Multimedia Messaging Service (MMS) was introduced and the first GSM network in the 800 MHz frequency band became operational. EDGE services first became operational in a network in 2003, and the number of worldwide GSM subscribers exceeded 1 billion in 2004. By 2005 GSM networks accounted for more than 75% of the worldwide cellular network market, serving 1.5 billion subscribers. In 2005, the first HSDPA-capable network also became operational. The first HSUPA network launched in 2007. (High-Speed Packet Access (HSPA) and its uplink and downlink versions are 3G technologies, not part of GSM.) Worldwide GSM subscribers exceeded three billion in 2008. Adoption The GSM Association estimated in 2011 that technologies defined in the GSM standard served 80% of the mobile market, encompassing more than 5 billion people across more than 212 countries and territories, making GSM the most ubiquitous of the many standards for cellular networks. GSM is a second-generation (2G) standard employing time-division multiple-access (TDMA) spectrum-sharing, issued by the European Telecommunications Standards Institute (ETSI). The GSM standard does not include the 3G Universal Mobile Telecommunications System (UMTS), code-division multiple access (CDMA) technology, nor the 4G LTE orthogonal frequency-division multiple access (OFDMA) technology standards issued by the 3GPP. GSM, for the first time, set a common standard for Europe for wireless networks. It was also adopted by many countries outside Europe. This allowed subscribers to use other GSM networks that have roaming agreements with each other. The common standard reduced research and development costs, since hardware and software could be sold with only minor adaptations for the local market. Discontinuation Telstra in Australia shut down its 2G GSM network on 1 December 2016, the first mobile network operator to decommission a GSM network. The second mobile provider to shut down its GSM network (on 1 January 2017) was AT&T Mobility from the United States. 
Optus in Australia completed the shut down of its 2G GSM network on 1 August 2017, part of the Optus GSM network covering Western Australia and the Northern Territory had earlier in the year been shut down in April 2017. Singapore shut down 2G services entirely in April 2017. Technical details Network structure The network is structured into several discrete sections: Base station subsystem – the base stations and their controllers Network and Switching Subsystem – the part of the network most similar to a fixed network, sometimes just called the "core network" GPRS Core Network – the optional part which allows packet-based Internet connections Operations support system (OSS) – network maintenance Base-station subsystem GSM utilizes a cellular network, meaning that cell phones connect to it by searching for cells in the immediate vicinity. There are five different cell sizes in a GSM network: macro micro pico femto, and umbrella cells The coverage area of each cell varies according to the implementation environment. Macro cells can be regarded as cells where the base-station antenna is installed on a mast or a building above average rooftop level. Micro cells are cells whose antenna height is under average rooftop level; they are typically deployed in urban areas. Picocells are small cells whose coverage diameter is a few dozen meters; they are mainly used indoors. Femtocells are cells designed for use in residential or small-business environments and connect to a telecommunications service provider's network via a broadband-internet connection. Umbrella cells are used to cover shadowed regions of smaller cells and to fill in gaps in coverage between those cells. Cell horizontal radius varies – depending on antenna height, antenna gain, and propagation conditions – from a couple of hundred meters to several tens of kilometers. The longest distance the GSM specification supports in practical use is . There are also several implementations of the concept of an extended cell, where the cell radius could be double or even more, depending on the antenna system, the type of terrain, and the timing advance. GSM supports indoor coverage – achievable by using an indoor picocell base station, or an indoor repeater with distributed indoor antennas fed through power splitters – to deliver the radio signals from an antenna outdoors to the separate indoor distributed antenna system. Picocells are typically deployed when significant call capacity is needed indoors, as in shopping centers or airports. However, this is not a prerequisite, since indoor coverage is also provided by in-building penetration of radio signals from any nearby cell. GSM carrier frequencies GSM networks operate in a number of different carrier frequency ranges (separated into GSM frequency ranges for 2G and UMTS frequency bands for 3G), with most 2G GSM networks operating in the 900 MHz or 1800 MHz bands. Where these bands were already allocated, the 850 MHz and 1900 MHz bands were used instead (for example in Canada and the United States). In rare cases the 400 and 450 MHz frequency bands are assigned in some countries because they were previously used for first-generation systems. For comparison, most 3G networks in Europe operate in the 2100 MHz frequency band. For more information on worldwide GSM frequency usage, see GSM frequency bands. Regardless of the frequency selected by an operator, it is divided into timeslots for individual phones. This allows eight full-rate or sixteen half-rate speech channels per radio frequency. 
These eight radio timeslots (or burst periods) are grouped into a TDMA frame. Half-rate channels use alternate frames in the same timeslot. The channel data rate for all is and the frame duration is The transmission power in the handset is limited to a maximum of 2 watts in and in . Voice codecs GSM has used a variety of voice codecs to squeeze 3.1 kHz audio into between 7 and 13 kbit/s. Originally, two codecs, named after the types of data channel they were allocated, were used, called Half Rate (6.5 kbit/s) and Full Rate (13 kbit/s). These used a system based on linear predictive coding (LPC). In addition to being efficient with bitrates, these codecs also made it easier to identify more important parts of the audio, allowing the air interface layer to prioritize and better protect these parts of the signal. GSM was further enhanced in 1997 with the enhanced full rate (EFR) codec, a 12.2 kbit/s codec that uses a full-rate channel. Finally, with the development of UMTS, EFR was refactored into a variable-rate codec called AMR-Narrowband, which is high quality and robust against interference when used on full-rate channels, or less robust but still relatively high quality when used in good radio conditions on half-rate channel. Subscriber Identity Module (SIM) One of the key features of GSM is the Subscriber Identity Module, commonly known as a SIM card. The SIM is a detachable smart card containing the user's subscription information and phone book. This allows the user to retain their information after switching handsets. Alternatively, the user can change operators while retaining the handset simply by changing the SIM. Phone locking Sometimes mobile network operators restrict handsets that they sell for exclusive use in their own network. This is called SIM locking and is implemented by a software feature of the phone. A subscriber may usually contact the provider to remove the lock for a fee, utilize private services to remove the lock, or use software and websites to unlock the handset themselves. It is possible to hack past a phone locked by a network operator. In some countries and regions (e.g., Bangladesh, Belgium, Brazil, Canada, Chile, Germany, Hong Kong, India, Iran, Lebanon, Malaysia, Nepal, Norway, Pakistan, Poland, Singapore, South Africa, Sri Lanka, Thailand) all phones are sold unlocked due to the abundance of dual SIM handsets and operators. GSM security GSM was intended to be a secure wireless system. It has considered the user authentication using a pre-shared key and challenge-response, and over-the-air encryption. However, GSM is vulnerable to different types of attack, each of them aimed at a different part of the network. The development of UMTS introduced an optional Universal Subscriber Identity Module (USIM), that uses a longer authentication key to give greater security, as well as mutually authenticating the network and the user, whereas GSM only authenticates the user to the network (and not vice versa). The security model therefore offers confidentiality and authentication, but limited authorization capabilities, and no non-repudiation. GSM uses several cryptographic algorithms for security. The A5/1, A5/2, and A5/3 stream ciphers are used for ensuring over-the-air voice privacy. A5/1 was developed first and is a stronger algorithm used within Europe and the United States; A5/2 is weaker and used in other countries. 
Serious weaknesses have been found in both algorithms: it is possible to break A5/2 in real-time with a ciphertext-only attack, and in January 2007, The Hacker's Choice started the A5/1 cracking project with plans to use FPGAs that allow A5/1 to be broken with a rainbow table attack. The system supports multiple algorithms so operators may replace that cipher with a stronger one. Since 2000, different efforts have been made in order to crack the A5 encryption algorithms. Both A5/1 and A5/2 algorithms have been broken, and their cryptanalysis has been revealed in the literature. As an example, Karsten Nohl developed a number of rainbow tables (static values which reduce the time needed to carry out an attack) and have found new sources for known plaintext attacks. He said that it is possible to build "a full GSM interceptor...from open-source components" but that they had not done so because of legal concerns. Nohl claimed that he was able to intercept voice and text conversations by impersonating another user to listen to voicemail, make calls, or send text messages using a seven-year-old Motorola cellphone and decryption software available for free online. GSM uses General Packet Radio Service (GPRS) for data transmissions like browsing the web. The most commonly deployed GPRS ciphers were publicly broken in 2011. The researchers revealed flaws in the commonly used GEA/1 and GEA/2 (standing for GPRS Encryption Algorithms 1 and 2) ciphers and published the open-source "gprsdecode" software for sniffing GPRS networks. They also noted that some carriers do not encrypt the data (i.e., using GEA/0) in order to detect the use of traffic or protocols they do not like (e.g., Skype), leaving customers unprotected. GEA/3 seems to remain relatively hard to break and is said to be in use on some more modern networks. If used with USIM to prevent connections to fake base stations and downgrade attacks, users will be protected in the medium term, though migration to 128-bit GEA/4 is still recommended. The first public cryptanalysis of GEA/1 and GEA/2 (also written GEA-1 and GEA-2) was done in 2021. It concluded that although using a 64-bit key, the GEA-1 algorithm actually provides only 40 bits of security, due to a relationship between two parts of the algorithm. The researchers found that this relationship was very unlikely to have happened if it wasn't intentional. This may have been done in order to satisfy European controls on export of cryptographic programs. Standards information The GSM systems and services are described in a set of standards governed by ETSI, where a full list is maintained. GSM open-source software Several open-source software projects exist that provide certain GSM features: gsmd daemon by Openmoko OpenBTS develops a Base transceiver station The GSM Software Project aims to build a GSM analyzer for less than $1,000 OsmocomBB developers intend to replace the proprietary baseband GSM stack with a free software implementation YateBTS develops a Base transceiver station Issues with patents and open source Patents remain a problem for any open-source GSM implementation, because it is not possible for GNU or any other free software distributor to guarantee immunity from all lawsuits by the patent holders against the users. Furthermore, new features are being added to the standard all the time which means they have patent protection for a number of years. 
The original GSM implementations from 1991 may now be entirely free of patent encumbrances, however patent freedom is not certain due to the United States' "first to invent" system that was in place until 2012. The "first to invent" system, coupled with "patent term adjustment" can extend the life of a U.S. patent far beyond 20 years from its priority date. It is unclear at this time whether OpenBTS will be able to implement features of that initial specification without limit. As patents subsequently expire, however, those features can be added into the open-source version. , there have been no lawsuits against users of OpenBTS over GSM use. See also Cellular network Enhanced Data Rates for GSM Evolution (EDGE) Enhanced Network Selection (ENS) GSM forwarding standard features codes – list of call forward codes working with all operators and phones GSM frequency bands GSM modem GSM services Cell Broadcast GSM localization Multimedia Messaging Service (MMS) NITZ Network Identity and Time Zone Wireless Application Protocol (WAP) GSM-R (GSM-Railway) GSM USSD codes – Unstructured Supplementary Service Data: list of all standard GSM codes for network and SIM related functions Handoff High-Speed Downlink Packet Access (HSDPA) International Mobile Equipment Identity (IMEI) International Mobile Subscriber Identity (IMSI) Long Term Evolution (LTE) MSISDN Mobile Subscriber ISDN Number Nordic Mobile Telephone (NMT) ORFS Personal communications network (PCN) RTP audio video profile Simulation of GSM networks Standards Comparison of mobile phone standards GEO-Mobile Radio Interface GSM 02.07 – Cellphone features GSM 03.48 – Security mechanisms for the SIM application toolkit Intelligent Network Parlay X RRLP – Radio Resource Location Protocol Um interface Visitors Location Register (VLR) References Further reading External links GSM Association—Official industry trade group representing GSM network operators worldwide 3GPP—3G GSM standards development group LTE-3GPP.info: online GSM messages decoder fully supporting all 3GPP releases from early GSM to latest 5G Telecommunications-related introductions in 1991 GSM standard
3025266
https://en.wikipedia.org/wiki/Ternary%20computer
Ternary computer
A ternary computer (also called trinary computer) is one that uses ternary logic (i.e., base 3) instead of the more common binary system (i.e., base 2) in its calculations. This means it uses trits (ternary digits) rather than the bits used by most computers. Types of states Ternary computing deals with three discrete states, but the ternary digits themselves can be defined in different ways: in unbalanced ternary the three digits are 0, 1 and 2, while in balanced ternary they are −1, 0 and +1. Both schemes are described below. History One early calculating machine, built entirely from wood by Thomas Fowler in 1840, operated in balanced ternary. The first modern, electronic ternary computer, Setun, was built in 1958 in the Soviet Union at the Moscow State University by Nikolay Brusentsov, and it had notable advantages over the binary computers that eventually replaced it, such as lower electricity consumption and lower production cost. In 1970 Brusentsov built an enhanced version of the computer, which he called Setun-70. In the United States, the ternary computing emulator Ternac working on a binary machine was developed in 1973. The ternary computer QTC-1 was developed in Canada. Balanced ternary Ternary computing is commonly implemented in terms of balanced ternary, which uses the three digits −1, 0, and +1. The negative value of any balanced ternary digit can be obtained by replacing every + with a − and vice versa. It is easy to subtract a number by inverting the + and − digits and then using normal addition. Balanced ternary can express negative values as easily as positive ones, without the need for a leading negative sign as with unbalanced numbers. These advantages make some calculations more efficient in ternary than binary. Since digit signs are mandatory and nonzero digits all have magnitude 1, a notation that drops the '1's and uses only zero and the + and − signs is more concise than one in which the 1's are written out. Unbalanced ternary Ternary computing can also be implemented in terms of unbalanced ternary, which uses the three digits 0, 1, 2. In one semiconductor implementation, the digits 0 and 1 are represented as in an ordinary binary device, while the third state (2) is represented using leakage current. The world's first unbalanced ternary semiconductor design on a large wafer was implemented by the research team led by Kim Kyung-rok at Ulsan National Institute of Science and Technology in South Korea, which may help the development of low-power, high-performance microchips in the future. The research was selected as one of the future projects funded by Samsung in 2017, and the results were published on July 15, 2019. Potential future applications With the advent of mass-produced binary components for computers, ternary computers have diminished in significance. However, Donald Knuth argues that they will be brought back into development in the future to take advantage of ternary logic's elegance and efficiency. One possible way this could happen is by combining an optical computer with the ternary logic system. A ternary computer using fiber optics could use dark as 0 and two orthogonal polarizations of light as +1 and −1. IBM also reports infrequently on ternary computing topics (in its papers), but it is not actively engaged in it. The Josephson junction has been proposed as a balanced ternary memory cell, using circulating superconducting currents, either clockwise, counterclockwise, or off. "The advantages of the proposed memory circuit are capability of high speed computation, low power consumption and very simple construction with fewer elements due to the ternary operation." In 2009, a quantum computer was proposed which uses a quantum ternary state, a qutrit, rather than the typical qubit.
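The balanced ternary representation described above is easy to experiment with in software. The following Python function is a hypothetical illustration (it is not code from Setun or any other machine mentioned here); it converts an integer to balanced ternary digits and shows that negation is simply a digit-wise swap of + and −.

    # Illustrative balanced-ternary conversion (digits -1, 0, +1).
    # Hypothetical example code, unrelated to any machine discussed above.
    def to_balanced_ternary(n: int) -> list:
        """Return the balanced ternary digits of n, least significant first."""
        if n == 0:
            return [0]
        digits = []
        while n != 0:
            n, r = divmod(n, 3)
            if r == 2:        # fold the ordinary-ternary digit 2 into -1, carry 1
                r = -1
                n += 1
            digits.append(r)
        return digits

    def negate(digits):
        """Negation is just swapping the + and - digits."""
        return [-d for d in digits]

    # 8 is +0- in balanced ternary (9 - 1), and -8 is simply -0+.
    print(to_balanced_ternary(8))          # [-1, 0, 1]  (least significant first)
    print(negate(to_balanced_ternary(8)))  # [1, 0, -1]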
Ternary computers in popular culture In Robert A. Heinlein's novel Time Enough for Love, the sapient computers of Secundus, the planet on which part of the framing story is set, including Minerva, use an unbalanced ternary system. Minerva, in reporting a calculation result, says "three hundred forty one thousand six hundred forty... the original ternary readout is unit pair pair comma unit nil nil comma unit pair pair comma unit nil nil point nil". Virtual Adepts in the roleplaying game Mage: The Ascension use ternary computers. In Howard Tayler's webcomic Schlock Mercenary, every modern computer is a ternary computer. AIs use the extra digit as "maybe" in boolean (true/false) operations, thus having a much more intimate understanding of fuzzy logic than is possible with binary computers. The Conjoiners, in Alastair Reynolds' Revelation Space series, use ternary logic to program their computers and nanotechnology devices. In Stanisław Lem's short story "The Hunt", the robot hunted by the protagonist is called Setaur, Self-programming Electronic Ternary Automaton Racemic. Tasen and Komato aliens, in the computer game Iji, use ternary logic to program their nanotechnology. See also Radix economy Ternary numeral system Skew binary number system Ternary signal Flip-flap-flop Ternary SRAM Decimal computer References Further reading External links The ternary calculating machine of Thomas Fowler 3niti – Collaboration for Open Ternary Computer Development Development of ternary computers at Moscow State University Tunguska – Ternary Operating System emulator SBTCVM – Open-source balanced ternary emulation project Classes of computers Russian inventions Soviet inventions
11664080
https://en.wikipedia.org/wiki/SimCity%20%282013%20video%20game%29
SimCity (2013 video game)
SimCity is a city-building and urban planning simulation massively multiplayer online game developed by Maxis Emeryville and published by Electronic Arts. Released for Microsoft Windows in early March 2013, it is the first major installment in the SimCity series since the release of SimCity 4 a decade before. A macOS version was released on August 29, 2013. The game is considered a reboot of the SimCity series. Players can create a settlement that can grow into a city by zoning land for residential, commercial, or industrial development, as well as building and maintaining public services, transport and utilities. SimCity uses a new engine called GlassBox that allows for more detailed simulation than previous games. Throughout its development, SimCity received critical acclaim for its new engine and reimagined gameplay; however, publications cautioned the game's use of a persistent Internet connection, with which it stores saved games and allows players to share resources. Prior to release, SimCity received positive reviews; however, the game's launch was received negatively due to widespread technical and gameplay problems related to the mandatory network connection for playing and saving game data. These issues included network outages, problems with saving progress and difficulty connecting to the game's servers. As a result, some reviewers were unable to review the game, labeling the launch a "disaster" and the game "unplayably broken", urging players to avoid purchasing the game until the issues were resolved. The poor performance of SimCity may have been responsible for the 2015 closure of Maxis' Emeryville studios, and the end of the franchise. Gameplay Along with many of the cosmetic changes (such as up-to-date 3D graphics), SimCity uses the new GlassBox engine. "We try to build what you would expect to see, and that's the game," explains system architect Andrew Willmott, meaning that visual effects such as traffic, economic troubles, and pollution will be more obvious. Two other new features are a multiplayer component and finite resources. Unlike previous games in the series, the game has non-orthogonal and curved roads and zoning areas that can conform to different road types. Types of zones include residential, commercial and industrial. The density is driven by the types of roads built around these zones. Cities in a region are connected to each other via predefined regional networks such as highways, railways, and waterways. Elements such as traffic and air pollution are visible flowing between cities. Cities can trade resources or share public services with their neighbors like garbage collection or health care. Cities can also pool their collective wealth and resources to build a "great work" to provide benefits for the entire region like a massive solar power plant or an international airport. The larger the region, the higher is the number of cities and great works that can be built. Terraforming – Creative Director Ocean Quigley stated that all of the terraforming in the game is going to be at the civil engineering scale, and will be the natural consequences of laying out roads, developing zones, and placing buildings. Transportation options – There are a number of options that are included, such as boats, buses, trams, and planes. Customization – Maxis has indicated that the game will support modding, but will not do so at launch like previous versions. Modules in SimCity are attachable structures that can add functionality to existing user-placeable buildings. 
One example is the extra garage for fire stations, which can provide additional fire trucks for increased protection coverage Another example is the Department of Safety for the City Hall, which unlocks more advanced medical, police and fire department buildings. The user interface, which was inspired by Google Maps and infographics, was designed to convey information to the player more clearly than in previous SimCity games. Animations and color-coded visual cues that represent how efficiently a city functions are only presented when needed at any given moment. For instance, opening up the water tower instantly changes the landscape to a clear world where the density of water is recognizable, and clicking on the sewage tab will immediately show how the waste of the citizens is flowing, and where the system is over capacity. Some of the other visualized data include air pollution, power distribution, police coverage, and zones. Many resources in the game are finite. Some are renewable, such as ground water. Lead gameplay engineer Dan Moskowitz stated, "If you've built up an entire city on the economic basis of extracting a certain resource, when that resource runs out your economy will collapse." Different from some previous SimCity titles, each type of zone (residential, commercial, and industrial) is not divided into density categories. Instead the density of the roads next to them determines the type of buildings that will be created there. This means that there is only one of each zone type, and density of the buildings are determined by the density of the roads. Roads in SimCity are one of the most fundamental elements of the mechanics. Unlike previous SimCity games, roads carry water, power, and sewage. There are also many new tools for drawing roads. They include a straight line tool, one for making rectangular road squares, one for making sweeping arcs, one for making circles, and one for making free-form roads. There is also a more diverse range of roads to choose from. Starting at dirt roads and going up to six lane avenues with street car tracks, the density of the roads determines the density of the buildings next to them, so dirt roads will only develop low density buildings. There are two different categories of roads, streets and avenues. Streets are 24 meters wide and avenues are 48 meters wide. Since all streets are the same width, a dirt road can be upgraded to a high density street. In order to upgrade a street to an avenue, one would need to fully demolish the old street and replace it with a larger avenue. When high and low capacity roads intersect, the higher density roads have the right-of-way, thus stop lights and stop signs will be automatically placed. In order to space the roads so there will be enough room for buildings to develop, road guides are shown when hovering over an existing road. Players will be able to specialize cities on certain industries, such as manufacturing, tourism, education, or others. Each have distinct appearances, simulation behavior, and economic strategies. Players have the option to heavily specialize on one or build multiple specializations in any given city for diversity. The game will feature a simulated global economy. Prices of key resources like oil or food will fluctuate depending on the game world's supply and demand. In particular, if players all over the world are predominantly selling drilled oil from within their game onto the global market, this will drive the price for this resource down. 
Conversely, a resource that has experienced very little exposure on the world market will be a scarce resource, driving the price up. Multiplayer This version of SimCity is the first to feature full online play since Maxis's SimCity 2000 Network Edition, allowing for regions to house multiple cities from different players. Regions can alternatively be set to private or the game switched to an offline single-player mode for solo play. SimCity requires players to be logged into EA's Origin service to play the game, including when playing in single-player mode. An active Internet connection was required every time the game was launched and had to be maintained throughout gameplay at the time of release. The connection is asynchronous, so any brief network disturbance will not interrupt the gameplay though outages of longer than 19 minutes, as an editor posted on Kotaku, will cause loss of gamestate when playing online. Cities in a region can share or sell resources, and work together to build "Great Works", such as an Arcology. Development Prior to its announcement, the German magazine GameStar leaked concept art. Soon thereafter, a pre-rendered trailer was leaked. The official announcement took place on March 6, 2012, at the Game Developers Conference. Initially it was revealed that the game would be available for the Windows platform, and a later macOS edition was confirmed. EA showcased two new trailers for the game at the Electronic Entertainment Expo 2012, showcasing in-game graphics for the first time. In August 2012, applicants were allowed to sign up to test closed beta versions of the game that were later released in January and February 2013, in order to perform load testing on the game servers. SimCity creative director Ocean Quigley confirmed that an macOS version was in development, but would not be released at the same time as the Microsoft Windows version. A Maxis graphics engineer had earlier commented that the purchase of the Windows version through the Origin platform will entitle access to the future macOS version. Game engine Maxis developed the game using a new simulation engine called GlassBox, which takes a different approach from previous simulation games. Those games first simulated high-level statistics and then created graphic animations to represent that data. The GlassBox Engine replaces those statistics with agents, simulation units that represent objects like water, power, and workers; each graphic animation is directly linked to an agent's activity. For example, rather than simply displaying a traffic jam animation to represent a simulated traffic flow problem, traffic jams are instead produced dynamically by masses of Sim agents that simulate travel to and from work. A four-part video has been released featuring Dan Moskowitz, the lead gameplay engineer, talking about the engine simulation behavior. The citizens in the game are also agents and do not lead realistic lives; they go to work at the first job they can find and they go home to the first empty home they find. After the release of the game, modders created mods that enabled offline play and access to debug developer tools. On January 9, 2014, Maxis published its policy on mods, in which they allow re-skinning and building creation but not mods that change the gameplay. Audio The game's audio is bound to the pulse of the simulation. 
When a building is running a simulation rule like generating power for example, its driving music and sound effects that are synchronized to the overall beat of the simulation. The audio is telling the player what the simulation is doing. Audio Director Kent Jolly stated that cars in the game are tracked individually. When a car leaves an intersection, the simulator plays a sound of a car pulling away. The sound also changes based on the speed of the game. As cars go faster, the audio is matched to what the player sees, while remaining true to the actual traffic. Chris Tilton is the composer of the game's orchestral score. The music subtly adjusts to the player's experience based on various game states. An example of this is when the view is zoomed out, the player will hear a fuller version of the score. When zoomed in, certain elements of the tracks are taken away. This is done to help make room for all the activity going on in the player's city. The music tracks are also written with population in mind, and the game exposes the full playlist as the player's city develops and grows. Tilton sought to reinvent SimCitys music and not rehash the musical sensibilities of previous games. Release SimCity was released on March 5, 2013, in North America, March 6 in Europe, Australia and Japan, and March 7 in the UK. The retail release of the game came in three editions: the standard edition; the Limited Edition which contained the Heroes & Villains and Plumbob Park DLC sets; and the Collector's Edition which contained the Limited Edition contents in a special steelbook as well as a German City DLC set for Germany, a Paris City DLC set for France and a British City DLC set for other countries. SimCity was also made available in a Digital Deluxe Edition on Origin which contained the three European City DLC sets; a separate Digital Deluxe Upgrade Pack was issued for owners of the standard and Limited editions. The initial release of SimCity in North America suffered multiple severe issues, particularly regarding the game's requirement for a persistent Internet connection. After the game was made available for purchase through EA's Origin delivery service, the high volume of users attempting to download and connect to EA's game servers caused network outages. Players reported experiencing frequent problems during gameplay due to the outages such as long loading times, disconnections, crashing, and loss of saved game data. The server problems and negative feedback led some publications to refer to the launch as "disastrous" and others have compared the launch unfavorably to that of Diablo III, which experienced similar problems when it was first released. The issues caused online retailer Amazon.com to temporarily withdraw the downloadable version of SimCity from its marketplace, citing customer complaints. It was also discovered that there were several issues with the GlassBox engine, such as traffic taking the shortest route instead of the route with the most available capacity, and sims not living persistent lives but rather going to the nearest available workplace for work and nearest available house after work. Post-release EA responded to server issues by adding additional servers and developing a server patch that disables "non-critical gameplay features [including] leaderboards, achievements and region filters." 
On the evening of March 7, Maxis general manager Lucy Bradshaw issued a statement in response to the launch problems, stating that more servers would be added over the weekend, thousands of players were playing and "more than 700,000 cities have been built by our players in just 24 hours". She went on to acknowledge that "many are experiencing server instability" and that "players across Europe and Asia are experiencing the same frustration". She confirmed that the number of servers would be increased stating "We added servers today, and there will be several more added over the weekend." Senior producer Kip Katsarelis commented that the game servers were constantly at maximum capacity, partly due to the large number of players connected for extended periods of time, which has made it difficult for new users to connect: "We added more servers to accommodate the launch in [Australia, Japan, and Europe]... our plan is to continue to bring more servers online until we have enough to meet the demand, increase player capacity and let more people through the gates and into the game." In an article about "games as a service", Nathan Grayson from Rock, Paper, Shotgun said that the situation was unacceptable and that EA was handling the situation as well as could be expected, but the problem was that they had damaged the idea of "games as a service" and lamented the fact that games publishers hadn't learned from previous similar launch failures: "this just keeps on happening. ... servers have gone toe-to-toe with day-one stampedes in much the same fashion as a turtle against an 18-wheeler: ... Then nature runs its course, and developers and publishers alike scramble to glue one billion bits of finely pulped turtle back together again," and added "A strong service – the kind people latch onto and ultimately demand as the norm – doesn't just react." On March 8, 2013, EA suspended some of SimCitys online marketing campaigns because of the game's ongoing technical problems. EA has stated it will not be offering refunds for users. In a blog post on March 8, Bradshaw gave an update on the server situation, reporting that the issues had improved and server space had expanded, but acknowledged that some users were still suffering stability problems. She also explained the reason for the failure: "So what went wrong? The short answer is: a lot more people logged on than we expected. More people played and played in ways we never saw in the beta" and called their error "dumb". She reported that server capacity had been increased by 120 percent and that errors had dropped by 80 percent. She also promised another update during the weekend. She also announced an offer of a free game from the EA catalogue, saying "I know that's a little contrived – kind of like buying a present for a friend after you did something crummy. But we feel bad about what happened. We're hoping you won't stay mad and that we'll be friends again when SimCity is running at 100 percent." Maxis ruled out making the game able to be played offline, saying it would take a significant amount of engineering work for this to happen. Shortly afterwards, it was discovered that a line of code could be commented out, allowing the game to be played offline indefinitely. In addition, an article published by Rock, Paper, Shotgun highlighted ways in which "They could make an entire region single player offline with absolute ease." 
The launch failures also led to fans of the series filing a petition through We the People on the official White House website calling for "an industry-wide return policy for video games that rely on remote servers and DRM to function properly" which was later covered by mainstream news organizations such as NBC News. To compensate for the issues during the release, EA offered to early purchasers a free game in March 2013. All Origin users who purchased and registered the game before March 23 were allowed to choose a game for free among a small list of titles, including SimCity 4, Battlefield 3, Dead Space 3, Mass Effect 3 and Need for Speed: Most Wanted. EA had maintained a Server Status page in the SimCity website. This allowed players to check the status of SimCity servers around the world. Patches Since the initial release, Maxis has distributed patches to the game via the in-game patching utility that automatically runs when the game is launched on a user's computer. These patches have addressed, though not entirely fixed, among many other things, issues such as traffic intelligence, game-save rollbacks, and emergency vehicle routing. Maxis has continued to update the game to improve gameplay quality and eliminate bugs. A month following the game launch day, Maxis had released 8 official patches, bringing the game to version 1.8. Maxis released a 2.0 patch, purported to make significant improvements to gameplay and curb defects, that was distributed on April 22, 2013. On May 23, 2013, Maxis released patch 4.0, giving players more updates to the game and re-enabling leader boards. Patch 6 was released July 30, 2013, and included the game's second new region added since the original release date. Patch 7.0, a notable update for users, was released on August 22, 2013. This patch included the addition of a bridge and tunnel tool, letting players create overpasses and underpasses. The update also improved traffic, making it smarter. All patches that have been released have included patch notes describing the contents of the patch and can be found on EA's forums. An offline mode was released in Update 10. The game can now be saved to the local disk, and cities are static and do not operate while the player is working on an adjacent city. Complete Edition On November 13, 2014, EA released SimCity: Complete Edition exclusively on Origin. The compilation release contains the Digital Deluxe Edition of SimCity (including the British City, French City and German City Set), the Cities of Tomorrow expansion pack, plus the Amusement Park and Airships DLC sets. It does not include the Launch Arcology DLC set of the Cities of Tomorrow Limited Edition. Reception Pre-release At E3 2012 in June 2012, SimCity won 8 awards out of 24 nominations. On August 23, 2012, SimCity won Gamescom's "Best PC Game" award. The Gamescom jury described the video game as having "fantastic graphics" and "struck the right balance between retaining the trademarks of the old parts and making it interesting for beginners". On December 14, 2012, the SimCity development team ran a questions-and-answers session on the Internet community Reddit where they received criticism for the game's DRM mechanisms, which require the user to be persistently connected to Electronic Arts' servers in order to be able to play the game. The video games-focused blog Kotaku also voiced concern over the issue, worrying that Electronic Arts could one day shut down their servers, rendering the game unplayable. 
This prompted a blog response from Bradshaw, in which she defended the always-online component with the comment that "real cities do not exist in a bubble; they share a region and affect one another." She goes on to say that increased connectivity to neighboring cities allows for a better experience, allowing for better trade and wider scale effects such as crime and pollution to keep synchronized across the region. Bradshaw also noted the performance benefit due to the engine using EA's server hardware to assist in gameplay calculations. However, Rock, Paper, Shotgun pointed out after the release that cloud resources were not used to support gameplay computation but simply to support inter-city and social media mechanisms. The information was also reported in the mainstream media Those were confirmed by a change in rhetoric from Bradshaw. After the first beta, EA Management staff discussed Q4 2012 results during which Peter Moore commented on the success of the beta, "SimCity, a completely new version of the treasured classic, includes deep online features. More than 100,000 people played the SimCity beta last weekend, [...] and the critical reception is shaping up well." Critical reception Upon release, SimCity was met with mostly mixed reviews, many of which were downgraded after reviewers received reports of server problems. It received mixed to negative reception soon after, with GameRankings and Metacritic assigning scores of 63.82% and 64/100, respectively. The issues surrounding the launch affected critics' opinions and reviews of the game. Eurogamer, CNET, and IGN delayed their reviews due to being unable to connect to the game servers, and Polygon, which had reviewed the game before the launch, later dropped its 9.5/10 score down to 8/10, then later dropping it again to 4/10 in response to both the issues, and EA's decision to disable gameplay features. Josh Derocher of Destructoid gave a rating of 4/10, saying that despite his enjoyment of the game, "the online dependency, forced multiplayer, and DRM ruin it." Other critics such as Rock, Paper, Shotgun also noted the launch issues with the always-online nature of the game, servers, and cloud save systems. According to Rock, Paper, Shotgun, because a server connection is required even for single-player games, "the game, by its very design, is hideously broken." Leif Johnson writing for GameTrailers gave the game an 8.0/10 stating, "Aside from some issues with its online requirements, bugs, and restrictions on city size, it's still a satisfying and addicting simulator that will grant dozens of hours of entertainment with one well-designed city alone." CNET UK reported on March 6 that review aggregator Metacritic accumulated a user score of 2.0/10 and several critics reported that the product on Amazon.com had an average rating of 1/5 stars. Amazon customers and the press reported problems with path-finding and artificial intelligence, broken economic simulation, multiplayer aspect not working as advertised, and iconic features missing compared to previous installments of the game. SimCity was also criticized for the size of the area the player is given to build a city; critics have noted it to be significantly smaller than what was available in previous games. Maxis responded to this criticism by stating that this was a deliberate compromise to ensure that the game would run smoothly on the majority of users' computers. 
Maxis has acknowledged that city size is a major complaint, but has stated that they are not currently working on an increase in size. However, they have stated that larger areas may appear in an upcoming release or expansion of the game. In October 2013, Maxis stated that due to player feedback, they attempted to implement larger cities through "months of testing," but ultimately decided to abandon the concept as "the system performance challenges [Maxis] encountered would mean that the vast majority of [SimCitys] players wouldn't be able to load, much less play with bigger cities." Commercial SimCity sold over 1.1 million copies in its first two weeks, 54 percent of which were of the download version of the game. As of July 2013, the game has sold over two million copies. Cities of Tomorrow An expansion pack called Cities of Tomorrow was announced on September 19, 2013. It was released on November 12, 2013, and is set fifty years in the future. It features new regions, technology, city specializations, and transportation methods. The new features in Cities of Tomorrow are divided into three categories: "MegaTowers", "Academy" and the "OmegaCo". The MegaTowers are massive buildings built floor by floor with each floor having a specific purpose, being residential, commercial or to provide services like schools, security, power and entertainment. Each floor can provide jobs, services or housing for hundreds of citizens at the same time. The Academy is a futuristic research center that provides a signal called "ControlNet" to power up structures and improvements developed there and the OmegaCo is composed of factories used to produce an elusive commodity only known as "Omega" to increase the profits from residential, commercial and industrial buildings alike and manufacture drones to further improve the coverage of healthcare, police, fire services or just be used by citizens to perform shopping in their places, thus reducing traffic. The expansion also supports "futurization", in which futuristic buildings tend to "futurize" the buildings, roads, and services around them by significantly blending the roads and buildings to simply make them look more futuristic, such as differences in traffic lights (they have a different sprite), turning service cars more futuristic (futurizing a police station will significantly change the cars and architecture), and so on. Buildings that will futurize the vicinity are distinguished with a hexagon pattern at the lower part of the building when viewed in the Construction screen. Cities of Tomorrow was released in three editions: the standard edition, the Limited Edition which contains the Launch Arcology DLC set, and an Origin edition which contains the Skyclops Coaster Crown DLC set. Reception Cities of Tomorrow received mixed reviews from critics. Brett Todd for GameSpot noted that "you're left with a game that hides the same dissatisfying experience under a more attractive surface," calling the expansion "more of the same." Paul Dean for EuroGamer wrote the expansion pack was "heading in the right direction," but "it still doesn't make SimCity a particularly good game." Legacy The disastrous server issues on launch resulted in wider changes to Maxis and EA in the following years. EA moved away from their near-exclusive focus on always-online titles that had been company policy since 2012. This resulted in major changes to The Sims 4, which was in development at that time as an always-online multiplayer title. 
The game was reworked as a single-player title, though files relating to the defunct online systems were still present in the game at launch in 2014. Aside from a reworked 2014 mobile port, the SimCity reboot would mark the end of the franchise, with Maxis Emeryville closing in 2015. The reception of SimCity also persuaded Paradox Interactive to green-light the city-building game Cities: Skylines, which released in 2015. See also Cities: Skylines References External links 2013 video games Electronic Arts games City-building games Massively multiplayer online games Multiplayer and single-player video games MacOS games SimCity Video game reboots Video game remakes Video games developed in the United States Video games with expansion packs Windows games RenderWare games
16630704
https://en.wikipedia.org/wiki/2223%20Sarpedon
2223 Sarpedon
2223 Sarpedon is a dark Jupiter trojan from the Trojan camp, approximately 94 kilometers in diameter. It was discovered on 4 October 1977, by astronomers at the Purple Mountain Observatory near Nanking, China. The D-type asteroid is among the 30 largest Jupiter trojans and has a rotation period of 22.7 hours. It was named after the Lycian hero Sarpedon from Greek mythology. Orbit and classification Sarpedon is orbiting in the trailing Trojan camp, at Jupiter's trailing Lagrangian point, 60° behind its orbit in a 1:1 resonance. It is also a non-family asteroid of the Jovian background population. It orbits the Sun at a distance of 5.2–5.3 AU approximately once every 12 years (4,376 days; semi-major axis of 5.24 AU). Its orbit has an eccentricity of 0.02 and an inclination of 16° with respect to the ecliptic. The body's observation arc begins with its official discovery observation at Nanking. Physical characteristics In the Tholen classification, Sarpedon is similar to a dark D-type asteroid, though with an unusual spectrum (DU). Rotation period In April 1996, a rotational lightcurve of Sarpedon was obtained from photometric observations by Italian astronomer Stefano Mottola at ESO's La Silla Observatory using the Bochum 0.61-metre Telescope. Lightcurve analysis gave a rotation period of 22.741 hours with a brightness amplitude of 0.14 magnitude. A previous observation by Mottola gave a similar period of 22.77 hours from a lower-rated lightcurve. Diameter and albedo According to the surveys carried out by the Infrared Astronomical Satellite IRAS, the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Sarpedon measures between 77.48 and 108.21 kilometers in diameter and its surface has an albedo between 0.027 and 0.051. The Collaborative Asteroid Lightcurve Link adopts the results obtained by IRAS, that is, an albedo of 0.034 and a diameter of 94.63 kilometers based on an absolute magnitude of 9.41. Naming This minor planet was named from Greek mythology after the Lycian hero Sarpedon from the Iliad, who was killed by Patroclus during the Trojan War. The official naming citation was published by the Minor Planet Center on 1 August 1981. References External links Asteroid Lightcurve Database (LCDB), query form (info) Dictionary of Minor Planet Names, Google books Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center 002223 002223 Minor planets named from Greek mythology Named minor planets 002223 19771004
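As a quick consistency check on the orbital elements quoted above, Kepler's third law (for a body orbiting the Sun, the period in years is roughly the semi-major axis in AU raised to the power 3/2) reproduces the quoted period to within a few days; the short Python snippet below is illustrative only.

    # Kepler's third law sanity check for the orbit described above
    # (illustrative only; 5.24 AU is the semi-major axis quoted in the text).
    a_au = 5.24
    period_years = a_au ** 1.5           # ~12.0 years
    period_days = period_years * 365.25  # ~4,380 days, vs. 4,376 days quoted
    print(round(period_years, 2), round(period_days))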
19991513
https://en.wikipedia.org/wiki/Time-of-flight%20camera
Time-of-flight camera
A time-of-flight camera (ToF camera) is a range imaging camera system employing time-of-flight techniques to resolve distance between the camera and the subject for each point of the image, by measuring the round trip time of an artificial light signal provided by a laser or an LED. Laser-based time-of-flight cameras are part of a broader class of scannerless LIDAR, in which the entire scene is captured with each laser pulse, as opposed to point-by-point with a laser beam such as in scanning LIDAR systems. Time-of-flight camera products for civil applications began to emerge around 2000, as the semiconductor processes allowed the production of components fast enough for such devices. The systems cover ranges of a few centimeters up to several kilometers. Types of devices Several different technologies for time-of-flight cameras have been developed. RF-modulated light sources with phase detectors Photonic Mixer Devices (PMD), the Swiss Ranger, and CanestaVision work by modulating the outgoing beam with an RF carrier, then measuring the phase shift of that carrier on the receiver side. This approach has a modular error challenge: measured ranges are modulo the RF carrier wavelength. The Swiss Ranger is a compact, short-range device, with ranges of 5 or 10 meters and a resolution of 176 x 144 pixels. With phase unwrapping algorithms, the maximum uniqueness range can be increased. The PMD can provide ranges up to 60 m. Illumination is pulsed LEDs rather than a laser. CanestaVision developer Canesta was purchased by Microsoft in 2010. The Kinect2 for Xbox One was based on ToF technology from Canesta. Range gated imagers These devices have a built-in shutter in the image sensor that opens and closes at the same rate as the light pulses are sent out. Most time-of-flight 3D sensors are based on this principle invented by Medina. Because part of every returning pulse is blocked by the shutter according to its time of arrival, the amount of light received relates to the distance the pulse has traveled. The distance can be calculated using the equation, z = R (S2 − S1) / 2(S1 + S2) + R / 2 for an ideal camera. R is the camera range, determined by the round trip of the light pulse, S1 the amount of the light pulse that is received, and S2 the amount of the light pulse that is blocked. The ZCam by 3DV Systems is a range-gated system. Microsoft purchased 3DV in 2009. Microsoft's second-generation Kinect sensor was developed using knowledge gained from Canesta and 3DV Systems. Similar principles are used in the ToF camera line developed by the Fraunhofer Institute of Microelectronic Circuits and Systems and TriDiCam. These cameras employ photodetectors with a fast electronic shutter. The depth resolution of ToF cameras can be improved with ultra-fast gating intensified CCD cameras. These cameras provide gating times down to 200ps and enable ToF setup with sub-millimeter depth resolution. Range gated imagers can also be used in 2D imaging to suppress anything outside a specified distance range, such as to see through fog. A pulsed laser provides illumination, and an optical gate allows light to reach the imager only during the desired time period. Direct Time-of-Flight imagers These devices measure the direct time-of-flight required for a single laser pulse to leave the camera and reflect back onto the focal plane array. Also known as "trigger mode", the 3D images captured using this methodology image complete spatial and temporal data, recording full 3D scenes with single laser pulse. 
This allows rapid acquisition and rapid real-time processing of scene information. For time-sensitive autonomous operations, this approach has been demonstrated for autonomous space testing and operation such as used on the OSIRIS-REx Bennu asteroid sample and return mission and autonomous helicopter landing. Advanced Scientific Concepts, Inc. provides application specific (e.g. aerial, automotive, space) Direct TOF vision systems known as 3D Flash LIDAR cameras. Their approach utilizes InGaAs Avalanche Photo Diode (APD) or PIN photodetector arrays capable of imaging a laser pulse at wavelengths from 980 nm to 1600 nm. Components A time-of-flight camera consists of the following components: Illumination unit: It illuminates the scene. For RF-modulated light sources with phase detector imagers, the light has to be modulated at high speeds up to 100 MHz, so only LEDs or laser diodes are feasible. For Direct TOF imagers, a single pulse per frame (e.g. 30 Hz) is used. The illumination normally uses infrared light to make the illumination unobtrusive. Optics: A lens gathers the reflected light and images the environment onto the image sensor (focal plane array). An optical band-pass filter only passes the light with the same wavelength as the illumination unit. This helps suppress non-pertinent light and reduce noise. Image sensor: This is the heart of the TOF camera. Each pixel measures the time the light has taken to travel from the illumination unit (laser or LED) to the object and back to the focal plane array. Several different approaches are used for timing; see Types of devices above. Driver electronics: Both the illumination unit and the image sensor have to be controlled by high speed signals and synchronized. These signals have to be very accurate to obtain a high resolution. For example, if the signals between the illumination unit and the sensor shift by only 10 picoseconds, the distance changes by 1.5 mm. For comparison: current CPUs reach frequencies of up to 3 GHz, corresponding to clock cycles of about 300 ps - the corresponding 'resolution' is only 45 mm. Computation/Interface: The distance is calculated directly in the camera. To obtain good performance, some calibration data is also used. The camera then provides a distance image over some interface, for example USB or Ethernet. Principle The simplest version of a time-of-flight camera uses light pulses or a single light pulse. The illumination is switched on for a very short time, the resulting light pulse illuminates the scene and is reflected by the objects in the field of view. The camera lens gathers the reflected light and images it onto the sensor or focal plane array. Depending upon the distance, the incoming light experiences a delay. As light has a speed of approximately c = 300,000,000 meters per second, this delay is very short: an object 2.5 m away will delay the light by 2 × 2.5 m / c ≈ 16.7 ns for the round trip. For amplitude modulated arrays, the pulse width of the illumination determines the maximum range the camera can handle. With a pulse width of e.g. 50 ns, the range is limited to c × 50 ns / 2 = 7.5 m. These short times show that the illumination unit is a critical part of the system. Only with special LEDs or lasers is it possible to generate such short pulses. The single pixel consists of a photo sensitive element (e.g. a photo diode). It converts the incoming light into a current. In analog timing imagers, connected to the photo diode are fast switches, which direct the current to one of two (or several) memory elements (e.g. a capacitor) that act as summation elements. In digital timing imagers, a time counter that can run at several gigahertz is connected to each photodetector pixel and stops counting when light is sensed. In the diagram of an amplitude modulated array analog timer, the pixel uses two switches (G1 and G2) and two memory elements (S1 and S2). The switches are controlled by a pulse with the same length as the light pulse, where the control signal of switch G2 is delayed by exactly the pulse width. Depending on the delay, only part of the light pulse is sampled through G1 in S1, while the other part is stored in S2. Depending on the distance, the ratio between S1 and S2 changes as depicted in the drawing. Because only small amounts of light hit the sensor within 50 ns, not only one but several thousand pulses are sent out (repetition rate tR) and gathered, thus increasing the signal-to-noise ratio. After the exposure, the pixel is read out and the following stages measure the signals S1 and S2. As the length of the light pulse is defined, the distance can be calculated with the formula D = (c × t0 / 2) × S2 / (S1 + S2), where t0 is the pulse width. In the example, the signals have the following values: S1 = 0.66 and S2 = 0.33. The distance is therefore D = 7.5 m × 0.33 / (0.66 + 0.33) = 2.5 m. In the presence of background light, the memory elements receive an additional part of the signal. This would disturb the distance measurement. To eliminate the background part of the signal, the whole measurement can be performed a second time with the illumination switched off. If the objects are further away than the distance range, the result is also wrong. Here, a second measurement with the control signals delayed by an additional pulse width helps to suppress such objects. Other systems work with a sinusoidally modulated light source instead of the pulse source. For direct TOF imagers, such as 3D Flash LIDAR, a single short pulse from 5 to 10 ns is emitted by the laser. The T-zero event (the time the pulse leaves the camera) is established by capturing the pulse directly and routing this timing onto the focal plane array. T-zero is used to compare the return time of the returning reflected pulse on the various pixels of the focal plane array. By comparing T-zero and the captured returned pulse and comparing the time difference, each pixel accurately outputs a direct time-of-flight measurement. The round trip of a single pulse for 100 meters is 660 ns. With a 10 ns pulse, the scene is illuminated and the range and intensity captured in less than 1 microsecond. Advantages Simplicity In contrast to stereo vision or triangulation systems, the whole system is very compact: the illumination is placed just next to the lens, whereas the other systems need a certain minimum baseline. In contrast to laser scanning systems, no mechanical moving parts are needed. Efficient distance algorithm It is a direct process to extract the distance information out of the output signals of the TOF sensor. As a result, this task uses only a small amount of processing power, again in contrast to stereo vision, where complex correlation algorithms are implemented. After the distance data has been extracted, object detection, for example, is also a straightforward process to carry out because the algorithms are not disturbed by patterns on the object. Speed Time-of-flight cameras are able to measure the distances within a complete scene with a single shot. As the cameras reach up to 160 frames per second, they are ideally suited to be used in real-time applications.
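The range-gated arithmetic from the Principle section above reduces to a few lines of code. The following Python sketch is illustrative only: the function name, its arguments, and the 50 ns pulse width are assumptions made for this example, not part of any particular camera's interface.

    # Illustrative pulsed (range-gated) time-of-flight distance estimate,
    # following the Principle section above. Not vendor code; all names and
    # the 50 ns pulse width are assumptions for this example.
    C = 299_792_458.0  # speed of light, m/s

    def gated_tof_distance(s1: float, s2: float, pulse_width_s: float) -> float:
        """Distance from the light collected in the two gated windows.

        s1: portion collected while the emission-aligned gate G1 is open
        s2: portion collected in gate G2, which opens one pulse width later
        """
        max_range = C * pulse_width_s / 2      # e.g. ~7.5 m for a 50 ns pulse
        return max_range * s2 / (s1 + s2)

    # Worked example from the text: S1 = 0.66, S2 = 0.33 with a 50 ns pulse.
    print(gated_tof_distance(0.66, 0.33, 50e-9))   # ~2.5 m

    # In practice, a second exposure with the illumination switched off is
    # subtracted from S1 and S2 first, to remove the background-light part.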
Disadvantages Background light When using CMOS or other integrating detectors or sensors that use visible or near infra-red light (400 nm - 700 nm), although most of the background light coming from artificial lighting or the sun is suppressed, the pixel still has to provide a high dynamic range. The background light also generates electrons, which have to be stored. For example, the illumination units in many of today's TOF cameras can provide an illumination level of about 1 watt. The Sun has an illumination power of about 1050 watts per square meter, and 50 watts after the optical band-pass filter. Therefore, if the illuminated scene has a size of 1 square meter, the light from the sun is 50 times stronger than the modulated signal. For non-integrating TOF sensors that do not integrate light over time and are using near-infrared detectors (InGaAs) to capture the short laser pulse, direct viewing of the sun is a non-issue because the image is not integrated over time, rather captured within a short acquisition cycle typically less than 1 microsecond. Such TOF sensors are used in space applications and in consideration for automotive applications. Interference In certain types of TOF devices (but not all of them), if several time-of-flight cameras are running at the same time, the TOF cameras may disturb each other's measurements. There exist several possibilities for dealing with this problem: Time multiplexing: A control system starts the measurement of the individual cameras consecutively, so that only one illumination unit is active at a time. Different modulation frequencies: If the cameras modulate their light with different modulation frequencies, their light is collected in the other systems only as background illumination but does not disturb the distance measurement. For Direct TOF type cameras that use a single laser pulse for illumination, because the single laser pulse is short (e.g. 10 nanoseconds), the round trip TOF to and from the objects in the field of view is correspondingly short (e.g. 100 meters = 660 ns TOF round trip). For an imager capturing at 30 Hz, the probability of an interfering interaction is the time that the camera acquisition gate is open divided by the time between laser pulses or approximately 1 in 50,000 (0.66 μs divided by 33 ms). Multiple reflections In contrast to laser scanning systems where a single point is illuminated, the time-of-flight cameras illuminate a whole scene. For a phase difference device (amplitude modulated array), due to multiple reflections, the light may reach the objects along several paths. Therefore, the measured distance may be greater than the true distance. Direct TOF imagers are vulnerable if the light is reflecting from a specular surface. There are published papers available that outline the strengths and weaknesses of the various TOF devices and approaches. Applications Automotive applications Time-of-flight cameras are used in assistance and safety functions for advanced automotive applications such as active pedestrian safety, precrash detection and indoor applications like out-of-position (OOP) detection. Human-machine interfaces and gaming As time-of-flight cameras provide distance images in real time, it is easy to track movements of humans. This allows new interactions with consumer devices such as televisions. Another topic is to use this type of cameras to interact with games on video game consoles. 
The second-generation Kinect sensor originally included with the Xbox One console used a time-of-flight camera for its range imaging, enabling natural user interfaces and gaming applications using computer vision and gesture recognition techniques. Creative and Intel also provide a similar type of interactive gesture time-of-flight camera for gaming, the Senz3D based on the DepthSense 325 camera of Softkinetic. Infineon and PMD Technologies enable tiny integrated 3D depth cameras for close-range gesture control of consumer devices like all-in-one PCs and laptops (Picco flexx and Picco monstar cameras). Smartphone cameras As of 2019, several smartphones include time-of-flight cameras. These are mainly used to improve the quality of photos by providing the camera software with information about foreground and background. The first mobile phone to employ such technology is the LG G3, released in early 2014. Measurement and machine vision Other applications are measurement tasks, e.g. for the fill height in silos. In industrial machine vision, the time-of-flight camera helps to classify and locate objects for use by robots, such as items passing by on a conveyor. Door controls can distinguish easily between animals and humans reaching the door. Robotics Another use of these cameras is the field of robotics: Mobile robots can build up a map of their surroundings very quickly, enabling them to avoid obstacles or follow a leading person. As the distance calculation is simple, only little computational power is used. Earth topography ToF cameras have been used to obtain digital elevation models of the Earth's surface topography, for studies in geomorphology. Brands Active brands ( ESPROS - 3D TOF imager chips, TOF camera and module for automotive, robotics, industrial and IoT applications 3D Flash LIDAR Cameras and Vision Systems by Advanced Scientific Concepts, Inc. for aerial, automotive and space applications DepthSense - TOF cameras and modules, including RGB sensor and microphones by SoftKinetic IRMA MATRIX - TOF camera, used for automatic passenger counting on mobile and stationary applications by iris-GmbH Kinect - hands-free user interface platform by Microsoft for video game consoles and PCs, using time-of-flight cameras in its second generation of sensor devices. pmd - camera reference designs and software (pmd[vision], including TOF modules [CamBoard]) and TOF imagers (PhotonICs) by PMD Technologies 2+3D - High-resolution SXGA (1280×1024) TOF camera developed by startup company odos imaging, integrating conventional image capture with TOF ranging in the same sensor. Based on technology developed at Siemens. Senz3D - TOF camera by Creative and Intel based on DepthSense 325 camera of Softkinetic, used for gaming. 
SICK - 3D industrial TOF cameras (Visionary-T) for industrial applications and software
3D MLI Sensor - TOF imager, modules, cameras, and software by IEE (International Electronics & Engineering), based on modulated light intensity (MLI)
TOFCam Stanley - TOF camera by Stanley Electric
TriDiCam - TOF modules and software; the TOF imager originally developed by Fraunhofer Institute of Microelectronic Circuits and Systems, now developed by the spin out company TriDiCam
Hakvision - TOF stereo camera
Cube eye - ToF Camera and Modules, VGA Resolution, website: www.cube-eye.co.kr

Defunct brands
CanestaVision - TOF modules and software by Canesta (company acquired by Microsoft in 2010)
D-IMager - TOF camera by Panasonic Electric Works
OptriCam - TOF cameras and modules by Optrima (rebranded DepthSense prior to SoftKinetic merger in 2011)
ZCam - TOF camera products by 3DV Systems, integrating full-color video with depth information (assets sold to Microsoft in 2009)
SwissRanger - an industrial TOF-only camera line originally by the Centre Suisse d'Electronique et Microtechnique, S.A. (CSEM), now developed by Mesa Imaging (Mesa Imaging acquired by Heptagon in 2014)
Fotonic - TOF cameras and software powered by Panasonic CMOS chip (Fotonic acquired by Autoliv in 2018)
S.Cube - ToF Camera and Modules by Cube eye

See also
Laser Dynamic Range Imager
Structured-light 3D scanner
Kinect

References

Further reading

Digital cameras
Image sensor technology in computer vision
Emerging technologies
60453147
https://en.wikipedia.org/wiki/Gonzalo%20Navarro
Gonzalo Navarro
Gonzalo Navarro Badino (born June 9, 1969) is a full professor of computer science at the University of Chile and ACM Distinguished Member, whose interests include algorithms and data structures, data compression and text searching. He also participates in the Center for Biotechnology and Bioengineering (CeBiB) and the Millennium Institute for Foundational Research on Data (IMFD).. He obtained his PhD at the University of Chile in 1998 under the supervision of Ricardo Baeza-Yates with the thesis Approximate Text Searching, then worked as a post-doctoral researcher with Esko Ukkonen and Maxime Crochemore. He is one of the most prolific and highly cited researchers in Latin America, having authored the books Flexible Pattern Matching in Strings and Compact Data Structures, around 25 book chapters, over 160 journal articles and over 240 conference papers. He is editor in chief of the ACM Journal of Experimental Algorithmics (JEA) and a member of the editorial board of Information Systems, and has been guest editor of special issues of ACM SIGSPATIAL, the Journal of Discrete Algorithms, Information Systems and Algorithmica. He created the Workshop on Compression, Text and Algorithms (WCTA) in 2005 and co-created the conference SISAP in 2008; has chaired or co-chaired SPIRE 2001, SCCC 2004, SPIRE 2005, SIGIR 2005 (posters), IFIP TCS 2006, SISAP 2008, SISAP 2012, LATIN 2016, SPIRE 2018 and CPM 2018; served on the steering committees of SPIRE, LATIN and SISAP; and has given around 50 invited talks, including 12 plenary talks and 5 tutorials in international conferences. Education He studied for his Licenciate in Informatics (1989–1992) (5 years plus thesis) from Latin American School of Informatics (ESLAI, Argentina). His thesis was: “A Study on Control Structures”. His advisor was Prof. Jorge Aguirre (ESLAI and Universidad de Buenos Aires, Argentina). He studied for his Licenciate in Informatics (1986–1993) (5 years plus thesis), at the Faculty of Exact Sciences, Universidad Nacional de La Plata (UNLP, Argentina). His thesis was: “MediaCore: A Multimedia Interface Composition Toolkit”, Advisor: Prof. Jorge Sanz (IBM Argentina and Almaden Research Center). He received a MSc. in computer science (1994–1995), from Faculty of Physics and Mathematical Sciences, Universidad de Chile with Prof. Ricardo Baeza-Yates (Universidad de Chile) as his advisor. His thesis was: “A Language for Queries on Structure and Contents of Textual Databases”. He received his PhD in computer science (1995–1998), from Faculty of Physics and Mathematical Sciences, Universidad de Chile under advisor: Prof. Ricardo Baeza-Yates (Universidad de Chile). His thesis was: “Approximate Text Searching”. Awards and distinctions 2018: ACM Distinguished Member, distinction given by the Association for Computing Machinery to at most 10% of its members for achieving a significant impact on the computing field. 2016: Article "On compressing and indexing repetitive sequences", with Sebastian Kreft, included in the Virtual Special Issue "40th Anniversary of Theoretical Computer Science -- Top Cited Articles: 1975–2014", which collects the most cited articles of each year. 2016: Highest Cited Paper Award of Elsevier, for the articles "On compressing and indexing repetitive sequences" and "Colored range queries and document retrieval", which are among the 5 most cited papers in Theoretical Computer Science. 
Similar award for the article "DACs: Bringing direct access to variable-length codes", among the 5 most cited in Information Processing and Management, and "Improved Compressed Indexes for Full-Text Document Retrieval", among the 5 most cited in Journal of Discrete Algorithms. 2009: Included in the book "70 Stories of success in Innovation and Science", published by the Ministry of Economy and several government research funding agencies, Chile, 2009. 2008: Award Scopus Chile 2008 in Computer Science, Mathematics and Engineering, awarded by Elsevier to researchers with high scientific productivity, with the support of Conicyt (Chile) 1996: First prize in the III CLEI-UNESCO Contest of Latin American Computer Science MSc. Theses. SPIRE 2001 Although Professor Navarro has organized and participated in a large number of conferences and seminars, his best effort in this direction was without doubt the organization of the 13th International Symposium on String Processing and Information Retrieval (SPIRE 2001), with the support of Ricardo Baeza-Yates, which brought together many professors and students for three days of talks on a boat of the company Skorpios heading to the Laguna San Rafael in Chilean Patagonia. The welcome speech included local tales of pirates and sailors, starting with the sayings neither marry nor depart on a Tuesday (because it brings bad luck) and Tuesday the 13th is a cursed day (with the conference starting on Tuesday, November 13). The conference featured high-quality works and is still known as one of the best of the SPIRE series. References External links https://users.dcc.uchile.cl/~gnavarro/ Living people Argentine computer scientists 1969 births University of Chile alumni National University of La Plata alumni University of Chile faculty
3036763
https://en.wikipedia.org/wiki/884%20Priamus
884 Priamus
884 Priamus is a large Jupiter trojan from the Trojan camp, roughly 100–120 kilometers in diameter. It was discovered on 22 September 1917 by German astronomer Max Wolf at Heidelberg Observatory in southern Germany. The dark D-type asteroid is one of the 20 largest Jupiter trojans and has a rotation period of 6.9 hours. It was named after the Trojan king Priam from Greek mythology.

Orbit and classification
Priamus orbits in the trailing Trojan camp, at Jupiter's Lagrangian point, 60° behind the planet in a 1:1 resonance. It is also a non-family asteroid of the Jovian background population. It orbits the Sun at a distance of 4.5–5.8 AU once every 11 years and 10 months (4,308 days; semi-major axis of 5.18 AU). Its orbit has an eccentricity of 0.12 and an inclination of 9° with respect to the ecliptic. The body's observation arc begins at Heidelberg in November 1917, two months after its official discovery observation.

Naming
This minor planet was named from Greek mythology after Priam (Priamus; Priamos), the king of Troy during the Trojan War. The Jupiter trojans 624 Hektor and 3317 Paris are named after his sons Hector and Paris. The official naming citation was mentioned in The Names of the Minor Planets by Paul Herget in 1955.

Physical characteristics
In the Tholen taxonomy, Priamus is a dark D-type asteroid, the most common spectral type among the Jupiter trojans, of which a few dozen had already been identified in the early Tholen and SMASS (Bus–Binzel) classifications. Priamus has also been characterized as a D-type by the Pan-STARRS survey.

Rotation period
Several rotational lightcurves have been obtained from photometric observations since the 1980s, when Priamus was first observed by William Hartmann (1988), followed by Stefano Mottola (1993). The best-rated result, from July 2010 by Robert Stephens at GMARS and Linda French at Illinois Wesleyan University using the 0.9-meter SMARTS telescope at CTIO in Chile, gave a well-defined rotation period of about 6.9 hours with a consolidated brightness variation between 0.23 and 0.40 in magnitude. In January 1993 and October 2001, two lightcurves were obtained by Stefano Mottola in collaboration with Claes-Ingvar Lagerkvist and Marco Delbo at the Kvistaberg and Pino Torinese observatories, respectively. Another measurement was made by Ukrainian astronomers in August 2010. Between January 2015 and December 2016, photometric observations by Robert Stephens and Daniel Coley in collaboration with Brian Warner at the Center for Solar System Studies, California, gave three concurring periods of 6.854, 6.863 and 6.865 hours.

Diameter and albedo
According to the surveys carried out by the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Priamus measures 101.09 and 119.99 kilometers in diameter and its surface has an albedo of 0.044 and 0.037, respectively. The Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a diameter of 96.29 kilometers based on an absolute magnitude of 8.81.

Notes

References

External links
Asteroid Lightcurve Database (LCDB), query form (info)
Dictionary of Minor Planet Names, Google books
Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center

Discoveries by Max Wolf
Minor planets named from Greek mythology
Named minor planets
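The diameter and period figures quoted above can be cross-checked with the standard magnitude-albedo-diameter conversion and Kepler's third law. A minimal sketch in Python; H = 8.81, the assumed albedo of 0.057 and the 5.18 AU semi-major axis are taken from the article, and the constant 1329 km is the conventional value used in this conversion:

    import math

    # Diameter from absolute magnitude H and geometric albedo p_V:
    #   D [km] = 1329 / sqrt(p_V) * 10 ** (-H / 5)
    H = 8.81          # absolute magnitude (from the article)
    p_v = 0.057       # assumed carbonaceous albedo (from the article)
    d_km = 1329.0 / math.sqrt(p_v) * 10 ** (-H / 5.0)
    print(round(d_km, 2))          # -> ~96.3 km, matching the CALL estimate of 96.29 km

    # Orbital period from Kepler's third law (P in years, a in AU): P = a ** 1.5
    a_au = 5.18
    p_years = a_au ** 1.5
    print(round(p_years, 2))       # -> ~11.8 years
    print(round(p_years * 365.25)) # -> ~4306 days, close to the quoted 4,308 days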
7509927
https://en.wikipedia.org/wiki/IBM%20Rational%20Rose%20XDE
IBM Rational Rose XDE
Rational Rose XDE, an "eXtended Development Environment" for software developers, integrates with Microsoft Visual Studio .NET and Rational Application Developer. The Rational Software division of IBM, which previously produced Rational Rose, wrote this software. With the Rational June 2006 Product Release, IBM withdrew the “XDE” family of products and introduced the Rational Rose family of products as replacements. The Rational Rose family of products is a set of UML modeling tools for software design. The Unified Modeling Language (UML) is the industry-standard language for specifying, visualizing, constructing, and documenting the artifacts of software systems. It simplifies the complex process of software design, creating a "blueprint" for construction of software systems. Rational Rose could also do source-based reverse engineering; the combination of this capability with source generation from diagrams was dubbed roundtrip engineering. A 2007 book noted that other UML tools are also capable of this, the list including Borland Together, ESS-Model, BlueJ, and Fujaba. The Rational Rose family allows integration with legacy integrated development environments or languages. For more modern architectures Rational Software Architect and Rational Software Modeler were developed. These products were created matching and surpassing Rose XDE capabilities to include support for UML 2.x, pattern customization support, the latest programming languages and approaches to software development such as SOA and more powerful data modeling that supports entity-relationship (ER) modeling. A 2003 UML 2 For Dummies book wrote that Rational Rose suite was the "market (and marketing) leader". The UML part was superseded by Rational Software Architect around 2006, with Rational Rose becoming a legacy product. , the ER modelling part (Rational Rose Data Modeler) has been superseded by another IBM product—Rational Data Architect. IBM still sells Rational Rose, listing Visual Studio 2005 and Windows Vista as compatible environment. (see system requirement tab) See also Imagix 4D Rigi list of UML tools References Further reading Rose XDE Data modeling tools UML tools
47012
https://en.wikipedia.org/wiki/Peering
Peering
In computer networking, peering is a voluntary interconnection of administratively separate Internet networks for the purpose of exchanging traffic between the "down-stream" users of each network. Peering is settlement-free, also known as "bill-and-keep," or "sender keeps all," meaning that neither party pays the other in association with the exchange of traffic; instead, each derives and retains revenue from its own customers. An agreement by two or more networks to peer is instantiated by a physical interconnection of the networks, an exchange of routing information through the Border Gateway Protocol (BGP) routing protocol, tacit agreement to norms of conduct and, in some extraordinarily rare cases (0.07%), a formalized contractual document. In 0.02% of cases the word "peering" is used to describe situations where there is some settlement involved. Because these outliers can be viewed as creating ambiguity, the phrase "settlement-free peering" is sometimes used to explicitly denote normal cost-free peering. How peering works The Internet is a collection of separate and distinct networks referred to as autonomous systems, each one consisting of a set of globally unique IP addresses and a unique global BGP routing policy. The interconnection relationships between Autonomous Systems are of exactly two types: Peering - Two networks exchange traffic between their users freely, and for mutual benefit. Transit – One network pays another network for access to the Internet. Therefore, in order for a network to reach any specific other network on the Internet, it must either: Sell transit service to that network or a chain of resellers ending at that network (making them a 'customer'), Peer with that network or with a network which sells transit service to that network, or Buy transit service from any other network (which is then responsible for providing interconnection to the rest of the Internet). The Internet is based on the principle of global or end-to-end reachability, which means that any Internet user can transparently exchange traffic with any other Internet user. Therefore, a network is connected to the Internet if and only if it buys transit, or peers with every other network which also does not purchase transit (which together constitute a "default free zone" or "DFZ"). Motivations for peering Peering involves two networks coming together to exchange traffic with each other freely, and for mutual benefit. This 'mutual benefit' is most often the motivation behind peering, which is often described solely by "reduced costs for transit services". Other less tangible motivations can include: Increased redundancy (by reducing dependence on one or more transit providers). Increased capacity for extremely large amounts of traffic (distributing traffic across many networks). Increased routing control over one's traffic. Improved performance (attempting to bypass potential bottlenecks with a "direct" path). Improved perception of one's network (being able to claim a "higher tier"). Ease of requesting for emergency aid (from friendly peers). Physical interconnections for peering The physical interconnections used for peering are categorized into two types: Public peering – Interconnection utilizing a multi-party shared switch fabric such as an Ethernet switch. Private peering – Interconnection utilizing a point-to-point link between two parties. Public peering Public peering is accomplished across a Layer 2 access technology, generally called a shared fabric. 
At these locations, multiple carriers interconnect with one or more other carriers across a single physical port. Historically, public peering locations were known as network access points (NAPs). Today they are most often called exchange points or Internet exchanges ("IXP"). Many of the largest exchange points in the world can have hundreds of participants, and some span multiple buildings and colocation facilities across a city. Since public peering allows networks interested in peering to interconnect with many other networks through a single port, it is often considered to offer "less capacity" than private peering, but to a larger number of networks. Many smaller networks, or networks which are just beginning to peer, find that public peering exchange points provide an excellent way to meet and interconnect with other networks which may be open to peering with them. Some larger networks utilize public peering as a way to aggregate a large number of "smaller peers", or as a location for conducting low-cost "trial peering" without the expense of provisioning private peering on a temporary basis, while other larger networks are not willing to participate at public exchanges at all. A few exchange points, particularly in the United States, are operated by commercial carrier-neutral third parties, which are critical for achieving cost-effective data center connectivity. Private peering Private peering is the direct interconnection between only two networks, across a Layer 1 or 2 medium that offers dedicated capacity that is not shared by any other parties. Early in the history of the Internet, many private peers occurred across "telco" provisioned SONET circuits between individual carrier-owned facilities. Today, most private interconnections occur at carrier hotels or carrier neutral colocation facilities, where a direct crossconnect can be provisioned between participants within the same building, usually for a much lower cost than telco circuits. Most of the traffic on the Internet, especially traffic between the largest networks, occurs via private peering. However, because of the resources required to provision each private peer, many networks are unwilling to provide private peering to "small" networks, or to "new" networks which have not yet proven that they will provide a mutual benefit. Peering agreement Throughout the history of the Internet, there have been a spectrum of kinds of agreements between peers, ranging from handshake agreements to written contracts as required by one or more parties. Such agreements set forth the details of how traffic is to be exchanged, along with a list of expected activities which may be necessary to maintain the peering relationship, a list of activities which may be considered abusive and result in termination of the relationship, and details concerning how the relationship can be terminated. Detailed contracts of this type are typically used between the largest ISPs, as well as the ones operating in the most heavily regulated economies. As of 2011, such contracts account for less than 0.5% of all peering agreements. History of peering The first Internet exchange point was the Commercial Internet eXchange (CIX), formed by Alternet/UUNET (now Verizon Business), PSI, and CERFNET to exchange traffic without regard for whether the traffic complied with the acceptable use policy (AUP) of the NSFNet or ANS' interconnection policy. The CIX infrastructure consisted of a single router, managed by PSI, and was initially located in Santa Clara, California. 
Paying CIX members were allowed to attach to the router directly or via leased lines. After some time, the router was also attached to the Pacific Bell SMDS cloud. The router was later moved to the Palo Alto Internet Exchange, or PAIX, which was developed and operated by Digital Equipment Corporation (DEC). Because the CIX operated at OSI layer 3, rather than OSI layer 2, and because it was not neutral, in the sense that it was operated by one of its participants rather than by all of them collectively, and it conducted lobbying activities supported by some of its participants and not by others, it would not today be considered an Internet exchange point. Nonetheless, it was the first thing to bear that name. The first exchange point to resemble modern, neutral, Ethernet-based exchanges was the Metropolitan Area Ethernet, or MAE, in Tysons Corner, Virginia. When the United States government de-funded the NSFNET backbone, Internet exchange points were needed to replace its function, and initial governmental funding was used to aid the preexisting MAE and bootstrap three other exchanges, which they dubbed NAPs, or "Network Access Points," in accordance with the terminology of the National Information Infrastructure document. All four are now defunct or no longer functioning as Internet exchange points: MAE-East – Located in Tysons Corner, Virginia, and later relocated to Ashburn, Virginia Chicago NAP – Operated by Ameritech and located in Chicago, Illinois New York NAP – Operated by Sprint and located in Pennsauken, New Jersey San Francisco NAP – Operated by PacBell and located in the Bay Area As the Internet grew, and traffic levels increased, these NAPs became a network bottleneck. Most of the early NAPs utilized FDDI technology, which provided only 100 Mbit/s of capacity to each participant. Some of these exchanges upgraded to ATM technology, which provided OC-3 (155 Mbit/s) and OC-12 (622 Mbit/s) of capacity. Other prospective exchange point operators moved directly into offering Ethernet technology, such as gigabit Ethernet (1,000 Mbit/s), which quickly became the predominant choice for Internet exchange points due to the reduced cost and increased capacity offered. Today, almost all significant exchange points operate solely over Ethernet, and most of the largest exchange points offer 10, 40, and even 100 gigabit service. During the dot-com boom, many exchange point and carrier-neutral colocation providers had plans to build as many as 50 locations to promote carrier interconnection in the United States alone. Essentially all of these plans were abandoned following the dot-com bust, and today it is considered both economically and technically infeasible to support this level of interconnection among even the largest of networks. Depeering By definition, peering is the voluntary and free exchange of traffic between two networks, for mutual benefit. If one or both networks believes that there is no longer a mutual benefit, they may decide to cease peering: this is known as depeering. Some of the reasons why one network may wish to depeer another include: A desire that the other network pay settlement, either in exchange for continued peering or for transit services. A belief that the other network is "profiting unduly" from the no-settlement interconnection. Concern over traffic ratios, which is related to the fair sharing of cost for the interconnection. A desire to peer with the upstream transit provider of the peered network. 
Abuse of the interconnection by the other party, such as pointing default or utilizing the peer for transit.
Instability of the peered network, repeated routing leaks, lack of response to network abuse issues, etc.
The inability or unwillingness of the peered network to provision additional capacity for peering.
The belief that the peered network is unduly peering with one's customers.
Various external political factors (including personal conflicts between individuals at each network).

In some situations, networks which are being depeered have been known to attempt to fight to keep the peering by intentionally breaking the connectivity between the two networks when the peer is removed, either through a deliberate act or an act of omission. The goal is to force the depeering network to have so many customer complaints that they are willing to restore peering. Examples of this include forcing traffic via a path that does not have enough capacity to handle the load, or intentionally blocking alternate routes to or from the other network. Some notable examples of these situations have included:
BBN Planet vs Exodus Communications
PSINet vs Cable & Wireless
AOL Transit Data Network (ATDN) vs Cogent Communications
Teleglobe vs Cogent Communications
France Telecom vs Cogent Communications
France Telecom (Wanadoo) vs Proxad (Free)
Level 3 Communications vs XO Communications
Level 3 Communications vs Cogent Communications
Telecom/Telefónica/Impsat/Prima vs CABASE (Argentina)
Cogent Communications vs TeliaSonera
Sprint-Nextel vs Cogent Communications
SFR vs OVH
The French ISP 'Free' vs YouTube

Modern peering

Donut peering model
The "donut peering" model describes the intensive interconnection of small and medium-sized regional networks that make up much of the Internet. Traffic between these regional networks can be modeled as a toroid, with a core "donut hole" that is poorly interconnected to the networks around it. As detailed above, some carriers attempted to form a cartel of self-described Tier 1 networks, nominally refusing to peer with any networks outside the oligopoly. Seeking to reduce transit costs, connections between regional networks bypass those "core" networks. Data takes a more direct path, reducing latency and packet loss. This also improves resiliency between consumers and content providers via multiple connections in many locations around the world, in particular during business disputes between the core transit providers.

Multilateral peering
The majority of BGP AS-AS adjacencies are the product of multilateral peering agreements, or MLPAs. In multilateral peering, an unlimited number of parties agree to exchange traffic on common terms, using a single agreement to which they each accede. The multilateral peering is typically technically instantiated in a route server or route reflector (which differ from looking glasses in that they serve routes back out to participants, rather than just listening to inbound routes) to redistribute routes via a BGP hub-and-spoke topology, rather than a partial-mesh topology. The two primary criticisms of multilateral peering are that it breaks the shared fate of the forwarding and routing planes, since the layer-2 connection between two participants could hypothetically fail while their layer-2 connections with the route server remained up, and that they force all participants to treat each other with the same, undifferentiated, routing policy.
The primary benefit of multilateral peering is that it minimizes configuration for each peer, while maximizing the efficiency with which new peers can begin contributing routes to the exchange. While optional multilateral peering agreements and route servers are now widely acknowledged to be a good practice, mandatory multilateral peering agreements (MMLPAs) have long been agreed to not be a good practice. Peering locations The modern Internet operates with significantly more peering locations than at any time in the past, resulting in improved performance and better routing for the majority of the traffic on the Internet. However, in the interests of reducing costs and improving efficiency, most networks have attempted to standardize on relatively few locations within these individual regions where they will be able to quickly and efficiently interconnect with their peering partners. Exchange points As of 2021, the largest exchange points in the world are Ponto de Troca de Tráfego Metro São Paulo, in São Paulo, with 2,289 peering networks; OpenIXP in Jakarta, with 1,097 peering networks; and DE-CIX in Frankfurt, with 1,050 peering networks. The United States, with a historically larger focus on private peering and commercial public peering, has much less traffic visible on public peering switch-fabrics compared to other regions that are dominated by non-profit membership exchange points. Collectively, the many exchange points operated by Equinix are generally considered to be the largest, though traffic figures are not generally published. Other important but smaller exchange points include AMS-IX in Amsterdam, LINX and LONAP in London, and NYIIX in New York. URLs to some public traffic statistics of exchange points include: AMS-IX DE-CIX LINX MSK-IX TORIX NYIIX LAIIX TOP-IX Netnod Mix Milano Peering and BGP A great deal of the complexity in the BGP routing protocol exists to aid the enforcement and fine-tuning of peering and transit agreements. BGP allows operators to define a policy that determines where traffic is routed. Three things commonly used to determine routing are local-preference, multi exit discriminators (MEDs) and AS-Path. Local-preference is used internally within a network to differentiate classes of networks. For example, a particular network will have a higher preference set on internal and customer advertisements. Settlement free peering is then configured to be preferred over paid IP transit. Networks that speak BGP to each other can engage in multi exit discriminator exchange with each other, although most do not. When networks interconnect in several locations, MEDs can be used to reference that network's interior gateway protocol cost. This results in both networks sharing the burden of transporting each other's traffic on their own network (or cold potato). Hot-potato or nearest-exit routing, which is typically the normal behavior on the Internet, is where traffic destined to another network is delivered to the closest interconnection point. Law and policy Internet interconnection is not regulated in the same way that public telephone network interconnection is regulated. Nevertheless, Internet interconnection has been the subject of several areas of federal policy in the United States. Perhaps the most dramatic example of this is the attempted MCI Worldcom/Sprint merger. 
In this case, the Department of Justice blocked the merger specifically because of the impact of the merger on the Internet backbone market (thereby requiring MCI to divest itself of its successful "internetMCI" business to gain approval). In 2001, the Federal Communications Commission's advisory committee, the Network Reliability and Interoperability Council recommended that Internet backbones publish their peering policies, something that they had been hesitant to do beforehand. The FCC has also reviewed competition in the backbone market in its Section 706 proceedings which review whether advanced telecommunications are being provided to all Americans in a reasonable and timely manner. Finally, Internet interconnection has become an issue in the international arena under something known as the International Charging Arrangements for Internet Services (ICAIS). In the ICAIS debate, countries underserved by Internet backbones have complained that it is unfair that they must pay the full cost of connecting to an Internet exchange point in a different country, frequently the United States. These advocates argue that Internet interconnection should work like international telephone interconnection, with each party paying half of the cost. Those who argue against ICAIS point out that much of the problem would be solved by building local exchange points. A significant amount of the traffic, it is argued, that is brought to the US and exchanged then leaves the US, using US exchange points as switching offices but not terminating in the US. In some worst-case scenarios, traffic from one side of a street is brought all the way to a distant exchange point in a foreign country, exchanged, and then returned to another side of the street. Countries with liberalized telecommunications and open markets, where competition between backbone providers occurs, tend to oppose ICAIS. See also Autonomous system Default-free zone Interconnect agreement Internet traffic engineering Net neutrality North American Network Operators' Group (NANOG) Vendor-neutral data centre References External links PeeringDB: A free database of peering locations and participants The peering Playbook (PDF): Strategies of peering networks Example Tier 1 Peering Requirements: AT&T (AS7018) Example Tier 1 Peering Requirements: AOL Transit Data Network (AS1668) Example Tier 2 Peering Requirements: Entanet (AS8468) Cybertelecom :: Backbones – Federal Internet Law and Policy How the 'Net works: an introduction into Peering and Transit, Ars Technica Internet architecture Net neutrality
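As an illustration of the local-preference policy described in the "Peering and BGP" section above, the sketch below ranks otherwise-equal BGP routes by the class of neighbor they were learned from. The preference values (200/150/100), the prefix and the neighbor names are illustrative assumptions, not figures from the article or the configuration syntax of any particular router vendor:

    # Assign a BGP local-preference per neighbor class, then pick the best route.
    # Higher local-preference wins; customers > settlement-free peers > paid transit,
    # so traffic uses the cheapest interconnection available.

    LOCAL_PREF = {"customer": 200, "peer": 150, "transit": 100}   # illustrative values

    def best_route(routes):
        """routes: list of (prefix, neighbor_name, neighbor_class) learned via BGP."""
        return max(routes, key=lambda r: LOCAL_PREF[r[2]])

    routes_to_prefix = [
        ("203.0.113.0/24", "transit-provider-A", "transit"),
        ("203.0.113.0/24", "exchange-peer-B", "peer"),
        ("203.0.113.0/24", "customer-C", "customer"),
    ]

    print(best_route(routes_to_prefix))   # the customer-learned route is preferred

Because the customer route carries the highest local-preference, traffic exits toward the revenue-generating interconnection first, then settlement-free peering, and only then paid transit, which is the cost ordering the section describes.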
57576556
https://en.wikipedia.org/wiki/Hack%20Forums
Hack Forums
Hack Forums (often shortened to 'HF') is an Internet forum dedicated to discussions related to hacker culture and computer security. The website ranks as the number one website in the "Hacking" category in terms of web-traffic by the analysis company Alexa Internet. The website has been widely reported as facilitating online criminal activity, such as the case of Zachary Shames, who was arrested for selling keylogging software on Hack Forums in 2013 which was used to steal personal information. Security breaches In June 2011, the hacktivist group LulzSec, as part of a campaign titled "50 days of lulz", breached Hack Forums and released the data they obtained. The leaked data included credentials and personal information of nearly 200,000 registered users. On 27 August 2014, Hack Forums was hacked with a defacement message by an Egyptian hacker, using the online handle "Eg-R1z". On 26 July 2016, Hack Forums administrator ("Omniscient") warned its users of a security breach. In an e-mail he suggested users to change their passwords and enable 2FA. Alleged criminal incidents According to a press release from the U.S. Department of Justice, Zachary Shames developed a keylogger in 2013 that allowed users to steal sensitive information, including passwords and banking credentials, from a victim's computer. Shames developed the keylogger known as "Limitless Logger Pro", which was sold for $35 on Hack Forums. On 12 August 2013, hackers used SSH brute-force to mass target Linux systems with weak passwords. The tools used by hackers were then later posted on Hack Forums. On 15 May 2014, the FBI targeted customers of a popular Remote Administration Tool (RAT) called 'Blackshades'. Blackshades RAT was malware created and sold on Hack Forums. On 14 January 2016, the developer of the MegalodonHTTP Botnet was arrested. MegalodonHTTP included a number of features as "Binary downloading and executing", "Distributed Denial of service (DDoS) attack methods", "Remote Shell", "Antivirus Disabling", "Crypto miner for Bitcoin, Litecoin, Omnicoin and Dogecoin". The malware was sold on Hack Forums. On 22 September 2016, many major websites were forced offline after being hit with “Mirai”, a malware that targeted unsecured Internet of Things (IoT) devices. The source code for Mirai was published on Hack Forums as open-source. In response, on 26 October 2016, Omniscient, the administrator of Hack Forums, removed the DDoS-for-Hire section from the forum permanently. On 21 October 2016, popular websites, including Twitter, Amazon, Netflix, were taken down by a distributed denial-of-service attack. Researchers claimed that the attack was stemmed from contributors on Hack Forums. On Monday, 26 February 2018, Agence France-Presse (AFP) reported that Ukrainian authorities had collared Avalanche cybercrime organizer Gennady Kapkanov, who was allegedly living under a fake passport in Poltava, a city in central Ukraine. He marketed the Remote Administration Tool (NanoCore RAT) and another software licensing program called Net Seal exclusively on Hack Forums. Earlier, in December 2016, the FBI had arrested Taylor Huddleston, the programmer who created NanoCore and announced it first on Hack Forums. On 31 August 2018, several users on Hack Forums claimed to have received an e-mail from Google informing them that the FBI demanded the release of user data linked to the LuminosityLink malware sold on Hack Forums. 
On 29 October 2018, Vice Media reported that Saud Al-Qahtani, advisor to Crown Prince Mohammed bin Salman of Saudi Arabia and one of the alleged masterminds behind the assassination of Jamal Khashoggi, had been heavily active on Hack Forums for many years under the username Nokia2mon2, requesting assistance in hacking victims and purchasing malicious surveillance software. There were rumours among users of Hack Forums that Nokia2mon2 was connected to the government of Saudi Arabia and that he was using the website as a resource to perform espionage on journalists, foreigners, and dissidents.

Public reception
According to CyberScoop's Patrick Howell O'Neill, "The forum caters mostly to a young audience who are curious and occasionally malicious, but still learning... Furthermore, HackForums is the kind of internet community that can seem impenetrable, even incomprehensible, to outsiders. It has a reputation for being populated by trolls: chaos-driven children and brazen criminal activity." Cybersecurity journalist Brian Krebs described HackForums as "a forum that is overrun with teenage wannabe hackers who spend most of their time trying to impress, attack or steal from one another." Allison Nixon, Director of Security Research at Flashpoint, compared the activity on HackForums to that of real-world street gangs.

References

External links
Official website

Hacking (computer security)
Hacker culture
Crime forums
68508272
https://en.wikipedia.org/wiki/Motorola%2068000%20Educational%20Computer%20Board
Motorola 68000 Educational Computer Board
The Motorola 68000 Educational Computer Board (MEX68KECB) was a development board for the Motorola 68000 microprocessor, introduced by Motorola in 1981. It featured the 68K CPU, memory, I/O devices and built-in educational and training software.

Hardware
CPU: 4 MHz Motorola 68000
RAM: 32 KB
ROM: 16 KB
9600 baud serial port for dumb terminal connection
9600 baud serial port for host computer connection
Parallel port for communication and printer connection
Audio output for tape storage
24-bit programmable interval timer
Wire-wrap area for custom circuitry
Required power voltages: -12 V, +5 V and +12 V

Software
The board's built-in 16 KB ROM contains assembly/disassembly/stepping/monitoring software called TUTOR. The software was operated through a command-line interface over a serial link, and provided many commands useful in machine-code debugging. Memory contents (including programs) could be dumped via the serial link to a file on the host computer; the file was transferred in Motorola's S-Record format. Similarly, files from the host could be uploaded into an arbitrary area of the board's user memory.

Price
At launch, the Motorola ECB was relatively inexpensive for a computer with what was then an advanced 16/32-bit CPU.

Use
According to the manual, only a dumb terminal and a power source are required for basic use. However, it seems that in colleges the board was predominantly used in connection with a time-sharing host computer to teach assembly language programming and other computer science subjects.

References
MC68000 Educational Computer Board User's Manual

External links
MC68000 Educational Computer Board User's Manual

Early microcomputers
Microcomputers
Single-board computers
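The S-Record format mentioned in the Software section can be sketched as follows: an S1 record carries a byte count, a 16-bit load address, the data bytes and a ones'-complement checksum, all encoded as hexadecimal text. The Python below builds such a record; the example address and data bytes are arbitrary, and nothing is assumed about TUTOR's dump commands beyond the standard record layout:

    def s1_record(address, data):
        """Build a Motorola S1 record: 16-bit address + data + checksum."""
        count = len(data) + 3                     # address (2 bytes) + data + checksum (1 byte)
        body = bytes([count, (address >> 8) & 0xFF, address & 0xFF]) + bytes(data)
        checksum = (~sum(body)) & 0xFF            # ones' complement of the byte sum, low byte
        return "S1" + (body + bytes([checksum])).hex().upper()

    # Example: four arbitrary bytes loaded at address 0x1000
    # (0x4E71 = NOP, 0x4E75 = RTS on the 68000)
    print(s1_record(0x1000, [0x4E, 0x71, 0x4E, 0x75]))   # -> S10710004E714E7566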
10614902
https://en.wikipedia.org/wiki/CamStudio
CamStudio
CamStudio is an open-source screencasting program for Microsoft Windows released as free software. The software renders videos in an AVI format. It can also convert these AVIs into Flash Video format, embedded in SWF files. CamStudio is written in C++, but CamStudio 3 will be developed in C#. The program has distributed malware and harmful viruses via the installer. History The original CamStudio was released as an open source product by RenderSoft software in October 2001. The source code license was converted to the GNU General Public License in December 2002 with release 1.8. The Source code of versions 1.0, 1.4 and 2.0 are still available at SourceForge. In 2003, the company was acquired by eHelp Corporation who owned a competing product called RoboDemo (now called Adobe Captivate). eHelp Corporation released an updated version as CamStudio 2.1 under a proprietary software license only and removed the ability to create SWFs. A succession of acquisitions led to the company being owned by Adobe. Development of CamStudio 2.0 (the last open-source version) was resumed and released as free software again in September 2007 with the CamStudio 2.5 Beta 1 release. Accordingly, it was re-branded as CamStudio Open Source. CamStudio 3 is a complete rewrite of the project in the pre-alpha stages of development as of April 19, 2010. Malicious software There have been ongoing reports about malicious code contained in some binaries of the software. In 2013, Google-run website VirusTotal declared that CamStudio contains malicious software, where most anti-virus programs detected Artemis Trojan in CamStudio installer file. In January 2014, the binary on the webpage was reported to be infected with the trojan, Artemis!0FEA2B12900D. In March 2016, the developers of CamStudio reported via forum post that the ad wrapper in the CamStudio installer had been removed and that it no longer offers third-party software or installs malware; however, they did not provide evidence of independent verification in the post. In a VirusTotal analysis of the installer acquired from the official download URL on 10 August 2016, AVware, Dr. Web and VIPRE antivirus tools said it was infected with "InstallCore" while the remaining 51 said it was clean. A VirusTotal analysis of the installer acquired from the official download URL on 14 February 2017, 31 out of 55 antivirus tools reported malicious content, mostly showing InstallCore. A second analysis of the installer acquired from the official download URL on 8 March 2017, 17 out of 60 antivirus tools reported malicious content, mostly showing InstallCore. In 2019, the installer was still infected, being detected by 22 out of 68 engines. As of 23 September 2019, the installer offered via SourceForge appears to be finally virus-free. As of 10 March 2020, the installer offered via the official website was reported to be infected by 20 out of 70 engines and the download URL was reported malicious by ESET engine. As of 27 April 2020, the installer offered via the official website was reported as malware by just 1 of 79 scanners. See also Comparison of screencasting software References External links Because download websites and installer versions vary, when in doubt, verify the downloaded file before installing: CamStudio fork on GitHub (2018 - 2020) (2007) Free software programmed in C++ Screencasting software Windows-only free software
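Following the advice above to verify a downloaded file before installing, here is a minimal sketch in Python that computes a file's SHA-256 digest so it can be compared against a checksum published by a source the user trusts. The file name is a placeholder, and no particular official checksum is implied:

    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        """Return the SHA-256 hex digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    print(sha256_of("CamStudio-installer.exe"))   # compare against a trusted published value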
18691693
https://en.wikipedia.org/wiki/Arptables
Arptables
The arptables computer software utility is a network administrator's tool for maintaining the Address Resolution Protocol (ARP) packet filter rules in the Linux kernel firewall modules. The tools may be used to create, update, and view the tables that contain the filtering rules, similarly to the iptables program from which it was developed. A popular application is the creation of filter configurations to prevent ARP spoofing. Linux kernel 2.4 only offers two ARP filtering chains, INPUT and OUTPUT, and Linux kernel 2.6 adds the third, FORWARD, applied when bridging packets. External links eb/arp tables website download site git tree arptables(8) - Linux man page arptables, and ARP poisoning Firewall software Linux network-related software Free network-related software Free security software
21132551
https://en.wikipedia.org/wiki/OOFEM
OOFEM
OOFEM is a free and open-source multi-physics finite element code with an object-oriented architecture. The aim of the project is to provide an efficient and robust tool for FEM computations as well as a highly modular and extensible environment for development.

Main features
Solves various linear and nonlinear problems from structural, thermal and fluid mechanics. In particular, it includes many material models for nonlinear fracture mechanics of quasibrittle materials, such as concrete.
Efficient parallel processing support based on domain decomposition and message-passing paradigms.
Direct as well as iterative solvers are available. Direct solvers include symmetric and unsymmetric skyline solvers and a sparse direct solver. Iterative solvers support many sparse storage formats and come with various preconditioners. Interfaces to third-party linear and eigenvalue solver libraries are available, including IML, PETSc, SLEPc, and SPOOLES.
Support for eXtended Finite Elements (XFEM) and iso-geometric analysis (IGA).

License
OOFEM is free, open-source software, released under the GNU Lesser General Public License, version 2.1 or any later version.

See also
List of numerical analysis software
List of finite element software packages

References

External links
Project website
Community resources
OOFEM forum
OOFEM wiki

Finite element software
Scientific simulation software
Free computer-aided design software
Free software programmed in C++
Finite element software for Linux
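As a generic illustration of the kind of linear system a finite element code assembles and then passes to a direct or an iterative solver, the sketch below sets up a one-dimensional bar fixed at one end and loaded at the other, and solves it both ways. This is textbook FEM written in Python, not OOFEM's own API; the element count, stiffness and load values are arbitrary:

    import numpy as np

    n = 8                       # number of elements; node 0 is fixed
    k = 1.0                     # uniform element stiffness EA/L (arbitrary)
    K = np.zeros((n, n))        # global stiffness matrix for the free nodes 1..n
    f = np.zeros(n)
    f[-1] = 1.0                 # unit load applied at the free end

    # Assemble the usual tridiagonal stiffness matrix element by element
    for e in range(n):
        if e > 0:
            K[e - 1, e - 1] += k
            K[e - 1, e] -= k
            K[e, e - 1] -= k
        K[e, e] += k

    u_direct = np.linalg.solve(K, f)        # direct (dense) solve

    # The same symmetric positive-definite system solved with a plain conjugate gradient loop
    u, r = np.zeros(n), f.copy()
    p, rs = r.copy(), r @ r
    for _ in range(200):
        Kp = K @ p
        alpha = rs / (p @ Kp)
        u += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        if rs_new < 1e-16:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new

    print(np.allclose(u, u_direct))         # True: both solvers agree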
145459
https://en.wikipedia.org/wiki/Logical%20link%20control
Logical link control
In the IEEE 802 reference model of computer networking, the logical link control (LLC) data communication protocol layer is the upper sublayer of the data link layer (layer 2) of the seven-layer OSI model. The LLC sublayer acts as an interface between the media access control (MAC) sublayer and the network layer. The LLC sublayer provides multiplexing mechanisms that make it possible for several network protocols (e.g. IP, IPX and DECnet) to coexist within a multipoint network and to be transported over the same network medium. It can also provide flow control and automatic repeat request (ARQ) error management mechanisms. Operation The LLC sublayer is primarily concerned with multiplexing protocols transmitted over the MAC layer (when transmitting) and demultiplexing them (when receiving). It can also provide node-to-node flow control and error management. The flow control and error management capabilities of the LLC sublayer are used by protocols such as the NetBIOS Frames protocol. However, most protocol stacks running atop 802.2 do not use LLC sublayer flow control and error management. In these cases flow control and error management are taken care of by a transport layer protocol such as TCP or by some application layer protocol. These higher layer protocols work in an end-to-end fashion, i.e. re-transmission is done from the original source to the final destination, rather than on individual physical segments. For these protocol stacks only the multiplexing capabilities of the LLC sublayer are used. Application examples X.25 and LAPB An LLC sublayer was a key component in early packet switching networks such as X.25 networks with the LAPB data link layer protocol, where flow control and error management were carried out in a node-to-node fashion, meaning that if an error was detected in a frame, the frame was retransmitted from one switch to next instead. This extensive handshaking between the nodes made the networks slow. Local area network The IEEE 802.2 standard specifies the LLC sublayer for all IEEE 802 local area networks, such as IEEE 802.3/Ethernet (when Ethernet II frame format is not used), IEEE 802.5, and IEEE 802.11. IEEE 802.2 is also used in some non-IEEE 802 networks such as FDDI. Ethernet Since bit errors are very rare in wired networks, Ethernet does not provide flow control or automatic repeat request (ARQ), meaning that incorrect packets are detected but only cancelled, not retransmitted (except in case of collisions detected by the CSMA/CD MAC layer protocol). Instead, retransmissions rely on higher layer protocols. As the EtherType in an Ethernet frame using Ethernet II framing is used to multiplex different protocols on top of the Ethernet MAC header it can be seen as an LLC identifier. However, Ethernet frames lacking an EtherType have no LLC identifier in the Ethernet header, and, instead, use an IEEE 802.2 LLC header after the Ethernet header to provide the protocol multiplexing function. Wireless LAN In wireless communications, bit errors are very common. In wireless networks such as IEEE 802.11, flow control and error management is part of the CSMA/CA MAC protocol, and not part of the LLC layer. The LLC sublayer follows the IEEE 802.2 standard. HDLC Some non-IEEE 802 protocols can be thought of as being split into MAC and LLC layers. 
For example, while HDLC specifies both MAC functions (framing of packets) and LLC functions (protocol multiplexing, flow control, detection, and error control through a retransmission of dropped packets when indicated), some protocols such as Cisco HDLC can use HDLC-like packet framing and their own LLC protocol. PPP and modems Over telephone network modems, PPP link layer protocols can be considered as a LLC protocol, providing multiplexing, but it does not provide flow control and error management. In a telephone network, bit errors might be common, meaning that error management is crucial, but that is today provided by modern protocols. Today's modem protocols have inherited LLC features from the older LAPM link layer protocol, made for modem communication in old X.25 networks. Cellular systems The GPRS LLC layer also does ciphering and deciphering of SN-PDU (SNDCP) packets. Power lines Another example of a data link layer which is split between LLC (for flow and error control) and MAC (for multiple access) is the ITU-T G.hn standard, which provides high-speed local area networking over existing home wiring (power lines, phone lines and coaxial cables). See also Subnetwork Access Protocol (SNAP) Virtual Circuit Multiplexing (VC-MUX)
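To illustrate the protocol multiplexing function described in this article, the sketch below builds the three-byte IEEE 802.2 LLC header followed by the five-byte SNAP extension that lets an EtherType (here 0x0800, IPv4) be carried inside an 802.2 frame. The field values are the standard LLC/SNAP ones; the sketch stops at the header and does not emit a complete frame:

    import struct

    def llc_snap_header(ethertype, oui=b"\x00\x00\x00"):
        """Build an IEEE 802.2 LLC header followed by a SNAP extension.

        DSAP = SSAP = 0xAA marks the SNAP SAP, Control = 0x03 is an
        unnumbered-information (UI) frame; SNAP then carries an OUI and the
        EtherType that identifies the payload protocol (the multiplexing key).
        """
        llc = struct.pack("!BBB", 0xAA, 0xAA, 0x03)
        snap = oui + struct.pack("!H", ethertype)
        return llc + snap

    header = llc_snap_header(0x0800)       # 0x0800 = IPv4
    print(header.hex())                    # -> aaaa030000000800 (8 bytes in total)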
21399088
https://en.wikipedia.org/wiki/Colette%20Rolland
Colette Rolland
Colette Rolland (born 1943, in Dieupentale, Tarn-et-Garonne, France) is a French computer scientist and Professor of Computer Science in the department of Mathematics and Informatics at the University of Paris 1 Pantheon-Sorbonne, and a leading researcher in the area of information and knowledge systems, known for her work on meta-modeling, particularly goal modelling and situational method engineering. Biography In 1966 she studied applied mathematics at the University of Nancy, where she received her PhD in 1971. In 1973 she was appointed Professor at the University of Nancy, Department of Computer Science. In 1979 she became Professor at University of Paris 1 Pantheon-Sorbonne Department of Mathematics and Informatics. She has been involved in a large number of European research projects and used to lead cooperative research projects with companies. She is currently Professor Emeritus of Computer Science in the department of Mathematics and Informatics. Rolland is in the editorial board of a number of journals including Journal of Information Systems, Journal on Information and Software Technology, Requirements Engineering Journal, Journal of Networking and Information Systems, Data and Knowledge Engineering Journal, Journal of Data Base Management and Journal of Intelligent Information Systems. She is the French representative in IFIP TC8 on Information Systems and has been the co chair and chairperson of the IFIP WG8.1 during nine years. Rolland has been awarded a number of prizes including the IFIP Silver Core, IFIP service award, the Belgium prize ‘de la Fondation Franqui’ and the European prize of ‘Information Systems’. Work Roland's research interests are in the areas of information modeling, databases, temporal data modeling, object-oriented analysis and design, requirements engineering and specially change engineering, method engineering, CASE and CAME tools, change management and enterprise knowledge development. Publications Rolland is the co-author of 7 textbooks; editor of 25 proceedings and author or co-author of over 280 invited and referred papers. Books, a selection: 1991. Automatic Tools for Designing Office Information Systems: The Todos Approach. Research Reports ESPRIT, Project 813, Todos, Vol. 1. With B. Pernici. 1992. Information System Concepts: Improving the Understanding, Proceedings. With Eckhard D. Falkenberg. IFIP Transactions a, Computer Science and Technology. 1993. Advanced Information Systems Engineering. With F. Bodart. Springer. 1994. A Natural Language Approach For Requirements Engineering. With C. Proix. 1996. Facilitating "Fuzzy to Formal" Requirements Modelling. With Janis Bubenko, P. Loucopoulos and V. Deantonellis. 1988. Temporal Aspects in Information Systems. With F. Bodart. Elsevier Science Ltd. 1998. A framework of information system concepts. The FRISCO report. With Eckhard D. Falkenberg, Paul Lindgreen, Björn E. Nilsson, J.L. Han Oei, Ronald Stamper, Frans J M Van Assche, Alexander A. Verrijn-Stuart and Klaus Voss. 1998. "A proposal for a scenario classification framework". With others. In: Journal of. Requirements Engineering. vol. 3 pp. 23–47 – Springer Verlag, 1998 2000. Engineering information systems in the Internet context: IFIP TC8/WG8.1. With Sjaak Brinkkemper and Motoshi Saeki. 2005. "Modeling Goals and Reasoning with Them" with Camille Salinesi, in Engineering and Managing Software Requirements, edited by Aybüke Aurum and Claes Wohlin, Springer:Berlin/Heidelberg, p. 189‑217. 2005. 
"Measuring the fitness relationship" with Anne Etien, Requirements Engineering, vol. 10, p. 184‑197, 2005. Papers, a selection: 1998. "A proposal for a scenario classification framework". With others. In: Journal of. Requirements Engineering. vol. (3) pp. 23–47 – Springer Verlag, 1998 2005. "Map-driven Modular Method Re-engineering: Improving the RESCUE Requirements Process". CAiSE Short Paper Proceedings 2005 with Jolita Ralyté et al. 2006. "Intentional Cognitive Models with Volition". With Ammar Qusaibaty, Newton Howard 2007. "Capturing System Intentionality with Maps", in Conceptual Modelling in Information Systems Engineering, edited by John Krogstie, Andreas Opdahl and Sjaak Brinkkemper, Springer:Berlin/Heidelberg, p. 141‑158. 2007. "On the Adequate Modeling of Business Process Families". With Naveen Prakash. Paper 8th Workshop on Business Process Modeling, Development, and Support. 2008. Requirements engineering: foundation for software quality : 14th international working conference, REFSQ 2008, Montpellier, France, June 16–17, 2008 : proceedings. With Barbara Paech (eds.). 2008. "Towards Engineering Purposeful Systems: A Requirements Engineering Perspective", in Database and Expert Systems Applications, edited by S. S. Bhowmick, J. Küng and R. Wagner, Lecture Notes in Computer Science, vol. 5181, Springer:Berlin/Heidelberg, p. 1‑4. 2009. "Exploring the Fitness Relationship between System Functionality and Business Needs", in Design Requirements Engineering: A Ten-Year Perspective, edited by Kalle Lyytinen, Pericles Loucopoulos, John Mylopoulos and Bill Robinson, Lecture Notes in Business Information processing, Springer:Berlin/Heidelberg, p. 305‑326. 2009. "Method engineering: towards methods as services", Software Process: Improvement and Practice, vol. 14, no 3, p. 143‑164. 2010. "An Intentional Approach to Service Engineering" with Manuele Kirsch-Pinheiro and Carine Souveyet, IEEE Transactions on Services Computing, vol. 3, p. 292‑305, 2010. 2013. Seminal Contributions to Information Systems Engineering: 25 Years of CAiSE. With Janis Bubenko, John Krogstie, Óscar Pastor, Barbara Pernici, Colette Rolland, Arne Sølvberg (eds.). References External links home page REMORA by Colette Rolland , Virtual Exhibitions in Informatics at uni-klu.ac.at 1943 births Living people French computer scientists Enterprise modelling experts Information systems researchers French women computer scientists University of Paris faculty Nancy-Université alumni Nancy-Université faculty Paris-Sorbonne University faculty
364952
https://en.wikipedia.org/wiki/Jet%20Set%20Willy
Jet Set Willy
Jet Set Willy is a platform video game originally written by Matthew Smith for the ZX Spectrum home computer. It was published in 1984 by Software Projects and ported to most home computers of the time. The game is a sequel to Manic Miner published in 1983, and the second game in the Miner Willy series. It spent over three months at the top of the charts and was the UK's best-selling home video game of 1984. Plot A tired Miner Willy has to tidy up all the items left around his house after a huge party. With this done, his housekeeper Maria will let him go to bed. Willy's mansion was bought with the wealth obtained from his adventures in Manic Miner, but much of it remains unexplored and it appears to be full of strange creatures, possibly a result of the previous (missing) owner's experiments. Willy must explore the enormous mansion and its grounds (including a beach and a yacht) to fully tidy up the house so he can get some much-needed sleep. Gameplay Jet Set Willy is a flip-screen platform game in which the player moves the protagonist, Willy, from room to room in his mansion collecting objects. Unlike the screen-by-screen style of its prequel, the player can explore the mansion at will. Willy is controlled using only left, right and jump. He can climb stairs by walking into them (jumping through them to avoid them) and climb swinging ropes by pushing left or right depending on what direction the rope is swinging. The play area itself consists of 60 playable screens making up the mansion and its grounds and contains hazards (static killer objects), guardians (killer monsters which move along predetermined paths), various platforms and collectable objects. The collectable items glow to distinguish them from other objects in the room. Willy loses a life if he touches a hazard or guardian or falls too far. He is returned to the point at which he entered the room, which may lead to a game-ending situation where Willy repeatedly falls from a height or unavoidably collides with a guardian, losing all lives in succession. One of the more bizarrely named rooms in the game is We Must Perform a Quirkafleeg. (The pre-release name for the screen was The Gaping Pit.) This is a reference to the comic strip Fat Freddy's Cat, a spin-off from the Fabulous Furry Freak Brothers; in the original comic, the quirkafleeg was an obscure ritual in a foreign country, required to be performed upon the sight of dead furry animals. Music Music on the Spectrum version is Beethoven's Moonlight Sonata for the menu, and Grieg's "In the Hall of the Mountain King" during the game itself. Early versions played "If I Were a Rich Man" as the in-game music, but this had to be removed when the publishers of the song demanded £36,000 for its use. Music on the C64 version was Beethoven's Moonlight Sonata during loading, and J.S. Bach - Inventions # 1 during gameplay. Some rooms also play Mozart's Rondo alla Turca. Bugs Upon release, the game could not be completed due to several bugs. Although four completely unrelated issues, they became known collectively as "The Attic Bug". After entering The Attic screen, various rooms would undergo corruption for all subsequent playthroughs, including all monsters disappearing from The Chapel screen, and other screens triggering a game over. This was caused by an error in the path of an arrow in The Attic, resulting in the sprite traveling past the end of the Spectrum's video memory and overwriting crucial game data instead. 
Initially, Software Projects attempted to pass this bug off as an intentional feature to make the game more difficult, claiming that the rooms in question were filled with poison gas. However, they later rescinded this claim and issued a set of POKEs to correct the flaws. Despite these bugs, Ross Holman and Cameron Else won the competition that Software Projects had set for completion of Jet Set Willy and provided Software Projects with a set of bug fixes. Software Projects then hired Cameron Else to port both Manic Miner and Jet Set Willy to the MSX. Reception Reviewing Jet Set Willy for Your Spectrum magazine in June 1984, Sue Denham wrote that the game was "every bit as good and refreshing as the original". In the final issue of Your Sinclair, the ZX Spectrum version was ranked number 32 on "The Your Sinclair Official Top 100 Games of All Time", and voted number 33 on "The Your Sinclair Readers' Top 100 Games of All Time." In 2004, the ZX Spectrum version was voted the 6th best game of all time by Retro Gamer readers in an article originally intended for a special issue of Your Sinclair bundled with Retro Gamer. Copy protection Jet Set Willy came with a form of copy protection: a card with 180 coloured codes on it was bundled with the cassette. Upon loading, one of the codes from the card had to be entered before the game would start. Although the cassette could be duplicated, a copy of the card was also needed, and at the time home colour reproduction was difficult, making Jet Set Willy harder to copy than most Spectrum games. However, means of circumventing the card were quickly found, and one method was published in a UK computer magazine. Ports The original releases of Jet Set Willy for the BBC Micro and the Commodore 64 also contained bugs which made it impossible to complete the game. In the Commodore 64 version, it was impossible to reach all of the items in the Wine Cellar. There are two versions of the original Jet Set Willy for the MSX. The Software Projects version that was sold in the UK is dated 1984 and was programmed by Cameron Else, co-winner of the Jet Set Willy competition. The other version was published by Hudson Soft in 1985 as a Bee Card in Japan. A port of Jet Set Willy for the Atari 8-bit family of computers was released by Tynesoft in 1986. It received generally poor reviews which criticised inferior graphics and animation; however, Rob Hubbard's theme music, unique to this version, was considered a highlight. Like the Spectrum version, it was impossible to complete, but for different reasons. Some of the legitimate items that had to be collected caused the player to lose a life (e.g. the bottles in the Off Licence). Ports from Software Projects for the Amiga and Atari ST were cancelled before release, but have since been made available on the internet. Legacy Expanded versions Jet Set Willy: The Final Frontier, an expanded version for the Amstrad CPC, was later converted back to the ZX Spectrum and released as Jet Set Willy II. Both the original game and Jet Set Willy II were released for the BBC Micro, Acorn Electron, MSX, Commodore 16 and Commodore 64. A differently expanded version of Jet Set Willy was released for the Dragon 32/64, with extra rooms. This version could also not be completed as it was impossible to traverse The Drive in a right-to-left direction, which was necessary to return to bed after collecting all the items.
The game could, however, be completed using a built-in cheat, accessed by holding down the keys M, A and X simultaneously, which allowed the player to start Willy from any position on any screen, using the arrow keys and spacebar. The Dragon port was itself converted to run on the Acorn Archimedes computers. Better collision detection meant that "The Drive" could now be completed right-to-left, unlike on the Dragon. Third-party modifications In its original Spectrum version, the rooms themselves are stored in a straightforward format, with no compression, making it relatively easy to create customised versions of the game. The review of JSW in issue 4 of Your Spectrum included a section entitled "JSW — A Hacker's Guide"; remarks in this section imply that the author had successfully deduced at least some of the data structures, since he was able to remove sections of wall in the Master Bedroom. The following year, issue 13 contained a program that added an extra room ("April Showers") to the game, and issue 15 described the data formats in detail. Several third-party editing tools were published between 1984 and 1986, allowing players to design their own rooms and sprites. See also The following platform games are in the same mould as the Miner Willy series, in which the object is to collect items in order to complete the game's screens. Brian Bloodaxe Chuckie Egg Dynamite Dan Kokotoni Wilf Roller Coaster Technician Ted Blagger References External links Jet Set Willy at Atari Mania A remake of the original Jet Set Willy at Darn Kitty Jet Set Willy / GAMOPAT Platform games 1984 video games Software Projects games Hudson Soft games Amstrad CPC games Atari 8-bit family games Commodore 16 and Plus/4 games Commodore 64 games Dragon 32 games MSX games Texas Instruments TI-99/4A games ZX Spectrum games BBC Micro and Acorn Electron games Mobile games Video games scored by Rob Hubbard Video games developed in the United Kingdom
98078
https://en.wikipedia.org/wiki/University%20of%20Southampton
University of Southampton
The University of Southampton (abbreviated as Soton in post-nominal letters) is a public research university in Southampton, England. Southampton is a founding member of the Russell Group of research-intensive universities in the United Kingdom, and is ranked in the top 100 universities in the world. The university has seven campuses. The main campus is located in the Highfield area of Southampton and is supplemented by four other campuses within the city: Avenue Campus, housing the School of Humanities; the National Oceanography Centre, housing courses in Ocean and Earth Sciences; Southampton General Hospital, offering courses in Medicine and Health Sciences; and Boldrewood Campus, an engineering and maritime technology campus which also houses Lloyd's Register. In addition, the university operates a School of Art based in nearby Winchester and an international branch in Malaysia offering courses in Engineering. Each campus is equipped with its own library facilities. The University of Southampton currently has undergraduate and postgraduate students, making it the largest university by higher education students in the South East region. The University of Southampton Students' Union provides support, representation and social activities for the students, ranging from involvement in the Union's four media outlets to any of the 200 affiliated societies and 80 sports clubs. The university owns and operates a sports ground for use by students and also operates a sports centre on the main campus. History Hartley Institution The University of Southampton has its origin in the Hartley Institution, which was formed in 1862 from a benefaction by Henry Robinson Hartley (1777–1850). Hartley had inherited a fortune from two generations of successful wine merchants. At his death in 1850, he left a bequest of £103,000 to the Southampton Corporation for the study and advancement of the sciences in his property on Southampton's High Street, in the city centre. Hartley was an eccentric, who had little liking for the new age of docks and railways in Southampton. He did not desire to create a college for the many (as formed at a similar time in other English industrial towns and commercial ports) but a cultural centre for Southampton's intellectual elite. After lengthy legal challenges to the bequest, and a public debate as to how best to interpret the language of his will, the Southampton Corporation chose to create the Institute (rather than the more widely accessible college that some public figures had lobbied for). On 15 October 1862, the Hartley Institute was opened by the Prime Minister Lord Palmerston in a major civic occasion which exceeded in splendour anything that anyone in the town could remember. After initial years of financial struggle, the Hartley Institute became the Hartley College in 1883. This move was followed by increasing numbers of students and teaching staff, an expansion of the facilities, and registered lodgings for students. University College In 1902, the Hartley College became the Hartley University College, a degree-awarding branch of the University of London. This followed an inspection of the teaching and finances by the University College Grants Committee, and donations from Council members (including William Darwin, the then Treasurer). An increase in student numbers in the following years motivated fund-raising efforts to move the college to greenfield land around Back Lane (now University Road) in the Highfield area of Southampton.
On 20 June 1914, Viscount Haldane opened the new site of the renamed Southampton University College. However, the outbreak of the First World War six weeks later meant no lectures could take place there, as the buildings were handed over by the college authorities for use as a military hospital. To cope with the volume of casualties, wooden huts were erected at the rear of the building. These were donated to the university by the War Office after the end of the fighting, in time for the transfer from the high street premises in 1920. At this time, Highfield Hall, a former country house overlooking Southampton Common for which a lease had earlier been secured, came into use as a hall of residence for female students. South Hill, on what is now the Glen Eyre halls complex, was also acquired, along with South Stoneham House, to house male students. Further expansion through the 1920s and 1930s was made possible by private donors, such as the two daughters of Edward Turner Sims, who funded the construction of the university library, and by the people of Southampton, enabling new buildings on both sides of University Road. During World War II the university suffered damage in the Southampton Blitz, with bombs landing on the campus and its halls of residence. The college decided against evacuation, instead expanding its Engineering Department and School of Navigation and developing a new School of Radio Telegraphy. The university hosted the Supermarine plans and design team for a period, but in December 1940 further bomb hits resulted in its relocation to Hursley House. Halls of residence were used to house Polish, French and American troops. After the war, departments such as Electronics grew under the influence of Erich Zepler, and the Institute of Sound and Vibration Research was established. University On 29 April 1952, Queen Elizabeth II granted the University of Southampton a Royal Charter, the first to be given to a university during her reign, which enabled it to award degrees. Six faculties were created: Arts, Science, Engineering, Economics, Education and Law. The first University of Southampton degrees were awarded on 4 July 1953, following the appointment of the Duke of Wellington as Chancellor of the university. Student and staff numbers grew throughout the next couple of decades in response to the Robbins Report. The campus also grew significantly when, in July 1961, the university was given approval by the Borough Council to acquire some 200 houses on or near the campus. In addition, more faculties and departments were founded, including Medicine and Oceanography (despite the discouragement of Sir John Wolfenden, the chairman of the University Grants Committee). Student accommodation was expanded throughout the 1960s and 1970s with the acquisition of Chilworth Manor and new buildings at the Glen Eyre and Montefiore complexes. In 1987, a crisis developed when the University Grants Committee announced, as part of nationwide cutbacks, a series of reductions in the funding of the university. To eliminate the expected losses, the budgets and deficits subcommittee proposed reducing staff numbers. This proposal was met with demonstrations on campus and, after being rejected by the university Senate, was later reworked to reduce the redundancies and reallocate the reductions in faculty funding.
From the mid-1980s through to the 1990s, the university looked to expand with new buildings on the Highfield campus, developing the Chilworth Manor site into a science park and conference venue, opening the National Oceanography Centre at a dockside location and purchasing new land from the City Council for the Arts Faculty and sports fields (at Avenue Campus and Wide Lane, respectively). Research university Under the leadership of the then Vice-Chancellor, Sir Howard Newby, the university became more focused on encouraging and investing in more and better-quality research. In the mid-1990s, the university gained two new campuses, as the Winchester School of Art and La Sainte Union College became part of the university. A new school for Nursing and Midwifery was also created and went on to provide training for NHS professionals in central-southern England. This involved a huge increase in student numbers and the establishment of sub-campuses in Basingstoke, Winchester, Portsmouth and Newport, Isle of Wight. In the autumn of 1997, the university experienced Britain's worst outbreak of meningitis, with the deaths of three students. The university responded to the crisis by organising a mass vaccination programme, and later took the ground-breaking decision to offer all new students vaccinations. The university celebrated its Golden Jubilee on 22 January 2002. By this time, research income represented over half of Southampton's total income. In recent years a number of new landmark buildings have been added as part of the estates development. New constructions on the main campus include the Jubilee Sports Complex in 2004, the EEE (ECS, Education and Entrance) building in 2007, the new Mountbatten building in 2008 (housing the School of Electronics and Computer Science following a fire) and the Life Sciences building in 2010. In addition, the Hartley Library and Student Services Centre were both extended and redesigned in 2005, and the Students' Union was also extended in 2002. Other constructions include the Archaeology building on Avenue Campus in 2006 and the Institute of Development Sciences building at Southampton General Hospital in 2007. The university has also significantly redeveloped its Boldrewood Campus, which is home to part of the engineering faculty and to Lloyd's Register's Global Technology Centre. The university joined the Science and Engineering South Consortium (SES) on 9 May 2013. The SES was created to pool the collective insights and resources of the University of Oxford, University of Cambridge, Imperial College London and University College London to innovate and explore new ideas through collaboration whilst providing efficiencies of scale and shared utilisation of facilities. This is the most powerful cluster of research-intensive universities in the UK, and the new consortium aims to become one of the world's leading hubs for science and engineering research. In 2015, the university started a fundraising campaign to build the Centre for Cancer Immunology, based at Southampton General Hospital. At the beginning of 2018, the target amount of £25 million was raised, allowing 150 scientists to move into the building in March 2018. The Centre for Cancer Immunology is the first of its kind in the UK and contains facilities that will host clinical trial units and laboratories exploring the relationship between cancer and the immune system.
Campuses The university has seven educational campuses – five in Southampton, one in Winchester, and one international branch in Malaysia. The university operates a science park in Chilworth. The university also owns sports facilities and halls of residence on a variety of other nearby sites. Highfield Campus The university's main campus is located in the residential area of Highfield. Opened on 20 June 1914, the site was initially used as a military hospital during World War I. The campus grew gradually, mainly consisting of detailed red brick buildings (such as the Hartley Library and the West building of the Students' Union) designed by Sir Giles Gilbert Scott. In 1956, Sir Basil Spence was commissioned to prepare a masterplan of the campus for the foreseeable future. This included incorporating University Road, which split the campus in two, and the quarry of Sir Sidney Kimber's brickyard, which was itself split by a stream. Unable to remove the road and the private houses along it, Spence designed many of the buildings facing away from it, using contemporary designs working in concrete, glass and mosaic. During recent decades, new buildings were added that contravened Spence's masterplan, such as the Synthetic Chemistry Building and Mountbatten Building (the latter of which was destroyed by fire in 2005). In 1991, the Highfield Planning Group was formed within the university under the chairmanship of Tim Holt. This led to the development of new buildings such as the Jubilee Sports Hall, Student Services Building and the Institute of Sound and Vibration Research. In addition, existing buildings, such as the Hartley Library, were extensively renovated and extended. A new masterplan for the Highfield campus was drawn up in 1998 by Rick Mather, who proposed that University Road should become a tree-lined boulevard backed by white-rendered buildings. He also contributed some of the newer buildings, such as the Zepler and Gower Buildings. Avenue Campus Avenue Campus is currently home to the Faculty of Humanities, with the exception of Music, and is located a short distance away from the main Highfield campus. The site previously housed the Southampton Tramsheds and Richard Taunton's College, whose building still stands on the site. It was purchased by the university from Southampton City Council for £2 million in December 1993 so that the university could expand – planning regulations meant that excess land on the Highfield campus could not be built on and had to be reserved for future car parking spaces. The car parking spaces have now been built. The departments moved onto the campus in 1996. The campus consists of the original Tauntons building from the early 20th century, redeveloped with a glass-fronted courtyard and extension, and a new Archaeology building, built in 2006 at a cost of £2.7 million. Boldrewood Campus Boldrewood Campus, located a short distance from the Highfield campus, houses the university's new Maritime Centre of Excellence, the Southampton Marine and Maritime Institute and Lloyd's Register's Group Technology Centre. The campus was formerly the Biomedical Sciences campus of the university and acted, until 2010, as a non-hospital base for the School of Medicine and home to a research facility for the Biological Sciences. These departments were then relocated to either Southampton General Hospital, the new Life Sciences building at Highfield, or the University of Southampton science park.
National Oceanography Centre, Southampton The National Oceanography Centre, Southampton (NOCS) is located in Southampton Docks, three miles south of the main university campus. The campus is home to the university's Ocean and Earth Sciences department and is also a campus of the Natural Environment Research Council's research institute, the National Oceanography Centre. Five of the National Oceanography Centre's research divisions are based on the campus. Planning of the campus began in 1989 and was only completed in 1994, owing to cuts and to uncertainty over whether a national research centre could be successfully integrated with a university. It was opened in 1996 by the Duke of Edinburgh. The campus was also the base for the NERC purpose-built research vessel RRS James Cook and, until recently, the RRS Discovery and the RRS Charles Darwin. University Hospital Southampton (UHS) The university maintains a presence at Southampton General Hospital in partnership with the NHS trust operating the hospital. It is home to some operations of the Faculty of Medicine and the Faculty of Health Sciences, although these two faculties have bases on Highfield campus. As a teaching hospital, it is used by a range of undergraduate and postgraduate medical students, research academics and clinicians. The university's involvement began in 1971, when it became, alongside the universities of Nottingham and Leicester, one of the first to house a new school of medicine, and currently extends to several operations and specific research centres. Winchester School of Art The Winchester School of Art, located in central Winchester, houses the university's arts and textiles courses that are part of the Faculty of Arts and Humanities. The school itself was established in the 1960s and was integrated into the University of Southampton in 1996. The campus contains the original school buildings from the 1960s, in addition to structures built when the merger occurred and in 1998 when the Textile Conservation Centre moved to the site from Hampton Court Palace. The centre remained with the school until its closure in 2009. Malaysia Campus (University of Southampton Malaysia) The university opened its first international campus in Iskandar Puteri, Malaysia in October 2012. Located in the state of Johor near the southwestern tip of Malaysia, the campus sits within EduCity in Iskandar Puteri - a new city comprising universities and institutes of higher education, academia-industry action and R&D centres, as well as student accommodation and shared sports and recreational facilities. The campus offers courses in engineering: an Engineering foundation year programme and MEng programmes in Aeronautics and Astronautics, Mechanical Engineering, and Electrical and Electronic Engineering. All programmes have been approved by the Malaysian Qualifications Agency (MQA) and the Board of Engineers Malaysia (BEM). The split-campus degree programmes take place in Malaysia for the first two years, with the final two years at Southampton. In 2016, the Malaysia Campus' first group of students graduated, along with the first PhD graduate. Science Park The University of Southampton Science Park contains approximately 50 businesses connected to the university. Originally established in 1983 as Chilworth Science Park, named after the manor house that is now a luxury hotel and conference centre, the park houses business incubator units to help these companies.
The companies occupying the park range in expertise and fields including oil and gas exploration, pharmaceuticals, nanotechnology and optoelectronics, with three of the twelve successful spin-out companies created since 2000 being floated on London's Alternative Investment Market (AIM) with a combined market capitalisation value of £160 million. The park was renamed in 2006. Transport links To connect the university's Southampton campuses, halls of residence, hospitals, and other important features of the city, the university operates the Unilink bus service for the benefit of the students, staff and the general public. The service is currently operated by local bus company Bluestar using the Unilink name. The service consists of four routes. The U1 runs between Southampton Airport and the National Oceanography Centre via Wessex Lane Halls, Highfield campus, Portswood, Southampton City Centre and Southampton Central railway station. The other regular routes, the U2 and the U6, run between the City Centre and Bassett Green and Southampton General Hospital respectively while the final route, the U9, runs an infrequent service between Southampton General hospital and Townhill Park. Students who live in some halls of residence receive an annual smart-card bus pass, allowing them to use all of the Unilink services without extra payment. Organisation Governance Responsibility for running the university is held formally by the Chancellor and led at the executive level by the Vice-Chancellor, currently Prof Mark E. Smith. The key bodies in the university governance structure are the Council and Senate. The Council is the governing body of the university. It is ultimately responsible for the overall planning and management of the university. The council is also responsible for ensuring that the funding made available to the university by the Higher Education Funding Council for England is used as prescribed. The council is composed of members from 5 different classes, namely (1) officers; (2) eight lay members appointed by the council; (3) four members appointed by the Senate; (4) one member of the non-teaching staff; (5) the President of the Students' Union. The Senate is the university's primary academic authority, with responsibilities which include the direction and regulation of education and examinations, the award of degrees, and the promotion of research. The Senate has approximately 65 members, including the Vice-Presidents, the Deans and representatives from the academic staff in each faculty and those administrative groups most closely associated with educational activities, and representatives of the Students' Union. The Senate is chaired by the Vice-Chancellor. Faculties The university comprises five faculties, each with a number of academic units. This current faculty structure came into effect in 2018, taking over from a previous structure consisting of eight faculties. 
The current faculty structure is: Faculty of Arts and Humanities Humanities Winchester School of Art Faculty of Engineering and Physical Sciences Chemistry Electronics and Computer Science Engineering Physics and Astronomy Faculty of Environmental and Life Sciences Biological Sciences Geography and Environmental Science Health Sciences (nursing, midwifery, allied health professionals) Ocean and Earth Sciences National Oceanography Centre Psychology Faculty of Medicine Southampton Medical School Faculty of Social Sciences Economic, Social and Political Sciences Southampton Statistical Sciences Research Institute Mathematical Sciences Southampton Business School Southampton Education School Southampton Law School Affiliations Southampton is a founding member of the Russell Group of research-intensive universities in Britain. Academic profile Courses and subjects Southampton awards a wide range of academic degrees, spanning bachelor's and master's degrees in a variety of subjects as well as junior doctorates and higher doctorates. The postnominals awarded are the degree abbreviations used commonly among British universities. The university is part of the Engineering Doctorate scheme, for the award of Eng. D. degrees. Short courses and professional development courses are run by many of the university's Academic Schools and Research Centres. The university works closely with members of the Armed Forces. It enables professional military educators in the British Army to study for a Postgraduate Certificate in Education (PGCE). The university also works with the Royal Navy to provide training and qualifications towards Chartered Engineer status. Admissions In terms of average UCAS points of entrants, Southampton ranked 28th in Britain in 2014. The university gives offers of admission to 84.0% of its applicants, the 6th highest amongst the Russell Group. According to the 2017 Times and Sunday Times Good University Guide, approximately 15% of Southampton's undergraduates come from independent schools. In the 2016–17 academic year, the university had a domicile breakdown of 72:7:21 of UK:EU:non-EU students respectively, with a female-to-male ratio of 53:47. Rankings and reputation In the 2018/2019 international university rankings, Southampton ranked 96th (QS World University Rankings), 118th (Times Higher Education World University Rankings), 125th (CWTS Leiden Ranking) and 101-150 (Academic Ranking of World Universities). The 2021 U.S. News and World Report ranks Southampton 97th in the world and 11th in the UK. In 2019, it was ranked 205th among universities around the world by the SCImago Institutions Rankings. Southampton was awarded Bronze ("provision is of satisfactory quality") in the 2017 Teaching Excellence Framework, a government assessment of the quality of undergraduate teaching in universities and other higher education providers in England. The university appealed the Bronze award, but the appeal was rejected by HEFCE in August 2017. In response, the university's Vice Chancellor, Christopher Snowden, claimed the exercise was "devoid of any meaningful assessment of teaching" and that "there are serious lessons to be learned if the TEF is to gain public confidence." Enrolment into the exercise was voluntary and institutions were made aware of the metrics used before agreeing to be assessed by the TEF.
In January 2018, the university confirmed that it would re-enter the TEF, believing that changes to the evaluation criteria would benefit Russell Group universities. The Guardian ranked the university at number 1 in the UK for Civil Engineering and Electronic and Electrical Engineering in 2020. In the 2014 Research Excellence Framework assessing the research output of 154 British Universities and Institutes, Southampton was ranked 18th for GPA (15th among Russell Group Universities), 11th for research power (11th among Russell Group Universities), and 8th for research intensity (7th among Russell Group Universities). Research The university conducts research in most academic disciplines and is home to a number of notable research centres. Southampton has leading research centres in a number of disciplines, e.g. music, computer science, engineering and management sciences, and houses world-leading research institutions in fields as varied as oceanography and web science. Within the university there are a number of research institutes and groups that aim to pool resources on a specific research area. Institutes or groups identified by the university as being of significant importance are marked in italics. Institute of Sound and Vibration Research The Institute of Sound and Vibration Research (ISVR) is an acoustical research institute which is part of the University of Southampton. Founded in 1963, it was awarded a 2006 Queen's Anniversary Prize for Higher and Further Education. ISVR is divided into four distinct research groups: The Dynamics Group, specialising in the modelling, measurement and control of structural vibrations. The Fluid Dynamics and Acoustics Group (including the Rolls-Royce University Technology Centre in Gas Turbine Noise), specialising in three fields: aero-acoustics of aircraft engines, ultrasonics and underwater acoustics, and noise source imaging and virtual acoustics. The Human Sciences Group (including the Hearing and Balance Centre and the Human Factors Research Unit), specialising in the human response to sound and vibration. The Signal Processing and Control Group, specialising in signal processing for acoustics, dynamics, audiology and human sciences, and in the control of sound and vibration. ISVR offers a number of undergraduate and postgraduate degree programmes in acoustical engineering, acoustics & music and audiology. EPrints The School of Electronics and Computer Science created EPrints, the first open-access archiving software, to publish its research freely on the Web. This software is used throughout the university and as an archiving system for many different institutions around the world. Libraries The university has libraries located on each of the academic campuses and in total the collection holds over 1.5 million books and periodicals. The university's primary library is the Hartley Library, located on Highfield campus, first built in 1935 and extended in 1959 and 2005. The majority of the books and periodicals are held there, as well as specialist collections of works such as the Ford collection of Parliamentary papers and the European Documentation Centre. In addition, the main library houses the Special Collections and Archives centre, housing more than 6 million manuscripts and a large archive of rare books. Specific collections include the correspondence of Arthur Wellesley, 1st Duke of Wellington, acquired by the university in 1983, as well as the Broadlands Archive, including the Palmerston and Mountbatten papers.
The library also contains 4,500 volumes of Claude Montefiore's library on Theology and Judaism, the Ford Parliamentary Papers, Frank Perkins' collection of books on agriculture, Sir Samuel Gurney-Dixon's Dante collection and the James Parkes Library of Jewish/non-Jewish relations. The library also includes six rare editions of the Divina Commedia; the first of these, the Brescia edition of 1487, is the library's earliest book. In addition to the main Hartley Library, there are other libraries based at the university's other campuses primarily focused on the subjects studied at that location. As one of the smaller libraries and given its proximity to the Highfield campus, the Avenue Library only houses a collection of key Humanities resources. It does however also hold an extensive film library, many of an international nature. On a larger scale, the libraries at the National Oceanography Centre, Southampton General Hospital, Winchester School of Art are more complete and house the majority of the resources and specialist collections on oceanography and earth sciences, healthcare and art and design respectively. The Malaysia campus holds a small collection of reference books but the majority of the resources needed for courses at the campus are available online. Separate from the Hartley Library is the E. J. Richards Engineering Library, which contains further materials for more in-depth study and is freely accessible to Engineering students and staff. Arts The university's main Highfield campus is home to three main arts venues supported and funded by the university and Arts Council England. The Nuffield Theatre opened in 1963 with construction funded by a grant from the Nuffield Foundation of £130,000 (£2,450,000 in 2013). The building was designed by Sir Basil Spence as part of his campus masterplan with additional direction provided by Sir Richard Southern. The theatre consists of a 480-seat auditorium, that also served as the principal lecture theatre at the time of construction, as well as additional lecture theatres and adjacent Kitchen bar. The theatre went into administration in May 2020 and permanent closure was announced in July 2020. The Turner Sims Concert Hall was added to the art provision in October 1974 following a £30,000 (£460,000 in 2012) donation from Margaret Grassam Sims in 1967. It was made to provide a venue specifically for music following difficulties in gaining space in the Nuffield Theatre and due to acoustical differences with the spaces. The new space has a single auditorium, designed by the university's Institute of Sound and Vibration Research with musical performances in mind, with a flat space at the bottom so it could be used for exams. The final of the three Art Council supported venues on campus is the John Hansard Gallery. The gallery was opened on 22 September 1980 but is housed in a building that previously housed a tidal model of Southampton Water between 1957 and 1978. It took over responsibility from a photographic gallery, a gallery in the Nuffield Theatre and one located on Boldrewood campus. It houses various exhibitions in contemporary art and is due to move to new premises in Guildhall Square in c.2015. In addition, the western half of Highfield campus contain several 20th-century sculptures by Barbara Hepworth, Justin Knowles, Nick Pope and John Edwards. 
Student life Students' Union The University of Southampton Students' Union (SUSU) is the university students' union and has a range of facilities located on the Highfield campus and on the Winchester School of Art campus. At Highfield the union is sited in three buildings opposite the Hartley Library. The main building (Building 42) was built in the 1960s as part of the Basil Spence masterplan. The building was also extensively renovated in 2002, leading to the creation of two new bars and 'The Cube' multi-functional space. The West Building dates back to the 1940s and is in a red brick style, complementing the Hartley Library opposite. It originally held all of the Union's activities until the construction of the current Union. At present the building hosts the pub 'The Stags Head'. The newest building, built during the mid-1990s, includes the union shop and other retail stores. The union operates four media outlets. Surge Radio broadcasts over the internet from new studios in the main union building. Internet television station SUSUtv broadcasts a wide range of programmes live and on demand through its website. The student newspaper Wessex Scene is published once every three weeks. The Edge entertainment magazine began life as an insert of the Wessex Scene in 1995 before growing to become a full publication and online presence in 2011. Halls of residence The university provides accommodation for all first year students who require it, and places in residences are available for international and MSc students. Accommodation may be catered or self-catered, and rooms may have en-suite facilities, a sink in the room, or access to communal bathroom facilities. Each hall has a Junior Common Room (JCR) committee that is responsible for the running of social events and for representing the residents to the Students' Union and the university via the Students' Union JCR officer. The university's accommodation is centred on two large complexes of halls, with some other small halls located around the city. These are: Glen Eyre Complex – The complex lies less than half a mile to the north of Highfield Campus and houses approximately 2000 students. The complex consists of several building sets, designed over the years and arranged around the central landscaped garden – the oldest buildings, the Richard Newitt Courts, are separated into blocks A-G and are closest to the Glen Bar; students in these blocks have very small flats (between 4 and 6 to a kitchen, usually with more than one bathroom). Old Terrace and New Terrace are close to the site's entrance; New Terrace has ensuite rooms. Chancellors' Courts, consisting of Selbourne, Jellicoe and Roll courts, are the most modern blocks in the accommodation, with Brunei House, the most basic of the accommodation, on the outskirts. Located on the south side of Glen Eyre Road on the periphery of the site are Chamberlain Halls, which share most facilities with the main Glen Eyre site. This site consists of Hartley Grove, South Hill, Beechmount House and the Chamberlain blocks. All Glen Eyre halls are self-catered at present. Wessex Lane Halls – Located in Swaythling, approximately one mile east of the Highfield Campus. The complex provides accommodation for over 1,800 students and currently comprises two halls of residence: Montefiore Hall and Connaught. Connaught Halls are fully catered. The complex also features South Stoneham House, a period building constructed in 1708.
City Gateway Hall – Located in Swaythling, one mile north east of the Highfield Campus at the intersection of two major roads. Opened in September 2015, the landmark building was included in the runners-up list of the 2015 Carbuncle Cup. Featuring a 15-storey elliptical tower and two adjoining six-storey rectangular accommodation blocks, the hall provides accommodation for up to 375 students. Mayflower Halls – Located in the city centre within the city's 'Cultural Quarter', and two minutes' walk from Southampton Central railway station. The hall opened at the start of the 2014/2015 academic year, and houses over 1100 students in a mix of rooms. Archers Road – Lying two miles south of Highfield and housing 500 students, Archers Road comprises two halls on separate sites. The two halls, Gateley and Romero, are both self-contained and self-catered but share a reception and other community facilities. Highfield Halls – Located adjacent to Avenue Campus and half a mile from Highfield campus. Highfield Halls comprises Aubrey and Wolfe houses, both of which have on-site catering. The site is also used as a University conference facility during the summer months when vacated. Gower Building – Located on Highfield campus, Gower is mainly used by mature and postgraduate students. It contains a small number of self-contained apartments, located above other University amenities. Erasmus Park – Located in Winchester, this hall houses around 400 students studying at the Winchester School of Art. Riverside Way - Located in Winchester in close proximity to Erasmus Park. This is a private halls site, but the university has an agreement to allocate some students there. Healthcare The University Health Service is an NHS GP practice located on the main Highfield campus, operating from Building 48 between the Physics & Maths buildings and serving over 20,000 patients as of December 2021. Sports The university's Sport and Wellbeing department runs the majority of the sports facilities on campus, which are based predominantly at two locations: the Jubilee Sports Centre and Wide Lane Sports Ground. The Jubilee Sports Centre, opened in 2004 at a cost of £8.5 million, is located on the Highfield Campus and contains a six-lane 25-metre swimming pool, a 160-workstation gym and an eight-court sports hall. Wide Lane, meanwhile, is located nearby in Eastleigh and was refurbished at a cost of £4.3 million in 2007. The complex includes flood-lit synthetic turf and grass pitches, tennis courts, a pavilion and a 'Team Southampton' gym. The university also runs facilities at the Avenue Campus, National Oceanography Centre, the Watersports Centre on the River Itchen and at Glen Eyre and Wessex Lane halls, while there is another sports hall, squash courts, a martial arts studio and a bouldering wall located within the Students' Union. The university competes in numerous sports in the BUCS South East Conference (after switching from the Western Conference in 2009). A number of elite athletes are supported by the SportsRec through sports bursaries and the UK Government's Talented Athlete Scholarship Scheme (TASS). The University Athletic Union was formally established on 29 November 1929 by the University College council. Versions of the union had existed previously, to which many clubs, such as the Cricket, Association Football, Rugby, Boxing, Gymnastics, Tennis and Boat clubs (all formed before the turn of the 20th century), belonged. Mustangs Baseball Club The Southampton Mustangs Baseball Club was founded in 1997.
In the early years, the club participated mainly in friendly games against other British university baseball teams, as no formal university league was in existence. Starting in 1998, the Mustangs hosted a university baseball tournament, inviting other teams including Oxford, Cambridge, Portsmouth, Royal Holloway, and Norwich. In 2004 the Mustangs entered the national adult baseball leagues run by the British Baseball Federation (BBF). The club entered in the lowest division but, after a few years of consolidation, worked its way up from the lower leagues of the BBF to play in the top-tier league of British baseball, the British National Baseball League (NBL), in the 2010 season. National student championships Throughout its history the university has had a number of successful teams in national student championships. Notable alumni Academics Academics working at the university include Sir Tim Berners-Lee, inventor of the World Wide Web; Wendy Hall, inventor of Microcosm, a predecessor of the World Wide Web, and founding director of the Web Science Trust between the University of Southampton and MIT; José Antonio Bowen, President of Goucher College and a Fellow of the Royal Society of Arts; Erich Zepler, who made leading contributions to radio receiver development; David Payne, inventor of the EDFA for use in fibre optic cables; Sir Barry Cunliffe, a pioneer of modern British archaeology; Ray Monk, the biographer of Ludwig Wittgenstein; Albie Sachs, former Judge of the Constitutional Court of South Africa; and Tim Holt, former President of the Royal Statistical Society and Office for National Statistics. See also References Further reading Patterson, A. Temple (1962). The University of Southampton: A Centenary History of the Evolution and Development of the University of Southampton, 1862–1962. Southampton: The Camelot Press Ltd. Nash, Sally and Martin Sherwood (2002). University of Southampton: An Illustrated History. London: James and James. External links University of Southampton Students' Union website Russell Group Educational institutions established in 1952 1952 establishments in England Tourist attractions in Southampton Universities established in the 1950s Universities UK
26099252
https://en.wikipedia.org/wiki/HTML5%20video
HTML5 video
The HTML5 specification introduced the video element for the purpose of playing videos, partially replacing the object element. HTML5 video is intended by its creators to become the new standard way to show video on the web, instead of the previous de facto standard of using the proprietary Adobe Flash plugin, though early adoption was hampered by lack of agreement as to which video coding formats and audio coding formats should be supported in web browsers. As of 2020, HTML5 video is the only widely supported video playback technology in modern browsers, with the Flash plugin being phased out. History of <video> element The <video> element started being discussed by the WHATWG in October 2006. The <video> element was proposed by Opera Software in February 2007. Opera also released a preview build that was showcased the same day, and a manifesto that called for video to become a first-class citizen of the web. <video> element examples The following HTML5 code fragment will embed a WebM video into a web page. <video src="movie.webm" poster="movie.jpg" controls> This is fallback content to display for user agents that do not support the video tag. </video> The "controls" attribute enables the browser's own user interface for controlling playback. Alternatively, playback can be controlled with JavaScript, which the web designer can use to create a custom user interface. The optional "poster" attribute specifies an image to show in the video's place before playback is started. Its purpose is to be representative of the video. Multiple sources Video format support varies among browsers (see below), so a web page can provide video in multiple formats. For other features, browser sniffing is sometimes used, which may be error-prone: any web developer's knowledge of browsers will inevitably be incomplete or not up-to-date. The browser in question "knows best" what formats it can use. The "video" element supports fallback through specification of multiple sources. Using any number of <source> elements, as shown below, the browser will choose automatically which file to download. Alternatively, the JavaScript canPlayType() function can be used to achieve the same. The "type" attribute specifies the MIME type and possibly a list of codecs, which helps the browser to determine whether it can decode the file without beginning to download it. The MIME type denotes the container format of the file, and the container format defines the interpretation of the codec string. <video poster="poster.jpg" controls> <source src="av1.mp4" type='video/mp4; codecs="av01.0.00M.08, opus"'> <source src="avc.mp4" type='video/mp4; codecs="avc1.4D401E, mp4a.40.2"'> <source src="vp9.webm" type='video/webm; codecs="vp9.0, opus"'> <source src="theora.ogv" type='video/ogg; codecs="theora, vorbis"'> <p>This is fallback content to display for user agents that do not support the video tag.</p> </video> Supported video and audio formats The HTML5 specification does not specify which video and audio formats browsers should support. User agents are free to support any video formats they feel are appropriate, but content authors cannot assume that any video will be accessible by all complying user agents, since user agents have no minimal set of video and audio formats to support. The HTML5 Working Group considered it desirable to specify at least one video format which all user agents (browsers) should support. The ideal format in this regard would: Have good compression, good image quality, and low decode processor use. Be royalty-free.
In addition to software decoders, a hardware video decoder should exist for the format, as many embedded processors do not have the performance to decode video. Initially, Ogg Theora was the recommended standard video format in HTML5, because it was not affected by any known patents. But on 10 December 2007, the HTML5 specification was updated, replacing the reference to concrete formats with a placeholder. The result was a polarisation of HTML5 video between industry-standard, ISO-defined but patent-encumbered formats, and open formats. The new AV1 format by the Alliance for Open Media aims to be industry standard, royalty-free, and open, and has wide industry support. Free formats Although Theora is not affected by known non-free patents, Apple has expressed concern about unknown patents that might affect it, whose owners might be waiting for a corporation with extensive financial resources to use the format before suing. Formats like H.264 might also be subject to unknown patents in principle, but they have been deployed much more widely and so it is presumed that any patent-holders would have already made themselves known. Apple has also opposed requiring Ogg format support in the HTML standard (even as a "should" requirement) on the grounds that some devices might support other formats much more easily, and that HTML has historically not required particular formats for anything. Some web developers criticized the removal of the Ogg formats from the specification. A follow-up discussion also occurred on the W3C questions and answers blog. Mozilla and Opera support only the open formats of Theora and WebM. In 2011, Google stated its intention to remove support for H.264, specifically for the HTML5 video tag. Although it has been removed from Chromium, it has yet to be removed from Google Chrome ten years later. MPEG-DASH Support via the HTML5 Media Source Extensions (MSE) The adaptive bitrate streaming standard MPEG-DASH can be used in Web browsers via the HTML5 Media Source Extensions (MSE) and JavaScript-based DASH players. Such players include the open-source project dash.js of the DASH Industry Forum, as well as products such as the HTML5 Video Player of Bitmovin (using HTML5 with JavaScript, but also offering a Flash-based DASH player for legacy Web browsers not supporting the HTML5 MSE). Google's purchase of On2 Google's acquisition of On2 in 2010 resulted in its acquisition of the VP8 video format. Google has provided a royalty-free license to use VP8. Google also started WebM, which combines the standardized open source VP8 video codec with Vorbis audio in a Matroska-based container. The opening of VP8 was welcomed by the Free Software Foundation. When Google announced in January 2011 that it would end native support of H.264 in Chrome, criticism came from many quarters including Peter Bright of Ars Technica and Microsoft web evangelist Tim Sneath, who compared Google's move to declaring Esperanto the official language of the United States. However, Haavard Moen of Opera Software strongly criticized the Ars Technica article and Google responded to the reaction by clarifying its intent to promote WebM in its products on the basis of openness. After the launch of WebM, Mozilla and Opera have called for the inclusion of VP8 in HTML. On 7 March 2013, Google Inc. and MPEG LA, LLC announced agreements covering techniques that "may be essential" to VP8, with Google receiving a license from MPEG LA and 11 patent holders, and MPEG LA ending its efforts to form a VP8 patent pool.
In 2012, VP9 was released by Google as a successor to VP8, also open and royalty free. At the end of 2017 the new AV1 format developed by the Alliance for Open Media (AOMedia) as the evolution of VP9 has reached the feature freeze, and the bitstream freeze is expected for January 2018. Firefox nightly builds already include support for AV1. Non-free formats H.264/MPEG-4 AVC is widely used, and has good speed, compression, hardware decoders, and video quality, but is patent-encumbered. Users of H.264 need licenses either from the individual patent holders, or from the MPEG LA, a group of patent holders including Microsoft and Apple, except for some Internet broadcast video uses. H.264 is usually used in the MP4 container format, together with Advanced Audio Coding (AAC) audio. AAC is also covered by patents in itself, so users of MP4 will have to license both H.264 and AAC. In June 2009, the WHATWG concluded that no existing format was suitable as a specified requirement. Apple still only supports H.264, but Microsoft now supports VP9 and WebM, and has pledged support for AV1. Cisco makes a licensed H.264 binary module available for free On 30 October 2013, Cisco announced that it was making a binary H.264 module available for download. Cisco will pay the costs of patent licensing for those binary modules when downloaded by the using software while it is being installed, making H.264 free to use in that specific case. In the announcement, Cisco cited its desire of furthering the use of the WebRTC project as the reason, since WebRTC's video chat feature will benefit from having a video format supported in all browsers. The H.264 module will be available on "all popular or feasibly supportable platforms, which can be loaded into any application". Cisco is also planning to publish source code for those modules under BSD license, but without paying the royalties, so the code will practically be free software only in countries without H.264 software patents, which has already been true about other existing implementations. Also on 30 October 2013, Mozilla's Brendan Eich announced that Firefox would automatically download Cisco's H.264 module when needed by default. He also noted that the binary module is not a perfect solution, since users do not have full free software rights to "modify, recompile, and redistribute without license agreements or fees". Thus Xiph and Mozilla continue the development of Daala. OpenH264 only supports the baseline profile of H.264, and does not by itself address the need for an AAC decoder. Therefore, it is not considered sufficient for typical MP4 web video, which is typically in the high profile with AAC audio. However, for use in WebRTC, the omission of AAC was justified in the release announcement: "the standards bodies have aligned on Opus and G.711 as the common audio codecs for WebRTC". There is doubt as to whether a capped global licensing of AAC, like Cisco's for H.264, is feasible after AAC's licensing bureau removed the price cap shortly after the release of OpenH264. Browser support This table shows which video formats are likely to be supported by a given user agent. Most of the browsers listed here use a multimedia framework for decoding and display of video, instead of incorporating such software components. It is not generally possible to tell the set of formats supported by a multimedia framework without querying it, because that depends on the operating system and third party codecs. 
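From a web page's point of view, this querying is exposed through the canPlayType() method defined on media elements, which returns "probably", "maybe" or an empty string for a given MIME type and codec string. The sketch below is illustrative only (the file names are hypothetical); it assumes a browser environment with HTML5 media support and simply selects the first candidate format the browser reports it may be able to play.

const video = document.createElement('video');
// Candidate sources in order of preference; the codec strings follow the
// same convention as the "type" attribute of the <source> element.
const candidates = [
  { src: 'movie.webm', type: 'video/webm; codecs="vp9, opus"' },
  { src: 'movie.mp4',  type: 'video/mp4; codecs="avc1.4D401E, mp4a.40.2"' }
];
// canPlayType() returns "probably", "maybe" or "" (cannot play).
const playable = candidates.find(c => video.canPlayType(c.type) !== '');
if (playable) {
  video.src = playable.src;
  video.controls = true;
  document.body.appendChild(video);
} else {
  document.body.append('No supported video format found.');
}

The answer is deliberately non-committal ("maybe" or "probably") because, as noted above, the browser may itself rely on an underlying multimedia framework whose exact codec set it cannot fully enumerate.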
In these cases, video format support is an attribute of the framework, not the browser (or its layout engine), assuming the browser properly queries its multimedia framework before rejecting unknown video formats. In some cases, the support listed here is not a function of either codecs available within the operating system's underlying media framework, or of codec capabilities built into the browser, but rather could be by a browser add-on that might, for example, bypass the browser's normal HTML parsing of the <video> tag to embed a plug-in based video player. Note that a video file normally contains both video and audio content, each encoded in its own format. The browser has to support both the video and audio formats. See HTML5 audio for a table of which audio formats are supported by each browser. The video format can be specified by MIME type in HTML (see example). MIME types are used for querying multimedia frameworks for supported formats. Of these browsers, only Firefox and Opera employ libraries for built-in decoding. In practice, Internet Explorer and Safari can also guarantee certain format support, because their manufacturers also make their multimedia frameworks. At the other end of the scale, Konqueror has identical format support to Internet Explorer when run on Windows, and Safari when run on Mac, but the selected support here for Konqueror is the typical for Linux, where Konqueror has most of its users. In general, the format support of browsers is much dictated by conflicting interests of vendors, specifically that Media Foundation and QuickTime support commercial standards, whereas GStreamer and Phonon cannot legally support other than free formats by default on the free operating systems that they are intended for. Notes Digital rights management (Encrypted Media Extensions) HTML has support for digital rights management (DRM, restricting how content can be used) via the HTML5 Encrypted Media Extensions (EME). The addition of DRM is controversial because it allows restricting users' freedom to use media restricted by DRM, even where fair use gives users the legal right to do so. A main argument in W3C's approval of EME was that the video content would otherwise be delivered in plugins and apps, and not in the web browser. In 2013 Netflix added support for HTML5 video using EME, beside their old delivery method using a Silverlight plugin (also with DRM). Usage In 2010, in the wake of Apple iPad launch and after Steve Jobs announced that Apple mobile devices would not support Flash, a number of high-profile sites began to serve H.264 HTML5 video instead of Adobe Flash for user-agents identifying as iPad. HTML5 video was not as widespread as Flash videos, though there were rollouts of experimental HTML5-based video players from DailyMotion (using Ogg Theora and Vorbis format), YouTube (using the H.264 and WebM formats), and Vimeo (using the H.264 format). Support for HTML5 video has been steadily increasing. In June 2013, Netflix added support for HTML5 video. In January 2015, YouTube switched to using HTML5 video instead of Flash by default. In December 2015, Facebook switched from Flash to HTML5 for all video content. As of 2016, Flash is still widely installed on desktops, while generally not being supported on mobile devices such as smartphones. The Flash plugin is widely assumed, including by Adobe, to be destined to be phased out, which will leave HTML5 video as the only widely supported method to play video on the World Wide Web. 
Chrome, Firefox, Safari, and Edge have plans to make almost all Flash content click-to-play in 2017. The only major browser that has not announced plans to deprecate Flash is Internet Explorer. Adobe announced on 25 July 2017 that it would permanently end development of Flash in 2020. See also HTML5 audio Comparison of layout engines (HTML5 Media) Comparison of HTML5 and Flash References External links video platform software and news. HTML5 Video: A Practical Guide: Convert, Embed, Javascript and Flash Fallback for HTML5 Videos Mozilla's overview of media formats supported by browsers HTML5 New media Multimedia
27567
https://en.wikipedia.org/wiki/Shareware
Shareware
Shareware is a type of proprietary software which is initially shared by the owner for trial use at little or no cost, usually with limited functionality or incomplete documentation, but which can be upgraded upon payment. Shareware is often offered as a download from a website or on a compact disc included with a magazine. Shareware differs from freeware, which is fully featured software distributed at no cost to the user but without source code being made available, and from free and open-source software, in which the source code is freely available for anyone to inspect and alter. There are many types of shareware and, while they may not require an initial up-front payment, many are intended to generate revenue in one way or another. Some limit use to personal non-commercial purposes only, with purchase of a license required for use in a business enterprise. The software itself may be time-limited, or it may remind the user that payment would be appreciated. Types of shareware Adware Adware, short for "advertising-supported software", is any software package which automatically renders advertisements in order to generate revenue for its author. The advertisements may be in the user interface of the software or on a screen presented to the user during the installation process. The functions may be designed to analyze which websites the user visits and to present advertising pertinent to the types of goods or services featured there. The term is sometimes used to refer to software that displays unwanted advertisements. Shareware is often packaged with adware. During the installation of the intended software, the user is presented with a requirement to agree to the terms of click-through licensing or similar licensing which governs the installation of the software. Crippleware Crippleware has vital features of the program, such as printing or the ability to save files, disabled (or has unwanted features added, such as watermarks in screencasting and video editing software) until the user buys the software. This allows users to take a close look at the features of a program without being able to use it to generate output. The distinction between freemium and crippleware is that an unlicensed freemium program has useful functionality, while crippleware demonstrates its potential but is not useful on its own. Trialware Trialware is a program that limits the time that it can be effectively used, commonly via a built-in time limit, number of uses, or only allowing progression up to a certain point (e.g. in video games, see Game demo). The user can try out the fully featured program until the trial period is up, and then most trialware reverts to either a reduced-functionality (freemium, nagware, or crippleware) or non-functional mode, unless the user purchases a full version. Trialware has become normalized for online Software as a Service (SaaS). WinRAR is a notable example of unlimited trialware, i.e. a program that retains its full functionality even after the trial period has ended. The rationale behind trialware is to give potential users the opportunity to try out the program to judge its usefulness before purchasing a license. According to industry research firm Softletter, 66% of online companies surveyed had free-trial-to-paying-customer conversion rates of 25% or less. SaaS providers employ a wide range of strategies to nurture leads and convert them into paying customers.
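As a rough illustration of the time-limit mechanism described above, the following is a minimal Python sketch of how a trial-period check might work. The trial length, state file name, and storage scheme are illustrative assumptions, not how any particular product implements it.

import json
import sys
import time
from pathlib import Path

# Illustrative assumptions: a 30-day trial tracked in a local JSON file.
TRIAL_DAYS = 30
STATE_FILE = Path.home() / ".example_app_trial.json"

def trial_days_left() -> int:
    """Return the number of whole trial days remaining (may be negative)."""
    if STATE_FILE.exists():
        first_run = json.loads(STATE_FILE.read_text())["first_run"]
    else:
        first_run = time.time()
        STATE_FILE.write_text(json.dumps({"first_run": first_run}))
    elapsed_days = (time.time() - first_run) / 86400
    return int(TRIAL_DAYS - elapsed_days)

if __name__ == "__main__":
    remaining = trial_days_left()
    if remaining <= 0:
        print("Trial expired - please purchase a license to continue.")
        sys.exit(1)
    print(f"Trial mode: {remaining} day(s) remaining.")

A real product would typically obfuscate or sign this stored state to make it harder to reset, which is one reason trial enforcement is usually combined with license keys rather than relying on a plain timestamp.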
Donationware Donationware is a licensing model that supplies fully operational unrestricted software to the user and requests an optional donation be paid to the programmer or a third-party beneficiary (usually a non-profit). The amount of the donation may also be stipulated by the author, or it may be left to the discretion of the user, based on individual perceptions of the software's value. Since donationware comes fully operational (i.e. not crippleware) with payment optional, it is a type of freeware. In some cases, there is a delay to start the program or "nag screen" reminding the user that they haven't donated to the project. This nag feature and/or delayed start is often removed in an update once the user has donated to (paid for) the software. Nagware Nagware (also known as begware, annoyware or a nagscreen) is a pejorative term for shareware that persistently reminds the user to purchase a license. It usually does this by popping up a message when the user starts the program, or intermittently while the user is using the application. These messages can appear as windows obscuring part of the screen, or as message boxes that can quickly be closed. Some nagware keeps the message up for a certain time period, forcing the user to wait to continue to use the program. Unlicensed programs that support printing may superimpose a watermark on the printed output, typically stating that the output was produced by an unlicensed copy. Some titles display a dialog box with payment information and a message that paying will remove the notice, which is usually displayed either upon startup or after an interval while the application is running. These notices are designed to annoy the user into paying. Freemium Freemium works by offering a product or service free of charge (typically digital offerings such as software, content, games, web services or other) while charging a premium for advanced features, functionality, or related products and services. For example, a fully functional feature-limited version may be given away for free, with advanced features disabled until a license fee is paid. The word "freemium" combines the two aspects of the business model: "free" and "premium". It has become a popular model especially in the antivirus industry. Red Hat Linux OS works in a similar fashion, with a version for free use (Fedora Linux) and charging for their premium Enterprise version. Postcardware Postcardware, also called just cardware, is a style of software distribution similar to shareware, distributed by the author on the condition that users send the author a postcard. A variation of cardware, Emailware, uses the same approach but requires the user to send the author an email. Postcardware, like other novelty software distribution terms, is often not strictly enforced. Cardware is similar to beerware. The concept was first used by Aaron Giles, author of JPEGView. Another well-known piece of postcardware is the roguelike game Ancient Domains of Mystery, whose author collects postcards from around the world. Orbitron is distributed as postcardware. Exifer is a popular application among digital photographers that has been postcardware. Caledos Automatic Wallpaper Changer is a "still alive" project cardware. "Empathy" is a postcardware for password-protected executables. Dual Module Player and Linux were also postcardware for a long time. An example for emailware is the video game Jump 'n Bump. 
Another popular postcardware company is the Laravel package developers from Spatie, which have released over 200 open-source packages to the Laravel framework, which are postcardware licensed, and all shown at their website. History In 1982, Andrew Fluegelman created a program for the IBM PC called PC-Talk, a telecommunications program, and used the term freeware; he described it "as an experiment in economics more than altruism". About the same time, Jim "Button" Knopf released PC-File, a database program, calling it user-supported software. Not much later, Bob Wallace produced PC-Write, a word processor, and called it shareware. Appearing in an episode of Horizon titled Psychedelic Science originally broadcast 5 April 1998, Bob Wallace said the idea for shareware came to him "to some extent as a result of my psychedelic experience". In 1983 Jerry Pournelle wrote of "an increasingly popular variant" of free software "that has no name, but works thus: 'If you like this, send me (the author) some money. I prefer cash.'" In 1984, Softalk-PC magazine had a column, The Public Library, about such software. Public domain is a misnomer for shareware, and Freeware was trademarked by Fluegelman and could not be used legally by others, and User-Supported Software was too cumbersome. So columnist Nelson Ford had a contest to come up with a better name. The most popular name submitted was Shareware, which was being used by Wallace. However, Wallace acknowledged that he got the term from an InfoWorld magazine column by that name in the 1970s, and that he considered the name to be generic, so its use became established over freeware and user-supported software. Fluegelman, Knopf, and Wallace clearly established shareware as a viable software marketing method. Via the shareware model, Button, Fluegelman and Wallace became millionaires. Prior to the popularity of the World Wide Web and widespread Internet access, shareware was often the only economical way for independent software authors to get their product onto users' desktops. Those with Internet or BBS access could download software and distribute it amongst their friends or user groups, who would then be encouraged to send the registration fee to the author, usually via postal mail. During the late 1980s and early 1990s, shareware software was widely distributed over online services, bulletin board systems and on diskettes. Contrary to commercial developers who spent millions of dollars urging users "Don't Copy That Floppy", shareware developers encouraged users to upload the software and share it on disks. Commercial shareware distributors such as Educorp and Public Domain Inc printed catalogs describing thousands of public domain and shareware programs that were available for a small charge on floppy disk. These companies later made their entire catalog available on CD-ROM. One such distributor, Public Software Library (PSL), began an order-taking service for programmers who otherwise had no means of accepting credit card orders. Meanwhile major online service provider CompuServe enabled people to pay (register) for software using their CompuServe accounts. 
When AOL bought out CompuServe, that part of CompuServe called SWREG (Shareware Registration) was sold to UK businessman Stephen Lee of Atlantic Coast PLC who placed the service on to the internet and enabled over 3,000 independent software developers to use SWREG as a back office to accept various payment methods including credit, debit and charge cards, Paypal and other services in multiple currencies. This worked in realtime so that a client could pay for software and instantly download it which was novel at the time. SWREG was eventually bought by Digital River, Inc. Also, services like Kagi started offering applications that authors could distribute along with their products that would present the user with an onscreen form to fill out, print, and mail along with their payment. Once telecommunications became more widespread, this service also expanded online. Toward the beginning of the Internet era, books compiling reviews of available shareware were published, sometimes targeting specific niches such as small business. These books would typically come with one or more floppy disks or CD-ROMs containing software from the book. As Internet use grew, users turned to downloading shareware programs from FTP or web sites. This spelled the end of bulletin board systems and shareware disk distributors. At first, disk space on a server was hard to come by, so networks like Info-Mac were developed, consisting of non-profit mirror sites hosting large shareware libraries accessible via the web or ftp. With the advent of the commercial web hosting industry, the authors of shareware programs started their own sites where the public could learn about their programs and download the latest versions, and even pay for the software online. This erased one of the chief distinctions of shareware, as it was now most often downloaded from a central "official" location instead of being shared samizdat-style by its users. To ensure users would get the latest bug-fixes as well as an install untainted by viruses or other malware, some authors discouraged users from giving the software to their friends, encouraging them to send a link instead. Major download sites such as VersionTracker and CNet's Download.com began to rank titles based on quality, feedback, and downloads. Popular software was sorted to the top of the list, along with products whose authors paid for preferred placement. Registration If features are disabled in the freely accessible version, paying may provide the user with a licence key or code they can enter into the software to disable the notices and enable full functionality. Some pirate web sites publish license codes for popular shareware, leading to a kind of arms race between the developer and the pirates where the developer disables pirated codes and the pirates attempt to find or generate new ones. Some software publishers have started accepting known pirated codes, using the opportunity to educate users on the economics of the shareware model. Some shareware relies entirely on the user's honesty and requires no password. Simply checking an "I have paid" checkbox in the application is all that is required to disable the registration notices. Games In the early 1990s, shareware distribution was a popular method of publishing games for smaller developers, including then-fledgling companies Apogee Software (also known as 3D Realms), Epic MegaGames (now Epic Games), Ambrosia Software and id Software. 
It gave consumers the chance to play the game before investing money in it, and it gave them exposure that some products would be unable to get in the retail space. With the Kroz series, Apogee introduced the "episodic" shareware model that became the most popular incentive for buying a game. While the shareware game would be a truly complete game, there would be additional "episodes" of the game that were not shareware and could only be legally obtained by paying for the shareware episode. In some cases these episodes were neatly integrated and would feel like a longer version of the game, and in other cases the later episodes would be stand-alone games. Sometimes the additional content was completely integrated with the unregistered game, such as in Ambrosia's Escape Velocity series, in which a character representing the developer's pet parrot, equipped with an undefeatable ship, would periodically harass and destroy the player after they reached a certain level representing the end of the trial period. Racks of games on single 5 1/4-inch and later 3.5-inch floppy disks were common in retail stores. However, computer shows and bulletin board systems (BBS) such as Software Creations BBS were the primary distributors of low-cost software. Free software from a BBS was the motive force for consumers to purchase a computer equipped with a modem, so as to acquire software at no cost. The important distinguishing feature between a shareware game and a game demo is that the shareware game is (at least in theory) a complete working software program albeit with reduced content compared to the full game, while a game demo omits significant functionality as well as content. Shareware games commonly offered both single player and multiplayer modes plus a significant fraction of the full game content such as the first of three episodes, while some even offered the entire product as shareware while unlocking additional content for registered users. By contrast a game demo may offer as little as one single-player level or consist solely of a multiplayer map, this makes them easier to prepare than a shareware game. Industry standards and technologies There are several widely accepted standards and technologies that are used in the development and promotion of shareware. FILE_ID.DIZ is a descriptive text file often included in downloadable shareware distribution packages. Portable Application Description (PAD) is used to standardize shareware application descriptions. PAD file is an XML document that describes a shareware or freeware product according to the PAD specification. DynamicPAD extends the Portable Application Description (PAD) standard by allowing shareware vendors to provide customized PAD XML files to each download site or any other PAD-enabled resource. DynamicPAD is a set of server-side PHP scripts distributed under a GPL license and a freeware DynamicPAD builder for 32-bit Windows. The primary way to consume or submit a DynamicPAD file is through the RoboSoft application by Rudenko Software, the DynamicPAD author. DynamicPAD is available at the DynamicPAD web site. Code signing is a technology that is used by developers to digitally sign their products. Versions of Microsoft Windows since Windows XP Service Pack 2 show a warning when the user installs unsigned software. This is typically offered as a security measure to prevent untrusted software from potentially infecting the machine with malware. 
However, critics see this technology as part of a tactic to delegitimize independent software development by requiring hefty upfront fees and a review process before software can be distributed. See also Careware Association of Software Professionals Keygen References External links Independent Software Developers Forum (ISDEF) Webcast on protecting trialware Software licenses Free goods and services Revenue models
2900
https://en.wikipedia.org/wiki/File%20archiver
File archiver
A file archiver is a computer program that combines a number of files together into one archive file, or a series of archive files, for easier transportation or storage. File archivers may employ lossless data compression in their archive formats to reduce the size of the archive. Basic archivers just take a list of files and concatenate their contents sequentially into archives. The archive files need to store metadata, at least the names and lengths of the original files, for proper reconstruction to be possible. More advanced archivers store additional metadata, such as the original timestamps, file attributes or access control lists. The process of making an archive file is called archiving or packing. Reconstructing the original files from the archive is termed unarchiving, unpacking or extracting. History An early archiver was the Multics command archive, descended from the CTSS command of the same name, which was a basic archiver and performed no compression. Multics also had a "tape_archiver" command, abbreviated ta, which was perhaps the forerunner of Unix's tar. Unix archivers The Unix tools ar, tar, and cpio act as archivers but not compressors. Users of the Unix tools use additional compression tools, such as gzip, bzip2, or xz, to compress the archive file after packing, or to remove compression before unpacking the archive file. The filename extensions are successively added at each step of this process. For example, archiving a collection of files with tar and then compressing the resulting archive file with gzip results in a file with a .tar.gz extension. This approach has two goals: It follows the Unix philosophy that each program should accomplish a single task to perfection, as opposed to attempting to accomplish everything with one tool. As compression technology progresses, users may use different compression programs without having to modify or abandon their archiver. The archives use solid compression. When the files are combined, the compressor can exploit redundancy across several archived files and achieve better compression than a compressor that compresses each file individually. This approach, however, has disadvantages too: Extracting or modifying one file is difficult. Extracting one file requires decompressing an entire archive, which can be time- and space-consuming. Modifying one means the file needs to be put back into the archive and the archive recompressed. This operation requires additional time and disk space. The archive becomes damage-prone. If the area holding shared data for several files is damaged, all those files are lost. It is impossible to take advantage of redundancy between files unless the compression window is larger than the size of an individual file. For example, gzip uses DEFLATE, which typically operates with a 32768-byte window, whereas bzip2 uses a Burrows–Wheeler transform roughly 27 times bigger. xz defaults to 8 MiB but supports significantly larger windows. Windows archivers The built-in archiver of Microsoft Windows as well as third-party archiving software, such as WinRAR and 7-zip, often use a graphical user interface. They also offer an optional command-line interface, while Windows itself does not. Windows archivers perform both archiving and compression. Solid compression may or may not be offered, depending on the product: Windows itself does not support it; WinRAR and 7-zip offer it as an option that can be turned on or off.
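The tar-then-compress layering described in the Unix archivers section can be illustrated with Python's standard tarfile module, which performs the same two-step combination (concatenate the files into a tar stream, then compress that single stream with gzip) in one call. The file names used here are only illustrative.

import tarfile
from pathlib import Path

# Illustrative input files; any existing paths would do.
files = [Path("notes.txt"), Path("data.csv"), Path("report.md")]

# "w:gz" concatenates the files into a tar stream and pipes that single
# stream through gzip, producing archive.tar.gz. Because the whole stream
# is compressed at once, redundancy shared between files can be exploited
# (solid compression), within the limits of the compressor's window.
with tarfile.open("archive.tar.gz", "w:gz") as tar:
    for f in files:
        tar.add(f)

# Extracting a single member still requires reading (and decompressing)
# the archive up to that member - the trade-off noted above.
with tarfile.open("archive.tar.gz", "r:gz") as tar:
    tar.extract("notes.txt", path="restored")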
See also Comparison of file archivers Archive format List of archive formats Comparison of archive formats References External links Storage systems Computer file systems Computer archives Utility software types
66163638
https://en.wikipedia.org/wiki/Australian%20Cyber%20Collaboration%20Centre
Australian Cyber Collaboration Centre
The Australian Cyber Collaboration Centre (A3C) is a not-for-profit organisation funded largely by South Australian Government grants and based on collaboration among its member organisations, which focuses on cyber security. It is connected to the Department for Innovation and Skills and located at Lot Fourteen in Adelaide, South Australia. History The former Chief Information Security Officer of Western Australia Police, Hai Tran, was appointed as the inaugural CEO in June 2020, ahead of its official launch on 6 July 2020 at Lot Fourteen on North Terrace in Adelaide. The centre was established in collaboration with the federal and South Australian governments, as well as industry partners including BAE Systems Australia and Optus; academic institutions including UniSA, Flinders University, The University of Adelaide and TAFE SA; South Australia's Office for Cyber Security; the Commonwealth's Defence Science and Technology Group; and the independent (partly government-funded) organisations AustCyber and the Cyber Security Cooperative Research Centre. Before its opening, A3C had already launched a six-day pilot training course in collaboration with the University of Adelaide and aizoOn Australia, focused on digital forensics and incident response. In November 2021, A3C extended its partnerships to include Cisco. Role and responsibilities The A3C's function is "to make cyberspace a better, and safer, place for organisations, corporations, agencies and institutions to do business". Its work includes identifying vulnerabilities to cyber attacks; providing testing of all hardware and software components of IT systems (the Cyber Test Range); providing training in cyber security (the Cyber Training Academy); creating strategic and practical plans for implementing cyber security; and following progress and assessing the value of investments afterwards. Training is an essential component of its work, as cyber crime affects about 25 per cent of businesses, South Australia is developing its defence, space and other technology industries, and there is a shortage of skilled workers in cyber security. In 2019, the Minister of Innovation and Skills, David Pisoni, forecast up to 7,500 job opportunities in the ICT sector in the state over the next five years, of which 1,500 would need cyber security skills. A3C also focuses on small businesses which may not have large resources to protect themselves from cyber attacks, aiming to help them foster collaborations with other organisations which can help. Governance, funding and membership The centre is under the ministerial responsibility of the Minister of Innovation and Skills, David Pisoni, and overseen by a board. The inaugural chair is Kim Scott, director of TAO Consulting. While most funding comes from government sources, the Commonwealth Bank is a major sponsor, and the Global Cyber Alliance is a partner to A3C. The Chief Executive of the Australian Cyber Collaboration Centre is Mike Barber. Its approximately 40 members are drawn from academia, industry, cyber security and defence industry companies, government departments, equipment vendors and other membership bodies. References External links Australian intelligence agencies Government departments of South Australia Computer security organizations Cyberwarfare 2020 establishments in Australia
1996946
https://en.wikipedia.org/wiki/Synergy%20%28software%29
Synergy (software)
Synergy is a software application for sharing a keyboard and mouse between multiple computers. It is used in situations where several PCs are used together, with a monitor connected to each, but all are to be controlled by one user. The user needs only one keyboard and mouse on the desk — similar to a KVM switch without the video. The software is partly open source and partly closed source; the open source components are released under the terms of the GNU General Public License, making them free software. The first version of Synergy was created on May 13, 2001, by Chris Schoeneman and worked with the X Window System only. Synergy now supports Windows, macOS, Linux, and other Unix-like operating systems. Design Once the program is installed, users can move the mouse "off" the side of their desktop on one computer, and the mouse pointer will appear on the desktop of another computer. Key presses will be delivered to whichever computer the mouse pointer is located on. This makes it possible to control several machines as easily as if they were a single multi-monitor computer. The clipboard and even screensavers can be synchronized. The program is implemented as a server, which defines which screen edges lead to which machines, and one or more clients, which connect to the server to offer the use of their desktops. The keyboard and mouse are connected to the server machine. As of version 2.0 (2017), keystrokes, mouse movements and clipboard contents are sent via an encrypted SSL network connection. This previously required the purchase of the Pro edition in version 1. In July 2013 the Defuse Security Group reported the proprietary encryption used in Synergy 1.6 to be insecure and released an exploit which could be used to passively decrypt the commands sent to the Synergy 1.6 clients. This was addressed by switching to SSL in 1.7. TCP/IP communications (default port 24800) are used to send mouse, keyboard and clipboard events between computers in Synergy 1. History The first incarnation of Synergy was CosmoSynergy, created by Richard Lee and Adam Feder then at Cosmo Software, Inc., a subsidiary of SGI (née Silicon Graphics, Inc.), at the end of 1996. They wrote it, and Chris Schoeneman contributed, to solve a problem: most of the engineers in Cosmo Software had both an IRIX and a Windows box on their desks, and switchboxes were expensive and annoying. CosmoSynergy was a great success but Cosmo Software declined to productize it and the company was later closed. Synergy is a from-scratch reimplementation of CosmoSynergy. It provides most of the features of the original and adds a few improvements. Synergy+ was created in 2009 as a maintenance fork for the purpose of fixing bugs inherited from the original version. The original version of Synergy had not been updated for a notable length of time (as of 6 June 2010, the latest release was dated 2 April 2006). There was never official confirmation that the original Synergy project had been abandoned; however, there was public discussion and speculation. In that discussion, Chris Schoeneman (the creator of Synergy) stated that instead of supporting a 1.3.x team, he intended to release version 2.0 of Synergy, and he publicly announced on 27 August 2008 that he had been making progress on this version.
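As a purely conceptual illustration of the server/client split described in the Design section, the Python sketch below streams input events from a "server" (the machine with the keyboard and mouse) to a connected "client" over TCP. The one-line JSON message format is invented for the example and is not the actual Synergy wire protocol; only the default port number is taken from the article.

import json
import socket

PORT = 24800  # Synergy 1 listens on TCP port 24800 by default.

def run_server(events):
    """Accept one client and stream events to it (toy protocol, not Synergy's)."""
    with socket.create_server(("", PORT)) as srv:
        conn, _addr = srv.accept()
        with conn:
            for ev in events:
                # One JSON object per line stands in for a real event message.
                conn.sendall((json.dumps(ev) + "\n").encode())

def run_client(host):
    """Connect to the server and print each received event."""
    with socket.create_connection((host, PORT)) as conn:
        for line in conn.makefile():
            print("would inject locally:", json.loads(line))

# Example event stream a server might produce as the pointer crosses a screen edge;
# run_server(sample_events) would run on the keyboard/mouse machine and
# run_client("server-hostname") on the controlled machine.
sample_events = [{"type": "mouse_move", "x": 10, "y": 200},
                 {"type": "key_press", "key": "a"}]

In the real application the server also tracks which screen edge maps to which client and the client injects the received events into its local input system; those details are outside the scope of this sketch.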
See also Multiseat configuration (the inverse of Synergy) x2x also allows the keyboard and mouse to be shared between machines, using OpenSSH tunneling QuickSynergy Barrier, an open-source fork of the Synergy 1.9 code base References External links Free Synergy 1 downloads - Official source code - User-maintained fork of Synergy 1.9 Free software programmed in C++ Free system software Keyboard-sharing software Linux network-related software Remote desktop Remote desktop software for Linux Utilities for Linux Utilities for macOS Utilities for Windows Virtual Network Computing
28773180
https://en.wikipedia.org/wiki/Stephen%20Elop
Stephen Elop
Stephen Elop (born 31 December 1963) is a Canadian businessman who most recently worked at Australian telecom company Telstra from April 2016. In the past he had worked for Nokia as the first non-Finnish CEO and later as Executive Vice President, Devices & Services, as well as the head of the Microsoft Business Division, as the COO of Juniper Networks, as the president of worldwide field operations at Adobe Systems, in several senior positions in Macromedia and as the CIO at Boston Chicken. He is best known for his ill-fated tenure as Nokia CEO from 2010 to 2014, which included controversies such as the "burning platform" memo and the company's partnership with Microsoft, resulting in the move to Windows Phone software exclusivity. He was criticised for some of his decisions, which resulted in the company making massive losses financially and in the market. As then head of the Microsoft Devices Group, Elop was in charge of Microsoft's varied product offerings including Lumia phones, Surface Pro 3, and Xbox One. Since January 2016 he also has a role as Distinguished Engineering Executive in Residence within McMaster University's Faculty of Engineering, where he originally studied in the 1980s. Early life and education Elop was born in Ancaster, Ontario, Canada, as the second of three sons in a middle-class family. His mother was a chemist and his father was an engineer at Westinghouse Electric Corporation. Both of them still live in Ancaster. His grandfather was a wireless operator who used morse code from ships in both the First World War and Second World War. Elop was influenced by and learned much about technology from his grandfather. From 1981, Elop studied computer engineering and management at McMaster University, Hamilton, Ontario. After his first year at the University, Elop wrote the user operating manual, called the Orange Book, for the campus's new computer system, VAX-11/780. During that time he helped lay 22 kilometres of Ethernet cables around campus to build one of the first computer networks in Canada. He graduated second in his class with a bachelor's degree in 1986. In 2007, McMaster's Faculty of Engineering made Elop the second L.W. Shemilt Distinguished Engineering Alumni Award winner and in 2009, he was awarded an Honorary Doctor of Science Degree by McMaster. Career After graduating, Elop joined a Toronto-based software development firm called Soma Inc. Soma was later acquired by Lotus Development Corporation of Massachusetts, United States, and Elop moved over, serving as director of consulting. In 1992 he became CIO of Boston Chicken, until the firm filed for Chapter 11 bankruptcy in 1998. Macromedia and Adobe In 1998 he joined Macromedia's Web/IT department and worked at the company for seven years, where he held several senior positions, including as: general manager of the eBusiness division; executive vice president of worldwide field operations; COO; and finally as CEO from January 2005 for three months before their acquisition by Adobe Systems was announced in April 2005. Due to family reasons, Elop lived at his Canadian home in Limehouse, Ontario, commuting to work in California with Air Canada. During Elop's tenure, Macromedia continued to deliver widely used software suites like Studio 8. Based on the performance of the company during this time, Elop was able to guide the company through a successful acquisition that benefited shareholders. 
With an exchange of $3.4 billion in stock, the acquisition combined the companies' document management, web publishing and online video delivery tools. It proved to be a profitable move for Macromedia shareholders. After the announcement of the agreement, Macromedia shares were valued at $41.86, notably above the then current market value of $33.45. It has been claimed Elop pushed Macromedia Flash Player to get into the mobile market. At Macromedia, Elop was nicknamed "The General" due to his military-style haircut. He was then president of worldwide field operations at Adobe, tendering his resignation in June 2006 and leaving on 5 December. Elop was paid a $500,000 salary, a $315,000 bonus and a $1.88 million severance package during his time at Adobe. Juniper and Microsoft After leaving Adobe, Elop was COO of Juniper Networks for exactly one year, from January 2007 to January 2008. During his short tenure he drove an internal overhaul and was credited with improving operational efficiency. In late 2007 Elop was approached by Microsoft CEO Steve Ballmer, with whom he met several times, as well as chairman Bill Gates. Juniper's CEO Scott Kriens intended to name Elop as the new CEO before Elop revealed he was leaving for Microsoft. Elop named this his toughest professional moment in a Bloomberg interview. Juniper's stock price rose 75% throughout 2007. Elop's stint at Microsoft started on 11 January 2008, as the head of the Business Division, responsible for the Microsoft Office and Microsoft Dynamics lines of products, and as a member of the company's senior leadership team. He was effectively leading the largest division of the world's largest software company (as the Business Division was Microsoft's largest source of income). It was during this time that the Business Division successfully released Office 2010, which gave the division record profits. He became known as an operator and a change agent because of his successes at Microsoft. Businessweek credited Elop with pushing Microsoft to develop cloud-based versions of the company's programs, and asserted that this helped Microsoft maintain its dominance, while holding off startups looking to disrupt its traditional business model. Also during his tenure as president, the Business Division formed an alliance with Nokia on 12 August 2009 to bring Microsoft Office Mobile to Symbian OS. CEO of Nokia On 10 September 2010, it was announced that Elop would take Nokia's CEO position, replacing the ousted Olli-Pekka Kallasvuo, and becoming the first non-Finnish CEO in Nokia's history. Nokia's chairman Jorma Ollila commented: "Stephen has the right industry experience and leadership skills." Some analysts had predicted early on that closer Nokia–Microsoft cooperation could follow Elop's appointment. His tenure began on 21 September. His family stayed in Canada. On 11 March 2011 Nokia announced that it had paid Elop a $6 million signing bonus, "compensation for lost income from his prior employer," on top of his $1.4 million annual salary. On his first day of work as CEO, Elop e-mailed every Nokia employee asking what changes they would like to see at Nokia and what they would not. Elop was open with the employees and gave them the chance to voice their opinions - unusual for Nokia under its bureaucratic predecessors and chairman. Elop approached employees with personal stories such as: "At Microsoft we beat Google. [referring to Microsoft Office and Google Apps] We can beat Apple just as well."
During a private employee presentation in 2011, Elop called for open dialogue within the company's environment. During Elop's tenure, Nokia's stock price dropped 62%, its mobile phone market share was halved, its smartphone market share fell from 33% to 3%, and the company suffered a cumulative €4.9 billion loss. "Burning Platform" memo Some time in early 2011, Elop issued a company internal memo titled "Burning Platform" which was leaked to the press. The memo likened the 2010 situation of Nokia, in the smartphone market, to a person standing on a burning oil platform ("platform" being a reference to the name given to operating systems such as Symbian, Apple iOS and Google Android). It also mentioned the introduction of a "new strategy" on 11 February. Elop stressed in the memo how significantly the market had changed: The battle of devices has now become a war of ecosystems, where ecosystems include not only the hardware and software of the device, but developers, applications, ecommerce, advertising, search, social applications, location-based services, unified communications and many other things. Our competitors aren't taking our market share with devices; they are taking our market share with an entire ecosystem. This means we're going to have to decide how we either build, catalyse or join an ecosystem. The memo was not intended for the public but was eventually leaked by Engadget on 8 February 2011, becoming widely circulated and receiving a great deal of attention. The mention of a "new strategy" led tech bloggers to speculate heavily that Nokia would form an alliance with Microsoft, particularly after Google's Vic Gundotra tweeted "Two turkeys do not make an Eagle" shortly after the leak. It was then reported that Nokia's VP Anssi Vanjoki had originally used this quote in 2005 about BenQ's purchase of Siemens's mobile phone business. Some in the media saw the memo as a necessary wake-up call for Nokia, and Engadget called it "one of the most exciting" CEO memos they had seen. However, Nokia's Board of Directors saw the memo as an act of misjudgment and Chairman Jorma Ollila gave bitter feedback about it at a board meeting. This leaked memo (along with the new strategy two days later) led to the term "Elop effect" being used by opponents of the strategy. The term was coined by former Nokia executive Tomi Ahonen, who said it "combines the Ratner effect with the Osborne effect", meaning both publicly attacking one's own products and promising a successor to a current product too long before it is available. In an interview with the Financial Post, Elop described the memo as "a very powerful statement of the reality of the situation without a lot of marketing polish on it." The Windows Phone strategy On 11 February 2011 in a press conference in London, Elop officially announced the new strategy for Nokia, which involved a "strategic partnership" with Microsoft and shifting its smartphone strategy to Microsoft's Windows Phone, whilst gradually phasing out its in-house Symbian and MeeGo operating systems (the phase-out was expected to be finalized by 2016, but was actually completed in January 2014; plans for any MeeGo devices beyond the Nokia N9 were scrapped). Elop also quoted Winston Churchill, "The pessimist sees the difficulty in every opportunity, but an optimist sees the opportunity in every difficulty." The final decision on a partnership with Microsoft was made the night before the conference. Nokia chairman Ollila supported the Microsoft alliance and predicted the business would strongly recover.
Questioned on why he decided to go with the partnership rather than choosing Android, Elop said: "The fundamental thing we were looking at was the ability to differentiate. As a member of the Android ecosystem, there were ways that we could see that we could differentiate, but we were worried over time how much differentiation we could continue to maintain or extend." The move was seen as a risky 'all-eggs-in-one-basket' strategy, inspired by his previous success at Macromedia by putting all focus on Flash in the early 2000s. The first Nokia Windows Phone smartphone shipped in November 2011, the Nokia Lumia 800, was made in the form of a device design identically similar (only an additional camera button was added) to the Nokia N9, the first MeeGo device. The N9 enjoyed positive reviews for attractive hardware and a well-designed software experience—though at launch reviewers noted that a healthy software ecosystem was non-existent and would almost certainly not develop. However Elop stuck with the Microsoft deal, saying that MeeGo development will not continue even with the N9's success, a move that was widely criticised. In an interview held late 2012, Elop stated the reason for switching to Windows instead of Android: "the single most important word is 'differentiation'. Entering the Android environment late, we knew we would have a hard time differentiating." In another interview in 2013, Elop implied that Samsung Electronics would have been dominant in the Android space, leaving no space for other OEMs. A journalist from The Guardian agreed, noting HTC's decline in revenue. However, later on Nokia would begin reweighing its options and at Mobile World Congress held in February 2014 Stephen Elop took stage to unveil Nokia's first Android Phone, Nokia X. Following the launch of the Nokia Lumia 920 flagship and its positive reception and apparent strong sales, Elop said to an Yle newscaster in December 2012: "...if you think about the last year, it's been a very difficult year. We've made many difficult decisions, we've made changes. But what we've also been doing is our very best work in making great products and getting them to consumers. So whether it's the Lumia 920, whether it is your Asha Full Touch products - the people of Nokia are doing their best work, but what's happening now, is that it's not us saying that, it's the people around the world. Our employees are feeling that, [...] so that creates a sense of hope and optimism. Now at the same time, we know we have a lot of hard work still had [...] but there's that sense that the hard work, that that seesaw has really begun to pay off. You feel that in the company." He also thanked the Finnish shareholders for supporting Nokia during its "darkest days." During his tenure, Elop faced vocal criticism from both industry specialists and employees. In 2011, Elop announced that some 11,000 employees would have to be laid off as part of a plan to "restructure" Nokia's business, and in June 2012 it was announced that further 10,000 layoffs were in order and that several facilities would have to be closed down due to budget cuts. Some critics, especially in Finland, started to speculate that Elop could be a trojan horse, whose mission was to prepare Nokia for a future acquisition by Microsoft. When confronted with the theory by an anonymous attendee of the 2011 Mobile World Congress, Elop denied the speculation stating, "The obvious answer is, no. But however, I am very sensitive to the perception and awkwardness of that situation. 
We made sure that the entire management team was involved in the process [...] everyone on the management team believed this was the right decision," referring to Nokia's adoption of Microsoft's Windows Phone operating system. Elop denied the accusations again in an interview in 2014. In the book The Decline and Fall of Nokia published in 2014, author David J. Cord firmly rejects the idea that Elop was a Trojan Horse. He claims that all of Elop's decisions were logical when they were made, and he also cites the testimony of other Nokia executives who were part of those decision-making processes. Another book published later in 2014 called Operation Elop also refutes the Trojan Horse claims. Its Finnish authors, journalists from Kauppalehti, noted that Elop "made monumental mistakes - but all in good faith." Acquisition by Microsoft In May 2013, after the two years that he had been granted for the transition to the Windows Phone platform, Elop was pressed by Nokia's shareholders about the lack of results compared to the competitors and the insufficient sales figures to secure the company's survival. During the annual general meeting, several shareholders voiced that they were running out of patience with Elop's efforts in putting Nokia back to the smartphone race. Elop replied that there was no turning back on his decision of adopting Windows Phone, while some analysts criticized Elop for closing doors to alternative strategies and going all-in with Microsoft's operating system. Some analysts speculated that Nokia had already lost the smartphone race to Samsung and Apple, and that if they were to regain their position in the market, it would have to be by means of low-end devices such as the Asha. In June 2013, it was reported that Microsoft had been to advanced talks for buying Nokia, but the negotiations had faltered over price and worries about Nokia's slumping market position. As of June 2013, Nokia's mobile phone market share had fallen from 23% to 15%, their smartphone market share gone from 32.6% to 3.3%, and their stock value dropped by 85% since Elop's takeover. On 3 September 2013, it was announced that Microsoft had agreed to buy Nokia's mobile phone and devices business for 5.4 billion euros (US$7.2bn; £4.6bn) and that Elop would stand down as Nokia's CEO to become Executive Vice President of the Microsoft Devices Group business unit. On the day's press conference, Elop said Nokia had much to be proud of, saying "We have transitioned through a period of incredible difficulty and we are now delivering the best products we have ever delivered, while simultaneously having changed our culture and the way we work." He also said he felt sadness as it changes what Nokia stands for, but added that Nokia products will become an even stronger competitor together with Microsoft. Elop was said to bring a unique set of skills back to Microsoft, given his varied leadership experience and proven ability to manage products and divisions at the company (i.e. Microsoft Office). Nokia's devices and services business would ultimately become Microsoft Mobile in April 2014. After Elop stepped down as CEO of Nokia, Risto Siilasmaa replaced him as interim CEO before the appointment of Rajeev Suri. Bonus controversy Controversy arose around Elop receiving a €18.8 million bonus after Nokia sold its mobile phone business to Microsoft and he stepped down as the CEO. The controversy was further fueled after it was revealed that his contract had been revised on the same day as the deal was announced. 
Moreover, the chairman of Nokia's Board of Directors gave initially incorrect information about the contract to the public, and had to correct his statements later. Shortly before his departure from Nokia, Elop had filed for divorce, which he also cited as a reason to reject a renegotiation of the controversial bonus. He claimed he couldn't afford a reduction of the payoff because his wife would demand half of it. Elop also enjoyed a preferential tax status in Finland, a 35% fixed-rate income tax irrespective of the size of income, while typical tax payers in Finland pay a progressive income tax. Approximately 70% of the bonus costs were absorbed by Microsoft during the acquisition, the majority of which came in the form of accelerated stock awards. Criticism spread to politics, with Prime Minister of Finland Jyrki Katainen telling Finnish television that the payoff was "quite outrageous", and that it cannot be justified given the country's difficult economic times. Jutta Urpilainen, the minister of finance, wrote on her blog "In addition to the general toxic atmosphere, it [the payoff] may be a threat to social harmony". Some Nokia employees and investors also shared concerns. Microsoft Devices Group In 2014, Elop returned to Microsoft as executive vice president of the Microsoft Devices Group. From that point, Elop focused on the team's “mandate to help people do more” and their interest in "[putting] the entirety of the Microsoft experience in people's hands." Some major developments from the group included new Nokia, and later Microsoft-branded Lumia smartphones, the launch of new products including Microsoft HoloLens and the Microsoft Band, and the spin out of Nokia MixRadio to Japan's Line Corporation. On 17 June 2015, Elop was laid off from his position at Microsoft as part of massive job cuts in the Microsoft Devices Group. According to Microsoft CEO Satya Nadella, "Stephen and I have agreed that now is the right time for him to retire from Microsoft. I regret the loss of leadership that this represents, and look forward to seeing where his next destination will be." Telstra On 16 March 2016, Australia's largest telecommunications provider Telstra announced that Elop would be joining the company in a newly created position as Group Executive Technology, Innovation and Strategy. In his first speech at a Telstra conference in September 2016, Elop cited Nokia as an example of a "great" company that can self-assess and "transform" when necessary, referencing its success as a networks equipment supplier. He said that Telstra was also needing a necessary transformation to become more of a technology company. Elop was dismissed from Telstra as part of its restructuring on 31 July 2018. APiJET On 17 September 2019, APiJET, a Seattle-based joint venture of Aviation Partners, Inc. and iJet Technologies which makes real-time aircraft data analytics, announced that Elop had been named its CEO. As of January 2021, Stephen Elop has terminated his assignment as CEO to APiJET, serves on the APiJET board and is senior advisor to APiJET. Personal life In an interview, Elop said that he sees his Canadian roots as a "significant source of strength in the world", and he added "I will forever in my life be a Canadian, first and foremost." In his spare time, Elop is an avid recreational pilot, owning a Cessna CitationJet. Elop is also a fan of the Vancouver Canucks ice hockey team. During his time working for Macromedia and Adobe in the mid-2000s, Elop occupied his weekends with his children. 
Elop was married to Nancy from Wyoming, Ontario, whom he first met while studying at McMaster. They have five children: triplet girls, an adopted Chinese girl, and a boy. In August 2013 he filed for divorce from his wife of 26 years, having been separated since October 2012. Elop listed for sale his US$5 million mansion in Redmond, Washington, U.S., which he purchased in 2008 and lived in with his family. The divorce was finalised on 3 July 2014. References 1963 births Living people Canadian chief executives McMaster University alumni Microsoft employees Nokia people People from Hamilton, Ontario Chief operating officers Chief information officers
435217
https://en.wikipedia.org/wiki/Enigmail
Enigmail
Enigmail is a data encryption and decryption extension for Mozilla Thunderbird and the Postbox that provides OpenPGP public key e-mail encryption and signing. Enigmail works under Microsoft Windows, Unix-like, and Mac OS X operating systems. Enigmail can operate with other mail clients compatible with PGP/MIME and inline PGP such as: Microsoft Outlook with Gpg4win package installed, Gnome Evolution, KMail, Claws Mail, Gnus, Mutt. Its cryptographic functionality is handled by GNU Privacy Guard. In their default configuration, Thunderbird and SeaMonkey provide e-mail encryption and signing using S/MIME, which relies on X.509 keys provided by a centralised certificate authority. Enigmail adds an alternative mechanism where cooperating users can instead use keys provided by a web of trust, which relies on multiple users to endorse the authenticity of the sender's and recipient's credentials. In principle this enhances security, since it does not rely on a centralised entity which might be compromised by security failures or engage in malpractice due to commercial interests or pressure from the jurisdiction in which it resides. Enigmail was first released in 2001 by Ramalingam Saravanan, and since 2003 maintained by Patrick Brunschwig. Both Enigmail and GNU Privacy Guard are free, open-source software. Enigmail with Thunderbird is now the most popular PGP setup. Enigmail has announced its support for the new "pretty Easy privacy" (p≡p) encryption scheme in a joint Thunderbird extension to be released in December 2015. As of June 2016 the FAQ note it will be available in Q3 2016. Enigmail also supports Autocrypt exchange of cryptographic keys since version 2.0. In October 2019, the developers of Thunderbird announced built-in support for encryption and signing based on OpenPGP Thunderbird 78 to replace the Enigmail add-on. The background is a change in the code base of Thunderbird, removing support for legacy add-ons. Since this would require a rewrite from scratch for Enigmail, Patrick Brunschwig instead supports the Thunderbird team in a native implementation in Thunderbird. Enigmail will be maintained for Thunderbird 68 until 6 months after the release of Thunderbird 78. The support of Enigmail for Postbox will be unaffected. See also GNU Privacy Guard OpenPGP References External links Cryptographic software Thunderbird WebExtensions OpenPGP Free email software MacOS security software Windows security software Unix security software MacOS Internet software Windows Internet software Unix Internet software Cross-platform free software
864041
https://en.wikipedia.org/wiki/Lodz%20University%20of%20Technology
Lodz University of Technology
Lodz University of Technology (TUL) was created in 1945 and has developed into one of the biggest technical universities in Poland. Originally located in an old factory building, today it covers nearly 200,000 square meters in over 70 separate buildings, the majority of them situated in the main University area. Almost 15,000 students are currently studying at the University. The educational and scientific tasks of the University are carried out by about 3,000 staff members. Faculties Faculty of Mechanical Engineering The Faculty of Mechanical Engineering is one of the first departments of Lodz University of Technology, conducting both research and teaching. The Faculty offers full-time, part-time, doctoral and postgraduate studies. It educates students, both full-time and part-time, in seven study areas within the technical sciences. Organizational units The faculty comprises three institutes and six departments: Institute of Materials Science and Engineering Institute of Machine Tools and Production Engineering Institute of Turbomachinery Department of Vehicles and Fundamentals of Machine Design Department of Strength of Materials Department of Dynamics Department of Manufacturing Engineering Department of Automation, Biomechanics and Mechatronics Department of Materials Engineering and Production Systems Fields of study The Faculty of Mechanical Engineering offers full-time and part-time, first and second cycle studies in seven fields of study: Automation and Robotics Energy Mechanical Engineering Mechatronics Materials Engineering Production Engineering Transportation The faculty also offers third cycle studies in the following fields of study: Construction and Operation of Machines Mechanics Materials Engineering Faculty of Electrical, Electronic, Computer and Control Engineering The Faculty of Electrical, Electronic, Computer and Control Engineering is one of the largest faculties of Lodz University of Technology. It came into existence in 1945. The faculty educates students in the fields of Automation and Robotics, Electronics and Telecommunications, Electrical Engineering, Power Engineering, Information Technology, Occupational Safety Engineering, Biomedical Engineering, Mechatronics and Transport. Organisational units The faculty comprises the following units: Institute of Electrical Engineering Systems Institute of Automatic Control Institute of Mechatronics and Information Systems Institute of Electrical Power Engineering Institute of Electronics Institute of Applied Computer Science Department of Electrical Apparatus Department of Microelectronics and Computer Science Department of Semiconductors and Optoelectric Devices Fields of study The Faculty of Electrical, Electronic, Computer and Control Engineering offers full-time and part-time, first and second cycle studies in the following fields of study: Automation and Robotics Electronics and Telecommunications (both in Polish and English) Electrical Engineering Power Engineering Information Technology (in Polish and English) Occupational Safety Engineering Biomedical Engineering (in Polish and English) Mechatronics Transport The faculty also offers postgraduate courses and training for those who want to improve their skills. Faculty of Chemistry The Faculty of Chemistry is one of the faculties of Lodz University of Technology educating in the field of chemical science and technology.
Organizational units Institute of General and Ecological Chemistry Institute of Organic Chemistry Institute of Applied Radiation Chemistry Institute of Polymer and Dye Technology Department of Molecular Physics Fields of study The faculty offers the following full-time studies: Chemistry – 3.5-year first cycle studies, 1.5-year second cycle studies and 4-year third cycle studies Chemical Technology – first, second and third cycle studies Environmental Protection – first cycle studies Nanotechnology – first and second cycle studies Chemistry of Building Materials (since 2011 in cooperation with AGH University of Science and Technology in Cracow and Gdańsk University of Technology) Chemistry and Special Purpose Material Engineering (since March 2015, interdisciplinary, a field shared with the second cycle studies of the Faculty of Chemistry of Adam Mickiewicz University in Poznań and the Department of New Technologies and Chemistry at Military University of Technology in Warsaw) Faculty of Material Technologies and Textile Design Organizational units Institute of Architecture of Textiles Department of Design Theory and Textile Design History Department of Woven Fabrics Department of Visual Arts Department of Clothing Technology and Textronics Department of Man-Made Fibres Department of Knitting Technology Department of Material and Commodity Sciences and Textile Metrology Department of Physical Chemistry of Polymers Department of Yarn, Non-Woven Fabrics and Fibriform Composites Technology Department of Technical Mechanics and Computer Engineering Department of Textile Machine Mechanics Fields of study The faculty offers the following full-time courses in 5 fields of study: Education of Technology and Information Engineering (1st cycle studies; 2nd cycle studies) Material Engineering (1st cycle studies; 2nd cycle studies) Occupational Health (1st cycle studies) Textile Engineering (1st cycle studies; 2nd cycle studies) Pattern Design (1st cycle studies; 2nd cycle studies) The faculty also offers the following postgraduate courses: Postgraduate Studies in Science of Commodities – 1 year (2 semesters) Postgraduate Studies “Occupational Health” – 1 year (2 semesters) Inter-faculty Postgraduate Studies in Biomaterial Engineering – 1 year (2 semesters) Postgraduate Studies in Clothing Technology – 1.5 years (3 semesters) Postgraduate Studies in Fashion and Design "Université de la Mode" – 2 years (4 semesters) The Faculty provides full-time PhD courses lasting 4 years in the fields of textile mechanical engineering and textile chemical engineering. Faculty of Biotechnology and Food Sciences The Faculty of Biotechnology and Food Sciences is an interdisciplinary faculty where the conducted research is related to chemistry, engineering and biology. It is one of the most unusual faculties at Polish technical universities. 
Organizational units Institute of General Food Chemistry Food and Environmental Analysis Chemical Biophysics Bioorganic Chemistry and Cosmetic Raw Materials Analytical and Bioinorganic Chemistry Institute of Technical Biochemistry Institute of Chemical Technology of Food Department of Technology of Sugar Production Department of Technology of Confectionery and Starch Department of Technology of Food Refrigeration Department of Food Analysis and Technology Sugar Technology Laboratory Quality Management Institute of Fermentation Technology and Microbiology Department of Technical Microbiology Department of Technology of Fermentation Department of Technology of Spirit and Yeast Fields of study The faculty offers education in the following fields: Biotechnology Environmental Protection Food Science and Nutrition Environmental Biotechnology The faculty, together with IFE (International Faculty of Engineering), offers first and second cycle full-time studies in Biotechnology in English. Faculty of Civil Engineering, Architecture and Environmental Engineering The Faculty of Civil Engineering, Architecture and Environmental Engineering is one of the departments of Lodz University of Technology educating in the following fields: architecture and urban planning, construction, engineering and environmental protection. It was established on May 11, 1956. Organisational units The faculty consists of two institutes and six departments: Institute of Architecture and Urban Planning Institute of Environmental Engineering and Building Installations Department of Mechanics of Materials Department of Building Physics and Building Materials Department of Structural Mechanics Department of Concrete Structures Department of Geotechnics and Engineering Structures Department of Geodesy, Environmental Cartography and Descriptive Geometry Fields of study The faculty offers full-time and part-time studies in the following fields of study: Construction Architecture and Urban Planning Environmental Engineering Spatial Economy The faculty also offers third cycle studies. Faculty of Technical Physics, Information Technology and Applied Mathematics The Faculty of Technical Physics, Computer Science and Applied Mathematics is one of the nine faculties of Lodz University of Technology. It was established in 1976. Organisational units The faculty comprises the following units: Institute of Computer Science: Department of Intelligent Systems and Software Engineering Department of Computer Graphics and Multimedia Department of Systems and Information Technology Institute of Mathematics Department of Mathematical Modelling Department of Contemporary Applied Mathematical Analysis Department of Insurance and Capital Markets Institute of Physics Fields of study The faculty offers full-time and part-time, first and second cycle studies in the following fields of study: Technical Physics Information Technology Mathematics Faculty of Management and Production Engineering The Faculty of Management and Production Engineering offers education in the field of organization and management combined with technical sciences, adopting a practical approach. 
Organizational units Department of Management Department of Production Management and Logistics Department of European Integration and International Marketing Department of Management Systems and Innovation Institute of Social Sciences and Management of Technologies Fields of study The Faculty of Management and Production Engineering offers the following fields of study: Management Management Engineering Management and Production Engineering Business and Technology - in English and French Occupational Safety Engineering Logistics Paper Production and Printing Faculty of Process and Environmental Engineering The Faculty of Process and Environmental Engineering is an organizational unit of TUL conducting research and teaching. Organizational units Department of Process Equipment Department of Chemical Engineering Department of Bioprocess Engineering Department of Process Thermodynamics Department of Numerical Modelling Department of Safety Engineering Department of Molecular Engineering Department of Environmental Engineering Department of Heat and Mass Transfer Department of Environmental Engineering Techniques Fields of study The faculty offers four areas of study: Biochemical Engineering (first cycle, full-time studies) Process Engineering (first and second cycle studies, full-time studies) Environmental Engineering (first and second cycle studies, full-time and part-time studies) Occupational Safety Engineering (first cycle studies, full-time and part-time studies) The faculty also conducts full-time third cycle (PhD) studies lasting four years in the field of Chemical Engineering in Environmental Protection. Doctoral programmes prepare students to obtain a doctoral degree in the field of environmental engineering or chemical engineering. The faculty offers the following postgraduate courses: Safety and Occupational Health Safety of Industrial Processes Management of Municipal Waste International Faculty of Engineering International Faculty of Engineering (IFE) is an interfaculty unit offering education in foreign languages (English and French) under the auspices of TUL. 
IFE students can participate in 12 courses with English as the language of instruction and 1 offered in French: Architecture Engineering (AE) – in cooperation with the Faculty of Civil Engineering, Architecture and Environmental Engineering Biotechnology (BIO) – in cooperation with the Faculty of Biotechnology and Food Sciences Telecommunications and Computer Science (TCS) – in cooperation with the Faculty of Electrical, Electronic, Computer and Control Engineering Computer Science and Information Technology (ST) – in cooperation with the Faculty of Technical Physics, Computer Science and Applied Mathematics Computer Science and Information Technology (CS) – in cooperation with the Faculty of Electrical, Electronic, Computer and Control Engineering Information Technology (IT) – in cooperation with the Faculty of Technical Physics, Computer Science and Applied Mathematics Biomedical Engineering (BME) - in cooperation with the Faculty of Electrical, Electronic, Computer and Control Engineering Mechanical Engineering and Applied Computer Science (ME&ACS) – in cooperation with the Faculty of Mechanical Engineering Business and Technology (BT) – in cooperation with the Faculty of Organization and Management Gestion et technologie (GT) - in cooperation with the Faculty of Organization and Management Mechatronics (Mech) - in cooperation with the Faculty of Mechanical Engineering Management - in cooperation with the Faculty of Organization and Management Management and Production Engineering - in cooperation with the Faculty of Organization and Management Foreign Language Centre The Foreign Language Centre of Lodz University of Technology was established in 1951. Since 2005, it has been located in a former factory building on Aleja Politechniki 12 in Łódź. The building was modernized and furnished with new facilities with the aid of the European Regional Development Fund (March 2004 – January 2006). The centre has 28 classrooms, a conference room and a library, and is the seat of the University of the Third Age of Lodz University of Technology. As of 2013, the Centre employed 72 academic teachers and 15 administrative, technical and maintenance employees. Languages taught at the Centre include English, German, Russian, Italian, French, Spanish and Polish for foreigners. In 2013, there were 6273 students learning at the centre. English courses comprised 89% of all the classes. The centre conducts classes for first- and second-degree students, as well as doctoral students, in accordance with the University curriculum, for participants in Socrates-Erasmus and IAESTE programs, students of the General Secondary School of TUL, students of the University of the Third Age of TUL, foreign exchange students from Cangzhou Vocational College of Technology (China) and foreigners who want to study at TUL. The centre is taking part in a program organized by the City of Lodz Office called “Młodzi w Łodzi – Językowzięci”, the aim of which is to promote learning languages that are less popular in Poland and to encourage students and graduates from Łódź to improve their language qualifications. As part of the program, courses in Finnish, Danish and Swedish are conducted at the centre. Academic teachers employed at the centre also teach at the International Faculty of Engineering. The centre is also an examination centre for the international examinations TELC, LCCI and BULATS. 
Other Units Library of Lodz University of Technology Institute of Papermaking and Printing Computer Center Laser Diagnostic and Therapy Center Rectors Bohdan Stefanowski (1945–1948) Osman Achmatowicz (1948–1952) Bolesław Konorski (1952–1953) Mieczysław Klimek (1953–1962) Jerzy Werner (1962–1968) Mieczysław Serwiński (1968–1975) Edward Galas (1975–1981) Jerzy Kroh (1981–1987) Czesław Strumiłło (1987–1990) Jan Krysiński (1990–1996) Józef Mayer (1996–2002) Jan Krysiński (2002–2008) Stanisław Bielecki (2008–2016) Sławomir Wiak (2016–) References Joint publication, ed. Ryszard Przybylski: Lodz University of Technology 1945–1995. Lodz: Lodz University of Technology, 1995, pp. 160–183. External links Official website (Polish version) Official website (English version) Official International Faculty of Engineering website The Foreign Language Centre of TUL website: http://www.cj.p.lodz.pl/index.php/en/about Association of Academic Foreign Languages Centres SERMO 1945 establishments in Poland Educational institutions established in 1945
45561243
https://en.wikipedia.org/wiki/Hao%20Li
Hao Li
Hao Li (; born January 17, 1981, in Saarbrücken, West Germany) is a computer scientist, innovator, and entrepreneur from Germany, working in the fields of computer graphics and computer vision. He is founder and CEO of Pinscreen, Inc., as well as Distinguished Fellow at the University of California, Berkeley. He was previously an associate professor of computer science at the University of Southern California, and former director of the Vision and Graphics Lab at the USC Institute for Creative Technologies. He was also a visiting professor at Weta Digital and a research lead at Industrial Light & Magic / Lucasfilm. For his work in non-rigid shape registration, human digitization, and real-time facial performance capture, Li received the TR35 Award in 2013 from the MIT Technology Review. He was named Andrew and Erna Viterbi Early Career Chair in 2015, and was awarded the Google Faculty Research Award and the Okawa Foundation Research Grant the same year. Li won an Office of Naval Research Young Investigator Award in 2018 and was named to the DARPA ISAT Study Group in 2019. He is a member of the Global Future Council on Virtual and Augmented Reality of the World Economic Forum. Early life Li was born in 1981 in Saarbrücken, Germany (then West Germany). His parents are both Taiwanese immigrants living in Germany. Education Li went to a French-German high school in Saarbrücken and speaks four languages (English, German, French, and Mandarin Chinese). He obtained his Diplom (eq. M.Sc.) in Computer Science at the Karlsruhe Institute of Technology (then University of Karlsruhe (TH)) in 2006 and his PhD in Computer Science at ETH Zurich in 2010. He was a visiting researcher at ENSIMAG in 2003, the National University of Singapore in 2006, Stanford University in 2008, and EPFL in 2010. He was also a postdoctoral fellow at Columbia University and Princeton University between 2011 and 2012. Career Li joined Industrial Light & Magic / Lucasfilm in 2012 as a research lead to develop next-generation real-time performance capture technologies for virtual production and visual effects. He later joined the Computer Science department at the University of Southern California as an assistant professor in 2013 and was promoted to associate professor in 2019. In 2014, he spent a summer as a visiting professor at Weta Digital working on facial tracking and hair digitization technologies for the visual effects of Furious 7 and The Hobbit: The Battle of the Five Armies. In 2015, he founded Pinscreen, Inc., an Artificial Intelligence startup that specializes in the creation of photorealistic virtual avatars using advanced machine learning algorithms. In 2016, he was appointed director of the Vision and Graphics Lab at the USC Institute for Creative Technologies. Li joined the University of California, Berkeley in 2020 as a Distinguished Fellow. Research He has worked on dynamic geometry processing and data-driven techniques for 3D human digitization and facial animation. During his PhD, Li co-created a real-time and markerless system for performance-driven facial animation based on depth sensors which won the best paper award at the ACM SIGGRAPH / Eurographics Symposium on Computer Animation in 2009. The team later commercialized a variant of this technology as the facial animation software Faceshift (acquired by Apple Inc. in 2015 and incorporated into the iPhone X in 2017). 
This technique in deformable shape registration is used by the company C-Rad AB and deployed in hospitals for tracking tumors in real time during radiation therapy. In 2013, he worked on a home scanning system that uses a Kinect to capture people and turn them into game characters or realistic miniature versions. This technology was licensed by Artec and released as the free software Shapify.me. In 2014, he was brought on as visiting professor at Weta Digital to build the high-fidelity facial performance capture pipeline for reenacting the deceased actor Paul Walker in the movie Furious 7 (2015). His recent research focuses on combining techniques in Deep Learning and Computer Graphics to facilitate the creation of 3D avatars and to enable true immersive face-to-face communication and telepresence in Virtual Reality. In 2015, in collaboration with Oculus / Facebook, he helped develop a facial performance sensing head-mounted display, which allows users to transfer their facial expressions onto their digital avatars while being immersed in a virtual environment. In the same year, he founded the company Pinscreen, Inc. in Los Angeles, which introduced a technology that can generate realistic 3D avatars of a person, including the hair, from a single photograph. They also work on deep neural networks that can infer photorealistic faces and expressions, which have been showcased at the Annual Meeting of the New Champions 2019 of the World Economic Forum in Dalian. Due to the ease of generating and manipulating digital faces, Li has been raising public awareness about the threat of manipulated videos such as deepfakes. In 2019, Li and media forensics expert Hany Farid of the University of California, Berkeley, released a research paper outlining a new method for spotting deepfakes by analyzing facial expression and movement patterns of a specific person. As of September 2019, Li predicted that, given the rapid progress in artificial intelligence and computer graphics, genuine videos and deepfakes would become indistinguishable in as little as 6 to 12 months. In January 2020, Li spoke at the World Economic Forum Annual Meeting 2020 in Davos about deepfakes and how they could pose a danger to society. Li and his team at Pinscreen, Inc. also demonstrated a real-time deepfake technology at the annual meeting, where the faces of celebrities were superimposed onto participants' faces. In 2020, Li and his team developed a volumetric human teleportation system which can digitize an entire human body in 3D from a single webcam and stream the content in real time. The technology uses 3D deep learning to infer a complete textured model of a person using a single view. The team presented the work at ECCV 2020 and demonstrated the system live at ACM SIGGRAPH's Real-Time Live! show, where they won the "Best in Show" award. Awards ACM SIGGRAPH 2020 Real-Time Live! "Best in Show" Award. DARPA Information Science and Technology (ISAT) Study Group Member. Office of Naval Research Young Investigator Award. Andrew and Erna Viterbi Early Career Chair. Okawa Foundation Research Grant. Google Faculty Research Award. World's top 35 innovators under 35, MIT Technology Review. Best Paper Award at the ACM SIGGRAPH / Eurographics Symposium on Computer Animation 2009. Media For his work on visual effects, Li has been credited in several motion pictures, including Blade Runner 2049 (2017), Valerian and the City of a Thousand Planets (2017), Furious 7 (2015), The Hobbit: The Battle of the Five Armies (2014), and Noah (2014). 
Li also appeared as himself in various documentaries on artificial intelligence and deepfakes, including BuzzFeed's Follow This in 2018, CBC's The Fifth Estate in 2018, and iHuman in 2019. References External links Hao Li's Home Page Pinscreen, Inc. Hao Li at University of California, Berkeley Campus Directory Living people American computer scientists Computer graphics professionals Computer graphics researchers 1981 births
230547
https://en.wikipedia.org/wiki/Case%20Western%20Reserve%20University
Case Western Reserve University
Case Western Reserve University (CWRU) is a private research university in Cleveland, Ohio. Case Western Reserve was established in 1967, when Western Reserve University, founded in 1826 and named for its location in the Connecticut Western Reserve, and Case Institute of Technology, founded in 1880 through the endowment of Leonard Case, Jr., formally federated. Case Western Reserve joined the Association of American Universities in 1969. Case Western Reserve undergraduate and graduate schools include the College of Arts and Sciences, Case School of Engineering, Mandel School of Applied Social Sciences, Case Medical School, Weatherhead School of Management, Case School of Dental Medicine, School of Law, and Frances Payne Bolton School of Nursing. Its main campus is approximately 5 miles (8 km) east of Downtown Cleveland in the neighborhood known as University Circle, an area containing many educational, medical, and cultural institutions. Case Western Reserve has a number of programs taught in conjunction with other University Circle institutions, including the Cleveland Clinic, the University Hospitals of Cleveland, the Louis Stokes Cleveland Department of Veteran's Affairs Medical Center, Cleveland Institute of Music, the Cleveland Hearing & Speech Center, the Cleveland Museum of Art, the Cleveland Institute of Art, the Cleveland Museum of Natural History, and the Cleveland Play House. Severance Hall, home of the Cleveland Orchestra, is on the Case Western Reserve campus. Seventeen Nobel laureates have been affiliated with Case Western Reserve's faculty and alumni or one of its two predecessors. History Western Reserve College (1826–1882) and University (1882–1967) Western Reserve College, the college of the Connecticut Western Reserve, was founded in 1826 in Hudson, Ohio, as the Western Reserve College and Preparatory School. Western Reserve College, or "Reserve" as it was popularly called, was the first college in northern Ohio. The school was called "Yale of the West"; its campus, now that of the Western Reserve Academy, imitated that of Yale. It had the same motto, "Lux et Veritas" (Light and Truth), the same entrance standards, and almost the same curriculum. The vision its founders had of Western Reserve College was that it would instill in students an "evangelical ethos", and produce ministers to remedy the acute shortage of them in Ohio. Liberal arts and sciences were important, but secondary. The college was located in Hudson because the town made the largest financial offer (to help in its construction). The town of Hudson, about 30 miles southeast of Cleveland, was a quiet antislavery center from the beginning: its founder, David Hudson, was against slavery, and founding trustee Owen Brown was a noted abolitionist who secured the location for the college. The abolitionist John Brown, who would lead the 1859 raid on Harpers Ferry, grew up in Hudson and was the son of co-founder Owen Brown. Hudson was a major stop on the Underground Railroad. Along with Presbyterian influences of its founding, the school's origins were strongly though briefly associated with the pre-Civil War abolitionist movement; the immediate abolition of slavery, instead of "colonizing" Africa with freed Blacks, was the dominant topic on campus in 1831, to the point that President Green complained nothing else was being discussed. The trustees were unhappy with the situation. 
The college's chaplain and sacred literature (Bible) professor, Beriah Green, gave four sermons on the topic, and then resigned, expecting that he would be fired. President Charles Backus Storrs took a leave of absence for health reasons, and soon died. One of the two remaining professors, Elizur Wright, soon left to head the American Anti-Slavery Society. The center of American abolitionism, along with support from the well-to-do Tappan brothers, moved with Green to the Oneida Institute near Utica, New York, then, after a student walk-out, to Lane Seminary near Cincinnati, and finally, after a second mass student walkout, to Oberlin Collegiate Institute, later Oberlin College. "Oberlin's student body was the beneficiary of anti-abolitionist censure from other regional colleges, especially the Western Reserve College in nearby Hudson. Students flocked to Oberlin so that they could openly debate the antislavery issue without the threat of punishment or dismissal." Western Reserve was the first college west of the Appalachian Mountains to enroll (1832) and graduate (1836) an African-American student, John Sykes Fayette. Frederick Douglass gave the commencement speech in 1854. In 1838, the Loomis Observatory was built by astronomer Elias Loomis, and today remains the second oldest observatory in the United States, and the oldest still in its original location. In 1852, the Medical School became the second medical school in the United States to graduate a woman, Nancy Talbot Clark. Five more women graduated over the next four years, including Emily Blackwell and Marie Zakrzewska, giving Western Reserve the distinction of graduating six of the first eight female physicians in the United States. By 1875, Cleveland had emerged as the dominant population and business center of the area, and the city wanted a prominent higher education institution. In 1882, with funding from Amasa Stone, Western Reserve College moved to Cleveland and changed its name to Adelbert College of Western Reserve University. Adelbert was the name of Stone's son. Case School of Applied Science (1880–1947) and Institute of Technology (1947–1967) In 1877, Leonard Case Jr. began laying the groundwork for the Case School of Applied Science by secretly donating valuable pieces of Cleveland real estate to a trust. He asked his confidential advisor, Henry Gilbert Abbey, to administer the trust and to keep it secret until after his death in 1880. On March 29, 1880, articles of incorporation were filed for the founding of the Case School of Applied Science. Classes began on September 15, 1881. The school received its charter from the state of Ohio in 1882. For the first four years of the school's existence, it was located in the Case family's home on Rockwell Street in downtown Cleveland. Classes were held in the family house, while the chemistry and physics laboratories were on the second floor of the barn. Amasa Stone's gift to relocate Western Reserve College to Cleveland also included a provision for the purchase of land in the University Circle area, adjacent to Western Reserve University, for the Case School of Applied Science. The school relocated to University Circle in 1885. In 1921, Albert Einstein visited the Case campus during his first visit to the United States, out of respect for the physics work performed there. Besides noting the research done in the Michelson–Morley experiment, Einstein also met with physics professor Dayton Miller to discuss his own research. 
During World War II, Case School of Applied Science was one of 131 colleges and universities nationally that took part in the V-12 Navy College Training Program, which offered students a path to a Navy commission. Over time, the Case School of Applied Science expanded to encompass broader subjects, adopting the name Case Institute of Technology in 1947 to reflect the institution's growth. In 1963, polymer expert Eric Baer founded the nation's first stand-alone Polymer Science and Engineering program, which eventually became the Department of Macromolecular Science and Engineering. Federation of two universities Although the trustees of Case Institute of Technology and Western Reserve University did not formally federate their institutions until 1967, the institutions already shared buildings and staff when necessary and worked together often. One such example was seen in 1887, when Case physicist Albert Michelson and Reserve chemist Edward Morley collaborated on the famous Michelson–Morley experiment. There had been some discussion of a merger of the two institutions as early as 1890, but those talks dissolved quickly. In the 1920s, the Survey Commission on Higher Education in Cleveland took a strong stand in favor of federation and the community was behind the idea as well, but in the end all that came of the study was a decision by the two institutions to cooperate in founding Cleveland College, a special unit for part-time and adult students in downtown Cleveland. By the 1960s, Reserve President John Schoff Millis and Case President T. Keith Glennan shared the idea that federation would create a complete university, one better able to attain national distinction. Financed by the Carnegie Corporation, Cleveland Foundation, Greater Cleveland Associated Foundation, and several local donors, a study commission of national leaders in higher education and public policy was charged with exploring the idea of federation. The Heald Commission, so known for its chair, former Ford Foundation President Henry T. Heald, issued its final report, "Vision of a University." The report predicted that a federation could create one of the largest private universities in the nation. Case Western Reserve University (1967–present) In 1967, Case Institute of Technology, a school with an emphasis on engineering and science, and Western Reserve University, a school with professional programs and liberal arts, came together to form Case Western Reserve University. In 1968, the Department of Biomedical Engineering was launched as a unified collaboration between the School of Engineering and the School of Medicine, the first in the nation and one of the first biomedical engineering programs in the world. The following year, in 1969, the first Biomedical Engineering MD/PhD program in the world began at Case Western Reserve. The first computer engineering degree program in the United States was established in 1971 at Case Western Reserve. The "Forward Thinking" campaign was launched in 2011 by President Barbara Snyder to raise $1 billion, the largest campaign in school history. The goal was reached in 2014 after 30 months. The board of trustees unanimously agreed to expand the campaign to $1.5 billion, which reached its mark a few years later in 2017. The campaign ultimately raised $1.82 billion. The first 2020 United States presidential debate was held at the Samson Pavilion of the Health Education Campus (HEC), shared with the Cleveland Clinic. 
In February 2020, President Barbara Snyder was appointed president of the Association of American Universities (AAU). Later that year, former Tulane University president Scott Cowen was appointed interim president. On October 29, 2020, Eric W. Kaler, former University of Minnesota president, was appointed as the new Case Western Reserve University president, effective July 1, 2021. Campus The university is approximately 5 miles (8 km) east of downtown Cleveland. The campus comprises a large portion of University Circle, a park-like city neighborhood and commercial center, home to many educational, medical, and other cultural institutions, including the historic Wade Park District. Case Western Reserve has a number of programs taught in conjunction with neighboring institutions, including the Cleveland Institute of Music, the Cleveland Institute of Art, the Cleveland Hearing & Speech Center, the Cleveland Museum of Art, the Cleveland Museum of Natural History, the Western Reserve Historical Society, Cleveland Botanical Garden, and the Cleveland Play House. Case Quad The Case Quadrangle, known also to students as the Engineering Quad, is located south of Euclid Avenue between Adelbert Road and Martin Luther King Jr. Drive. Most engineering and science buildings are located on this quad, notably the John D. Rockefeller Physics Building. The Case Quad also houses administration buildings, including Adelbert Hall. The famous Michelson–Morley experiment occurred here, where a historical marker and the Michelson–Morley Memorial Fountain stand in commemoration. Other notable campus buildings include the Strosacker Auditorium and Nord Hall. The southernmost edge consists of athletic areas—Adelbert Gymnasium, Van Horn Field and the Veale Convocation, Recreation and Athletic Center (commonly referred to as the Veale Center). The Veale Center houses the Horsburgh Gymnasium and the Veale Natatorium. Mather Quad The Flora Stone Mather Quadrangle, known to students as the Mather Quad, is home to many humanities and social sciences subjects. The quad is located north of Euclid Avenue between East Blvd., East 115th Street, and Juniper Road. The Flora Stone Mather College Historic District is more strictly defined by the area between East Blvd, Bellflower Road, and Ford Road north of Euclid Avenue. Named for the philanthropist wife of prominent industrialist Samuel Mather and sister-in-law of the famous statesman John Hay, the Mather Quad is home to Weatherhead School of Management, School of Law, Mandel School of Applied Social Sciences, and many departments of the College of Arts and Sciences. The Kelvin Smith Library, Thwing Center, and Tinkham Veale Student Center (known also as "The Tink") sit on the western edge of the Mather Quad and are often considered the center of campus. North Residential Village Situated on the northeast end of campus, the North Residential Village (NRV) is home to all of Case Western Reserve's first-year students and many second-year students residing on campus. Constructed in the 1960s, the NRV consists of 12 four-floor buildings, an 11-floor building, Leutner (a dining hall), and a building containing the NRV area office and rehearsal space for Case Western Reserve's music department. Triangle Towers Triangle Towers are buildings located within Uptown that house both current students and staff and are also available for rent to the public. There are two towers, Tower 1 and Tower 2. Both overlook Uptown shops like Mitchell's Ice Cream and ABC Tavern. 
The Village at 115 Located between East 115th Street and East 118th Street, the Village at 115 opened in the fall of 2005 for upper-class students. The Village consists of seven houses that surround DiSanto Field and the Bill Sudeck Track. Village housing is apartment style, with apartments that house one to nine people (excluding eight-person units). The apartments are fully furnished. The Village is also LEED certified: houses 1–4 and 6–7 are certified Silver, while house 5 is certified Gold. Stephanie Tubbs Jones Residence Hall The Stephanie Tubbs Jones Residence Hall is located on the north side of campus along East 115th Street before Wade Park Avenue. The hall is for upperclassmen and is LEED Gold certified. As of 2019, the building is the newest residential hall on campus and consists of 5 floors along with 106 apartments. In addition to the apartments, the building also contains 8 townhouse-like residential units. South Residential Village Located between Murray Hill, Cedar, Edgehill, and Overlook roads, the South Residential Village (SRV) is home to most of Case Western Reserve's sophomore class. SRV is divided into two sections: Murray Hill Complex and Carlton Road Complex (known to students as bottom of the hill and top of the hill, respectively, due to the hill separating the two complexes). Carlton Road Complex includes three sophomore-only dormitories and several Greek life houses. Murray Hill Complex includes four sophomore-only buildings and Fribley, the SRV dining hall. It also includes five Greek houses. Transportation On and near campus, CircleLink serves as the free public shuttle service for residents, employees and visitors in University Circle and Little Italy with routes running every 20–30 minutes during service hours. Colloquially, the shuttle buses are known as Greenies. To supplement evening and nighttime hours, the Safe Ride Program provides personal pickup of students and staff upon request. For city public transit, rail and bus access are managed by the Greater Cleveland Regional Transit Authority (RTA). Unlimited-use RTA passes are provided to undergraduate and full-time graduate students. The two Red Line rapid train stations are Little Italy–University Circle and Cedar–University. Notably, the Red Line connects campus to Cleveland Hopkins Airport and Downtown Cleveland. The bus rapid transit (BRT) HealthLine runs down the center of campus along Euclid Ave. Numerous RTA bus routes run through campus. Case Western Reserve parking is managed by Standard Parking, and includes a network of over 50 surface lots and 15 parking structures. Pay-by-phone parking is an available option for meters throughout campus. Academics Rankings In U.S. News & World Report's 2021 rankings, Case Western Reserve was ranked as tied for 42nd among national universities and 155th among global universities. The 2020 edition of The Wall Street Journal/Times Higher Education (WSJ/THE) rankings ranked Case Western Reserve as 52nd among US colleges and universities. In 2018, Case Western Reserve was ranked 37th in the category American "national universities" and 146th in the category "global universities" by U.S. News & World Report. In 2019 U.S. News ranked it tied for 42nd and 152nd, respectively. Case Western Reserve was also ranked 32nd among U.S. universities—and 29th among private institutions—in the inaugural 2016 edition of The Wall Street Journal/Times Higher Education (WSJ/THE) rankings, but ranked tied for 39th among U.S. universities in 2019. 
Case Western Reserve is noted (among other fields) for research in electrochemistry and electrochemical engineering. The Michelson–Morley interferometer experiment was conducted in 1887 in the basement of a campus dormitory by Albert A. Michelson of Case School of Applied Science and Edward W. Morley of Western Reserve University. Michelson became the first American to win a Nobel Prize in science. Also in 2018, "The Hollywood Reporter" ranked CWRU's Department of Theater Master of Fine Arts program with the Cleveland Play House as 18th in the English-speaking world. In 2019, this ranking improved to 12th. In 2014, Washington Monthly ranked Case Western Reserve University as the 9th best National University, but in the 2018 rankings, Case Western Reserve was ranked the 118th best National University. In 2013, Washington Monthly ranked Case Western Reserve as the nation's 4th best National University for contributing to the public good. The publication's ranking was based upon a combination of factors including social mobility, research, and service. In 2009, the school had ranked 15th. Although Washington Monthly no longer ranks contributions to the public good as such, in its 2018 rankings of National Universities Case Western Reserve was ranked 180th in Social mobility and 118th in Service. In 2014, The Times ranked Case Western Reserve 116th worldwide, but in 2019 it ranked the university 132nd. In 2013, Case Western Reserve was among the Top 25 LGBT-Friendly Colleges and Universities, according to Campus Pride, a national organization that aims to make universities safer and more inclusive for lesbian, gay, bisexual and transgender (LGBT) individuals. The recognition followed Case Western Reserve's first five-star ranking on the Campus Pride Index, a detailed survey of universities' policies, services and institutional support for LGBT individuals. Case Western Reserve ranks 13th among private institutions (26th among all) in federal expenditures for science and engineering research and development, per the National Science Foundation. Case Western Reserve is included in the college educational guide, Hidden Ivies, which discusses the college admissions process and attempts to evaluate 63 colleges in comparison to Ivy League colleges. Undergraduate profile The undergraduate student body hails from all 50 states and over 90 countries. The six most popular majors are Biomedical Engineering, Biology/Biological Sciences, Social Work, Nursing, Mechanical Engineering, and Psychology. Since 2016, the top fields for graduating CWRU undergraduate students have been engineering, nursing, research and science, accounting and financial services, and information technology. Case Western Reserve had an acceptance rate of 27% for the Class of 2023. The Class of 2023 had 82 percent of students from outside the state of Ohio and 16 percent from outside the United States. 70 percent graduated in the top 10 percent of their high school class. The mid-50% range for SAT scores (25th–75th percentile) was 1360 to 1480. The mid-50% range for ACT scores was 30 to 34 (superscored). 
Schools and programs The university in its present form consists of eight schools: College of Arts and Sciences (1826) School of Dental Medicine (1892) Case School of Engineering (1880) School of Law (1892) Weatherhead School of Management (1952) School of Medicine 'University Program' (1843) Cleveland Clinic Lerner College of Medicine ('College Program') (2002) Frances Payne Bolton School of Nursing (1898) Mandel School of Applied Social Sciences (1915) CWRU also supports over a hundred 'Centers' in various fields. Naming controversy In 2003, the university unveiled a new logo and branding campaign that emphasized the "Case" portion of its name. In 2006, interim university president Gregory Eastwood convened a task group to study reactions to the campaign. The panel's report indicated that it had gone so poorly that, "There appear to be serious concerns now about the university's ability to recruit and maintain high-quality faculty, fund-raising and leadership." Also, the logo was derided among the university's community and alumni and throughout northeastern Ohio; critics said it looked like "...a fat man with a surfboard." In 2007, the university's board of trustees approved a shift back to giving equal weight to "Case" and "Western Reserve." A new logo was chosen and implementation began July 1. In an open letter to the university community, interim president Eastwood admitted that "the university had misplaced its own history and traditions." Research Case Western Reserve is classified among "R1: Doctoral Universities – Very High Research Activity". Following is a partial list of major contributions made by faculty, staff, and students at Case Western Reserve since 1887: Case Western Reserve was the site of the Michelson-Morley interferometer experiment, conducted in 1887 by Albert A. Michelson of Case Institute of Technology and Edward W. Morley of Western Reserve University. This experiment proved the non-existence of the ether, and provided evidence that later substantiated Einstein's special theory of relativity Albert A. Michelson, who became the first American to win a Nobel Prize in science, taught at Case Institute of Technology. He won the prize in physics in 1907. Edward W. Morley, in 1895, made the most precise (to that date) determination of the atomic weight of oxygen, the basis for calculating the weights of all other elements. Dayton C. Miller, in 1896, performed the first full X-ray of the human body—on himself. George W. Crile, in 1905, performed the first modern blood transfusion, using a coupling device to connect blood vessels. Roger G. Perkins, in 1911, pioneered drinking water chlorination to eradicate typhoid bacilli. Henry J. Gerstenberger, in 1915, developed simulated infant formula. Claude S. Beck, in 1935, pioneered surgical treatment of coronary artery disease. Frederick S. Cross, in the 1950s, developed the first heart-lung machine used during open heart surgery. Claude S. Beck, in 1947, performed the first successful lifesaving defibrillation of the human heart and developed cardiopulmonary resuscitation (CPR). Robert Kearns, in 1964, invented the intermittent windshield wiper used in most modern automobiles. Frederick Reines, in 1965, first detected neutrinos created by cosmic ray collisions with the Earth's atmosphere and developed innovative particle detectors. Case Western Reserve had selected Prof. 
Reines as chair of the physics department based on Reines's work that first detected neutrinos emitted from a nuclear reactor—work for which Reines shared a 1995 Nobel Prize. Eric Baer, in 1967, pioneered the materials science of polymers and created the first comprehensive polymer science and engineering department at a major U.S. university. Joseph F. Fagan, in 1987, developed a test for infants to identify mental retardation within one year of birth. In 1987 the first edition of the Encyclopedia of Cleveland History was published. Huntington F. Willard of the School of Medicine and University Hospitals of Cleveland—collaborating with colleagues at Athersys, Inc., in 1997—created the first artificial human chromosomes, opening the door to more detailed study of human genetics and potentially offering a new approach to gene therapy. Roger Quinn, in 2001, developed robots such as Whegs that mimic cockroaches and other crawling insects in the Case Biorobotics Lab. Tshilidzi Marwala, in 2006, began work on Local Loop Unbundling in Africa. He also chaired the Local Loop Unbundling Committee on behalf of the South African Government. Furthermore, Marwala and his collaborators developed an artificial larynx, developed the theory of rational counterfactuals and computer bluffing, and established the relationship between artificial intelligence and the theory of information asymmetry. In 2007, a team from Case Western Reserve participated in the DARPA Urban Challenge with a robotic car named DEXTER. Team Case placed as one of 36 semi-finalists. DEXTER was the only car in the race without any seating for humans, and the only one built from scratch as a robot car. Case Western Reserve University researchers are developing atomically thin drumheads which are tens of trillions of times smaller in volume and 100,000 times thinner than the human eardrum. They are intended to receive and transmit signals across a radio frequency range far greater than what the human ear can hear. Simon Ostrach and Yasuhiro Kamotani led Spacelab projects entitled the Surface Tension Driven Convection Experiment (STDCE) aboard the Space Shuttle mission STS-50 and the re-flight STDCE-2 in USML-2 aboard STS-73, studying oscillatory thermocapillary flows in the absence of gravitational effects. James T'ien has contributed to the study of numerous microgravity combustion space flight experiments including the Candle Flame In Non-Buoyant Atmospheres aboard the Space Shuttle STS-50 along with the reflight to Mir Orbiting Station in 1995, and the Burning and Suppression of Solids (BASS) taking place aboard the International Space Station along with the experiment reflight (BASS-2). He received the NASA Public Service Medal in 2000. He is a member of the National Academy of Sciences and serves on the Committee of Biological and Physical Sciences in Space. Today, the university operates several facilities off campus for scientific research. One example of this is the Warner and Swasey Observatory at Kitt Peak National Observatory in Arizona. Electrochemistry The Yeager Center for Electrochemical Sciences (YCES), formerly the Case Center for Electrochemical Sciences, has provided annual workshops on electrochemical measurements since the late 1970s. Related laboratories at Case include the Electrochemical Engineering and Energy Laboratory (EEEL), the Electrochemical Materials Fabrication Laboratory (EMFL), the Case Electrochemical Capacitor Fabrication Facility and the ENERGY LAB. 
The editor for the Journal of the Electrochemical Society is a Case professor, and the university is home to six Fellows of the Electrochemical Society. Some notable achievements involve the work on boron-doped diamond electrodes, in-situ electrochemical spectroscopy, polybenzimidazole (PBI) membranes for fuel cells and flow batteries, and electrochemical sensors. In July 2018, the university was awarded $10.75 million by the U.S. Department of Energy to establish the Energy Frontier Research Center, which will explore "Breakthrough Electrolytes for Energy Storage." Sears think[box] Larry Sears and Sally Zlotnick Sears think[box] is a public-access design and innovation center at Case Western Reserve University that allows students and other users to access prototyping equipment and other invention resources. The makerspace is located in the Richey Mixon building, a seven-story, 50,000 sq. ft. facility behind the campus athletic center. Over $35 million has been invested in the space, including a $10 million gift from alumnus Larry Sears and his wife Sally Zlotnick Sears. Larry Sears is an adjunct faculty member in the Department of Electrical Engineering and Computer Science at CWRU and the founder of Hexagram, Inc. (now ACLARA Wireless Technologies). Many projects and startup companies have come out of the makerspace. Student life The primary area for restaurants and shopping is the Uptown district along Euclid Ave adjacent to campus. Cleveland's Little Italy is within walking distance. A campus shuttle runs to Coventry Village, a shopping district in neighboring Cleveland Heights. Popular with students, Downtown Cleveland, Ohio City, Legacy Village, and Shaker Square are all a short drive away or accessible by RTA. Music WRUW-FM (91.1 FM) is the campus radio station of Case Western Reserve University. Its motto "More Music, Fewer Hits" can be seen adorning the rear bumpers of many vehicles in the area. WRUW broadcasts at a power of 15,000 watts and covers most of Northeast Ohio 24 hours a day, 365 days a year. WRUW is staffed by Case Western Reserve students and community volunteers. The station's format can be classified as non-commercial "variety." Case Western Reserve is also home to 19 performing ensembles, including a cappella groups such as Dhamakapella, the Case Men's Glee Club, Case Women's Glee Club, Case in Point, and Solstice. Other ensembles include the Case/University Circle Symphony Orchestra, Camerata Chamber Orchestra, Case/CIM Baroque Orchestra, Concert Choir, Early Music Singers, Jazz Ensemble 1 and 2, Marching Spartans, Percussion Ensemble, Symphonic Winds, University Singers, Collegium Musicum, New Music Ensemble, Wind Ensemble, and Chamber Music. Case Western Reserve's main music venue is the Maltz Performing Arts Center. Case Western Reserve also has two main rehearsal spaces for performing arts music majors and school ensembles. Haydn Hall contains practice rooms with Steinway pianos, along with the department offices. Denison Hall serves as a rehearsal, practice, and teaching space for the music students and school ensembles, and is attached to Wade Commons. The Cleveland Youth Wind Symphony also rehearses in Denison Hall. Music majors may take lessons and courses at the Cleveland Institute of Music. For performances, all students, ensembles, and a cappella groups use Harkness Chapel. The bands and orchestra also perform at Severance Hall (the on-campus home of the Cleveland Orchestra) and CIM's Kulas Hall. 
Computing Case Western Reserve had the first ABET-accredited program in computer engineering. In 1968, the university formed a private company, Chi Corporation, to provide computer time to both it and other customers. Initially this was on a UNIVAC 1108 (replacing the preceding UNIVAC 1107), a 36-bit, one's complement machine. The company was sold in 1977 to Robert G. Benson in Beachwood, Ohio, becoming Ecocenters Corporation. Project Logos, under ARPA contract, was begun within the department on a DEC System-10 (later converted to TENEX (BBN) in conjunction with connection to the ARPANET) to develop a computer-aided computer design system. This system consisted of a distributed, networked graphics environment, a control and data flow designer, and a logic (both hardware and software) analyzer. An Imlac PDS-1 with lightpen interrupt was the main design workstation in 1973, communicating with the PDP-10 over a display communications protocol written by Don Huff as a master's thesis and implemented on the Imlac by Ted Brenneman. Graphics and animation became another departmental focus with the acquisition of an Evans & Sutherland LDS-1 (Line Drawing System-1), which was hosted by the DEC System-10, and later with the acquisition of the stand-alone LDS-2. Case Western Reserve was one of the earliest universities connected to the ARPANET, predecessor to the Internet. ARPANET went online in 1969; Case Western Reserve was connected in January 1971. Case Western Reserve graduate Ken Biba published the Biba Integrity Model in 1977 and served on the ARPA Working Group that developed the Transmission Control Protocol (TCP) used on the Internet. Case Western Reserve pioneered the early Free-net computer systems, creating the first Free-net, The Cleveland Free-Net, as well as writing the software that drove a majority of those systems, known as FreePort. The Cleveland Free-Net was shut down in late 1999, as it had become obsolete. It was the first university to have an all-fiber-optic network, in 1989. At the inaugural meeting in October 1996, Case Western Reserve was one of the 34 charter university members of Internet2. The university was ranked No. 1 in Yahoo Internet Life's 1999 Most Wired College list. There was a perception that this award was obtained through partially false or inaccurate information submitted for the survey, and the university did not appear at all on the 2000 Most Wired College list (which included 100 institutions). The numbers reported were much lower than those submitted by Ray Neff in 1999. The university had previously placed No. 13 in the 1997 poll. In August 2003, Case Western Reserve joined the Internet Streaming Media Alliance, then one of only two university members. In September 2003, Case Western Reserve opened 1,230 public wireless access points on the Case Western Reserve campus and in University Circle. Case Western Reserve was one of the founding members of OneCleveland, formed in October 2003. OneCleveland is an "ultra broadband" (gigabit speed) fiber optic network. This network is for the use of organizations in education, research, government, healthcare, arts, culture, and the nonprofit sector in Greater Cleveland. Case Western Reserve's Virtual Worlds gaming computer lab opened in 2005. The lab has a large network of Alienware PCs equipped with game development software such as the Torque Game Engine and Maya 3D modeling software. 
Additionally, it contains a number of specialized advanced computing rooms including a medical simulation room, a MIDI instrument music room, a 3D projection "immersion room," a virtual reality research room, and a console room, which features video game systems such as Xbox 360, PlayStation 3, and Wii. This laboratory can be used by any student in the Electrical Engineering and Computer Science department, and is heavily used for the Game Development (EECS 290) course. Case Western's Internet Technology Service also runs a High Performance Computing Cluster (HPCC) utilizing 2684 processors over 200 computer nodes interconnected with gigabit fiber-optic Ethernet. The HPCC is available for research utilizing a wide array of commercial and custom scientific software packages and computer languages including: Matlab, Mathematica, Ansys CFX Fluent and ICEM, Schrödinger, LAMMPS, Gaussian, NEURON, MCell, Python, Qhull, Sundials, Charmm/qchem, Rosetta, Gromacs, NAMD, C, C++, Fortran. Housing Residence halls are divided into two areas: one featuring suite-style rooms for second-year students in the South Residential Village, the other featuring double, single and suite style rooms for first-year students and upperclassmen in the North Residential Village. Both have gigabit Ethernet network access, and the wired network is one of the fastest in existence. A wireless campus network is also available in all buildings on campus and was ranked one of the fastest by Intel in 2005. Suite style housing, known as the Village at 115th, was opened in fall 2005 for upperclassmen and features one- to nine-person, "apartment-style" residence halls that come with air conditioning, a full kitchen area, and full-sized beds. Residence Life at Case Western Reserve has a recent history of being liberal in its policies, including allowing co-ed suites (an option offered to non-freshman students, when requested and agreed upon by all occupants of a suite) and several co-ed floors for freshmen, as well as a three-day guest policy. Pets are allowed except for dogs, cats, ferrets, and a few other small mammals, but requests for exceptions may be granted after discussion. First-year students are grouped into one of four residential colleges that are overseen by first-year coordinators. The Mistletoe, Juniper, and Magnolia residential colleges were established when the "First Year Experience" system was introduced, and Cedar was created in the fall of 2005 to accommodate a large influx of new students. In the fall of 2007, Magnolia was integrated into Mistletoe; however, it was re-separated in the fall of 2012. The areas of focus for each college are: Cedar, visual and performing arts; Mistletoe, service leadership; Juniper, multiculturalism; and Magnolia, sustainability. Magnolia now includes Clarke Tower, which also houses second-year students as well as first-year students. The residential colleges plan events together and are run by college councils that take student input and use it to plan social and community service-oriented activities. Students from third-years (who are allowed to live off campus) through graduate students have several university-owned, university-controlled, and independent apartment options. Greek life Nearly one-half of the campus undergraduates are in a fraternity or sorority. There are ten sororities and sixteen fraternities currently on campus. Greek organizations are governed by an Interfraternity Council and Panhellenic Council. 
During the 2010–2011 school year, fraternities and sororities at Case collectively raised over $45,375 for philanthropy. In September 2010, the Delta Chi fraternity joined the Greek community, achieving chapter status in October 2012. In September 2012, Pi Beta Phi sorority began a colonization effort. In the spring of 2013, Delta Sigma Phi fraternity began colonization efforts as well. In the spring of 2014, a colony of Pi Kappa Phi was opened. In the 2014–2015 academic year a chapter of the sorority Sigma Sigma Sigma joined the campus along with the return of the fraternity Sigma Alpha Epsilon. Most recently, a colony of Alpha Gamma Delta was established in spring 2018. Zeta Psi was suspended by its alumni board due to a slew of sexual assault allegations detailed on the Instagram page CWRU Survivors. Case's Interfraternity Council then voted to withdraw recognition of the chapter. The fraternities are Beta Theta Pi, Delta Chi, Delta Tau Delta, Delta Sigma Phi, Delta Upsilon, Phi Delta Theta, Phi Gamma Delta, Phi Kappa Psi, Phi Kappa Tau, Phi Kappa Theta, Pi Kappa Phi, Sigma Alpha Epsilon, Sigma Chi, Sigma Nu, Theta Chi, and Zeta Beta Tau. The sororities are Alpha Chi Omega, Alpha Gamma Delta, Alpha Phi, Delta Gamma, Kappa Alpha Theta, Phi Mu, Phi Sigma Rho, Pi Beta Phi, Sigma Psi, and Sigma Sigma Sigma. Safety and security Office of Emergency Management The Office of Emergency Management prepares for various levels of emergencies on campus, such as chemical spills, severe weather, infectious diseases, and security threats. RAVE, a multi-platform emergency alerting system, is operated by Emergency Management for issuing emergency alerts and instructions for events on campus. The Office of Emergency Management also performs risk assessment to identify possible safety issues and aims to mitigate these issues. Additionally, CERT is managed through Emergency Management, enabling faculty and staff members to engage in emergency preparedness. The Office of Emergency Management works closely with other campus departments, such as Police and Security Services, University Health Services, and Environmental Health and Safety, as well as community resources including city, state, and federal emergency management agencies. Police and security services Case operates a police force of sworn officers as well as security officers. Starting as security only, the university expanded the role of protective services to include sworn officers who have arrest power and carry firearms. Some officers have additional training, such as SWAT training. On top of routine duties such as fingerprinting, traffic control, and bicycle registration, police and security also conduct investigations, undercover operations, and community outreach. Police and Security operate a fleet of vehicles, including police cruisers, scooters, and Smart cars. Police and Security are dispatched by a 24/7 campus dispatch center, responsible for emergency call handling, alarm monitoring, and video surveillance. Additionally, the dispatch center can send RAVE notifications and manages CWRU Shield, a mobile application allowing video, image, and text tips, safety checks, and viewing of emergency procedures. CWRU Police also works closely with RTA transit police, University Circle Police, Cleveland Police, East Cleveland Police, Cleveland Heights Police, the University Hospitals Police Department, and other surrounding emergency services.
Police and Security, in conjunction with the Emergency Management Office, conduct tabletop drills and full-scale exercises involving surrounding emergency services. Traditions Held since 1910, the Hudson Relay is an annual relay race remembering and honoring the university's relocation from Hudson, Ohio to Cleveland. Conceived by then-student Monroe Curtis, the relay race was run from the old college in Hudson, Ohio to the new university in University Circle. Since the mid-1980s, the race has been run entirely in the University Circle area. The race is a distance of . It is held the weekend before spring semester finals. Competing running teams are divided by graduating class. If a class wins the relay all four years, tradition dictates that it be rewarded with a champagne and steak dinner with the president of the university. Only six classes have won all four years—1982, 1990, 1994, 2006, 2011, and 2017. The winning class of each year is carved on an original boulder located behind Adelbert Hall. Springfest is a day-long concert and student group festival that occurs later in the same day as the Hudson Relay. The Springfest Planning Committee brings in several bands and a beer garden, student groups set up booths to entertain the student body, and various inflatable carnival-style attractions are brought in to add to the festive atmosphere. Occasionally, due to adverse weather conditions, the festival must be moved indoors, usually to Thwing Center or Adelbert Gym. Halloween at the Farm is a tradition established in the fall of 2002. Halloween at the Farm takes place at the Squire Valleevue Farm in Hunting Valley, Ohio. Students, their families, and faculty are invited to enjoy games, a bonfire, an open-air concert, and hay rides. Organized by the members of the Class Officer Collective, HATF is one of the biggest events of the year. In the fall of 2009 the event was moved to the main campus and renamed "Halloween at Home". Since 1976, the Film Society of Case Western Reserve University has held a science fiction marathon. The film festival, the oldest of its type, boasts more than 34 hours of non-stop movies, cartoons, trailers, and shorts spanning many decades and subgenres, using both film and digital projection. The Film Society, which is student-run and open to the public, also shows movies on Friday and Saturday evenings throughout the school year. Athletics Case Western Reserve competes in 19 varsity sports—10 men's sports and 9 women's sports. All 19 varsity teams wear a commemorative patch on their uniforms honoring Case alumnus M. Frank Rudy, inventor of the Nike air sole. The Spartans' primary athletic rival is Carnegie Mellon University. DiSanto Field is home to the football, men's soccer, women's soccer, and track and field teams. Case Western Reserve is a founding and current member of the University Athletic Association (UAA). The conference participates in the National Collegiate Athletic Association's (NCAA) Division III. Case Institute of Technology and Western Reserve University were also founding members of the Presidents' Athletic Conference (PAC) in 1958. When the athletic departments of the two universities merged in 1971, they dominated the PAC for several years. The university remained a member of the PAC until 1983. In the fall of 1984 the university joined the North Coast Athletic Conference (NCAC), a pioneer in gender equality in sports, as a charter member. The 1998–99 school year marked the final season in which the Spartans were members of the NCAC.
The university had held joint conference membership with the UAA and the NCAC for over a decade. In 2014, the football team began competing as an associate member of the PAC, as only four out of the eight UAA member institutions sponsored football. The university offers ten men's sports and nine women's sports. In recent years, the Case Western Reserve baseball team has made appearances in the NCAA post-season. In 2014, the Spartans advanced to the NCAA Mid-East Regional Final before losing to Salisbury State 3–2. The 2014 team set a school record for victories in a season with 34, and also won a UAA title. In 2011, Spartan third baseman Chad Mullins was named the D3Baseball.com Player of the Year after hitting .437 with eight home runs and 71 RBIs. Mullins also ranked in the Division III national top ten in hits, runs scored, and total bases. Case Western Reserve has a long and storied cross country program. The Case Western Reserve women's cross country team finished the 2006 season with a UAA Championship and a bid to the NCAA Championship. The Lady Spartans finished 10th in the nation. The women's team went on to finish even higher at nationals in 2007, earning a sixth-place finish at the NCAA DIII national championship. Both the men's and women's cross country teams qualified for and competed in the NCAA DIII national championships in 2008, with the women's team coming away with two All-Americans and a 16th-place finish. In 2009, they had two All-Americans and finished 15th. In 2010, the Lady Spartans finished 19th, with one All-American. From 2006 to 2010 the women's cross country team earned 8 individual All-American titles, including current professional marathoner Esther Erb. The Case Western Reserve football team reemerged in the mid-2000s under the direction of Head Coach Greg Debeljak. The 2007 team finished undefeated, earning the school's first playoff appearance and first playoff victory, winning against Widener University. The undefeated seasons continued in both 2008 and 2009, earning more UAA titles and NCAA Division III playoff appearances, helping set an all-time school record of a 38-game regular-season win streak. In 2017, the Spartans again went undefeated and advanced to the NCAA Division III playoffs, defeating Illinois Wesleyan University in the first round, before being eliminated in the second round by the Mount Union Purple Raiders, the eventual NCAA Division III national champion. In total, the team has won eight UAA football championships–1988, 1996, 2007, 2008, 2009, 2011, 2016, and 2017. In 2014, the football team began competing as an associate member of the Presidents' Athletic Conference, winning the conference in 2017. All other sports continue to compete in the University Athletic Association. In 2019, the Spartans finished 9–2, winning an outright PAC title, and earning an automatic bid to the Division III playoffs, where they were defeated in the first round. The Case Western Reserve men's soccer team finished their 2006 season with a 17–2–2 record and a UAA championship. The team reached the Sweet 16 in their first-ever NCAA Division III tournament appearance and concluded the season ranked 12th in the nation. In 2018, the Case Western Reserve men's soccer team made their deepest NCAA run ever, advancing to the NCAA Division III "Elite Eight" before falling in the quarterfinals to Calvin College, 3–1. The 2018 team finished the season with a record of 16–2–2.
In 2014, the Spartan men's tennis team was ranked in the Division III Top 10 for most of the season, and advanced to the NCAA Elite Eight before falling to Middlebury College. That same year, two CWRU tennis players, Eric Klawitter and Christopher Krimbill, won the NCAA men's doubles title. In 2021, the Spartan men's tennis team advanced all the way to the finals of the NCAA Division III championship before losing to Emory 5–2; the team finished with an overall 14–3 record, with two of those losses coming against non-Division III competition. CWRU has produced eight individual Division III national champions in Indoor and Outdoor Track and Field. The Case Western Reserve Ultimate Frisbee Team, although a club sport, competes against Division I teams around the country. Established in 1995, the Fighting Gobies have been successful, with the men's team taking home first place in the Ohio Valley Regional Tournament. CWRU wrestlers have won four individual Division III national titles. Notable people Notable alumni include John Charles Cutler, a former surgeon general whose human rights violations led to deaths in the Tuskegee Syphilis Study, the Terre Haute prison experiments, and the syphilis experiments in Guatemala; Anthony Russo and Joe Russo, Hollywood movie directors; Paul Buchheit, creator and lead developer of Gmail; Craig Newmark, billionaire founder of Craigslist; and Peter Tippett, developer of the anti-virus software Vaccine, which Symantec purchased and turned into the popular Norton AntiVirus. Founders of Fortune 500 companies include Herbert Henry Dow, founder of Dow Chemical; Art Parker, founder of Parker Hannifin; and Edward Williams, co-founder of Sherwin-Williams. Other notable alumni include Pete Koomen, co-founder and CTO of Optimizely; Larry Hurtado, one of the leading New Testament scholars in the world; Harvey Hilbert, a zen master, psychologist, and expert on post-Vietnam stress syndrome; Peter Sterling, neuroscientist and co-founder of the concept of allostasis; Ogiame Atuwatse III (Tsola Emiko), the 21st Olu of Warri, a historic monarch of the Itsekiri people in Nigeria's Delta region; and Donald Knuth, a leading expert on computer algorithms and creator of the TeX typesetting system. Nobel laureates See also Association of Independent Technological Universities References Further reading External links Case Western Reserve Athletics website 1826 establishments in Ohio Educational institutions established in 1826 Technological universities in the United States Universities and colleges in Cleveland University Circle Universities and colleges formed by merger in the United States Private universities and colleges in Ohio Western Reserve, Ohio
28249265
https://en.wikipedia.org/wiki/Bitcoin
Bitcoin
Bitcoin (₿) is a decentralized digital currency, without a central bank or single administrator, that can be sent from user to user on the peer-to-peer bitcoin network without the need for intermediaries. Transactions are verified by network nodes through cryptography and recorded in a public distributed ledger called a blockchain. The cryptocurrency was invented in 2008 by an unknown person or group of people using the name Satoshi Nakamoto. The currency began use in 2009 when its implementation was released as open-source software. Bitcoins are created as a reward for a process known as mining. They can be exchanged for other currencies, products, and services. Bitcoin has been criticized for its use in illegal transactions, the large amount of electricity (and thus carbon footprint) used by mining, price volatility, and thefts from exchanges. Some investors and economists have characterized it as a speculative bubble at various times. Others have used it as an investment, although several regulatory agencies have issued investor alerts about bitcoin. A few local and national governments are officially using Bitcoin in some capacity, with one country, El Salvador, adopting it as legal tender. The word bitcoin was defined in a white paper published on 31 October 2008. It is a compound of the words bit and coin. No uniform convention for bitcoin capitalization exists; some sources use Bitcoin, capitalized, to refer to the technology and network and bitcoin, lowercase, for the unit of account. The Wall Street Journal, The Chronicle of Higher Education, and the Oxford English Dictionary advocate the use of lowercase bitcoin in all cases. Design Units and divisibility The unit of account of the bitcoin system is the bitcoin. Currency codes for representing bitcoin are BTC and XBT. Its Unicode character is ₿. One bitcoin is divisible to eight decimal places. Units for smaller amounts of bitcoin are the millibitcoin (mBTC), equal to 0.001 bitcoin, and the satoshi (sat), which is the smallest possible division, and named in homage to bitcoin's creator, representing 0.00000001 (one hundred millionth) bitcoin. 100,000 satoshis are one mBTC. Blockchain The bitcoin blockchain is a public ledger that records bitcoin transactions. It is implemented as a chain of blocks, each block containing a hash of the previous block up to the genesis block in the chain. A network of communicating nodes running bitcoin software maintains the blockchain. Transactions of the form payer X sends Y bitcoins to payee Z are broadcast to this network using readily available software applications. Network nodes can validate transactions, add them to their copy of the ledger, and then broadcast these ledger additions to other nodes. To achieve independent verification of the chain of ownership each network node stores its own copy of the blockchain. At varying intervals of time, averaging every 10 minutes, a new group of accepted transactions, called a block, is created, added to the blockchain, and quickly published to all nodes, without requiring central oversight. This allows bitcoin software to determine when a particular bitcoin was spent, which is needed to prevent double-spending. A conventional ledger records the transfers of actual bills or promissory notes that exist apart from it, but the blockchain is the only place that bitcoins can be said to exist in the form of unspent outputs of transactions. Individual blocks, public addresses and transactions within blocks can be examined using a blockchain explorer.
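The hash-linking of blocks described above can be illustrated with a short Python sketch. The field names and helper functions below are simplifications chosen for illustration only; the real Bitcoin block format stores an 80-byte binary header that includes a Merkle root of the transactions, a timestamp, a difficulty field, and a nonce.

import hashlib
import json

def block_hash(block):
    # Hash a canonical JSON serialization of the block; real Bitcoin instead
    # applies SHA-256 twice to the binary block header.
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(hashlib.sha256(encoded).digest()).hexdigest()

def make_block(prev_hash, transactions):
    return {"prev_hash": prev_hash, "transactions": transactions}

# The genesis block has no predecessor; every later block commits to the hash
# of the block before it, so altering an old block changes all later hashes.
genesis = make_block(prev_hash="0" * 64, transactions=["coinbase reward to A"])
block1 = make_block(prev_hash=block_hash(genesis), transactions=["A pays B 1.0"])
block2 = make_block(prev_hash=block_hash(block1), transactions=["B pays C 0.4"])

for b in (genesis, block1, block2):
    print(block_hash(b))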
Transactions Transactions are defined using a Forth-like scripting language. Transactions consist of one or more inputs and one or more outputs. When a user sends bitcoins, the user designates each address and the amount of bitcoin being sent to that address in an output. To prevent double spending, each input must refer to a previous unspent output in the blockchain. The use of multiple inputs corresponds to the use of multiple coins in a cash transaction. Since transactions can have multiple outputs, users can send bitcoins to multiple recipients in one transaction. As in a cash transaction, the sum of inputs (coins used to pay) can exceed the intended sum of payments. In such a case, an additional output is used, returning the change back to the payer. Any input satoshis not accounted for in the transaction outputs become the transaction fee. Though transaction fees are optional, miners can choose which transactions to process and prioritize those that pay higher fees. Miners may choose transactions based on the fee paid relative to their storage size, not the absolute amount of money paid as a fee. These fees are generally measured in satoshis per byte (sat/b). The size of transactions is dependent on the number of inputs used to create the transaction, and the number of outputs. The blocks in the blockchain were originally limited to 32 megabytes in size. The block size limit of one megabyte was introduced by Satoshi Nakamoto in 2010. Eventually the block size limit of one megabyte created problems for transaction processing, such as increasing transaction fees and delayed processing of transactions. Andreas Antonopoulos has stated that the Lightning Network is a potential scaling solution and referred to lightning as a second-layer routing network. Ownership In the blockchain, bitcoins are registered to bitcoin addresses. Creating a bitcoin address requires nothing more than picking a random valid private key and computing the corresponding bitcoin address. This computation can be done in a split second. But the reverse, computing the private key of a given bitcoin address, is practically unfeasible. Users can tell others or make public a bitcoin address without compromising its corresponding private key. Moreover, the number of valid private keys is so vast that it is extremely unlikely someone will compute a key-pair that is already in use and has funds. The vast number of valid private keys makes it unfeasible that brute force could be used to compromise a private key. To be able to spend their bitcoins, the owner must know the corresponding private key and digitally sign the transaction. The network verifies the signature using the public key; the private key is never revealed. If the private key is lost, the bitcoin network will not recognize any other evidence of ownership; the coins are then unusable, and effectively lost. For example, in 2013 one user claimed to have lost 7,500 bitcoins, worth $7.5 million at the time, when he accidentally discarded a hard drive containing his private key. About 20% of all bitcoins are believed to be lost; they would have had a market value of about $20 billion at July 2018 prices. To ensure the security of bitcoins, the private key must be kept secret. If the private key is revealed to a third party, e.g. through a data breach, the third party can use it to steal any associated bitcoins. Around 980,000 bitcoins have been stolen from cryptocurrency exchanges.
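The input, output, change, and fee arithmetic described in the Transactions paragraph above can be sketched in Python; all figures below (the amounts, the assumed transaction size, and the address labels) are made-up illustrative values rather than real transaction data.

# All amounts are in satoshis (1 bitcoin = 100,000,000 satoshis).
inputs = [30_000_000, 5_000_000]      # two previously unspent outputs being spent
payment = 32_000_000                  # amount intended for the recipient
fee = 10_000                          # satoshis voluntarily left to the miner

change = sum(inputs) - payment - fee  # extra value returned to the payer
outputs = {"recipient_address": payment, "change_address": change}

# Whatever input value is not reassigned to an output becomes the fee.
implied_fee = sum(inputs) - sum(outputs.values())
assert implied_fee == fee

tx_size_bytes = 250                   # rough size of a simple two-input, two-output transaction
print(f"change: {change} sat, fee rate: {implied_fee / tx_size_bytes:.1f} sat/byte")

The key-pair relationship described in the Ownership paragraph can be sketched the same way using the third-party Python ecdsa package (an assumed stand-in here; production Bitcoin software uses dedicated secp256k1 libraries, and deriving an actual bitcoin address involves further hashing and Base58Check or bech32 encoding not shown).

# pip install ecdsa   (third-party package, used only for illustration)
import hashlib
from ecdsa import SigningKey, SECP256k1

# At its most basic, a wallet is a collection of private keys like this one.
private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()

# Spending requires signing the transaction data with the private key...
transaction = b"pay 0.1 BTC from address A to address B"
signature = private_key.sign(transaction, hashfunc=hashlib.sha256)

# ...and anyone can check the signature with the public key alone;
# the private key itself never has to be revealed.
assert public_key.verify(signature, transaction, hashfunc=hashlib.sha256)
print("signature verified")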
Regarding ownership distribution, as of 16 March 2018, 0.5% of bitcoin wallets own 87% of all bitcoins ever mined. Mining Mining is a record-keeping service done through the use of computer processing power. Miners keep the blockchain consistent, complete, and unalterable by repeatedly grouping newly broadcast transactions into a block, which is then broadcast to the network and verified by recipient nodes. Each block contains a SHA-256 cryptographic hash of the previous block, thus linking it to the previous block and giving the blockchain its name. To be accepted by the rest of the network, a new block must contain a proof-of-work (PoW). The PoW requires miners to find a number called a nonce (number used once), such that when the block content is hashed along with the nonce, the result is numerically smaller than the network's difficulty target. This proof is easy for any node in the network to verify, but extremely time-consuming to generate, as for a secure cryptographic hash, miners must try many different nonce values (usually the sequence of tested values is the ascending natural numbers: 0, 1, 2, 3, ...) before a result happens to be less than the difficulty target. Because the difficulty target is extremely small compared to a typical SHA-256 hash, block hashes have many leading zeros. By adjusting this difficulty target, the amount of work needed to generate a block can be changed. Every 2,016 blocks (approximately 14 days given roughly 10 minutes per block), nodes deterministically adjust the difficulty target based on the recent rate of block generation, with the aim of keeping the average time between new blocks at ten minutes. In this way the system automatically adapts to the total amount of mining power on the network. It takes on average 79 sextillion (79 thousand billion billion) attempts to generate a block hash smaller than the difficulty target. Computations of this magnitude are extremely expensive and utilize specialized hardware. The proof-of-work system, alongside the chaining of blocks, makes modifications of the blockchain extremely hard, as an attacker must modify all subsequent blocks in order for the modifications of one block to be accepted. As new blocks are mined all the time, the difficulty of modifying a block increases as time passes and the number of subsequent blocks (also called confirmations of the given block) increases. Computing power is often bundled together by a mining pool to reduce variance in miner income. Individual mining rigs often have to wait for long periods to confirm a block of transactions and receive payment. In a pool, all participating miners get paid every time a participating server solves a block. This payment depends on the amount of work an individual miner contributed to help find that block. Supply The successful miner finding the new block is allowed by the rest of the network to collect for themselves all transaction fees from transactions they included in the block, as well as a pre-determined reward of newly created bitcoins. This reward is currently 6.25 newly created bitcoins per block. To claim this reward, a special transaction called a coinbase is included in the block, with the miner as the payee. All bitcoins in existence have been created through this type of transaction. The bitcoin protocol specifies that the reward for adding a block will be reduced by half every 210,000 blocks (approximately every four years).
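The nonce search described above can be imitated with a toy proof-of-work loop in Python. The header string and the target below are placeholders chosen so the example finishes quickly; a real block header is an 80-byte binary structure and the real difficulty target makes the search astronomically harder.

import hashlib

def double_sha256(data: bytes) -> int:
    # Bitcoin applies SHA-256 twice; the result is compared as a 256-bit integer.
    return int.from_bytes(hashlib.sha256(hashlib.sha256(data).digest()).digest(), "big")

def mine(header: bytes, target: int) -> int:
    nonce = 0
    while True:
        # Proof-of-work condition: hash of header-plus-nonce must be below the target.
        if double_sha256(header + nonce.to_bytes(8, "little")) < target:
            return nonce
        nonce += 1

easy_target = 1 << (256 - 16)   # needs roughly four leading hex zeros, ~65,000 attempts
print("found nonce:", mine(b"prev_hash|merkle_root|timestamp|", easy_target))

The halving schedule mentioned at the end of the Supply paragraph also fixes the maximum number of bitcoins that can ever exist: summing the geometric series of block rewards, in integer satoshis as the protocol does, gives a total just below 21 million.

SATOSHI = 100_000_000
subsidy = 50 * SATOSHI          # initial block reward of 50 BTC, expressed in satoshis
blocks_per_halving = 210_000

total = 0
while subsidy > 0:
    total += subsidy * blocks_per_halving
    subsidy //= 2               # integer halving; the reward eventually rounds down to zero

print(total / SATOSHI)          # about 20,999,999.98 BTC, just under 21 million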
Eventually, the reward will round down to zero, and the limit of 21 million bitcoins will be reached around 2140; the record keeping will then be rewarded by transaction fees only. Decentralization Bitcoin is decentralized thus: Bitcoin does not have a central authority. The bitcoin network is peer-to-peer, without central servers. The network also has no central storage; the bitcoin ledger is distributed. The ledger is public; anybody can store it on a computer. There is no single administrator; the ledger is maintained by a network of equally privileged miners. Anyone can become a miner. The additions to the ledger are maintained through competition. Until a new block is added to the ledger, it is not known which miner will create the block. The issuance of bitcoins is decentralized. They are issued as a reward for the creation of a new block. Anybody can create a new bitcoin address (a bitcoin counterpart of a bank account) without needing any approval. Anybody can send a transaction to the network without needing any approval; the network merely confirms that the transaction is legitimate. Conversely, researchers have pointed to a "trend towards centralization". Although bitcoin can be sent directly from user to user, in practice intermediaries are widely used. Bitcoin miners join large mining pools to minimize the variance of their income. Because transactions on the network are confirmed by miners, decentralization of the network requires that no single miner or mining pool obtains 51% of the hashing power, which would allow them to double-spend coins, prevent certain transactions from being verified and prevent other miners from earning income. Just six mining pools controlled 75% of overall bitcoin hashing power. In 2014 mining pool Ghash.io obtained 51% hashing power, which raised significant controversy about the safety of the network. The pool voluntarily capped its hashing power at 39.99% and requested that other pools act responsibly for the benefit of the whole network. Around the year 2017, over 70% of the hashing power and 90% of transactions were operating from China. According to researchers, other parts of the ecosystem are also "controlled by a small set of entities", notably the maintenance of the client software, online wallets and simplified payment verification (SPV) clients. Privacy and fungibility Bitcoin is pseudonymous, meaning that funds are not tied to real-world entities but rather bitcoin addresses. Owners of bitcoin addresses are not explicitly identified, but all transactions on the blockchain are public. In addition, transactions can be linked to individuals and companies through "idioms of use" (e.g., transactions that spend coins from multiple inputs indicate that the inputs may have a common owner) and corroborating public transaction data with known information on owners of certain addresses. Additionally, bitcoin exchanges, where bitcoins are traded for traditional currencies, may be required by law to collect personal information. To heighten financial privacy, a new bitcoin address can be generated for each transaction. Wallets and similar software technically handle all bitcoins as equivalent, establishing the basic level of fungibility. Researchers have pointed out that the history of each bitcoin is registered and publicly available in the blockchain ledger, and that some users may refuse to accept bitcoins coming from controversial transactions, which would harm bitcoin's fungibility. For example, in 2012, Mt.
Gox froze accounts of users who deposited bitcoins that were known to have just been stolen. Wallets A wallet stores the information necessary to transact bitcoins. While wallets are often described as a place to hold or store bitcoins, due to the nature of the system, bitcoins are inseparable from the blockchain transaction ledger. A wallet is more correctly defined as something that "stores the digital credentials for your bitcoin holdings" and allows one to access (and spend) them. Bitcoin uses public-key cryptography, in which two cryptographic keys, one public and one private, are generated. At its most basic, a wallet is a collection of these keys. Software wallets The first wallet program, simply named Bitcoin, and sometimes referred to as the Satoshi client, was released in 2009 by Satoshi Nakamoto as open-source software. In version 0.5 the client moved from the wxWidgets user interface toolkit to Qt, and the whole bundle was referred to as Bitcoin-Qt. After the release of version 0.9, the software bundle was renamed Bitcoin Core to distinguish itself from the underlying network. Bitcoin Core is, perhaps, the best known implementation or client. Alternative clients (forks of Bitcoin Core) exist, such as Bitcoin XT, Bitcoin Unlimited, and Parity Bitcoin. There are several modes in which wallets can operate. They have an inverse relationship with regard to trustlessness and computational requirements. Full clients verify transactions directly by downloading a full copy of the blockchain (over 150 GB). They are the most secure and reliable way of using the network, as trust in external parties is not required. Full clients check the validity of mined blocks, preventing them from transacting on a chain that breaks or alters network rules. Because of its size and complexity, downloading and verifying the entire blockchain is not suitable for all computing devices. Lightweight clients consult full nodes to send and receive transactions without requiring a local copy of the entire blockchain (see simplified payment verification – SPV). This makes lightweight clients much faster to set up and allows them to be used on low-power, low-bandwidth devices such as smartphones. When using a lightweight wallet, however, the user must trust full nodes, as they can report faulty values back to the user. Lightweight clients follow the longest blockchain and do not ensure it is valid, requiring trust in full nodes. Third-party internet services called online wallets or webwallets offer similar functionality but may be easier to use. In this case, credentials to access funds are stored with the online wallet provider rather than on the user's hardware. As a result, the user must have complete trust in the online wallet provider. A malicious provider or a breach in server security may cause entrusted bitcoins to be stolen. An example of such a security breach occurred with Mt. Gox in 2011. Cold storage Wallet software is targeted by hackers because of the lucrative potential for stealing bitcoins. A technique called "cold storage" keeps private keys out of reach of hackers; this is accomplished by keeping private keys offline at all times by generating them on a device that is not connected to the internet. The credentials necessary to spend bitcoins can be stored offline in a number of different ways, from specialized hardware wallets to simple paper printouts of the private key. Hardware wallets A hardware wallet is a computer peripheral that signs transactions as requested by the user.
These devices store private keys and carry out signing and encryption internally, and do not share any sensitive information with the host computer except already signed (and thus unalterable) transactions. Because hardware wallets never expose their private keys, even computers that may be compromised by malware do not have a vector to access or steal them. The user sets a passcode when setting up a hardware wallet. As hardware wallets are tamper-resistant, the passcode will be needed to extract any money. Paper wallets A paper wallet is created with a keypair generated on a computer with no internet connection; the private key is written or printed onto the paper and then erased from the computer. The paper wallet can then be stored in a safe physical location for later retrieval. Physical wallets can also take the form of metal token coins with a private key accessible under a security hologram in a recess struck on the reverse side. The security hologram self-destructs when removed from the token, showing that the private key has been accessed. Originally, these tokens were struck in brass and other base metals, but later used precious metals as bitcoin grew in value and popularity. Coins with stored face value as high as ₿1000 have been struck in gold. The British Museum's coin collection includes four specimens from the earliest series of funded bitcoin tokens; one is currently on display in the museum's money gallery. In 2013, a Utah manufacturer of these tokens was ordered by the Financial Crimes Enforcement Network (FinCEN) to register as a money services business before producing any more funded bitcoin tokens. History Creation The domain name bitcoin.org was registered on 18 August 2008. On 31 October 2008, a link to a paper authored by Satoshi Nakamoto titled Bitcoin: A Peer-to-Peer Electronic Cash System was posted to a cryptography mailing list. Nakamoto implemented the bitcoin software as open-source code and released it in January 2009. Nakamoto's identity remains unknown. On 3 January 2009, the bitcoin network was created when Nakamoto mined the starting block of the chain, known as the genesis block. Embedded in the coinbase of this block was the text "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks". This note references a headline published by The Times and has been interpreted as both a timestamp and a comment on the instability caused by fractional-reserve banking. The receiver of the first bitcoin transaction was Hal Finney, who had created the first reusable proof-of-work system (RPoW) in 2004. Finney downloaded the bitcoin software on its release date, and on 12 January 2009 received ten bitcoins from Nakamoto. Other early cypherpunk supporters were creators of bitcoin predecessors: Wei Dai, creator of b-money, and Nick Szabo, creator of bit gold. In 2010, the first known commercial transaction using bitcoin occurred when programmer Laszlo Hanyecz bought two Papa John's pizzas for ₿10,000 from Jeremy Sturdivant. Blockchain analysts estimate that Nakamoto had mined about one million bitcoins before disappearing in 2010 when he handed the network alert key and control of the code repository over to Gavin Andresen. Andresen later became lead developer at the Bitcoin Foundation. Andresen then sought to decentralize control. This left opportunity for controversy to develop over the future development path of bitcoin, in contrast to the perceived authority of Nakamoto's contributions. 
2011–2012 After early "proof-of-concept" transactions, the first major users of bitcoin were black markets, such as Silk Road. During its 30 months of existence, beginning in February 2011, Silk Road exclusively accepted bitcoins as payment, transacting 9.9 million in bitcoins, worth about $214 million. In 2011, the price started at $0.30 per bitcoin, growing to $5.27 for the year. The price rose to $31.50 on 8 June. Within a month, the price fell to $11.00. The next month it fell to $7.80, and in another month to $4.77. In 2012, bitcoin prices started at $5.27, growing to $13.30 for the year. By 9 January the price had risen to $7.38, but then crashed by 49% to $3.80 over the next 16 days. The price then rose to $16.41 on 17 August, but fell by 57% to $7.10 over the next three days. The Bitcoin Foundation was founded in September 2012 to promote bitcoin's development and uptake. On 1 November 2011, the reference implementation Bitcoin-Qt version 0.5.0 was released. It introduced a front end that used the Qt user interface toolkit. The software previously used Berkeley DB for database management. Developers switched to LevelDB in release 0.8 in order to reduce blockchain synchronization time. The update to this release resulted in a minor blockchain fork on 11 March 2013. The fork was resolved shortly afterwards. Seeding nodes through IRC was discontinued in version 0.8.2. From version 0.9.0 the software was renamed to Bitcoin Core. Transaction fees were reduced again by a factor of ten as a means to encourage microtransactions. Although Bitcoin Core does not use OpenSSL for the operation of the network, the software did use OpenSSL for remote procedure calls. Version 0.9.1 was released to remove the network's vulnerability to the Heartbleed bug. 2013–2016 In 2013, prices started at $13.30 rising to $770 by 1 January 2014. In March 2013 the blockchain temporarily split into two independent chains with different rules due to a bug in version 0.8 of the bitcoin software. The two blockchains operated simultaneously for six hours, each with its own version of the transaction history from the moment of the split. Normal operation was restored when the majority of the network downgraded to version 0.7 of the bitcoin software, selecting the backwards-compatible version of the blockchain. As a result, this blockchain became the longest chain and could be accepted by all participants, regardless of their bitcoin software version. During the split, the Mt. Gox exchange briefly halted bitcoin deposits and the price dropped by 23% to $37 before recovering to the previous level of approximately $48 in the following hours. The US Financial Crimes Enforcement Network (FinCEN) established regulatory guidelines for "decentralized virtual currencies" such as bitcoin, classifying American bitcoin miners who sell their generated bitcoins as Money Service Businesses (MSBs), that are subject to registration or other legal obligations. In April, exchanges BitInstant and Mt. Gox experienced processing delays due to insufficient capacity resulting in the bitcoin price dropping from $266 to $76 before returning to $160 within six hours. The bitcoin price rose to $259 on 10 April, but then crashed by 83% to $45 over the next three days. On 15 May 2013, US authorities seized accounts associated with Mt. Gox after discovering it had not registered as a money transmitter with FinCEN in the US. 
On 23 June 2013, the US Drug Enforcement Administration listed ₿11.02 as a seized asset in a United States Department of Justice seizure notice pursuant to 21 U.S.C. § 881. This marked the first time a government agency had seized bitcoin. The FBI seized about ₿30,000 in October 2013 from the dark web website Silk Road, following the arrest of Ross William Ulbricht. These bitcoins were sold at blind auction by the United States Marshals Service to venture capital investor Tim Draper. Bitcoin's price rose to $755 on 19 November and crashed by 50% to $378 the same day. On 30 November 2013, the price reached $1,163 before starting a long-term crash, declining by 87% to $152 in January 2015. On 5 December 2013, the People's Bank of China prohibited Chinese financial institutions from using bitcoins. After the announcement, the value of bitcoins dropped, and Baidu no longer accepted bitcoins for certain services. Buying real-world goods with any virtual currency had been illegal in China since at least 2009. In 2014, prices started at $770 and fell to $314 for the year. On 30 July 2014, the Wikimedia Foundation started accepting donations of bitcoin. In 2015, prices started at $314 and rose to $434 for the year. In 2016, prices rose and climbed up to $998 by 1 January 2017. Release 0.10 of the software was made public on 16 February 2015. It introduced a consensus library which gave programmers easy access to the rules governing consensus on the network. In version 0.11.2 developers added a new feature which allowed transactions to be made unspendable until a specific time in the future. Bitcoin Core 0.12.1 was released on 15 April 2016, and enabled multiple soft forks to occur concurrently. Around 100 contributors worked on Bitcoin Core 0.13.0, which was released on 23 August 2016. In July 2016, the CheckSequenceVerify soft fork activated. In August 2016, the Bitfinex cryptocurrency exchange platform was hacked in the second-largest breach of a Bitcoin exchange platform up to that time, and 119,756 bitcoin, worth about $72 million at the time, were stolen. In October 2016, Bitcoin Core's 0.13.1 release featured the "Segwit" soft fork that included a scaling improvement aiming to optimize the bitcoin blocksize. The patch was originally finalised in April, and 35 developers were engaged to deploy it. This release featured Segregated Witness (SegWit), which aimed to place downward pressure on transaction fees as well as increase the maximum transaction capacity of the network. The 0.13.1 release endured extensive testing and research, leading to some delays in its release date. SegWit prevents various forms of transaction malleability. 2017–2019 Research produced by the University of Cambridge estimated that in 2017, there were 2.9 to 5.8 million unique users using a cryptocurrency wallet, most of them using bitcoin. On 15 July 2017, the controversial Segregated Witness [SegWit] software upgrade was approved ("locked-in"). Segwit was intended to support the Lightning Network as well as improve scalability. SegWit was subsequently activated on the network on 24 August 2017. The bitcoin price rose almost 50% in the week following SegWit's approval. On 21 July 2017, bitcoin was trading at $2,748, up 52% from 14 July 2017's $1,835. Supporters of large blocks who were dissatisfied with the activation of SegWit forked the software on 1 August 2017 to create Bitcoin Cash, becoming one of many forks of bitcoin such as Bitcoin Gold.
Prices started at $998 in 2017 and rose to $13,412.44 on 1 January 2018, after reaching an all-time high of $19,783.06 on 17 December 2017. China banned trading in bitcoin, with first steps taken in September 2017, and a complete ban that started on 1 February 2018. Bitcoin prices then fell from $9,052 to $6,914 on 5 February 2018. The percentage of bitcoin trading in the Chinese renminbi fell from over 90% in September 2017 to less than 1% in June 2018. Throughout the rest of the first half of 2018, bitcoin's price fluctuated between $11,480 and $5,848. On 1 July 2018, bitcoin's price was $6,343. The price on 1 January 2019 was $3,747, down 72% for 2018 and down 81% since the all-time high. In September 2018, an anonymous party discovered and reported an invalid-block denial-of-service vulnerability to developers of Bitcoin Core, Bitcoin ABC and Bitcoin Unlimited. Further analysis by bitcoin developers showed the issue could also allow the creation of blocks violating the 21 million coin limit; a CVE identifier was assigned and the issue was resolved. Bitcoin prices were negatively affected by several hacks or thefts from cryptocurrency exchanges, including thefts from Coincheck in January 2018, Bithumb in June, and Bancor in July. For the first six months of 2018, $761 million worth of cryptocurrencies was reported stolen from exchanges. Bitcoin's price was affected even though other cryptocurrencies were stolen at Coinrail and Bancor, as investors worried about the security of cryptocurrency exchanges. In September 2019 the Intercontinental Exchange (the owner of the NYSE) began trading bitcoin futures on its Bakkt exchange. Bakkt also announced that it would launch options on bitcoin in December 2019. In December 2019, YouTube removed bitcoin and cryptocurrency videos, but later restored the content after judging they had "made the wrong call." In February 2019, Canadian cryptocurrency exchange Quadriga Fintech Solutions failed with approximately $200 million missing. By June 2019 the price had recovered to $13,000. 2020–present On 13 March 2020, bitcoin fell below $4,000 during a broad market selloff, after trading above $10,000 in February 2020. On 11 March 2020, 281,000 bitcoins were sold, held by owners for only thirty days. This compared to ₿4,131 that had lain dormant for a year or more, indicating that the vast majority of the bitcoin volatility on that day was from recent buyers. During the week of 11 March 2020, cryptocurrency exchange Kraken experienced an 83% increase in the number of account signups over the week of bitcoin's price collapse, a result of buyers looking to capitalize on the low price. These events were attributed to the onset of the COVID-19 pandemic. In August 2020, MicroStrategy invested $250 million in bitcoin as a treasury reserve asset. In October 2020, Square, Inc. placed approximately 1% of total assets ($50 million) in bitcoin. In November 2020, PayPal announced that US users could buy, hold, or sell bitcoin. On 30 November 2020, the bitcoin value reached a new all-time high of $19,860, topping the previous high of December 2017. Alexander Vinnik, founder of BTC-e, was convicted and sentenced to five years in prison for money laundering in France while refusing to testify during his trial. In December 2020 Massachusetts Mutual Life Insurance Company announced a bitcoin purchase of US$100 million, or roughly 0.04% of its general investment account.
On 19 January 2021, Elon Musk placed the handle #Bitcoin in his Twitter profile, tweeting "In retrospect, it was inevitable", which caused the price to briefly rise about $5,000 in an hour to $37,299. On 25 January 2021, MicroStrategy announced that it continued to buy bitcoin and as of the same date it had holdings of ₿70,784 worth $2.38 billion. On 8 February 2021 Tesla's announcement of a bitcoin purchase of US$1.5 billion and the plan to start accepting bitcoin as payment for vehicles pushed the bitcoin price to $44,141. On 18 February 2021, Elon Musk stated that "owning bitcoin was only a little better than holding conventional cash, but that the slight difference made it a better asset to hold". After 49 days of accepting the digital currency, Tesla reversed course on 12 May 2021, saying they would no longer take Bitcoin due to concerns that "mining" the cryptocurrency was contributing to the consumption of fossil fuels and climate change. The decision resulted in the price of Bitcoin dropping around 12% on 13 May. During a July Bitcoin conference, Musk suggested Tesla could possibly help Bitcoin miners switch to renewable energy in the future, and also stated at the same conference that if Bitcoin mining reaches, and trends above, 50 percent renewable energy usage, "Tesla would resume accepting bitcoin." The price for bitcoin rose after this announcement. In June 2021, the Legislative Assembly of El Salvador passed legislation to make Bitcoin legal tender in El Salvador. The law took effect on 7 September. The implementation of the law has been met with protests and calls to make the currency optional, not compulsory. According to a survey by the Central American University, the majority of Salvadorans disagreed with using cryptocurrency as a legal tender, and a survey by the Center for Citizen Studies (CEC) showed that 91% of the country prefers the dollar over Bitcoin. As of October 2021, the country's government was exploring mining bitcoin with geothermal power and issuing bonds tied to bitcoin. According to a survey done by the Central American University 100 days after the Bitcoin Law came into force: 34.8% of the population has no confidence in Bitcoin, 35.3% has little confidence, 13.2% has some confidence, and 14.1% has a lot of confidence. 56.6% of respondents have downloaded the government Bitcoin wallet; among them 62.9% has never used it or used it only once, whereas 36.3% uses Bitcoin at least once a month. In 2022, the International Monetary Fund (IMF) urged El Salvador to reverse its decision after Bitcoin lost half its value in two months. The IMF also warned that it would be difficult to get a loan from the institution. Also in June 2021, the Taproot network software upgrade was approved, adding support for Schnorr signatures and improved functionality of smart contracts and the Lightning Network. The upgrade was installed in November. On 16 October 2021, the SEC approved the ProShares Bitcoin Strategy ETF, a cash-settled futures exchange-traded fund (ETF). The first bitcoin ETF in the United States gained 5% on its first trading day on 19 October 2021. Associated ideologies Satoshi Nakamoto stated in his white paper that: "The root problem with conventional currencies is all the trust that's required to make it work. The central bank must be trusted not to debase the currency, but the history of fiat currencies is full of breaches of that trust."
Austrian economics roots According to the European Central Bank, the decentralization of money offered by bitcoin has its theoretical roots in the Austrian school of economics, especially with Friedrich von Hayek in his book Denationalisation of Money: The Argument Refined, in which Hayek advocates a complete free market in the production, distribution and management of money to end the monopoly of central banks. Anarchism and libertarianism According to The New York Times, libertarians and anarchists were attracted to the philosophical idea behind bitcoin. Early bitcoin supporter Roger Ver said: "At first, almost everyone who got involved did so for philosophical reasons. We saw bitcoin as a great idea, as a way to separate money from the state." The Economist describes bitcoin as "a techno-anarchist project to create an online version of cash, a way for people to transact without the possibility of interference from malicious governments or banks". Economist Paul Krugman argues that cryptocurrencies like bitcoin are "something of a cult" based in "paranoid fantasies" of government power. Nigel Dodd argues in The Social Life of Bitcoin that the essence of the bitcoin ideology is to remove money from social, as well as governmental, control. Dodd quotes a YouTube video, with Roger Ver, Jeff Berwick, Charlie Shrem, Andreas Antonopoulos, Gavin Wood, Trace Meyer and other proponents of bitcoin reading The Declaration of Bitcoin's Independence. The declaration includes a message of crypto-anarchism with the words: "Bitcoin is inherently anti-establishment, anti-system, and anti-state. Bitcoin undermines governments and disrupts institutions because bitcoin is fundamentally humanitarian." David Golumbia says that the ideas influencing bitcoin advocates emerge from right-wing extremist movements such as the Liberty Lobby and the John Birch Society and their anti-Central Bank rhetoric, or, more recently, Ron Paul and Tea Party-style libertarianism. Steve Bannon, who owns a "good stake" in bitcoin, considers it to be "disruptive populism. It takes control back from central authorities. It's revolutionary." A 2014 study of Google Trends data found correlations between bitcoin-related searches and ones related to computer programming and illegal activity, but not libertarianism or investment topics. Economics Bitcoin is a digital asset designed to work in peer-to-peer transactions as a currency. Bitcoins have three qualities useful in a currency, according to The Economist in January 2015: they are "hard to earn, limited in supply and easy to verify." Per some researchers, , bitcoin functions more as a payment system than as a currency. Economists define money as serving the following three purposes: a store of value, a medium of exchange, and a unit of account. According to The Economist in 2014, bitcoin functions best as a medium of exchange. However, this is debated, and a 2018 assessment by The Economist stated that cryptocurrencies met none of these three criteria. Yale economist Robert J. Shiller writes that bitcoin has potential as a unit of account for measuring the relative value of goods, as with Chile's Unidad de Fomento, but that "Bitcoin in its present form [...] doesn't really solve any sensible economic problem". According to research by Cambridge University, between 2.9 million and 5.8 million unique users used a cryptocurrency wallet in 2017, most of them for bitcoin. The number of users has grown significantly since 2013, when there were 300,000–1.3 million users. 
Acceptance by merchants The overwhelming majority of bitcoin transactions take place on a cryptocurrency exchange, rather than being used in transactions with merchants. Delays processing payments through the blockchain of about ten minutes make bitcoin use very difficult in a retail setting. Prices are not usually quoted in units of bitcoin and many trades involve one, or sometimes two, conversions into conventional currencies. Merchants that do accept bitcoin payments may use payment service providers to perform the conversions. In 2017 and 2018 bitcoin's acceptance among major online retailers included only three of the top 500 U.S. online merchants, down from five in 2016. Reasons for this decline include high transaction fees due to bitcoin's scalability issues and long transaction times. Bloomberg reported that the largest 17 crypto merchant-processing services handled $69 million in June 2018, down from $411 million in September 2017. Bitcoin is "not actually usable" for retail transactions because of high costs and the inability to process chargebacks, according to Nicholas Weaver, a researcher quoted by Bloomberg. High price volatility and transaction fees make paying for small retail purchases with bitcoin impractical, according to economist Kim Grauer. However, bitcoin continues to be used for large-item purchases on sites such as Overstock.com, and for cross-border payments to freelancers and other vendors. Financial institutions Bitcoins can be bought on digital currency exchanges. Per researchers, "there is little sign of bitcoin use" in international remittances despite high fees charged by banks and Western Union who compete in this market. The South China Morning Post, however, mentions the use of bitcoin by Hong Kong workers to transfer money home. In 2014, the National Australia Bank closed accounts of businesses with ties to bitcoin, and HSBC refused to serve a hedge fund with links to bitcoin. Australian banks in general have been reported as closing down bank accounts of operators of businesses involving the currency. On 10 December 2017, the Chicago Board Options Exchange started trading bitcoin futures, followed by the Chicago Mercantile Exchange, which started trading bitcoin futures on 17 December 2017. In September 2019 the Central Bank of Venezuela, at the request of PDVSA, ran tests to determine if bitcoin and ether could be held in central bank's reserves. The request was motivated by oil company's goal to pay its suppliers. François R. Velde, Senior Economist at the Chicago Fed, described bitcoin as "an elegant solution to the problem of creating a digital currency". David Andolfatto, Vice President at the Federal Reserve Bank of St. Louis, stated that bitcoin is a threat to the establishment, which he argues is a good thing for the Federal Reserve System and other central banks, because it prompts these institutions to operate sound policies. As an investment The Winklevoss twins have purchased bitcoin. In 2013, The Washington Post reported a claim that they owned 1% of all the bitcoins in existence at the time. Other methods of investment are bitcoin funds. The first regulated bitcoin fund was established in Jersey in July 2014 and approved by the Jersey Financial Services Commission. Forbes named bitcoin the best investment of 2013. In 2014, Bloomberg named bitcoin one of its worst investments of the year. In 2015, bitcoin topped Bloomberg's currency tables. 
According to bitinfocharts.com, in 2017, there were 9,272 bitcoin wallets with more than $1 million worth of bitcoins. The exact number of bitcoin millionaires is uncertain as a single person can have more than one bitcoin wallet. Venture capital Peter Thiel's Founders Fund invested million in BitPay. In 2012, an incubator for bitcoin-focused start-ups was founded by Adam Draper, with financing help from his father, venture capitalist Tim Draper, one of the largest bitcoin holders after winning an auction of 30,000 bitcoins, at the time called the "mystery buyer". The company's goal is to fund 100 bitcoin businesses within 2–3 years with $10,000 to $20,000 for a 6% stake. Investors also invest in bitcoin mining. According to a 2015 study by Paolo Tasca, bitcoin startups raised almost $1 billion in three years (Q1 2012 – Q1 2015). Price and volatility The price of bitcoins has gone through cycles of appreciation and depreciation referred to by some as bubbles and busts. In 2011, the value of one bitcoin rapidly rose from about US$0.30 to US$32 before returning to US$2. In the latter half of 2012 and during the 2012–13 Cypriot financial crisis, the bitcoin price began to rise, reaching a high of US$266 on 10 April 2013, before crashing to around US$50. On 29 November 2013, the cost of one bitcoin rose to a peak of US$1,242. In 2014, the price fell sharply, and as of April remained depressed at little more than half of 2013 prices. It was under US$600. According to Mark T. Williams, bitcoin has volatility seven times greater than gold, eight times greater than the S&P 500, and 18 times greater than the US dollar. Hodl is a meme created in reference to holding (as opposed to selling) during periods of volatility. Unusual for an asset, bitcoin weekend trading during December 2020 was higher than for weekdays. Hedge funds (using high leverage and derivatives) have attempted to use the volatility to profit from downward price movements. At the end of January 2021, such positions were over $1 billion, their highest of all time. The closing price of bitcoin equaled US$44,797. Legal status, tax and regulation Because of bitcoin's decentralized nature and its trading on online exchanges located in many countries, regulation of bitcoin has been difficult. However, the use of bitcoin can be criminalized, and shutting down exchanges and the peer-to-peer economy in a given country would constitute a de facto ban. The legal status of bitcoin varies substantially from country to country and is still undefined or changing in many of them. Regulations and bans that apply to bitcoin probably extend to similar cryptocurrency systems. According to the Library of Congress, an "absolute ban" on trading or using cryptocurrencies applies in nine countries: Algeria, Bolivia, Egypt, Iraq, Morocco, Nepal, Pakistan, Vietnam, and the United Arab Emirates. An "implicit ban" applies in another 15 countries, which include Bahrain, Bangladesh, China, Colombia, the Dominican Republic, Indonesia, Kuwait, Lesotho, Lithuania, Macau, Oman, Qatar, Saudi Arabia and Taiwan. Regulatory warnings The U.S. Commodity Futures Trading Commission has issued four "Customer Advisories" for bitcoin and related investments. A July 2018 warning emphasized that trading in any cryptocurrency is often speculative, and that there is a risk of theft from hacking and fraud. In May 2014 the U.S.
Securities and Exchange Commission warned that investments involving bitcoin might have high rates of fraud, and that investors might be solicited on social media sites. An earlier "Investor Alert" warned about the use of bitcoin in Ponzi schemes. The European Banking Authority issued a warning in 2013 focusing on the lack of regulation of bitcoin, the chance that exchanges would be hacked, the volatility of bitcoin's price, and general fraud. FINRA and the North American Securities Administrators Association have both issued investor alerts about bitcoin. Price manipulation investigation An official investigation into bitcoin traders was reported in May 2018. The U.S. Justice Department launched an investigation into possible price manipulation, including the techniques of spoofing and wash trades. The U.S. federal investigation was prompted by concerns of possible manipulation during futures settlement dates. The final settlement price of CME bitcoin futures is determined by prices on four exchanges, Bitstamp, Coinbase, itBit and Kraken. Following the first delivery date in January 2018, the CME requested extensive detailed trading information but several of the exchanges refused to provide it and later provided only limited data. The Commodity Futures Trading Commission then subpoenaed the data from the exchanges. State and provincial securities regulators, coordinated through the North American Securities Administrators Association, are investigating "bitcoin scams" and ICOs in 40 jurisdictions. Academic research published in the Journal of Monetary Economics concluded that price manipulation occurred during the Mt Gox bitcoin theft and that the market remains vulnerable to manipulation. The history of hacks, fraud and theft involving bitcoin dates back to at least 2011. Research by John M. Griffin and Amin Shams in 2018 suggests that trading associated with increases in the amount of the Tether cryptocurrency and associated trading at the Bitfinex exchange account for about half of the price increase in bitcoin in late 2017. J.L. van der Velde, CEO of both Bitfinex and Tether, denied the claims of price manipulation: "Bitfinex nor Tether is, or has ever, engaged in any sort of market or price manipulation. Tether issuances cannot be used to prop up the price of bitcoin or any other coin/token on Bitfinex." Adoption by governments El Salvador officially adopted Bitcoin as legal tender, in the face of internal and international criticism, becoming the first nation to do so. Iran announced pending regulations that would require bitcoin miners in Iran to sell bitcoin to the Central Bank of Iran, and the central bank would use it for imports. Iran, as of October 2020, had issued over 1,000 bitcoin mining licenses. The Iranian government initially took a stance against cryptocurrency, but later changed it after seeing that digital currency could be used to circumvent sanctions. The US Office of Foreign Assets Control listed two Iranians and their bitcoin addresses as part of its Specially Designated Nationals and Blocked Persons List for their role in the 2018 Atlanta cyberattack whose ransom was paid in bitcoin. In Switzerland, the Canton of Zug accepts tax payments in bitcoin. 
Criticisms Economic concerns Bitcoin, along with other cryptocurrencies, has been described as an economic bubble by at least eight Nobel Memorial Prize in Economic Sciences laureates at various times, including Robert Shiller on 1 March 2014, Joseph Stiglitz on 29 November 2017, and Richard Thaler on 21 December 2017. On 29 January 2018, noted Keynesian economist Paul Krugman described bitcoin as "a bubble wrapped in techno-mysticism inside a cocoon of libertarian ideology"; on 2 February 2018, professor Nouriel Roubini of New York University called bitcoin the "mother of all bubbles"; and on 27 April 2018, University of Chicago economist James Heckman compared it to the 17th-century tulip mania. Journalists, economists, investors, and the central bank of Estonia have voiced concerns that bitcoin is a Ponzi scheme. In April 2013, Eric Posner, a law professor at the University of Chicago, stated that "a real Ponzi scheme takes fraud; bitcoin, by contrast, seems more like a collective delusion." A July 2014 report by the World Bank concluded that bitcoin was not a deliberate Ponzi scheme. In June 2014, the Swiss Federal Council examined concerns that bitcoin might be a pyramid scheme, and concluded that "since in the case of bitcoin the typical promises of profits are lacking, it cannot be assumed that bitcoin is a pyramid scheme." Bitcoin wealth is highly concentrated, with 0.01% of holders controlling 27% of the currency in circulation as of 2021. Energy consumption and carbon footprint Bitcoin has been criticized for the amount of electricity consumed by mining. The Cambridge Centre for Alternative Finance (CCAF) estimates that bitcoin consumes 131 TWh annually, representing 0.29% of the world's energy production and ranking bitcoin mining between Ukraine and Egypt in terms of electricity consumption. Until 2021, according to the CCAF, much of bitcoin mining was done in China. Chinese miners used to rely on cheap coal power in Xinjiang in late autumn, winter and spring, and then migrate to regions with overcapacities in low-cost hydropower, like Sichuan, between May and October. In June 2021 China banned Bitcoin mining and Chinese miners moved to other countries such as the US and Kazakhstan. As of September 2021, according to the New York Times, Bitcoin's use of renewables ranges from 40% to 75%. According to the Bitcoin Mining Council and based on a survey of 32% of the current global bitcoin network, 56% of bitcoin mining came from renewable resources in Q2 2021. The development of intermittent renewable energy sources, such as wind power and solar power, is challenging because they cause instability in the electrical grid. Several papers concluded that these renewable power stations could use the surplus energy to mine Bitcoin and thereby reduce curtailment, hedge electricity price risk, stabilize the grid, increase the profitability of renewable energy infrastructure, and therefore accelerate the transition to sustainable energy and decrease Bitcoin's carbon footprint. Concerns about bitcoin's environmental impact relate bitcoin's energy consumption to carbon emissions. Translating energy consumption into carbon emissions is difficult because the decentralized nature of bitcoin impedes locating miners in order to examine the electricity mix they use. The results of recent studies analyzing bitcoin's carbon footprint vary. A 2018 study published in Nature Climate Change by Mora et al.
claimed that bitcoin "could alone produce enough emissions to push warming above 2 °C within less than three decades." However, three other studies also published in Nature Climate Change later dismissed this analysis on account of its poor methodology and false assumptions, with one study concluding: "[T]he scenarios used by Mora et al are fundamentally flawed and should not be taken seriously by the public, researchers, or policymakers." According to studies published in Joule and by the American Chemical Society in 2019, bitcoin's annual energy consumption results in annual carbon emissions ranging from 17 to 22.9 Mt, which is comparable to the emissions of countries such as Jordan and Sri Lanka, or of Kansas City. George Kamiya, writing for the International Energy Agency, says that "predictions about bitcoin consuming the entire world's electricity" are sensational, but that the area "requires careful monitoring and rigorous analysis". One study done by Michael Novogratz's Galaxy Digital claimed that Bitcoin mining used less energy than the traditional banking system. Electronic waste Bitcoin's annual e-waste is estimated to be about 30 metric tons as of May 2021, which is comparable to the small IT equipment waste produced by the Netherlands. One Bitcoin transaction generates 272 g of e-waste. The average lifespan of Bitcoin mining devices is estimated to be only 1.29 years. Other estimates assume that a Bitcoin transaction generates about 380 g of e-waste, the equivalent of 2.35 iPhones. One reason for Bitcoin's e-waste problem is that, unlike most computing hardware, the application-specific integrated circuits used for mining have no alternative use beyond Bitcoin mining. Use in illegal transactions The use of bitcoin by criminals has attracted the attention of financial regulators, legislative bodies, law enforcement, and the media. Several news outlets have asserted that the popularity of bitcoins hinges on the ability to use them to purchase illegal goods. Nobel Prize-winning economist Joseph Stiglitz says that bitcoin's anonymity encourages money laundering and other crimes. Software implementation Bitcoin Core is free and open-source software that serves as a bitcoin node (the set of which form the bitcoin network) and provides a bitcoin wallet which fully verifies payments. It is considered to be bitcoin's reference implementation. Initially, the software was published by Satoshi Nakamoto under the name "Bitcoin", and later renamed to "Bitcoin Core" to distinguish it from the network. It is also known as the Satoshi client. The MIT Digital Currency Initiative funds some of the development of Bitcoin Core. The project also maintains the cryptography library libsecp256k1. Bitcoin Core includes a transaction verification engine and connects to the bitcoin network as a full node. Moreover, a cryptocurrency wallet, which can be used to transfer funds, is included by default. The wallet allows for the sending and receiving of bitcoins. It does not facilitate the buying or selling of bitcoin. It allows users to generate QR codes to receive payment. The software validates the entire blockchain, which includes all bitcoin transactions ever. This distributed ledger, which had reached more than 235 gigabytes in size as of January 2019, must be downloaded or synchronized before the client can fully participate, although the complete blockchain is not needed all at once since it is possible to run in pruning mode.
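The verification and storage state of a running node can also be inspected programmatically through Bitcoin Core's JSON-RPC interface (the bitcoind daemon that exposes it is described in the next paragraph). The following is only a minimal, illustrative sketch in Python rather than an official client: the RPC URL, port, credentials and the returned field names shown here are the commonly documented defaults of recent Bitcoin Core releases and depend entirely on the local bitcoin.conf.

import base64
import json
import urllib.request

# Assumed local JSON-RPC endpoint; host, port and credentials come from the
# node's bitcoin.conf and will differ on other setups.
RPC_URL = "http://127.0.0.1:8332/"
RPC_USER, RPC_PASSWORD = "rpcuser", "rpcpassword"  # placeholders

def rpc_call(method, params=None):
    """Send one JSON-RPC request to the local node and return its result."""
    payload = json.dumps({"jsonrpc": "1.0", "id": "probe",
                          "method": method, "params": params or []}).encode()
    request = urllib.request.Request(RPC_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
    token = base64.b64encode(f"{RPC_USER}:{RPC_PASSWORD}".encode()).decode()
    request.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["result"]

if __name__ == "__main__":
    info = rpc_call("getblockchaininfo")
    # "blocks", "size_on_disk" and "pruned" are fields reported by recent
    # Bitcoin Core versions; older releases may not include all of them.
    print("block height:", info.get("blocks"))
    print("blockchain size on disk (bytes):", info.get("size_on_disk"))
    print("running as a pruned node:", info.get("pruned"))

The same pattern works for any other RPC method the node exposes; bitcoin-cli, mentioned below, is simply a bundled command-line front end for the same interface.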
A command line-based daemon with a JSON-RPC interface, bitcoind, is bundled with Bitcoin Core. It also provides access to testnet, a global testing environment that imitates the bitcoin main network using an alternative blockchain where valueless "test bitcoins" are used. Regtest or Regression Test Mode creates a private blockchain which is used as a local testing environment. Finally, bitcoin-cli, a simple program which allows users to send RPC commands to bitcoind, is also included. Checkpoints, which have been hard-coded into the client, are used only to prevent Denial of Service attacks against nodes which are initially syncing the chain. For this reason the checkpoints included are only as of several years ago. A one megabyte block size limit was added in 2010 by Satoshi Nakamoto. This limited the maximum network capacity to about three transactions per second. Since then, network capacity has been improved incrementally both through block size increases and improved wallet behavior. A network alert system was included by Satoshi Nakamoto as a way of informing users of important news regarding bitcoin. In November 2016 it was retired. It had become obsolete as news on bitcoin is now widely disseminated. Bitcoin Core includes a scripting language inspired by Forth that can define transactions and specify parameters. The scriptPubKey is used to "lock" transactions based on a set of future conditions, and the scriptSig is used to meet these conditions or "unlock" a transaction. Operations on the data are performed by various OP_Codes. Two stacks are used: main and alt. Looping is forbidden. Bitcoin Core uses OpenTimestamps to timestamp merge commits. The original creator of the bitcoin client has described their approach to the software's authorship as writing it first to prove to themselves that the concept of purely peer-to-peer electronic cash was valid and that a paper with solutions could be written. The lead developer is Wladimir J. van der Laan, who took over the role on 8 April 2014. Gavin Andresen was the former lead maintainer for the software client. Andresen left the role of lead developer for bitcoin to work on the strategic development of its technology. Bitcoin Core in 2015 was central to a dispute with Bitcoin XT, a competing client that sought to increase the block size. Over a dozen different companies and industry groups fund the development of Bitcoin Core. In popular culture Term "HODL" Hodl (often written HODL) is slang in the cryptocurrency community for holding a cryptocurrency rather than selling it. A person who does this is known as a Hodler. It originated in a December 2013 post on the Bitcoin Forum message board by an apparently inebriated user who posted with a typo in the subject, "I AM HODLING." It is often humorously suggested to be a backronym to "hold on for dear life". In 2017, Quartz listed it as one of the essential slang terms in Bitcoin culture, and described it as a stance, "to stay invested in bitcoin and not to capitulate in the face of plunging prices." TheStreet.com referred to it as the "favorite mantra" of Bitcoin holders. Bloomberg News referred to it as a mantra for holders during market routs. Literature In Charles Stross' 2013 science fiction novel, Neptune's Brood, the universal interstellar payment system is known as "bitcoin" and operates using cryptography. Stross later blogged that the reference was intentional, saying "I wrote Neptune's Brood in 2011.
Bitcoin was obscure back then, and I figured had just enough name recognition to be a useful term for an interstellar currency: it'd clue people in that it was a networked digital currency." Film The 2014 documentary The Rise and Rise of Bitcoin portrays the diversity of motives behind the use of bitcoin by interviewing people who use it. These include a computer programmer and a drug dealer. The 2016 documentary Banking on Bitcoin is an introduction to the beginnings of bitcoin and the ideas behind cryptocurrency today. Music In 2018, a Japanese band called Kasotsuka Shojo – Virtual Currency Girls – launched. Each of the eight members represented a cryptocurrency, including Bitcoin, Ethereum and Cardano. Academia In September 2015, the establishment of the peer-reviewed academic journal Ledger was announced. It covers studies of cryptocurrencies and related technologies, and is published by the University of Pittsburgh. The journal encourages authors to digitally sign a file hash of submitted papers, which will then be timestamped into the bitcoin blockchain. Authors are also asked to include a personal bitcoin address in the first page of their papers. See also Alternative currency Base58 Crypto-anarchism List of bitcoin companies List of bitcoin organizations SHA-256 crypto currencies Virtual currency law in the United States Notes References External links Bitcoin.org website 2009 software Application layer protocols Computer-related introductions in 2009 Cryptocurrency projects Currencies introduced in 2009 Private currencies Currency symbols Elliptic curve cryptography
58924497
https://en.wikipedia.org/wiki/Jason%20Gill
Jason Gill
Jason Gill is an American college baseball coach and former shortstop and third baseman. Gill is the head coach of the USC Trojans baseball team. Playing career Gill attended Mater Dei High School in Santa Ana, California. He then accepted a scholarship to play college baseball at Cuesta College. He was named an honorable mention All-Western State Conference selection. After two seasons at Cuesta, Gill then went on to play at Cal State Dominguez Hills. While at Dominguez Hills, Gill was named an honorable mention All-California Collegiate Athletic Association selection. Gill then moved on to Cal State Fullerton to play his senior season. He batted .345 with a .469 on-base percentage and slugged .388. He was named a Second Team All-West Coast Conference performer. Coaching career Gill then served as a graduate assistant at Fullerton for two seasons while completing his degree. Gill then spent two seasons as an assistant coach for the Nevada Wolf Pack baseball team. He then accepted a role as an assistant for the Loyola Marymount Lions baseball program. Gill then spent three seasons as an assistant for the UC Irvine Anteaters baseball team. He then returned to Cal State Fullerton as an assistant for three seasons. On August 13, 2008, Gill was named the head coach of the Loyola Marymount program. In 2011, Gill was interviewed for the head coaching position at Cal State Fullerton, but remained with the Lions. On June 14, 2019, Gill was hired to be the new head coach of the USC Trojans baseball program. Head coaching record See also List of current NCAA Division I baseball coaches References External links Loyola Marymount Lions bio USC Trojans bio Living people 1970 births Baseball first basemen Cuesta Cougars baseball players Cal State Dominguez Hills Toros baseball players Cal State Fullerton Titans baseball players Cal State Fullerton Titans baseball coaches Nevada Wolf Pack baseball coaches Loyola Marymount Lions baseball coaches UC Irvine Anteaters baseball coaches Oregon Ducks baseball coaches USC Trojans baseball coaches
8928937
https://en.wikipedia.org/wiki/Griffin%20PowerMate
Griffin PowerMate
The Griffin PowerMate is an input device produced by Griffin Technology. First released in 2001, it is a multifunction knob, which can be rotated, pressed, and rotated while pressed. These actions can be programmed to invoke specific responses from a range of computer applications, such as changing the volume, or skipping through videos. The PowerMate is also equipped with a blue LED on the underside, which can be programmed to flash, pulse, or remain illuminated at various intensities in response to input from the attached computer. The original PowerMate required a USB port, but a wireless Bluetooth version was introduced in 2014. The product was discontinued entirely in 2018. Example setup The PowerMate can be set up such that the clockwise/counter-clockwise rotation of the input device causes scrolling up and down of a web page in a web browser when it is in focus on the desktop, or changes the system audio volume when it is not. The status LED on the underside of the PowerMate can be set up as an indicator of resource usage, e.g., CPU usage, when the input device is not in use, and when in use the LED can be set to indicate the volume level by its brightness. The configuration software allows for custom configuration for each software application. A video editing program can use the PowerMate to scroll through the timeline, or the knob can be configured to scroll through the browser's page history and refresh the page when pushed. It would also work well for breakout-style games. By adding "Key Press" actions, PowerMate can be configured to perform any particular software function for which there is a "send key" command - i.e., a keyboard shortcut of one or more keys. By setting "Actions" (e.g., "Key Press") for "Triggers" (for example, "Long Press" or "Rotate Right"), a user can use PowerMate in place of the keyboard for any command. System compatibility The Griffin PowerMate was officially supported on Mac OS X, Windows XP and Vista. Griffin's software for Windows works under Windows 7 and 8 but crashes occasionally; for macOS, there is no official support past Sierra, though the USB version and configuration software (PowerMate Manager) continue to work on later versions. There are Linux and Sierra+ macOS drivers, but they are unofficial. References External links Free PowerMate driver for Linux Gizmod PowerMate support for Linux Free PowerMate BT driver for macOS Computing input devices
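As a follow-up to the example setup and the unofficial Linux driver mentioned above, similar behaviour can be reproduced on Linux with a short script. This is only an illustrative sketch under stated assumptions: it assumes the in-kernel powermate driver has registered the knob as an input device, that rotation is reported as REL_DIAL events and the button as BTN_0 (the codes that driver is generally reported to emit), and it uses the third-party python-evdev library; the device name check and the amixer volume commands are placeholders for whatever the local system uses.

import subprocess
from evdev import InputDevice, ecodes, list_devices

def find_powermate():
    # Look for an input device whose name mentions the PowerMate; the exact
    # name string depends on the driver and may differ on other systems.
    for path in list_devices():
        device = InputDevice(path)
        if "PowerMate" in device.name:
            return device
    raise RuntimeError("No PowerMate-like input device found")

def main():
    knob = find_powermate()
    for event in knob.read_loop():
        if event.type == ecodes.EV_REL and event.code == ecodes.REL_DIAL:
            # Positive values are clockwise turns; map them to a volume step.
            direction = "5%+" if event.value > 0 else "5%-"
            subprocess.run(["amixer", "-q", "set", "Master", direction])
        elif event.type == ecodes.EV_KEY and event.code == ecodes.BTN_0 and event.value == 1:
            # Knob pressed: toggle mute as an example action.
            subprocess.run(["amixer", "-q", "set", "Master", "toggle"])

if __name__ == "__main__":
    main()

Mapping the same events to key presses or scrolling, as the official Windows and macOS software does, would follow the same pattern, substituting a different command or a virtual-keyboard library for the amixer calls.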
1122262
https://en.wikipedia.org/wiki/Hauppauge%20Computer%20Works
Hauppauge Computer Works
Hauppauge Computer Works is a US manufacturer and marketer of electronic video hardware for personal computers. Although it is most widely known for its WinTV line of TV tuner cards for PCs, Hauppauge also produces personal video recorders, digital video editors, digital media players, hybrid video recorders and digital television products for both Windows and Mac. The company is named after the hamlet of Hauppauge, New York, in which it is based. In addition to its headquarters in New York, Hauppauge also has sales and technical support offices in France, Germany, the Netherlands, Sweden, Italy, Poland, Australia, Japan, Singapore, Indonesia, Taiwan, Spain and the UK. Company history Hauppauge was co-founded by Kenneth Plotkin and Kenneth Aupperle, and became incorporated in 1982. Starting in 1983, the company followed Microway, which a year earlier had provided the software needed by scientists and engineers to modify the IBM PC Fortran compiler so that it could transparently employ Intel 8087 coprocessors. The 80-bit Intel 8087 math coprocessor ran a factor of 50 faster than the 8/16-bit 8088 CPU that the IBM PC came with. However, in 1982, the speed-up in floating-point-intensive applications was only a factor of 10, as the initial software developed by Microway and Hauppauge continued to call floating point libraries to do computations instead of placing x87 instructions inline with the 8088's instructions, which allows the 8088 to drive the 8087 directly. By 1984, inline compilers made their way into the market, providing increased speed-ups. Hauppauge provided similar software products in competition with Microway, bundled them with math coprocessors, and remained in the Intel math coprocessor business until 1993, when the Intel Pentium came out with a built-in math coprocessor. However, like other companies that entered the math coprocessor business, Hauppauge produced other products that contributed to a field that is today called HPC - high-performance computing. The math coprocessor business rapidly expanded starting in 1984 with software products that accelerated applications like Lotus 1-2-3. At the same time the advent of the 80286-based IBM PC/AT with its 80287 math coprocessor provided new opportunities for companies that had grown up selling 8087s and supporting software. This included products like Hauppauge's 287 Fast/5, a product that took advantage of the 80287's design that used an asynchronous clock to drive its FPU at 5 MHz instead of the 4 MHz clocking provided by IBM, making it possible for the 80287s that came with the AT to be overclocked to 12 MHz. By 1987, math coprocessors had become Intel's most profitable product line, bringing in competition from vendors like Cyrix, whose first product was a math coprocessor faster than the new Intel 80387 but whose speed was stalled by the 80386, which acted as a governor. This is when Andy Grove decided it was time for Intel to recapture its channel to market, opening up a division to compete with its math coprocessor customers, which by this time included 47th Street Camera. The new Intel division, PCEO (the PC Enhancement Operation), came out with a product called "Genuine Intel Math Coprocessors".
After playing around in the accelerator board business, PCEO would settle down in the 80386 motherboard business, originally selling a motherboard designed by one of its engineers as a home project; this eventually grew into a new division that today sells 40% of the motherboards used in high-end PCs that find their way into products including supercomputers, medical products, etc. Companies like Hauppauge and Microway, which made their living accelerating floating point applications run on PCs and were impacted by their new competitor, followed suit by venturing into the Intel i860 vector coprocessor business: Hauppauge came out with an Intel 80486 motherboard that included an Intel i860 vector processor, while Microway came out with add-in cards that had one or more i860s. These products along with transputer-based add-in cards would eventually lead into what became known as HPC (high performance computing). HPC was actually initiated in 1986 by an English company, Inmos, that designed a CPU competitive with an Intel 80386/387 which also included four twisted-pair high-speed interconnects that could communicate with other transputers and be linked to a PC motherboard, making it possible to create distributed memory processing computers that could employ 32 processors with the same throughput as 32 Intel 386/387s operating in a single PC. The add-in card parallel processing business morphed from the transputer to the Intel i860 around 1989, when Inmos was purchased by STMicroelectronics, which cut R&D funding, eventually forcing companies that had entered the parallel processing business to shift to the Intel i860. The i860 was a vector processor with graphics extensions that could initially provide 50 megaflops of throughput in an era when an 80486 with an Intel 80487 peaked at half a megaflop, and it would eventually top out at 100 megaflops, making it as fast as 100 Inmos T414 transputers. Intel i860 add-in cards made it possible for as many as 20 Intel i860s to run in parallel and could be programmed using a software library similar to today's MPI libraries, which today support distributed memory parallel processing in which servers sitting in 1U rack mount chassis that are essentially PCs provide the horsepower behind the majority of the world's supercomputers. This same approach could be employed using Hauppauge's motherboards connected by Gigabit Ethernet, something that was however first demonstrated using a wall of IBM RS/6000 PCs at the 1991 Supercomputing Conference. IBM's lead was quickly followed by academic users who realized they could do the same thing with much less expensive hardware by adapting their x86 PCs to run in parallel, at first using a software library adapted from similar transputer libraries called PVM (Parallel Virtual Machine), which would eventually morph into today's MPI. Products like the Intel i860 vector processor, which could be employed both as a vector and graphics processor, were end-of-lifed around 1993, at the same time that Intel introduced the Intel Pentium P5: a CISC processor whose CISC instructions were pipelined into hard-coded, lower-level RISC-like primitives that provided the Pentium with a superscalar architecture; it also could execute the x87 FPU instruction set using a built-in FPU that was essentially implemented using the scalar instructions of the i860, as well as a memory bus that provided a 400 MB/sec interface to memory that was borrowed from the i860 as well.
This high-speed bus played a crucial role in speeding up the most common floating-point-intensive applications, which at this point in time used Gaussian elimination to solve simultaneous linear equations but which today are solved using blocking and LU decomposition. The Intel Pentium, while good, did not provide enough floating point performance to compete with a 300 MHz DEC Alpha 21164 that provided 600 megaflops in 1995. At the same point in time, Intel supercomputing had moved from the 50 MHz Intel i860XP, which was six times slower than the Alpha 21164, to the special version of their Pentium that at 200 megaflops was only three times slower than the 21164. However, the impending speed upgrade of the Alpha to 600 MHz ultimately doomed the future of Intel supercomputing. Motherboards During the late 1980s and early 1990s, Hauppauge produced motherboards for Intel 486 processors. A number of these motherboards were standard ISA boards built to fairly competitive price points. Some, however, were workstation and server-oriented, including EISA support, optional cache memory modules, and support for the Weitek 4167 FPU. Hauppauge also sold a unique motherboard, the Hauppauge 4860. This was the only standard PC/AT motherboard ever made with both an Intel 80486 and an Intel i860 processor (optional). While both required the 80486, the i860 could either run an independent lightweight operating system or serve as a more conventional co-processor. Hauppauge no longer produces motherboards, focusing instead on the TV card market. Product lines Digital Terrestrial/Satellite Hauppauge digital terrestrial and satellite products capture DVB-T and DVB-S broadcasts respectively without the need to re-encode the streams. There are several benefits from this approach: the cost of the TV card can be lower because there is no need to supply an MPEG-2 encoder; the quality of captures can be higher because there is no need to re-encode streams; and the ratio of file size to quality is higher due to the broadcasters' high-efficiency encoders. Until August 2004 all of Hauppauge's DVB products were badge-engineered TechnoTrend products. The first of the new Hauppauge-designed cards was the Nova-t PCI 90002, and the silent replacement of the TechnoTrend model caused confusion and anger among Hauppauge's customers, who found that the new card didn't support TechnoTrend's proprietary interfaces. This rendered any existing third-party software unusable with the new cards. The new cards also came with a software package called WinTV2000 which lacked features that TechnoTrend's software had, including seven-day EPG, Digital Teletext and LCN-based channel ordering. The new cards supported Microsoft's BDA standard but at the time this was in its infancy and very few third-party applications included support for it. By 2005, all of the TechnoTrend products had been removed from the Hauppauge lineup, with the exception of the DEC2000-t and DEC3000-s, which haven't seen a replacement. Hybrid Video Recorders The Hybrid Video Recorder (HVR) range captures a combination of different broadcast types. The majority of Hauppauge HVR models capture analogue PAL and DVB-T but there have been some more recent models which capture analogue NTSC and ATSC as well as a tri-mode card which supports analogue PAL, DVB-S and DVB-T. HVR-9xx devices are bus-powered USB 2.0 sticks, not much larger than a USB flash drive. They have support for analogue and digital terrestrial TV.
The HVR-9xx sticks are produced in Taiwan by Deltron, and are also sold for Apple computers by Elgato under the EyeTV brand. HVR-1xxx devices are PCI-based products that receive analogue and digital terrestrial TV. They are similar to the HVR-9xx but have support for NICAM or dbx Stereo for analogue terrestrial on all models. HVR-3xxx and 4xxx devices are tri-mode and quad-mode devices respectively. Tri-mode means support for analogue terrestrial/cable, digital terrestrial and DVB-S digital satellite. Quad-mode devices additionally support DVB-S2 HD digital satellite. The HVR-4000 marks a change in bundled applications in that instead of using Hauppauge's WinTV2000 package, it ships with Cyberlink PowerCinema. Personal Video Recorders The Personal Video Recorder (PVR) range uses an on-board MPEG/MPEG-2 encoder to compress the incoming analogue TV signals. The benefits of using a hardware encoder include lower CPU usage when encoding live TV. The first WinTV-PVR product was the WinTV PVR-PCI, launched in late 2000, and it has not received any driver updates since February 2002. It was joined by the WinTV PVR-USB, which has two variants. The first variant supported MPEG-2 streams up to 6 Mbit/s and supported Half-D1 resolutions (320 × 480). This was replaced by an updated model supporting up to 12 Mbit/s streams and Full-D1 resolution (720 × 480). The first WinTV-PVR to gain popularity was the PVR-250. The original version of the PVR-250 was a variant of the Sag Harbor (PVR-350) which used the ivac15 chipset. Although the chipset was able to do hardware decoding, the video out components were not included on the card. In later versions of the PVR-250 the ivac15 was replaced with the ivac16 to reduce cost and to relieve heat issues. The PVR-250 and PVR-350 were joined by the USB 2.0 PVR-USB2 to complete their generation of devices. Their successors, the PVR-150 and PVR-500, were released alongside the PVR-250/350/USB2 and, while popular with both OEMs and the general public, they have had numerous driver issues as well as video quality complaints. The PVR-500 was released as a Media Center card and wasn't supplied with Hauppauge's WinTV2000 software. It was effectively two PVR-150s on a single board, connected via a PCI-PCI bridge chip. The PVR-USB2 was silently replaced with the PVR-USB2+ which is identical both visually and in terms of features, but uses a Conexant chipset rather than the Philips chipset in the old model. From its name and time of release, the PVR-160 appears to be newer than the PVR-150 but it is not. The PVR-160 is a repackaging of the WinTV Roslyn. The Roslyn is based on the Conexant Blackbird design and uses the CX2388x video decoder. This board was originally available only to OEMs and third-party software vendors such as Frey Technologies (SageTV) and Snapstream (BeyondTV). The board was sold under many names including the PVR-250BTV (Snapstream). This card is known to have color and brightness issues that can be corrected somewhat using registry hacks. Hauppauge received a large surplus amount of these cards from OEM and third-party vendors. The cards were repackaged with an MCE remote and receiver and rebranded the PVR-160. The PVR-160 was often mistakenly referred to as the PVR-250MCE but is not related to the PVR-250. High-Definition Personal Video Recorder In May 2008, Hauppauge released the HD-PVR, a USB 2.0 device with an on-board H.264 hardware encoder for recording from high-definition sources through component inputs.
It is the world's first USB device that can capture in high definition. The HD-PVR has proved to be a very popular device, and Hauppauge has been updating its drivers and software continually since its release. In addition to being able to capture from any component video source in 480p, 720p, or 1080i, the HD-PVR comes with an IR blaster that communicates with a cable or satellite set-top box for automated program recordings and channel-changing capabilities. In 2012, Hauppauge released the HD-PVR Gaming Edition 2, which features a much smaller design than its predecessor along with 1080p HDMI support. The PVR is not officially supported on Macintosh systems, but a variety of third-party programs exist that allow it to function on OS X, including EyeTV by Elgato and HDPVRCapture. In 2013, Hauppauge released an upgrade for the existing HD-PVR 2 with the HD-PVR 2 Gaming Edition Plus, which supports Macintosh systems. WinTV Analogue The standard analogue range of products uses software encoding for recording analogue TV. The more recent Hauppauge cards use SoftPVR, which allows MPEG and MPEG-2 encoding in software provided that a sufficiently fast CPU is installed in the system. MediaMVP The MediaMVP is a thin client device that displays music, video and pictures (hence "MVP") on a television. It is based on an IBM PowerPC RISC processor specialized for multimedia decoding. The operating system is a form of Linux, and everything (including the menus) is served to the device via ethernet or, on newer devices, 802.11g wireless LAN from the server PC. Various open source software products can use the device as a front-end. An example is MVPMC, which allows the MediaMVP to be used as a front-end for MythTV or ReplayTV. Table of products WinTV software Hauppauge's principal software offering is WinTV, a TV tuning, viewing, and recording application supplied on a CD-ROM included with tuner hardware. A previous version was called WinTV2000 (WinTV32 without skins). It had companion applications, including WinTV Scheduler, which performs timed recordings, and WinTV Radio, which receives FM radio. It was modified towards a service-based software package, with card management and recordings taken care of by the "TV Server" service and EPG data collection by the "EPG Service", allowing WinTV2000 to work with multiple Hauppauge tuners in the same PC. In 2007 Hauppauge launched WinTV Version 6, followed in 2009 by WinTV7. WinTV8 was current. WinTV updates are available without charge to Hauppauge tuner users (major updates require access to a qualifying earlier WinTV installation CD, e.g. WinTV8 requires a CD not earlier than WinTV7). An option available at extra cost, WinTV Extend, allows TV to be streamed over the Internet to several portable devices such as smartphones, and to PCs. Wing "Wing", a supplemental software application from Hauppauge, allows the company's PVR products to convert MPEG recordings into formats suitable for playback on the Apple iPod, Sony PSP or a DivX player; it converts MPEG-2 videos into H.264, MPEG-4 and DivX. Third-party software Third-party programs which support Hauppauge tuners include: GB-PVR, InterVideo WinDVR, Snapstream's Beyond TV, SageTV, Windows Media Center and the Linux-based MythTV. Linux Hauppauge offers limited support for Linux, with Ubuntu repositories and firmware downloads available on its website. There are drivers available from non-Hauppauge sources for most of the company's cards (in IVTV and LinuxTV).
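With the in-kernel ivtv driver (used by the PVR-x50 cards discussed below), the card's hardware MPEG-2 encoder is exposed as a character device, so a basic recording amounts to reading from that device and writing the bytes to a file. The following is a rough, hedged sketch in Python: the /dev/video0 path, the chunk size and the output file name are assumptions about a typical single-card setup, and channel tuning or encoder configuration (normally done with separate tools) is left out.

import time

DEVICE = "/dev/video0"        # assumed ivtv capture device node
OUTPUT = "capture.mpg"        # resulting MPEG-2 program stream
DURATION_SECONDS = 60         # how long to record
CHUNK_SIZE = 64 * 1024        # read size per iteration

def record():
    deadline = time.monotonic() + DURATION_SECONDS
    with open(DEVICE, "rb") as source, open(OUTPUT, "wb") as sink:
        while time.monotonic() < deadline:
            chunk = source.read(CHUNK_SIZE)
            if not chunk:
                break          # device stopped delivering data
            sink.write(chunk)

if __name__ == "__main__":
    record()

The stream written this way is already compressed by the card itself, which is the main practical difference from the software-encoding products described elsewhere in this article.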
Some of the IVTV and LinuxTV drivers (Nova and HVR) appear to have been written by a Hauppauge engineer. The PVR-150 captures video on Linux, but there are reportedly difficulties getting the remote control and IR blaster to work. Also, a January 2007 product substitution of HVR-1600 in PVR-150 retail boxes forced many Linux users to exchange their purchases because the Linux driver had not been updated for the HVR-1600. SageTV Media Center for Linux supports PVR-150, PVR-250, PVR-350, PVR-500 and MediaMVP. For ATSC and DVB applications, a list of Linux-supported Hauppauge cards and other makes of TV card can be found on the LinuxTVWiki page (see "Supported Hardware" section). External links Hauppauge UK Hauppauge UK Support Forum PCTV Systems SageTV (a vendor of products based on Hauppauge hardware) SHS-PVR Unofficial WinTV-PVR & MediaMVP forums usbvision (partially functional Linux driver for WinTV-USB) The Hauppauge 4860 Motherboard in Detail WinTV-PVR Family Identification References 1983 establishments in New York (state) American companies established in 1983 Digital video recorders Islip (town), New York Smithtown, New York
11006413
https://en.wikipedia.org/wiki/Bill%20Dally
Bill Dally
William James Dally (born August 17, 1960) is an American computer scientist and educator. Microelectronics He developed a number of techniques used in modern interconnection networks including routing-based deadlock avoidance, wormhole routing, link-level retry, virtual channels, global adaptive routing, and high-radix routers. He has developed efficient mechanisms for communication, synchronization, and naming in parallel computers including message-driven computing and fast capability-based addressing. He has developed a number of stream processors starting in 1995 including Imagine, for graphics, signal, and image processing, and Merrimac, for scientific computing. He has published over 200 papers as well as the textbooks "Digital Systems Engineering" with John Poulton, and "Principles and Practices of Interconnection Networks" with Brian Towles. He is an inventor or co-inventor on over 70 granted patents. An author quoted him saying: "Locality is efficiency, Efficiency is power, Power is performance, Performance is king". Career Bell Labs Dally received the B.S. degree in electrical engineering from Virginia Tech. While working for Bell Telephone Laboratories he contributed to the design of the Bellmac 32, an early 32-bit microprocessor, and earned an M.S. degree in electrical engineering from Stanford University in 1981. He then went to the California Institute of Technology (Caltech), graduating with a Ph.D. degree in computer science in 1986. At Caltech he designed the MOSSIM simulation engine and an integrated circuit for routing. While at Caltech, he was part of the founding group of Stac Electronics in 1983. MIT From 1986 to 1997 he taught at MIT where he and his group built the J–Machine and the M–Machine, parallel machines emphasizing low overhead synchronization and communication. During his time at MIT he says he collaborated on the design of the Cray T3D and Cray T3E supercomputers. He became the Willard R. and Inez Kerr Bell Professor in the Stanford University School of Engineering and chairman of the computer science department at Stanford. Dally's corporate involvements include various collaborations at Cray Research since 1989. He did Internet router work at Avici Systems starting in 1997, was chief technical officer at Velio Communications from 1999 until its 2003 acquisition by LSI Logic, and was founder and chairman of Stream Processors, Inc. until it folded. Nvidia and IEEE fellow Dally was elected a Fellow of the Association for Computing Machinery in 2002, and a Fellow of the IEEE, also in 2002. In 2003 he became a consultant for NVIDIA for the first time and helped to develop the GeForce 8800 GPU series. He received the ACM/SIGARCH Maurice Wilkes Award in 2000, the Seymour Cray Computer Science and Engineering Award in 2004, and the IEEE Computer Society Charles Babbage Award in 2006. In 2007 he was elected to the American Academy of Arts and Sciences. In January 2009 he was appointed chief scientist of Nvidia. He worked full-time at Nvidia, while supervising about 12 of his graduate students at Stanford. In 2009, he was elected to the National Academy of Engineering for contributions to the design of high-performance interconnect networks and parallel computer architectures. He received the 2010 ACM/IEEE Eckert–Mauchly Award for "outstanding contributions to the architecture of interconnection networks and parallel computers." Books Dally and Poulton, Digital Systems Engineering, 1998. Dally and Towles, Principles and Practices of Interconnection Networks, 2004.
Dally and Harting, Digital Design: A Systems Approach, 2012. References External links Stanford Faculty Home Page American computer scientists American technology writers Computer systems researchers Fellows of the Association for Computing Machinery Fellow Members of the IEEE Stanford University School of Engineering faculty Stanford University Department of Electrical Engineering faculty Stanford University Department of Computer Science faculty Massachusetts Institute of Technology faculty California Institute of Technology alumni Virginia Tech alumni Stanford University School of Engineering alumni Living people Seymour Cray Computer Engineering Award recipients Members of the United States National Academy of Engineering American electrical engineers Nvidia people 1960 births
27555510
https://en.wikipedia.org/wiki/XBMC4Xbox
XBMC4Xbox
XBMC4Xbox is a free and open source media player software made solely for the first-generation Xbox video-game console. The software was forked from the XBMC project (now known as Kodi and formerly known as Xbox Media Player) after XBMC removed support for the Xbox console. Other than the audio/video playback and media center functionality, XBMC4Xbox also has the ability to catalog and launch original Xbox games, and homebrew applications such as console emulators, from the Xbox's built-in harddrive. Since XBMC4Xbox is homebrew software that is not endorsed or supported by Microsoft in any way, it requires a modchip or softmod exploit to be installed in order to run on the Xbox game console. Binary builds also cannot be legally distributed by the XBMC4Xbox project members, so all binary releases are made by independent third parties who compile and distribute unofficial versions of the application. Overview XBMC4Xbox's 10-foot user interface is designed for the living-room TV, and the large icons and text in the graphical user interface allow the user to easily manage the most common digital music, video, image, podcast, and playlist formats from a computer, optical disk, local network, and the internet using an Xbox's game-controller or the Xbox DVD-Kit remote control. It also has a skinnable and user-configurable interface and plugin support. Like XBMC, XBMC4Xbox also features audio visualizations, slideshows, weather forecast reporting, and a Python-based API for third-party plugins. Add-ons such as skins and plugins for XBMC are not out-of-the-box compatible with XBMC4Xbox due to differences in their APIs, which means that all XBMC add-ons have to be ported in order to work specifically with XBMC4Xbox. The software is not an authorized/signed Microsoft product; therefore, a modification of the Xbox is required in order to run XBMC4Xbox on an Xbox game console. On a modded Xbox, XBMC4Xbox can be run as an application (like any Xbox game), or as a dashboard that appears directly when the Xbox is turned on. Since XBMC4Xbox is an open source software program, its development source code is stored on a publicly accessible Subversion repository. Accordingly, unofficial executable builds from the Subversion repository are often released by third parties on sites unaffiliated with the XBMC4Xbox project. XBMC4Xbox source code is distributed as open source under the GPL (GNU General Public License), and is community developed by a group of volunteers from different parts of the world working on XBMC4Xbox for free in their spare time. The source code for XBMC4Xbox is mostly updated on a daily basis by developers in a public Subversion repository. Features This is a description of the unique features and functions of the XBMC4Xbox fork for the Xbox that are not available in, or differ from, the original XBMC software from which it was forked: Xbox dashboard function (game and application launcher) XBMC4Xbox has a "My Programs" section which functions as a replacement dashboard to launch Xbox games (retail and homebrew) and applications/emulators directly off the Xbox built-in harddrive, all from a GUI with thumbnail and list options. This replaces the original Xbox Dashboard from Microsoft, and with the exception of flashing a new BIOS to an Xbox modchip, it also features many extra functions that other homebrew dashboards have. XBMC4Xbox Trainer Support (Xbox game cheat mods) XBMC4Xbox also has the ability to use and apply Xbox Trainer Files.
Trainers are small files that allow for in-game value modification (such as cheat codes) by altering retail in-game values by way of TSR (Terminate and Stay Resident) keys. There are many things that can be modified including ammunition, extra lives, or even how high a character can jump. Trainer support in XBMC4Xbox was achieved through collaboration with Team Xored. This collaboration began in December 2005 and came to fruition in January 2006 by successfully integrating the Team Xored Trainer Engine into XBMC4Xbox. XBMC4Xbox can run trainers with the following file extensions: *.ETM and *.XBTF XLink Kai (Xbox Live online-gaming alternative) XBMC4Xbox previously had an XLink Kai front-end integrated to control that client, but that has been removed in more recent builds. Audio and video playback handling XBMC4Xbox can be used to play/view all common multimedia formats. However, it cannot play back most native 720p and 1080p video files due to Xbox hardware limitations. XBMC4Xbox can upscale the resolution of many standard definition videos. XBMC4Xbox multimedia playback cores XBMC4Xbox uses two different multimedia video player 'cores' for video-playback. The first core, dubbed "DVDPlayer", is XBMC's in-house developed video-playback core with support for DVD-Video movies and is based on libmpeg2 and libmad for MPEG decoding, with FFmpeg for media-container demuxing and splitting, as well as decoding other audio formats. Audio decoding is handled by liba52 for AC3 audio and libdts/libdca for DTS audio. Also included is support for DVD-menus through libdvdnav and dvdread. One relatively unusual feature of this DVD-player core is the capability to pause and play, on the fly, DVD-Video movies that are stored in ISO and IMG DVD-images or DVD-Video (IFO/VOB/BUP) images (even directly from uncompressed RAR and ZIP archives), from either local harddrive storage or network-share storage. The second video-player 'core' for video-playback is a ported version of the open-source cross-platform player, MPlayer, which today is only used as a backup player in XBMC4Xbox. MPlayer is known for playing practically all common media formats, and XBMC4Xbox handles all codecs and containers normally supported by MPlayer (which includes all FFmpeg-supported codecs and also several external ones with the help of proprietary DLL files). The third 'core', PAPlayer (abbreviated from Psycho-acoustic Audio Player), only supports audio playback. PAPlayer was also developed by the XBMC team, before the projects split, in 2005. The PAPlayer supports more codecs than MPlayer, and is therefore the default audio playback 'core'. Some file formats that don't work with MPlayer play with PAPlayer and there are fewer bugs (e.g. the visualisation bug in MPlayer, where visualisations 'break' after a file has been played). After the previous XBMC4Xbox site went down, the wiki was lost, so there is no record of the supported file types for PAPlayer in XBMC4Xbox. However, XBMC.org has a page on PAPlayer-supported formats. Programming and development XBMC4Xbox is a software application programmed in C++; it uses the Microsoft DirectX multimedia framework and Direct3D rendering (as the Xbox does not support OpenGL). The Xbox Development Kit (XDK), with its libraries, is required to compile XBMC4Xbox.
Also required to compile (and program in) XBMC4Xbox is the older Microsoft Visual Studio .NET version 7.1. According to Microsoft, it is a common misconception that the Xbox uses a modified Windows 2000 kernel; instead they claim that the Xbox operating system was built from scratch but implements a subset of Windows APIs. The idea that it does, indeed, run a modified copy of the Windows kernel still persists in the community; however, what is known for sure is that the Xbox's kernel works like a BIOS and is Win32 based, but does not have all of the resources or capabilities of a full Windows NT based operating system (for example, neither DirectShow, the registry, nor DLLs are natively supported on the Xbox). Because of the constraints on the hardware and environment of the Xbox, all software development of XBMC4Xbox for the Xbox is focused on conserving the limited resources that exist, the main hindrance of which is the amount of available RAM at any one time. XBMC4Xbox software and related Xbox hardware limitations UDF (Universal Disk Format) file-system limitation: XBMC4Xbox only supports UDF version 1.02 (designed for DVD-Video media), which has a maximum file-size of 1 GB (meaning that if a DVD is burned in a newer UDF version with a video that is larger than 1 GB, XBMC will not be able to play that file); the same goes for UDF/ISO hybrid formats (a.k.a. UDF Bridge format). Workaround: Burn all CD/DVD media in ISO 9660 format, which is the most common standard for recording CD/DVDs. Unfortunately ISO 9660 has a 2 GB (gigabyte) file-size limitation, which cannot be bypassed. The Xbox built-in harddrive is formatted in FATX (File Allocation Table for Xbox), which has a 4 GB (4096 megabyte) file-size limitation, and only supports file/folder names up to 42 characters, a maximum of 255 characters in total file-structure depth, and a maximum of 4096 files/folders in a single subfolder; in the root of each partition, the maximum number of files/folders is 256. FATX also does not support all standard ASCII characters in file/folder names (for example < > = ? : ; " * +, / \|¤ &). XBMC will automatically try to rename any files/folders transferred to the Xbox according to these limitations. None of these file-size and file-name issues are XBMC bugs, as the limitations are in the Xbox itself. Workaround: Store files/folders on a computer or a Network-Attached Storage (NAS) device which supports SMB/CIFS, FTP or UPnP and share them over a local-area network instead. The USB flash drive (USB key-drives/memory-keys) reader/writer class used by XBMC for Xbox currently has a few limitations as well. It is limited to USB flash drives and harddisks compatible with the USB Mass Storage Device Class following the USB 1.1 standard, with a maximum size of 4 GB. It can read and write to FATX formatted flash drives, but can only read FAT12, FAT16 (including VFAT), and FAT32. NTFS formatted drives are not supported yet. With a CPU that is old and slow by today's standards (a 733 MHz Intel Pentium III-like processor) and 64 MB of shared memory, the Xbox has neither a fast enough CPU nor a sufficient amount of RAM to play HDTV videos encoded in native 720p/1080i resolution. However, XBMC on the Xbox can up-convert all standard definition movies and output them at 720p or 1080i. The Xbox is only able to play MPEG-4 AVC (H.264) encoded videos if the video resolution is under 480p (720x480 pixels).
If the video is encoded with MPEG-4 ASP instead, however, then the video's native resolution can be anything up to 960x540 pixels (a resolution which is also known as HRHD resolution). History As the successor to Xbox Media Player (XBMP), XboxMediaCenter (XBMC) was ported to other platforms and architectures, becoming XBMC or XBMC Media Center, thus losing the Xbox connection. On May 27, 2010, to differentiate the now mainline multiplatform XBMC from the original Xbox version, the team behind XBMC announced the splitting of the Xbox branch into a new project, "XBMC4Xbox", which would continue the development and support of XBMC for the old Xbox hardware platform as a separate project, with the original XBMC project no longer offering any support for the Xbox. Apart from the name, the next noticeable thing is the changed version numbering. The last official release of XBMC for Xbox was 9.11 Camelot, a release which at the time was more closely connected to the multiplatform XBMC that had been in development for some time. The XBMC4Xbox project has since reverted to version numbering that does not include a reference to a release date. Instead it now uses a simpler major.minor version system, which is what was used before Xbox Media Center became just XBMC. New releases are now made available when they are ready rather than having set release dates. In previous years before XBMC4Xbox split from XBMC, there was less developer interest in the Xbox version of XBMC, as the new multiplatform version of XBMC became the primary concern for the XBMC team. Only one developer (Arnova) still looked after the Xbox version. Lack of interest from the XBMC developers got to a point where a new home was needed for the Xbox codebase, and in 2010 it was moved to SourceForge. A new community site had already been set up at xbmc4xbox.org and was chosen to replace the forums on xbmc.org, where Xbox discussion was no longer relevant, as xbmc.org only deals with the platforms that they actively develop. Legality and copyright XBMC4Xbox software is, just like XBMC, licensed under the GNU General Public License (GPL) by its developers, meaning they allow anybody to redistribute XBMC4Xbox source code under very liberal conditions. However, in order to compile the Xbox build of XBMC4Xbox into executable form, it is currently necessary to use Microsoft's proprietary XDK (Xbox Development Kit), which is only available to licensed developers, and the resulting code may only be legally distributed by Microsoft. Accordingly, code compiled with an unauthorized copy of the Xbox Development Kit may not be legally distributed by anyone other than Microsoft. So while XBMC4Xbox's source code is made publicly available by the developers under an open-source (GNU GPL) license, the developers themselves are legally unable to distribute executable versions of XBMC4Xbox. This is because XBMC4Xbox requires Microsoft's proprietary software development kit in order to compile. Thus, the only publicly available executable versions of XBMC4Xbox are from third parties; as a result, pre-compiled versions of XBMC4Xbox may be illegal to distribute in many countries around the world. Also, for audio and video codecs which are not natively supported via FFmpeg, XBMC4Xbox via MPlayer provides a DLL loader which can load third-party audio and video codec DLLs to decode unsupported formats. This is potentially legal if the user owns a licensed copy of the DLL.
However, some third-party XBMC4Xbox builds incorporate all available third-party DLLs that XBMC4Xbox can support, and the redistribution of these without a license is copyright infringement. See also Home theater PC References External links www.xbmc4xbox.org.uk - XBMC4Xbox Official Website Official SourceForge Project Page with source code Free multimedia software Free media players Free software programmed in C++ Free video software Multimedia software Xbox (console) software
44557045
https://en.wikipedia.org/wiki/Devuan
Devuan
Devuan is a fork of Debian that uses sysvinit, runit or OpenRC instead of systemd. The Devuan development team aim to maintain compatibility with other init systems in the future and not detach Linux from other Unix systems. History The release of Debian 8 alienated some developers and other users due to the project's adoption of systemd and subsequent removal of support for other existing init systems. The first stable release of Devuan was published on May 25, 2017. Instead of continuing the Debian practice of using Toy Story character names as release codenames, Devuan aliases its releases using planet names. The first stable release shared the Debian 8 codename Jessie. However, the Devuan release was named for minor planet 10464. The second stable release is named ASCII for asteroid/minor planet 3568 and is based on Debian 9 Stretch. The third stable release is named Beowulf after minor planet 38086 and is based on Debian 10 Buster. The fourth release is named Chimaera after minor planet 623 and is based on Debian 11 Bullseye. The permanent alias for the Devuan unstable branch is Ceres, so named for the dwarf planet. Devuan 2.0.0 ASCII was released on June 9, 2018, and 2.1 ASCII was released on November 21, 2019. ASCII provides a choice of five different desktop environments at install time (XFCE, Cinnamon, KDE, LXQt, MATE), while many other window managers are available from the repositories. It also provides installation options for choosing between sysvinit and OpenRC for init, and between GRUB and LILO for the boot loader. Devuan maintains a modified version of the Debian expert text installer, which has the ability to install only free software if the user chooses, while the live desktop image also uses a custom graphical installer from Refracta, a derivative of Devuan. Devuan 3.0 Beowulf was released on June 3, 2020, based on Debian 10.4. Ppc64el has been added to the list of supported architectures. Runit is now available as an alternative init. Eudev and elogind are now used to replace some systemd functionality. Devuan 4.0 Chimaera was released October 14, 2021. It is based on Debian Bullseye (11.1) with Linux kernel 5.10. Packages Devuan has its own package repository which mirrors upstream Debian, with local modifications made only when needed to allow for init systems other than systemd. Modified packages include policykit and udisks. Devuan is supposed to work like the corresponding Debian release. Devuan does not provide systemd in its repositories but still retains libsystemd0 until it has removed all dependencies. Amprolla is the program used to merge Debian packages with Devuan packages. It downloads packages from Debian and merges changes to packages that Devuan overrides. Version history References External links Devuan home page Source code repository Release archive Debian-based distributions Linux distributions Linux distributions without systemd Pentesting software toolkits
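A toy sketch may help illustrate the package-merging idea behind Amprolla described above: an upstream Debian index is combined with a small set of Devuan overrides, and the override wins whenever both define a package. The package names and version strings below are hypothetical, and this is not Amprolla's actual code or data format.

# Conceptual illustration only: merge an upstream package index with Devuan overrides.
debian_index = {
    "bash": "5.1-2",                    # hypothetical versions, for illustration
    "policykit-1": "0.105-31",
    "udisks2": "2.9.2-2",
}
devuan_overrides = {
    "policykit-1": "0.105-31+devuan1",  # rebuilt to drop the systemd dependency
    "udisks2": "2.9.2-2+devuan1",
}

def merge_indexes(upstream, overrides):
    """Return a merged index in which overridden packages shadow upstream ones."""
    merged = dict(upstream)
    merged.update(overrides)
    return merged

for name, version in sorted(merge_indexes(debian_index, devuan_overrides).items()):
    print(name, version)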
1288483
https://en.wikipedia.org/wiki/Gigapackets
Gigapackets
Gigapackets are billions (10⁹) of packets or datagrams. The packet is the fundamental unit of information in computer networks. Data transfer rates in gigapackets per second are associated with high speed networks, especially fiber optic networks. The bit rates that are used to create gigapackets are in the range of gigabits per second. These rates are seen in network speeds of gigabit Ethernet or 10 Gigabit Ethernet and SONET Optical Carrier rates of OC-48 at 2.5 Gbit/s and OC-192 at 10 Gbit/s. References Packets (information technology) Units of information
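As a back-of-the-envelope illustration of how such bit rates relate to packet rates, the sketch below assumes minimum-size 64-byte Ethernet frames plus the preamble and inter-frame gap that also occupy the wire; the figures are rough upper bounds, not measurements of any particular link.

# Each minimum-size Ethernet frame occupies 64 + 8 (preamble/SFD) + 12 (inter-frame gap)
# = 84 bytes, i.e. 672 bits, on the wire.
BITS_PER_MIN_FRAME = 84 * 8

def max_packets_per_second(line_rate_bps):
    """Upper bound on packet rate for a given line rate in bits per second."""
    return line_rate_bps / BITS_PER_MIN_FRAME

for name, rate in [("Gigabit Ethernet", 1e9), ("10 Gigabit Ethernet", 10e9)]:
    print(f"{name}: about {max_packets_per_second(rate) / 1e6:.2f} million packets/s")
# 10 Gbit/s works out to roughly 14.88 million packets/s, so a gigapacket
# (10^9 packets) of minimum-size frames passes in a little over a minute.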
32375078
https://en.wikipedia.org/wiki/Tufin
Tufin
Tufin is a security policy management company specializing in the automation of security policy changes across hybrid platforms while improving security and compliance. The Tufin Orchestration Suite supports next-generation firewalls, network layer firewalls, routers, network switches, load balancers, web proxies, private and public cloud platforms and microservices. History Prior to its 2019 initial public offering, Tufin was privately funded since its establishment. Products Tufin develops and markets the Tufin Orchestration Suite which consists of: SecureTrack: Firewall Operations Management, Auditing and Compliance SecureChange: Security Change Automation SecureApp: Application Connectivity Management SecureCloud: Hybrid Cloud Security The company releases updates to Tufin Orchestration Suite each quarter. The Suite is designed for large enterprises as well as managed security service providers (MSSP) and IT security auditors. Tufin products help security teams to implement and maintain their security policy on all of their firewalls, routers and network switches. They accelerate service delivery through network change automation and expedite the process of compliance audits for security standards such as PCI DSS, NERC and Sarbanes Oxley. Tufin products also help companies to manage and automate the daily configuration changes to network security devices. Innovation Tufin's innovation includes several technologies such as the Automatic Policy Generator which refines security rules based on network traffic, methods for automating security policy management and the concept of managing network security policies from an application scope. Tufin's core technology is protected by multiple US patents. Partnerships Other network security vendors provide operations management, auditing and change automation for Tufin products. Tufin technology partners include Check Point, Cisco, Fortinet, Juniper Networks, McAfee, Palo Alto Networks, Stonesoft, F5 Networks, VMware, OpenStack, Amazon Web Services (AWS), Microsoft Azure, BMC, ServiceNow, Puppet Labs and others. References External links Application Connectivity Management - "Tufin's SecureApp Completes Trifecta Of Security Policy Management" by Alan Shimel Rethinking Firewall Management - ZDNet article by Tom Foremski SIX deploys Tufin - How the Swiss Stock Exchange uses Tufin to manage firewalls and applications 2013 Survey of 500 C-level managers and senior IT professionals Unified Security Policy - "Tufin firewall management software gets smarter about business apps" by Shamus McGillicuddy Official company web site "Network Security Policy Management Solutions Have Evolved" - a research paper by Gartner "Tufin Collaborates with Microsoft to Integrate Public Cloud Support with Microsoft Azure into the Tufin Orchestration Suite™" 2019 initial public offerings companies based in Boston companies listed on the New York Stock Exchange computer security companies computer security software companies software companies established in 2005 software companies of the United States
25787329
https://en.wikipedia.org/wiki/Virtual%20firewall
Virtual firewall
A virtual firewall (VF) is a network firewall service or appliance running entirely within a virtualized environment and which provides the usual packet filtering and monitoring provided via a physical network firewall. The VF can be realized as a traditional software firewall on a guest virtual machine already running, a purpose-built virtual security appliance designed with virtual network security in mind, a virtual switch with additional security capabilities, or a managed kernel process running within the host hypervisor. Background So long as a computer network runs entirely over physical hardware and cabling, it is a physical network. As such it can be protected by physical firewalls and fire walls alike; the first and most important protection for a physical computer network always was and remains a physical, locked, flame-resistant door. Since the inception of the Internet this was the case, and structural fire walls and network firewalls were for a long time both necessary and sufficient. Since about 1998 there has been an explosive increase in the use of virtual machines (VM) in addition to — sometimes instead of — physical machines to offer many kinds of computer and communications services on local area networks and over the broader Internet. The advantages of virtual machines are well explored elsewhere. Virtual machines can operate in isolation (for example as a guest operating system on a personal computer) or under a unified virtualized environment overseen by a supervisory virtual machine monitor or "hypervisor" process. In the case where many virtual machines operate under the same virtualized environment they might be connected together via a virtual network consisting of virtualized network switches between machines and virtualized network interfaces within machines. The resulting virtual network could then implement traditional network protocols (for example TCP) or virtual network provisioning such as VLAN or VPN, though the latter while useful for their own reasons are in no way required. There is a continued perception that virtual machines are inherently secure because they are seen as "sandboxed" within the host operating system. It is often believed that the host, in like manner, is secured against exploitation from the virtual machine itself and that the host is no threat to the virtual machine because it is a physical asset protected by traditional physical and network security. Even when this is not explicitly assumed, early testing of virtual infrastructures often proceeds in isolated lab environments where security is not as a rule an immediate concern, or security may only come to the fore when the same solution is moving into production or onto a computer cloud, where suddenly virtual machines of different trust levels may wind up on the same virtual network running across any number of physical hosts. Because they are true networks, virtual networks may end up suffering the same kinds of vulnerabilities long associated with a physical network, some of which being: Users on machines within the virtual network have access to all other machines on the same virtual network. Compromising or misappropriating one virtual machine on a virtual network is sufficient to provide a platform for additional attacks against other machines on the same network segment. 
If a virtual network is internetworked to the physical network or broader Internet then machines on the virtual network might have access to external resources (and external exploits) that could leave them open to exploitation. Network traffic that passes directly between machines without passing through security devices is unmonitored. The problems created by the near invisibility of between-virtual machine (VM-to-VM) traffic on a virtual network are exactly like those found in physical networks, complicated by the fact that the packets may be moving entirely inside the hardware of a single physical host: Because the virtual network traffic may never leave the physical host hardware, security administrators cannot observe VM-to-VM traffic, cannot intercept it, and so cannot know what that traffic is for. Logging of VM-to-VM network activity within a single host and verification of virtual machine access for regulatory compliance purposes becomes difficult. Inappropriate uses of virtual network resources and bandwidth consumption VM-to-VM are difficult to discover or rectify. Unusual or inappropriate services running on or within the virtual network could go undetected. There are security issues known only in virtualized environments that wreak havoc with physical security measures and practices, and some of these are touted as actual advantages of virtual machine technology over physical machines: VMs can be deliberately (or unexpectedly) migrated between trusted and untrusted virtualized environments where migration is enabled. VMs and/or virtual storage volumes can be easily cloned and the clone made to run on any part of the virtualized environment, including a DMZ. Many companies use their purchasing or IT departments as the IT security lead agency, applying security measures at the time a physical machine is taken from the box and initialized. Since virtual machines can be created in a few minutes by any authorized user and set running without a paper trail, they can in these cases bypass established "first boot" IT security practices. VMs have no physical reality leaving not a trace of their creation nor (in larger virtualized installations) of their continued existence. They can be as easily destroyed as well, leaving nearly no digital signature and absolutely no physical evidence whatsoever. In addition to the network traffic visibility issues and uncoordinated VM sprawl, a rogue VM using just the virtual network, switches and interfaces (all of which run in a process on the host physical hardware) can potentially break the network as could any physical machine on a physical network — and in the usual ways — though now by consuming host CPU cycles it can additionally bring down the entire virtualized environment and all the other VMs with it simply by overpowering the host physical resources the rest of the virtualized environment depend upon. This was likely to become a problem, but it was perceived within the industry as a well understood problem and one potentially open to traditional measures and responses. Virtual firewalls One method to secure, log and monitor VM-to-VM traffic involved routing the virtualized network traffic out of the virtual network and onto the physical network via VLANs, and hence into a physical firewall already present to provide security and compliance services for the physical network. 
The VLAN traffic could be monitored and filtered by the physical firewall and then passed back into the virtual network (if deemed legitimate for that purpose) and on to the target virtual machine. Not surprisingly, LAN managers, security experts and network security vendors began to wonder if it might be more efficient to keep the traffic entirely within the virtualized environment and secure it from there. A virtual firewall then is a firewall service or appliance running entirely within a virtualised environment — even as another virtual machine, but just as readily within the hypervisor itself — providing the usual packet filtering and monitoring that a physical firewall provides. The VF can be installed as a traditional software firewall on a guest VM already running within the virtualized environment; or it can be a purpose-built virtual security appliance designed with virtual network security in mind; or it can be a virtual switch with additional security capabilities; or it can be a managed kernel process running within the host hypervisor that sits atop all VM activity. The current direction in virtual firewall technology is a combination of security-capable virtual switches, and virtual security appliances. Some virtual firewalls integrate additional networking functions such as site-to-site and remote access VPN, QoS, URL filtering and more. Operation Virtual firewalls can operate in different modes to provide security services, depending on the point of deployment. Typically these are either bridge-mode or hypervisor-mode (hypervisor-based, hypervisor-resident). Both may come shrink wrapped as a virtual security appliance and may install a virtual machine for management purposes. A virtual firewall operating in bridge-mode acts like its physical-world firewall analog; it sits in a strategic part of the network infrastructure — usually at an inter-network virtual switch or bridge — and intercepts network traffic destined for other network segments and needing to travel over the bridge. By examining the source origin, the destination, the type of packet it is and even the payload the VF can decide if the packet is to be allowed passage, dropped, rejected, or forwarded or mirrored to some other device. Initial entrants into the virtual firewall field were largely bridge-mode, and many offers retain this feature. By contrast, a virtual firewall operating in hypervisor-mode is not actually part of the virtual network at all, and as such has no physical-world device analog. A hypervisor-mode virtual firewall resides in the virtual machine monitor or hypervisor where it is well positioned to capture VM activity including packet injections. The entire monitored VM and all its virtual hardware, software, services, memory and storage can be examined, as can changes in these . Further, since a hypervisor-based virtual firewall is not part of the network proper and is not a virtual machine its functionality cannot be monitored in turn or altered by users and software limited to running under a VM or having access only to the virtualized network. Bridge-mode virtual firewalls can be installed just as any other virtual machine in the virtualized infrastructure. Since it is then a virtual machine itself, the relationship of the VF to all the other VM may become complicated over time due to VMs disappearing and appearing in random ways, migrating between different physical hosts, or other uncoordinated changes allowed by the virtualized infrastructure. 
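As a rough sketch of the kind of per-packet decision a bridge-mode virtual firewall makes, the following toy rule table maps packet attributes to the actions mentioned above (allow, drop, reject, or mirror to a monitoring device). The addresses and rules are made up purely for illustration, and this is not the code of any particular product.

from dataclasses import dataclass

@dataclass
class Packet:
    src: str        # source VM address
    dst: str        # destination VM address
    protocol: str   # e.g. "tcp", "udp", "icmp"
    dst_port: int

# Ordered rule table: the first matching rule wins; "mirror" means a copy is sent
# to a monitoring device while the original is still forwarded.
RULES = [
    ({"dst": "10.0.0.5", "protocol": "tcp", "dst_port": 22}, "drop"),
    ({"protocol": "icmp"}, "mirror"),
    ({"dst_port": 80}, "allow"),
]

def decide(packet):
    for match, action in RULES:
        if all(getattr(packet, field) == value for field, value in match.items()):
            return action
    return "reject"   # default policy when nothing matches

print(decide(Packet("10.0.0.9", "10.0.0.5", "tcp", 22)))   # drop
print(decide(Packet("10.0.0.9", "10.0.0.7", "tcp", 80)))   # allow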
Hypervisor-mode virtual firewalls require a modification to the physical host hypervisor kernel in order to install process hooks or modules allowing the virtual firewall system access to VM information and direct access to the virtual network switches and virtualized network interfaces moving packet traffic between VMs or between VMs and the network gateway. The hypervisor-resident virtual firewall can use the same hooks to then perform all the customary firewall functions like packet inspection, dropping, and forwarding but without actually touching the virtual network at any point. Hypervisor-mode virtual firewalls can be very much faster than the same technology running in bridge-mode because they are not doing packet inspection in a virtual machine, but rather from within the kernel at native hardware speeds. See also Virtual security appliance Network function virtualization References Further reading "Zeus Bot Appears in EC2 Cloud, Detected, Dismissed" Babcock, Charles. InformationWeek Dec 2009 "40,000 Firewalls! Help Please!?" Texiwill. The Virtualization Practice. Sept 2009 "OPINION / Why do we need virtual security? " Ben-Efraim, Amir. Government Security News. Aug 2009 "Keep Your Virtual Networks Safe" Zillion Magazine. July 2009 "The virtual blind spot" Schultz, Beth. NetworkWorld. July 2010 "Cloud security in the real world: 4 examples" Brandel, Mary. CSO: Security & Risk. June 2010 "Securing mixed environments - not everybody will be virtualized" Ogren, Eric. ComputerWorld. June 2010 "New security tools protect virtual machines" Strom, David. Network World March 2011 Computer networking Virtualization software Ethernet
27652460
https://en.wikipedia.org/wiki/Shared%20Source%20Initiative
Shared Source Initiative
The Shared Source Initiative (SSI) is a source-available software licensing scheme launched by Microsoft in May 2001. The program includes a spectrum of technologies and licenses, and most of its source code offerings are available for download after eligibility criteria are met. Overview Microsoft's Shared Source Initiative allows individuals and organizations to access Microsoft's source code for reference (e.g. when developing complementary systems), for review and auditing from a security perspective (mostly wanted by some large corporations and governments), and for development (academic institutions, OEMs, individual developers). As part of the framework, Microsoft released 5 licenses for general use. Two of them, Microsoft Public License and Microsoft Reciprocal License, have been approved by the Open Source Initiative as open source licenses and are regarded by the Free Software Foundation as free software licenses. Other shared source licenses are proprietary, and thus allow the copyright holder to retain tighter control over the use of their product. Microsoft's Shared Source Initiative has been imitated by other companies such as RISC OS Open Ltd. Microsoft also uses specific licenses for some of their products, such as the Shared Source CLI License and the Microsoft Windows Embedded CE 6.0 Shared Source License. Free and open-source licenses The following licenses are considered open-source by the Open Source Initiative and free by the Free Software Foundation. Microsoft Public License (Ms-PL) This is the least restrictive of the Microsoft licenses and allows for distribution of compiled code for either commercial or non-commercial purposes under any license that complies with the Ms-PL. Redistribution of the source code itself is permitted only under the Ms-PL. Initially titled Microsoft Permissive License, it was renamed to Microsoft Public License while being reviewed for approval by the Open Source Initiative (OSI). The license was approved on October 12, 2007, along with the Ms-RL. According to the Free Software Foundation, it is a free software license but not compatible with the GNU GPL. Ms-PL provides a free and flexible licensing for developers using source codes under this license. However, the Ms-PL is a copyleft license because it requires the source code of software it governs to be distributed only under the same license (the Ms-PL). Microsoft Reciprocal License (Ms-RL) This Microsoft license allows for distribution of derived code so long as the modified source files are included and retain the Ms-RL. The Ms-RL allows those files in the distribution that do not contain code originally licensed under Ms-RL to be licensed according to the copyright holder's choosing. This is similar, but not the same as the CDDL, EPL or LGPL (GPL with a typical "linking exception"). Initially known as the Microsoft Community License, it was renamed during the OSI approval process. On December 9, 2005, the Ms-RL license was submitted to the Open Source Initiative for approval by John Cowan. OSI then contacted Microsoft and asked if they wanted OSI to proceed. Microsoft replied that they did not wish to be reactive and that they needed time to review such a decision. At the O'Reilly Open Source Convention in July 2007, Bill Hilf, director of Microsoft's work with open source projects, announced that Microsoft had formally submitted Ms-PL and Ms-RL to OSI for approval. It was approved on October 12, 2007, along with the Ms-PL. 
According to the Free Software Foundation, it is a free software license but not compatible with the GNU GPL. Restricted licenses The following source-available software licenses have limitations that prevent them from being open-source according to the Open Source Initiative and free to the Free Software Foundation. Microsoft Limited Public License (Ms-LPL) This is a version of the Microsoft Public License in which rights are only granted to developers of Microsoft Windows-based software. This license is not open source, as defined by the OSI, because the restriction limiting use of the software to Windows violates the stipulation that open-source licenses must be technology-neutral. It is also considered to be non-free by the Free Software Foundation due to this restriction. Microsoft Limited Reciprocal License (Ms-LRL) This is a version of the Microsoft Reciprocal License in which rights are only granted when developing software for a Microsoft Windows platform. Like the Ms-LPL, this license is not open source because it is not technology-neutral due to its restriction that licensed software must be used on Windows, and is also not considered free by the Free Software Foundation due to this restriction. Microsoft Reference Source License (Ms-RSL) This is the most restrictive of the Microsoft Shared Source licenses. The source code is made available to view for reference purposes only, mainly to be able to view Microsoft classes source code while debugging. Developers may not distribute or modify the code for commercial or non-commercial purposes. The license has previously been abbreviated Ms-RL, but Ms-RL now refers to the Microsoft Reciprocal License. Criticism Two specific shared source licenses are interpreted as free software and open source licenses by FSF and OSI. However, former OSI president Michael Tiemann considers the phrase "Shared Source" itself to be a marketing term created by Microsoft. He argues that it is "an insurgent term that distracts and dilutes the Open Source message by using similar-sounding terms and offering similar-sounding promises". The Shared Source Initiative has also been noted to increase the problem of license proliferation. See also Open Source Initiative Source-available software Software using the Microsoft Public License (category) References External links Microsoft initiatives Software licenses
24963451
https://en.wikipedia.org/wiki/W.%20Bruce%20Croft
W. Bruce Croft
W. Bruce Croft is a distinguished professor of computer science at the University of Massachusetts Amherst whose work focuses on information retrieval. He is the founder of the Center for Intelligent Information Retrieval and served as the editor-in-chief of ACM Transactions on Information Systems from 1995 to 2002. He was also a member of the National Research Council Computer Science and Telecommunications Board from 2000 to 2003. Since 2015, he has been the Dean of the College of Information and Computer Sciences at the University of Massachusetts Amherst. He was Chair of the UMass Amherst Computer Science Department from 2001 to 2007. Bruce Croft formed the Center for Intelligent Information Retrieval (CIIR) in 1991, since when he and his students have worked with more than 90 industry and government partners on research and technology projects and have produced more than 900 papers. Bruce Croft has made major contributions to most areas of information retrieval, including pioneering work in clustering, passage retrieval, sentence retrieval, and distributed search. One of the most important areas of work for Croft relates to ranking functions and retrieval models, where he has led the development of one of the major approaches to modeling search: language modelling. In later years, Croft also led the way in the development of feature-based ranking functions. Croft and his research group have also developed a series of search engines: InQuery, the Lemur toolkit, Indri, and Galago. These search engines are open source and offer unique capabilities that are not replicated in other research retrieval platforms; consequently they are downloaded by hundreds of researchers worldwide. As a consequence of his work, Croft is one of the most cited researchers in information retrieval. Education Croft earned a bachelor's degree with honors in 1973 and a master's degree in computer science in 1974 from Monash University in Melbourne, Australia. He earned his Ph.D. in computer science from the University of Cambridge in 1979 and joined the University of Massachusetts, Amherst faculty later that year. Honors and awards Croft has received several prestigious awards, including: ACM Fellow in 1997 American Society for Information Science and Technology Research Award in 2000 Gerard Salton Award (a lifetime achievement award) from ACM SIGIR in 2003 Tony Kent Strix Award in 2013 IEEE Computer Society Technical Achievement Award in 2014 Best Student Paper Award from SIGIR in 1997 and 2005 Test of Time Award from SIGIR for his papers published in 1990, 1995, 1996, 1998, 2001 Many other publications have been short-listed for the Best Paper Award at SIGIR and CIKM References External links Faculty homepage American computer scientists Fellows of the Association for Computing Machinery University of Massachusetts Amherst faculty Year of birth missing (living people) Living people Information retrieval researchers
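As a rough, hypothetical sketch of the query-likelihood language-modelling approach to ranking that Croft's group helped establish (this is not code from InQuery, the Lemur toolkit, Indri, or Galago), the snippet below scores documents by the probability of generating the query from a Dirichlet-smoothed unigram model of each document.

import math
from collections import Counter

# Toy corpus: document id -> text (made up for illustration).
docs = {
    "d1": "information retrieval with language models",
    "d2": "language models for speech recognition",
    "d3": "retrieval of structured information",
}
tokenized = {d: text.split() for d, text in docs.items()}
collection = [w for words in tokenized.values() for w in words]
collection_counts = Counter(collection)
collection_len = len(collection)

def query_likelihood(query, doc_words, mu=2000.0):
    """Log P(query | document) under a Dirichlet-smoothed unigram language model."""
    doc_counts = Counter(doc_words)
    doc_len = len(doc_words)
    score = 0.0
    for term in query.split():
        p_collection = collection_counts[term] / collection_len
        p = (doc_counts[term] + mu * p_collection) / (doc_len + mu)
        if p == 0:                      # term absent from the entire collection
            return float("-inf")
        score += math.log(p)
    return score

query = "language retrieval"
ranking = sorted(tokenized, key=lambda d: query_likelihood(query, tokenized[d]), reverse=True)
print(ranking)                          # d1 should rank first: it contains both query terms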
171336
https://en.wikipedia.org/wiki/Optical%20mouse
Optical mouse
An optical mouse is a computer mouse which uses a light source, typically a light-emitting diode (LED), and a light detector, such as an array of photodiodes, to detect movement relative to a surface. Variations of the optical mouse have largely replaced the older mechanical mouse design, which uses moving parts to sense motion. The earliest optical mice detected movement on pre-printed mousepad surfaces. Modern optical mice work on most opaque diffusely reflective surfaces like paper, but most of them do not work properly on specularly reflective surfaces like polished stone or transparent surfaces like glass. Optical mice that use dark field illumination can function reliably even on such surfaces. Mechanical mice Though not commonly referred to as optical mice, nearly all mechanical mice tracked movement using LEDs and photodiodes to detect when beams of infrared light did and didn't pass through holes in a pair of incremental rotary encoder wheels (one for left/right, another for forward/back), driven by a rubberized ball. Thus, the primary distinction of “optical mice” is not their use of optics, but their complete lack of moving parts to track mouse movement, instead employing an entirely solid-state system. Early optical mice The first two optical mice, first demonstrated by two independent inventors in December 1980, had different basic designs: One of these, invented by Steve Kirsch of MIT and Mouse Systems Corporation, used an infrared LED and a four-quadrant infrared sensor to detect grid lines printed with infrared absorbing ink on a special metallic surface. Predictive algorithms in the CPU of the mouse calculated the speed and direction over the grid. The other type, invented by Richard F. Lyon of Xerox, used a 16-pixel visible-light image sensor with integrated motion detection on the same ntype (5µm) MOS integrated circuit chip, and tracked the motion of light dots in a dark field of a printed paper or similar mouse pad. The Kirsch and Lyon mouse types had very different behaviors, as the Kirsch mouse used an x-y coordinate system embedded in the pad, and would not work correctly when the pad was rotated, while the Lyon mouse used the x-y coordinate system of the mouse body, as mechanical mice do. The optical mouse ultimately sold with the Xerox STAR office computer used an inverted sensor chip packaging approach patented by Lisa M. Williams and Robert S. Cherry of the Xerox Microelectronics Center. Modern optical mice Modern surface-independent optical mice work by using an optoelectronic sensor (essentially, a tiny low-resolution video camera) to take successive images of the surface on which the mouse operates. As computing power grew cheaper, it became possible to embed more powerful special-purpose image-processing chips in the mouse itself. This advance enabled the mouse to detect relative motion on a wide variety of surfaces, translating the movement of the mouse into the movement of the cursor and eliminating the need for a special mouse-pad. A surface-independent coherent light optical mouse design was patented by Stephen B. Jackson at Xerox in 1988. The first commercially available, modern optical computer mice were the Microsoft IntelliMouse with IntelliEye and IntelliMouse Explorer, introduced in 1999 using technology developed by Hewlett-Packard. It worked on almost any surface, and represented a welcome improvement over mechanical mice, which would pick up dirt, track capriciously, invite rough handling, and need to be taken apart and cleaned frequently. 
Other manufacturers soon followed Microsoft's lead using components manufactured by the HP spin-off Agilent Technologies, and over the next several years mechanical mice became obsolete. The technology underlying the modern optical computer mouse is known as digital image correlation, a technology pioneered by the defense industry for tracking military targets. A simple binary-image version of digital image correlation was used in the 1980 Lyon optical mouse. Optical mice use image sensors to image naturally occurring texture in materials such as wood, cloth, mouse pads and Formica. These surfaces, when lit at a grazing angle by a light emitting diode, cast distinct shadows that resemble a hilly terrain lit at sunset. Images of these surfaces are captured in continuous succession and compared with each other to determine how far the mouse has moved. To understand how optical flow is used in optical mice, imagine two photographs of the same object except slightly offset from each other. Place both photographs on a light table to make them transparent, and slide one across the other until their images line up. The amount that the edges of one photograph overhang the other represents the offset between the images, and in the case of an optical computer mouse the distance it has moved. Optical mice capture one thousand successive images or more per second. Depending on how fast the mouse is moving, each image will be offset from the previous one by a fraction of a pixel or as many as several pixels. Optical mice mathematically process these images using cross correlation to calculate how much each successive image is offset from the previous one. An optical mouse might use an image sensor having an 18 × 18 pixel array of monochromatic pixels. Its sensor would normally share the same ASIC as that used for storing and processing the images. One refinement would be accelerating the correlation process by using information from previous motions, and another refinement would be preventing deadbands when moving slowly by adding interpolation or frame-skipping. The development of the modern optical mouse at Hewlett-Packard Co. was supported by a succession of related projects during the 1990s at HP Laboratories. In 1992 William Holland was awarded US Patent 5,089,712 and John Ertel, William Holland, Kent Vincent, Rueiming Jamp, and Richard Baldwin were awarded US Patent 5,149,980 for measuring linear paper advance in a printer by correlating images of paper fibers. Ross R. Allen, David Beard, Mark T. Smith, and Barclay J. Tullis were awarded US Patents 5,578,813 (1996) and 5,644,139 (1997) for 2-dimensional optical navigational (i.e., position measurement) principles based on detecting and correlating microscopic, inherent features of the surface over which the navigation sensor travelled, and using position measurements of each end of a linear (document) image sensor to reconstruct an image of the document. This is the freehand scanning concept used in the HP CapShare 920 handheld scanner. By describing an optical means that explicitly overcame the limitations of wheels, balls, and rollers used in contemporary computer mice, the optical mouse was anticipated. These patents formed the basis for US Patent 5,729,008 (1998) awarded to Travis N. Blalock, Richard A. Baumgartner, Thomas Hornak, Mark T. Smith, and Barclay J. Tullis, where surface feature image sensing, image processing, and image correlation was realized by an integrated circuit to produce a position measurement. 
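A minimal sketch of the image-correlation idea described above: each new frame is compared with the previous one at every candidate shift, and the shift with the smallest mean difference is taken as the motion estimate. Real sensors do this in dedicated hardware with sub-pixel interpolation; the frames and search range below are made up purely for illustration.

def best_offset(prev, curr, max_shift=2):
    """Estimate the (dx, dy) shift of curr relative to prev by exhaustive search.

    prev and curr are equal-sized 2D lists of pixel intensities (an 18 x 18 array
    in a typical sensor). For each candidate shift the mean squared difference
    over the overlapping region is computed; the lowest score wins, with ties
    broken in favour of the largest overlap.
    """
    h, w = len(prev), len(prev[0])
    best_key, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            total, count = 0, 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        total += (curr[sy][sx] - prev[y][x]) ** 2
                        count += 1
            if count:
                key = (total / count, -count)
                if best_key is None or key < best_key:
                    best_key, best_shift = key, (dx, dy)
    return best_shift

# Tiny made-up example: a bright spot moves one pixel to the right between frames.
frame1 = [[0, 0, 0, 0],
          [0, 9, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
frame2 = [[0, 0, 0, 0],
          [0, 0, 9, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
print(best_offset(frame1, frame2))      # (1, 0): one pixel of motion along x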
Improved precision of 2D optical navigation, needed for application of optical navigation to precise 2D measurement of media (paper) advance in HP DesignJet large format printers, was further refined in US Patent 6,195,475 awarded in 2001 to Raymond G. Beausoleil, Jr., and Ross R. Allen. While the reconstruction of the image in the document scanning application (Allen et al.) required resolution by the optical navigators on the order of 1/600th of an inch, implementation of optical position measurement in computer mice not only benefit from the cost reductions inherent in navigating at lower resolution, but also enjoy the advantage of visual feedback to the user of the cursor position on the computer display. In 2002, Gary Gordon, Derek Knee, Rajeev Badyal and Jason Hartlove were awarded US Patent 6,433,780 for an optical computer mouse that measured position using image correlation. Some small trackpads (such as those on Blackberry smartphones) work like an optical mouse. Light source LED mice Optical mice often used light-emitting diodes (LEDs) for illumination when first popularized. The color of the optical mouse's LEDs can vary, but red is most common, as red diodes are inexpensive and silicon photodetectors are very sensitive to red light. IR LEDs are also widely used. Other colors are sometimes used, such as the blue LED of the V-Mouse VM-101 illustrated at right. Laser mice The laser mouse uses an infrared laser diode instead of an LED to illuminate the surface beneath their sensor. As early as 1998, Sun Microsystems provided a laser mouse with their Sun SPARCstation servers and workstations. However, laser mice did not enter the mainstream consumer market until 2004, following the development by a team at Agilent Laboratories, Palo Alto, led by Doug Baney of a laser-based mouse based on a 850 nm VCSEL (laser) that offered 20X improvement in tracking performance. Tong Xie, Marshall T. Depue, and Douglas M. Baney were awarded US patents 7,116,427 and 7,321,359 for their work on low power consumption broad navigability VCSEL-based consumer mice. Paul Machin at Logitech, in partnership with Agilent Technologies introduced the new technology as the MX 1000 laser mouse. This mouse uses a small infrared laser (VCSEL) instead of an LED and has significantly increased the resolution of the image taken by the mouse. The laser illumination enables superior surface tracking compared to LED-illuminated optical mice. Glass laser (or glaser) mice have the same capability of a laser mouse but work far better on mirror or transparent glass surfaces than other optical mice on those surfaces. In 2008, Avago Technologies introduced laser navigation sensors whose emitter was integrated into the IC using VCSEL technology. In August 2009, Logitech introduced mice with two lasers, to track on glass and glossy surfaces better; they dubbed them a "Darkfield" laser sensor. Power Manufacturers often engineer their optical mice—especially battery-powered wireless models—to save power when possible. To do this, the mouse dims or blinks the laser or LED when in standby mode (each mouse has a different standby time). A typical implementation (by Logitech) has four power states, where the sensor is pulsed at different rates per second: 11500: full on, for accurate response while moving, illumination appears bright. 1100: fallback active condition while not moving, illumination appears dull. 
110: standby 12: sleep state Movement can be detected in any of these states; some mice turn the sensor fully off in the sleep state, requiring a button click to wake. Optical mice utilizing infrared elements (LEDs or lasers) offer substantial increases in battery life over visible spectrum illumination. Some mice, such as the Logitech V450 848 nm laser mouse, are capable of functioning on two AA batteries for a full year, due to the low power requirements of the infrared laser. Mice designed for use where low latency and high responsiveness are important, such as in playing video games, may omit power-saving features and require a wired connection to improve performance. Examples of mice which sacrifice power-saving in favor of performance are the Logitech G5 and the Razer Copperhead. Optical versus mechanical mice Unlike mechanical mice, whose tracking mechanisms can become clogged with lint, optical mice have no moving parts (besides buttons and scroll wheels); therefore, they do not require maintenance other than removing debris that might collect under the light emitter. However, they generally cannot track on glossy and transparent surfaces, including some mouse-pads, causing the cursor to drift unpredictably during operation. Mice with less image-processing power also have problems tracking fast movement, whereas some high-quality mice can track faster than 2 m/s. Some models of laser mouse can track on glossy and transparent surfaces, and have a much higher sensitivity. mechanical mice had lower average power requirements than their optical counterparts; the power used by mice is relatively small, and only an important consideration when the power is derived from batteries, with their limited capacity. Optical models outperform mechanical mice on uneven, slick, soft, sticky, or loose surfaces, and generally in mobile situations lacking mouse pads. Because optical mice render movement based on an image which the LED (or infrared diode) illuminates, use with multicolored mouse pads may result in unreliable performance; however, laser mice do not suffer these problems and will track on such surfaces. References Computer mice History of human–computer interaction Video game control methods American inventions
69730727
https://en.wikipedia.org/wiki/Choreographic%20programming
Choreographic programming
In computer science, choreographic programming is a programming paradigm where programs are compositions of interactions among multiple concurrent participants. Overview Choreographies In choreographic programming, developers use a choreographic programming language to define the intended communication behaviour of concurrent participants. Programs in this paradigm are called choreographies. Choreographic languages are inspired by security protocol notation (also known as "Alice and Bob" notation). The key to these languages is the communication primitive, for example Alice.expr -> Bob.x reads "Alice communicates the result of evaluating the expression expr to Bob, which stores it in its local variable x". Alice, Bob, etc. are typically called roles or processes. The example below shows a choreography for a simplified single sign-on (SSO) protocol based on a Central Authentication Service (CAS) that involves three roles:
Client, which wishes to obtain an access token from CAS to interact with Service.
Service, which needs to know from CAS if the Client should be given access.
CAS, which is the Central Authentication Service responsible for checking the Client's credentials.
The choreography is:
Client.(credentials, serviceID) -> CAS.authRequest
if CAS.check(authRequest) then
  CAS.token = genToken(authRequest)
  CAS.Success(token) -> Client.result
  CAS.Success(token) -> Service.result
else
  CAS.Failure -> Client.result
  CAS.Failure -> Service.result
The choreography starts in Line 1, where Client communicates a pair consisting of some credentials and the identifier of the service it wishes to access to CAS. CAS stores this pair in its local variable authRequest (for authentication request). In Line 2, the CAS checks if the request is valid for obtaining an authentication token. If so, it generates a token and communicates a Success message containing the token to both Client and Service (Lines 3–5). Otherwise, the CAS informs Client and Service that authentication failed, by sending a Failure message (Lines 7–8). We refer to this choreography as the "SSO choreography" in the remainder. Endpoint Projection A key feature of choreographic programming is the capability of compiling choreographies to distributed implementations. These implementations can be libraries for software that needs to participate in a computer network by following a protocol, or standalone distributed programs. The translation of a choreography into distributed programs is called endpoint projection (EPP for short). Endpoint projection returns a program for each role described in the source choreography. For example, given the choreography above, endpoint projection would return three programs: one for Client, one for Service, and one for CAS. They are shown below in pseudocode form, where send and recv are primitives for sending and receiving messages to/from other roles. For each role, its code contains the actions that the role should execute to implement the choreography correctly together with the others. Development The paradigm of choreographic programming originates from the PhD thesis of the same name. The inspiration for the syntax of choreographic programming languages can be traced back to security protocol notation, also known as "Alice and Bob" notation. Choreographic programming has also been heavily influenced by standards for service choreography and interaction diagrams, as well as developments of the theory of process calculi. Choreographic programming is an active area of research. 
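The projected endpoint programs themselves are not reproduced above; as a hedged reconstruction, the sketch below shows one plausible rendering of what endpoint projection could emit for the SSO choreography, written here in Python with send and recv modelled by in-memory queues. The queue harness and function names are illustrative only and are not the output of any particular choreographic compiler.

import queue
import threading

# One mailbox per role; send and recv stand in for the messaging primitives.
mailboxes = {"Client": queue.Queue(), "Service": queue.Queue(), "CAS": queue.Queue()}

def send(role, message):
    mailboxes[role].put(message)

def recv(role):
    return mailboxes[role].get()

def client(credentials, service_id):
    send("CAS", (credentials, service_id))    # choreography line 1
    return recv("Client")                     # Success(token) or Failure

def service():
    return recv("Service")                    # Success(token) or Failure

def cas(check, gen_token):
    auth_request = recv("CAS")                # line 1
    if check(auth_request):                   # line 2
        token = gen_token(auth_request)       # line 3
        send("Client", ("Success", token))    # line 4
        send("Service", ("Success", token))   # line 5
    else:
        send("Client", ("Failure", None))     # line 7
        send("Service", ("Failure", None))    # line 8

# Demo run with trivial credential checking and token generation.
results = {}
threads = [
    threading.Thread(target=lambda: results.setdefault("client", client("secret", "wiki"))),
    threading.Thread(target=lambda: results.setdefault("service", service())),
    threading.Thread(target=lambda: cas(lambda req: req[0] == "secret", lambda req: "token-123")),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)   # both Client and Service receive ("Success", "token-123")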
The paradigm has been used in the study of information flow, parallel computing, cyber-physical systems, runtime adaptation, and system integration. Languages AIOCJ (website). A choreographic programming language for adaptable systems that produces code in Jolie. Chor (website). A session-typed choreographic programming language that compiles to microservices in Jolie. Choral (website). A higher-order, object-oriented choreographic programming language that compiles to libraries in Java. Core Choreographies. A core theoretical model for choreographic programming. An implementation is available in Coq. Pirouette. A mechanised choreographic programming language theory with higher-order procedures. See also Security protocol notation Sequence diagram Service choreography Structured concurrency References External links www.choral-lang.org Concurrent computing Programming paradigms
716896
https://en.wikipedia.org/wiki/Virtual%20world
Virtual world
A virtual world (also called a virtual space) is a computer-simulated environment which may be populated by many users who can create a personal avatar, and simultaneously and independently explore the virtual world, participate in its activities and communicate with others. These avatars can be textual, graphical representations, or live video avatars with auditory and touch sensations. The user accesses a computer-simulated world which presents perceptual stimuli to the user, who in turn can manipulate elements of the modeled world and thus experience a degree of presence. Such modeled worlds and their rules may draw from reality or fantasy worlds. Example rules are gravity, topography, locomotion, real-time actions, and communication. Communication between users can range from text, graphical icons, visual gesture, sound, and rarely, forms using touch, voice command, and balance senses. Massively multiplayer online games depict a wide range of worlds, including those based on science fiction, the real world, super heroes, sports, horror, and historical milieus. Most MMORPGs have real-time actions and communication. Players create a character who travels between buildings, towns, and worlds to carry out business or leisure activities. Communication is usually textual, but real-time voice communication is also possible. The form of communication used can substantially affect the experience of players in the game. Media studies professor Edward Castronova used the term "synthetic worlds" to discuss individual virtual worlds, but this term has not been widely adopted. Virtual worlds are not limited to games but, depending on the degree of immediacy presented, can encompass computer conferencing and text-based chatrooms. History The concept of virtual worlds significantly predates computers. The Roman naturalist, Pliny the Elder, expressed an interest in perceptual illusion. In the twentieth century, the cinematographer Morton Heilig explored the creation of the Sensorama, a theatre experience designed to stimulate the senses of the audience—vision, sound, balance, smell, even touch (via wind)—and so draw them more effectively into the productions Among the earliest virtual worlds implemented by computers were virtual reality simulators, such as the work of Ivan Sutherland. Such devices are characterized by bulky headsets and other types of sensory input simulation. Contemporary virtual worlds, in particular the multi-user online environments, emerged mostly independently of this research, fueled instead by the gaming industry but drawing on similar inspiration. While classic sensory-imitating virtual reality relies on tricking the perceptual system into experiencing an immersive environment, virtual worlds typically rely on mentally and emotionally engaging content which gives rise to an immersive experience. Maze War was the first networked, 3D multi-user first person shooter game. Maze introduced the concept of online players in 1973–1974 as "eyeball 'avatars' chasing each other around in a maze." It was played on ARPANET, or Advanced Research Projects Agency Network, a precursor to the Internet funded by the United States Department of Defense for use in university and research laboratories. The initial game could only be played on an Imlac, as it was specifically designed for this type of computer. The first virtual worlds presented on the Internet were communities and chat rooms, some of which evolved into MUDs and MUSHes. The first MUD, known as MUD1, was released in 1978. 
The acronym originally stood for Multi-User Dungeon, but later also came to mean Multi-User Dimension and Multi-User Domain. A MUD is a virtual world with many players interacting in real time. The early versions were text-based, offering only limited graphical representation and often using a Command Line Interface. Users interact in role-playing or competitive games by typing commands and can read or view descriptions of the world and other players. Such early worlds began the MUD heritage that eventually led to massively multiplayer online role-playing games, more commonly known as MMORPGs, a genre of role-playing games in which a large number of players interact within a virtual world. Some prototype virtual worlds were WorldsAway, a two-dimensional chat environment where users designed their own avatars; Dreamscape, an interactive community featuring a virtual world by CompuServe; Cityspace, an educational networking and 3D computer graphics project for children; and The Palace, a 2-dimensional community driven virtual world. However, credit for the first online virtual world usually goes to Habitat, developed in 1987 by LucasFilm Games for the Commodore 64 computer, and running on the Quantum Link service (the precursor to America Online). In 1996, the city of Helsinki, Finland with Helsinki Telephone Company (since Elisa Group) launched what was called the first online virtual 3D depiction intended to map an entire city. The Virtual Helsinki project was eventually renamed Helsinki Arena 2000 project and parts of the city in modern and historical context were rendered in 3D. In 1999, Whyville.net the first virtual world specifically for children was launched with a base in game-based learning and one of the earliest virtual currency-based economies. Shortly after, in 2000, Habbo launched and grew to become one of the most popular and longest running virtual worlds with millions of users around the world. Virtual world concepts Definitions for a "virtual world" include: "A virtual world is something with the following characteristics: It operates using an underlying automated rule set—its physics; Each player represents an individual "in" the virtual world-that player’s character; Interaction with the world takes place in real time—if you do something, it happens pretty much when you do it; The world is shared-other people can play in the same world at the same time as you; The world is persistent-it’s still there when you’re not; It’s not the real world", by Richard Bartle in 2015 "A simulated environment where many agents can virtually interact with each other, act and react to things, phenomena and the environment; agents can be zero or many human(s), each represented by many entities called a virtual self (an avatar), or many software agents; all action/reaction/interaction must happen in a real-time shared spatiotemporal nonpausable virtual environment; the environment may consist of many data spaces, but the collection of data spaces should constitute a shared data space, one persistent shard", by Nevelsteen in 2018 There is no generally accepted definition of virtual world, but they do require that the world be persistent; in other words, the world must continue to exist even after a user exits the world, and user-made changes to the world should be preserved. While the interaction with other participants is done in real-time, time consistency is not always maintained in online virtual worlds. 
For example, EverQuest time passes faster than real-time despite using the same calendar and time units to present game time. As virtual world is a general term, the virtual environment supports varying degrees of play and gaming. Some uses of the term include Massively multiplayer online games (MMOGs) games in which a large number of players interact within a virtual world. The concept of MMO has spread to other game types such as sports, real-time strategy and others. The persistence criterion is the only criterion that separates virtual worlds from video games, meaning that some MMO versions of RTS and FPS games resemble virtual worlds; Destiny is a video game that is such a pseudo virtual world. Emerging concepts include basing the terrain of such games on real satellite photos, such as those available through the Google Maps API or through a simple virtual geocaching of "easter eggs" on WikiMapia or similar mash-ups, where permitted; these concepts are virtual worlds making use of mixed reality. Collaborative virtual environments (CVEs) designed for collaborative work in a virtual environment. Massively multiplayer online real-life games (MMORLGs), also called virtual social worlds, where the user can edit and alter their avatar at will, allowing them to play a more dynamic role, or multiple roles. Economy A virtual economy is the emergent property of the interaction between participants in a virtual world. While the designers have a great deal of control over the economy by the encoded mechanics of trade, it is nonetheless the actions of players that define the economic conditions of a virtual world. The economy arises as a result of the choices that players make under the scarcity of real and virtual resources such as time or currency. Participants have a limited time in the virtual world, as in the real world, which they must divide between task such as collecting resources, practicing trade skills, or engaging in less productive fun play. The choices they make in their interaction with the virtual world, along with the mechanics of trade and wealth acquisition, dictate the relative values of items in the economy. The economy in virtual worlds is typically driven by in-game needs such as equipment, food, or trade goods. Virtual economies like that of Second Life, however, are almost entirely player-produced with very little link to in-game needs. While the relevance of virtual world economics to physical world economics has been questioned, it has been shown the users of virtual worlds respond to economic stimuli (such as the law of supply and demand) in the same way that people do in the physical world. In fact, there are often very direct corollaries between physical world economic decisions and virtual world economic decisions, such as the decision by prisoners of war in World War II to adopt cigarettes as currency and the adoption of Stones of Jordan as currency in Diablo II. The value of objects in a virtual economy is usually linked to their usefulness and the difficulty of obtaining them. The investment of real world resources (time, membership fees, etc.) in acquisition of wealth in a virtual economy may contribute to the real world value of virtual objects. This real world value is made obvious by the (mostly illegal) trade of virtual items on online market sites like eBay, PlayerUp, IGE for real world money. 
Recent legal disputes also acknowledge the value of virtual property, even overriding the mandatory EULA which many software companies use to establish that virtual property has no value and/or that users of the virtual world have no legal claim to property therein. Some industry analysts have moreover observed that there is a secondary industry growing behind the virtual worlds, made up by social networks, websites and other projects completely devoted to virtual worlds communities and gamers. Special websites such as GamerDNA, Koinup and others which serve as social networks for virtual worlds users are facing some crucial issues as the DataPortability of avatars across many virtual worlds and MMORPGs. Virtual worlds offer advertisers the potential for virtual advertisements, such as the in-game advertising already found in a number of video games. Geography The geography of virtual worlds can vary widely because the role of geography and space is an important design component over which the developers of virtual worlds have control and may choose to alter. Virtual worlds are, at least superficially, digital instantiations of three-dimensional space. As a result, considerations of geography in virtual worlds (such as World of Warcraft) often revolve around “spatial narratives” in which players act out a nomadic hero's journey along the lines of that present in The Odyssey. The creation of fantastic places is also a reoccurring theme in the geographic study of virtual worlds, although, perhaps counterintuitively, the heaviest users of virtual worlds often downgrade the sensory stimuli of the world's fantastic places in order to make themselves more efficient at core tasks in the world, such as killing monsters. However, the geographic component of some worlds may only be a geographic veneer atop an otherwise nonspatial core structure. For instance, while imposing geographic constraints upon users when they quest for items, these constraints may be removed when they sell items in a geographically unconstrained auction house. In this way, virtual worlds may provide a glimpse into what the future economic geography of the physical world may be like as more and more goods become digital. Research Virtual spaces can serve a variety of research and educational goals and may be useful for examining human behaviour. Offline- and virtual-world personalities differ from each other but are nevertheless significantly related which has a number of implications for self-verification, self-enhancement and other personality theories. Panic and agoraphobia have also been studied in a virtual world. Given the large engagement, especially of young children in virtual worlds, there has been a steady growth in research studies involving the social, educational and even emotional impact of virtual worlds on children. The John D. and Catherine T. MacArthur Foundation for example have funded research into virtual worlds including, for example, how preteens explore and share information about reproductive health. A larger set of studies on children's social and political use of the virtual world Whyville.net has also been published in the book "Connected Play: Tweens in a Virtual World" Authored by Yasmin B. Kafai, Deborah A. Fields, and Mizuko Ito. Several other research publications now specifically address the use of virtual worlds for education. Other research focused more on adults explores the reasons for indulging and the emotions of virtual world users. 
Many users seek an escape or a comfort zone in entering these virtual worlds, as well as a sense of acceptance and freedom. Virtual worlds allow users to freely explore many facets of their personalities in ways that are not easily available to them in real life. However, users may not be able to apply this new information outside of the virtual world. Thus, virtual worlds allow users to flourish within the world and possibly become addicted to their new virtual life, which may create challenges in dealing with others and in emotionally surviving within their real lives. One reason for this freedom of exploration is the anonymity that virtual worlds provide. It gives the individual the ability to be free from social norms, family pressures or expectations they may face in their personal real world lives. The avatar persona offers an experience similar to an escape from reality, much like the use of drugs or alcohol to numb pain or to hide behind. The avatar no longer represents a simple tool or mechanism manipulated in cyberspace. Instead, it has become the individual's bridge between the physical and virtual world, a conduit through which to express oneself among other social actors. The avatar becomes the person's alter ego; the vehicle one uses to exist among others who are all seeking the same satisfaction. While greatly facilitating ease of interaction across time and geographic boundaries, the virtual world presents an unreal environment with instant connection and gratification. Online encounters are employed as seemingly fulfilling alternatives to "live person" relationships (Toronto, 2009). When one is ashamed, insecure, lost or just looking for something different and stimulating to engage in, virtual worlds are the perfect environment for their users. A person has unlimited access to an infinite array of opportunities to fulfill every fantasy, grant every wish, or satisfy every desire. He or she can face any fear or conquer any enemy, all at the click of a mouse (Toronto, 2009). Ultimately, virtual worlds are the place to go when real life becomes overbearing or boring. While in real life individuals hesitate to communicate their true opinions, it is easier to do so online because they never have to meet the people they are talking with (Toronto, 2009). Thus, virtual worlds are basically a psychological escape. Another area of research related to virtual worlds is the field of navigation. Specifically, this research investigates whether or not virtual environments are adequate learning tools with regard to real-world navigation. Psychologists at Saint Michael's College found that video game experience corresponded with the ability to navigate virtual environments and complete objectives; however, that experience did not correlate with an increased ability to navigate real, physical environments. An extensive study at the University of Washington conducted multiple experiments involving virtual navigation. One experiment had two groups of subjects, the first of which examined maps of a virtual environment, and the second of which navigated the virtual environment. The groups of subjects then completed an objective in the virtual environment. There was little difference between the two groups' performances, and what difference there was favored the map users.
The test subjects, though, were generally unfamiliar with the virtual world interface, likely leading to some impaired navigation, and thus to bias in the resulting analysis of the experiments. The study concluded that the interface objects made natural navigation movements impossible, and that perhaps less intrusive controls for the virtual environment would reduce the effect of the impairment. Hardware Unlike most video games, which are usually navigated using various free-ranging human interface devices (HIDs), virtual worlds are usually navigated (as of 2009) using HIDs which are designed and oriented around flat, 2-dimensional graphical user interfaces; as most comparatively inexpensive computer mice are manufactured and distributed for 2-dimensional UI navigation, the lack of 3D-capable HID usage among most virtual world users is likely due both to the lack of penetration of 3D-capable devices into non-niche, non-gaming markets and to the generally higher pricing of such devices compared to 2-dimensional HIDs. Even those users who do make use of HIDs which provide such features as six degrees of freedom often have to switch between separate 3D and 2D devices in order to navigate their respectively designed interfaces. Like video gamers, some users of virtual world clients may also be hindered by the need for capable graphics hardware (such as the more advanced graphics processing units distributed by Nvidia and AMD) to keep navigation of virtual worlds fluid. However, in part for this reason, a growing number of virtual world engines, especially those serving children, are entirely browser-based, requiring no software downloads or specialized computer hardware. The first virtual world of this kind was Whyville.net, launched in 1999, built by Numedeon Inc., which obtained an early patent for its browser-based implementation. Application domains Social Although the social interactions of participants in virtual worlds are often viewed in the context of 3D games, other forms of interaction are common as well, including forums, blogs, wikis, chatrooms, instant messaging, and video conferences. Communities are born in places which have their own rules, topics, jokes, and even language. Members of such communities can find like-minded people to interact with, whether this be through a shared passion, the wish to share information, or a desire to meet new people and experience new things. Users may develop personalities within the community adapted to the particular world they are interacting with, which can impact the way they think and act. Internet friendships and participation in online communities tend to complement existing friendships and civic participation rather than replacing or diminishing such interactions. Medical Disabled or chronically ill people of any age can benefit enormously from experiencing the mental and emotional freedom gained by temporarily leaving their disabilities behind and doing, through the medium of their avatars, things as simple and potentially accessible to able, healthy people as walking, running, dancing, sailing, fishing, swimming, surfing, flying, skiing, gardening, exploring and other physical activities which their illnesses or disabilities prevent them from doing in real life. They may also be able to socialize, form friendships and relationships much more easily and avoid the stigma and other obstacles which would normally be attached to their disabilities.
This can be much more constructive, emotionally satisfying and mentally fulfilling than passive pastimes such as television watching, playing computer games, reading or more conventional types of internet use. The Starlight Children's Foundation helps hospitalized children (suffering from painful diseases or autism, for example) create a comfortable and safe environment that can broaden their situation and lets them experience interactions (when the involvement of multiple cultures and players from around the world is factored in) that they may not have been able to experience without a virtual world, healthy or sick. Virtual worlds also enable them to experience and act beyond the restrictions of their illness and help to relieve stress. Virtual worlds can help players become more familiar and comfortable with actions they may feel reluctant to perform, or embarrassed about, in real life. For example, in World of Warcraft, /dance is the emote for a dance move which a player in the virtual world can "emote" quite simply, and a familiarization with such "emotes" or social skills (such as encouragement, gratitude, problem-solving, and even kissing) in the virtual world via an avatar can smooth the assimilation of similar forms of expression, socialization and interaction in real life. Interaction with humans through avatars in the virtual world has the potential to seriously expand the mechanics of one's real-life interactions. Commercial As businesses compete in the real world, they also compete in virtual worlds. The increase in the buying and selling of products online (e-commerce), twinned with the rise in the popularity of the internet, has forced businesses to adjust to accommodate the new market. Many companies and organizations now incorporate virtual worlds as a new form of advertising. There are many advantages to using these methods of commercialization. An example of this would be Apple creating an online store within Second Life. This allows users to browse the latest and most innovative products. Players cannot actually purchase a product, but having these "virtual stores" is a way of reaching a different clientele and customer demographic. The use of advertising within virtual worlds is a relatively new idea, because virtual worlds themselves are a relatively new technology. Previously, companies would use an advertising agency to promote their products. With the prospect of commercial success within a virtual world, companies can reduce cost and time constraints by keeping this work "in-house". An obvious advantage is that it reduces the costs and restrictions that could come into play in the real world. Using virtual worlds gives companies the opportunity to gauge customer reaction and receive feedback. Feedback can be crucial to the development of a project, as it informs the creators exactly what users want. Using virtual worlds as a tool allows companies to test user reaction and get feedback on products. This can be crucial as it gives companies insight into what the market and customers want from new products, which can give them a competitive edge. A competitive edge is crucial in the ruthless world of today's business. Another business use of virtual worlds is the creation of a gathering place: many businesses are now involved in business-to-business commercial activity and will create a specific area within a virtual world to carry out their business. Within this space, all relevant information can be held.
This can be useful for a variety of reasons. Players can conduct business with companies on the other side of the world, so there are no geographical limitations, and it can increase company productivity. Knowing that there is an area where help is on hand can aid employees. Sun Microsystems has created an island in Second Life dedicated to the sole use of its employees. This is a place where people can go to seek help, exchange new ideas or advertise a new product. According to trade media company Virtual Worlds Management, commercial investments in the "virtual worlds" sector were in excess of US$425 million in Q4 2007, and totaled US$184 million in Q1 2008. However, the selection process for defining a "virtual worlds" company in this context has been challenged by one industry blog. E-commerce (legal) A number of virtual worlds have incorporated systems for the sale of goods through virtual interfaces and using virtual currencies. Transfers of in-world credits typically are not bound by laws governing commerce. Such transactions may lack the oversight and protections associated with real-world commerce, and there is potential for fraudulent transactions. One example is that of Ginko Financial, a banking system featured in Second Life where avatars could deposit their real-life currency, after converting it to Linden Dollars, for a profit. In July 2007, residents of Second Life crowded around the ATMs in an unsuccessful attempt to withdraw their money. After a few days the ATMs, along with the banks, disappeared altogether. Around $700,000 in real world money was reported missing from residents in Second Life. An investigation was launched, but nothing substantial ever came of finding or punishing the avatar known as Nicholas Portocarrero, who was the head of Ginko Financial. Civil and criminal laws exist in the real world and are put in place to govern people's behavior. Virtual worlds such as Eve Online and Second Life also have people and systems that govern them. Providers of online virtual spaces have more than one approach to the governing of their environments. Second Life, for instance, was designed with the expectation that residents would establish their own community rules for appropriate behaviour. On the other hand, some virtual worlds such as Habbo enforce clear rules for behaviour, as seen in their terms and conditions. In some instances virtual worlds don't need established rules of conduct because actions such as 'killing' another avatar are impossible. However, if needed, rule breakers can be punished with fines payable through their virtual bank account; alternatively, a player's suspension may be put into effect. Instances of real world theft from a virtual world do exist: Eve Online had an incident in which a bank controller stole around 200 billion credits and exchanged them for real world cash amounting to £3,115. The player in question has since been suspended, as trading in-game cash for real money is against Eve Online's terms and conditions. Entertainment There are many MMORPG virtual worlds on many platforms. Most notable are IMVU for Windows, PlayStation Home for PlayStation 3, and Second Life for Windows. However, many virtual worlds have shut down since launch. Notable shutdowns include The Sims Online, The Sims Bustin Out Online Weekend Mode, PlayStation Home, and Club Penguin. Single-player games Some single-player video games contain virtual worlds populated by non-player characters (NPCs).
Many of these allow players to save the current state of the world instance so that the virtual world can be stopped and restarted at a later date. (This can be done with some multiplayer environments as well.) The virtual worlds found in video games are often split into discrete levels. Single-player games such as Minecraft have semi-infinite procedurally generated worlds that allow players to optionally create their own world without other players, and then combine skills from the game to work together with other players and create bigger and more intricate environments. These environments can then be accessed by other players; if the server is available to other players, they may be able to modify parts of it, such as the structure of the environment. At one level, a more or less realistically rendered 3D space like the game world of Halo 3 or Grand Theft Auto V is just as much a big database as Microsoft's Encarta encyclopedia. Use in education Virtual worlds represent a powerful new medium for instruction and education that presents many opportunities but also some challenges. Persistence allows for continuing and growing social interactions, which themselves can serve as a basis for collaborative education. The use of virtual worlds can give teachers the opportunity to achieve a greater level of student participation. It allows users to carry out tasks that could be difficult in the real world due to constraints and restrictions such as cost, scheduling or location. Virtual worlds have the capability to adapt and grow to different user needs; for example, classroom teachers are able to use virtual worlds in their classroom, leveraging their interactive whiteboard with the open-source project Edusim. They can be a good source of user feedback, and the typical paper-based resources have limitations that virtual worlds can overcome. Multi-user virtual worlds with easy-to-use affordances for building are useful in project-based learning. For example, Active Worlds is used to support classroom teachers in Virginia Beach City Public Schools, the out-of-school NASA RealWorld-InWorld Engineering Design Challenge, and many after-school and in-school programs in EDUni-NY. Projects range from tightly scaffolded reflection spaces to open building based on student-centered designs. The New York museums AMNH and NYSci have used the medium to support STEM learning experiences for their program participants. Virtual worlds can also be used with virtual learning environments, as in the case of the Sloodle project, which aims to merge Second Life with Moodle. Virtual worlds allow users with specific needs and requirements to access and use the same learning materials from home as they would receive if they were physically present. Virtual worlds can help users stay up to date with relevant information and needs while also feeling involved. Having the option to attend a presentation via a virtual world from home or from the workplace can help the user to be more at ease and comfortable. Although virtual worlds are used as an alternative method of communicating and interacting with students and teachers, a sense of isolation can occur, such as the loss of certain body language cues and other more personal aspects that one would experience face to face. Some virtual worlds also offer an environment where simulation-based activities and games allow users to experiment with various phenomena and learn the underlying physics and principles.
An example is Whyville, launched in 1999, which targets kids and teenagers, offering them many opportunities to experiment, understand and learn. Topics covered in Whyville vary from physics to nutrition to ecology. Whyville also has a strong entrepreneurial structure based on user-created virtual content sold in the internal virtual economy. Some multi-user virtual worlds have come to be used for educational purposes and are thus called Multi-User Virtual Learning Environments (MUVLEs). Examples have included the use of Second Life for teaching English as a foreign language (EFL). Many specialist types of MUVLE have particular pedagogies associated with them. For instance, George Siemens and Stephen Downes continue to promote the use of a type of MUVLE that Dave Cormier termed a 'MOOC'. Even though MOOCs were once seen as the "next big thing" by universities and online education service providers such as Blackboard Inc, the rush toward them was in fact what has been called a "stampede." By early 2013, serious questions emerged about whether MOOCs were simply part of a hype cycle and, following that hype, whether academia was thus "MOOC'd out." Language Language learning is the most widespread type of education in virtual worlds. Business Online training overcomes constraints such as distance, infrastructure, accommodation costs and tight scheduling. Although video conferencing may be the most common tool, virtual worlds have been adopted by the business environment for training employees. For example, Second Life has been used in business schools. Virtual training content resembles traditional tutorials and testing of user knowledge. Despite the lack of face-to-face contact and impaired social linking, learning efficiency may not be adversely affected, as adults need autonomy in learning and are more self-directed than younger students. Some companies and public places allow free virtual access to their facilities as an alternative to a video or picture. In fiction Virtual worlds, virtual reality, and cyberspace are popular fictional motifs. The first was probably John M. Ford's 1980 novel Web of Angels, and a prominent early example is the work of William Gibson. Virtual worlds are integral to works such as Tron, Neuromancer, Ghost in the Shell, Snow Crash, The Lawnmower Man, Lawnmower Man 2, ReBoot, Digimon, The Matrix, MegaMan NT Warrior, Epic, Code Lyoko and Real Drive. In A. K. Dewdney's novel The Planiverse (1984), college students create a virtual world called 2DWorld, leading to contact with Arde, a two-dimensional parallel universe. The main focus of the Japanese cyberpunk, psychological, 13-episode anime titled Serial Experiments Lain (1998) is the Wired, a virtual reality world that governs the sum of all electronic communication and machines; outer receptors are used to mentally transport a person into the Wired itself as a uniquely different virtual avatar. Yasutaka Tsutsui's novel Gaspard in the Morning (1992) is the story of an individual immersed in the virtual world of a massively multiplayer online game. The plots of isekai works such as Moon: Remix RPG Adventure (1997), Digimon Adventure (1999), .hack (2002), Sword Art Online (2002), Summer Wars (2009), Accel World (2009), Ready Player One (2011), Jumanji (2017), Space Jam: A New Legacy (2021) and Belle (2021) also involve the virtual worlds of video games.
The fourth series of the New Zealand TV series The Tribe features the birth of Reality Space and the Virtual World that was created by Ram, the computer genius-wizard leader of The Technos. In 2009, BBC Radio 7 commissioned Planet B, set in a virtual world in which a man searches for his girlfriend, believed to be dead but in fact still alive within the world called "Planet B". The series was the station's biggest-ever commission for an original drama series. The plot of "San Junipero", series 3, episode 4 of the anthology TV series Black Mirror, revolves around a virtual world in which participants can choose time periods to visit. Living people may visit only five hours per week, while the dying can choose to permanently preserve their consciousness there. The upcoming South Korean sci-fi fantasy film Wonderland is about a simulated virtual place where, using artificial intelligence, people can reunite with someone they may never meet again. Future Virtual worlds may lead to a "mobility" of labor that may impact national and organizational competitiveness in a manner similar to the changes seen with the mobility of goods and then the mobility of labor. Virtual worlds may increasingly function as centers of commerce, trade, and business. Virtual asset trade is massive and growing; e.g., Second Life revenue reached approximately 7 million US dollars per month in 2011. Real world firms, such as Coca-Cola, have used virtual worlds to advertise their brand. See also Cyberspace Extended reality Metaverse Simulated reality Simulated reality in fiction Transreality gaming Virtual community Virtual globe Virtual reality Citations References Teixeira, Marcelo Mendonça; Ferreira, Tiago Alessandro Espinola (2014). The Communication Model of Virtual Universe. Munich: Grin Verlag. External links Journal of Gaming & Virtual Worlds Video game gameplay Virtual reality Persistent worlds Articles containing video clips Cyberpunk themes Virtual world communities
39626432
https://en.wikipedia.org/wiki/Edward%20Snowden
Edward Snowden
Edward Joseph Snowden (born June 21, 1983) is an American former computer intelligence consultant who leaked highly classified information from the National Security Agency (NSA) in 2013, when he was an employee and subcontractor. His disclosures revealed numerous global surveillance programs, many run by the NSA and the Five Eyes Intelligence Alliance with the cooperation of telecommunication companies and European governments, and prompted a cultural discussion about national security and individual privacy. In 2013, Snowden was hired by an NSA contractor, Booz Allen Hamilton, after previous employment with Dell and the CIA. Snowden says he gradually became disillusioned with the programs with which he was involved, and that he tried to raise his ethical concerns through internal channels but was ignored. On May 20, 2013, Snowden flew to Hong Kong after leaving his job at an NSA facility in Hawaii, and in early June he revealed thousands of classified NSA documents to journalists Glenn Greenwald, Laura Poitras, Barton Gellman, and Ewen MacAskill. Snowden came to international attention after stories based on the material appeared in The Guardian, The Washington Post, and other publications. On June 21, 2013, the United States Department of Justice unsealed charges against Snowden of two counts of violating the Espionage Act of 1917 and theft of government property, following which the Department of State revoked his passport. Two days later, he flew into Moscow's Sheremetyevo International Airport, where Russian authorities observed the canceled passport, and he was restricted to the airport terminal for over one month. Russia later granted Snowden the right of asylum with an initial visa for residence for one year, which was subsequently repeatedly extended. In October 2020, he was granted permanent residency in Russia. A subject of controversy, Snowden has been variously called a traitor, a hero, a whistleblower, a dissident, a coward, and a patriot. U.S. officials condemned his actions as having done "grave damage" to the U.S. intelligence capabilities. Snowden has defended his leaks as an effort "to inform the public as to that which is done in their name and that which is done against them." His disclosures have fueled debates over mass surveillance, government secrecy, and the balance between national security and information privacy. In early 2016, Snowden became the president of the Freedom of the Press Foundation, a San Francisco-based nonprofit organization that aims to protect journalists from hacking and government surveillance. He also has a job at an unnamed Russian IT company. In 2017, he married Lindsay Mills. On September 17, 2019, his memoir Permanent Record was published. On September 2, 2020, a U.S. federal court ruled in United States v. Moalin that the U.S. intelligence's mass surveillance program exposed by Snowden was illegal and possibly unconstitutional. Early life Childhood, family, and education Edward Joseph Snowden was born on June 21, 1983, in Elizabeth City, North Carolina. His maternal grandfather, Edward J. Barrett, a rear admiral in the U.S. Coast Guard, became a senior official with the FBI and was at the Pentagon in 2001 during the September 11 attacks. Snowden's father, Lonnie, was a warrant officer in the Coast Guard, and his mother, Elizabeth, is a clerk at the U.S. District Court for the District of Maryland. His older sister, Jessica, was a lawyer at the Federal Judicial Center in Washington, D.C. 
Edward Snowden said that he had expected to work for the federal government, as had the rest of his family. His parents divorced in 2001, and his father remarried. Snowden scored above 145 on two separate IQ tests. In the early 1990s, while still in grade school, Snowden moved with his family to the area of Fort Meade, Maryland. Mononucleosis caused him to miss high school for almost nine months. Rather than returning to school, he passed the GED test and took classes at Anne Arundel Community College. Although Snowden had no undergraduate college degree, he worked online toward a master's degree at the University of Liverpool, England, in 2011. He was interested in Japanese popular culture, had studied the Japanese language, and worked for an anime company that had a resident office in the U.S. He also said he had a basic understanding of Mandarin Chinese and was deeply interested in martial arts. At age 20, he listed Buddhism as his religion on a military recruitment form, noting that the choice of agnostic was "strangely absent." In September 2019, as part of interviews relating to the release of his memoir Permanent Record, Snowden revealed to The Guardian that he married Lindsay Mills in a courthouse in Moscow. The couple have a son born in December 2020. Political views Snowden has said that, in the 2008 presidential election, he voted for a third-party candidate, though he "believed in Obama's promises." Following the election, he believed President Barack Obama was continuing policies espoused by George W. Bush. In accounts published in June 2013, interviewers noted that Snowden's laptop displayed stickers supporting Internet freedom organizations including the Electronic Frontier Foundation (EFF) and the Tor Project. A week after publication of his leaks began, Ars Technica confirmed that Snowden had been an active participant at the site's online forum from 2001 through May 2012, discussing a variety of topics under the pseudonym "TheTrueHOOHA." In an online discussion about racism in 2009, Snowden said: ''I went to London just last year it's where all of your muslims live I didn't want to get out of the car. I thought I had gotten off of the plane in the wrong country... it was terrifying.'' In a January 2009 entry, TheTrueHOOHA exhibited strong support for the U.S. security state apparatus and said leakers of classified information "should be shot in the balls." However, Snowden disliked Obama's CIA director appointment of Leon Panetta, saying "Obama just named a fucking politician to run the CIA." Snowden was also offended by a possible ban on assault weapons, writing "Me and all my lunatic, gun-toting NRA compatriots would be on the steps of Congress before the C-Span feed finished." Snowden disliked Obama's economic policies, was against Social Security, and favored Ron Paul's call for a return to the gold standard. In 2014, Snowden supported a universal basic income. Career Feeling a duty to fight in the Iraq War to help free oppressed people, Snowden enlisted in the United States Army on May 7, 2004, and became a Special Forces candidate through its 18X enlistment option. He did not complete the training due to bilateral tibial stress fractures, and was discharged on September 28, 2004. Snowden was then employed for less than a year in 2005 as a security guard at the University of Maryland's Center for Advanced Study of Language, a research center sponsored by the National Security Agency (NSA). 
According to the University, this is not a classified facility, though it is heavily guarded. In June 2014, Snowden told Wired that his job as a security guard required a high-level security clearance, for which he passed a polygraph exam and underwent a stringent background investigation. Employment at CIA After attending a 2006 job-fair focused on intelligence agencies, Snowden accepted an offer for a position at the CIA. The Agency assigned him to the global communications division at CIA headquarters in Langley, Virginia. In May 2006, Snowden wrote in Ars Technica that he had no trouble getting work because he was a "computer wizard". After distinguishing himself as a junior employee on the top computer team, Snowden was sent to the CIA's secret school for technology specialists, where he lived in a hotel for six months while studying and training full-time. In March 2007, the CIA stationed Snowden with diplomatic cover in Geneva, Switzerland, where he was responsible for maintaining computer-network security. Assigned to the U.S. Permanent Mission to the United Nations, a diplomatic mission representing U.S. interests before the UN and other international organizations, Snowden received a diplomatic passport and a four-bedroom apartment near Lake Geneva. According to Greenwald, while there Snowden was "considered the top technical and cybersecurity expert" in that country and "was hand-picked by the CIA to support the president at the 2008 NATO summit in Romania". Snowden described his CIA experience in Geneva as formative, stating that the CIA deliberately got a Swiss banker drunk and encouraged him to drive home. Snowden said that when the latter was arrested for drunk driving, a CIA operative offered to help in exchange for the banker becoming an informant. Ueli Maurer, President of the Swiss Confederation for the year 2013, publicly disputed Snowden's claims in June of that year. "This would mean that the CIA successfully bribed the Geneva police and judiciary. With all due respect, I just can't imagine it," said Maurer. In February 2009, Snowden resigned from the CIA. NSA sub-contractee as an employee at Dell In 2009, Snowden began work as a contractee for Dell, which manages computer systems for multiple government agencies. Assigned to an NSA facility at Yokota Air Base near Tokyo, Snowden instructed top officials and military officers on how to defend their networks from Chinese hackers. Snowden looked into mass surveillance in China which prompted him to investigate and then expose Washington's mass surveillance program after he was asked in 2009 to brief a conference in Tokyo. During his four years with Dell, he rose from supervising NSA computer system upgrades to working as what his résumé termed a "cyber strategist" and an "expert in cyber counterintelligence" at several U.S. locations. In 2010, he had a brief stint in New Delhi where he enrolled himself in a local IT institute to learn core Java programming and advanced ethical hacking. In 2011, he returned to Maryland, where he spent a year as lead technologist on Dell's CIA account. In that capacity, he was consulted by the chiefs of the CIA's technical branches, including the agency's chief information officer and its chief technology officer. U.S. officials and other sources familiar with the investigation said Snowden began downloading documents describing the government's electronic spying programs while working for Dell in April 2012. 
Investigators estimated that of the 50,000 to 200,000 documents Snowden gave to Greenwald and Poitras, most were copied by Snowden while working at Dell. In March 2012, Dell reassigned Snowden to Hawaii as lead technologist for the NSA's information-sharing office. NSA sub-contractee as an employee at Booz Allen Hamilton On March 15, 2013, three days after what he later called his "breaking point" of "seeing the Director of National Intelligence, James Clapper, directly lie under oath to Congress", Snowden quit his job at Dell. Although he has said his career-high annual salary was $200,000, Snowden said he took a pay cut to work at consulting firm Booz Allen Hamilton, where he sought employment in order to gather data and then release details of the NSA's worldwide surveillance activity. At the time of his departure from the U.S. in May 2013, he had been employed for 15 months inside the NSA's Hawaii regional operations center, which focuses on the electronic monitoring of China and North Korea, first for Dell and then for two months with Booz Allen Hamilton. While intelligence officials have described his position there as a system administrator, Snowden has said he was an infrastructure analyst, which meant that his job was to look for new ways to break into Internet and telephone traffic around the world. An anonymous source told Reuters that, while in Hawaii, Snowden may have persuaded 20–25 co-workers to give him their login credentials by telling them he needed them to do his job. The NSA sent a memo to Congress saying that Snowden had tricked a fellow employee into sharing his personal private key to gain greater access to the NSA's computer system. Snowden disputed the memo, saying in January 2014, "I never stole any passwords, nor did I trick an army of co-workers." Booz Allen terminated Snowden's employment on June 10, 2013, the day after he went public with his story, and three weeks after he had left Hawaii on a leave of absence. A former NSA co-worker said that although the NSA was full of smart people, Snowden was a "genius among geniuses" who created a widely implemented backup system for the NSA and often pointed out security flaws to the agency. The former colleague said Snowden was given full administrator privileges with virtually unlimited access to NSA data. Snowden was offered a position on the NSA's elite team of hackers, Tailored Access Operations, but turned it down to join Booz Allen. An anonymous source later said that Booz Allen's hiring screeners found possible discrepancies in Snowden's résumé but still decided to hire him. Snowden's résumé stated that he attended computer-related classes at Johns Hopkins University. A spokeswoman for Johns Hopkins said that the university did not find records to show that Snowden attended the university, and suggested that he may instead have attended Advanced Career Technologies, a private for-profit organization that operated as the Computer Career Institute at Johns Hopkins University. The University of Maryland University College acknowledged that Snowden had attended a summer session at a UM campus in Asia. Snowden's résumé stated that he estimated he would receive a University of Liverpool computer security master's degree in 2013. The university said that Snowden registered for an online master's degree program in computer security in 2011 but was inactive as a student and had not completed the program. In his May 2014 interview with NBC News, Snowden accused the U.S.
government of trying to use one position here or there in his career to distract from the totality of his experience, downplaying him as a "low-level analyst." In his words, he was "trained as a spy in the traditional sense of the word in that I lived and worked undercover overseas—pretending to work in a job that I'm not—and even being assigned a name that was not mine." He said he'd worked for the NSA undercover overseas, and for the DIA had developed sources and methods to keep information and people secure "in the most hostile and dangerous environments around the world. So when they say I'm a low-level systems administrator, that I don't know what I'm talking about, I'd say it's somewhat misleading." In a June interview with Globo TV, Snowden reiterated that he "was actually functioning at a very senior level." In a July interview with The Guardian, Snowden explained that, during his NSA career, "I began to move from merely overseeing these systems to actively directing their use. Many people don't understand that I was actually an analyst and I designated individuals and groups for targeting." Snowden subsequently told Wired that while at Dell in 2011, "I would sit down with the CIO of the CIA, the CTO of the CIA, the chiefs of all the technical branches. They would tell me their hardest technology problems, and it was my job to come up with a way to fix them." During his time as an NSA analyst, directing the work of others, Snowden recalled a moment when he and his colleagues began to have severe ethical doubts. Snowden said 18- to 22-year-old analysts were suddenly "thrust into a position of extraordinary responsibility, where they now have access to all your private records. In the course of their daily work, they stumble across something that is completely unrelated in any sort of necessary sense—for example, an intimate nude photo of someone in a sexually compromising situation. But they're extremely attractive. So what do they do? They turn around in their chair and they show a co-worker ... and sooner or later this person's whole life has been seen by all of these other people." Snowden observed that this behavior happened routinely every two months but was never reported, being considered one of the "fringe benefits" of the work. Whistleblower status Snowden has described himself as a whistleblower, a description used by many sources, including CNBC, The New Yorker, Reuters, and The Guardian, among others. The term has both informal and legal meanings. Snowden said that he had told multiple employees and two supervisors about his concerns, but the NSA disputes his claim. Snowden elaborated in January 2014, saying "[I] made tremendous efforts to report these programs to co-workers, supervisors, and anyone with the proper clearance who would listen. The reactions of those I told about the scale of the constitutional violations ranged from deeply concerned to appalled, but no one was willing to risk their jobs, families, and possibly even freedom to go through what [Thomas Andrews] Drake did." In March 2014, during testimony to the European Parliament, Snowden wrote that before revealing classified information he had reported "clearly problematic programs" to ten officials, who he said did nothing in response. In a May 2014 interview, Snowden told NBC News that after bringing his concerns about the legality of the NSA spying programs to officials, he was told to stay silent on the matter.
He said that the NSA had copies of emails he sent to their Office of General Counsel, oversight, and compliance personnel broaching "concerns about the NSA's interpretations of its legal authorities. I had raised these complaints not just officially in writing through email, but to my supervisors, to my colleagues, in more than one office." In May 2014, U.S. officials released a single email that Snowden had written in April 2013 inquiring about legal authorities but said that they had found no other evidence that Snowden had expressed his concerns to someone in an oversight position. In June 2014, the NSA said it had not been able to find any records of Snowden raising internal complaints about the agency's operations. That same month, Snowden explained that he has not produced the communiqués in question because of the ongoing nature of the dispute, disclosing for the first time that "I am working with the NSA in regard to these records and we're going back and forth, so I don't want to reveal everything that will come out." Self-description as a whistleblower and attribution as such in news reports does not determine whether he qualifies as a whistleblower within the meaning of the Whistleblower Protection Act of 1989 (5 USC 2303(b)(8)-(9); Pub. Law 101-12). However, Snowden's potential status as a Whistleblower under the 1989 Act is not directly addressed in the criminal complaint against him in the United States District Court for the Eastern District of Virginia (see below) (Case No. 1:13 CR 265 (0MH)). These and similar and related issues are discussed in an essay by David Pozen, in a chapter of the book Whistleblowing Nation, published in March 2020, an adaptation of which also appeared on Lawfare Blog in March 2019. The unclassified portion of a September 15, 2016, report by the United States House Permanent Select Committee on Intelligence (HPSCI), initiated by the chairman and Ranking Member in August 2014, and posted on the website of the Federation of American Scientists, concluded that Snowden was not a whistleblower in the sense required by the Whistleblower Protection Act. The bulk of the report is classified. Global surveillance disclosures Size and scope of disclosures The exact size of Snowden's disclosure is unknown, but Australian officials have estimated 15,000 or more Australian intelligence files and British officials estimate at least 58,000 British intelligence files were included. NSA Director Keith Alexander initially estimated that Snowden had copied anywhere from 50,000 to 200,000 NSA documents. Later estimates provided by U.S. officials were in the order of 1.7 million, a number that originally came from Department of Defense talking points. In July 2014, The Washington Post reported on a cache previously provided by Snowden from domestic NSA operations consisting of "roughly 160,000 intercepted e-mail and instant-message conversations, some of them hundreds of pages long, and 7,900 documents taken from more than 11,000 online accounts." A U.S. Defense Intelligence Agency report declassified in June 2015 said that Snowden took 900,000 Department of Defense files, more than he downloaded from the NSA. Potential impact on U.S. national security In March 2014, Army General Martin Dempsey, Chairman of the Joint Chiefs of Staff, told the House Armed Services Committee, "The vast majority of the documents that Snowden ... exfiltrated from our highest levels of security ... had nothing to do with exposing government oversight of domestic activities. 
The vast majority of those were related to our military capabilities, operations, tactics, techniques, and procedures." When asked in a May 2014 interview to quantify the number of documents Snowden stole, retired NSA director Keith Alexander said there was no accurate way of counting what he took, but Snowden may have downloaded more than a million documents. The September 15, 2016, HPSCI report estimated the number of downloaded documents at 1.5 million. In a 2013 Associated Press interview, Glenn Greenwald stated: "In order to take documents with him that proved that what he was saying was true he had to take ones that included very sensitive, detailed blueprints of how the NSA does what they do." Thus the Snowden documents allegedly contained sensitive NSA blueprints detailing how the NSA operates, which would allow someone who read them to evade or even duplicate NSA surveillance. Further, a July 20, 2015, New York Times article reported that the terror group Islamic State (ISIS or ISIL) had studied revelations from Snowden about how the United States gathered information on militants; the main result was that the group's top leaders used couriers or encrypted channels to prevent their communications from being tracked or monitored by Western analysts. According to Snowden, he did not indiscriminately turn over documents to journalists, stating that "I carefully evaluated every single document I disclosed to ensure that each was legitimately in the public interest. There are all sorts of documents that would have made a big impact that I didn't turn over" and that "I have to screen everything before releasing it to journalists ... If I have time to go through this information, I would like to make it available to journalists in each country." Despite these measures, the improper redaction of a document by The New York Times resulted in the exposure of intelligence activity against al-Qaeda. In June 2014, the NSA's recently installed director, U.S. Navy Admiral Michael S. Rogers, said that while some terrorist groups had altered their communications to avoid surveillance techniques revealed by Snowden, the damage done was not significant enough to conclude that "the sky is falling." Nevertheless, in February 2015, Rogers said that Snowden's disclosures had a material impact on the NSA's detection and evaluation of terrorist activities worldwide. On June 14, 2015, the London Sunday Times reported that Russian and Chinese intelligence services had decrypted more than 1 million classified files in the Snowden cache, forcing the UK's MI6 intelligence agency to move agents out of live operations in hostile countries. Sir David Omand, a former director of the UK's GCHQ intelligence-gathering agency, described it as a huge strategic setback that was harming Britain, America, and their NATO allies. The Sunday Times said it was not clear whether Russia and China stole Snowden's data or whether Snowden voluntarily handed it over to remain at liberty in Hong Kong and Moscow. In April 2015, the Henry Jackson Society, a British neoconservative think tank, published a report claiming that Snowden's intelligence leaks negatively impacted Britain's ability to fight terrorism and organized crime. Gus Hosein, executive director of Privacy International, criticized the report for, in his opinion, presuming that the public became concerned about privacy only after Snowden's disclosures.
Release of NSA documents Snowden's decision to leak NSA documents developed gradually following his March 2007 posting as a technician to the Geneva CIA station. Snowden later made contact with Glenn Greenwald, a journalist working at The Guardian. He contacted Greenwald anonymously as "Cincinnatus" and said he had sensitive documents that he would like to share. Greenwald found the measures that the source asked him to take to secure their communications, such as encrypting email, too annoying to employ. Snowden then contacted documentary filmmaker Laura Poitras in January 2013. According to Poitras, Snowden chose to contact her after seeing her New York Times article about NSA whistleblower William Binney. What originally attracted Snowden to both Greenwald and Poitras was a Salon article written by Greenwald detailing how Poitras's controversial films had made her a target of the government. Greenwald began working with Snowden in either February or April 2013, after Poitras asked Greenwald to meet her in New York City, at which point Snowden began providing documents to them. Barton Gellman, writing for The Washington Post, says his first direct contact was on May 16, 2013. According to Gellman, Snowden approached Greenwald after the Post declined to guarantee publication within 72 hours of all 41 PowerPoint slides that Snowden had leaked exposing the PRISM electronic data mining program, and to publish online an encrypted code allowing Snowden to later prove that he was the source. Snowden communicated using encrypted email, and going by the codename "Verax". He asked not to be quoted at length for fear of identification by stylometry. According to Gellman, before their first meeting in person, Snowden wrote, "I understand that I will be made to suffer for my actions and that the return of this information to the public marks my end." Snowden also told Gellman that until the articles were published, the journalists working with him would also be at mortal risk from the United States Intelligence Community "if they think you are the single point of failure that could stop this disclosure and make them the sole owner of this information." In May 2013, Snowden was permitted temporary leave from his position at the NSA in Hawaii, on the pretext of receiving treatment for his epilepsy. In mid-May, Snowden gave an electronic interview to Poitras and Jacob Appelbaum which was published weeks later by Der Spiegel. After disclosing the copied documents, Snowden promised that nothing would stop subsequent disclosures. In June 2013, he said, "All I can say right now is the US government is not going to be able to cover this up by jailing or murdering me. Truth is coming, and it cannot be stopped." Publication On May 20, 2013, Snowden flew to Hong Kong, where he was staying when the initial articles based on the leaked documents were published, beginning with The Guardian on June 5. Greenwald later said Snowden disclosed 9,000 to 10,000 documents. Within months, documents had been obtained and published by media outlets worldwide, most notably The Guardian (Britain), Der Spiegel (Germany), The Washington Post and The New York Times (U.S.), O Globo (Brazil), Le Monde (France), and similar outlets in Sweden, Canada, Italy, Netherlands, Norway, Spain, and Australia. In 2014, NBC broke its first story based on the leaked documents. 
In February 2014, for reporting based on Snowden's leaks, journalists Glenn Greenwald, Laura Poitras, Barton Gellman and The Guardian's Ewen MacAskill were honored as co-recipients of the 2013 George Polk Award, which they dedicated to Snowden. The NSA reporting by these journalists also earned The Guardian and The Washington Post the 2014 Pulitzer Prize for Public Service for exposing the "widespread surveillance" and for helping to spark a "huge public debate about the extent of the government's spying". The Guardian's chief editor, Alan Rusbridger, credited Snowden with having performed a public service. Revelations The ongoing publication of leaked documents has revealed previously unknown details of a global surveillance apparatus run by the United States' NSA in close cooperation with three of its four Five Eyes partners: Australia's ASD, the UK's GCHQ, and Canada's CSEC. On June 5, 2013, media reports documenting the existence and functions of classified surveillance programs and their scope began and continued throughout the entire year. The first program to be revealed was PRISM, which allows for court-approved direct access to Americans' Google and Yahoo accounts, reported by both The Washington Post and The Guardian in articles published one hour apart. Barton Gellman of The Washington Post was the first journalist to report on Snowden's documents. He said the U.S. government urged him not to specify by name which companies were involved, but Gellman decided that to name them "would make it real to Americans." Reports also revealed details of Tempora, a British black-ops surveillance program run by the NSA's British partner, GCHQ. The initial reports included details about the NSA call database, Boundless Informant, and a secret court order requiring Verizon to hand the NSA millions of Americans' phone records daily, the surveillance of French citizens' phone and Internet records, and those of "high-profile individuals from the world of business or politics." XKeyscore, an analytical tool that allows for collection of "almost anything done on the internet," was described by The Guardian as a program that shed light on one of Snowden's most controversial statements: "I, sitting at my desk [could] wiretap anyone, from you or your accountant, to a federal judge or even the president, if I had a personal email." The NSA's top-secret black budget, obtained from Snowden by The Washington Post, exposed the successes and failures of the 16 spy agencies comprising the U.S. intelligence community, and revealed that the NSA was paying U.S. private tech companies for clandestine access to their communications networks. The agencies were allotted $52 billion for the 2013 fiscal year. It was revealed that the NSA was harvesting millions of email and instant messaging contact lists, searching email content, tracking and mapping the location of cell phones, undermining attempts at encryption via Bullrun, and that the agency was using cookies to piggyback on the same tools used by Internet advertisers "to pinpoint targets for government hacking and to bolster surveillance." The NSA was shown to be secretly accessing Yahoo and Google data centers to collect information from hundreds of millions of account holders worldwide by tapping undersea cables using the MUSCULAR surveillance program. The NSA, the CIA and GCHQ spied on users of Second Life, Xbox Live and World of Warcraft, and attempted to recruit would-be informants from the sites, according to documents revealed in December 2013.
Leaked documents showed NSA agents also spied on their own "love interests," a practice NSA employees termed LOVEINT. The NSA was shown to be tracking the online sexual activity of people it termed "radicalizers" in order to discredit them. Following the revelation of Black Pearl, a program targeting private networks, the NSA was accused of extending beyond its primary mission of national security. The agency's intelligence-gathering operations had targeted, among others, oil giant Petrobras, Brazil's largest company. The NSA and the GCHQ were also shown to be surveilling charities including UNICEF and Médecins du Monde, as well as allies such as European Commissioner Joaquín Almunia and Israeli Prime Minister Benjamin Netanyahu. In October 2013, Glenn Greenwald said "the most shocking and significant stories are the ones we are still working on, and have yet to publish." In November, The Guardian's editor-in-chief Alan Rusbridger said that only one percent of the documents had been published. In December, Australia's Minister for Defence David Johnston said his government assumed the worst was yet to come. By October 2013, Snowden's disclosures had created tensions between the U.S. and some of its close allies after they revealed that the U.S. had spied on Brazil, France, Mexico, Britain, China, Germany, and Spain, as well as 35 world leaders, most notably German Chancellor Angela Merkel, who said "spying among friends" was unacceptable and compared the NSA with the Stasi. Leaked documents published by Der Spiegel in 2014 appeared to show that the NSA had targeted 122 high-ranking leaders. An NSA mission statement titled "SIGINT Strategy 2012-2016" affirmed that the NSA had plans for the continued expansion of surveillance activities. Their stated goal was to "dramatically increase mastery of the global network" and to acquire adversaries' data from "anyone, anytime, anywhere." Leaked slides revealed in Greenwald's book No Place to Hide, released in May 2014, showed that the NSA's stated objective was to "Collect it All," "Process it All," "Exploit it All," "Partner it All," "Sniff it All" and "Know it All." Snowden said in a January 2014 interview with German television that the NSA does not limit its data collection to national security issues, accusing the agency of conducting industrial espionage. Using the example of German company Siemens, he said, "If there's information at Siemens that's beneficial to US national interests—even if it doesn't have anything to do with national security—then they'll take that information nevertheless." In the wake of Snowden's revelations and in response to an inquiry from the Left Party, Germany's domestic security agency Bundesamt für Verfassungsschutz (BfV) investigated and found no concrete evidence that the U.S. conducted economic or industrial espionage in Germany. In February 2014, during testimony to the European Union, Snowden said of the remaining undisclosed programs, "I will leave the public interest determinations as to which of these may be safely disclosed to responsible journalists in coordination with government stakeholders." In March 2014, documents disclosed by Glenn Greenwald writing for The Intercept showed that the NSA, in cooperation with the GCHQ, had plans to infect millions of computers with malware using a program called TURBINE. Revelations included information about QUANTUMHAND, a program through which the NSA set up a fake Facebook server to intercept connections.
According to a report in The Washington Post in July 2014, relying on information furnished by Snowden, 90% of those placed under surveillance in the U.S. are ordinary Americans and are not the intended targets. The newspaper said it had examined documents including emails, message texts, and online accounts, that support the claim. In an August 2014 interview, Snowden for the first time disclosed a cyberwarfare program in the works, codenamed MonsterMind, that would automate the detection of a foreign cyberattack as it began and automatically fire back. "These attacks can be spoofed," said Snowden. "You could have someone sitting in China, for example, making it appear that one of these attacks is originating in Russia. And then we end up shooting back at a Russian hospital. What happens next?" Motivations Snowden first contemplated leaking confidential documents around 2008 but held back, partly because he believed the newly elected Barack Obama might introduce reforms. After the disclosures, his identity was made public by The Guardian at his request on June 9, 2013. "I do not want to live in a world where everything I do and say is recorded," he said. "My sole motive is to inform the public as to that which is done in their name and that which is done against them." Snowden said he wanted to "embolden others to step forward" by demonstrating that "they can win." He also said that the system for reporting problems did not work. "You have to report wrongdoing to those most responsible for it." He cited a lack of whistleblower protection for government contractors, the use of the Espionage Act of 1917 to prosecute leakers and the belief that had he used internal mechanisms to "sound the alarm," his revelations "would have been buried forever." In December 2013, upon learning that a U.S. federal judge had ruled the collection of U.S. phone metadata conducted by the NSA as likely unconstitutional, Snowden said, "I acted on my belief that the NSA's mass surveillance programs would not withstand a constitutional challenge, and that the American public deserved a chance to see these issues determined by open courts ... today, a secret program authorized by a secret court was, when exposed to the light of day, found to violate Americans' rights." In January 2014, Snowden said his "breaking point" was "seeing the Director of National Intelligence, James Clapper, directly lie under oath to Congress." This referred to testimony on March 12, 2013—three months after Snowden first sought to share thousands of NSA documents with Greenwald, and nine months after the NSA says Snowden made his first illegal downloads during the summer of 2012—in which Clapper denied to the U.S. Senate Select Committee on Intelligence that the NSA wittingly collects data on millions of Americans. Snowden said, "There's no saving an intelligence community that believes it can lie to the public and the legislators who need to be able to trust it and regulate its actions. Seeing that really meant for me there was no going back. Beyond that, it was the creeping realization that no one else was going to do this. The public had a right to know about these programs." In March 2014, Snowden said he had reported policy or legal issues related to spying programs to more than ten officials, but as a contractor had no legal avenue to pursue further whistleblowing. 
Flight from the United States Hong Kong In May 2013, Snowden quit his job, telling his supervisors he required epilepsy treatment, but instead fled the United States for Hong Kong on May 10. Snowden told Guardian reporters in June that he had been in his room at the Mira Hotel since his arrival in the city, rarely going out. On June 10, correspondent Ewen MacAskill said Snowden had left his hotel only briefly three times since May 20. Snowden vowed to challenge any extradition attempt by the U.S. government, and engaged Hong Kong-based Canadian human rights lawyer Robert Tibbo as a legal adviser. Snowden told the South China Morning Post that he planned to remain in Hong Kong for as long as its government would permit. Snowden also told the Post that "the United States government has committed a tremendous number of crimes against Hong Kong [and] the PRC as well," going on to identify Chinese Internet Protocol addresses that the NSA monitored and stating that the NSA collected text-message data for Hong Kong residents. Glenn Greenwald said Snowden was motivated by a need to "ingratiate himself to the people of Hong Kong and China." After leaving the Mira Hotel, Snowden was housed for two weeks in several apartments by other refugees seeking asylum in Hong Kong, an arrangement set up by Tibbo to hide him from US authorities. The Russian newspaper Kommersant nevertheless reported that Snowden was living at the Russian consulate shortly before his departure from Hong Kong to Moscow. Ben Wizner, a lawyer with the American Civil Liberties Union (ACLU) and legal adviser to Snowden, said in January 2014, "Every news organization in the world has been trying to confirm that story. They haven't been able to, because it's false." Likewise rejecting the Kommersant story was Anatoly Kucherena, who became Snowden's lawyer in July 2013 when Snowden asked him for help in seeking temporary asylum in Russia. Kucherena said Snowden did not communicate with Russian diplomats while he was in Hong Kong. In early September 2013, however, Russian president Vladimir Putin said that, a few days before boarding a plane to Moscow, Snowden met in Hong Kong with Russian diplomatic representatives. On June 22, 18 days after the publication of Snowden's NSA documents began, U.S. officials revoked his passport. On June 23, Snowden boarded the commercial Aeroflot flight SU213 to Moscow, accompanied by Sarah Harrison of WikiLeaks. Hong Kong authorities said that Snowden had not been detained on behalf of the U.S. because the U.S. request had not fully complied with Hong Kong law, and there was no legal basis to prevent Snowden from leaving. On June 24, a U.S. State Department spokesman rejected the explanation of technical noncompliance, accusing the Hong Kong government of deliberately releasing a fugitive despite a valid arrest warrant and after having sufficient time to prohibit his travel. That same day, Julian Assange said that WikiLeaks had paid for Snowden's lodging in Hong Kong and his flight out. Assange had asked Fidel Narváez, consul at the Ecuadorian embassy in London, to sign an emergency travel document for Snowden. Snowden said that having the document gave him "the confidence, the courage to get on that plane to begin the journey". In October 2013, Snowden said that before flying to Moscow, he gave all the classified documents he had obtained to journalists he met in Hong Kong and kept no copies for himself. 
In January 2014, he told a German TV interviewer that he gave all of his information to American journalists reporting on American issues. During his first American TV interview, in May 2014, Snowden said he had protected himself from Russian leverage by destroying the material he had been holding before landing in Moscow. In January 2019, Vanessa Rodel, one of the refugees who had housed Snowden in Hong Kong, and her 7-year-old daughter were granted asylum by Canada. In 2021, Supun Thilina Kellapatha, Nadeeka Dilrukshi Nonis and their children found refuge in Canada, leaving only one of Snowden's Hong Kong helpers waiting for asylum. Russia On June 23, 2013, Snowden landed at Moscow's Sheremetyevo Airport. WikiLeaks said he was on a circuitous but safe route to asylum in Ecuador. Snowden had a seat reserved to continue to Cuba but did not board that onward flight, saying in a January 2014 interview that he intended to transit through Russia but was stopped en route. He said "a planeload of reporters documented the seat I was supposed to be in" when he was ticketed for Havana, but the U.S. canceled his passport. He said the U.S. wanted him to stay in Moscow so "they could say, 'He's a Russian spy.'" Greenwald's account differed on the point of Snowden being already ticketed. According to Greenwald, Snowden's passport was valid when he departed Hong Kong but was revoked during the hours he was in transit to Moscow, preventing him from obtaining a ticket to leave Russia. Greenwald said Snowden was thus forced to stay in Moscow and seek asylum. According to one Russian report, Snowden planned to fly from Moscow through Havana to Latin America; however, Cuba told Moscow it would not allow the Aeroflot plane carrying Snowden to land. The Russian newspaper Kommersant reported that Cuba had a change of heart after receiving pressure from U.S. officials: at the last minute, Havana told officials in Moscow not to allow Snowden onto the flight, leaving him stuck in the transit zone. The Washington Post contrasted this version with what it called "widespread speculation" that Russia never intended to let Snowden proceed. Fidel Castro called claims that Cuba would have blocked Snowden's entry a "lie" and a "libel." Describing Snowden's arrival in Moscow as a surprise and likening it to "an unwanted Christmas gift," Russian president Putin said that Snowden remained in the transit area of Sheremetyevo Airport, had committed no crime in Russia, was free to leave and should do so. Following Snowden's arrival in Moscow, the White House expressed disappointment in Hong Kong's decision to allow him to leave. An anonymous U.S. official not authorized to discuss the matter told the Associated Press Snowden's passport had been revoked before he left Hong Kong, but that a senior official in a country or airline could order subordinates to overlook the withdrawn passport. U.S. Secretary of State John Kerry said that Snowden's passport was canceled "within two hours" of the charges against Snowden being made public, on Friday, June 21. In a July 1 statement, Snowden said, "Although I am convicted of nothing, [the U.S. government] has unilaterally revoked my passport, leaving me a stateless person. Without any judicial order, the administration now seeks to stop me exercising a basic right. A right that belongs to everybody. The right to seek asylum." Four countries offered Snowden permanent asylum: Ecuador, Nicaragua, Bolivia, and Venezuela. 
No direct flights between Moscow and Venezuela, Bolivia, or Nicaragua existed, however, and the U.S. pressured countries along his route to hand him over. Snowden said in July 2013 that he decided to bid for asylum in Russia because he felt there was no safe way to reach Latin America. Snowden said he remained in Russia because "when we were talking about possibilities for asylum in Latin America, the United States forced down the Bolivian president's plane", citing the Morales plane incident. According to Snowden, "the CIA has a very powerful presence [in Latin America] and the governments and the security services there are relatively much less capable than, say, Russia.... they could have basically snatched me...." On the issue, he said "some governments in Western European and North American states have demonstrated a willingness to act outside the law, and this behavior persists today. This unlawful threat makes it impossible for me to travel to Latin America and enjoy the asylum granted there in accordance with our shared rights." Snowden said that he would travel from Russia if there was no interference from the U.S. government. Four months after Snowden received asylum in Russia, Julian Assange commented: "While Venezuela and Ecuador could protect him in the short term, over the long term there could be a change in government. In Russia, he's safe, he's well-regarded, and that is not likely to change. That was my advice to Snowden, that he would be physically safest in Russia." In an October 2014 interview with The Nation magazine, Snowden reiterated that he had originally intended to travel to Latin America: "A lot of people are still unaware that I never intended to end up in Russia." According to Snowden, the U.S. government "waited until I departed Hong Kong to cancel my passport in order to trap me in Russia." Snowden added, "If they really wanted to capture me, they would've allowed me to travel to Latin America because the CIA can operate with impunity down there. They did not want that; they chose to keep me in Russia." Morales plane incident On July 1, 2013, president Evo Morales of Bolivia, who had been attending a conference in Russia, suggested during an interview with RT (formerly Russia Today) that he would consider a request by Snowden for asylum. The following day, Morales's plane, en route to Bolivia, was rerouted to Austria and landed there, after France, Spain, and Italy denied access to their airspace. While the plane was parked in Vienna, the Spanish ambassador to Austria arrived with two embassy personnel and asked to search the plane but they were denied permission by Morales himself. U.S. officials had raised suspicions that Snowden may have been on board. Morales blamed the U.S. for putting pressure on European countries and said that the grounding of his plane was a violation of international law. In April 2015, Bolivia's ambassador to Russia, María Luisa Ramos Urzagaste, accused Julian Assange of inadvertently putting Morales's life at risk by intentionally providing to the U.S. false rumors that Snowden was on Morales's plane. Assange responded that "we weren't expecting this outcome. The result was caused by the United States' intervention. We can only regret what happened." Asylum applications Snowden applied for political asylum to 21 countries. A statement attributed to him contended that the U.S. administration, and specifically then–Vice President Joe Biden, had pressured the governments to refuse his asylum petitions. 
Biden had telephoned President Rafael Correa days prior to Snowden's remarks, asking the Ecuadorian leader not to grant Snowden asylum. Ecuador had initially offered Snowden a temporary travel document but later withdrew it, and Correa later called the offer a mistake. In a July 1, 2013 statement published by WikiLeaks, Snowden accused the U.S. government of "using citizenship as a weapon" and using what he described as "old, bad tools of political aggression." Citing Obama's promise to not allow "wheeling and dealing" over the case, Snowden commented, "This kind of deception from a world leader is not justice, and neither is the extralegal penalty of exile." Several days later, WikiLeaks announced that Snowden had applied for asylum in six additional countries, but declined to name them, alleging attempted U.S. interference. After evaluating the law and Snowden's situation, the French interior ministry rejected his request for asylum. Poland refused to process his application because it did not conform to legal procedure. Brazil's Foreign Ministry said the government planned no response to Snowden's asylum request. Germany and India rejected Snowden's application outright, while Austria, Ecuador, Finland, Norway, Italy, the Netherlands, and Spain said he must be on their territory to apply. In November 2014, Germany announced that Snowden had not renewed his previously denied request and was not being considered for asylum. Glenn Greenwald later reported that Sigmar Gabriel, Vice-Chancellor of Germany, told him the U.S. government had threatened to stop sharing intelligence if Germany offered Snowden asylum or arranged for his travel there. Putin said on July 1, 2013, that if Snowden wanted to be granted asylum in Russia, he would be required to "stop his work aimed at harming our American partners." A spokesman for Putin subsequently said that Snowden had withdrawn his asylum application upon learning of the conditions. In a July 12 meeting at Sheremetyevo Airport with representatives of human rights organizations and lawyers, organized in part by the Russian government, Snowden said he was accepting all offers of asylum that he had already received or would receive. He added that Venezuela's grant of asylum formalized his asylee status, removing any basis for state interference with his right to asylum. He also said he would request asylum in Russia until he resolved his travel problems. Slovenian correspondent Polonca Frelih, the only journalist present at the July 12 meeting with Snowden, reported that he "looked like someone without daylight for long time but strong enough psychologically" while expressing worries about his medical condition. Russian Federal Migration Service officials confirmed on July 16 that Snowden had submitted an application for temporary asylum. On July 24, Kucherena said his client wanted to find work in Russia, travel and create a life for himself, and had already begun learning Russian. Amid media reports in early July 2013 attributed to U.S. administration sources that Obama's one-on-one meeting with Putin, ahead of a G20 meeting in St Petersburg scheduled for September, was in doubt due to Snowden's protracted sojourn in Russia, top U.S. officials repeatedly made it clear to Moscow that Snowden should immediately be returned to the United States to face charges for the unauthorized leaking of classified information. His Russian lawyer said Snowden needed asylum because he faced persecution by the U.S. 
government and feared "that he could be subjected to torture and capital punishment." Snowden married Lindsay Mills in 2017. On April 16, 2020, CNN reported that Edward Snowden had requested a three-year extension of his Russian residency permit. Eric Holder letter to Russian Justice Minister In a letter to Russian Minister of Justice Aleksandr Konovalov dated July 23, 2013, U.S. Attorney General Eric Holder repudiated Snowden's claim to refugee status and offered a limited validity passport good for direct return to the U.S. He stated that Snowden would not be subject to torture or the death penalty, and would receive a trial in a civilian court with proper legal counsel. The same day, the Russian president's spokesman reiterated that his government would not hand over Snowden, commenting that Putin was not personally involved in the matter and that it was being handled through talks between the FBI and Russia's FSB. Criminal charges On June 14, 2013, United States federal prosecutors filed a criminal complaint against Snowden, charging him with three felonies: theft of government property and two counts of violating the Espionage Act of 1917 (18 U.S.C. § 792 et seq.; Pub. L. 65-24) through unauthorized communication of national defense information and willful communication of classified communications intelligence information to an unauthorized person. Specifically, the charges filed in the criminal complaint were: 18 U.S.C. § 641 (theft of government property); 18 U.S.C. § 793(d) (unauthorized communication of national defense information); and 18 U.S.C. § 798(a)(3) (willful communication of classified intelligence information to an unauthorized person). Each of the three charges carries a maximum possible prison term of ten years. The criminal complaint was initially secret but was unsealed a week later. Analysis of Criminal Complaint Stephen P. Mulligan and Jennifer K. Elsea, legislative attorneys for the Congressional Research Service, provided a 2017 analysis of the uses of the Espionage Act to prosecute unauthorized disclosures of classified information, based on what was disclosed, to whom, and how; the burden-of-proof requirements, e.g. degrees of mens rea (guilty mind), and the relationship of such considerations to the First Amendment's protections of free speech are also analyzed. The analysis includes the charges against Snowden, among several other cases. The discussion also covers gaps in the legal framework used to prosecute such cases. Snowden response to Criminal Complaint Snowden was asked in a January 2014 interview about returning to the U.S. to face the charges in court, as Obama had suggested a few days prior. Snowden explained why he rejected the request: What he doesn't say are that the crimes that he's charged me with are crimes that don't allow me to make my case. They don't allow me to defend myself in an open court to the public and convince a jury that what I did was to their benefit. ... So it's, I would say, illustrative that the president would choose to say someone should face the music when he knows the music is a show trial. Snowden's legal representative, Jesselyn Radack, wrote that "the Espionage Act effectively hinders a person from defending himself before a jury in an open court." She said that the "arcane World War I law" was never meant to prosecute whistleblowers, but rather spies who betrayed their trust by selling secrets to enemies for profit; betrayals without a profit motive were not contemplated. 
Civil lawsuit On September 17, 2019, the United States filed a lawsuit, Civil Action No. 1:19-cv-1197-LO-TCB, against Snowden for alleged violations of non-disclosure agreements with the CIA and NSA. The two-count civil complaint alleged that Snowden had violated prepublication obligations related to the publication of his memoir Permanent Record. The complaint listed the publishers Macmillan Publishing Group, LLC (d.b.a. Henry Holt and Company) and Holtzbrinck as relief defendants. The Hon. Liam O'Grady, a judge in the Alexandria Division of the United States District Court for the Eastern District of Virginia, found for the United States (the plaintiff) by summary judgment on both counts of the action. The judgment also found that Snowden had been paid speaking honoraria totaling $1.03 million for a series of 56 speeches delivered by video link. Asylum in Russia On June 23, 2013, Snowden landed at Moscow's Sheremetyevo Airport aboard a commercial Aeroflot flight from Hong Kong. After 39 days in the transit section, he left the airport on August 1 and was granted temporary asylum in Russia for one year by the Federal Migration Service. Snowden had the choice of applying for a 12-month renewal of his temporary refugee status or requesting a three-year permit for temporary stay. A year later, his temporary refugee status having expired, Snowden received a three-year temporary residency permit allowing him to travel freely within Russia and to go abroad for up to three months. He was not granted permanent political asylum. In 2017, his temporary residency permit was extended for another three years. In December 2013, Snowden told journalist Barton Gellman that supporters in Silicon Valley had donated enough bitcoins for him to live on "until the fucking sun dies." (A single bitcoin was then worth about $1,000.) In 2017, Snowden secretly married Lindsay Mills. By 2019, he no longer felt the need to be disguised in public and lived what was described by The Guardian as a "more or less normal life." He was able to travel around Russia and make a living from speaking engagements, locally and over the internet. His memoir Permanent Record was released internationally on September 17, 2019, and while U.S. royalties were expected to be seized, he was able to receive an advance of $4.2 million. The memoir reached the top position on Amazon's bestseller list that day. Snowden said his work for the NSA and CIA showed him that the United States Intelligence Community (IC) had "hacked the Constitution", and that he had concluded there was no option for him but to expose his revelations via the press. In the memoir he wrote, "I realized that I was crazy to have imagined that the Supreme Court, or Congress, or President Obama, seeking to distance his administration from President George W. Bush's, would ever hold the IC legally responsible – for anything". Of Russia he said, "One of the things that is lost in all the problematic politics of the Russian government is the fact this is one of the most beautiful countries in the world" with "friendly" and "warm" people. On November 1, 2019, new amendments to Russian law took effect introducing a permanent residence permit for the first time and removing the requirement to renew the pre-2019 so-called "permanent" residence permit every five years. The new permanent residence permit must be replaced three times in a lifetime, like an ordinary internal passport for Russian citizens. 
In accordance with that law, Snowden was in October 2020 granted permanent residence in Russia instead of another extension. In April 2020, an amendment to Russian nationality law allowing foreigners to obtain Russian citizenship without renouncing a foreign citizenship came into force. In November 2020, Snowden announced that he and his wife, Lindsay, who was expecting their son in late December, were applying for dual U.S.-Russian citizenship in order not to be separated from him "in this era of pandemics and closed borders." Reaction United States Barack Obama In response to outrage by European leaders, President Barack Obama said in early July 2013 that all nations collect intelligence, including those expressing outrage. His remarks came in response to an article in the German magazine Der Spiegel. In 2014, Obama stated, "our nation's defense depends in part on the fidelity of those entrusted with our nation's secrets. If any individual who objects to government policy can take it into their own hands to publicly disclose classified information, then we will not be able to keep our people safe, or conduct foreign policy." He objected to the "sensational" way the leaks were reported, saying the reporting often "shed more heat than light." He said that the disclosures had revealed "methods to our adversaries that could impact our operations." During a November 2016 interview with the German broadcaster ARD and the German paper Der Spiegel, then-outgoing President Obama said he "can't" pardon Edward Snowden unless Snowden physically submitted himself to US authorities on US soil. Donald Trump In 2013, Donald Trump made a series of tweets in which he referred to Snowden as a "traitor", saying he gave "serious information to China and Russia" and "should be executed". Later that year he added a caveat, tweeting "if it and he could reveal Obama's [birth] records, I might become a major fan". In August 2020, Trump said during a press conference that he would "take a look" at pardoning Snowden, and added that he was "not that aware of the Snowden situation". He stated, "There are many, many people – it seems to be a split decision that many people think that he should be somehow treated differently, and other people think he did very bad things, and I'm going to take a very good look at it." Forbes described Trump's willingness to consider a pardon as "leagues away" from his 2013 views. Snowden responded to the announcement saying, "the last time we heard a White House considering a pardon was 2016, when the very same Attorney General who once charged me conceded that, on balance, my work in exposing the NSA's unconstitutional system of mass surveillance had been 'a public service'." Top members of the House Armed Services Committee immediately voiced strong opposition to a pardon, saying Snowden's actions resulted in "tremendous harm" to national security, and that he needed to stand trial. Liz Cheney called the idea of a pardon "unconscionable". A week prior to the announcement, Trump also said he had been thinking of letting Snowden return to the U.S. without facing any time in jail. Days later, Attorney General William Barr told the AP he was "vehemently opposed" to the idea of a pardon, saying "[Snowden] was a traitor and the information he provided our adversaries greatly hurt the safety of the American people, he was peddling it around like a commercial merchant. We can't tolerate that." 
Public figures Pentagon Papers leaker Daniel Ellsberg called Snowden's release of NSA material the most significant leak in U.S. history. Shortly before the September 2016 release of his biographical thriller film Snowden, a semi-fictionalized drama based on the life of Edward Snowden with a short appearance by Snowden himself, Oliver Stone said that Snowden should be pardoned, calling him a "patriot above all" and suggesting that he should run the NSA himself. In a December 18, 2013, CNN editorial, former NSA whistleblower J. Kirk Wiebe, known for his involvement in the NSA's Trailblazer Project, noted that a federal judge for the District of Columbia, the Hon. Richard J. Leon had ruled in a contemporaneous case before him that the NSA warrantless surveillance program was likely unconstitutional; Wiebe then proposed that Snowden should be granted amnesty and allowed to return to the United States. Government officials Numerous high-ranking current or former U.S. government officials reacted publicly to Snowden's disclosures. 2013 Director of National Intelligence James Clapper condemned the leaks as doing "huge, grave damage" to U.S. intelligence capabilities. Ex-CIA director James Woolsey said that if Snowden were convicted of treason, he should be hanged. FBI director Robert Mueller said that the U.S. government is "taking all necessary steps to hold Edward Snowden responsible for these disclosures." 2014 House Intelligence Committee chairman Mike Rogers and ranking member Dutch Ruppersberger said a classified Pentagon report written by military intelligence officials contended that Snowden's leaks had put U.S. troops at risk and prompted terrorists to change their tactics and that most files copied were related to current U.S. military operations. Former congressman Ron Paul began a petition urging the Obama Administration to grant Snowden clemency. Paul released a video on his website saying, "Edward Snowden sacrificed his livelihood, citizenship, and freedom by exposing the disturbing scope of the NSA's worldwide spying program. Thanks to one man's courageous actions, Americans know about the truly egregious ways their government is spying on them." Mike McConnell—former NSA director and current vice chairman at Booz Allen Hamilton—said that Snowden was motivated by revenge when the NSA did not offer him the job he wanted. "At this point," said McConnell, "he being narcissistic and having failed at most everything he did, he decides now I'm going to turn on them." Former President Jimmy Carter said that if he were still president today he would "certainly consider" giving Snowden a pardon were he to be found guilty and imprisoned for his leaks. Former Secretary of State Hillary Clinton said, "[W]e have all these protections for whistleblowers. If [Snowden] were concerned and wanted to be part of the American debate...it struck me as...sort of odd that he would flee to China because Hong Kong is controlled by China, and that he would then go to Russia—two countries with which we have very difficult cyberrelationships." As Clinton saw it, "turning over a lot of that material—intentionally or unintentionally—drained, gave all kinds of information, not only to big countries but to networks and terrorist groups and the like. So I have a hard time thinking that somebody who is a champion of privacy and liberty has taken refuge in Russia, under Putin's authority." 
Clinton later said that if Snowden wished to return to the U.S., "knowing he would be held accountable," he would have the right "to launch both a legal defense and a public defense, which can, of course, affect the legal defense." Secretary of State John Kerry said Snowden had "damaged his country very significantly" and "hurt operational security" by telling terrorists how to evade detection. "The bottom line," Kerry added, "is this man has betrayed his country, sitting in Russia where he has taken refuge. You know, he should man up and come back to the United States." Former Vice President Al Gore said Snowden "clearly violated the law so you can't say OK, what he did is all right. It's not. But what he revealed in the course of violating important laws included violations of the U.S. Constitution that were way more serious than the crimes he committed. In the course of violating important law, he also provided an important service. ... Because we did need to know how far this has gone." In December, President Obama nominated former deputy defense secretary Ashton Carter to succeed outgoing Secretary of Defense Chuck Hagel. Seven months before, Carter had said, "We had a cyber Pearl Harbor. His name was Edward Snowden." Carter charged that U.S. security officials "screwed up spectacularly in the case of Snowden. And this knucklehead had access to destructive power that was much more than any individual person should have access to." Debate In the U.S., Snowden's actions precipitated an intense debate on privacy and warrantless domestic surveillance. President Obama was initially dismissive of Snowden, saying "I'm not going to be scrambling jets to get a 29-year-old hacker." In August 2013, Obama rejected the suggestion that Snowden was a patriot, and in November said that "the benefit of the debate he generated was not worth the damage done, because there was another way of doing it." In June 2013, U.S. Senator Bernie Sanders of Vermont shared on his blog a "must-read" news story by Ron Fournier, stating "Love him or hate him, we all owe Snowden our thanks for forcing upon the nation an important debate. But the debate shouldn't be about him. It should be about the gnawing questions his actions raised from the shadows." In 2015, Sanders stated that "Snowden played a very important role in educating the American public" and that although Snowden should not go unpunished for breaking the law, "that education should be taken into consideration before the sentencing." Snowden said in December 2013 that he was "inspired by the global debate" ignited by the leaks and that the NSA's "culture of indiscriminate global espionage ... is collapsing." At the end of 2013, The Washington Post said that the public debate and its offshoots had produced no meaningful change in policy, with the status quo continuing. In 2016, on The Axe Files podcast, former U.S. Attorney General Eric Holder said that Snowden "performed a public service by raising the debate that we engaged in and by the changes that we made." Holder nevertheless said that Snowden's actions were inappropriate and illegal. In September 2016, the bipartisan U.S. House Permanent Select Committee on Intelligence completed a review of the Snowden disclosures and said that the federal government would have to spend millions of dollars responding to the fallout from Snowden's disclosures. The report also said that "the public narrative popularized by Snowden and his allies is rife with falsehoods, exaggerations, and crucial omissions." 
The report was denounced by Washington Post reporter Barton Gellman, who, in an opinion piece for The Century Foundation, called it "aggressively dishonest" and "contemptuous of fact." Presidential panel In August 2013, President Obama said that he had called for a review of U.S. surveillance activities before Snowden had begun revealing details of the NSA's operations, and announced that he was directing DNI James Clapper "to establish a review group on intelligence and communications technologies." In December, the task force issued 46 recommendations that, if adopted, would subject the NSA to additional scrutiny by the courts, Congress, and the president, and would strip the NSA of the authority to infiltrate American computer systems using backdoors in hardware or software. Panel member Geoffrey R. Stone said there was no evidence that the bulk collection of phone data had stopped any terror attacks. Court rulings (United States) On June 6, 2013, in the wake of Snowden's leaks, conservative public interest lawyer and Judicial Watch founder Larry Klayman filed a lawsuit claiming that the federal government had unlawfully collected metadata for his telephone calls and was harassing him. In Klayman v. Obama, Judge Richard J. Leon referred to the NSA's "almost-Orwellian technology" and ruled the bulk telephone metadata program to be likely unconstitutional. Leon's ruling was stayed pending an appeal by the government. Snowden later described Judge Leon's decision as vindication. On June 11, the ACLU filed a lawsuit against James Clapper, Director of National Intelligence, alleging that the NSA's phone records program was unconstitutional. In December 2013, ten days after Judge Leon's ruling, Judge William H. Pauley III came to the opposite conclusion. In ACLU v. Clapper, although acknowledging that privacy concerns are not trivial, Pauley found that the potential benefits of surveillance outweighed these considerations and ruled that the NSA's collection of phone data was legal. Gary Schmitt, former staff director of the Senate Select Committee on Intelligence, wrote that "The two decisions have generated public confusion over the constitutionality of the NSA's data collection program—a kind of judicial 'he-said, she-said' standoff." On May 7, 2015, in the case of ACLU v. Clapper, the United States Court of Appeals for the Second Circuit said that Section 215 of the Patriot Act did not authorize the NSA to collect Americans' calling records in bulk, as exposed by Snowden in 2013. The decision voided U.S. District Judge William Pauley's December 2013 finding that the NSA program was lawful, and remanded the case to him for further review. The appeals court did not rule on the constitutionality of the bulk surveillance and declined to enjoin the program, noting the pending expiration of relevant parts of the Patriot Act. Circuit Judge Gerard E. Lynch wrote that, given the national security interests at stake, it was prudent to give Congress an opportunity to debate and decide the matter. On September 2, 2020, a US federal court ruled that the US intelligence community's mass surveillance program, exposed by Edward Snowden, was illegal and possibly unconstitutional. The court also found that the US intelligence leaders who had publicly defended the program were not telling the truth. USA Freedom Act On June 2, 2015, the U.S. 
Senate passed, and President Obama signed, the USA Freedom Act, which restored in modified form several provisions of the Patriot Act that had expired the day before, while for the first time imposing some limits on the bulk collection of telecommunication data on U.S. citizens by American intelligence agencies. The new restrictions were widely seen as stemming from Snowden's revelations. Europe In an official report published in October 2015, the United Nations special rapporteur for the promotion and protection of the right to freedom of speech, Professor David Kaye, criticized the U.S. government's harsh treatment of, and criminal charges against, whistleblowers, including Edward Snowden. The report found that Snowden's revelations were important for people everywhere and made "a deep and lasting impact on law, policy, and politics." The European Parliament invited Snowden to make a pre-recorded video appearance to aid their NSA investigation. Snowden gave written testimony in which he said that he was seeking asylum in the EU, but that he was told by European Parliamentarians that the U.S. would not allow EU partners to make such an offer. He told the Parliament that the NSA was working with the security agencies of EU states to "get access to as much data of EU citizens as possible." He said that the NSA's Foreign Affairs Division lobbies the EU and other countries to change their laws, allowing for "everyone in the country" to be spied on legally. By mid-2013, Snowden had applied for asylum in 21 countries, including countries in Europe and South America, obtaining negative responses in most cases. Austria, Italy and Switzerland Snowden applied for asylum in Austria, Italy and Switzerland. Snowden, speaking to a Geneva, Switzerland audience via video link from Moscow, said he would love to return to Geneva, where he had previously worked undercover for the CIA. Swiss media said that the Swiss Attorney General had determined that Switzerland would not extradite Snowden if the US request were considered "politically motivated". Swiss media also suggested that Switzerland would grant Snowden asylum if he revealed the extent of espionage activities by the United States government. According to the paper SonntagsZeitung, Snowden would be granted safe entry and residency in Switzerland in return for his knowledge of American intelligence activities. The Swiss paper Le Matin reported that Snowden's involvement could come as part of criminal proceedings or as part of a parliamentary inquiry. Extradition would also be rejected if Snowden faced the death penalty, a point on which the United States had already provided assurances. The three felony charges Snowden faces each carry a maximum of 10 years' imprisonment. As reported in Der Bund, however, the upper levels of the Swiss government could create an obstacle. France On September 16, 2019, it was reported that Snowden had said he "would love" to get political asylum in France. Snowden first applied unsuccessfully for asylum in France in 2013, under then French President François Hollande. His second request, under President Emmanuel Macron, was favorably received by Justice Minister Nicole Belloubet. However, no other members of the French government were known to express support for Snowden's asylum request, possibly due to the potential adverse diplomatic consequences. Germany Hans-Georg Maaßen, head of the Federal Office for the Protection of the Constitution, Germany's domestic security agency, speculated that Snowden could have been working for the Russian government. 
Snowden rejected this insinuation, speculating on Twitter in German that "it cannot be proven if Maaßen is an agent of the SVR or FSB." On October 31, 2013, Snowden met with German Green Party lawmaker Hans-Christian Ströbele in Moscow to discuss the possibility of Snowden giving testimony in Germany. At the meeting, Snowden gave Ströbele a letter to the German government, parliament, and federal Attorney-General, the details of which were later to be made public. Germany later blocked Snowden from testifying in person in an NSA inquiry, citing a potential grave strain on US-German relations. Nordic Countries The FBI demanded that Nordic countries arrest Snowden should he visit their countries. Snowden made asylum requests to Sweden, Norway, Finland and Denmark. All requests were ultimately denied, with varying degrees of severity in the response. According to Finnish foreign ministry spokeswoman Tytti Pylkkö, Snowden made an asylum request to Finland by sending an application to the Finnish embassy in Moscow while he was confined to the transit area of Sheremetyevo International Airport, but was told that Finnish law required him to be on Finnish soil to apply. According to SVT News, Snowden met in Moscow with three Swedish MPs, Matthias Sundin (L), Jakop Dalunde (MP) and Cecilia Magnusson (M), to discuss his views on mass surveillance. The meeting was organized by the Right Livelihood Award Foundation, which awarded Snowden the Right Livelihood Honorary Award, often called Sweden's "Alternative Nobel Prize." According to the foundation, the prize was for Snowden's work on press freedom. Sweden ultimately rejected Snowden's asylum request, however, so the award was accepted by his father, Lon Snowden, on his behalf. Snowden was granted a freedom of speech award by the Oslo branch of the writers' group PEN International. He applied for asylum in Norway, but the Norwegian Justice Secretary insisted that the application be made on Norwegian soil and further expressed doubt that Snowden met the criteria for gaining asylum, namely being "important for foreign political reasons". Snowden then filed a lawsuit for free passage through Norway in order to receive his freedom of speech award, taking the case through the Oslo District Court, an appeals court, and finally Norway's Supreme Court. The lawsuit was ultimately rejected by the Norwegian Supreme Court. Snowden also applied for asylum in Denmark, but this was rejected by the center-right Danish Prime Minister Lars Løkke Rasmussen, who said he could see no reason to grant Snowden asylum, calling him a "criminal". Apparently, under an agreement with the Danish government, a US government jet waited on standby in Copenhagen to transfer Snowden back to the United States from any Scandinavian country. Latin and South America Support for Snowden came from Latin and South American leaders including the Argentinian President Cristina Fernández de Kirchner, Brazilian President Dilma Rousseff, Ecuadorian President Rafael Correa, Bolivian President Evo Morales, Venezuelan President Nicolás Maduro, and Nicaraguan President Daniel Ortega. International community Crediting the Snowden leaks, the United Nations General Assembly unanimously adopted Resolution 68/167 in December 2013. The non-binding resolution denounced unwarranted digital surveillance and included a symbolic declaration of the right of all individuals to online privacy. In July 2014, Navi Pillay, UN High Commissioner for Human Rights, told a news conference in Geneva that the U.S. 
should abandon its efforts to prosecute Snowden, since his leaks were in the public interest. Public opinion polls Surveys conducted by news outlets and professional polling organizations found that American public opinion was divided on Snowden's disclosures and that those polled in Canada and Europe were more supportive of Snowden than respondents in the U.S., although Americans have since grown more supportive of Snowden's disclosures. In Germany, Italy, France, the Netherlands, and Spain, more than 80% of people familiar with Snowden view him positively. Recognition For his global surveillance disclosures, Snowden has been honored by publications and organizations based in Europe and the United States. He was voted The Guardian's person of the year 2013, garnering four times as many votes as any other candidate. Teleconference speaking engagements In March 2014, Snowden spoke at the South by Southwest (SXSW) Interactive technology conference in Austin, Texas, in front of 3,500 attendees. He participated by teleconference carried over multiple routers running the Google Hangouts platform. On-stage moderators were Christopher Soghoian and Snowden's legal counsel, Ben Wizner, both from the ACLU. Snowden said that the NSA was "setting fire to the future of the internet," and that the SXSW audience was "the firefighters." Attendees could use Twitter to send questions to Snowden, who answered one by saying that information gathered by corporations was much less dangerous than that gathered by a government agency, because "governments have the power to deprive you of your rights." Then-Representative Mike Pompeo (R-KS) of the House Intelligence Committee, later director of the CIA and secretary of state, had tried unsuccessfully to get the SXSW management to cancel Snowden's appearance; instead, SXSW director Hugh Forrest said that the NSA was welcome to respond to Snowden at the 2015 conference. Later that month, Snowden appeared by teleconference at the TED conference in Vancouver, British Columbia. Represented on stage by a robot with a video screen, video camera, microphones, and speakers, Snowden conversed with TED curator Chris Anderson and told the attendees that online businesses should act quickly to encrypt their websites. He described the NSA's PRISM program as the U.S. government using businesses to collect data for it, and said that the NSA "intentionally misleads corporate partners," citing as an example the Bullrun decryption program's creation of backdoor access. Snowden said he would gladly return to the U.S. if given immunity from prosecution, but that he was more concerned about alerting the public about abuses of government authority. Anderson invited Internet pioneer Tim Berners-Lee on stage to converse with Snowden, who said that he would support Berners-Lee's concept of an "internet Magna Carta" to "encode our values in the structure of the internet." On September 15, 2014, Snowden appeared via remote video link, along with Julian Assange, at Kim Dotcom's Moment of Truth town hall meeting held in Auckland. He made a similar video link appearance on February 2, 2015, along with Greenwald, as the keynote speaker at the World Affairs Conference at Upper Canada College in Toronto. In March 2015, while speaking at the FIFDH (international human rights film festival), he made a public appeal for Switzerland to grant him asylum, saying he would like to return to live in Geneva, where he once worked undercover for the Central Intelligence Agency. 
In April 2015, John Oliver, the host of Last Week Tonight with John Oliver, flew to Moscow to interview Edward Snowden. On November 10, 2015, Snowden appeared at the Newseum, via remote video link, for PEN American Center's "Secret Sources: Whistleblowers, National Security and Free Expression" event. In 2015, Snowden earned over $200,000 from digital speaking engagements in the U.S. On March 19, 2016, Snowden delivered the opening keynote address of the LibrePlanet conference, a meeting of international free software activists and developers presented by the Free Software Foundation. The conference was held at the Massachusetts Institute of Technology and was the first time Snowden spoke via teleconference using a full free software stack, end-to-end. On July 21, 2016, in a talk at MIT Media Lab's Forbidden Research event, Snowden and hardware hacker Bunnie Huang published research for a smartphone case, the so-called Introspection Engine, that would monitor signals received and sent by the phone and alert the user if the phone is transmitting or receiving information when it shouldn't be (for example, when it is turned off or in airplane mode). Snowden described the feature as useful for journalists or activists operating under hostile governments that would otherwise track their activities through their phones. In August 2020, a court filing by the Department of Justice indicated that Snowden had collected a total of over $1.2 million in speaking fees in addition to advances on books since 2013. In September 2021, Yahoo! Finance reported that for 67 speaking appearances by video link from September 2015 to May 2020, Snowden had earned more than $1.2 million. In March 2021, Iowa State University paid him $35,000 for one such speech, his first at a public U.S. college since February 2017, when the University of Pittsburgh paid him $15,000. In April 2021, Snowden appeared at a Canadian investment conference sponsored by Sunil Tulsiani, a former policeman who had been barred from trading for life after dishonest behavior. Snowden took the opportunity to affirm his role as a whistleblower, inform viewers of Tulsiani's background, and encourage investors to conduct proper research before spending any money. The "Snowden effect" In July 2013, media critic Jay Rosen defined the "Snowden effect" as "Direct and indirect gains in public knowledge from the cascade of events and further reporting that followed Edward Snowden's leaks of classified information about the surveillance state in the U.S." In December 2013, The Nation wrote that Snowden had sparked an overdue debate about national security and individual privacy. In Forbes, the effect was seen to have nearly united the U.S. Congress in opposition to the massive post-9/11 domestic intelligence gathering system. In its Spring 2014 Global Attitudes Survey, the Pew Research Center found that Snowden's disclosures had tarnished the image of the United States, especially in Europe and Latin America. Jewel v. NSA On November 2, 2018, Snowden provided a court declaration in Jewel v. National Security Agency. Bibliography Permanent Record (2019) In popular culture Snowden's impact as a public figure has been felt in cinema, television, advertising, video games, literature, music, statuary, and social media. Snowden gave Channel 4's Alternative Christmas Message in December 2013. 
The film Snowden, based on Snowden's leaking of classified US government material, directed by Oliver Stone and written by Stone and Kieran Fitzgerald, was released in 2016. The documentary Citizenfour directed by Laura Poitras won Best Documentary Feature at the 87th Academy Awards. See also Aftermath of the global surveillance disclosures Global surveillance and journalism List of whistleblowers Philip Agee Julian Assange Thomas A. Drake Daniel Ellsberg Chelsea Manning Sophie Zhang Carnivore (software) COINTELPRO ECHELON John Crane German Parliamentary Committee investigating the NSA spying scandal List of people who have lived at airports Mass surveillance in the United States NSA warrantless surveillance (2001–2007) Perry Fellwock Mark Klein Thomas Tamm Diane Roark Russ Tice Operation Socialist (code name) Panetta Review Russian influence operations in the United States Stellar Wind (code name) Terrorist Surveillance Program Haven (software) – free and open-source Android app co-developed by Snowden and The Guardian Project GPG for Journalists Notes References Further reading Lanchester, John. October 3, 2013. Margulies, Joseph. "The Promise of May, the Betrayal of June, and the Larger Lesson of Manning and Snowden." Verdict. Justia. July 17, 2013. External links Edward Snowden on Substack November 1, 2013 (Index of articles) (Index of articles) "Global Surveillance" An annotated and categorized "overview of the revelations following the leaks by the whistleblower Edward Snowden. There are also some links to comments and followups." By Oslo University Library "The NSA Archive" American Civil Liberties Union searchable database of NSA documents disclosed by Edward Snowden, as published between June 5, 2013, and May 6, 2014 Book documents 107 additional pages from the Snowden archive released on May 13, 2014, in conjunction with the publication of Glenn Greenwald's No Place to Hide Snowden documents at Internet Archive 1983 births Activists from North Carolina American computer specialists American dissidents American exiles American memoirists American refugees American whistleblowers Articles containing video clips Booz Allen Hamilton people Dell people Fugitives wanted by the United States Fugitives wanted under the Espionage Act of 1917 Living people National Security Agency people People from Elizabeth City, North Carolina People of the Central Intelligence Agency People of the Defense Intelligence Agency People with epilepsy Privacy activists Refugees in Russia United States Army soldiers American emigrants to Russia
49597978
https://en.wikipedia.org/wiki/Ticketscript
Ticketscript
ticketscript was a European self-service event-ticketing software company. It provided software for event organizers to set up a ticketed event and to promote and sell tickets online through their own websites, social media channels and affiliated partner sites, as well as at the door. The company also offered entry-management tools and software, providing event organizers with live data, sales statistics and reporting tools directly from their computers, mobile devices and smartphones. The company was established in 2006 in Amsterdam, the Netherlands, by Frans Jonker and Ruben Meiland. It had offices in five European cities, including London, Berlin, Barcelona and Antwerp, and employed over 150 people across five countries. The offices in Germany and Belgium were opened in 2009, the London office in 2010 and the Spanish one in 2011. The company was acquired by Eventbrite in January 2017. Products ticketscript provided software for ticket creation, ticket sales and event management. Its Flow app allowed people to manage guest lists and entry. Investment The company went through various rounds of investment funding, with the first round taking place in 2010. In 2014, the London-based private equity firm FF&P invested £7m in the company, which enabled ticketscript to continue its international expansion and the development of new products and features. References Companies based in Amsterdam Software companies established in 2006 Ticket sales companies Entertainment companies established in 2006 Dutch companies established in 2006 Software companies disestablished in 2017 Entertainment companies disestablished in 2017 Dutch companies disestablished in 2017 2017 mergers and acquisitions
60534832
https://en.wikipedia.org/wiki/Amir%20Husain
Amir Husain
Amir Husain is a Pakistani-American artificial intelligence (AI) entrepreneur, founder of the Austin-based company SparkCognition, and author of the book The Sentient Machine. Childhood & Education Husain was born in Lahore, Punjab, Pakistan. His father was a businessman and his mother an educator. At the age of four, Husain interacted with his first computer: a Commodore 64. Amazed by what the machine could do, he went back to his room and started building a makeshift computer out of toys and cardboard, starting his lifelong obsession with computer science. He dropped out of middle school in the eighth grade and began writing software and selling it for a profit. At the age of 15, Husain began attending the Punjab Institute of Computer Science, from which he graduated two years later with a bachelor's degree in computer science. After graduating, Husain spent time searching for an ideal research organization and eventually found the Distributed Multimedia Computing Laboratory (DMCL) at the University of Texas at Austin. He joined UT Austin in 1996, but upon arrival was denied entrance to the master's program because of his young age. He then spent a year in the bachelor's program in computer science and obtained a second BS degree from the University of Texas at Austin. While still an undergraduate, he obtained his desired position at DMCL. In 1999, while working towards his Ph.D., Husain dropped out and launched his first start-up, Kurion. Career In 1999, Husain launched Kurion, a web services company offering website personalization engines. The company was purchased in 2001 by iSyndicate, then the largest internet content syndication company. In 2002, the second startup he had founded, Inframanage, merged with ClearCube Technology. Husain became Chief Technology Officer at ClearCube and, later, CEO of ClearCube's software spin-off. In 2013, Husain founded SparkCognition, an artificial intelligence company. His first investor at the new company was Michael Dell. Boeing, CME Group, Verizon, State Street and others followed. Since its inception, the company has gained clients such as Apergy, Boeing, Aker BP, Honeywell Aerospace, Flowserve, and the Defense Innovation Unit. As of June 2019, SparkCognition had raised more than $72.5 million from venture capital investors. In 2018, Husain became CEO of SkyGrid, a joint venture between Boeing and SparkCognition aimed at developing an aerial operating system that uses AI and blockchain technology to integrate autonomous cargo and passenger aircraft into the aerospace industry. Awards, Patents, & Achievements Husain has been named Top Technology Entrepreneur of the Year by the Austin Business Journal. Other awards he has received include being listed as an Onalytica Top 100 Artificial Intelligence Influencer, receiving the Austin Under 40 Technology and Science Award in 2016, and being a finalist for EY Entrepreneur of the Year in 2018. Husain has 33 awarded patents to his name and several dozen additional patents pending. He has been published in journals such as Network World, Computerworld, and the U.S. Naval Institute’s Proceedings, along with major news outlets such as Foreign Policy. A computer designed by Husain is in the collection of the Computer History Museum in Mountain View, California. He served on the Board of Advisors for IBM Watson and the University of Texas at Austin’s Department of Computer Science, and is a member of the Council on Foreign Relations. 
He has also received various awards from CRN, Nokia, PC World, and VMworld, among others. Personal life Husain married Zaib (née Iqtidar) in 2002. He lives in Austin, Texas. Bibliography References 20th-century births Living people Pakistani businesspeople Pakistani emigrants to the United States University of Texas at Austin alumni People from Lahore People from Austin, Texas Year of birth missing (living people)
858492
https://en.wikipedia.org/wiki/OpenVPN
OpenVPN
OpenVPN is a virtual private network (VPN) system that implements techniques to create secure point-to-point or site-to-site connections in routed or bridged configurations and remote access facilities. It implements both client and server applications. OpenVPN allows peers to authenticate each other using pre-shared secret keys, certificates or username/password. When used in a multiclient-server configuration, it allows the server to release an authentication certificate for every client, using signatures and certificate authority. It uses the OpenSSL encryption library extensively, as well as the TLS protocol, and contains many security and control features. It uses a custom security protocol that utilizes SSL/TLS for key exchange. It is capable of traversing network address translators (NATs) and firewalls. OpenVPN has been ported and embedded to several systems. For example, DD-WRT has the OpenVPN server function. SoftEther VPN, a multi-protocol VPN server, also has an implementation of OpenVPN protocol. It was written by James Yonan and is free software, released under the terms of the GNU General Public License version 2 (GPLv2). Additionally, commercial licenses are available. Architecture Encryption OpenVPN uses the OpenSSL library to provide encryption of both the data and control channels. It lets OpenSSL do all the encryption and authentication work, allowing OpenVPN to use all the ciphers available in the OpenSSL package. It can also use the HMAC packet authentication feature to add an additional layer of security to the connection (referred to as an "HMAC Firewall" by the creator). It can also use hardware acceleration to get better encryption performance. Support for mbed TLS is available starting from version 2.3. Authentication OpenVPN has several ways to authenticate peers with each other. OpenVPN offers pre-shared keys, certificate-based, and username/password-based authentication. Preshared secret key is the easiest, and certificate-based is the most robust and feature-rich. In version 2.0 username/password authentications can be enabled, both with or without certificates. However, to make use of username/password authentications, OpenVPN depends on third-party modules. Networking OpenVPN can run over User Datagram Protocol (UDP) or Transmission Control Protocol (TCP) transports, multiplexing created SSL tunnels on a single TCP/UDP port (RFC 3948 for UDP). From 2.3.x series on, OpenVPN fully supports IPv6 as protocol of the virtual network inside a tunnel and the OpenVPN applications can also establish connections via IPv6. It has the ability to work through most proxy servers (including HTTP) and is good at working through network address translation (NAT) and getting out through firewalls. The server configuration has the ability to "push" certain network configuration options to the clients. These include IP addresses, routing commands, and a few connection options. OpenVPN offers two types of interfaces for networking via the Universal TUN/TAP driver. It can create either a layer-3 based IP tunnel (TUN), or a layer-2 based Ethernet TAP that can carry any type of Ethernet traffic. OpenVPN can optionally use the LZO compression library to compress the data stream. Port 1194 is the official IANA assigned port number for OpenVPN. Newer versions of the program now default to that port. A feature in the 2.0 version allows for one process to manage several simultaneous tunnels, as opposed to the original "one tunnel per process" restriction on the 1.x series. 
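To make the networking options described above more concrete, the following is a minimal, illustrative server configuration sketch. It is not taken from any particular OpenVPN release or deployment; the certificate file names, virtual subnet, and pushed routes are placeholder assumptions, but each directive used (port, proto, dev, server, push, comp-lzo, keepalive) is a standard OpenVPN 2.x option.

```
# Listen on the IANA-assigned OpenVPN port over UDP
port 1194
proto udp
# Layer-3 IP tunnel via the TUN driver ("dev tap" would give a layer-2 Ethernet tunnel)
dev tun
# TLS material for certificate-based authentication (placeholder paths)
ca ca.crt
cert server.crt
key server.key
dh dh.pem
# Hand out client addresses from this virtual subnet
server 10.8.0.0 255.255.255.0
# "Push" network configuration to connecting clients (example values)
push "route 192.168.10.0 255.255.255.0"
push "dhcp-option DNS 192.168.10.1"
# Optional LZO compression of the data stream
comp-lzo
# Ping every 10 s; assume the peer is down after 120 s without a reply
keepalive 10 120
```

A client would pair this with its own certificate and a matching `remote <server-address> 1194 udp` line; the pushed route and DNS options then arrive automatically when the tunnel comes up.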
OpenVPN's use of common network protocols (TCP and UDP) makes it a desirable alternative to IPsec in situations where an ISP may block specific VPN protocols in order to force users to subscribe to a higher-priced, "business grade," service tier. For example, Comcast previously declared that their @Home product was, and had always been, designated as a residential service and did not allow the use of commercial applications. Their argument was that high-traffic telecommuting while utilizing a VPN can adversely affect the network performance of their regular residential subscribers. They offered an alternative, @Home Professional, which cost more than the @Home product. Anyone wishing to use a VPN would therefore have to subscribe to the higher-priced, business-grade service tier.
When OpenVPN uses Transmission Control Protocol (TCP) transports to establish a tunnel, performance will be acceptable only as long as there is sufficient excess bandwidth on the un-tunneled network link to guarantee that the tunneled TCP timers do not expire. If this becomes untrue, performance falls off dramatically. This is known as the "TCP meltdown problem".
Security
OpenVPN offers various internal security features. It offers up to 256-bit encryption through the OpenSSL library, although some service providers may configure lower encryption strengths, effectively trading security for speed. It runs in user space rather than in the IP stack (and therefore the kernel). OpenVPN has the ability to drop root privileges, use mlockall to prevent swapping sensitive data to disk, enter a chroot jail after initialization, and apply a SELinux context after initialization. OpenVPN runs a custom security protocol based on SSL and TLS, rather than supporting IKE, IPsec, L2TP or PPTP. OpenVPN supports smart cards via PKCS#11-based cryptographic tokens.
Extensibility
OpenVPN can be extended with third-party plug-ins or scripts, which can be called at defined entry points. The purpose of this is often to extend OpenVPN with more advanced logging, enhanced authentication with usernames and passwords, dynamic firewall updates, RADIUS integration and so on. The plug-ins are dynamically loadable modules, usually written in C, while the scripts interface can execute any scripts or binaries available to OpenVPN. The OpenVPN source code contains some examples of such plug-ins, including a PAM authentication plug-in. Several third-party plug-ins also exist to authenticate against LDAP or SQL databases such as SQLite and MySQL.
Platforms
It is available on Solaris, Linux, OpenBSD, FreeBSD, NetBSD, QNX, macOS and Windows XP and later. OpenVPN is available for mobile phone operating systems (OS) including Maemo, Windows Mobile 6.5 and below, iOS 3GS+ devices, jailbroken iOS 3.1.2+ devices, Android 4.0+ devices, and Android devices that have had the CyanogenMod aftermarket firmware flashed or have the correct kernel module installed. It is not compatible with some mobile phone OSes, including Palm OS. It is not a "web-based" VPN shown as a web page such as Citrix or Terminal Services Web access; the program is installed independently and configured by editing text files manually, rather than through a GUI-based wizard. OpenVPN is not compatible with VPN clients that use the IPsec over L2TP or PPTP protocols. The entire package consists of one binary for both client and server connections, an optional configuration file, and one or more key files depending on the authentication method used.
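As a companion to the security and extensibility features described above, the sketch below shows how some of them are typically switched on in a server configuration. This is again an illustrative fragment rather than a recommended production setup: the paths, the group name, and the verification script are hypothetical, while the directives themselves (user/group, chroot, mlock, persist-key/persist-tun, tls-auth, script-security, auth-user-pass-verify) are standard OpenVPN options.

```
# Drop root privileges after initialization
user nobody
group nogroup
# Enter a chroot jail and lock memory so sensitive data is not swapped to disk
chroot /var/empty
mlock
# Keep the key material and TUN device across the privilege drop
persist-key
persist-tun
# Add the HMAC "firewall" on the control channel (key direction 0 on the server)
tls-auth ta.key 0
# Allow calling external scripts, e.g. for username/password verification
script-security 2
auth-user-pass-verify /etc/openvpn/check-user.sh via-env
```

With `via-env`, OpenVPN hands the supplied username and password to the script through environment variables and accepts the client only if the script exits with status 0; plug-ins such as the bundled PAM module serve the same purpose as dynamically loaded C code rather than an external script.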
Firmware implementations
OpenVPN has been integrated into several router firmware packages, allowing users to run OpenVPN in client or server mode from their network routers. A router running OpenVPN in client mode, for example, allows any device on a network to access a VPN without needing the capability to install OpenVPN. Notable firmware packages with OpenVPN integration include DD-WRT, OpenWrt and Tomato. OpenVPN has also been implemented in some manufacturer router firmware.
Software implementations
OpenVPN has been integrated into SoftEther VPN, an open-source multi-protocol VPN server, to allow users to connect to the VPN server from existing OpenVPN clients. OpenVPN is also integrated into VyOS, an open-source routing OS forked from the Vyatta software router.
Licensing
OpenVPN is available in two versions:
OpenVPN Community Edition, which is a free and open-source version
OpenVPN Access Server (OpenVPN-AS), which is based on the Community Edition but provides additional paid and proprietary features such as LDAP integration, an SMB server and web UI management, along with a set of installation and configuration tools that are reported to simplify the rapid deployment of a VPN remote-access solution
The Access Server edition relies heavily on iptables for load balancing, and it has never been available on Windows for this reason. This version is also able to dynamically create client ("OpenVPN Connect") installers, which include a client profile for connecting to a particular Access Server instance. However, the user does not need to have an Access Server client in order to connect to the Access Server instance; the client from the OpenVPN Community Edition can be used.
See also
OpenConnect
OpenSSH
Secure Socket Tunneling Protocol
stunnel
Tunnelblick
UDP hole punching
WireGuard
References
External links
Community website
Tech Talks
2001 software Free security software Tunneling protocols Unix network-related software Virtual private networks Free software programmed in C
174797
https://en.wikipedia.org/wiki/Pool%20of%20Radiance
Pool of Radiance
Pool of Radiance is a role-playing video game developed and published by Strategic Simulations, Inc (SSI) in 1988. It was the first adaptation of TSR's Advanced Dungeons & Dragons (AD&D) fantasy role-playing game for home computers, becoming the first episode in a four-part series of D&D computer adventure games. The other games in the "Gold Box" series used the game engine pioneered in Pool of Radiance, as did later D&D titles such as the Neverwinter Nights online game. Pool of Radiance takes place in the Forgotten Realms fantasy setting, with the action centered in and around the port city of Phlan. Just as in traditional D&D games, the player starts by building a party of up to six characters, deciding the race, gender, class, and ability scores for each. The player's party is enlisted to help the settled part of the city by clearing out the marauding inhabitants that have taken over the surroundings. The characters move on from one area to another, battling bands of enemies as they go and ultimately confronting the powerful leader of the evil forces. During play, the player characters gain experience points, which allow them to increase their capabilities. The game primarily uses a first-person perspective, with the screen divided into sections to display pertinent textual information. During combat sequences, the display switches to a top-down "video game isometric" view. Generally well received by the gaming press, Pool of Radiance won the Origins Award for "Best Fantasy or Science Fiction Computer Game of 1988". Some reviewers criticized the game's similarities to other contemporary games and its slowness in places, but praised the game's graphics and its role-playing adventure and combat aspects. Also well-regarded was the ability to export player characters from Pool of Radiance to subsequent SSI games in the series. Gameplay Pool of Radiance is based on the same game mechanics as the Advanced Dungeons & Dragons rule set. As in many role-playing games (RPGs), each player character in Pool of Radiance has a character race and a character class, determined at the beginning of the game. Six races are offered, including elves and halflings, as well as four classes (fighter, cleric, wizard, and thief). Non-human characters have the option to become multi-classed, which means they gain the capabilities of more than one class, but advance in levels more slowly. During character creation, the computer randomly generates statistics for each character, although the player can alter these attributes. The player also chooses each character's alignment, or moral philosophy; while the player controls each character's actions, alignment can affect how NPCs view their actions. The player can then customize the appearance and colors of each character's combat icon. Alternatively, the player can load a pre-generated party to be used for introductory play. These characters are combined into a party of six or less, with two slots open for NPCs. Players create their own save-game files, assuring character continuation regardless of events in the game. On an MS-DOS computer, the game can be copied to the hard-disk drive. Other computer systems, such as the Commodore 64, require a separate save-game disk. The game's "exploration" mode uses a three-dimensional first-person perspective, with a rectangle in the top left of the screen displaying the party's current view; the rest of the screen displays text information about the party and the area. 
During gameplay, the player accesses menus to allow characters to use objects; trade items with other characters; parley with enemies; buy, sell, and pool the characters' money; cast spells, and learn new magic skills. Players can view characters' movement from different angles, including an aerial view. The game uses three different versions of each sprite to indicate differences between short-, medium-, and long-range encounters. In combat mode, the screen changes to a top-down mode with dimetric projection, where the player decides what actions the characters will take in each round. These actions are taken immediately, rather than after all commands have been issued as is standard in some RPGs. Optionally, the player can let the computer choose character moves for each round. Characters and monsters may make an extra attack on a retreating enemy that moves next to them. If a character's hit points (HP) fall below zero, he or she must be bandaged by another character or the character will die. The game contains random encounters, and game reviewers for Dragon magazine observed that random encounters seem to follow standard patterns of encounter tables in pen and paper AD&D game manuals. They also observed that the depictions of monsters confronting the party "looked as though they had jumped from the pages of the Monster Manual". Different combat options are available to characters based on class. For example, fighters can wield melee or ranged weapons; magic-users can cast spells; thieves have the option to "back-stab" an opponent by strategically positioning themselves. As fighters progress in level, they can attack more than once in a round. Fighters also gain the ability to "sweep" enemies, effectively attacking each nearby low-level creature in the same turn. Magic-users and clerics are allowed to memorize and cast a set number of spells each day. Once cast, a spell must be memorized again before reuse. The process requires hours of inactivity for all characters, during which they rest in a camp; this also restores lost hit points to damaged characters. This chore of memorizing spells each night significantly added to the amount of game management required by the player. As characters defeat enemies, they gain experience points (XP). After gaining enough XP, the characters "train up a level" to become more powerful. This training is purchased in special areas within the city walls. In addition to training, mages can learn new spells by transcribing them from scrolls found in the unsettled areas. Defeated enemies in these areas also contain items such as weapons and armor, which characters can sell to city stores. Copy protection The MS-DOS, Macintosh and Apple II versions of Pool of Radiance include a 2-ply code wheel for translating elvish and dwarven runes to English. Some dwarven runes have multiple different translations. After the title screen, a copy protection screen is displayed consisting of two runes and a line of varying appearance (a dotted line, a dashed line, or a line with alternating dots and dashes) which correspond to markings on the code wheel. The player is prompted to enter a five or six character code which corresponds to a five or six character word. In the case of a five character word, there is a number at the beginning of the word which is not entered. 
Under the lines on the wheel are slots which reveal English letters, the coded English word being determined by lining up the runes, matching the correct line appearance, and then entering the word revealed on the code wheel. If the player enters an incorrect code three times, the game closes itself. In the MS-DOS version of Pool of Radiance, the code wheel is also used for some in-game puzzles. For example, in Sokol Keep the player discovers some parchment with elvish runes on it that require use of the code wheel to decipher; this is optional however, but may be used to avoid some combat with undead if the decoded words are said to them. In the NES version, the elvish words are given to the player deciphered without the use of the code wheel, as the NES release did not include a code wheel. Plot Setting Pool of Radiance takes place in the Forgotten Realms fantasy world, in and about the city of Phlan. This is located on the northern shore of the Moonsea along the Barren River, between Zhentil Keep and Melvaunt. The party begins in the civilized section of "New Phlan" that is governed by a council. This portion of the city hosts businesses, including shopkeepers who sell holy items for each temple's worshipers, a jewelry shop, and retailers who provide arms and armor. A party can also contract with the clerk of the city council for various commissions; proclamations fastened to the halls within City Hall offer bits of information to aid the party. These coded clues can be deciphered by using the Adventurer's Journal, included with the game. There are three temples within Phlan, each dedicated to different gods. Each temple can heal those who are wounded, poisoned, or afflicted, and can fully restore deceased comrades for a high price. The party can also visit the hiring hall and hire an experienced NPC adventurer to accompany the party. Encounters with NPCs in shops and taverns offer valuable information. Listening to gossip in taverns can be helpful to characters, although some tavern tales are false and lead characters into great danger. Plot summary The ancient trade city of Phlan has fallen into impoverished ruin. Now only a small portion of the city remains inhabited by humans, who are surrounded by evil creatures. To rebuild the city and clean up the Barren River, the city council of New Phlan has decided to recruit adventurers to drive the monsters from the neighboring ruins. Using bards and publications, they spread tales of the riches waiting to be recovered in Phlan, which draws the player's party to these shores by ship. At the start of the game, the adventurers' ship lands in New Phlan, and they receive a brief but informative tour of the civilized area. They learn that the city is plagued with a history of invasions and wars and has been overtaken by a huge band of humanoids and other creatures. Characters hear rumors that a single controlling element is in charge of these forces. The characters begin a block-by-block quest to rid the ruins of monsters and evil spirits. Beyond the ruins of old Phlan, the party enters the slum area—one of two quests immediately available to new parties. This quest requires the clearing of the slum block and allows a new party to quickly gain experience. The second quest is to clear out Sokol Keep, located on Thorn Island. This fortified area is inhabited by the undead, which can only be defeated with silver weapons and magic. The characters' adventure is later expanded to encompass the outlying areas of the Moonsea region. 
Eventually, the player learns that an evil spirit named Tyranthraxus, who has possessed an ancient dragon, is at the root of Phlan's problems. The characters fight Tyranthraxus the Flamed One in a climactic final battle. History Development Pool of Radiance was the first official game based on the Advanced Dungeons & Dragons rules. The scenario was created by TSR designers Jim Ward, David Cook, Steve Winter, and Mike Breault, and coded by programmers from Strategic Simulations, Inc's Special Projects team. The section of the Forgotten Realms world in which Pool of Radiance takes place was intended to be developed only by SSI. The game was created on Apple II and Commodore 64 computers, taking one year with a team of thirty-five people. This game was the first to use the game engine later used in other SSI D&D games known as the "Gold Box" series. The SSI team developing the game was led by Chuck Kroegel. Kroegel stated that the main challenge with the development was interpreting the AD&D rules to an exact format. Developers also worked to balance the graphics with gameplay to provide a faithful AD&D feel, given the restrictions of a home computer. In addition to the core AD&D manuals, the books Unearthed Arcana and Monster Manual II were also used during development. The images of monsters were adapted directly from the Monster Manual book. The game was originally programmed by Keith Brors and Brad Myers, and it was developed by George MacDonald. The game's graphic arts were by Tom Wahl, Fred Butts, Darla Marasco, and Susan Halbleib. Pool of Radiance was released in June 1988; it was initially available on the Commodore 64, Apple II series and IBM PC compatible computers. A version for the Atari ST was also announced. The Macintosh version was released in 1989. The Macintosh version featured a slightly different interface and was intended to work on black-and-white Macs like the Mac Plus and the Mac Classic. The screen was tiled into separate windows including the game screen, text console, and compass. Graphics were monochrome and the display window was relatively small compared to other versions. The Macintosh version featured sound, but no music. The game's Amiga version was released two years later. The PC 9800 version in Japan was fully translated (like the Japanese Famicom version) and featured full-color graphics. The game was ported to the Nintendo Entertainment System under the title Advanced Dungeons & Dragons: Pool of Radiance, released in April 1992. The NES version was the only version of the game to feature a complete soundtrack, which was composed by Seiji Toda, as he was signed to the publisher, Pony Canyon's record label at the time. The same soundtrack can be found on the PC-9801 version. The Amiga version also features some extra music, while most other ports contain only one song that plays at the title screen. The original Pool of Radiance game shipped with a 28-page introductory booklet, which describes secrets relating to the game and the concepts behind it. The booklet guides players through the character creation process, explaining how to create a party. The game also included the 38-page Adventurer's Journal, which provides the game's background. The booklet features depictions of fliers, maps, and information that characters see in the game. Sequels and related works Pool of Radiance was the first in a four-part series of computer D&D adventures set in the Forgotten Realms campaign setting. 
The others were released by SSI one year apart: Curse of the Azure Bonds (1989), Secret of the Silver Blades (1990), and Pools of Darkness (1991). The 1989 game Hillsfar was also created by SSI but was not a sequel to Pool of Radiance. Hillsfar is described instead, by the reviewers of Dragon, as "a value-added adventure for those who would like to take a side trip while awaiting the sequel". A player can import characters from Pool of Radiance into Hillsfar, although the characters are reduced to their basic levels and do not retain weapons or magical items. Original Hillsfar characters cannot be exported to Pool of Radiance, but they can be exported to Curse of the Azure Bonds. A review for Curse of the Azure Bonds in Computer Gaming World noted that "you can transfer your characters from Pool of Radiance and it's a good idea to do so. It will give you a headstart in the game". GameSpot declared that Pool of Radiance, with its detailed art, wide variety of quests and treasure, and tactical combat system, and despite the availability of only four character classes and the low character level cap, "ultimately succeeded in its goal of bringing a standardized form of AD&D to the home computer, and laid the foundation for other future gold box AD&D role-playing games". Scott Battaglia of GameSpy said Pool of Radiance is "what many gamers consider to be the epitome of Advanced Dungeons & Dragons RPGs. These games were so great that people today are using MoSlo in droves to slow down their Pentium III-1000 MHz enough to play these gems". In March 2008, Dvice.com listed Pool of Radiance among its 13 best electronic versions of Dungeons & Dragons. The contributor felt that "the Pool of Radiance series set the stage for Dungeons & Dragons to make a major splash in the video game world". The 1988 Dungeons & Dragons role-playing game module Ruins of Adventure was produced using the same adventure scenario as Pool of Radiance, using the same plot, background, setting, and many of the same characters as the computer game. The module thus contains useful clues to the successful completion of the computer missions. Ruins of Adventure contains four linked miniscenarios, which form the core of Pool of Radiance. According to the editors of Dragon magazine, Pool of Radiance was based on Ruins of Adventure, and not vice versa. Novelization In November 1989 a novelization of Pool of Radiance the video game, also called Pool of Radiance, was written by James Ward and Jane Cooper Hong, published by TSR. The novel is set in the Forgotten Realms setting based on the Dungeons & Dragons fantasy role-playing game. Dragon described the novel's plot: "Five companions find themselves in the unenviable position of defending the soon-to-be ghost town against a rival possessing incredible power". This book was the first in a trilogy, followed by Pools of Darkness and Pool of Twilight. Re-release GOG.com released Pool of Radiance and many Gold Box series games digitally on August 20, 2015, as a part of Forgotten Realms: The Archives - Collection Two. Reception SSI sold 264,536 copies of Pool of Radiance for computers in North America, three times that of Heroes of the Lance, an AD&D-licensed action game SSI also released that year. It became by far the company's most successful game up to that time; even its hint book outsold any earlier SSI game. It spawned a series of games, which combined to sell above 800,000 copies worldwide by 1996. 
In Computer Gaming Worlds preview of Pool of Radiance in July 1988, the writer noted a sense of deja vu. He described the similarity of the game's screen to earlier computer RPGs. For example, the three-dimensional maze view in the upper-left window was similar to Might & Magic or Bard's Tale, both released in the mid-1980s. The window with a listing of characters was featured in 1988's Wasteland; and the use of an active character to represent the party was part of Ultima V. The reviewer also noted that the design approach for game play was closer to SSI's own Wizard's Crown than to the other games in the genre. Pool of Radiance received positive reviews. G.M. called the game's graphics "good" and praised its role-playing and combat aspects. They felt that "roleplayers will find Pools is an essential purchase, but people who are solely computer games oriented may hesitate before buying it [...] it will be their loss". Tony Dillon from Commodore User gave it a score of 9 out of 10; the only complaint was a slightly slow disk access, but the reviewer was impressed with the game's features, awarding it a Commodore User superstar and proclaiming it "the best RPG ever to grace the C64, or indeed any other computer". Issue #84 of the British magazine Computer + Video Games rated the game highly, saying that "Pools is a game which no role player or adventurer should be without and people new to role playing should seriously consider buying as an introductory guide". Another UK publication, The Games Machine, gave the game an 89% rating. The reviewer noted that the third-person arcade style combat view is a great improvement for SSI, as they had traditionally incorporated simplistic graphics in their role-playing games. The reviewer was critical that Pool of Radiance was not original in its presentation and that the colors were a little drab, but concluded that the game is "classic Dungeons & Dragons which SSI have recreated excellently". A review from Zzap was less positive, giving the game a score of 80%. The reviewer felt that the game required too much "hacking, slicing and chopping" without enough emphasis on puzzle solving. The game was awarded 49% for its puzzle factor. Three reviewers for Computer Gaming World had conflicting reactions. Ken St. Andre—designer of the Tunnels & Trolls RPG—approved of the game despite his dislike of the D&D system, praising the art, the mixture of combat and puzzles, and surprises. He concluded as "take it from a 'rival' designer, Pool of Radiance has my recommendation for every computer fantasy role-playing gamer". Tracie Forman Hicks, however, opined that over-faithful use of the D&D system left it behind others like Ultima and Wizardry. She also disliked the game's puzzles and lengthy combat sequences. Scorpia also disliked the amount of fighting in a game she otherwise described as a "well-designed slicer/dicer", concluding that "patience (possibly of Job) [is] required to get through this one". Shay Addams from Compute! stated that experienced role-playing gamers "won't find anything new here", but recommended it to those who "love dungeons, dragons, and drama". In their March 1989 "The Role of Computers" column in Dragon magazine #143, Hartley, Patricia, and Kirk Lesser (often called "The Lessers") gave Pool of Radiance a three-page review. The reviewers praised Pool of Radiance as "the first offering that truly follows AD&D game rules", calling it a "great fantasy role-playing game" that "falls into the must-buy category for avid AD&D game players". 
The reviewers advised readers to "rush out to your local dealer and buy Pool Of Radiance". They considered it SSI's flagship product, speculating that it would "undoubtedly bring thousands of computer enthusiasts into the adventure-filled worlds of TSR". The Dragon reviewers criticized the "notoriously slow" technology of the C64/128 system and added that the C64/128 version would have become nearly unplayable without the software-based fastloader utility, which Strategic Simulations integrated into the game. Conversely, the reviewers felt that the MS-DOS version was extremely fast, so much so that they had to slow the game operation down in order to read all the on-screen messages. They found that the MS-DOS version played at twice the speed of the C64/128 version when using the Enhanced Graphics Adapter (EGA) graphics mode. Alex Simmons, Doug Johns, and Andy Mitchell reviewed the Amiga version of Pool of Radiance for Amiga Action magazine in 1990, giving it a 79% overall rating. Mitchell preferred the game Champions of Krynn, which had been released by the time the Amiga version of Pool of Radiance became available; he felt that Pool of Radiance was "more of the same" when compared to Champions, but was less playable and offered more limited actions to players. Simmons felt that Pool of Radiance looked primitive and seemed less polished when compared with Champions of Krynn; although he felt Pool was not up to the standard of Champions, he said it was still "a fine little game". Johns, on the other hand, felt that Pool of Radiance was well worth the wait, considering it very user-friendly despite being less polished than Champions of Krynn. Pool of Radiance was well received by the gaming press and won the Origins Award for Best Fantasy or Science Fiction Computer Game of 1988. For the second annual "Beastie Awards" in 1989, Dragon's readers voted Pool of Radiance the most popular fantasy role-playing game of the year, with Ultima V as the runner-up. The Apple II version was the most popular format, the PC DOS/MS-DOS version came in a close second, and the Commodore 64/128 version got the fewest votes. The primary factors cited by voters were the game's faithfulness to the AD&D system as well as its graphics and easy-to-use interface for activating commands. Pool of Radiance was also selected for the RPGA-sponsored Gamers' Choice Awards for the Best Computer Game of 1989. In 1990 the game received the fifth-highest number of votes in a survey of Computer Gaming World readers' "All-Time Favorites". Allen Rausch, writing for GameSpy's 2004 retrospective "A History of D&D Video Games", concluded that although the game "certainly had its flaws (horrendous load times, interface weirdness, and a low-level cap among others), it was a huge, expansive adventure that laid a good foundation for every Gold Box game that followed". In 1994, PC Gamer US named Pool of Radiance the 43rd best computer game ever. IGN ranked Pool of Radiance No. 3 on their list of "The Top 11 Dungeons & Dragons Games of All Time" in 2014. Ian Williams of Paste rated the game #5 on his list of "The 10 Greatest Dungeons and Dragons Videogames" in 2015.
See also
Pool of Twilight
Pool of Radiance: Ruins of Myth Drannor
References
External links
Dragonbait's Pool of Radiance page, screenshots, info and pics of the original Pool of Radiance (1988)
Pool of Radiance at Game Banshee - Contains a walkthrough and many in-depth specifics about the game
Images of Pool of Radiance package, manual and screen for Commodore 64 version
Pool of Radiance Interactive Code Wheel at oldgames.sk
Review in Compute!'s Gazette
Review in Info
1988 video games Amiga games Apple II games Classic Mac OS games Commodore 64 games DOS games Fantasy video games Forgotten Realms video games Gold Box games NEC PC-9801 games Nintendo Entertainment System games Origins Award winners Role-playing video games Sharp X1 games Strategic Simulations games Tactical role-playing video games Ubisoft games U.S. Gold games Video games developed in the United States Video games featuring protagonists of selectable gender Video games using code wheel copy protection Video games with oblique graphics
272545
https://en.wikipedia.org/wiki/IBAC
IBAC
Ibac is a comics character. IBAC may refer to: IBAC (cycling team) in-band adjacent-channel Identity-based access control Independent Broad-based Anti-corruption Commission (Victoria) International Business Audit Consulting (IBAC) (Uzbekistan) International Business Aviation Council International Balloon Arts Conference (IBAC) See also LBAC, Lattice-based access control RBAC, Role-based access control
50141594
https://en.wikipedia.org/wiki/Ed%20Kahan
Ed Kahan
Eduardo T. Kahan (November 21, 1952 – March 13, 2010) was an executive at IBM for 26 years. He was named a distinguished engineer in 1997 and awarded the designation of IBM Fellow in 2005. Kahan was the Chief Architect and CTO of the IBM Software Group, Enterprise Integration, and was a certified consultant and an active member of the IBM Academy of Technology. His responsibilities included strategy development, design, and development of advanced technologies for Web Services, service oriented architectures, and enterprise integration products, tools, and solutions for IBM clients. Kahan developed the IBM Architecture Description Standard (ADS), used by the IBM WW technical community, and authored the IBM Software Development Method.
Personal life
Kahan was born in Ponta Grossa, Brazil, on November 21, 1952, to Israel and Hilda Cohen. When he was nine, his family moved to Israel. Later they joined a kibbutz (an agricultural, business and social co-op) called Ein Harod, where he spent most of his teenage years and remained until completing his Israeli army service at 21. Kahan and his family wanted him to have an education more advanced than what the kibbutz area high school could provide, so he attended the ORT Singalovski school in Tel Aviv, two hours away, where he had to room and board. Kahan was a diligent student. Several English-proficient students were asked to interview with a visiting Canadian women's group from Toronto ORT to become the spokesperson for their organization and travel to Canada to speak to their ORT groups. They selected Kahan and welcomed him into the homes of the members in Toronto. The women offered to sponsor him for college in Toronto after his high school and army service. Kahan accepted and earned a degree in Mechanical Engineering from Ryerson Polytechnic. In 1981, he moved to Houston, Texas. Kahan married Judy Schiff Kahan and had a son, Daniel, and a daughter, Michelle. Kahan died at age 57 on March 13, 2010, at his residence in Orlando following a short, intense battle with cancer.
Career
Kahan started his career with Bruel and Kjaer in Denmark, doing acoustic, vibration, and signal analysis research in Europe and the US. His special research interests involved the effect of vibration on humans in industrial settings (hand, arm, and whole-body vibration) and the link between business models and technology. On joining IBM in 1984 in Houston, Kahan worked on engineering design systems, manufacturing systems, and many complex business systems for IBM clients worldwide. Kahan held systems engineering, development, and teaching positions in IBM divisions. As one of the lead company architects, Kahan developed the IBM Architecture Description Standard (ADS), used by the IBM WW technical community, and authored the IBM Software Development Method. At the peak of his career at IBM, Kahan was the Chief Architect and CTO of the IBM Software Group, Enterprise Integration. Kahan was a certified consultant and an active member of the IBM Academy of Technology. Kahan spent two years (2007–2009) on assignment at the IBM Israel Software Lab (ILSL) before returning to the US. During those years in Israel, Kahan became a mentor to many in the lab, influenced ILSL technical directions, and helped drive new customer engagements. Kahan worked as an executive at IBM for 26 years.
External links
Ed Kahan Business Optimization Leadership Seminar
Toward SOA with ESB
Patents
1. 20100058162 AUTOMATIC CUSTOMIZATION OF DIAGRAM ELEMENTS 03-04-2010
2. 20100058161 AUTOMATIC MANAGEMENT OF DIAGRAM ELEMENTS 03-04-2010
3. 20100053215 CREATION AND APPLICATION OF PATTERNS TO DIAGRAM ELEMENTS 03-04-2010
4. 20090210384 VISUALIZATION OF CODE UNITS ACROSS DISPARATE SYSTEMS 08-20-2009
5. 20090182752 AUTOMATICALLY PERSISTING DATA FROM A MODEL TO A DATABASE 07-16-2009
6. 20090119225 METHOD AND SYSTEM FOR PROVIDING A UNIFIED MODEL FOR CANDIDATE SERVICE ASSETS 05-07-2009
7. 20090113544 ACCESSING PASSWORD PROTECTED DEVICES 04-30-2009
8. 20090113382 AUTOMATED DEPLOYMENT IMPLEMENTATION WITH A DEPLOYMENT TOPOLOGY MODEL 04-30-2009
9. 20090113381 AGGREGATION OF CONSTRAINTS ACROSS PROFILES 04-30-2009
10. 20090112909 AUTOMATED GENERATION OF MODELING LANGUAGE PROFILES 04-30-2009
11. 20090112567 PRELIMINARY DATA REPRESENTATIONS OF A DEPLOYMENT ACTIVITY MODEL 04-30-2009
12. 20090112566 AUTOMATED GENERATION OF EXECUTABLE DEPLOYMENT CODE FROM A DEPLOYMENT ACTIVITY MODEL 04-30-2009
13. 20090070777 Method for Generating and Using Constraints Associated with Software Related Products 03-12-2009
IBM employees IBM Fellows 1952 births 2010 deaths
64357281
https://en.wikipedia.org/wiki/Translate%20%28Apple%29
Translate (Apple)
Translate is a translation app developed by Apple for its iOS and iPadOS devices. Introduced on June 22, 2020, it functions as a service for translating text or speech between several languages, and was officially released on September 16, 2020, along with iOS 14. All translations are processed on the device's neural engine, and as such the app can be used offline. On June 7, 2021, Apple announced that the app would be available on iPad models running iPadOS 15, as well as Macs running macOS Monterey. The app was officially released for iPad models on September 20, 2021, along with iPadOS 15. It was also released for Mac models on October 25, 2021, along with macOS Monterey.
Languages
Since the 2020 launch, translation has been supported between 11 languages: English (in its UK and US dialects), Arabic, Mandarin Chinese, French, German, Spanish (European dialect), Italian, Japanese, Korean, Portuguese (Brazilian dialect), and Russian. All languages support dictation and can be downloaded for offline use.
References
IOS-based software made by Apple Inc. iOS software iPadOS software MacOS software Machine translation software Natural language processing software Products introduced in 2020 2020 software
243937
https://en.wikipedia.org/wiki/RWTH%20Aachen%20University
RWTH Aachen University
RWTH Aachen University, or Rheinisch-Westfälische Technische Hochschule Aachen, is an elite German public research university located in Aachen, North Rhine-Westphalia, Germany. With more than 47,000 students enrolled in 144 study programs, it is the largest technical university in Germany. In 2019, RWTH Aachen emerged successfully from the final round of the third federal and state excellence strategy, and will be funded as a university of excellence for the next seven years. RWTH Aachen was already part of the federal and state excellence initiative in 2007 and 2012. Since 2007, RWTH Aachen has been continuously funded by the DFG and the German Council of Science and Humanities as one of eleven (previously nine) German Universities of Excellence for its future concept RWTH 2020: Meeting Global Challenges and the follow-up concept The Integrated Interdisciplinary University of Science and Technology: Knowledge, Impact, Networks, also receiving grants for associated graduate schools and clusters of excellence. The university regularly accounts for the highest amount of third-party funds among all German universities, placing first per faculty member and second overall in the most recent survey, from 2018. RWTH Aachen is a founding member of IDEA League, a strategic alliance of five leading universities of technology in Europe, as well as its German counterpart TU9. It is also a member of DFG and the Top Industrial Managers for Europe network.
History
On 25 January 1858, Prince Frederick William of Prussia (later German emperor) was given a donation of 5,000 talers from the Aachener und Münchener Feuer-Versicherungs-Gesellschaft, the precursor of the AachenMünchener insurance company, for charity. In March, the prince chose to use the donation to found the first Prussian institute of technology somewhere in the Rhine province. The seat of the institution remained undecided for years; while the prince initially favored Koblenz, the cities of Aachen, Bonn, Cologne and Düsseldorf also applied, with Aachen and Cologne being the main competitors. Aachen finally won with a financing concept backed by the insurance company and by local banks. Groundbreaking for the new Polytechnikum took place on 15 May 1865, and lectures started during the Franco-Prussian War on 10 October 1870 with 223 students and 32 teachers. The new institution had as its primary purpose the education of engineers, especially for the mining industry in the Ruhr area; there were schools of chemistry, electrical and mechanical engineering, as well as an introductory general school that taught mathematics, natural sciences and some social sciences. The unclear position of the new Prussian polytechnika (which officially were not universities) affected the first years. Polytechnics lacked prestige in society and the number of students decreased. This began to change in 1880 when the early RWTH, amongst others, was reorganized as a Royal Technical University, gained a seat in the Prussian House of Lords and finally won the right to bestow Dr.-Ing. degrees (1899) and Dipl.-Ing. titles (introduced in 1902). In the same year, over 800 male students enrolled. In 1909 the first women were admitted, and the artist August von Brandis succeeded Alexander Frenz at the Faculty of Architecture as a "professor of figure and landscape painting"; Brandis became dean in 1929. World War I, however, proved a serious setback for the university.
Many students voluntarily joined up and died in the war, and parts of the university were briefly occupied or confiscated. While the (by then no longer royal) TH Aachen (Technische Hochschule Aachen) flourished in the 1920s with the introduction of more independent faculties, of several new institutes and of the general students' committee, the first signs of nationalist radicalization also became visible within the university. The Third Reich's Gleichschaltung of the TH in 1933 met with relatively low resistance from both students and faculty. Beginning in September 1933, Jewish and (alleged) Communist professors (and from 1937 on also students) were systematically persecuted and excluded from the university. Vacant chairs were increasingly given to NSDAP party members or sympathizers. The freedom of research and teaching became severely limited; institutes important for the regime's plans were systematically established, and existing chairs were promoted. Briefly closed in 1939, the TH continued courses in 1940, although with a low number of students. On 21 October 1944, when Aachen capitulated, more than 70% of all buildings of the university were destroyed or heavily damaged.
After World War II ended in 1945, the university recovered and expanded quickly. In the 1950s, many professors who had been removed because of their alleged affiliation with the Nazi party were allowed to return, and a multitude of new institutes were founded. By the late 1960s, the TH had 10,000 students, making it the foremost of all German technical universities. With the foundation of philosophical and medical faculties in 1965 and 1966, respectively, the university became more "universal". The newly founded faculties in particular began attracting new students, and the number of students almost doubled twice, from 1970 (10,000) to 1980 (more than 25,000) and from 1980 to 1990 (more than 37,000). Now, the average number of students is around 42,000, with about one third of all students being women. In relative terms, the most popular study programs are engineering (57%), natural science (23%), economics and humanities (13%) and medicine (7%).
Recent developments
In December 2006, RWTH Aachen and the Sultanate of Oman signed an agreement to establish a private German University of Technology in Muscat. Professors from Aachen aided in developing the curricula for the five current study programs, and scientific staff took over some of the first courses. In 2007, RWTH Aachen was chosen as one of nine German Universities of Excellence for its future concept RWTH 2020: Meeting Global Challenges, earning it the designation of "University of Excellence". However, although the list of universities honored for their future concepts mostly consists of large and already respected institutions, the Federal Ministry of Education and Research claimed that the initiative aimed at promoting universities with a dedicated future concept so they could continue researching on an international level. Having won funds in all three lines of funding, RWTH Aachen University received additional funding totaling €180 million for 2007–2011.
The other two lines of funding were graduate schools, where the Aachen Institute for Advanced Study in Computational Engineering Science received funding, and so-called "clusters of excellence", where RWTH Aachen managed to win funding for the three clusters: Ultra High-Speed Mobile Information and Communication (UMIC), Integrative Production Technology for High-wage Countries and Tailor-Made Fuels from Biomass (TMFB). RWTH was selected to receive funding from the German federal and state governments for the third Universities of Excellence funding line starting 2019. RWTH's proposal was called "The Integrated Interdisciplinary University of Science and Technology – Knowledge. Impact. Networks." and has secured funding for a seven-year period.
2019 Clusters of Excellence
The Fuel Science Center (FSC) – Adaptive Conversion Systems for Renewable Energy and Carbon Sources
Internet of Production
ML4Q – Matter and Light for Quantum Computing
RWTH was already awarded funding in the first and second Universities of Excellence funding lines, in 2007 and 2012 respectively.
Campus
RWTH Aachen University's campus is located in the north-western part of the city of Aachen. There are two core areas – midtown and the Melaten district. The Main Building, the SuperC student center and the Kármán Hall are 500 m away from the city centre with the Aachen Cathedral; the Audimax (the biggest lecture hall) and the main refectory are 200 m farther. Other points of interest include the university's botanical garden (Botanischer Garten Aachen). A new building, the so-called Central Auditorium for Research and Learning (CARL), was opened in 2017. It offers space for 4,000 students and replaces the Audimax as the largest lecture hall building. The name of the new central auditorium, which contains several lecture halls, is a reference to Charlemagne, who ruled his empire from Aachen in the Middle Ages. The RWTH has external facilities in Jülich and Essen and owns, together with the University of Stuttgart, a house in Kleinwalsertal in the Austrian Alps. The university is currently expanding in the city center and the Melaten district. The SuperC, the new central service building for students, was opened in 2008. The groundbreaking for the new Campus-Melaten was in 2009.
Internationality
Double degrees and student mobility are promoted with other technology universities through the TIME (Top Industrial Managers for Europe) network. Furthermore, the RWTH is a member of the IDEA League, a strategic partnership of leading European research universities that includes TU Delft, Chalmers University of Technology and ETH Zürich, and was the first German university to start an Undergraduate Research Opportunities Program, in 2008. Compared to other German universities, RWTH Aachen has received the highest amount of third-party funding in recent years. More than 7,000 international students are currently enrolled in undergraduate, graduate or PhD programmes. Compared to other German universities, the proportion of international students at RWTH Aachen is higher than average. The proximity of Aachen to the Netherlands, Belgium, and Luxembourg, combined with the resulting exposure to a variety of cultural heritages, has placed RWTH Aachen University in a unique position with regards to the reflection and promotion of international aspects and intensive interaction with other universities.
Rankings
RWTH Aachen took 3rd place in 2018 based on the number of top managers in the German economy, as measured by the number of DAX management board members. In 2019, RWTH Aachen took 2nd place. The top 3 universities in 2019 with the most top managers were LMU Munich, RWTH Aachen and the Technische Universität Darmstadt. According to the Stepstone Salary Report for Graduates 2019/2020, graduates of RWTH Aachen are amongst the highest earners in science, technology, engineering, and mathematics. From 2001 to 2013, national rankings regularly identified RWTH Aachen as one of the best universities in Germany in the fields of engineering (especially mechanical and electrical engineering), as well as amongst the top three in computer science, physics, chemistry, and medicine. The 2019 QS World University Rankings ranked the university 19th in the world in engineering – mechanical, aeronautical & manufacturing (by subject) and 44th in engineering and technology. In 2009, two prominent German newspapers, "Handelsblatt" and "Wirtschaftswoche", ranked RWTH Aachen first in Germany in the fields of mechanical engineering, electrical engineering, industrial engineering, and computer science. In 2012, Handelsblatt ranked the RWTH School of Business and Economics amongst the top 10 within Germany. In the 2015 ranking published by DAAD together with the Centre for Higher Education Development and Die Zeit, RWTH Aachen also stands at the top among German universities in the aforementioned fields of engineering and computer science. Internationally, in the 2020 QS Faculty Rankings RWTH Aachen is placed 20 (Mechanical Engineering), 34 (Electrical Engineering), =40 (Physics & Astronomy), 36 (Chemical Engineering), 21 (Materials Science), =42 (Chemistry) and =26 (Mineral and Mining Engineering). The 2019 Times Higher Education subject rankings ranked RWTH 27th in Engineering and Technology, 72nd in Physical Sciences and 47th in Computer Science.
Organisation
Almost all basic lectures are held in German, but an increasing number of master's programs require proficiency in English for admission.
Fees
RWTH Aachen is run by the federal state of North Rhine-Westphalia. From the summer semester of 2004, the state of North Rhine-Westphalia allowed universities to charge a maximum of €500 per semester in tuition fees. In the past, tuition fees applied solely to long-term students and second degrees. From the summer semester of 2007, all students enrolled at RWTH Aachen had to pay these €500 unless they were exempt for one of several reasons put forth by the state of North Rhine-Westphalia. On 24 February 2011, tuition fees were abolished by the Landtag of North Rhine-Westphalia (Legislation for the Improvement of Equal Opportunities to University Admission), with effect from the winter term 2011/12. As compensation, universities will receive €249 million of national funding for measures that improve the quality of teaching (e.g., through additional teachers and tutors). A per-semester contribution is still charged.
Faculties
The RWTH is divided into nine (previously ten) faculties. Faculty nine was pedagogical sciences, but it was abandoned in 1989; teacher education, however, continued.
Fraunhofer-Institutes
The university cooperates with the Fraunhofer-Institutes situated in the Melaten district of Aachen. The institutes offer workshops, courses and lectures for the students of RWTH Aachen.
Fraunhofer-Institute for Applied Information Technology (FIT), Sankt Augustin and Aachen
Fraunhofer-Institute for Laser Technology ILT
Fraunhofer-Institute for Production Technology IPT
Fraunhofer-Institute for Molecular Biology and Applied Ecology
Jülich-Aachen Research Alliance (JARA)
The Jülich-Aachen Research Alliance (JARA) was founded by RWTH Aachen and Forschungszentrum Jülich in 2007. Five sections are coordinated by the research facilities:
JARA-Brain (diagnosis and therapy of neurological diseases)
JARA-Fit (fundamentals of future information technology)
JARA-HPC (high-performance computing)
JARA-Energy (energy research)
JARA-FAME (forces and matter experiments)
Graduate Schools
Aachen Institute for Advanced Study in Computational Engineering Science
The Aachen Institute for Advanced Study in Computational Engineering Science (AICES) is a graduate school established in 2006 under the German Universities Excellence Initiative at RWTH Aachen University. Research at AICES is broadly in the area of computational engineering, solving inverse problems that find applications in mathematics, computer science and engineering, mechanical engineering and the natural sciences. AICES is a collaborative effort of 47 principal investigators from 8 academic divisions of RWTH Aachen University, as well as the Max Planck Institute for Iron Research and Forschungszentrum Jülich.
Associations
RWTH Aachen – North American Alumni Association: Prof. Dr. Burkhard Rauhut, former president of the RWTH, and Prof. Dr. Laszlo Baksay, president of the newly founded "Association of Alumni, Friends and Supporters of RWTH Aachen University in North-America", signed the founding statement for a new branch of the RWTH Alumni Community in Melbourne (Florida) in May 2006.
AStA (Students' Union)
AISA (Assoc. of Indian Students in Aachen)
GATS (Association of Thai Students in Aachen)
Pakistan Student Association
MexAS – Mexikanische Aachener Studierende (Mexican Students' Association)
Flugwissenschaftliche Vereinigung Aachen (abbreviation: FVA, English: Flight Research Association Aachen). The FVA academic flying group is closely affiliated with RWTH Aachen and overseen by IDAFLIEG (Interessengemeinschaft deutscher akademischer Fliegergruppen e.V.).
Team Sonnenwagen Aachen – Student team founded in late 2015 with the goal of developing and building solar cars for the World Solar Challenge in Australia. In 2017, the team participated in the challenge for the first time.
Ecurie Aix – Student team founded in 1999 to compete in Formula Student
Notable faculty and alumni
RWTH Aachen University has educated several notable individuals, including some Nobel laureates in physics and chemistry. The scientists and alumni of RWTH Aachen have played major roles in chemistry, medicine, and electrical and mechanical engineering. For example, Nobel laureate Peter Debye received a degree in electrical engineering from RWTH Aachen and is known for the Debye model and Debye relaxation. In another example, Helmut Zahn and his team at the Institute for Textile Chemistry were the first to synthesise insulin, in 1963, and were nominated for the Nobel Prize. Another example is B.J. Habibie, the third President of Indonesia, who contributed to many advances in aviation. Franz Josef Och was the chief architect of Google Translate. Werner Tietz is one of the leading engineers of the Volkswagen Group and Vice President of SEAT.
Trivia Together with the lord mayor and the cathedral provost of Aachen, the rector of RWTH Aachen University is one of three automatic members of the board of directors of the International Charlemagne Prize of Aachen. The prize is awarded annually for exceptional contributions towards European unity and ranks amongst the most prestigious European prizes. Notes References External links RWTH Aachen Education in Aachen Technical universities and colleges in Germany Educational institutions established in 1870 Universities and colleges in North Rhine-Westphalia Engineering universities and colleges in Germany 1870 establishments in Prussia
10261579
https://en.wikipedia.org/wiki/Thomas%20Strothotte
Thomas Strothotte
Thomas Strothotte is a German-Canadian computer scientist and university administrator living in Germany. He is the President of the Kühne Logistics University in Hamburg. Strothotte was born in 1959 in Regina, Canada, and raised in Vancouver. He took his first degrees at Simon Fraser University: a B.Sc. in physics (1980) and an M.Sc. in computer science (1981). His further graduate work was done in computer science at the University of Stuttgart, McGill University in Montréal, Québec, and the University of Waterloo, Ontario, leading to a Ph.D. in 1984. He also holds an MBA from Columbia University (New York) and an MBA from the London Business School (UK). After a year as a postdoctoral fellow at INRIA Rocquencourt near Paris, he went to the University of Stuttgart as an assistant professor in 1985, earning a D.Sc. (habil.) degree in computer science in 1989. From 1989 to 1990 he was a visiting scientist at the IBM Scientific Center in Heidelberg, working in the Software Ergonomics Department. From there he went to the Free University of Berlin in 1990 as a professor of computer science. He moved on to the University of Magdeburg in Germany in 1993, where he was the head of the Computer Graphics and Interactive Systems Laboratory. He was the dean of his faculty from 1994 to 1996 and again from 2005 to 2006. From April 1996 until September 1998 he was a vice-president for academic planning and budget development of the university; from July until September 1998 he was also president pro tem of the university. He also initiated the new degree programme in Computational Visualistics and was one of the academics scientifically responsible for it. From March 2001 to December 2002 he served as the director of IT for the government of the State of Saxony-Anhalt. From 2006 until 2008 he was the rector of the University of Rostock, and from 2009 to 2013 he served as Rector of the University of Regensburg in the State of Bavaria (Germany). In August 2013 he was named president of the Kühne Logistics University, a private, state-recognized and independent university based in Hamburg, Germany. He is the founder of two private schools in Magdeburg. See also List of University of Waterloo people References 1959 births Living people University of Rostock faculty Columbia Business School alumni Alumni of London Business School Canadian expatriate academics Canadian expatriates in Germany Free University of Berlin faculty
3134635
https://en.wikipedia.org/wiki/Seasar
Seasar
Seasar2 is an open-source application framework similar to the Spring Framework (Java). Initially it was developed for the Java platform by Yasuo Higa, but the .NET and PHP platforms are currently supported as well. Seasar2 has a large base of Japanese users, and there has been a steady increase in non-Japanese users since English support was announced at the JavaOne 2005 Tokyo conference. Seasar2 is currently supported by the Seasar Foundation, a non-profit open source organization. History Seasar was initially made public in August 2003 at SourceForge.jp as an application server using Jetty (web server) and HSQLDB. The name was coined by the initial developer Yasuo Higa after the Shisa, an Okinawan mythical creature. In March 2004, Seasar was re-introduced as a lightweight dependency injection and AOP container and renamed Seasar2. Although development of the original Seasar came to a halt, its last release, seasarsetupV1Final With Nazuna, may still be downloaded from the Seasar2 site. In April 2005, Seasar2 obtained assistance from OSCJ.net (Open Source Collaboration Joint Network) and moved out of SourceForge.jp. Introduction Like other DI container frameworks, components are defined in external XML files. There is also strong support for databases and for unit testing with JUnit. The main difference from other frameworks is its support for the concept of "convention over configuration", which reduces the XML configuration prominent when using frameworks such as Spring. The aim is to reduce the number of configuration files, or eliminate them entirely, by making developers conform to programming and configuration conventions and letting the framework do the work. For example, if a property's type is an interface and there is an object that implements this interface, the dependency is configured by the container. If a test method's name ends with "Tx", a transaction is initiated before the unit test and rolled back after the test. (An illustrative, non-Seasar sketch of this convention-based wiring is given at the end of this article.) Modules Seasar2 projects that support other open source software are prefixed with S2. Like most open source software, Seasar2 software may be divided into three major categories: Seasar2 core Related software Sandbox software – software still under development Related software may be further subdivided as follows: Database related: S2DAO, S2Hibernate, S2Unit (JUnit) Presentation: S2JSF, S2Struts, S2Tapestry, Flash player Communication related: S2RMI, S2Axis Miscellaneous: Kijimuna Seasar2 Core The Seasar2 core is the central software common to all Seasar2-related software. The transaction control module (S2Tx), database connection pooling (S2DBCP), and JUnit testing support (S2Unit) are all bundled with this core. Cross-Platform Support Seasar is currently supported on Java/Java EE, PHP5, and .NET. Future On April 22, 2005, at Seasar Strategies Day 2005, project Kuina was announced as the next release of Seasar2. At the conference, it was announced that Kuina would support EJB 3.0 (JSR 220) as well as J2SE 5.0 annotations. From http://ml.seasar.org/archives/seasar-user-en/2010-March/000039.html: "The language of all of our documents and error messages is Japanese, Japanese ML is very active, and all committers are Japanese. [...] Unfortunately, we don't prepare english documents for the current version (2.4)." Events The Seasar Foundation periodically holds "Karasawagi" conferences around Japan to allow developers and users to talk with each other. Seasar has also been featured at the JavaOne conference. External links Seasar Framework Seasar .NET Seasar PHP5 Java enterprise platform
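The following is a minimal, self-contained Java sketch of the convention-over-configuration idea described above. It does not use the actual Seasar2 API (S2Container and related classes); all class names here are hypothetical, and the toy container only mimics the stated rule that an interface-typed setter property is wired to the single registered implementation without any XML configuration.

import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Hypothetical names; this is NOT the Seasar2 API, only an illustration of the convention.
interface Greeting { String say(); }

class GreetingImpl implements Greeting {
    public String say() { return "Hello from the implementation"; }
}

class GreetingClient {
    private Greeting greeting;                         // interface-typed property
    public void setGreeting(Greeting g) { this.greeting = g; }
    public String greet() { return greeting.say(); }
}

class TinyContainer {
    private final Map<Class<?>, Object> components = new HashMap<>();

    void register(Object component) {
        // Register the component under every interface it implements.
        for (Class<?> iface : component.getClass().getInterfaces()) {
            components.put(iface, component);
        }
        components.put(component.getClass(), component);
    }

    <T> T resolve(Class<T> type) throws Exception {
        T instance = type.getDeclaredConstructor().newInstance();
        // Convention: for every public setter whose parameter type matches a
        // registered interface, inject the matching component automatically.
        for (Method method : type.getMethods()) {
            if (method.getName().startsWith("set") && method.getParameterCount() == 1) {
                Object dependency = components.get(method.getParameterTypes()[0]);
                if (dependency != null) {
                    method.invoke(instance, dependency);
                }
            }
        }
        return instance;
    }
}

public class ConventionDemo {
    public static void main(String[] args) throws Exception {
        TinyContainer container = new TinyContainer();
        container.register(new GreetingImpl());
        GreetingClient client = container.resolve(GreetingClient.class);
        System.out.println(client.greet());   // prints the implementation's message
    }
}

In this sketch the wiring is derived purely from types and naming conventions, which is the behaviour the framework is described as providing in place of explicit XML entries.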
3909828
https://en.wikipedia.org/wiki/Chad%20Kreuter
Chad Kreuter
Chadden Michael Kreuter (; born August 26, 1964) is a former catcher in Major League Baseball and the former head coach of the USC Trojans baseball team. He was most recently the manager of the Syracuse Mets in the Triple-A East league of Minor League Baseball. Playing career Kreuter played for seven different ballclubs during his career: the Texas Rangers (1988–91, 2003), Detroit Tigers (1992–94), Seattle Mariners (1995), Chicago White Sox (1996–97, 1998), Anaheim Angels (1997–98), Kansas City Royals (1999) and Los Angeles Dodgers (2000–02). He made his major league debut on September 14, 1988, as the starting catcher wearing #7, and played his final game on April 27, 2003, as the starting catcher wearing #12. Kreuter's best season was 1993 with the Tigers, when he batted .286 with 15 home runs and slugged .484, while appearing in a career-high 119 games. Kreuter's career included the unusual occurrence of being traded from the White Sox to the Angels twice. The White Sox sent him along with Tony Phillips to the Angels on May 18, 1997, and after he signed back with the Sox as a free agent in the off-season, they again sent him to Anaheim on September 18, 1998. On May 16, 2000, Kreuter was involved in a brawl with fans at Wrigley Field in Chicago as he sat in the Dodgers bullpen along the right field foul line. During the ninth inning of the game, a Cubs fan smacked the back of Kreuter's head and took his cap, prompting Kreuter and several other Dodgers to enter the stands and fight with fans. Kreuter and several other Dodgers were suspended eight games apiece, and a total of 19 players received fines. The Dodgers later settled a lawsuit with a fan who alleged that Kreuter choked him. Coaching career Kreuter was named the coach of the USC Trojans on June 2, 2006, after former coach Mike Gillespie (who is also his father-in-law) retired. He was relieved as head coach on August 9, 2010, having posted a 111–117 record in four years. Kreuter was named as the manager for the St. Lucie Mets of the New York Mets organization for the 2018 season. After two seasons of managing in St. Lucie, he was named as the manager of the Syracuse Mets on February 8, 2020. On January 18, 2022, Kreuter was hired as bench coach for the Acereros de Monclova of the Mexican League for the 2022 season. References External links USC Trojans profile Brawl with the fans BaseballLibrary 1964 births Living people Baseball players from California Anaheim Angels players Burlington Rangers players Cardenales de Lara players American expatriate baseball players in Venezuela Chicago White Sox players Detroit Tigers players Kansas City Royals players Los Angeles Dodgers players Major League Baseball catchers Minor league baseball managers Oklahoma City 89ers players People from Greenbrae, California Pepperdine Waves baseball players Pepperdine University alumni Charlotte Rangers players Salem Redbirds players Seattle Mariners players Syracuse Mets managers Tacoma Rainiers players Texas Rangers players Tiburones de La Guaira players Tulsa Drillers players USC Trojans baseball coaches
796252
https://en.wikipedia.org/wiki/SimTower
SimTower
SimTower: The Vertical Empire (known as The Tower in Japan) is a construction and management simulation video game developed by OpenBook Co., Ltd. and published by Maxis for the Microsoft Windows and Macintosh System 7 operating systems in November 1994. In Japan, it was published by OpenBook that same year and was later released for the Sega Saturn and 3DO in 1996. The game allows players to build and manage a tower and decide what facilities to place in it, in order to ultimately build a five-star tower. Random events take place during play, such as terrorist acts that the player must respond to immediately. Critical reception towards the game was generally positive. Reviews praised the game's formula, including its open-ended nature and its ability to immerse the player in the game. Criticism targeted the game's lack of documentation, which some reviewers found made it harder to learn how to play the game. The in-game speed was also criticized for being too slow, which was a crucial issue in the game because time must pass for the player to earn income to purchase new facilities. Gameplay SimTower allows the player to build and manage the operations of a modern, multi-use skyscraper. They must plan where to place facilities in the tower that include restaurants, condominiums, offices, hotel rooms, retail stores and elevators. To prevent tenants from vacating their properties, the player must keep their stress low by fulfilling their demands for medical centers, parking lots, recycling facilities, clean hotel rooms staffed with housekeepers, and an efficient transportation system, which involves managing elevator traffic. SimTower, which was built around an elevator simulation program, places a strong emphasis on good elevator management. The game begins with a one-star tower with limited building options. To increase the tower's star rating, it must attract more tenants by providing more living space (or office space, and later in the game, hotel and various types of commercial space). New facilities are made available while the tower progresses from a one-star rating to a five-star rating. The highest achievable rating is the designation of "Tower", which can only be awarded by building a cathedral at the very top of a five-star building with all possible tower levels above ground developed. The tower is limited to a maximum of 100 floors above ground and nine stories below ground. Standard elevators, which can span a maximum of 30 floors, and express elevators, which can span the entire height of the building, must be used efficiently to decrease tenant stress. Certain events can take place in the course of managing the tower. For example, terrorists may phone the player to let them know that they have hidden a bomb in the building, and that they demand a ransom. If the ransom is not paid, then security services must find the bomb before it detonates, or else the tower will incur significant damage. If the player builds facilities underground, the game may notify them that their workers have discovered gold treasure, which gives the player a significant amount of funds. At random intervals during the game, there are notifications that state that a VIP will be visiting the tower soon, so the player must prepare for their visit. If the VIP enjoys their visit because of variables such as a comfortable hotel suite and efficient navigation, the VIP will give the tower a favorable rating. 
A favorable rating would then allow the tower to advance to the next star level, assuming the other qualifications are met. Although it does not have any impact on the tower, at the end of the fourth quarter every year in the game, Santa Claus and his reindeer fly across the tower. Development Developed by Yoot Saito of OpenBook, SimTower was originally titled The Tower. It works on computers running the Microsoft Windows or Macintosh System 7 operating systems; the game will operate on 68k-based Macs at a minimum. It requires 8-bit colors and four megabytes of random-access memory. Graphics and sounds used in SimTower are similar to those of previous Sim games, and high-resolution graphics are also used. The sound effects are kept to a minimum; noises that are played in the background include office "buzz" and elevator bells. While attending Waseda University, Saito played SimCity on the Macintosh, which prompted him to pursue video game creation after graduating. His first game was a simulation title that was part of a future media project for a publishing house. When Saito asked to develop a second, the business refused because it was not a video game company. He left the company to personally produce the second game, which built on ideas he conceived while working on his first: elevators and towers. Saito teamed up with freelance programmer Takumi Abe to complete the project. To research the gameplay, Saito contacted an elevator company to learn about elevator scheduling and management. However, the company declined to provide the information. Saito handled the graphic design, starting with a monochromatic scaled tower created in HyperCard. The designer added color to differentiate between office- and hotel-type buildings. As development neared completion, Saito noticed that the Mac's performance had improved and decided to increase the color palette size from 16 to 256 colors. Saito enlisted a second designer to produce animation for the graphics and improve the details for the color increase. Release and reception SimTower was successful in Japan, earning the developers a profit. The Nihon Keizai Shimbun awarded Saito the "Best Young Manager/Venture of the Year" for his work on the game. After the initial Japanese release, Maxis president Jeff Braun contacted Saito regarding a worldwide release; SimCity creator Will Wright had informed Braun of the game. The company localized the game for sale in the United States, and changed the name to capitalize on the popularity of the Sim franchise and increase sales figures. Maxis published SimTower for the Windows and Macintosh System 7 operating systems in November 1994 in the United States. In 1996, it was ported to the Sega Saturn and 3DO Interactive Multiplayer in Japan. The South China Morning Post praised the game's formula, noting that it followed in the footsteps of previous open-ended Maxis games. Comparing SimTower to SimCity 2000, the review remarked that it was more interesting to watch people live out their lives in a tower rather than to observe cars moving around. They also appreciated the "homely" feeling of SimTower, in contrast with other Sim games such as SimEarth and SimLife, which they felt were too universal to take on a personal identity. Benjamin Svetkey of Entertainment Weekly praised the game and commented that it is "more fun than [the concept] sounds". However, he stated that the gameplay may be too much for fans of the series. 
A reviewer for Next Generation panned the game, saying it lacks the bustling interactivity of previous games in the Sim franchise: "There are bug infestations and the occasional fire with which to deal, but most of the time, SimTower sees you standing around waiting for cash reserves to grow in order to add more floors. Not much fun at all." Australia's The Age found SimTower a pleasing return to form for Maxis, after the release of the disappointing SimFarm. Lisa Karen Savignano of Allgame stated that the game had decent graphics and sound. She also felt that SimTower had good replay value due to the non-linear gameplay, giving the game four stars out of five. The game was criticized by the South China Morning Post for lacking documentation, making it more difficult to learn how to play the game. They also predicted that players would be unhappy with the game's speed, as time plays an important role in earning money from tenants. Before the player can purchase new facilities, a long period of time must pass before income is earned from tenants. The newspaper was also unhappy with complaints from tenants; specific reasons for their dissatisfaction are never given. The Age was disappointed by the lack of pre-built towers and scenarios, suggesting that one along the lines of The Towering Inferno's plot could have been included. Game Informer referred to SimTower as a "lesser-known" simulation game, and described it as "fun and addictive". Writing for the San Diego Union-Tribune, Matt Miller felt that, when compared to SimCity 2000 (1993), gameplay in SimTower moved slowly. He also disliked the moments when he had to wait several minutes before he could make enough money to purchase new additions for his building. Dragon magazine's reviewers Jay and Dee praised the visuals and gameplay. However, the two commented that the game can feel slow because it lacks gameplay elements and options present in other strategy games. In 1995, the Software and Information Industry Association listed SimTower as the "Best Simulation Program" in the Consumer software category of their annual Codie awards. The game was followed by Yoot Tower (called The Tower II in Japan), also designed by Yoot Saito, which was initially released on November 24, 1998, for the Macintosh. It was later made available for the Windows operating systems in January 1999. Yoot Tower's gameplay is similar to that of SimTower: players build hotels, resorts, and office buildings, and work towards building a five-star tower. Vivarium launched a version of SimTower for the Game Boy Advance, called The Tower SP, published by Nintendo in Japan on April 28, 2005, and by Sega in the United States on March 15, 2006. A version of SimTower called The Tower DS was published by DigiToys in Japan on June 26, 2008. Yoot Tower was also released for iPad devices via the online iOS App Store. See also Project Highrise References External links 1994 video games 3DO Interactive Multiplayer games Business simulation games Classic Mac OS games Maxis Sim games Sega Saturn games Video games developed in Japan Windows games
13960293
https://en.wikipedia.org/wiki/RPG%20Maker%20VX
RPG Maker VX
RPG Maker VX is a version of the RPG Maker series of PC programs. It has been superseded by RPG Maker VX Ace, which is an improved and enhanced version of RPG Maker VX. Both RPG Maker VX and RPG Maker VX Ace were developed by Enterbrain, following their predecessor, RPG Maker XP. RPG Maker VX follows the naming pattern present in previous RPG Maker releases by having a suffix based on the Windows versions the software was designed for (in this case, Windows Vista and Windows XP). RPG Maker editions RPG Maker VX Trial A Japanese trial for RPG Maker VX was released on Enterbrain's Japanese VX website and was available for download. It has reduced features, such as the inability to save games and limited database functionality. An English version of the program is also available from Enterbrain, with full functionality and a 30-day time limit. RPG Maker VX RTP The standard runtime package for RPG Maker VX is available for download on the Enterbrain website. This allows users to play games created with RPG Maker VX. It was developed so that games using mostly default resources can be distributed to the public with a small file size. RPG Maker VX history Early Japanese orders included a Masterpiece Note notebook. In late January 2008, Enterbrain of Japan released an update that included an extra script which improved performance. This release was called RPG Maker VX 1.01 and is available to those who own the full version of the program. The most recent version is 1.02. Bonus contents The retail version of RPG Maker VX includes the following game demos, which are also available separately from the Enterbrain website: Dragoness Edge 2820 (by SAURASUDO) Invas ~ Tai Ma Roku (INVAS~退魔録) (by MENCHAN) Michiru Service! ~Spirit World Border Tale~ (ミチル見参! ~魔界境物語~) (by Asa son) Buried Book (by Yuwaka) Futago no kami-sama (フタゴノカミサマ) Rector and the Black Lion's Crest (レクトールと黒獅子の紋章) (by Shine Garden) Sword of Algus (by Yoshimura (Material Quest Online), mackie (First Seed Material), Shine Garden) Abyss Diver They feature advanced effects such as pseudo-3D battle graphics, custom battle systems written in RGSS2, and more. Sword of Algus and Abyss Diver are not included with the Japanese version of the product. Expansions Several optional add-ons have been released for RPG Maker VX. These are most commonly resource or graphical add-ons, though not always. Thus far, the following add-ons have been released as purchases from the official RPG Maker website: Materials for VX: SAMURAI - 侍 (ツクールシリーズ素材集 和): an official art pack for the RPG Maker VX series. The Japanese retail version also includes the Chouichirou Kenpuuden (長一郎剣風伝) demo. The Chibi Chara Tsukuru (ちびキャラツクール) character graphic generator was released on March 14, 2008, as freeware. High Fantasy Resource Pack: An official resource pack containing resources designed for creating Western-style RPGs, with realistically proportioned characters and tiles instead of anime or chibi style graphics. It is only available from the English language site and has not been released in Japan. RPG Maker 3 Music Pack: A set of BGM tracks taken from RPG Maker 3 and remastered for use in the PC edition. Modern tileset add-on: this add-on contains tilesets intended for creating modern or science-fiction environments. It only contains tilesets and does not have any other types of resources. A modern shop expansion was later released for it containing additional tilesets for creating modern interiors and shops. 
Arabian Nights tileset add-on: this is a pack containing Middle Eastern-themed tilesets for creating games that take place in an Arabian setting. Like the Modern tileset add-on, it does not contain resources other than tilesets, though four Arabian-themed BGM tracks are included in the pack. Features Many of the features included in previous versions of RPG Maker made a return, though other features that were present in RPG Maker XP (such as the use of map fogs and tilesets of unlimited size) were removed. Mapping System The mapping system in RPG Maker VX differs greatly from the one used in RPG Maker XP. Instead of assigning tile sets to each map, there are nine global tile sets which can be used indiscriminately. The layer system used in previous releases has also changed, as RPG Maker VX only has two "usable" layers: one layer for tile sets A1 to A5 (which contain tiles for floors, walls, etc.) and another for tile sets B to E. The Tilemap class in RGSS supports three layers, but two of them are used for combining tiles from tile sets A1 and A2 in order to produce the autotile mapping system. Any tile from tile sets B to E always goes above any tile from tile sets A1–A5. It is possible, however, to create another layer of sorts by assigning tile set graphics to events. For users to import their own tiles, RPG Maker VX provides a blank, 512x512 tile set (a total of 256 possible tiles), which is by default tile set E. The size of this tile set cannot be changed, unlike in RPG Maker XP, where users could import any number of tile sets which could be of any size. Generally, the tile sets tend to be more generic than in RMXP, since no tile sets are used aside from the global ones; however, this prevents greater and more specialized usage of tiles, and there are also fewer tiles than in XP. Character and Object Tiles Tiles used for characters and objects in the native RMVX tilesets are "square-sized" with deformed profiles (similar to those found in Final Fantasy IV) rather than the height-proportional figures found in previous RPG Maker versions (or Final Fantasy VI). Designers who prefer the more realistic look of "tall" characters may readily make use of RMXP character and object tiles for this purpose. Tiles from RM2K, however, are essentially unusable because of their much lower graphical resolution. Text System RPG Maker VX uses a "letter by letter" text system, as opposed to the previous version, RPG Maker XP. The Windowskin graphic has been expanded to include overlay graphics, which tile automatically on top of the main Windowskin. It also now has an image-defined color palette, whereas in XP users would have to use the script editor to change text colors. The faceset system seen in RPG Maker 2000 and RPG Maker 2003 can be implemented through the "Enter Message" command. Previously, users would use a special event command to change text options. In RPG Maker VX, however, the options have been merged into the "Enter Message" command. Random Dungeon Generator Making a second appearance is the Automatic Dungeon Generator from RPG Maker 2003, which automatically generates a random dungeon map. The Automatic Dungeon Generator works by prompting the user to select wall and floor tiles; a basic dungeon is then generated based on the user's selection. Battle System The default battle system in RPG Maker VX is an update of the front-view battle system seen in RPG Maker 2000 ("Dragon Warrior" style), which does not allow for character graphics. 
However, user-made add-on scripts exist that change the battle system; side-view battles reminiscent of Final Fantasy, real-time battle systems, and even tactical battle systems may all be implemented by the user. Ruby Game Scripting System 2 (RGSS2) The script editor from RPG Maker XP was updated and is based on the language RGSS2. Users can add custom scripts to the game or edit existing ones, and the capabilities of RGSS2 provide sufficient flexibility that programmers with enough time and knowledge can add or modify virtually any game function that suits their purposes. Quick Event Creation A new feature in RPG Maker VX is the "Quick Event Creation" function. It is a tool that allows less experienced users to create events for doors, inns, and treasure chests. Reception RPG Maker VX received a generally positive reception. Cheat Code Central scored it 4 out of 5. It has been praised for being much more user-friendly than previous RPG Maker versions and for including a variety of features that otherwise had to be coded manually in previous RPG Maker installments. The addition of RGSS2 has also been received favorably among users, but the battle system was criticized for having no graphical representation of the actual player characters and for being largely text-based. However, scripts that make party members appear on the battle screen, akin to Final Fantasy titles, are easily found on the internet and simply need to be copied and pasted. It has also been criticized for the limitations of the default tile set, which only allow for a very small number of unique-looking town areas. System requirements 64-bit operating systems are supported in RPG Maker VX Ace, but other versions of RPG Maker have not yet been confirmed to work with 64-bit operating systems. † Retail versions of the software packaged for sale in stores come on CD-ROM, but no optical drive is required for copies purchased via download. An internet connection is required for product activation, however, whether purchased on CD-ROM or via download. All hardware must be DirectX-compatible. RPG Maker VX VALUE! This version includes RPG Maker VX and the Materials for VX: SAMURAI - 侍 art pack, and, unlike the original release, supports 64-bit operating systems. System requirements mostly follow RPG Maker VX Ace. RPG Maker VX Ace Following the release of RPG Maker VX, Enterbrain released RPG Maker VX Ace on December 15, 2011, in Japan, and March 15, 2012, worldwide. RPG Maker VX Ace is basically an enhanced version of RPG Maker VX. Some of the improvements of RPG Maker VX Ace over its predecessor are: Introduction of RGSS3, which is an improvement over VX's RGSS2. Integrated character generator. The return of unlimited tilesets. A third layer for maps, allowing more tiles to be stacked on top of each other. The Ruby interpreter is updated to 1.9 from VX's 1.8.3, which brings significant speed improvements in processor-intensive tasks. Improved event and mapping systems. Region IDs. Traits system. Easy shadow control. Window color changer. Party caterpillar system. Character descriptions. A battle background generator. Support for Ogg Theora video playback. While projects created with RPG Maker VX cannot be imported directly into RPG Maker VX Ace, VX Ace is backwards compatible with map files created in VX (by manually changing their file extension) and certain resources from its predecessor. On December 10, 2012, RPG Maker VX Ace was also released on Steam. 
The Steam version includes support for the Steam Workshop, allowing content creators to share their games or resources online. RPG Maker VX Ace was succeeded by RPG Maker MV on October 23, 2015. RPG Maker VX Ace Lite While there is a 30-day evaluation version of RPG Maker VX Ace, Enterbrain has also released a free "Lite" version, called RPG Maker VX Ace Lite. It is a trial version which removes the 30-day limit but adds certain limitations in usage: A reminder window shows every time the program opens. No call script function. No script editing. While the editor can be used to view the scripts in a project, it doesn't allow saving any modifications to them. 10 events per map. 20 maps maximum. While it can export the games created with it, it cannot encrypt them. Database management also has certain restrictions, including the inability to change the preset maximum of assets per category: 10 heroes. 10 classes. 126 skills. 16 items. 60 weapons. 60 armors. 30 enemies. 30 troops. 25 states. 110 animations. 10 common events. 4 tilesets. Enterbrain has also released RPG Maker VX Ace Lite Nico Nico Edition (RPGツクールVX Ace Lite ニコニコエディション) in Japan, a special version of RPG Maker VX Ace Lite that was distributed for Nico Nico Douga and has the following changes: License to publish games at the Niconico jisaku game fes site (powered by niwango, inc.) and for non-commercial use. Includes Nico Nico Douga icons and graphic materials. Movie playback feature is disabled. This special version was only available until March 31, 2013. System requirements An internet connection is required for product activation. Reception RPG Maker VX Ace has received a positive reception on Steam. VX Ace holds a 94% user rating, while VX Ace Lite holds a 93% user rating. VX Ace has sold over 500,000 units on Steam, as of October 2021. References External links Official website VX Video game IDE
20067226
https://en.wikipedia.org/wiki/Owen%20Gray
Owen Gray
Owen Gray, also known as Owen Grey (born 5 July 1939), is a Jamaican musician. His work spans the R&B, ska, rocksteady, and reggae eras of Jamaican music, and he has been credited as Jamaica's first home-grown singing star. Biography Gray was born in Jamaica. He won his first talent contest at the age of nine, and by the age of twelve he was already appearing in public, playing drums, guitar, and keyboards. He attended the Alpha Boys School and turned professional aged 19. Gray was a dynamic performer on stage, who could be gritty or suave as the song dictated. He was the first singer (of many) to praise a sound system on record, with his "On the Beach" celebrating Clement Dodd's Sir Coxsone Downbeat system in 1959, one of the first releases on Dodd's Studio One label. He was one of the first artists to be produced by Chris Blackwell, in 1960, and his "Patricia" single was the first record ever released by Island Records. His first single, "Please Let Me Go", reached the top of the charts in Jamaica, and featured a guitar solo from Australian musician Dennis Sindrey, who was a member of The Caribs, a studio band that played on many early Owen Gray recordings. The single also sold well in the United Kingdom, as did subsequent releases, prompting Gray to emigrate there in 1962. He toured Europe in 1964, and by 1966 he was well known as a soul singer as well as for his ska songs. During 1966, he worked in the UK and Europe with The Krew, then in 1967 with Tony Knights Chessmen. In the rocksteady era, he recorded for producer Sir Clancy Collins, also known as Sir Collins. His popularity continued throughout the 1960s, working with producers such as Clement Dodd, Prince Buster, Sydney Crooks, Arthur "Duke" Reid, Leslie Kong, and Clancy Eccles, including work as a duo with Millie Small, with songs ranging from ska to ballads. He continued to record regularly, having a big hit in 1968 with "Cupid". His 1970 track "Apollo 12" found favour with the early skinheads, and in 1972 he returned to Island Records, recording reggae versions of The Rolling Stones' "Tumblin' Dice" and John Lennon's "Jealous Guy", although they met with little success. During this period, he regularly had releases on the Pama and Pioneer Internacional labels and Camel Records, and one single on Hot Lead Records. He had greater success in Jamaica, however, with "Hail the Man", a tribute to Emperor Haile Selassie, which was popular with the growing Rastafari following. Gray spent a short time living in New Orleans before returning to Jamaica, where he turned his hand to roots reggae, working with producer Bunny Lee and achieving considerable success. In the 1980s he relocated to Miami. He has continued to release new material regularly, often concentrating on ballads and Gospel music. 
Discography Albums Owen Gray Hit After Hit After Hit- Sydney Crooks (Pioneer Internacional) Owen Gray Sings (1961) Starlite Cupid (1969) Forward on the Scene (1975) Third World Fire and Bullets (1977) Trojan Turning Point (1977) Venture Dreams of Owen Gray (1978) Trojan Battle of the Giants Round 1 (1983) Vista Sounds (with Pluggy Satchmo) Oldies But Goodies (1983) Vista Sounds (split with Delroy Wilson) Max Romeo Meets Owen Gray at King Tubby's Studio (1984) Culture Press (with Max Romeo) Little Girl (1984) Vista Sounds Owen Gray Sings Bob Marley (1984) Sarge This is Owen Gray, Pama Room at the Top (1986) World Enterprise Let's Make a Deal World Enterprise Watch This Sound (1986) Sky Note Stand By Me (1986) Hitbound Prince Buster Memory Lane (1986) Phill Pratt Instant Rapport (1989) Bushranger Ready Willing and Able (1989) Park Heights None of Jah-Jah's Children Shall Ever Suffer (198?) Imperial Living Image (1996) Genesis Gospel Singers Out in the Open (1997) VP The Gospel Truth vol 1 Bushranger Something Good Going On Bushranger Gospel Truth, vol. 2 (1997) Jet Star Derrick Morgan and Owen Gray (1998) Rhino (with Derrick Morgan) True Vibration (1998) Jet Star Do You Still Love Me (1998) First Edition The Gospel Truth vol. 3 (1999) Bushranger On Drive (2000) Jet Star Better Days (2002) Worldsound Let's Start All Over (2003) Jet Star Jesus Loves Me (2004) True Gospel Baby It's You (2005) Worldsound Mumbo Jumbo (2005) Revenge Miss Wire Waist -Pioneer Internacional (Sydney Crooks) Excellence (????), Bushranger Jamaica's First Homegrown Star (2020) Owen Gray - Little Girl + Hit After Hit After Hit (2020) Owen Gray - Singles 1969 - 1972 (2020) Compilation albums Hit After Hit After Hit (1998) First Edition Pioneer Internacional Hit After Hit After Hit Vol 2 Pioneer Internacional Hit After Hit After Hit Vol 3 Pioneer Internacional Hit After Hit After Hit Vol 4 (198?) Pioneer Internacional Sly & Robbie Presents Owen Gray on Top (1994) Rhino Memory Lane Vol. 1 (2000) Sydney Crooks (Pioneer Internacional) Shook, Shimmy And Shake: The Anthology (2004) Trojan References External links Peter I (2004) "A Question of Recognition – Interview with Owen Gray", Reggae Vibes Owen Gray at Roots Archives 1939 births Living people Musicians from Kingston, Jamaica Jamaican reggae musicians Island Records artists Trojan Records artists
59112210
https://en.wikipedia.org/wiki/Microphone%20blocker
Microphone blocker
A microphone blocker is a phone connector plug used to trick devices that have a physical microphone switch into disconnecting their internal microphone. Microphone blockers will not work on smartphones or laptops because on those devices the microphone is controlled by software rather than by a physical switch. Safety test Hardware devices should always be tested to determine whether the microphone is controlled by software, which renders a microphone blocker useless. This can simply be done by plugging a headset or a microphone into the jack and then trying to activate the internal microphone (e.g., by using speaker mode on a smartphone or feature phone and speaking near the phone while keeping the plugged-in microphone at a distance), or, on other hardware devices such as laptops, by using any program that always uses the internal microphone. A sketch of how software can select the internal microphone regardless of what is plugged in is given at the end of this article. Working alternatives for modern hardware devices Hardware kill switch (HKS): Some hardware devices can physically disconnect and/or cut power to integrated components with security switches. Hacking of consumer electronics: Whistleblower Edward Snowden showed Wired correspondent Shane Smith how to remove the cameras and microphones from a smartphone. The only practical ways are to physically remove all the internal microphones (there can be more than one, such as a noise-cancellation microphone) and to plug in a headset and use the headset microphone to record when needed. Modular hardware: Cameras and microphones can be physically removed from modular hardware. Smartphone incompatibility Microphone blockers, including commercial microphone blockers with an integrated circuit marketed to provide "extra security", are not useful for smartphones because the microphone is controlled entirely by software. This can be demonstrated by connecting a microphone blocker to a smartphone and making a phone call in speaker mode, which will still activate the internal microphone. Even if they did work, there would be further problems: Since Apple started to exclude the headphone jack in 2016 from the iPhone 7, iPhone 7 Plus and later versions, more and more phone companies have been eliminating it. 3.5 mm TRRS male microphone blocker adapters with connectors to Lightning cables exist, and cables with USB-C connectors can be produced. Apple has filed dozens of wireless patents, and there are rumors that they are planning to produce products without Lightning ports in the future to make them completely wireless. Bluetooth vendors advise customers with vulnerable Bluetooth devices to either turn them off in areas regarded as unsafe or set them to undiscoverable. Portable Bluetooth adapters for wired headsets can be used as a workaround to connect a microphone blocker to hardware devices with Bluetooth connectivity, though this makes them susceptible to bluesnarfing. Some hardware devices (e.g., some Google Nexus smartphones) have, in addition to the internal recording microphone, an internal noise-cancellation microphone that may be on all the time, or that may be on in a way that is independent of what is plugged into the audio jack connector. Feature phone compatibility A phone connector without a microphone channel cannot be used as a microphone blocker because it will not deactivate the internal microphone. Three- or four-conductor (TRS or TRRS) 2.5 mm and 3.5 mm sockets are common on older cell phones and newer smartphones respectively, providing mono (three-conductor) or stereo (four-conductor) sound and a microphone input, together with signaling (e.g., pushing a button to answer a call). Older hardware devices CTIA/AHJ is the de facto TRRS standard. 
OMTP was mostly used on older hardware devices. However, old mobile phones often have a 2.5 mm jack socket and cannot be used with modern microphone blockers, which are typically 3.5 mm; old mobile phones are also notorious for the low security of the hardware itself. If a CTIA headset is connected to a mobile phone with an OMTP interface, the external microphone will stay active; the internal microphone will only be active while the microphone key on the headset is held. A standard CTIA/AHJ TRRS microphone blocker cannot be used with OMTP-socket hardware devices, and it is recommended to test all microphone blockers to make sure they really work. Operation Microphone blockers disable the internal microphone by tricking the device into believing an external microphone is connected. A 3.5 mm microphone blocker with just a microphone channel is enough to disconnect the internal microphone, but most commercial microphone blockers have headphone connections as well, which in theory makes them headset blockers: on smartphones they also disconnect the internal speaker in media player software, because that software will try to route sound to the headphones, while ringtones and alarms will function as normal because they use both the internal speaker and the external speaker(s). Successful operation of a microphone blocker depends on the internal scheme of the mobile device, which may fully block the microphone without the possibility of recovering data, or may just disregard the signal from the internal microphone while retaining the possibility of recording if needed. Issues Some devices allow the internal and external microphones to work simultaneously, or may not recognize when an external microphone is connected. Types Microphone blocking plug A microphone blocking plug is a phone connector with a microphone channel that cannot pick up sound because the plug is a dead end. Some products are shipped with a female connector (with a keychain hole, or a small strap attached directly to a smartphone case) to prevent loss when the male connector is detached. A mobile phone charm (especially one with a TRS connector instead of a rubber plug) can be used to conceal a dummy blocker. A microphone blocking plug can be used to debug software-defined radio applications that demand a connector to be plugged in, but it cannot be used to receive radio due to its low antenna efficiency. Life hack Common products that can be used as microphone blockers: A solder-type jack plug (TS, TRS, or TRRS) with a metal or plastic base – a slim right-angle plug is recommended to fit the jack plug hole in smartphone cases and to avoid friction in the socket. A TRRS male-male jack plug cable – another cheap solution that provides two microphone blocking plugs; the cost per plug is usually lower than that of commercial microphone blocking plugs. The cable can either be cut to provide two separate plugs or be left intact to allow plugging into two mobile phones. A headphone cable with a microphone, a wired headset, or a wired microphone – more expensive, and provides just one blocking plug. It is possible that microphone connectors without a microphone circuit, like the above solutions, offer low security, because when a connector that has no microphone or microphone circuit is plugged in, software has the ability to override the default behavior. Microphone blocking adapter Headsets with an integrated microphone blocker also exist, allowing users to use the headphones (i.e. for listening to music) without risking being eavesdropped on. 
Microphone blocking adapters are phone connector adapters with a microphone channel and a mechanism that produces a false positive signal simulating a connected microphone. This mechanism cannot be built by pairing multiple connectors: a headset connected to a 3.5 mm TRRS headset extension cable adapter that is further connected to a 3.5 mm TRS headphone cable adapter will not trick a connected mobile phone into disconnecting its internal microphone. Applications This section describes uses for both microphone blocking plugs and adapters. Use Eavesdropping protection for feature phones A microphone blocker is a cheap, simple accessory that provides countersurveillance against eavesdropping, for example recording eavesdropping from interception (like cellphone surveillance) or phone hacking, but it does not work on smartphones because their microphones are controlled by software. However, there are a variety of computing vulnerabilities, such as proprietary software and firmware, backdoors, hardware security bugs, hardware backdoors, hardware Trojans, spyware, and malware programs, that can turn on a mobile device's microphone remotely, and the vast majority of devices do not have internal hardware protection to prevent eavesdropping. Most antivirus and anti-spyware software does not guarantee that the microphone will be fully blocked or disabled, and can even be prevented from doing so by spyware and malware that are constantly changing and improving. Leaked documents published by WikiLeaks, codenamed Vault 7 and dated from 2013 to 2016, described the capabilities of the United States Central Intelligence Agency (CIA) to perform electronic surveillance and cyber warfare, including the ability to compromise the operating systems of most smartphones, turning them into permanent listening devices. Millions of smartphones could also be vulnerable to software cracking via accelerometers. A new acoustic cryptanalysis technique discovered by a research team at Israel's Ben-Gurion University Cybersecurity Research Center allows data to be extracted using a computer's speakers and headphones. Forbes published a report stating that researchers found a way to see the information being displayed on a screen by using the microphone, with 96.5% accuracy. Pocket dialing protection A microphone blocker is useful for protecting a mobile phone against audio interception from pocket dialing. Abuse Social engineering A person can wiretap the conversations of people whom they have deceived, through social engineering, into believing that microphone blockers are safe to use with smartphones. This can in theory be exploited by companies that manufacture and sell commercial microphone blockers if they require a mobile phone number when people order their products or ask for support. Marketing ethic issue Manufacturers of commercial microphone blockers with 3.5 mm phone jacks intended for smartphones sometimes claim that their blocker has an inbuilt semiconductor integrated circuit (sometimes patented for marketing purposes) that offers superior security, when in fact it gives no security at all; such claims deceive people in order to make money from them. This has raised questions about marketing ethics. See also Crypto phone Faraday cage Mobile phone jammer Mobile phone accessory Mobile security Secure voice References External links How to keep snoops from listening to your laptop's microphone Keep hackers from listening through your computer with plug Telephone connectors Hardware device blockers Social engineering (computer security) Confidence tricks
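As a rough, hedged illustration of why microphone blockers cannot protect smartphones (the sketch referred to in the safety test section above), the Android snippet below shows that an application can enumerate the available audio input devices and explicitly select the built-in microphone for recording, regardless of what is plugged into the headphone jack. It uses only public android.media APIs, requires the RECORD_AUDIO permission and API level 23 or higher, and the surrounding class and method names are hypothetical; it is not taken from any product or software mentioned in this article.

import android.content.Context;
import android.media.AudioDeviceInfo;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class BuiltInMicDemo {
    // Starts a recording that is explicitly routed to the built-in microphone,
    // illustrating that the choice of microphone is made in software.
    public static AudioRecord recordFromBuiltInMic(Context context) {
        AudioManager audioManager =
                (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);

        // Look up the built-in microphone among all available input devices.
        AudioDeviceInfo builtInMic = null;
        for (AudioDeviceInfo device : audioManager.getDevices(AudioManager.GET_DEVICES_INPUTS)) {
            if (device.getType() == AudioDeviceInfo.TYPE_BUILTIN_MIC) {
                builtInMic = device;
                break;
            }
        }

        int bufferSize = AudioRecord.getMinBufferSize(
                44100, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(
                MediaRecorder.AudioSource.MIC, 44100,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);

        // Even if a blocker or headset is plugged in, software can prefer the internal mic.
        if (builtInMic != null) {
            recorder.setPreferredDevice(builtInMic);
        }
        recorder.startRecording();
        return recorder;
    }
}

Because this routing decision is made entirely in software, no passive plug in the jack can guarantee that the internal microphone stays disconnected, which is the point made in the smartphone incompatibility section.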
6100856
https://en.wikipedia.org/wiki/Meta-scheduling
Meta-scheduling
Meta-scheduling or super scheduling is a computer software technique of optimizing computational workloads by combining an organization's multiple job schedulers into a single aggregated view, allowing batch jobs to be directed to the best location for execution. (A minimal, illustrative sketch of this aggregated-view idea is given at the end of this article.) Meta-Scheduling for MPSoCs The meta-scheduling technique is a solution for scheduling in the presence of a set of dependent or independent faults, with the different scenarios mapped and modeled in an event tree. It can be used as a dynamic or static scheduling method. Scenario-Based Meta-Scheduling (SBMeS) for MPSoCs and NoCs Scenario-based and multi-mode approaches are essential techniques in embedded systems, e.g., for design space exploration for MPSoCs and reconfigurable systems. Optimization techniques for the generation of schedule graphs supporting such an SBMeS approach have been developed and implemented. SBMeS can promise better performance by reducing dynamic scheduling overhead and recovering from faults. Implementations The following is a partial list of noteworthy open source and commercial meta-schedulers currently available. GridWay by the Globus Alliance Community Scheduler Framework by Platform Computing & Jilin University MP Synergy by United Devices Moab Cluster Suite and Maui Cluster Scheduler from Adaptive Computing DIOGENES (DIstributed Optimal GENEtic algorithm for grid applications Scheduling, started project) SynfiniWay's meta-scheduler. MeS, designed by Dr.-Ing. Babak Sorkhpour and Prof. Dr.-Ing. Roman Obermaisser at the Chair for Embedded Systems of the University of Siegen, which generates schedules for anticipated scenario changes in energy-efficient, robust and adaptive time-triggered systems (multi-core architectures with networks-on-chip). References B. Sorkhpour and R. Obermaisser, "MeSViz: Visualizing Scenario-based Meta-Schedules for Adaptive Time-Triggered Systems", in AmE 2018 – Automotive meets Electronics; 9th GMM-Symposium, 2018, pp. 1–6. B. Sorkhpour, R. Obermaisser and A. Murshed, "Meta-Scheduling Techniques for Energy-Efficient, Robust and Adaptive Time-Triggered Systems", in Knowledge-Based Engineering and Innovation (KBEI), 2017 IEEE 4th International Conference on, Tehran, 2017. B. Sorkhpour, R. Obermaisser and Y. Bebawy, Eds., "Optimization of Frequency-Scaling in Time-Triggered Multi-Core Architectures using Scenario-Based Meta-Scheduling", in AmE 2019 – Automotive meets Electronics; 10th GMM-Symposium, VDE, 2019. B. Sorkhpour, "Scenario-based meta-scheduling for energy-efficient, robust and adaptive time-triggered multi-core architectures", doctoral thesis, University of Siegen, July 2019. Grid computing
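The following is a minimal, self-contained Java sketch of the aggregated-view idea described in the introduction: a toy meta-scheduler keeps track of several underlying job schedulers and forwards each batch job to the one that currently has the shortest queue. The class names and the load metric are hypothetical and are not taken from any of the systems listed above (GridWay, Moab, SynfiniWay, MeS, etc.).

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical names; a toy illustration of dispatching jobs to the least-loaded scheduler.
class LocalScheduler {
    final String name;
    private int queuedJobs = 0;

    LocalScheduler(String name) { this.name = name; }

    int queueLength() { return queuedJobs; }

    void submit(String jobId) {
        queuedJobs++;
        System.out.println("[" + name + "] accepted " + jobId
                + " (queue length now " + queuedJobs + ")");
    }
}

class MetaScheduler {
    private final List<LocalScheduler> schedulers = new ArrayList<>();

    void register(LocalScheduler scheduler) { schedulers.add(scheduler); }

    // Single aggregated view: route each job to the scheduler with the shortest queue.
    void submit(String jobId) {
        schedulers.stream()
                .min(Comparator.comparingInt(LocalScheduler::queueLength))
                .ifPresent(best -> best.submit(jobId));
    }
}

public class MetaSchedulingDemo {
    public static void main(String[] args) {
        MetaScheduler meta = new MetaScheduler();
        meta.register(new LocalScheduler("clusterA"));
        meta.register(new LocalScheduler("clusterB"));
        for (int i = 1; i <= 5; i++) {
            meta.submit("job-" + i);
        }
    }
}

Real meta-schedulers use far richer placement criteria (data locality, policy, energy, reliability), but the basic structure of aggregating several schedulers behind one submission interface is the same.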
10, p. 4015030, 2015. [78] H. K. Mondal and S. Deb, “Power-and performance-aware on-chip interconnection architectures for many-core systems,” IIIT-Delhi. [79] J. Wang et al., “Designing Voltage-Frequency Island Aware Power-Efficient NoC through Slack Optimization,” in International Conference on Information Science and Applications (ICISA), 2014: 6-9 May 2014, Seoul, South Korea, Seoul, South Korea, 2014, pp. 1–4. [80] K. Han, J.-J. Lee, J. Lee, W. Lee, and M. Pedram, “TEI-NoC: Optimizing Ultralow Power NoCs Exploiting the Temperature Effect Inversion,” IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 37, no. 2, pp. 458–471, 2018. [81] D. Li and J. Wu, “Energy-efficient contention-aware application mapping and scheduling on NoC-based MPSoCs,” Journal of Parallel and Distributed Computing, vol. 96, pp. 1–11, 2016. [82] W. Y. Lee, Y. W. Ko, H. Lee, and H. Kim, “Energy-efficient scheduling of a real-time task on dvfs-enabled multi-cores,” in Proceedings of the 2009 International Conference on Hybrid Information Technology, 2009, pp. 273–277. [83] B. Sprunt, L. Sha, and J. Lehoczky, “Aperiodic task scheduling for Hard-Real-Time systems,” Real-Time Syst, vol. 1, no. 1, pp. 27–60, 1989. [84] J. K. Strosnider, J. P. Lehoczky, and L. Sha, “The deferrable server algorithm for enhanced aperiodic responsiveness in hard real-time environments,” IEEE Trans. Comput., vol. 44, no. 1, pp. 73–91, 1995. [85] N. Chatterjee, S. Paul, and S. Chattopadhyay, “Task mapping and scheduling for network-on-chip based multi-core platform with transient faults,” Journal of Systems Architecture, vol. 83, pp. 34–56, 2018. [86] R. N. Mahapatra and W. Zhao, “An energy-efficient slack distribution technique for multimode distributed real-time embedded systems,” IEEE Trans. Parallel Distrib. Syst., vol. 16, no. 7, pp. 650–662, 2005. [87] G. Avni, S. Guha, and G. Rodriguez-Navas, “Synthesizing time-triggered schedules for switched networks with faulty links,” in Proceedings of the 13th International Conference on Embedded Software, Pittsburgh, Pennsylvania, 2016, pp. 1–10. [88] F. Benhamou, Principle and Practice of Constraint Programming - CP 2006: 12th International Conference, CP 2006, Nantes, France, September 25-29, 2006, Proceedings. Berlin Heidelberg: Springer-Verlag, 2006. [89] Satisfiability Modulo Graph Theory for Task Mapping and Scheduling on Multiprocessor Systems, 2011. [90] A. Murshed, R. Obermaisser, H. Ahmadian, and A. Khalifeh, “Scheduling and allocation of time-triggered and event-triggered services for multi-core processors with networks-on-a-chip,” pp. 1424–1431. [91] F. Wang, C. Nicopoulos, X. Wu, Y. Xie, and N. Vijaykrishnan, “Variation-aware task allocation and scheduling for MPSoC,” in IEEE/ACM International Conference on Computer-Aided Design, 2007, San Jose, CA, USA, 2007, pp. 598–603. [92] D. Mirzoyan, B. Akesson, and K. Goossens, “Process-variation-aware mapping of best-effort and real-time streaming applications to MPSoCs,” ACM Trans. Embed. Comput. Syst., vol. 13, no. 2s, pp. 1–24, 2014. [93] C. MacLean and G. COWIE, Data flow graph: Google Patents. [94] S. K. Baruah, A. Burns, and R. I. Davis, “Response-Time Analysis for Mixed Criticality Systems,” in IEEE 32nd Real-Time Systems Symposium (RTSS), 2011, Vienna, Austria, 2011, pp. 34–43. [95] A. Burns and S. Baruah, “Timing Faults and Mixed Criticality Systems,” in Lecture notes in computer science, 0302-9743, 6875. 
Festschrift, Dependable and historic computing: Essays dedicated to Brian Randell on the occasion of his 75th birthday /  Cliff B. Jones, John L. Lloyd (eds.), B. Randell, C. B. Jones, and J. L. Lloyd, Eds., Heidelberg: Springer, 2011, pp. 147–166. [96] P. Ekberg and W. Yi, “Outstanding Paper Award: Bounding and Shaping the Demand of Mixed-Criticality Sporadic Tasks,” in Proceedings of The 24th Euromicro Conference on Real-Time Systems: 10-13 July 2012, Pisa, Italy, Pisa, Italy, 2012, pp. 135–144. [97] M. R. Garey, D. S. Johnson, and L. Stockmeyer, “Some simplified NP-complete problems,” in Proceedings of the sixth annual ACM symposium on Theory of computing - STOC '74, Seattle, Washington, United States, 1974, pp. 47–63. [98] L. Su et al., “Synthesizing Fault-Tolerant Schedule for Time-Triggered Network Without Hot Backup,” IEEE Trans. Ind. Electron., vol. 66, no. 2, pp. 1345–1355, 2019. [99] A. Carvalho Junior, M. Bruschi, C. Santana, and J. Santana, “Green Cloud Meta-Scheduling : A Flexible and Automatic Approach,” (eng), Journal of Grid Computing : From Grids to Cloud Federations, vol. 14, no. 1, pp. 109–126, http://dx.doi.org/10.1007/s10723-015-9333-z, 2016. [100]   T. Tiendrebeogo, “Prospect of Reduction of the GreenHouse Gas Emission by ICT in Africa,” in e-Infrastructure and e-Services. [101]   Deutsche Welle (www.dw.com), Carmaker BMW to invest heavily in battery cell center | DW | 24.11.2017. [Online] Available: https://p.dw.com/p/2oD3x. Accessed on: Dec. 03 2018. [102]   G. Fohler, “Changing operational modes in the context of pre run-time scheduling,” IEICE transactions on information and systems, vol. 76, no. 11, pp. 1333–1340, 1993. [103]   H. Jung, H. Oh, and S. Ha, “Multiprocessor scheduling of a multi-mode dataflow graph considering mode transition delay,” ACM Transactions on Design Automation of Electronic Systems (TODAES), vol. 22, no. 2, p. 37, 2017. [104]   A. Das, A. Kumar, and B. Veeravalli, “Energy-Aware Communication and Remapping of Tasks for Reliable Multimedia Multiprocessor Systems,” in IEEE 18th International Conference on Parallel and Distributed Systems (ICPADS), 2012, Singapore, Singapore, 2012, pp. 564–571. [105]   S. A. Ishak, H. Wu, and U. U. Tariq, “Energy-Aware Task Scheduling on Heterogeneous NoC-Based MPSoCs,” in IEEE 35th IEEE International Conference on Computer Design: ICCD 2017 : 5-8 November 2017 Boston, MA, USA : proceedings, Boston, MA, 2017, pp. 165–168. [106]   C. A. Floudas and V. Visweswaran, “Quadratic Optimization,” in Nonconvex Optimization and Its Applications, vol. 2, Handbook of Global Optimization, R. Horst and P. M. Pardalos, Eds., Boston, MA, s.l.: Springer US, 1995, pp. 217–269. [107]   R. Lazimy, “Mixed-integer quadratic programming,” Mathematical Programming, vol. 22, no. 1, pp. 332–349, 1982. [108]   A. Majd, G. Sahebi, M. Daneshtalab, and E. Troubitsyna, “Optimizing scheduling for heterogeneous computing systems using combinatorial meta-heuristic solution,” in 2017 IEEE SmartWorld: Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI) : 2017 conference proceedings : San Francisco Bay Area, California, USA, August 4-8, 2017, San Francisco, CA, 2017, pp. 1–8. [109]   B. Xing and W.-J. Gao, “Imperialist Competitive Algorithm,” in Intelligent Systems Reference Library, Innovative computational intelligence: A rough guide to 134 clever algorithms, B. Xing and W.-J. 
Gao, Eds., New York NY: Springer Berlin Heidelberg, 2013, pp. 203–209. [110]   J. D. Foster, A. M. Berry, N. Boland, and H. Waterer, “Comparison of Mixed-Integer Programming and Genetic Algorithm Methods for Distributed Generation Planning,” IEEE Trans. Power Syst., vol. 29, no. 2, pp. 833–843, 2014. [111]   J. Yin, P. Zhou, A. Holey, S. S. Sapatnekar, and A. Zhai, “Energy-efficient non-minimal path on-chip interconnection network for heterogeneous systems,” in ISPLED'12: Proceedings of the international symposium on low power electronics and design, Redondo Beach, California, USA, 2012, p. 57. [112]   J. Falk et al., “Quasi-static scheduling of data flow graphs in the presence of limited channel capacities,” in The 13th IEEE Symposium on Embedded Systems for Real-time Multimedia: October 8-9, 2015, Amsterdam, Netherlands, Amsterdam, Netherlands, 2015, pp. 1–10. [113]   T. Wei, P. Mishra, K. Wu, and J. Zhou, “Quasi-static fault-tolerant scheduling schemes for energy-efficient hard real-time systems,” Journal of Systems and Software, vol. 85, no. 6, pp. 1386–1399, 2012. [114]   M. J. Ryan, “A Case Study on the Impact of Convergence on Physical Architectures—The Tactical Communications System,” [115]   Y. Huang and D. P. Palomar, “Randomized Algorithms for Optimal Solutions of Double-Sided QCQP With Applications in Signal Processing,” IEEE Trans. Signal Process., vol. 62, no. 5, pp. 1093–1108, 2014. [116]   X. Cai, W. Hu, T. Ma, and R. Ma, “A hybrid scheduling algorithm for reconfigurable processor architecture,” in Proceedings of the 13th IEEE Conference on Industrial Electronics and Applications (ICIEA 2018): 31 May-2 June 2018 Wuhan, China, Wuhan, 2018, pp. 745–749. [117]   P.-A. Hsiung and J.-S. Shen, Dynamic reconfigurable network-on-chip design: Innovations for computational processing and communication. Hershey, Pa.: IGI Global, 2010. [118]   R. Misener and C. A. Floudas, “Global optimization of mixed-integer quadratically-constrained quadratic programs (MIQCQP) through piecewise-linear and edge-concave relaxations,” Mathematical Programming, vol. 136, no. 1, pp. 155–182, 2012. [119]   D. Axehill, “Applications of integer quadratic programming in control and communication,” Institutionen för systemteknik, 2005. [120]   A. Nemirovskii, “Several NP-hard problems arising in robust stability analysis,” Math. Control Signal Systems, vol. 6, no. 2, pp. 99–105, 1993. [121]   A. Sarwar, “Cmos power consumption and cpd calculation,” Proceeding: Design Considerations for Logic Products, 1997. [122]   S. Kaxiras and M. Martonosi, “Computer Architecture Techniques for Power-Efficiency,” Synthesis Lectures on Computer Architecture, vol. 3, no. 1, pp. 1–207, 2008. [123]   D. Kouzoupis, G. Frison, A. Zanelli, and M. Diehl, “Recent Advances in Quadratic Programming Algorithms for Nonlinear Model Predictive Control,” Vietnam Journal of Mathematics, vol. 46, no. 4, pp. 863–882, 2018. [124]   R. Fourer, “Strategies for “Not Linear” Optimization,” Houston, TX, Mar. 6 2014. [125]   L. A. Cortes, P. Eles, and Z. Peng, “Quasi-Static Scheduling for Multiprocessor Real-Time Systems with Hard and Soft Tasks,” in 11th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications: 17-19 August 2005, Hong Kong, China : proceedings, Hong Kong, China, 2005, pp. 422–428. [126]   L. Benini, “Platform and MPSoC Design,” [127]   R. Obermaisser and P. 
2277458
https://en.wikipedia.org/wiki/Crystal%20Enterprise
Crystal Enterprise
Crystal Enterprise is the Business Objects server-based delivery platform for Crystal Reports and Crystal Analysis, originally developed by Crystal Decisions. Crystal Enterprise is what is called a delivery platform in Business Intelligence terms: it provides an infrastructure for data access and can store report templates. Using Crystal Enterprise, report designers can store report objects and report instances, schedule reports, and request reports on demand through clients such as web browsers. For example, an administrator could store a sales report on Crystal Enterprise and schedule it to run at the beginning of every month. When the report is triggered, Crystal Enterprise accesses the data sources specified in the report and saves an instance of the report, which can be made available or automatically distributed to the relevant parties.
Supported platforms
Because of the complexity of Crystal Enterprise, platform compatibility depends on many factors, such as the operating system, web server, application server, and databases, as well as combinations of these. The Crystal Enterprise installation CD includes a text file called platforms.txt which covers every platform supported by Crystal Enterprise. Crystal Enterprise runs on many different operating systems, such as Windows 2000 Server, Windows Server 2003, Solaris, Linux, AIX and HP-UX. Newer versions of Crystal Enterprise provide several add-ons, such as integration with Microsoft Office applications (for example, Microsoft Excel) and a SharePoint Portal integration kit.
Editions
Crystal Enterprise 11 was the last version, released after the acquisition of Crystal Decisions by Business Objects. It was subsequently renamed Business Objects XI after Web Intelligence and Desktop Intelligence support was added. Crystal Enterprise 10 had the following editions:
Crystal Enterprise Express
Crystal Enterprise Embedded Edition
Crystal Enterprise Professional
Crystal Enterprise Premium
See also
Business Objects
CORBA
Crystal Reports
External links
Crystal Enterprise Official Website
Business software for Linux
Business software for Windows
Unix software
37363294
https://en.wikipedia.org/wiki/Saint%20Paul%20University%20Surigao
Saint Paul University Surigao
The Saint Paul University Surigao, also referred to as SPUS or SPU Surigao, is a private, Catholic basic and higher education institution run by the Sisters of Saint Paul of Chartres in Surigao City, Surigao del Norte, Philippines. It has two campuses: the main campus in the heart of Surigao City houses the college academic units, graduate school, and offices, while the satellite campus at Brgy. Luna houses the high school and grade school. SPUS, established in 1906, became the first university in the Caraga region and is identified as the center for development in teacher education and the regional center for Gender and Development, being the seat of the CARAGA Women's Resource Center. History St. Paul University Surigao traces its roots to the year 1906, when the last group of Spanish Benedictine Missionaries, who had worked zealously as founders of the Cartilla or Doctrina School (which soon evolved into Escuela Catolica de San Nicolas), with the Religious of the Virgin Mary as administrators, vacated Surigao. Soon after their departure, the Missionaries of the Sacred Heart, also known as the Dutch Fathers, succeeded them, created the parish of Surigao, and made Escuela Catolica de San Nicolas a parochial school which they renamed San Nicolas School. MILESTONES 1915 The Bureau of Commerce issued Articles and Certificates of Incorporation to legitimize the school's existence. Rev. Adriano Muskins, MSC, parish priest and school head, expanded the primary school for the poor children of the parish who aimlessly roamed the streets. The school continued to grow, taking much time and attention away from the parochial duties of the Dutch Fathers, who sought the assistance of others. 1926 Three Sisters from the Congregation of the Sisters of St. Paul of Chartres arrived in Surigao in response to the invitation of Fr. Muskins to administer San Nicolas School. The pioneers were Sister Louise Marguerite Prevoust, Superior; Sr. Consolacion Cruz, Principal; and Sr. Valentine, who would take charge of the boarders. They were welcomed by the parish priest at the time, Rev. Juan Vrakking, MSC, who was to become the first bishop of Surigao in 1940. 1936 The "Little Flower Dormitory", a three-story reinforced concrete building completed in 1936, became the nucleus of the growing institution. It housed both the dormitory and the school. Sr. Adela Catalina Llorente, then Superior, helped speed up the construction with donations solicited from her American friends. 1938 In June, the High School Department opened with Sr. Stella de Jesus Villanueva, SPC, as principal. Rev. Luis Boeren, MSC, took over as Director. The high school courses offered were Secondary Academic and Secondary Normal, to provide the mission with thoroughly trained Christian teachers. Initially, more than twenty female students enrolled. Two years later, male students were accepted. Rev. Jose Croonen, MSC, was parish priest at the time. 1941 The Pacific War that broke out wrought destruction in Surigao. The Surigao Cathedral, the rectory, the school, and the dormitory were destroyed in the heavy bombing. All classes had to be suspended. There was much fear and panic among the people, but the Sisters of St. Paul remained bulwarks of inner courage and strength. Forced to flee to the mountains, they continued their catechetical work, hiking from one barrio to another and providing a source of comfort and an example of courage.
The Sisters were also of great help to the guerrilla forces, filling their medical needs and transmitting important messages, though death was a constant threat. 1945 The liberation of the Philippines was a source of great rejoicing. When the school resumed operation in 1945, classes had to be held in Quonset huts and rehabilitated portions of the Little Flower Dormitory. Rev. Gerardo Tangelder, MSC, was Director of the school. 1948 The Junior Normal School opened, and a year later it was followed by courses in Commerce, Education, and Liberal Arts. This was facilitated by Rev. Carlos Van Den Ouwelant, MSC; Sr. Mary Nathaniel Rocero, SPC; Sr. Marie du Rosaiere Vogel, SPC; Sr. Virginie de Marie de Manuel, SPC; and Sr. Maria Fidela Nerida, SPC. It was also in this year that the first college building was constructed to accommodate the collegiate classes. 1949 San Nicolas School changed its name to San Nicolas College. 1951 The school became a venue for dramatic talent when Rev. Anthony Jansen, MSC, succeeded Fr. Tangelder. Rev. Carlos Van Den Ouwelant, MSC, then school chaplain, who later became the second Bishop of Surigao, and Rev. Nicasio Jansen, MSC, directed the school in the absence of Fr. Anthony. Sr. Adela was succeeded as Superior by Sr. Teresita of St. Paul Soledad. SNC offered Secondary Home Economics (BSHE) and Junior Normal Home Economics (ETC-HE). 1953 Courses in Secondary Trade and Bachelor of Science in Elementary Education (BSEEd) were offered, and recognition was given to the BSE, BSHE and Liberal Arts Programs. 1960 The BSEEd Program received recognition in the same year that saw the start of a new two-storey building for the Grade School. 1961 When Rev. Herman Van Der Sman, MSC, became Director, more buildings were erected to accommodate the increasing student population, and a stronger faculty was built up with the aid of scholarships and grants. The science laboratories were equipped with sophisticated apparatuses and equipment. The Instructional Media Center and the gymnasium were furnished with additional instructional aids. Meanwhile, Mrs. Natividad Abella-Felicio organized the SNC Dance Troupe with the assistance of Mr. Concordio Leva. Sr. Jude of the Holy Spirit Paat, SPC, formed the SNC Troubadours. 1966 Two significant events marked this year: the blessing of the new four-storey concrete Administration Building and the celebration of the Golden Jubilee of San Nicolas College's founding. 1968 The Government of the Netherlands donated a large amount of money, a gift to the people of Surigao, to be used for the construction of the Science Building, the High School Building, and the Gymnasium. 1972 Sr. Emilie de Marie Manzano, SPC, became the High School Principal. Martial Law banned the SNC Student Council and school paper. 1973 Sr. Aniceta du Sacre-Coeur Ochoa, SPC, took over as Principal. On March 13, 1973, Msgr. Miguel C. Cinches was installed as the third bishop of Surigao. He was also the Chairman of the Board of Trustees of San Nicolas College. 1975 SNC was made the Regional Staff Development Center (RSDC) for Region X. That same year, in compliance with the Filipinization Law, Sr. Pura du Sacre-Coeur Belmonte, SPC, became the first Filipino President of SNC, assuming the office vacated by Rev. Herman Van Der Sman, MSC. 1976 After its first survey of the Colleges of Liberal Arts, Education, and Commerce, the Philippine Accrediting Association of Schools, Colleges and Universities (PAASCU) accepted San Nicolas College as an accredited member. 1977 Mrs. Dulcemina O.
Leva became the first lay High School Principal. 1978 Master of Arts in Teaching (MAT) with specialization in Educational Management, English, and Home Economics was offered and received government recognition four years after. 1980 Doctoral and MBA Programs were offered in consortium with Xavier University in Cagayan de Oro City. 1981 Sr. Maria Josefa Grey, SPC, was installed as second SNC President. She improved the school's physical plant and systematized services. 1983 Bachelor of Elementary Education (BEEd) program was offered. 1984 The Computer Laboratory was installed. Typhoon Nitang brought exceedingly heavy damage to the school, but Sr. Josefa's effort program soon brought rehabilitation. Doctoral and MBA Programs were granted recognition. The new Elementary School Building was built. Bachelor of Secondary Education (BSEd) program was opened and Mr. Rodolfo G. Gier became the second lay High School Principal. PAASCU granted a five-year reaccreditation to Liberal Arts, Education and Commerce Programs. 1985 Organization of San Nicolas College Alumni Association. 1987 Sr. Maria Felicina Gubuan, SPC came in as President. She soon became known for the school's Outreach Program. Equipped with new instruments, a new building housing administration and staff offices was completed. 1988 The Securities and Exchange Commission approved the revised SNC By-Laws. 1989 A course leading to the degree of Bachelor of Science in Commerce, major in Computer Science was offered. Additional units were installed in the computer Laboratory and the Typing Laboratory was refurbished. A two-storey building accommodating several offices and a conference room was completed. 1990 SNC Diamond Jubilee year-long celebration started in February. In May, DECS granted a permit to operate an MBA program. Sr. Felicina was re-appointed by the Board of Trustees for a second term of three years. 1991 DECS granted the permit to offer courses leading to the degree of Bachelor of Science in Accountancy (BSA). Computer classes were offered to the high school students. The Community Extension Service/School Outreach Program was strengthened and institutionalized. PAASCU granted SNC Level II status. 1992 The SNC Credit Cooperative was converted to a multipurpose cooperative approved by the SPU Surigao FECOCU General Assembly. The High School Department was adjudged one of the five “Excellent Private Secondary Schools” in Region X. 1993 DECS allowed the opening of courses leading to the degree of Bachelor of Science in Computer Science and Computer Secretarial. Empowerment through English Enrichment (SETEE) classes was organized. 1994 Bachelor of Science in Accountancy Program was recognized by the government. Bachelor of Science in Commerce, major in Management Accounting was approved. The blessing of the Computer Laboratory II in July coincided with the installation of the Local Area Network (LAN), the first in the province of Surigao del Norte. 1995 Computer Secretarial (CS) Program was issued government recognition in May. In June Sr. Marie Renee Javato, SPC was installed as the fourth President of San Nicolas College. PAASCU granted the High School Department Level I Accreditation, and SNC celebrated its eightieth anniversary (1915-1995) as an institution granted legal personality by the government. 1996 PAASCU granted SNC's Liberal Arts, Commerce, and Educational Programs another five-year re-accreditation until 2001. 
SNC was chosen one of the 22 Centers of Development in Teacher Education throughout the country and a member school of the Higher Education Institutions (HEI) CARAGA Networking. The High School Computer Laboratory was set up. SNC received the Rotary Outstanding Surigaonon Award (ROSA) for Education from the Rotary Club of Metro Surigao, the Philippines-Australia Project in Basic Education (PROBE), and the Commission on Higher Education (CHED) scholarship in Teacher Education. Sr. Marie Renee Javato, SPC, was a recipient of the DECS Salamat Po Award. 1997 The construction of a concrete bridge connecting the Main Building and St. Paul University Surigao Building was started. Computer Technology was granted a permit and Biology was offered as a new major in BSEd. 1998 The internet system, local and international, started operation in SNC. A new Graduate School Program, Master in Public Administration (MPA) was offered in June and Filipino was an additional major in the Master of Arts program. The CARAGA Women's Resource Center (CWRC), a regional center for Gender and Development, a project of the Philippines-Canada Local Government Support Program (LGSP), San Nicolas College (SNC), Surigao del Norte (SDN), and Surigao City (SC), was housed in San Nicolas College. The Diocese of Surigao, in the person of Bishop Miguel Cinches, SVD, donated San Nicolas College to the Sisters of St. Paul of Chartres through Sr. Agnes Therese Teves, SPC. 1999 Curriculum Initiatives for Teacher Education (CITE) was offered in SNC, the college being one of the fourteen (14) teacher training institutions nationwide chosen to field the program. The revised retirement policy was implemented. Sister Mary Magdalen Torres, SPC assumed Chairmanship of the SNC Board of Trustees: Dr. Dulcemina O. Leva was appointed as the first lay Vice President for Academic Affairs and Dr. Lucy L. Teves as the Research and Planning Officer. 2000 Sr. Concepcion Dacanay, SPC and Sr. Teresita Bayona, SPC were installed Local Superior and the 6th SNC President, respectively. Organizational development activities were held: revision of the Faculty Manual by representatives from the three departments; Strategic Planning Workshop and Basic Management Course for SNC Managers; Personal Effectiveness Seminar, Documentation Procedures, and Health Safety and Security Seminar for the whole institution by work groups; formation of the SNC Vision-Mission Team. Curricular innovations were introduced such as the Integrative Modular Curriculum Delivery System, Multimedia Courseware Development, Writing Thinking Test Items, Developing a Curriculum (DACUM), Grade Master and Introduction of Number Grades. Training workshops/sessions were conducted for the guidance counselors on basic counseling and test interpretation; the librarians on library operations; the Homeroom advisers on the Homeroom Program; the Paulthenics advisers on module making and facilitating skills; the student leaders on leadership training; the canteen personnel on baking and cooking; the carpenters on carpentry skills; the laboratory technician on laboratory techniques and management; the secretaries on filing system. 
Infrastructure development included the following: construction of a five-storey building to house the canteen, instructional media center, guidance center, student affairs and alumni offices, hotel and restaurant laboratories, and computer laboratories; construction of a four-storey building housing the high school computer laboratories and classrooms; renovation of the library, relocation and installation of a new speech laboratory; renovation of offices – the President, the Deans, Christian Formation, Finance, Registrar, General Services, FEMUCO, clinic, ROTC, CAT, and the High School HE; renovation of the comfort rooms; installation of drinking fountains in the three departments; renovation of the gymnasium; improvement of the water and drainage system; renovation of the Little Flower Dormitory; and landscaping of the grounds. Additional equipments were acquired in the libraries and laboratories – Science, Computer, and Home Economics. Additional instructional facilities were added like the PROBE English laboratory, PROBE Science laboratory, and PROBE Math laboratory. Automation of the College library, finance, and registration was done. SNC celebrated its 85th Anniversary. 2001 The High School topped in the division ranking in the National Secondary Achievement Test (NSAT); Total Quality Management System and Innovative Processes were adopted; PMS, Instructional Modules, DACUM, CAI, Computerized Grading System, among others; a Science section in every level was created. Nursery class was opened in June 2001. The Electronics Engineering Laboratory was constructed in the 5th floor of the main Building. A lot in KM 3 for the new HS Building of SPC was acquired. Research subject was introduced in 2nd year Science section and computerized – aided math for all high school students. In July 2001, Liberal Arts and Commerce programs were granted 5-year Level II re-accreditation by the PAASCU Board of Directors, valid until 2006. 2002 On June 3, Sr. Helen Malubay, SPC, was proclaimed Acting President and Sr. Cecilia Sto. Tomas, SPC as new HS Principal. Permits were granted by CHED to offer Bachelor of Science in Office Administration (TP R13-340402-01, s. 2002), Bachelor of Science in Public Administration (TP R13-3445401-01, s. 2002) and Bachelor of Science in Accounting Technology (TP R13-343205-01, s. 2002) on May 6, 2002. Bachelor of Science in Commerce (BSC) was changed to Bachelor of Science in Business Administration (BSBA) effective June 2002. On June 20, the Board of Trustees amended its by-laws, approved the new nomenclature, St. Paul University System, San Nicolas Campus which was registered and approved by the Securities and Exchange Commission on August 29, 2002. The new seal and motto, Caritas et Scientia was approved by the Board of Trustees. March 31, 2002 Marked the Ground Breaking Ceremony for the new SPUS-SNC Building at Kilometer 3. Construction of the new HS building started during the year. BS Nursing, Pharmacy, Medical Technology, Radiologic Technology, Office Administration and Criminology were opened as new undergraduate programs in June 2002. In support of its new BS Criminology program, the school purchased equipment such as Forensic Optical, Polygraph or Lie-Detector Machine, Questioned Documents and Fingerprints Kits. Extension classes of St. Paul University Philippines in Doctor of Business Management and Master of Science in Information Technology were offered in SPUS-SNC Graduate School. 
Computer-Aided English Diagnostic Test to improve reading and comprehensive ability was administered to all high school students. The result was used to enhance the HS curriculum. In support of its Quality Management System, the 5S (sort, systematize, sweep, sanitize, self-discipline) was implemented. March 31, 2002 Marked the Ground Breaking Ceremony for the new SPUS-SNC High School Building. Construction of the building started. 2003 Sister Teresita Bayona, SPC was reinstalled as President of SPUS-SNC. On January 13, 2003 The articles and by laws of SPUS - SNC Students’ Multipurpose Cooperative were presented for registration to and approved by the Cooperative Development Authority. On February 8, 2003 The blessing of the New High School building at Km. 3 was officiated by his Excellency, the Most Reverend Antonieto Cabajog and assisted by the Surigao Diocesan Clergy. In June 2003 The high school students transferred to the new building in Km. 3 Campus. In August 2003 The new SPUS-SNC IT/Entrepreneurship College laboratory was completed and blessed. In October 2003 A team from CHED FAPE visited the Graduate School for the Evaluation of Graduate Education Programs (EGEP). December 3 CHED evaluators were in SPU Surigao to validate reports pertinent to the school's qualification for membership to the St. Paul University System. 2004 On February 9, 2004, Certificate for University System was granted by CHED to the six Higher Education Institutions (HEIs) namely: St. Paul University Tuguegarao, St. Paul College Manila, St. Paul College Quezon City, St. Paul College Dumaguete, St. Paul College Iloilo and St. Paul University System San Nicolas campus. The St. Paul University System was regarded as the first University System to be recognized by CHED in the country. March 10, 2004 SPUS-SNC changed its name to St. Paul University Surigao. Bachelor of Science in Information Technology (BSIT) was granted government recognition R13-464108-01, s. 2004 by CHED. This was issued on June 24, 2004. BS in Civil Engineering and BS in Electronics and Communications Engineering were offered. October 4–5, 2004 - PAASCU Accreditors visited SPU Surigao for the preliminary survey of the High School Department. 2005 PAASCU granted level I accredited status to the High School Department. March 3–4, 2005 - PAASCU accreditors visited SPU Surigao for the preliminary survey of the Teacher Education and Computer Science Programs. March 28–29, 2005 – SPU Surigao underwent Strategic Planning for academic year 2005-2010. CHED granted the school Government Recognition to operate the programs. Bachelor of Science in Criminology by virtue of R13-891301-02, s. 2005 issued on April 16, 2005 and Bachelor of Science in Accounting Technology (BSAT) per GR No. R13-343205-01, s. 2005 issued on April 18, 2005. Government Recognitions were likewise issued by CHED to Bachelor of Science in Public Administration (BSPA), GR No. R13-345201-01, s. 2005, and Bachelor of Science in Office Administration (BSOA), GR No. R13-340402-01, s. 2005 programs on April 18, 2005. June 14, 2005 - Bachelor of Science in Hotel and Restaurant Management was recognized by the government. September 2, 2005 – SPU Surigao was awarded the DIN EN ISO 9001-2000 Certificate of TUV Rheinland Philippines Ltd., member of TUV Group by Ma. Gloria Gita Sehwani Benitez, Operations Manager. 2006 Sister Merceditas Ang, SPC was installed as the 8th President of St. Paul University Surigao on June 19. Caregiver was offered as a short-term course by the University. 
A Certificate of TVET Program Registration, WTR No. 0615032006, was issued by TESDA on May 26, 2006 to allow the school to offer a Seven-Month Professional Caregiver NC II program. Bachelor of Science in Information Management (BSIM) was granted government recognition R13-464107-02, s. 2006 by CHED, issued on July 6, 2006. August 17–18, 2006 - Resurvey of Liberal Arts and Commerce programs by PAASCU September 6–8, 2006 - San Nicolas College, now St. Paul University Surigao celebrated its centenary with the theme “SNC-SPU@100:Legacies and Challenges” September 7, 2006 - The Sacred Heart Statue in front of the University's Science Building was blessed and unveiled in honor of the Missionaries of the Sacred Heart (MSC) Fathers. October 5–6, 2006 - First formal survey of HS program by PAASCU. CHED granted government recognition R13-542202-01 s. 2006, issued on October 25, 2006 to the Bachelor of Science in Computer Engineering (BSCpE) program. November 23–24, 2006 - ISO Surveillance Audit December 11–15, 2006 – A team composed of four (4) evaluators came to SPU Surigao to look into the capabilities of the school in relation to the CHED's Institutional Monitoring & Evaluation for Quality Assurance in Higher Education (IQUAME). 2007 Certificate of Accreditation was awarded to St. Paul University Surigao as TESDA's Accredited Assessment Center for Computer Hardware Servicing NC II (Accreditation No. R33-07-03-021), PC Operation NC II (Accreditation No. R33-07-03-022), and Food and Beverages Services NC II (Accreditation No. R33-07-03-023) on May 10, 2007. CHED issued Government Recognition R13-541601-07 s. 2007 on May 15, 2008 to the Bachelor of Science in Civil Engineering (BSCE) program. New majors for the Bachelor of Science in Business Administration (BSBA) program were offered effective June 2007. These are Financial Management, Operations Management, Marketing Management, Business Economics, and Human Resource Development Management. Certificates of TVET Program Registration were issued by TESDA on June 20, 2007 to offer specialized programs in the University-WTR No. 715032176 for PC Operations NC II (214 hours), WTR No. 715032177 for Hardware Servicing NC II(356 hours), WTR No. 715032178 for Computer Programming NC IV (252 hours), WTR No. 715032179 for Housekeeping NC II(436 hours), WTR No. 715032180 for Bartending NC II (286 hours), and WTR No.715032181 for Food & Beverage Services NC II (336 hours). TESDA issued Certificates of TVET Program Registration to offer Caregiving NC II (786 hours), WTR No. 0715032203 and Healthcare Services NC II (996 hours), WTR No. 0715032203 on July 25, 2007. The Liberal Arts and Commerce programs were granted a four- year re accredited status by PAASCU on November 28, 2007. 2008 Government Recognitions were issued by CHED to Bachelor of Science in Electronics and Communications Engineering (BSECE), GR No. R13-542207-01, s. 2001 and Bachelor of Science in Tourism Management (BSTM), GR No. R13-787201-07, s. 2008 programs on February 6, 2008. Certificate of Recognition was awarded to St. Paul University Surigao as Center of Training Institution for the Department of Education (DepEd) Certificate and INSET programs on March 7, 2008, having passed the criteria set by the Commission on Higher Education (CHED), DepEd, and Teacher Education Council. Government Recognition was issued by CHED to Bachelor of Science in Psychology per GR No. R13-305201-01, series of 2008 dated June 12, 2008. Permit to Operate Review Center No. 050, Series of 2008 was granted to St. 
Paul University Surigao by the Commission on Higher Education (CHED) to operate a Review Center for Nursing Program in consortium with Review for Global Review Center, Davao City through CHED EnBanc Resolution No. 320-2008, dated June 23, 2008. CHED, likewise granted a Permit to Operate Review Center No. 051, Series of 2008 to St. Paul University Surigao to operate a Review Center for Nursing Program in consortium with Review for Global Opportunities, Iloilo City per CHED En Banc Resolution No. 320-2008, dated June 23, 2008. St. Paul University Surigao was granted a Permit to Operate Review Center No. 054, Series of 2008 by the Commission on Higher Education (CHED) to operate a Review Center for Teacher Education Program by virtue of CHED En Banc Resolution No. 353-2008, dated July 14, 2008. 2009 Temporary permit to operate Bachelor of Science in Mining Engineering was granted by CHED per TP No. R13-543601-01, series of 2009 dated May 6, 2009. Temporary Permit to operate Bachelor of Science in Mathematics (BS Math), Level 1 was granted by CHED per TP No. R13-460100-01, series of 2009 dated May 6, 2009 ISO recertification was granted by TUV Rheinland Philippines, Ltd after its certification audit on September 2–3, 2009. The Grade School Department underwent Consultancy Visit by a PAASCU Consultant on November 27, 2009 2010 PAASCU Team of Accreditors conducted preliminary survey of Criminology, Information Technology, and Hotel & Restaurant Programs on January 18–19, 2010. Ground Breaking Ceremony and Laying of Cornerstone of the Grade School Building were done on April 24, 2010 at the New Site. Contractor: ARSD Construction Corp. Temporary permit to operate Bachelor of Library and Information Science (BLIS) Level I was granted by CHED per TP No. R13-842201-01, series of 2010 dated May 25, 2010 Temporary permit to operate Bachelor of Elementary Education (BEED) with Pre-School as area of concentration was granted by CHED per TP No. R13-141203-01, series of 2010 dated May 25, 2010 Approval to offer Values edition as additional major in the Bachelor of Secondary Education (BSED) program effective Academic Year 2010-2011, was given by CHED per 1st endorsement dated May 25, 2010 Sr. Marie Rosanne Mallillin, SPC, the 9th president of St. Paul University Surigao, was installed to office on June 11, 2010 Recertification of SPU Surigao as an ISO firm was issued by TUV Rheinland Philippines, Ltd. after the August 26–27, 2010 Surveillance Audit CHED Caraga declared SPU Surigao to have complied with CHED standards beyond the minimum requirements after its CHED monitoring and evaluation on September 15, 2010. PAASCU preliminary visit of the Elementary Department was done on September 30 - October 1, 2010. 2011 The Blessing of the new Elementary Building, Km. 3 campus, took place on June 3, 2011. Grade school classes started operating in the new campus on June 6, 2011 and pre-school classes on June 13, 2011. July 25–26, 2011 -ISO Surveillance Audit. Result: SPU Surigao was reconfirmed as an ISO certified firm up to November 16, 2015. August 5, 2011 - Blessing of the New Canteen in the New Site. 2012 On Jan 24-25, PAASCU Resurvey of Liberal Arts and Business Programs and Formal Survey of Elementary and Secondary Education Programs Result: LA and Business programs granted re-accreditation for five (5) years, valid up to January 2017. Elementary and Secondary Education programs granted initial accreditation for three (3) years, valid up to January 2015. 
February 4, 2012 - Blessing of the Learning Resource Center (LRC) in the New Site. May 7 - June 1, 2012-SPU Surigao was the training center of K to 12 for Grade 7 in the whole CARAGA Region. August 20, 2012 - Blessing of the Swimming Pool in the New Site. August 23–24, 2012 - Recertification Audit by TUV Rheinland, Inc. Result: SPU Surigao is recertified as having met the requirements of ISO 9001:2008 Standard. Validity of Recertification is from December 28, 2012 until November 16, 2015. 2013 On January 21, SPU Surigao was granted temporary permit to offer MA in Curriculum Design and Development and MA in Cultural Education effective AY 2013 – 2014. February 28-March 1, 2013 - PAASCU Formal Survey of Criminology Program and Preliminary Survey of Engineering Program Result: Criminology program granted initial accreditation for three years valid up to May 2016. April 23, 2013 - Blessings of the new SPUS Gym in the New Site at Km. 4 July 22, 2013- PAASCU Revisit of High School Department. Result: High School was granted Level II Re-accreditation for four years valid until May 2017. July 25, 2013 - Bachelor of Early Childhood Education was granted Government Recognition R13-141202-02, s. 2013 and granted temporary permit to offer Bachelor of Science in Mining Engineering 4th year level; Bachelor of Library and Information Science 3rd year level; Bachelor of Physical Education major in PE 2nd year level by CHED. August 27, 2013 - ISO Surveillance Audit. Result: SPU Surigao was reconfirmed as an ISO certified firm up to November 16, 2015. November 2013 - SPU Surigao was identified as Learning Resource Institute (LRI) for DILG in the implementation of the 2013 Citizen Satisfaction Index Survey (CSIS) for Surigao City. Three significant events marked this year: the blessing of the two big guest's houses (one in the new site and another in the main site) which could accommodate from 12 to 15 guests altogether in 5 big rooms plus another single room guest house (new site), the newly constructed 4-level staircase to the Science Building in the main site, and a newly acquired Isuzu Van to augment two old school buses. 2014 The High School Department was awarded by the Scholastic as the Highest Growth in Lexile. SPU Surigao was authorized by CHED to operate the 2nd year and 3rd year levels of the Bachelor of Physical Education Major in School P.E. program effective AY 2014-2015 dated on February 14, 2014. February 24, 2014 -SPU Surigao was granted government recognition to operate the Bachelor of Science in Mining Engineering from 1st year to 5th year levels and Bachelor of Library and Information Science programs. March 5, 2014 - Awarded as Accredited Competency Assessment Center by TESDA for Housekeeping NC II, Computer Hardware Servicing NC II and Food and Beverage Services NC II. March 19, 2014 - SPU Surigao held its Diamond High School Graduation (75th). June 20, 2014 - Granted government permits to operate the Senior High School Program – Academic Track: General Academic and Accountancy & Business Management; TechVoc Track: Industrial Arts August 2, 2014 – Blessing of the CARAGA Culinary Center and C3 Café that were located at the ground floor, LFD building, Main Campus. September 2, 2014 - ISO Surveillance Audit. Result: SPU Surigao was reconfirmed as an ISO certified firm up to November 16, 2015. December 4, 2014 – Groundbreaking Ceremony of St. Paul Surigao University Hospital Incorporated (SPSUHI) at Km. 3, Barangay Luna, Surigao City. 
University Seal The University Seal is the insignia of the St. Paul University Surigao. The Seal was adapted from the coat of arms of the Sisters of St. Paul of Chartres (SPC) the founder and the owner of St. Paul University Surigao and other Paulinian institutions. Programs and Courses Graduate School Doctor of Philosophy (PhD) Major: Educational Management Doctor of Business Management (DBM) Master of Arts (MA) Major: English Filipino Science Math Home Economics Educational Management Master of Science in Information Technology (MSIT) Master of Business Administration (MBA) Master of Public Administration (MPA) College of Business & Technology Bachelor of Science in Accountancy Bachelor of Science in Business Administration Major: Human Resource Development Management Financial Management Marketing Management Bachelor of Science in Computer Science Bachelor of Science in Information Technology Bachelor of Science in Office Administration Bachelor of Science in Public Administration Bachelor of Science in Accounting Technology Bachelor of Science in Hotel Restaurant Management Bachelor of Science in Tourism Management 2 year Computer Secretarial 2 year Hotel & Restaurant Management 2 year Computer Hardware Servicing College of Arts and Sciences Bachelor of Arts (AB) Major: Philosophy Sociology English Language Mass Communication Political Science Bachelor of Science in Mathematics Bachelor of Library and Information Science College of Engineering Bachelor of Science in Civil Engineering Bachelor of Science in Computer Engineering Bachelor of Science in Mining Engineering College of Health Sciences Bachelor of Science in Nursing Bachelor of Science in Psychology College of Teacher Education Bachelor in Elementary Education Bachelor in Secondary Education Major: English Filipino Science Mathematics Biological Home Economics Business Technology Physical Science Social Studies P.E., Health and Music Library and Info Science College of Criminology Bachelor of Science in Criminology Basic Education Secondary education, Elementary education, Pre-School Other Soon to be offered Bachelor in Agricultural Technology Diploma in Agricultural Technology Mining Courses Citations References http://www.spusedu.com University Student Handbook 2015 Edition External links St. Paul University Surigao Blog Catholic universities and colleges in the Philippines Catholic elementary schools in the Philippines Catholic secondary schools in the Philippines Universities and colleges in Surigao del Norte Schools in Surigao City Educational institutions established in 1906 1906 establishments in the Philippines
1393711
https://en.wikipedia.org/wiki/Atari%20DOS
Atari DOS
Atari DOS is the disk operating system used with the Atari 8-bit family of computers. Operating system extensions loaded into memory were required in order for an Atari computer to manage files stored on a disk drive. These extensions to the operating system added the disk handler and other file management features. The most important extension is the disk handler. In Atari DOS 2.0, this was the File Management System (FMS), an implementation of a file system loaded from a floppy disk. This meant that additional RAM was needed to run with DOS loaded.
Versions
There were several versions of Atari DOS available, with the first version released in 1979. Atari was using a cross assembler with Data General AOS.
DOS 1.0
In the first version of DOS from Atari, all commands were accessible only from the menu. It was bundled with the 810 disk drives. This version was entirely memory resident, which made it fast but occupied memory space.
DOS 2.0
Also known as DISK OPERATING SYSTEM II VERSION 2.0S. The second, more popular version of DOS from Atari was bundled with the 810 disk drives and some early 1050 disk drives. It is considered to be the lowest common denominator for Atari DOSes, as any Atari-compatible disk drive can read a disk formatted with DOS 2.0S. DOS 2.0S consisted of DOS.SYS and DUP.SYS. DOS.SYS was loaded into memory, while DUP.SYS contained the disk utilities and was loaded only when the user exited to DOS. In addition to bug fixes, DOS 2.0S featured improved NOTE/POINT support and the ability to automatically run an Atari executable file named AUTORUN.SYS. Since user memory was erased when DUP.SYS was loaded, an option to create a MEM.SAV file was added. This stored user memory in a temporary file (MEM.SAV) and restored it after DUP.SYS was unloaded. The previous menu option from DOS 1.0, N. DEFINE DEVICE, was replaced with N. CREATE MEM.SAV in DOS 2.0S. Version 2.0S was for single-density disks; 2.0D was for double-density disks. 2.0D shipped with the 815 Dual Disk Drive, which was both expensive and incompatible with the standard 810, and thus sold only in small numbers, making DOS version 2.0D rare and unusual.
DOS 3
A new version of DOS that originally came bundled with the 5.25-inch Atari 1050 disk drive. It made use of the new Enhanced Density (ED) capability (referred to by Atari as Dual Density), which increased storage from 88 KB to 130 KB per disk. There was a single-density (88 KB) formatting option to maintain compatibility with older Atari 810 disk drives. By organizing sectors into blocks, Atari was anticipating larger-capacity floppy disks, but this resulted in incompatibility with DOS 2.0S. Files converted to DOS 3 could not be converted back to DOS 2.0. As a result, DOS 3 was extremely unpopular and did not gain widespread acceptance amongst the Atari user community. DOS 3 provided built-in help via the Atari HELP key and/or the inverse key. Help files needed to be present on the system DOS disk to function properly. DOS 3 also used special XIO commands to control disk operations within BASIC programs.
DOS 2.5
Also known as DISK OPERATING SYSTEM II VERSION 2.5. Version 2.5 is an upgrade to 3.0. After listening to complaints from their customers, Atari released an improved version of their previous DOS. It allowed the use of Enhanced Density disks, and a utility was provided to read DOS 3 disks. An additional option was added to the menu (P. FORMAT SINGLE) to format single-density disks. DOS 2.5 was shipped with 1050 disk drives and some early XF551 disk drives.
Included utilities were DISKFIX.COM, COPY32.COM, SETUP.COM and RAMDISK.COM. DOS 4.0 Codename during production: QDOS DOS 4.0 was designed for the never-released 1450XLD. The rights were returned to the author, Michael Barall, who placed it in the public domain. It was later published by Antic Software. DOS 4.0 used blocks instead of single sectors, and supported single, enhanced, and double density, as well as both single- and double-sided drives. DOS 4.0 was not compatible with DOS 2 or 3 disks but could read files from them. It also did not automatically switch densities, and it was necessary to go to the menu and manually select the correct density. DOS XE Codename during production: ADOS DOS XE supported the double-density and double-sided capabilities of the Atari XF551 drive, as well as its burst I/O. DOS XE used a new disk format which was incompatible with DOS 2.0S and DOS 2.5, requiring a separate utility for reading older 2.0 files. It also required bank-switched RAM, so it did not run on the 400/800 machines. It supported date-stamping of files and sub-directories. DOS XE was the last DOS made by Atari for the Atari 8-bit family. Third-party DOS programs Many of these DOSes were released by manufacturers of third-party drives, anyone who made drive modifications, or anyone who was dissatisfied with the available DOSes. Often, these DOSes could read disks in higher densities, and could set the drive to read disks faster (using Warp Speed or Ultra-Speed techniques). Most of these DOSes (except SpartaDOS) were DOS 2.0 compatible. DOS 2.6 Someone in the Atari hacker community modified DOS 2.0 to add a few features and allow the use of dual density disk drives, with the "look and feel" of DOS 2.0. One new feature added was "RADIX", which one could use to translate hexadecimal numbers to base 10 or base 10 to hex. SmartDOS Menu driven DOS that was compatible with DOS 2.0. Among the first third-party DOS programs to support double-density drives. Many enhancements including sector copying and verifying, speed checking, turning on/off file verifying and drive reconfiguration. Published by Rana Systems. Written by John Chenoweth and Ron Bieber, last version 8.2D. OS/A+ and DOS XL DOS produced by Optimized Systems Software. Compatible with DOS 2.0 - Allowed the use of Double Density floppies. Unlike most ATARI DOSses, this used a command line instead of a menu. DOS XL provided a menu program in addition to the command line. SuperDOS This DOS could read SS/SD, SS/ED, SS/DD and DS/DD disks, and made use of all known methods of speeding up disk-reads supported by the various third-party drive manufacturers. Published by Technical Support. Written by Paul Nicholls. Top-DOS Menu driven DOS with enhanced features. Sorts disk directory listings and can set display options. File directory can be compressed. Can display deleted files and undelete them. Some advanced features required a proprietary TOP-DOS format. Published by Eclipse Software. Written by R. K. Bennett. Turbo-DOS This DOS supports Turbo 1050, Happy, Speedy, XF551 and US Doubler highspeed drives. XL/XE only. Published by Martin Reitershan Computertechnik. Written by Herbert Barth and Frank Bruchhäuser. MyDOS This DOS adds the ability to use sub-directories, and supports hard-drives. Published by Wordmark Systems, includes complete source code. SpartaDOS This DOS used a command-line interface. Was not compatible with DOS 2.0, but could read DOS 2.0 disks. 
Supports subdirectories and hard drives being capable of handling filesystems sized up to 16 MB. Included the capability to create primitive batch files. SpartaDOS X A more sophisticated version of SpartaDOS, which strongly resembles MS-DOS in its look and feel. It was shipped on a 64 KB ROM cartridge. RealDOS A SpartaDOS compatible DOS (in fact, a renamed version of SpartaDOS 3.x, due to legal reasons). RealDOS is Shareware by Stephen J. Carden and Ken Ames. BW-DOS A SpartaDOS compatible DOS, the last version 1.30 was released in December 1995. It has a much lower memory footprint compared to the original SpartaDOS and does not use the RAM under the ROM of XL/XE machines, allowing it to be used on the older Atari 400/800 models. BW-DOS is Freeware by Jiří Bernasek. XDOS XDOS is Freeware by Stefan Dorndorf. Disk formats A number of different formats existed for Atari disks. Atari DOS 2.0S, single-sided, single-density disk had 720 sectors divided into 40 tracks. After formatting, 707 sectors were free. Each 128-byte sector used the last 3 bytes for housekeeping data (bytes used, file number, next sector), leaving 125 bytes for data. This meant each disk held 707 × 125 = 88,375 bytes of user data. The single-density disk holding a mere 88 KB per side remained the most popular Atari 8-bit disk format throughout the series' lifetime, and almost all commercial software continued to be sold in that format (or variants of it modified for copy protection), since it was compatible with all Atari-made disk drives. Single-Sided, Single-Density: 40 tracks with 18 sectors per track, 128 bytes per sector. 90 KB capacity. Single-Sided, Enhanced-Density: 40 tracks with 26 sectors per track, 128 bytes per sector. 130 KB capacity. Readable by the 1050 and the XF551. Single-Sided, Double-Density: 40 tracks with 18 sectors per track, 256 bytes per sector. 180 KB capacity. Readable by the XF551, the 815, or modified/upgraded 1050. Double-Sided, Double-Density: 80 tracks (40 tracks per side) with 18 sectors per track, 256 bytes per sector. 360 KB capacity. Readable by the XF551 only. Percom standard In 1978, Percom established a double-density layout standard which all other manufacturers of Atari-compatible disk drives such as Indus, Amdek, and Rana —except Atari itself— followed. A configuration block of 12 bytes defines the disk layout. References Notes (Online version) Mapping the Atari, Revised Edition by Ian Chadwick External links Atari DOS Reference Manual — Reference manual for DOS 3. Antic Vol.4 No.3 Everything You Wanted To Know About Every DOS Atari Dos 4 (aka ANTIC Dos aka QDOS) Documentation on Atari DOS 4 MyDOS Source Code from Wordmark Systems. Atari 8-bit family software Atari operating systems Disk operating systems Discontinued operating systems 1979 software
262830
https://en.wikipedia.org/wiki/Translatio%20imperii
Translatio imperii
Translatio imperii (Latin for "transfer of rule") is a historiographical concept that originated from the Middle Ages, in which history is viewed as a linear succession of transfers of an imperium that invests supreme power in a singular ruler, an "emperor" (or sometimes even several emperors, e.g., the Eastern Roman Empire and the Western Holy Roman Empire). The concept is closely linked to translatio studii (the geographic movement of learning). Both terms are thought to have their origins in the second chapter of the Book of Daniel in the Hebrew Bible (verses 39–40). Definition Jacques Le Goff describes the translatio imperii concept as "typical" for the Middle Ages for several reasons: The idea of linearity of time and history was typical for the Middle Ages; The translatio imperii idea typically also neglected simultaneous developments in other parts of the world (of no importance to medieval Europeans); The translatio imperii idea didn't separate "divine" history from the history of "worldly power": medieval Europeans considered divine (supernatural) and material things as part of the same continuum, which was their reality. Also the causality of one reign necessarily leading to its successor was often detailed by the medieval chroniclers, and is seen as a typical medieval approach. Each medieval author described the translatio imperii as a succession leaving the supreme power in the hands of the monarch ruling the region of the author's provenance: Adso of Montier-en-Der (French area, 10th century): Roman Empire → Carolingian Franks → Saxons Otto of Freising (living in German region): Rome → Franks → Longobards → Germans (=Holy Roman Empire); Chrétien de Troyes (living in medieval France): Greece → Rome → France Richard de Bury (England, 14th century): "Athens" (Greece) → Rome → "Paris" (France) → England Ibrahim Pasha (Ottoman Empire, 16th century) Roman Empire → Eastern Roman Empire → Seljuk Empire → Sultanate of Rum → Ottoman Empire Snorri Sturluson (Prose Edda Prologue, Iceland/Norway, 13th c.): "Troy" (Turkey) → "Thrúdheim" (Thrace) → Norway Later, continued and reinterpreted by modern and contemporary movements and authors (some known examples): Fifth Monarchists (England, 17th century): Caldeans (Babylonians) → Persians → Macedonian Empire → Rome → England (and the British Empire later) António Vieira (Portugal, 17th century): Assyro-Caldeans (Babylonians) → Persians → Greeks → Romans → Portuguese Empire Fernando Pessoa (Portugal, 20th century): Greece → Rome → Christianity → Europe → Portugal Medieval and Renaissance authors often linked this transfer of power by genealogically attaching a ruling family to an ancient Greek or Trojan hero; this schema was modeled on Virgil's use of Aeneas (a Trojan hero) as progenitor of the city of Rome in his Aeneid. Continuing with this tradition, the twelfth-century Anglo-Norman authors Geoffrey of Monmouth (in his Historia Regum Britanniae) and Wace (in his Brut) linked the founding of Britain to the arrival of Brutus of Troy, son of Aeneas. In a similar way, the French Renaissance author Jean Lemaire de Belges (in his Les Illustrations de Gaule et Singularités de Troie) linked the founding of Celtic Gaul to the arrival of the Trojan Francus (i.e. Astyanax), the son of Hector; and of Celtic Germany to the arrival of Bavo, the cousin of Priam; in this way he established an illustrious genealogy for Pepin and Charlemagne (the legend of Francus would also serve as the basis for Ronsard's epic poem, "La Franciade"). 
From the Roman Empire/Byzantine Empire to the Holy Roman Empire The cardinal point in the idea of the translatio imperii is the link between the Roman Empire/Byzantine Empire and the Holy Roman Empire. Emperor Constantine I established Constantinople, a New Rome, as a second capital of the Roman Empire in 330. After the death of Emperor Theodosius I (347–395), the Roman Empire was permanently divided into the Western and the Eastern Roman Empire (Byzantine Empire). With the demise of the Western Empire in 476/480, the Byzantine Empire remained the sole Roman Empire. Byzantine Emperor Constantine V married his son Leo IV to Irene of Athens on 17 December 768; she had been brought to Constantinople by Constantine V on 1 November 768. On 14 January 771, Irene gave birth to a son, Constantine. Following the deaths of Constantine V in 775 and Leo IV in 780, Irene became regent for their nine-year-old son, Constantine VI. As early as 781, Irene began to seek a closer relationship with the Carolingian dynasty and the Papacy. She negotiated a marriage between her son Constantine and Rotrude, a daughter of the ruling Frankish king, Charlemagne. Irene went as far as to send an official to instruct the Frankish princess in Greek; however, Irene herself broke off the engagement in 787, against her son's wishes. As Constantine VI approached maturity, the relationship between mother/regent and son/emperor was increasingly strained. In 797 Irene deposed her son, who was blinded and died before 805. Some Western authorities considered the Byzantine throne, now occupied by a woman, to be vacant and instead recognized that Charlemagne, who controlled Italy and much of the former Western Roman Empire, had a valid claim to the imperial title. Pope Leo III crowned Charlemagne as Roman Emperor in 800, an act not recognized by the Byzantine Empire. Irene is said to have endeavored to negotiate a marriage between herself and Charlemagne, but according to Theophanes the Confessor, who alone mentioned it, the scheme was frustrated by Aetios, one of her favorites. In 802, Empress Irene was deposed by a conspiracy and replaced by Nikephoros I. She was exiled and died the following year. The Pax Nicephori, a peace treaty, was concluded in 803 between the Holy Roman Emperor Charlemagne and the Byzantine Emperor Nikephoros I, Basileus of the Eastern Roman Empire. Charlemagne was recognized as Emperor (Basileus) in 812 by Emperor Michael I Rangabe of the Byzantine Empire (crowned on 2 October 811 by the Patriarch of Constantinople), after the latter reopened negotiations with the Franks. While acknowledging Charlemagne strictly as “Emperor”, Michael only referred to himself as “Emperor of the Romans”. In exchange for that recognition, Venice was returned to the Byzantine Empire. On 2 February 962, Otto I was solemnly crowned Holy Roman Emperor by Pope John XII. Ten days later, at a Roman synod, Pope John XII, at Otto's desire, founded the Archbishopric of Magdeburg and the Bishopric of Merseburg, bestowed the pallium on the Archbishop of Salzburg and the Archbishop of Trier, and confirmed the appointment of Rather as Bishop of Verona. The next day, the emperor issued a decree, the famous Diploma Ottonianum, in which he confirmed the Roman Church in its possessions, particularly those granted by the Donation of Pepin. On 14 April 972, Otto I married his son and heir Otto II to the Byzantine Princess Theophanu. 
Through their wedding contract, Otto was recognized as Emperor in the West, a title Theophanu was to assume together with her husband through the Consortium imperii after his death. See also Succession of the Roman Empire Legacy of the Roman Empire Problem of two emperors Third Rome Rûm Sultanate of Rûm Rumelia Surah ar-Rum Fifth Empire Caliphate Emperor of China Mandate of Heaven References Historiography Historiography of the Middle Ages Latin words and phrases
2924421
https://en.wikipedia.org/wiki/Layered%20Service%20Provider
Layered Service Provider
Layered Service Provider (LSP) is a deprecated feature of the Microsoft Windows Winsock 2 Service Provider Interface (SPI). A Layered Service Provider is a DLL that uses Winsock APIs to attempt to insert itself into the TCP/IP protocol stack. Once in the stack, a Layered Service Provider can intercept and modify inbound and outbound Internet traffic. It allows processing of all the TCP/IP traffic taking place between the Internet and the applications that are accessing the Internet (such as a web browser, the email client, etc.). For example, it could be used by malware to redirect web browsers to rogue websites, or to block access to sites like Windows Update. Alternatively, a computer security program could scan network traffic for viruses or other threats. The Winsock Service Provider Interface (SPI) API provides a mechanism for layering providers on top of each other. Winsock LSPs are available for a range of useful purposes, including parental controls and Web content filtering. The parental controls web filter in Windows Vista is an LSP. The layering order of all providers is kept in the Winsock Catalog. Details Unlike the well-known Winsock 2 API, which is covered by numerous books, documentation, and samples, the Winsock 2 SPI is relatively unexplored. The Winsock 2 SPI is implemented by network transport service providers and namespace resolution service providers. The Winsock 2 SPI can be used to extend an existing transport service provider by implementing a Layered Service Provider. For example, quality of service (QoS) on Windows 98 and Windows 2000 is implemented as an LSP over the TCP/IP protocol stack. Another use for LSPs would be to develop specialized URL filtering software to prevent Web browsers from accessing certain sites, regardless of the browser installed on a desktop. The Winsock 2 SPI allows software developers to create two different types of service providers—transport and namespace. Transport providers (commonly referred to as protocol stacks) are services that supply functions to set up connections, transfer data, exercise flow control, perform error control, and so on. Namespace providers are services that associate the addressing attributes of a network protocol with one or more human-friendly names and enable protocol-independent name resolution. The SPI also allows developers to create two types of transport service providers—base and layered service providers. Base service providers implement the actual details of a transport protocol: setting up connections, transferring data, and exercising flow control and error control. Layered service providers implement only higher-level custom communication functions and rely on an existing underlying base provider for the actual data exchange with a remote endpoint. Winsock 2 LSPs are implemented as Windows DLLs with a single exported entry function, WSPStartup. All other transport SPI functions are made accessible to ws2_32.dll or an upper chain layered provider via the LSP's dispatch table. LSPs and base providers are strung together to form a protocol chain. The LSP DLL has to be registered using a special LSP registration program, which tells Winsock 2 the loading order of the LSPs (there can be more than one LSP installed) and which protocols to intercept. LSPs work by intercepting Winsock 2 commands before they are processed by ws2_32.dll; they can therefore modify the commands, drop a command, or just log the data, which makes them a useful tool for malware, network filters, network interceptors, and stream-based sniffers. 
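The dispatch-table mechanism described above can be illustrated with a heavily simplified, hypothetical C sketch. This is not production LSP code: a real provider must walk the protocol chain in lpProtocolInfo, locate the next provider's DLL with WSCGetProviderPath, load it, and resolve its WSPStartup; that plumbing is omitted here, and the g_nextWSPStartup pointer is assumed to have been obtained that way. The sketch only shows the core idea of calling down the chain and then splicing a hook into the returned dispatch table.

```c
/* Minimal, hypothetical LSP sketch: forward everything to the provider below,
   but intercept WSPSend. Build as a DLL and export WSPStartup (e.g. via a .def
   file); registration in the Winsock catalog is a separate installer step. */
#include <winsock2.h>
#include <ws2spi.h>

typedef int (WSPAPI *NEXT_WSPSTARTUP_FN)(WORD, LPWSPDATA, LPWSAPROTOCOL_INFOW,
                                         WSPUPCALLTABLE, LPWSPPROC_TABLE);

static WSPPROC_TABLE     g_nextProcTable;   /* dispatch table of the lower provider      */
static NEXT_WSPSTARTUP_FN g_nextWSPStartup; /* assumption: resolved from the next DLL in */
                                            /* the protocol chain (plumbing not shown)   */

/* Replacement for the lower provider's WSPSend: inspect, then forward. */
static int WSPAPI LspWSPSend(SOCKET s, LPWSABUF lpBuffers, DWORD dwBufferCount,
                             LPDWORD lpNumberOfBytesSent, DWORD dwFlags,
                             LPWSAOVERLAPPED lpOverlapped,
                             LPWSAOVERLAPPED_COMPLETION_ROUTINE lpCompletionRoutine,
                             LPWSATHREADID lpThreadId, LPINT lpErrno)
{
    /* Outbound data could be logged, filtered, or modified here. */
    return g_nextProcTable.lpWSPSend(s, lpBuffers, dwBufferCount,
                                     lpNumberOfBytesSent, dwFlags, lpOverlapped,
                                     lpCompletionRoutine, lpThreadId, lpErrno);
}

/* The single exported SPI entry point, called by ws2_32.dll or by the LSP above us. */
int WSPAPI WSPStartup(WORD wVersionRequested, LPWSPDATA lpWSPData,
                      LPWSAPROTOCOL_INFOW lpProtocolInfo,
                      WSPUPCALLTABLE UpcallTable, LPWSPPROC_TABLE lpProcTable)
{
    /* Initialise the provider below us; it fills in lpProcTable. */
    int rc = g_nextWSPStartup(wVersionRequested, lpWSPData, lpProtocolInfo,
                              UpcallTable, lpProcTable);
    if (rc != 0)
        return rc;

    g_nextProcTable = *lpProcTable;       /* keep the original table for forwarding */
    lpProcTable->lpWSPSend = LspWSPSend;  /* splice our hook into the chain         */
    return 0;
}
```

The same pattern applies to any other entry in the dispatch table (lpWSPRecv, lpWSPConnect, and so on): save the lower provider's pointer, substitute your own wrapper, and forward after doing whatever inspection or modification the LSP exists for.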
Sniffing network traffic through LSP can sometimes be troublesome since anti-virus vendors typically flag such activity as malicious — a network packet analyzer is therefore a better alternative for capturing network traffic. A feature of LSP and Winsock proxy sniffing is that they allow traffic to be captured from a single application and also enable traffic going to localhost (127.0.0.1) to be sniffed on Windows. There are two kinds of LSP: IFS and non IFS LSP. Currently most LSPs on the market are non IFS. The difference between the two LSPs is that non IFS LSPs modify the socket handle to a non valid Windows IFS handle and therefore the LSP must implement all Winsock 2 methods. IFS LSPs, on the other hand, preserve the socket handle, which allows the LSP to implement only the functions it wants to intercept. IFS LSPs have much less performance impact than non IFS LPS, but they are limited by the fact that they cannot inspect or modify data on the receive path. Deprecation and LSP bypass LSPs have been deprecated since Windows Server 2012. Systems that include LSPs will not pass the Windows logo checks. Windows 8 style "metro" apps that use networking will automatically bypass all LSPs. The Windows Filtering Platform provides similar functionality and is compatible with both Windows 8 style "metro" apps and conventional desktop applications. Corruption issues A major issue with LSPs is that any bugs in the LSP can cause applications to break. For example, an LSP that returns the wrong number of bytes sent through an interface can cause applications to go into an infinite loop while waiting for the network stack to indicate that data has been sent. Another major common issue with LSPs was that if they were to be removed or unregistered improperly or if the LSP was buggy, it would result in corruption of the Winsock catalog in the registry, and the entire TCP/IP stack would break and the computer could no longer access the network. LSP technology is often exploited by spyware and adware programs in order to intercept the communication across the Internet. For example, malware may insert itself as an LSP in the network stack and forward all of the user's traffic to an unauthorized external site, where it can be data-mined to find the user's interests to bombard him/her with targeted advertisements, as well as spam e-mail. If a malware LSP is not removed correctly, older versions of Windows may be left without a working network connection. Such potential loss of all network connectivity is prevented in Windows XP Service Pack 2, Windows Server 2003 Service Pack 1 and all later Windows operating systems, in which Winsock has the ability to self-heal after a user uninstalls such an LSP. Installed LSPs can be viewed using the XP/Vista Windows Defender's Software Explorer or using third-party utilities. References Unraveling the Mysteries of Writing a Winsock 2 Layered Service Provider - Microsoft Systems Journal Categorizing LSPs and Applications External links New PowerPoint Trojan installs itself as LSP The "Dark Side of Winsock": PDF of a DefCon presentation dealing with the creation and exploitation of Winsock Layered Service Providers the "Dark Side of Winsock": Video of same presentation - Microsoft application programming interfaces Windows communication and services
194445
https://en.wikipedia.org/wiki/SWTPC
SWTPC
Southwest Technical Products Corporation, or SWTPC, was an American producer of electronic kits, and later complete computer systems. It was incorporated in 1967 in San Antonio, Texas, succeeding the Daniel E. Meyer Company. In 1990, SWTPC became Point Systems, before ceasing a few years later. History In the 1960s, many hobbyist electronics magazines such as Popular Electronics and Radio-Electronics published construction articles, for many of which the author would arrange for a company to provide a kit of parts to build the project. Daniel Meyer published several popular projects and successfully sold parts kits. He soon started selling kits for other authors such as Don Lancaster and Louis Garner. Between 1967 and 1971, SWTPC sold kits for over 50 Popular Electronics articles. Most of these kits were intended for audio use, such as hi-fi, utility amplifiers, and test equipment such as a function generator based on the Intersil ICL8038. Many of these early kits used analog electronics technology, since digital technology was not yet affordable for most hobbyists. Some of the kits took advantage of new integrated circuits to allow low-cost construction of projects. For example, the new Signetics NE565 phase-locked loop chip was the core of a subsidiary communications authority (SCA) decoder board, which could be built and added to an FM radio to demodulate special programming (often, background music) not previously available to the general public. FCC regulations did not ban reception or decoding of radio transmissions, but SCA demodulation had previously required complex and expensive circuitry. Another popular new integrated circuit was the Signetics NE555, a versatile and low-cost timing oscillator chip, which was used in signal generators and simple timers. In 1972, SWTPC had a large enough collection of kits to justify printing a 32-page catalog. In January 1975, SWTPC introduced a computer terminal kit, the "TV Typewriter", or CT-1024. By November 1975, they were delivering complete computer kits based on Motorola MPUs. They were very successful for the next 5 or so years and grew to over 100 employees. As the new market evolved rapidly, most of the companies that were selling a computer kit in 1975 were out of business by 1978. Around 1987, SWTPC moved to selling point of sale computer systems, eventually changing its name to Point Systems. This new company lasted only a few years. Microcomputer pioneers When microprocessors (CPU chips) became available, SWTPC became one of the first suppliers of microcomputers to the general public, focusing on designs using the Motorola 6800 and, later, the 6809 CPUs. Many of these products were available in kit form as well. SWTPC also designed and supplied computer terminals, chassis, processor cards, memory cards, motherboards, I/O cards, disk drive systems, and tape storage systems. From the older "TV Typewriter" design a Video terminal had evolved the CT-64 terminal system, which was an essential part of many early SWTPC systems. Later a more intelligent version of this terminal, the CT-82, was introduced, and a graphical terminal the GT-6144 Graphics Terminal. Still later a SS-50 bus plug-in board, the "Data Systems 68 6845 Video Display Board" was introduced, and a keyboard could be connected to this board. With this solution an external terminal was no longer needed. 
SWTPC's SS-50 backplane bus was also supported or used by other manufacturers: (Midwest Scientific, Smoke Signal Broadcasting, Gimix, Helix, Tano, Percom Data, Safetran), etc. It was extended to the SS-64 (for the 68000 CPU) by Helix. SWTPC also designed one of the first affordable printers available for microcomputer users; it was based on a receipt printer mechanism. Technical Systems Consultants, first of West Lafayette, Indiana (ex Purdue University) and later of Chapel Hill, North Carolina, was the foremost supplier of software for SWTPC compatible hardware. Their software included operating systems (Flex, mini-FLEX, FLEX09, and UniFLEX) and various languages (several BASIC variants, FORTRAN, Pascal, C, assemblers, etc.) and other applications. Other software, from third parties, included Introl's C compiler, Omegasoft's Pascal compiler, the Lucidata Pascal system (from Cambridge, UK), and assorted spread sheets and text processors. By about 1980, TSC had developed a Unix-like multi-user, multi-programming operating system (UniFlex), for 6809 systems with DMA 8" floppy disks and extended memory. Several of TSC's languages were ported to the UniFlex, as was the Lucidata Pascal system. SWTPC's software catalog included the TSC software, and software from many other sources (including SWTPC itself). Much of it was also available in source code, at a higher price. Inspired by People's Computer Company's call for Tiny BASICs, Robert Uiterwyk wrote the MICRO BASIC 1.3 interpreter for the SWTPC 6800, which SWTPC published in the June 1976 issue of the SWTPC newsletter. Uiterwyk had handwritten the language on a legal tablet. He later expanded the language to 4K, adding support for floating point; this implementation was unique among BASIC interpreters by using Binary Coded Decimal to 9 digits of precision, with a range up to 10E99. An 8K version added string variables and trigonometry functions. Both the 4K and 8K versions were sold by SWTPC. In January, 1978, Uiterwyk sold the rights of the source code to Motorola. Product gallery References External links SWTPC product history website – By Bill Dawson and Michael Holley; DeRamp SWTPC 6800 Pages, including a lot of information from Michael Holley and a mirror of Michael Holly's site above. SWTPC page at Old Computer Museum SWTPC M6800 at PC-History.org SWTPC M6800 specs and pictures at Erik Klein's computer page Flex User Group Home pages sponsored by Micheal Evenson 6800/6809 Flex Emulator for x86 based Microsoft operating systems Exorsim Open source 6800 Flex (and Motorola EXORciser) Emulator for Linux/Cygwin American companies established in 1967 American companies disestablished in 1990 Companies based in San Antonio Computer companies established in 1967 Computer companies disestablished in 1990 Defunct companies based in Texas Defunct computer companies of the United States Defunct computer hardware companies Early microcomputers Electronic kit manufacturers Electronics companies of the United States
14322929
https://en.wikipedia.org/wiki/GNU%20Affero%20General%20Public%20License
GNU Affero General Public License
The GNU Affero General Public License is a free, copyleft license published by the Free Software Foundation in November 2007, and based on the GNU General Public License, version 3 and the Affero General Public License. The Free Software Foundation has recommended that the GNU AGPLv3 be considered for any software that will commonly be run over a network. The Free Software Foundation explains the need for the license in the case when a free program is run on a server: The GNU Affero General Public License is a modified version of the ordinary GNU GPL version 3. It has one added requirement: if you run a modified program on a server and let other users communicate with it there, your server must also allow them to download the source code corresponding to the modified version running there. The purpose of the GNU Affero GPL is to prevent a problem that affects developers of free programs that are often used on servers. The Open Source Initiative approved the GNU AGPLv3 as an open source license in March 2008 after the company Funambol submitted it for consideration through its CEO Fabrizio Capobianco. Compatibility with the GPL GNU AGPLv3 and GPLv3 licenses each include clauses (in section 13 of each license) that together achieve a form of mutual compatibility for the two licenses. These clauses explicitly allow the "conveying" of a work formed by linking code licensed under the one license against code licensed under the other license, despite the licenses otherwise not allowing relicensing under the terms of each other. In this way, the copyleft of each license is relaxed to allow distributing such combinations. Examples of applications under GNU AGPL Stet was the first software system known to be released under the GNU AGPL, on November 21, 2007, and is the only known program to be used mainly for the production of its own license. Flask developer Armin Ronacher noted in 2013 that the GNU AGPL is a "terrible success, especially among the startup community" as a "vehicle for dual commercial licensing", and gave Humhub, MongoDB, Odoo, RethinkDB, Shinken, Slic3r, SugarCRM, and WURFL as examples. MongoDB dropped the AGPL in late-2018 in favor of the "Server Side Public License" (SSPL), a variation of GPLv3 that requires those who provide "the program as a service", accessible to third-parties, to make the entire source code of all software used to facilitate the service available under the same license. The SSPL has been rejected by the Open Source Initiative and banned by both Debian and the Fedora Project, who state that the license's intent is to discriminate against cloud computing providers offering services based on the software without purchasing its commercial license. Criticism Héctor Martín Cantero has criticized the Affero GPL for being an EULA and causing side effects. See also References External links for GNU Affero General Public License (GNU AGPL). also includes info on version 2 of the Affero GPL. Free Software Foundation Free and open-source software licenses Copyleft software licenses
26465920
https://en.wikipedia.org/wiki/3%20GB%20barrier
3 GB barrier
In computing, the term 3 GB barrier refers to a limitation of some 32-bit operating systems running on x86 microprocessors. It prevents the operating systems from using all of 4 GiB (2^32 bytes) of main memory. The exact barrier varies by motherboard and I/O device configuration, particularly the size of video RAM; it may be in the range of 2.75 GB to 3.5 GB. The barrier is not present with a 64-bit processor and 64-bit operating system, or with certain x86 hardware and an operating system such as Linux or certain versions of Windows Server and macOS that allow use of Physical Address Extension (PAE) mode on x86 to access more than 4 GiB of RAM. Whatever the actual position of the "barrier", there is no code in operating system software nor any hardware architectural limit that directly imposes it. Rather, the "barrier" is the result of interactions between several aspects of both. Physical address limits Many 32-bit computers have 32 physical address bits and are thus limited to 4 GiB (2^32 words) of memory. x86 processors prior to the Pentium Pro have 32 or fewer physical address bits; however, most x86 processors since the Pentium Pro, which was first sold in 1995, have the Physical Address Extension (PAE) mechanism, which allows addressing up to 64 GiB (2^36 words) of memory. PAE is a modification of the protected mode address translation scheme which allows virtual or linear addresses to be translated to 36-bit physical addresses, instead of the 32-bit addresses available without PAE. The CPU pinouts likewise provide 36 bits of physical address lines to the motherboard. Many x86 operating systems, including any version of Linux with a PAE kernel and some versions of Windows Server and macOS, can use PAE to address up to 64 GiB of memory on an x86 system. There are other factors that may limit this ability to use up to 64 GiB of memory, and lead to the "3 GB barrier" under certain circumstances, even on processors that implement PAE. These are described in the following sections. Chipset and other motherboard issues Although, as noted above, most x86 processors from the Pentium Pro onward are able to generate physical addresses up to 64 GiB, the rest of the motherboard must participate in allowing RAM above the 4 GiB point to be addressed by the CPU. Chipsets and motherboards allowing more than 4 GiB of RAM with x86 processors do exist, but in the past, most of those intended for other than the high-end server market could access only 4 GiB of RAM. This, however, is not sufficient to explain the "3 GB barrier" that appears even when running some x86 versions of Microsoft Windows on platforms that can access more than 4 GiB of RAM. Memory-mapped I/O and disabled RAM Modern personal computers are built around a set of standards that depend on, among other things, the characteristics of the original PCI bus. The original PCI bus implemented 32-bit physical addresses and 32-bit-wide data transfers. PCI (and PCI Express and AGP) devices present at least some, if not all, of their host control interfaces via a set of memory-mapped I/O locations (MMIO). The address space in which these MMIO locations appear is the same address space as that used by RAM, and while RAM can exist and be addressable above the 4 GiB point, these MMIO locations decoded by I/O devices cannot be. They are limited by PCI bus specifications to addresses of 0xFFFFFFFF (2^32 − 1) and below. 
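A back-of-the-envelope calculation shows how this reservation produces the barrier when the overlapping RAM is simply disabled. The 1 GiB MMIO window used below is an assumption chosen for illustration; real systems reserve anywhere from a few hundred MB to roughly 1.8 GB, as the next section notes.

```c
/* Illustrative only: how much of 4 GiB of installed RAM remains addressable
   when the top of the 32-bit address space is reserved for MMIO and the
   overlapping RAM is disabled (no remapping, no PAE). */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t installed_ram = 4ULL << 30;   /* 4 GiB fitted                   */
    const uint64_t address_limit = 1ULL << 32;   /* top of the 32-bit space        */
    const uint64_t mmio_window   = 1ULL << 30;   /* assumed 1 GiB device window    */

    uint64_t usable = address_limit - mmio_window;
    if (usable > installed_ram)                  /* cannot exceed what is fitted   */
        usable = installed_ram;

    printf("Usable RAM below 4 GiB: %.2f GiB\n", usable / (double)(1ULL << 30));
    return 0;
}
```

With these assumed numbers the program prints 3.00 GiB, which is where the barrier's common name comes from; a larger MMIO window pushes the figure lower.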
With 4 GiB or more of RAM installed, and with RAM occupying a contiguous range of addresses starting at 0, some of the MMIO locations will overlap with RAM addresses. On machines with large amounts of video memory, MMIO locations have been found to occupy as much as 1.8 GB of the 32-bit address space. The BIOS and chipset are responsible for detecting these address conflicts and disabling access to the RAM at those locations. Due to the way bus address ranges are determined on the PCI bus, this disabling is often at a relatively large granularity, resulting in relatively large amounts of RAM being disabled. Address remapping x86 chipsets that can address more than 4 GiB of RAM typically also allow memory remapping (referred to in some BIOS setup screens as "memory hole remapping"). In this scheme, the BIOS detects the memory address conflict and in effect relocates the interfering RAM so that it may be addressed by the processor at a new physical address that does not conflict with MMIO. On the Intel side, this feature once was limited to server chipsets; however, newer desktop chipsets like the Intel 955X and 965 and later have it as well. On the AMD side, the AMD K8 and later processors' built-in memory controller had it from the beginning. As the new physical addresses are above the 4 GiB point, addressing this RAM does require that the operating system be able to use physical addresses larger than 2^32. This capability is provided by PAE. Note that there is not necessarily a requirement for the operating system to support more than 4 GiB total of RAM, as the total RAM might be only 4 GiB; it is just that a portion of it appears to the CPU at addresses in the range from 4 GiB and up. This form of the 3 GB barrier affects one generation of MacBooks, lasting 1 year (Core2Duo (Merom) – November 2006 to October 2007): the prior generation was limited to 2 GiB, while later generations (November 2007 – October 2009) allowed 4 GiB through the use of PAE and memory-hole remapping, and subsequent generations (late 2009 onwards) use 64-bit processors and therefore can address over 4 GiB. Windows version dependencies The "non-server", or "client", x86 SKUs of Windows XP and later operate x86 processors in PAE mode by default when the CPU present implements the NX bit. Nevertheless, these operating systems do not permit addressing of physical memory above the 4 GiB address boundary. This is not an architectural limit; it is a limit imposed by Microsoft via license enforcement routines as a workaround for device driver compatibility issues that were allegedly discovered during testing. The "3 GB barrier" under x86 Windows "client" operating systems can therefore arise in two slightly different scenarios. In both, RAM near the 4 GiB point conflicts with memory-mapped I/O space. Either the BIOS simply disables the conflicting RAM; or, the BIOS remaps the conflicting RAM to physical addresses above the 4 GiB point, but x86 Windows client editions refuse to use physical addresses higher than that, even though they are running with PAE enabled. The conflicting RAM is therefore unavailable to the operating system whether it is remapped or not. 
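To make the remapping case concrete, the sketch below (again with an assumed 1 GiB MMIO window) computes where the displaced RAM ends up and why those addresses no longer fit in 32 bits, which is exactly the situation in which PAE's 36-bit physical addresses are needed.

```c
/* Illustrative only: RAM that collided with the MMIO window is relocated by the
   chipset to start at the 4 GiB mark, so its new physical addresses need more
   than 32 bits even though only 4 GiB of RAM is installed. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t four_gib    = 1ULL << 32;
    const uint64_t mmio_window = 1ULL << 30;             /* assumed reserved size */

    uint64_t remap_start = four_gib;                     /* first remapped byte   */
    uint64_t remap_end   = four_gib + mmio_window - 1;   /* last remapped byte    */

    printf("Remapped RAM occupies 0x%09llX - 0x%09llX\n",
           (unsigned long long)remap_start, (unsigned long long)remap_end);
    printf("Reachable with 32-bit physical addresses: %s\n",
           remap_end <= 0xFFFFFFFFULL ? "yes" : "no");
    printf("Reachable with PAE's 36-bit physical addresses: %s\n",
           remap_end < (1ULL << 36) ? "yes" : "no");
    return 0;
}
```

The remapped range starts at 0x100000000, beyond the reach of 32-bit physical addressing, which is why a client edition of 32-bit Windows that refuses addresses above 4 GiB cannot use it even when the BIOS has remapped it.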
See also 640 KB barrier x86-64 PSE-36 — an alternative to PAE on x86 processors to extend the physical memory addressing capabilities from 32 bits to 36 bits PCI hole Protection ring RAM drive — a use for remapped RAM Virtual memory — which governs the memory available to processes User space — and kernel space, which imposes another limit References External links How to use full 4 GB RAM in Windows 7 32 Bit (Gavotte RAMDisk in Windows 7) Why you should forget about 4 GiB of RAM on 32-bit systems and move on X86 memory management Computer memory X86 architecture
18921118
https://en.wikipedia.org/wiki/Intimacy%20%28Bloc%20Party%20album%29
Intimacy (Bloc Party album)
Intimacy is the third studio album by English indie rock band Bloc Party. It was recorded in two weeks at several locations in London and Kent during 2008 and was produced by Jacknife Lee and Paul Epworth. The band members made the album available for purchase on their website as a digital download on 21 August 2008. Minimal promotion was undertaken in the UK. The record was released in compact disc form on 24 October 2008, with Wichita Recordings as the primary label. It peaked at number 8 on the UK Albums Chart and entered the Billboard 200 in the United States at number 18. Bloc Party wanted to create an album that further distanced the band from the traditional rock set-up by incorporating more electronic elements and unconventional musical arrangements. As the record's title suggests, its tracks are about personal relationships and are loosely based on one of frontman Kele Okereke's break-ups in 2007. Three songs were released as singles: "Mercury", "Talons", and "One Month Off"; the first two tracks entered the UK Top 40. Intimacy was generally well received by critics. Reviewers often focused on its rush-release and central theme, and considered them either bold steps or poor choices. Origins and recording Bloc Party's second album A Weekend in the City, released in 2007, allowed the quartet to evolve sonically by including more electronically tampered soundscapes, but the band members were not entirely comfortable with more daring musical arrangements when making the record. According to multi-instrumentalist Gordon Moakes, the impromptu November 2007 single "Flux" "opened a door to the fact that we could go in any direction" in future works. After the NME Big Gig in February 2008, the band members took a month off from touring and did not interact with each other during that period. Moakes felt that there were no rules when the band re-assembled for studio work. Chief lyricist Okereke completed most of the songwriting before the recording process. In mid-2008, Bloc Party attended secret sessions at studios in the south-east of England. The band aimed to use a similar process to the creation of "Flux", which was crafted in a week. Paul Epworth and Jacknife Lee—from Bloc Party's previous albums, Silent Alarm and A Weekend in the City, respectively—returned to the production staff for Intimacy, because the band members felt that they had "unfinished business" with both. Okereke has stated that having two producers allowed for musical experimentation. Epworth focused on capturing the dynamic of a live band by working on fully developed songs and emphasising the rhythm section in the mix. Lee aided the band members' evolution towards a more electronic style by creating tracks with them. Each producer worked on five of the record's original ten tracks. According to Okereke, Bloc Party wanted to make something as stylised as R&B or electronica, combining the rawness of Silent Alarm and the recording experience gained from A Weekend in the City. The frontman drew inspiration from Siouxsie and the Banshees' 1988 song "Peek-a-Boo" and aimed to create "rock interpretations of The band worked by initially performing soundchecks with only guitar chords, keyboard notes, and drum beats. Discussing the interplay between rhythm guitarist Okereke and lead guitarist Russell Lissack, Epworth has stated that "Kele will do one thing that creates a great deal of impact, whereas Russell's very good at subtle embellishments and leading the melodic side of things outside of the vocal". 
The band members decided to record the first ten tracks crafted after judging first ideas to often be the best. They "thrived" under the pressure of timed sessions, which lasted only two weeks. Moakes has indicated that there was no worry about whether a song could be recreated live in concert in the same way as it would appear on record. A brass section and a chamber choir were hired as additional musicians. Drum machines and distorted guitars were used more extensively than in Bloc Party's previous works to create a sense of manipulation to the basic rock palette. Drummer Matt Tong was initially sceptical of moulding songs with programmed drums, as opposed to using his physical output, but agreed to the idea when the band recorded some of the tracks in their entirety. On some songs, the guitars were disregarded and the band focused solely on the beat. Okereke's voice was often used as an instrument by being looped, vocoded, or run through effects pedals. Promotion and release After the studio sessions, Bloc Party embarked on a tour of North American and European summer festivals. One of the recorded tracks, "Mercury", was released as a single on 11 August 2008 and peaked at number 16 on the UK Singles Chart. At the time, the band confirmed the existence of further material, but noted that a record release date was scheduled for the end of 2008 at the earliest. Bloc Party unexpectedly announced the completion of Intimacy on 18 August 2008 via a webcast and confirmed a release within 60 hours. The band members wanted to revive the importance of a new album's release in an era in which the excitement has dissipated because of extensive Internet coverage. They were inspired by Radiohead's marketing of In Rainbows in 2007, but did not consider a "free" sale option. Little press was undertaken in the UK to promote the record because of Okereke's reluctance to discuss personal aspects of his life. Intimacy was made available for download on Bloc Party's website on 21 August 2008. Ten MP3 tracks were sold with a plain black JPEG cover for £5, and a £10 option for the online songs and the future expanded CD was also available. The album title was picked as a "double bluff" with regard to people's expectations; Okereke has explained, "You'd think of wet balladeering. You don't think it's gonna be ugly or harsh. But that's what relationships are really like. It's not just about good times." The release was called "rushed" by publications such as Billboard and The Independent. Tong disagreed with the label and stated that Bloc Party wanted to make a statement that was surprising to anyone interested in their work. The band showcased tracks from Intimacy at Reading Festival at the end of August 2008 and embarked on a North American tour during September. UK appearances on the MTV2 Gonzo Tour and the release of the second single, "Talons", preceded the physical release of the album in October, which entered the UK Albums Chart at number eight. In the US, the record sold 24,000 copies during the first week of release and debuted at number 18 on the Billboard 200. By August 2012 it had sold 85,000 copies in the United States. Comprehensive sales figures have not been published because the digital download data has not been publicly reported by Bloc Party. The chosen cover art is a stylised shot of a couple kissing, taken by freelance photographer Perry Curties. 
It was ranked at number 23 on Gigwise's list of The Best Album Covers of 2008, in which the publication called it "intimate and rather ambiguous". Content Lyrics The lyrics of Intimacy were inspired by a relationship break-up Okereke went through at the end of 2007. The lyricist told Rolling Stone, "I wouldn't want anyone to think it's the clichéd break-up record but I haven't written about true, personal experiences all that much in the past." The move to more intimate subject matter was "semi-conscious" because the band members did not want to focus on socio-political issues as they had in their previous works. Three tracks allude to Greek mythology: "Ares" draws its name from the god of war, "Trojan Horse" is named after the Trojan War military ruse, and "Zephyrus" draws its name from the god of the west wind. The narrative in the songs occurs between two people and focuses on the relations between lovers, friends, and enemies; Okereke indicated that "it's about moments of shared vulnerability". "Better Than Heaven" references the Garden of Eden and Corinthians (15:22), because the lyricist wanted to explore the themes of sex and death, especially in a biblical context. "Biko" means "Please" (or more accurately "I implore you") in Igbo—a language spoken in Nigeria, the homeland of Okereke's parents—and is used "when you're beseeching someone to do something". Okereke denied that it is about the murdered South African anti-apartheid protester Steve Biko. The lyrics of "One Month Off" reference feelings of anger and are about being in love with someone younger and unfaithful, while "Zephyrus" concerns an apology following neglect. The lyrics in the chorus of "Ion Square", the last track on the original download release, are based on E. E. Cummings' poem "I Carry Your Heart with Me". Okereke considers the song a personal favourite because it evokes the initial exciting stages of a new relationship when everything is going right. Composition Okereke has discussed a natural progression in Bloc Party's compositional style to a more explorative, electronic direction. For the opening track on Intimacy, "Ares", Okereke was inspired to rap his lyrics after listening to the old-school hip hop of Afrika Bambaataa. According to Heather Phares of AllMusic, the song includes siren-like guitar chords and loud, complex drumming in the vein of dance acts The Prodigy and The Chemical Brothers. "Mercury" continues the complex drumming theme by incorporating layered percussion and contains a vocally manipulated chorus. The track is an attempt at drum and bass and features brass dissonance, effects Okereke has called "harsh, glacial, layered and energetic". "Zephyrus" begins with a solitary vocal line accompanied only by a drum machine pattern, while the Exmoor Singers provide background vocals in the rest of the composition. "Signs" is the only song that does not include guitars; instead, it is made up of a synthesiser pulse and multitracked samples of glockenspiel and mbira resembling the work of minimalist composer Steve Reich. Okereke has conceded that Intimacy covers Bloc Party's typical indie rock elements, but noted that the guitars have an artificial and manipulated sound, "almost like all the humanity has been bleached out". "Halo" has a fast tempo coupled with a guitar melody that uses only four chords, while "Trojan Horse" features syncopated guitars and distortion. 
"Talons" also incorporates distortion from both lead and rhythm guitars, while the final single "One Month Off" consists of tribal rhythms and sixteenth note guitar riffs. "Biko" has a slower tempo and includes guitar arpeggi throughout, while "Ion Square" incorporates guitar overdubbing and the use of hi-hat patterns throughout. According to Nick Southall of Drowned in Sound, "Better Than Heaven" encapsulates what Bloc Party had been trying to achieve in their previous works, "namely aligning all their different directional desires: to swoon, to rock, and to experiment all at once". The track features broken beats and layered vocals. Critical reception Media response to Intimacy was generally favourable; aggregating website Metacritic reports a normalised rating of 69% based on 27 critical reviews. Steven Robertshaw of Alternative Press described the album as arguably Bloc Party's finest career moment and noted that it offers "sweat and circuitry, savagery and submission, and a captivating energy that's severely lacking in many music scenes on the planet". Kyle Anderson of Rolling Stone claimed that by "replacing Bloc Party's distant cool with vivid honesty, Okereke makes Intimacy a confident new peak for his band", while PopMatters' Ross Langager explained that the record "might not actually be all that intimate, but it is a thing of rough, recycled beauty". Adam Mazmanian of The Washington Times commented that the album's final mix showed that producers Epworth and Lee preserved the essence of Bloc Party's signature sound—"minor key rock thrumming with rhythmic intensity"—while taking the band in new musical directions. Dave Simpson of The Guardian concluded that it would please old and new fans alike by being "brave, individual and heartfelt". Pitchfork's Ian Cohen was less receptive and asserted that the record seems like a document of a band disconnected from its musical strengths. Josh Modell of Spin felt that Intimacy sometimes gets "sonically or lyrically precarious", while John Robinson of Uncut commented that "there's an air of slightly hedged bets". Drowned in Sounds Nick Southall claimed that the record is not quite the radical statement Bloc Party set out to achieve, but concluded that it is "definitely a little bit of invigorating redemption at a time when doubts were beginning to cloud what was, initially, a flawless reputation". In its year-end music review for 2008, Under the Radar stated about the band members, "They are so solid and so confident that it seems inevitable that they will get many chances to slowly drift into more daring lands. But without more risk, they may be destined to make albums like Intimacy – accomplished and intriguing, but not life changing, not classic." The record figured in several publications' end-of-year best album lists for 2008—notably, at number 14 by Gigwise, at number 36 by Drowned in Sound, and at number 49 by NME. Track listing The download-only release in August 2008 did not include "Talons". The iTunes version of the October release included an extra Bloc Party EP, Live from London, which contains six songs from Intimacy performed live. The deluxe edition includes access to an online exclusive film, Live and Intimate, which contains footage of Bloc Party performing several Intimacy tracks plus "Banquet" live at The Pool, Miloco Studios. In 2009, the deluxe edition of Intimacy was remixed as Intimacy Remixed by artists including Mogwai, Armand Van Helden, and No Age. 
The Gold Panda remix of "Letter to My Son" was erroneously labelled as being by Golden Panda on the Rolling Stone CD. Vinyl A standard black LP copy in a gatefold sleeve was released in October 2008 with the normal track listing, but with an original mix of "Mercury" instead of the CD version. The North American edition also included a code for the free online download of the tracks in MP3 format. A limited edition picture disc vinyl version was additionally released in the UK; it had the album cover printed on Side A and the track listing printed on Side B. Personnel The people involved in the making of Intimacy are the following: Band Kele Okereke – lead vocals, rhythm guitar, loops Russell Lissack – lead guitar Gordon Moakes – bass guitar, backing vocals, synthesizer, glockenspiel, electronic drums, sampler Matt Tong – drums, drum machine, backing vocals Brass section Avshalom Caspi – brass arrangements Guy Barker – trumpet Paul Archibald – trumpet Sid Gault – trumpet Derek Watkins – trumpet Christopher Dean – trombone Roger Harvey – trombone Dan Jenkins – trombone Colin Sheen – trombone Chamber choir James Jarvis – music director The Exmoor Singers of London – sopranos, altos, tenors, basses Production Paul Epworth – producer; programming; keyboards Jacknife Lee – producer; programming; keyboards Sam Bell – recording; additional programming Mark Rankin – recording Phil Rose – recording (choral and brass) Matt Wiggins – recording assistant Tom Hough – recording assistant Alan Moulder – mixing Darren Lawson – mixing assistant Guy Davie – mastering Artwork Perry Curties – photography Rob Crane – design Chart positions Weekly charts Year-end charts Singles Release history References External links Intimacy lyrics at Bloc Party official site Intimacy critical reviews at Metacritic Live and Intimate in photos at The Guardian 2008 albums Albums produced by Jacknife Lee Atlantic Records albums Bloc Party albums Wichita Recordings albums Albums produced by Paul Epworth