id | url | title | text
---|---|---|---|
11768428
|
https://en.wikipedia.org/wiki/QDOS
|
QDOS
|
QDOS may refer to:
QDOS (Qasar DOS), the Motorola 6800-based operating system of the Fairlight CMI digital sampling synthesizer series, based on the MDOS (Motorola DOS)
Seattle Computer Products QDOS, SCP's Quick and Dirty Operating System in 1980, later renamed to 86-DOS (predecessor of MS-DOS)
Sinclair QDOS, the Sinclair QL operating system written in Motorola 68000 assembly language
Atari QDOS, the production codename of Disk Operating System 4.0 for Atari 8-bit computers
Gazelle Systems QDOS, a file-manager-type environment for DOS, published in 1991
Qdos Entertainment, the UK-based entertainment company which is the world's largest pantomime producer
Q:Dos, a recording name for trance musicians Scott Bond, Darren Hodson, John Purser, Nick Rose
Qdos, range of no-valve metering pumps
See also
DOS (disambiguation)
Quality of service (QOS)
|
2625968
|
https://en.wikipedia.org/wiki/DO-178B
|
DO-178B
|
DO-178B, Software Considerations in Airborne Systems and Equipment Certification, is a guideline dealing with the safety of safety-critical software used in certain airborne systems. It was jointly developed by the safety-critical working group RTCA SC-167 of the Radio Technical Commission for Aeronautics (RTCA) and WG-12 of the European Organisation for Civil Aviation Equipment (EUROCAE). RTCA published the document as RTCA/DO-178B, while EUROCAE published the document as ED-12B. Although technically a guideline, it was a de facto standard for developing avionics software systems until it was replaced in 2012 by DO-178C.
The Federal Aviation Administration (FAA) applies DO-178B as the document it uses for guidance to determine if the software will perform reliably in an airborne environment, when specified by the Technical Standard Order (TSO) for which certification is sought. In the United States, the introduction of TSOs into the airworthiness certification process, and by extension DO-178B, is explicitly established in Title 14: Aeronautics and Space of the Code of Federal Regulations (CFR), also known as the Federal Aviation Regulations, Part 21, Subpart O.
Software level
The Software Level, also termed the Design Assurance Level (DAL) or Item Development Assurance Level (IDAL) as defined in ARP4754 (DO-178C only mentions IDAL as synonymous with Software Level), is determined from the safety assessment process and hazard analysis by examining the effects of a failure condition in the system. The failure conditions are categorized by their effects on the aircraft, crew, and passengers.
Catastrophic – Failure may cause a crash. Error or loss of critical function required to safely fly and land aircraft.
Hazardous – Failure has a large negative impact on safety or performance, or reduces the ability of the crew to operate the aircraft due to physical distress or a higher workload, or causes serious or fatal injuries among the passengers. (Safety-significant)
Major – Failure is significant, but has a lesser impact than a Hazardous failure (for example, leads to passenger discomfort rather than injuries) or significantly increases crew workload (safety related)
Minor – Failure is noticeable, but has a lesser impact than a Major failure (for example, causing passenger inconvenience or a routine flight plan change)
No Effect – Failure has no impact on safety, aircraft operation, or crew workload.
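The conventional correspondence between these failure condition categories and the software levels used throughout DO-178B runs from Catastrophic (Level A) down to No Effect (Level E). A minimal sketch of that mapping in C; the enum and function names are illustrative, not taken from the standard:

```c
#include <stdio.h>

/* Illustrative only: the conventional DO-178B mapping from failure
   condition category to software (design assurance) level.            */
typedef enum { CATASTROPHIC, HAZARDOUS, MAJOR, MINOR, NO_EFFECT } failure_condition;

static char software_level(failure_condition fc)
{
    switch (fc) {
    case CATASTROPHIC: return 'A';  /* most rigorous set of objectives  */
    case HAZARDOUS:    return 'B';
    case MAJOR:        return 'C';
    case MINOR:        return 'D';
    case NO_EFFECT:    return 'E';  /* outside the purview of DO-178B   */
    }
    return '?';
}

int main(void)
{
    printf("Hazardous failure condition -> Level %c\n", software_level(HAZARDOUS));
    return 0;
}
```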
DO-178B alone is not intended to guarantee software safety. Safety attributes in the design, and implemented as functionality, must receive additional mandatory system safety tasks in order to drive and show objective evidence of meeting explicit safety requirements. Typically, IEEE STD-1228-1994 Software Safety Plans are allocated and software safety analysis tasks are accomplished in sequential steps (requirements analysis, top-level design analysis, detailed design analysis, code-level analysis, test analysis and change analysis). These software safety tasks and artifacts are integral supporting parts of the process for hazard severity and DAL determination, which is documented in system safety assessments (SSA). The certification authorities require, and DO-178B specifies, that the correct DAL be established using these comprehensive analysis methods when assigning the software level A-E. Any software that commands, controls, or monitors safety-critical functions should receive the highest DAL, Level A. The software safety analyses drive the system safety assessments, which determine the DAL, which in turn drives the appropriate level of rigor in DO-178B. The system safety assessments, combined with methods such as SAE ARP4754A, determine the after-mitigation DAL and may allow the DO-178B software level objectives to be reduced if redundancy, design safety features and other architectural forms of hazard mitigation are captured in requirements driven by the safety analyses. The central theme of DO-178B is therefore design assurance and verification after the prerequisite safety requirements have been established.
The number of objectives to be satisfied (eventually with independence) is determined by the software level A-E. The phrase "with independence" refers to a separation of responsibilities where the objectivity of the verification and validation processes is ensured by virtue of their "independence" from the software development team. For objectives that must be satisfied with independence, the person verifying the item (such as a requirement or source code) may not be the person who authored the item and this separation must be clearly documented. In some cases, an automated tool may be equivalent to independence. However, the tool itself must then be qualified if it substitutes for human review.
Processes and documents
Processes are intended to support the objectives, according to the software level (A through D—Level E was outside the purview of DO-178B). Processes are described as abstract areas of work in DO-178B, and it is up to the planners of a real project to define and document the specifics of how a process will be carried out. On a real project, the actual activities that will be done in the context of a process must be shown to support the objectives. These activities are defined by the project planners as part of the Planning process.
This objective-based nature of DO-178B allows a great deal of flexibility in regard to following different styles of software life cycle. Once an activity within a process has been defined, it is generally expected that the project respect that documented activity within its process. Furthermore, processes (and their concrete activities) must have well defined entry and exit criteria, according to DO-178B, and a project must show that it is respecting those criteria as it performs the activities in the process.
The flexible nature of DO-178B's processes and entry/exit criteria make it difficult to implement the first time, because these aspects are abstract and there is no "base set" of activities from which to work. The intention of DO-178B was not to be prescriptive. There are many possible and acceptable ways for a real project to define these aspects. This can be difficult the first time a company attempts to develop a civil avionics system under this standard, and has created a niche market for DO-178B training and consulting.
For a generic DO-178B based process, a visual summary is provided including the Stages of Involvement (SOIs) defined by FAA on the "Guidance and Job Aids for Software and Complex Electronic Hardware".
Planning
System requirements are typically input to the entire project.
Output documents of the planning process:
Plan for Software Aspects of Certification (PSAC)
Software Development Plan (SDP)
Software Verification Plan (SVP)
Software Configuration Management Plan (SCMP)
Software Quality Assurance Plan (SQAP)
Software Requirements Standards (SRS)
Software Design Standards (SDS)
Software Code Standards (SCS)
The last three documents (the standards) are not required for software level D.
Development
DO-178B is not intended as a software development standard; it is software assurance using a set of tasks to meet objectives and levels of rigor.
The development process output documents:
Software requirements data (SRD)
Software design description (SDD)
Source code
Executable object code
Traceability from system requirements to all source code or executable object code is typically required (depending on software level).
Typically used software development process:
Waterfall model
Spiral model
V model
Verification
Document outputs made by this process:
Software verification cases and procedures (SVCP)
Software verification results (SVR):
Review of all requirements, design and code
Testing of executable object code
Code coverage analysis
Analysis of all code and traceability from tests and results to all requirements is typically required (depending on software level).
This process typically also involves:
Requirements based test tools
Code coverage analyzer tools
Other names for tests performed in this process can be:
Unit testing
Integration testing
Black-box and acceptance testing
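As a rough illustration of how the review, testing, coverage, and traceability activities listed above fit together, the sketch below shows a requirements-based unit test that records its trace back to a requirement. The requirement ID, function, and comment conventions are hypothetical, not prescribed by DO-178B:

```c
#include <assert.h>

/* Hypothetical low-level requirement:
   SRD-042: "The function shall limit commanded pitch rate to +/- 5 deg/s." */

/* Unit under test (illustrative implementation). */
static float limit_pitch_rate(float cmd_deg_s)
{
    if (cmd_deg_s >  5.0f) return  5.0f;
    if (cmd_deg_s < -5.0f) return -5.0f;
    return cmd_deg_s;
}

/* Verification case SVCP-042-1, traces to SRD-042.
   Normal-range and robustness (out-of-range) inputs are both exercised,
   and the results would be recorded in the SVR with this trace tag.     */
static void test_limit_pitch_rate(void)
{
    assert(limit_pitch_rate( 3.0f) ==  3.0f);  /* normal range          */
    assert(limit_pitch_rate( 9.0f) ==  5.0f);  /* upper limit enforced  */
    assert(limit_pitch_rate(-9.0f) == -5.0f);  /* lower limit enforced  */
}

int main(void) { test_limit_pitch_rate(); return 0; }
```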
Configuration management
Documents maintained by the configuration management process:
Software configuration index (SCI)
Software life cycle environment configuration index (SECI)
This process handles problem reports, changes and related activities. The configuration management process typically provides archive and revision identification of:
Source code development environment
Other development environments (e.g. test/analysis tools)
Software integration tool
All other documents, software and hardware
Quality assurance
Output documents from the quality assurance process:
Software quality assurance records (SQAR)
Software conformity review (SCR)
Software accomplishment summary (SAS)
This process performs reviews and audits to show compliance with DO-178B. The interface to the certification authority is also handled by the quality assurance process.
Certification liaison
Typically a Designated Engineering Representative (DER) reviews technical data as part of the submission to the FAA for approval.
Tools
Software can automate, assist or otherwise help in the DO-178B processes. All tools used for DO-178B development must be part of the certification process. Tools generating embedded code are qualified as development tools, with the same constraints as the embedded code. Tools used to verify the code (simulators, test execution tools, coverage tools, reporting tools, etc.) must be qualified as verification tools, a much lighter process consisting of comprehensive black-box testing of the tool.
A third-party tool can be qualified as a verification tool, but development tools must have been developed following the DO-178 process. Companies providing these kinds of tools as COTS are subject to audits from the certification authorities, to which they give complete access to source code, specifications and all certification artifacts.
Outside of this scope, the output of any tool used must be manually verified.
A problem management tool can provide traceability for changes.
SCI and SECI can be created from logs in a revision control tool.
Requirements management
Requirements traceability is concerned with documenting the life of a requirement. It should be possible to trace back to the origin of each requirement, and every change made to the requirement should therefore be documented in order to achieve traceability. Even the use of the requirement after the implemented features have been deployed should be traceable.
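One way to picture this is as a record that links a requirement backwards to its origin and forwards to design, code, tests, and change history. The sketch below is a hypothetical data layout, not a structure defined by DO-178B or any particular tool:

```c
/* Hypothetical structure for one requirement's trace data. */
typedef struct {
    const char *req_id;          /* e.g. "SRD-042"                          */
    const char *origin;          /* parent system requirement or change req */
    const char *design_refs[4];  /* SDD sections implementing it            */
    const char *code_refs[4];    /* source files / functions                */
    const char *test_refs[4];    /* verification cases covering it          */
    const char *change_history;  /* problem reports / revisions applied     */
} requirement_trace;

/* Example record (all values invented for illustration). */
static const requirement_trace example = {
    .req_id         = "SRD-042",
    .origin         = "SYS-007",
    .design_refs    = { "SDD 4.2.1" },
    .code_refs      = { "pitch_limiter.c:limit_pitch_rate" },
    .test_refs      = { "SVCP-042-1" },
    .change_history = "PR-118 (rev B)",
};
```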
Criticism
VDC Research notes that DO-178B has become "somewhat antiquated" in that it is not adapting well to the needs and preferences of today's engineers. In the same report, they also note that DO-178C seems well-poised to address this issue.
Resources
FAR Part 23/25 §1301/§1309
FAR Part 27/29
AC 23/25.1309
AC 20-115B
RTCA/DO-178B
FAA Order 8110.49 Software Approval Guidelines
See also
DO-178C
Avionics software
ARP4761 (Safety assessment process)
ARP4754 (System development process)
DO-248B (Final Report for clarification of DO-178B)
DO-254 (similar to DO-178B, but for hardware)
Requirements management (too general to be "directly applied" to DO-178B)
IEC 61508
ISO/IEC 12207 (software life cycle process development standard)
ED-153 (Guidelines for ANS software safety assurance)
Modified condition/decision coverage
References
External links
AC 25.1309-1A
AC 20-115B
FAA Order 8110.49 Change 1
Computer-related introductions in 1992
RTCA standards
Computer standards
Avionics
Safety engineering
Software requirements
|
9302262
|
https://en.wikipedia.org/wiki/Douglas%20T.%20Ross
|
Douglas T. Ross
|
Douglas Taylor "Doug" Ross (21 December 1929 – 31 January 2007) was a pioneering American computer scientist and chairman of SofTech, Inc. He is most famous for originating the term CAD for computer-aided design, and is considered to be the father of Automatically Programmed Tools (APT), a programming language to drive numerical control in manufacturing. His later work focused on a pseudophilosophy he developed and named Plex.
Biography
Ross was born in China, where his parents both worked as medical missionaries, and he then grew up in the United States in Canandaigua, New York. He received a Bachelor of Science (B.Sc.) cum laude in mathematics from Oberlin College in 1951, and a Master of Science (M.Sc.) in electrical engineering from the Massachusetts Institute of Technology (MIT) in 1954. Afterward, he began, but did not finish, a Ph.D. at MIT, owing to his pressing work as head of MIT's Computer Applications Group.
In the 1950s, he participated in the MIT Whirlwind I computer project. In 1969, Ross founded SofTech, Inc., which began as an early supplier of custom compilers for the United States Department of Defense (DoD) for the languages Ada and Pascal. Ross lectured in the MIT Electrical Engineering and Computer Science Department and was chairman emeritus of SofTech. He retired from SofTech, having served as the company's president from 1969 to 1975, when he became chairman of the board of directors.
Among his many honors are the Joseph Marie Jacquard Memorial Award from the Numerical Control Society in 1975, the Distinguished Contributions Award from the Society of Manufacturing Engineers in 1980, and the Honorary Engineer of the Year Award from the San Fernando Valley Engineer's Council in 1981. The MIT Department of Electrical Engineering and Computer Science named the Douglas T. Ross Career Development Associate Professorship of Software Development after him. The D. T. Ross Medal Award of the Berliner Kreis (Scientific Forum for Product Development of the WiGeP Academic Society of Product Development, Berliner Kreis & WGMK) was named in his honor.
Work
Ross contributed to the MIT Whirlwind I computer project, which was the first to display real-time text and graphics. Many consider him to be the father of Automatically Programmed Tools (APT), the language that drives numerical control in manufacturing. Also he originated the term CAD for computer-aided design.
MIT Whirlwind project
Ross came to MIT in the fall of 1951 as a teaching assistant in the mathematics department. His wife, Pat, was a "computer banging away on a Marchant calculator" at Lincoln Laboratory before it officially took over the Whirlwind I computer. Her group used the Servomechanisms Lab's analog correlation computer, built by Norbert Wiener. It had ball-and-disk integrators and arms used to hand-trace strip chart curves of radar noise data. When the machine was in use, variables in equations were represented by rotations of its shafts. These were connected to mechanical pens which plotted an accurate curve worked out by the shafts' continuous movement. Interpreted correctly, this curve gave a graphic solution to the problem. This initiated Ross's entry to the Servo Lab with a summer job in June 1952 in the field of airborne fire-control system evaluation and power density spectra analyses.
The first programming language Ross designed was one in which the computer was a group of people, six or eight part-time students. It was suggested that Ross could use Whirlwind in his work. Whirlwind at that time had just 1 K (1,024 words) of 16-bit memory. He taught himself to program it in the summer of 1952. His master's thesis related to Computational Techniques for Fourier Transformation.
Automatically Programmed Tool
He worked on numerous projects around the Whirlwind secret room of the Cape Cod System SAGE air defense system and at the Eglin Air Force Base ERA 1103. Around 1954, Ross wrote the first hand-drawn graphics input program for a computer. He stated it was "One of the few programs that I ever wrote that worked the first time." The Air Force was interested in continuing beyond the objective of MIT's Numerical Control Project of standardizing the numerical control of machine tools.
Starting in 1956, MIT had a contract for a new program in numerical control, this time emphasizing automatic programming for three-dimensional parts to be produced by 3- and 5-axis machine tools. Ross stated his work with radar vector handling led naturally to his defining tool paths as space curves rather than points in APT II, and allowed him to conceptualize their realization in a machine tool's rectilinear framework. The Servo Lab received Air Force sponsorship for numerical control hardware, software, and adaptive control, followed by computer-aided design, computer graphics hardware and software, and software engineering and software technology, from 1951. This continued for almost 20 years. In 1957 the last of Ross's original three research assistants, Sam Matsa, left for IBM to develop AUTOPROMT, a three-dimensional APT derivative, and later (1967) co-founded, with Andy Van Dam, the ACM SICGRAPH.
The APT project largely finished in February 1959. It had the copyright status of works by the federal government of the United States, and thus was released into the public domain. The legacy of this work can be found in next generation NC programs of the 21st century.
Computer-aided design
At the conclusion of APT I, Ross and John Francis Reintjes were interviewed for MIT science reporter television by Robert S. Woodbury. There was considerable public interest in the increasing sophistication of numerically controlled machine tools. The interview is illustrative of Ross's long stated belief in the graphics potential of the computer. He showed the audience a photograph of a vector sweep image from a display scope in the form of a Disney cartoon character coupled in a coordinate space with a canonical gnomon.
The next few years would see the completion of APT's influential Arithmetic Elements, and then the broad collaboration pioneered in the APT project was repeated in building the computer-aided design system named Automated Engineering Design (AED). Ross sometimes called it informally The Art of Engineering Design or ALGOL Extended for Design.
Early industry practitioners of computer aided drafting and manufacturing visited MIT in formal exchanges of the developing technologies. Ross organized many standards making conferences for the American National Standards Institute (ANSI) and Business Equipment Manufacturers Association (BEMA, renamed Information Technology Industry Council), solidifying his place as a touchstone in any future history of CAD. The next decade brought a refining of his philosophy of system design. He was a founding member of Society for Industrial and Applied Mathematics (SIAM).
MIT's electrical engineering and computer science
He was involved with developing international standards in programming and informatics, as an early active participant in the International Federation for Information Processing (IFIP). He was a member of IFIP Working Group 2.1 on Algorithmic Languages and Calculi, which specified, maintains, and supports the programming languages ALGOL 60 and ALGOL 68. In 1968, Ross taught what he suggested was the world's first software engineering course at MIT. He participated in the foundational NATO Software Engineering Conference in Garmisch, Germany, 7–11 October 1968. Many MIT project users built their systems on AED. Post Assembly revisions of Jay Wright Forrester's famous Dynamo feedback-modeling, System Dynamics simulation language were written in AED-0, Ross's extended version of ALGOL 60 and used into the 1980s.
Ross wrote the only ALGOL X compiler known to have existed, with the AED-0 system.
SofTech's work on airborne and other instrumentation systems involved building software development tools. By the late 1970s, microprocessors like the 8086 were starting to be used for these embedded systems. The University of California at San Diego Pascal System (UCSD p-System, see UCSD Pascal) was developed in 1978 to provide students with a common operating system to use on various machines like the PDP-11 minicomputer. Versions of the p-System were freely exchanged between interested users. The p-System was brought to Ross's attention by a developer at SofTech's San Diego branch (who had an Apple I computer). Ross visited UCSD and was smitten by a college operation building a system he recognized as kindred to his AED efforts. SofTech licensed the p-System and established a Microsystems subsidiary in 1979. SofTech's compiling, dynamic loading, and linking tools helped make the p-System a powerful development environment. The UCSD p-System was used on the IBM Personal Computer, Apple II, and other Zilog Z80, MOS Technology 6502, and Motorola 68000 based machines. Ross later bought the PDP-11 based Terak 8510/a "graphics workhorse" computer of Ken Bowles, which now resides in the Computer History Museum collections.
Structured analysis and design technique
As the inventor of structured analysis and design technique (SADT), Ross was an early developer of structured analysis methods. During the 1970s, along with other contributors from SofTech, Inc., Ross helped develop SADT into the IDEF0 method for the Air Force's Integrated Computer-Aided Manufacturing (ICAM) program's IDEF suite of analysis and design methods.
He was a member of the Institute of Electrical and Electronics Engineers (IEEE) IDEF0 Working Group which produced the IEEE Icam DEFinition for Function Modeling (IDEF0) standard in 1998. The IEEE IDEF0 standard superseded FIPS PUB 183, which was retired in 2002.
Plex
Ross' Structured Analysis grew out of his "philosophy of problem-solving", which he named Plex in the late 1950s. Later in Ross's life, this became something of an obsession. In the 1980s, he minimized his role at SofTech to concentrate on developing Plex into a wide-ranging pseudophilosophy touching on epistemology, ontology, and philosophy of science. Ross wrote a wealth of material on Plex, delivering lectures at conferences and holding an abortive seminar at MIT in 1984. However, he was unable to find the audience he believed Plex deserved, and by the late 1980s he considered it an "intolerable burden of responsibility" to be its sole proponent and prophet.
See also
Semi-Automatic Ground Environment
Publications
Ross wrote dozens of articles and reports.
References
External links
Three oral history interviews with Douglas T. Ross, Charles Babbage Institute, University of Minnesota, 21 February 1984, 1 November 1989 and 7 May 2004.
Oral history Siggraph Sam Matsa
Douglas T. Ross papers, MC 414. Massachusetts Institute of Technology, Institute Archives and Special Collections, Cambridge, Massachusetts.
1929 births
2007 deaths
American computer scientists
MIT School of Engineering faculty
Oberlin College alumni
MIT School of Engineering alumni
American chief executives
People from Canandaigua, New York
American expatriates in China
Scientists from New York (state)
|
31815644
|
https://en.wikipedia.org/wiki/CIC%20%28Nintendo%29
|
CIC (Nintendo)
|
The Checking Integrated Circuit, or CIC, is a lockout chip designed for the Nintendo Entertainment System which had three main purposes:
To give Nintendo complete control over the software released for the platform
To prevent unlicensed and pirate game cartridges from running
To facilitate regional lockout
Improved designs of the CIC chip were also used in the later Super Nintendo Entertainment System and Nintendo 64, although running an updated security program which performs additional checks.
10NES
The 10NES system is a lock-out system designed for the North American and European versions of the Nintendo Entertainment System (NES) video game console. The electronic chip serves as a digital lock which can be opened by a key in the games, designed to restrict the software that could be operated on the system.
The chip was not present in the original Famicom games of 1983, but was included in NES games from 1985, as reflected in Nintendo's patent filings for the chip. The chip was developed as a result of the 1983 video game crash in North America, partially caused by an oversaturated market of console games due to lack of publishing control. Nintendo president Hiroshi Yamauchi said in 1986, "Atari collapsed because they gave too much freedom to third-party developers and the market was swamped with rubbish games." By requiring the presence of the 10NES in a game cartridge, Nintendo prevented third-party developers from producing games without Nintendo's approval, and provided the company with licensing fees, a practice it had already established earlier with Famicom games.
Design
The system consists of two parts, a Sharp Corporation 4-bit SM590 microcontroller in the console (the "lock") that checks the inserted cartridge for authentication, and a matching chip in the game cartridge (the "key") that gives the code upon demand. If the cartridge does not successfully provide the authentication, then the CIC repeatedly resets the CPU at a frequency of 1 Hz. This causes the television and power LED to blink at the same 1 Hz rate and prevents the game from being playable.
The program used in the NES CIC is called 10NES and is covered by a Nintendo patent. The source code is copyrighted; only Nintendo can produce the authorization chips. The patent covering the 10NES expired on January 24, 2006, although the copyright is still in effect for exact clones. Compatible clones exist that use different code.
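The following is a highly simplified, conceptual sketch of the lock-and-key arrangement described above; it is not the actual 10NES program, and the bit-stream generator, seeds, and function names are invented for illustration. The point is only that both chips compute the same sequence, and a cartridge that cannot reproduce it triggers the 1 Hz reset loop:

```c
#include <stdio.h>
#include <stdint.h>

/* Highly simplified illustration of the lock-and-key idea; NOT the actual
   10NES program.  Both chips are modelled as the same placeholder
   bit-stream generator, and a pirate cartridge as a generator started from
   a different seed.                                                        */
typedef struct { uint16_t state; } cic;

static int next_bit(cic *c)                 /* placeholder stream generator */
{
    c->state = (uint16_t)(c->state * 5u + 1u);
    return (c->state >> 15) & 1u;
}

/* Returns 1 if the cartridge authenticates, 0 if the lock would start
   resetting the CPU at about 1 Hz (blinking power LED and TV picture).    */
static int authenticate(cic *lock, cic *key, int bits_to_check)
{
    for (int i = 0; i < bits_to_check; i++)
        if (next_bit(lock) != next_bit(key))
            return 0;
    return 1;
}

int main(void)
{
    cic lock    = { 0x1234 };               /* console-side chip             */
    cic genuine = { 0x1234 };               /* licensed cartridge: same seed */
    cic pirate  = { 0xBEEF };               /* unlicensed cartridge          */

    printf("licensed cart: %s\n", authenticate(&lock, &genuine, 64) ? "runs" : "reset loop");
    lock.state = 0x1234;                    /* power-cycle the console       */
    printf("pirate cart:   %s\n", authenticate(&lock, &pirate, 64) ? "runs" : "reset loop");
    return 0;
}
```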
Circumvention
Nintendo Entertainment System
Some unlicensed companies created circuits that used a voltage spike to knock the authentication unit offline.
A few unlicensed games released in Europe and Australia (such as HES games) came in the form of a dongle that would be connected to a licensed cartridge, in order to use that cartridge's CIC lockout chip for authentication. This method also worked on the SNES and was utilized by Super 3D Noah's Ark.
Tengen (Atari's NES games subsidiary) took a different approach: the corporation obtained a description of the code in the lockout chip from the United States Copyright Office by claiming that it was required to defend against present infringement claims in a legal case. Tengen then used these documents to design their Rabbit chip, which duplicated the function of the 10NES. Nintendo sued Tengen for these actions. The court found that Tengen did not violate the copyright for copying the portion of code necessary to defeat the protection with current NES consoles, but did violate the copyright for copying portions of the code not being used in the communication between the chip and console. Tengen had copied this code in its entirety because future console releases could have been engineered to pick up the discrepancy. On the initial claim, the court sided with Nintendo on the issue of patent infringement, but noted that Nintendo's patent would likely be deemed obvious, as it was basically an existing design with the addition of a reset pin, which was at the time already commonplace in the world of electronics. An eight-person jury later found that Atari did infringe. While Nintendo was the winner of the initial trial, before they could actually enforce the ruling they would need to have the patent hold up under scrutiny, as well as address Tengen's antitrust claims. Before this occurred, the sides settled.
A small company called RetroZone, the first company to publish games on the NES in over a decade, uses a multi-region lockout chip for NTSC, PAL A, and PAL B called the Ciclone, which was created by reverse engineering Tengen's Rabbit chip. It allows games to be played in more than one region and is intended to make games playable on older hardware that uses the 10NES lockout chip in any of the three regions, although the top-loading NES does not use a lockout chip. The Ciclone chip is the first lockout chip to be developed after the patent for the 10NES had expired. Since then there have been a few other open source implementations that allow the general public to reproduce multi-region CICs on AVR microcontrollers.
Because the 10NES in the model NES-001 Control Deck occasionally fails to authenticate legal cartridges, a common modification is to disable the chip entirely by cutting pin 4 on the Control Deck's internal 10NES lockout chip.
Super Nintendo Entertainment System
Towards the end of the SNES lifespan, the CIC was cloned and used in pirate games. Often, the clone CIC chip would be rebranded with an inconspicuous brand/part number to prevent detection by authorities. Alternatively the aforementioned method of using a licensed game's CIC chip was possible, as it was used in the SNES version of Super 3D Noah's Ark.
Super Famicom
See also
Regional lockout
Lockout chip
References
External links
Kevin Horton. "The Infamous Lockout Chip." Accessed on August 22, 2010.
"Ed Logg (Atari) interview" discussing Tengen lock chip
Ciclone lockout chip Information from RetroZone
Disabling the NES "Lockout Chip" (2009-04-29) (rev. 0.5, 26-Dec-97)
Source code to compatible key
Hardware restrictions
Nintendo chips
Nintendo Entertainment System
|
10560362
|
https://en.wikipedia.org/wiki/Halcyon%20Monitoring%20Solutions
|
Halcyon Monitoring Solutions
|
Halcyon Monitoring Solutions, Inc. was a software company that provided software products and services for monitoring IT infrastructure in the data center or the cloud. The company's software monitored the health, performance, and availability of heterogeneous hardware and software, including physical and virtual servers, storage, networking devices, operating systems and applications. Halcyon specialized in the technologies of the Oracle Corporation.
History
Halcyon was founded in 1994 by software engineers at SystemWare Innovation Corporation (SWI). In 1996, Halcyon entered into an agreement with Sun Microsystems (later acquired by Oracle) to co-develop Sun Enterprise SyMON 2.0, based on Halcyon's PrimeAlert technology. The product was renamed Sun Management Center and was Sun's framework for monitoring and managing its hardware and the Solaris operating system, including Solaris Zones/Containers and Logical Domains (LDoms, now known as Oracle VM Server for SPARC).
Halcyon provided add-on software for Sun Management Center to extend its capabilities to include monitoring for other operating systems such as Microsoft Windows, Linux, and IBM AIX in addition to storage (NetApp, EMC) and networking devices (Cisco Systems, Brocade Communications Systems, Hitachi) and some applications (Oracle DB, Sybase, various App and Web Servers).
In addition, Halcyon announced lightweight standalone software called Neuron, which also supported all the add-ons created for Sun Management Center. Neuron included common ITIL-related components such as asset management, change management, configuration management, compliance, capacity planning, virtualization and public cloud management.
Software companies based in Washington (state)
Companies based in Bellevue, Washington
Software companies of the United States
American companies established in 1994
|
25361546
|
https://en.wikipedia.org/wiki/Carnegie%20Mellon%20University%20Masters%20in%20Software%20Engineering
|
Carnegie Mellon University Masters in Software Engineering
|
The Master of Software Engineering (MSE) at Carnegie Mellon University is a professional master's program founded in 1989 with the intent of developing technical leaders in software engineering practice. Originally a joint effort between Carnegie Mellon's School of Computer Science and the Software Engineering Institute, the MSE was at the forefront of software engineering education at a time when no such academic programs existed.
At the heart of the MSE curriculum is the Studio Project, a capstone project that spans the entire duration of the 16-month degree. The Studio element is unique from most software engineering programs at other universities in that the project sponsors are real-world, external industry clients, and that the projects themselves are considerably larger in scope than typical capstone projects.
Carnegie Mellon partners with other universities and software engineering departments throughout the world including in Portugal, India, and Korea in an effort to enhance software engineering education, globally. Through these partnerships, the same methods and practices used at the Pittsburgh campus are transferred to international educational partners.
History
The degree program's original core concepts and curriculum were developed around software engineering workshops conducted at the Software Engineering Institute. The original faculty included many educators who remain active today, while others have retired or died. Notable among the latter are Norm Gibbs and Jim 'Coach' Tomayko. Dr. Tomayko was responsible for the MSE Studio concept and remained deeply committed to the MSE program throughout his career.
A hallmark of the MSE program is that it targets software practitioners, those who are already working in the field.
Following its inception, the program has evolved to address a demand for lighter, faster software development processes, enabled by the rapid and widening adoption of the Internet. This included extreme programming, which later became part of agile methods, all of which sought to respond more rapidly to customer requirements, in contrast to more deliberative, plan-driven development. In the early twenty-first century, software engineering has experienced an explosion of software development services and frameworks (e.g., GitLab, Jira, and Confluence) that allowed engineers to push development beyond "the release" to embody continuous development, a modern practice called DevOps. While this process evolution is perhaps unique to a special class of software, the scale and influence of these systems has led the MSE program to rethink how it teaches software engineering.
Program directors
2019–present, Travis Breaux, Director, Masters Programs in Software Engineering
2016–2019, Anthony Lattanze, Director, Masters Programs in Software Engineering
2002–2016, Dr. David Garlan, Director, Masters Programs in Software Engineering
1989–2004, Dr. James E. Tomayko, Director, Master of Software Engineering Program
2001–2008, Mel Rosso-Llopart, Director of Software Engineering Distance Program
1996–2001, Dr. James E. Tomayko, Director of Software Engineering Distance Program
Curriculum
The MSE program began as a joint effort of the School of Computer Science and the Software Engineering Institute. The degree program is an intensive 16-month curriculum designed for professional software engineers. Class sizes are generally around 20 students. Applicants to the program must have a strong background in computer science and no less than two years of relevant industry experience, with an average of five years of experience.
The MSE curriculum has three basic components:
Core Courses develop foundational skills in the fundamentals of software engineering, with an emphasis on design, analysis, and the management of large-scale software systems.
The Studio Project, a capstone project that spans the duration of the program, allows for students to plan and implement a significant software project for an external client. Inspired by the design projects in architecture programs, students work as members of a team under the guidance of faculty advisors (mentors), analyzing a problem, planning the software development effort, executing a solution, and evaluating their work.
Electives allow students to develop deeper expertise in an area of speciality within the software engineering domain, or to pursue study in areas relevant to their personal and professional interests.
Core Courses
Models of Software Systems - This course considers many of the standard models for representing sequential and concurrent systems, such as state machines, algebras, and traces.
Methods: Deciding What to Design - This course considers the variety of ways of understanding the problem to be solved by the system one is developing and of framing an appropriate solution to that problem.
Management of Software Development - This course considers how to lead a project team, understand the relationship of software development to overall product engineering, estimate time and costs, and understand the software process.
Analysis of Software Artifacts - This course considers the analysis of software artifacts—primarily code, but also including analysis of designs, architectures, and test suites.
Architectures for Software Systems - The course considers commonly-used software system structures, techniques for designing and implementing these structures, models and formal notations for characterizing and reasoning about architectures, tools for generating specific instances of an architecture, and case studies of actual system architectures.
Studio Project
Proposal based studio
Partnership Program
University of Coimbra
Carnegie Mellon Silicon Valley
KAIST
SSN School of Advanced Software Engineering
Notable faculty
James 'Coach' Tomayko
David Garlan
Mary Shaw
Anthony Lattanze
Mark Paulk
James D. Herbsleb
Nancy Mead
References
External links
Official MSE Website
SSN School of Advanced Software Engineering, Tamil Nadu, India - partner program with Carnegie Mellon MSE
1989 establishments in Pennsylvania
Carnegie Mellon University
|
52045745
|
https://en.wikipedia.org/wiki/BASHLITE
|
BASHLITE
|
BASHLITE (also known as Gafgyt, Lizkebab, PinkSlip, Qbot, Torlus and LizardStresser) is malware which infects Linux systems in order to launch distributed denial-of-service attacks (DDoS). Originally it was also known under the name Bashdoor, but this term now refers to the exploit method used by the malware. It has been used to launch attacks of up to 400 Gbps.
The original version in 2014 exploited a flaw in the bash shell - the Shellshock software bug - to exploit devices running BusyBox. A few months later a variant was detected that could also infect other vulnerable devices in the local network. In 2015 its source code was leaked, causing a proliferation of different variants, and by 2016 it was reported that one million devices have been infected.
Of the identifiable devices participating in these botnets in August 2016 almost 96 percent were IoT devices (of which 95 percent were cameras and DVRs), roughly 4 percent were home routers - and less than 1 percent were compromised Linux servers.
Design
BASHLITE is written in C, and designed to easily cross-compile to various computer architectures.
Exact capabilities differ between variants, but the most common features generate several different types of DDoS attacks: it can hold open TCP connections, send a random string of junk characters to a TCP or a UDP port, or repeatedly send TCP packets with specified flags. They may also have a mechanism to run arbitrary shell commands on the infected machine. There are no facilities for reflected or amplification attacks.
BASHLITE uses a client–server model for command and control. The protocol used for communication is essentially a lightweight version of Internet Relay Chat (IRC). Even though it supports multiple command and control servers, most variants only have a single command and control IP-address hardcoded.
It propagates via brute forcing, using a built-in dictionary of common usernames and passwords. The malware connects to random IP addresses and attempts to login, with successful logins reported back to the command and control server.
See also
Low Orbit Ion Cannon – a stress test tool that has been used for DDoS attacks
High Orbit Ion Cannon – the replacement for LOIC used in DDoS attacks
Denial-of-service attack (DoS)
Fork bomb
Mirai (malware)
Hajime (malware)
Slowloris (computer security)
ReDoS
References
Denial-of-service attacks
Botnets
IoT malware
Linux malware
|
60671546
|
https://en.wikipedia.org/wiki/Sandersville%20Giants
|
Sandersville Giants
|
The Sandersville Giants was the final moniker of the minor league baseball teams based in Sandersville, Georgia, from 1953 to 1956. Sandersville teams played exclusively as members of the Class D level Georgia State League. The team was first called the Sandersville Wacos in 1953 and 1954.
Sandersville played as a minor league affiliate of the Milwaukee Braves in 1953 and of the New York Giants from 1955 to 1956.
Baseball Hall of Fame member Willie McCovey played for the 1955 Sandersville Giants in his first professional season and led the Georgia State League in RBI.
History
Sandersville first began play in 1953, as a new franchise in the eight–team Class D level Georgia State League, replacing the Fitzgerald Pioneers. The Douglas Trojans, Dublin Irish, Eastman Dodgers, Hazlehurst-Baxley Cardinals, Jesup Bees, Statesboro Pilots and Vidalia Indians joined Sandersville in league play.
Beginning play on April 20, 1953, the Sandersville Wacos finished their first season of play with a record of 48–77. Playing as an affiliate of the Milwaukee Braves, the Wacos were managed by Gabby Grant, Parnell Ruark, Luscius Morgan and Julian Morgan, placing 6th in the Georgia State League and finishing 33.0 games behind the 1st place Hazlehurst-Baxley Cardinals.
In the 1954 season, the Sandersville Wacos finished a distant last in the six–team Georgia State League. With a record of 33–97, the Wacos finished 52.5 games behind the league champion Vidalia Indians, placing 6th in the Georgia State League final standings, while playing under managers Dave Madison and Sid West.
In 1955, Sandersville became the Sandersville Giants as an affiliate of the New York Giants and placed 2nd in the Georgia State League regular season standings, finishing 5.5 games behind the 1st place Douglas Trojans. The Giants finished with a record of 60–47 under player/manager Pete Pavlick. In the 1955 playoffs, the Giants defeated the Hazlehurst-Baxley Cardinals 3 games to 1 to advance. In the Finals against the Douglas Trojans, the series was tied at 3 games each. Game 7 was cancelled due to rain and the teams were declared Georgia State League co–champions. Jack Elias of Sandersville led the Georgia State League with a .332 batting average. Pitchers Leo Quatro and Victor Davis led the league with 15 wins and 183 strikeouts, respectively. Future Hall of Fame player Willie McCovey played for the 1955 Sandersville Giants. Having signed his first professional contract for $175.00 per month, McCovey was 17 years old and in his first professional season out of Mobile, Alabama. McCovey hit .305 with 19 home runs and led the league with 113 RBI in 107 games for Sandersville.
Said McCovey in his 1986 Baseball Hall of Fame acceptance speech: "My first manager in pro ball, Pete Pavlick; Pete was the skipper of the Class D Sandersville Club in the Georgia State League where I broke in, in 1955. He and his wife were the first to adopt me. They used to invite me to their home after the game and became very close to me. I also remember one of my teammates there, his name was Ralph Crosby, he was out of New York. We were the only two black players on that team and we had to stay in separate parts of town back then, so Ralph and I became good friends."
In the Georgia State League's final season of 1956, the Sandersville Giants placed 2nd in the Georgia State League. The Giants finished the regular season with a record of 70–50 under returning player/manager Pete Pavlick and placed 2nd in the regular season standings, finishing 7.0 games behind the Douglas Reds. In the first round of the playoffs, Sandersville defeated the Thomson Orioles 3 games to 2. In the last Georgia State League games ever played, the Giants lost to the Douglas Reds 3 games to 1 in the Finals. Frank Reveira of Sandersville led the Georgia State League in batting average, hitting .332, while Giant players Pete Pavlick and Dan Sarver both led the league with 95 runs scored. Sandersville's Al Milley led the Georgia State League with 103 RBI and Giant pitcher Gilbert Bassetti had 21 wins to lead the league. The Georgia State League permanently folded following the season.
Sandersville, Georgia has not hosted another minor league team.
The ballpark
The Sandersville Wacos and Giants were noted to have played home games at Sandersville Baseball Park. Sandersville drew 31,000 total in each of the two Giants years, after drawing 33,000 and 25,000 in their first two seasons. The ballpark was located in Sandersville, Georgia.
Timeline
Year–by–year records
Notable alumni
Baseball Hall of Fame alumni
Willie McCovey (1955) Inducted, 1986
Notable alumni
Dave Madison (Player/MGR, 1954)
Julio Navarro (1955)
Pete Pavlick (1955–1956, MGR)
See also
Sandersville Wacos players
Sandersville Giants players
References
External references
Baseball Reference Giants
Baseball Reference Wacos
New York Giants minor league affiliates
Baseball teams established in 1953
Baseball teams disestablished in 1956
Defunct Georgia State League teams
Professional baseball teams in Georgia (U.S. state)
1953 establishments in Georgia (U.S. state)
1956 disestablishments in Georgia (U.S. state)
Baseball teams established in 1955
Defunct baseball teams in Georgia
Milwaukee Braves minor league affiliates
|
64826
|
https://en.wikipedia.org/wiki/Motorola%2068000%20series
|
Motorola 68000 series
|
The Motorola 68000 series (also known as 680x0, m68000, m68k, or 68k) is a family of 32-bit complex instruction set computer (CISC) microprocessors. During the 1980s and early 1990s, they were popular in personal computers and workstations and were the primary competitors of Intel's x86 microprocessors. They were best known as the processors used in the early Apple Macintosh, the Sharp X68000, the Commodore Amiga, the Sinclair QL, the Atari ST, the Sega Genesis (Mega Drive), the Capcom System I (Arcade), the AT&T Unix PC, the Tandy Model 16/16B/6000, the Sun Microsystems Sun-1, Sun-2 and Sun-3, the NeXT Computer, NeXTcube, NeXTstation, and NeXTcube Turbo, the Texas Instruments TI-89/TI-92 calculators, the Palm Pilot (all models running Palm OS 4.x or earlier) and the Space Shuttle. Although no modern desktop computers are based on processors in the 680x0 series, derivative processors are still widely used in embedded systems.
Motorola ceased development of the 680x0 series architecture in 1994, replacing it with the PowerPC RISC architecture, which was developed in conjunction with IBM and Apple Computer as part of the AIM alliance.
Family members
Generation one (internally 16/32-bit, and produced with 8-, 16-, and 32-bit interfaces)
Motorola 68000
Motorola 68EC000
Motorola 68SEC000
Motorola 68HC000
Motorola 68008
Motorola 68010
Motorola 68012
Generation two (internally fully 32-bit)
Motorola 68020
Motorola 68EC020
Motorola 68030
Motorola 68EC030
Generation three (pipelined)
Motorola 68040
Motorola 68EC040
Motorola 68LC040
Generation four (superscalar)
Motorola 68060
Motorola 68EC060
Motorola 68LC060
Others
Freescale 683XX (CPU32 aka 68330, 68360 aka QUICC)
Freescale ColdFire
Freescale DragonBall
Philips 68070
Improvement history
68010:
Virtual memory support (restartable instructions)
'loop mode' for faster string and memory library primitives
multiply instruction uses 14 fewer clock ticks
68020:
32-bit address & arithmetic logic unit (ALU)
Three stage pipeline
Instruction cache of 256 bytes
Unrestricted word and longword data access (see alignment)
8× multiprocessing ability
Larger multiply (32×32 -> 64 bits) and divide (64÷32 -> 32 bits quotient and 32 bits remainder) instructions, and bit field manipulations
Addressing modes added scaled indexing and another level of indirection
Low cost, EC = 24-bit address
68030:
Split instruction and data cache of 256 bytes each
On-chip memory management unit (MMU) (68851)
Low cost EC = No MMU
Burst Memory Interface
68040:
Instruction and data caches of 4 KB each
Six stage pipeline
On-chip floating-point unit (FPU)
FPU lacks IEEE transcendental function ability
FPU emulation works with 2E71M and later chip revisions
Low cost LC = No FPU
Low cost EC = No FPU or MMU
68060:
Instruction and data caches of 8 KB each
10 stage pipeline
Two cycle integer multiplication unit
Branch prediction
Dual instruction pipeline
Instructions can be executed in the address generation unit (AGU) and thereby supply the result two cycles before the ALU
Low cost LC = No FPU
Low cost EC = No FPU or MMU
Feature map
Main uses
The 680x0 line of processors has been used in a variety of systems, from modern high-end Texas Instruments calculators (the TI-89, TI-92, and Voyage 200 lines) to all of the members of the Palm Pilot series that run Palm OS 1.x to 4.x (OS 5.x is ARM-based), and even radiation-hardened versions in the critical control systems of the Space Shuttle.
However, the 680x0 CPU family became most well known as the processors powering advanced desktop computers and video game consoles such as the Apple Macintosh, the Commodore Amiga, the Sinclair QL, the Atari ST, the SNK NG AES/Neo Geo CD, Atari Jaguar, Commodore CDTV, and several others. The 680x0 were also the processors of choice in the 1980s for Unix workstations and servers such as AT&T's UNIX PC, Tandy's Model 16/16B/6000, Sun Microsystems' Sun-1, Sun-2, Sun-3, NeXT Computer, Silicon Graphics (SGI), and numerous others. There was a 68000 version of CP/M called CP/M-68K, which was initially proposed to be the Atari ST operating system, but Atari chose Atari TOS instead. Many system specific ports of CP/M-68K were available, for example, TriSoft offered a port of the CP/M-68K for the Tandy Model 16/16B/6000.
Also, and perhaps most significantly, the first several versions of Adobe's PostScript interpreters were 68000-based. The 68000 in the Apple LaserWriter and LaserWriter Plus was clocked faster than the version used then in Macintosh computers. A fast 68030 was used in later PostScript interpreters, including the standard resolution LaserWriter IIntx, IIf and IIg (also 300 dpi), the higher resolution LaserWriter Pro 600 series (usually 600 dpi, but limited to 300 dpi with minimum RAM installed) and the very high resolution Linotronic imagesetters, the 200PS (1500+ dpi) and 300PS (2500+ dpi). Thereafter, Adobe generally preferred a RISC for its processor, as its competitors, with their PostScript clones, had already gone with RISCs, often an AMD 29000-series. The early 68000-based Adobe PostScript interpreters and their hardware were named for Cold War-era U.S. rockets and missiles: Atlas, Redstone, etc.
Today, these systems are either end-of-line (in the case of the Atari), or are using different processors (in the case of Macintosh, Amiga, Sun, and SGI). Since these platforms had their peak market share in the 1980s, their original manufacturers either no longer support an operating system for this hardware or are out of business. However, the Linux, NetBSD and OpenBSD operating systems still include support for 68000 processors.
The 68000 processors were also used in the Sega Genesis (Mega Drive) and SNK Neo Geo consoles as the main CPU. Other consoles such as the Sega Saturn used the 68000 for audio processing and other I/O tasks, while the Atari Jaguar included a 68000 which was intended for basic system control and input processing, but due to the Jaguar's unusual assortment of heterogeneous processors was also frequently used for running game logic. Many arcade boards also used 68000 processors including boards from Capcom, SNK, and Sega.
Microcontrollers derived from the 68000 family have been used in a huge variety of applications. For example, CPU32 and ColdFire microcontrollers have been manufactured in the millions as automotive engine controllers.
Many proprietary video editing systems used 68000 processors. In this category we can name the MacroSystem Casablanca, which was a black box with an easy-to-use graphical interface (1997). It was intended for the amateur and hobby videographer market. It is also worth noting its earlier, bigger and more professional counterpart, called "DraCo" (1995). The groundbreaking Quantel Paintbox series of early 68000-based 24-bit paint and effects systems was originally released in 1981, and during its lifetime it used nearly the entire range of 68000 family processors, with the sole exception of the 68060, which was never implemented in its design. Another contender in the video arena, the Abekas 8150 DVE system, used the 680EC30, and the Trinity Play, later renamed Globecaster, used several 68030s. The Bosch FGS-4000/4500 Video Graphics System, manufactured by Robert Bosch Corporation, later BTS (1983), used a 68000 as its main processor; it drove several others to perform 3D animation in a computer that could easily apply Gouraud and Phong shading. It ran a modified Motorola VERSAdos operating system.
Architecture
People who are familiar with the PDP-11 or VAX usually feel comfortable with the 68000 series. With the exception of the split of general-purpose registers into specialized data and address registers, the 68000 architecture is in many ways a 32-bit PDP-11.
It had a more orthogonal instruction set than those of many processors that came before (e.g., 8080) and after (e.g., x86). That is, it was typically possible to combine operations freely with operands, rather than being restricted to using certain addressing modes with certain instructions. This property made programming relatively easy for humans, and also made it easier to write code generators for compilers.
The 68000 series has eight 32-bit general-purpose data registers (D0-D7), and eight address registers (A0-A7). The last address register is the stack pointer, and assemblers accept the label SP as equivalent to A7.
In addition, it has a 16-bit status register. The upper 8 bits form the system byte, and modification of them is privileged. The lower 8 bits form the user byte, also known as the condition code register (CCR), and modification of them is not privileged. The 68000 comparison, arithmetic, and logic operations modify condition codes to record their results for use by later conditional jumps. The condition code bits are "zero" (Z), "carry" (C), "overflow" (V), "extend" (X), and "negative" (N). The "extend" (X) flag deserves special mention, because it is separate from the carry flag. This permits the extra bit from arithmetic, logic, and shift operations to be separated from the carry for flow-of-control and linkage.
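One common use of the separate X flag is multiple-precision arithmetic: the carry out of one add is fed into the next via ADDX, while C, N, Z and V remain available for ordinary conditional branches. The C sketch below models that chaining with an explicit carry variable standing in for X; it is an illustration of the idea, not generated 68000 code:

```c
#include <stdint.h>
#include <stdio.h>

/* Models adding two 64-bit numbers with 32-bit operations, the way a 68000
   would chain ADD/ADDX: the low words are added first, and the carry out
   (held in the X flag on the 68000, modelled here as 'x') is added into the
   high words.  Purely illustrative.                                        */
static void add64(uint32_t alo, uint32_t ahi,
                  uint32_t blo, uint32_t bhi,
                  uint32_t *rlo, uint32_t *rhi)
{
    uint32_t lo = alo + blo;
    uint32_t x  = (lo < alo) ? 1u : 0u;   /* carry out of the low add -> X */
    *rlo = lo;
    *rhi = ahi + bhi + x;                 /* ADDX adds the X bit back in   */
}

int main(void)
{
    uint32_t lo, hi;
    add64(0xFFFFFFFFu, 0x00000001u, 0x00000001u, 0x00000002u, &lo, &hi);
    printf("result = 0x%08X%08X\n", hi, lo);   /* prints 0x0000000400000000 */
    return 0;
}
```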
While the 68000 had a 'supervisor mode', it did not meet the Popek and Goldberg virtualization requirements due to the single instruction 'MOVE from SR', which copies the status register to another register, being unprivileged but sensitive. In the Motorola 68010 and later, this was made privileged, to better support virtualization software.
The 68000 series instruction set can be divided into the following broad categories:
Load and store (MOVE)
Arithmetic (ADD, SUB, MULS, MULU, DIVS, DIVU)
Bit shifting (ASL, ASR, LSL, LSR)
Bit rotation (ROR, ROL, ROXL, ROXR)
Logic operations (AND, OR, NOT, EOR)
Type conversion (byte to word and vice versa)
Conditional and unconditional branches (BRA, Bcc - BEQ, BNE, BHI, BLO, BMI, BPL, etc.)
Subroutine invocation and return (BSR, RTS)
Stack management (LINK, UNLK, PEA)
Causing and responding to interrupts
Exception handling
There is no equivalent to the x86 CPUID instruction to determine what CPU or MMU or FPU is present.
The Motorola 68020 added some new instructions that include some minor improvements and extensions to the supervisor state, several instructions for software management of a multiprocessing system (which were removed in the 68060), some support for high-level languages which did not get used much (and was removed from future 680x0 processors), bigger multiply (32×32→64 bits) and divide (64÷32→32 bits quotient and 32 bits remainder) instructions, and bit field manipulations.
The standard addressing modes are:
Register direct
data register, e.g. "D0"
address register, e.g. "A0"
Register indirect
Simple address, e.g. (A0)
Address with post-increment, e.g. (A0)+
Address with pre-decrement, e.g. −(A0)
Address with a 16-bit signed offset, e.g. 16(A0)
Register indirect with index register & 8-bit signed offset e.g. 8(A0,D0) or 8(A0,A1)
Note that for (A0)+ and −(A0), the actual increment or decrement value is dependent on the operand size: a byte access adjusts the address register by 1, a word by 2, and a long by 4.
PC (program counter) relative with displacement
Relative 16-bit signed offset, e.g. 16(PC). This mode was very useful for position-independent code.
Relative with 8-bit signed offset with index, e.g. 8(PC,D2)
Absolute memory location
Either a number, e.g. "$4000", or a symbolic name translated by the assembler
Most assemblers used the "$" symbol for hexadecimal, instead of "0x" or a trailing H.
There were 16 and 32-bit versions of this addressing mode
Immediate mode
Data stored in the instruction, e.g. "#400"
Quick immediate mode
3-bit unsigned (or 8-bit signed with moveq) with value stored in opcode
In addq and subq, an encoded value of 0 stands for 8
e.g. moveq #0,d0 was quicker than clr.l d0 (though both made D0 equal to 0)
Plus: access to the status register, and, in later models, other special registers.
The Motorola 68020 added a scaled indexing address mode, and added another level of indirection to many of the pre-existing modes.
Most instructions have dot-letter suffixes, permitting operations to occur on 8-bit bytes (".b"), 16-bit words (".w"), and 32-bit longs (".l").
Most instructions are dyadic, that is, the operation has a source, and a destination, and the destination is changed. Notable instructions were:
Arithmetic: ADD, SUB, MULU (unsigned multiply), MULS (signed multiply), DIVU, DIVS, NEG (additive negation), and CMP (a comparison performed by subtracting the arguments and setting the status bits without storing the result)
Binary-coded decimal arithmetic: ABCD, NBCD, and SBCD
Logic: EOR (exclusive or), AND, NOT (logical not), OR (inclusive or)
Shifting: (logical, i.e. right shifts put zero in the most-significant bit) LSL, LSR, (arithmetic shifts, i.e. sign-extend the most-significant bit) ASR, ASL, (rotates through eXtend and not) ROXL, ROXR, ROL, ROR
Bit test and manipulation in memory or data register: BSET (set to 1), BCLR (clear to 0), BCHG (invert) and BTST (no change). All of these instructions first test the destination bit and set (clear) the CCR Z bit if the destination bit is 0 (1), respectively.
Multiprocessing control: TAS, test-and-set, performed an indivisible bus operation, permitting semaphores to be used to synchronize several processors sharing a single memory
Flow of control: JMP (jump), JSR (jump to subroutine), BSR (relative address jump to subroutine), RTS (return from subroutine), RTE (return from exception, i.e. an interrupt), TRAP (trigger a software exception similar to software interrupt), CHK (a conditional software exception)
Branch: Bcc (where the "cc" specified one of 14 tests of the condition codes in the status register: equal, greater than, less than, carry, and most combinations and logical inversions). The remaining two possible conditions (always true and always false) have separate instruction mnemonics, BRA (branch always) and BSR (branch to subroutine).
Decrement-and-branch: DBcc (where "cc" was as for the branch instructions), which, provided the condition was false, decremented the low word of a D-register and, if the result was not -1 ($FFFF), branched to a destination. This use of −1 instead of 0 as the terminating value allowed the easy coding of loops that had to do nothing if the count was 0 to start with, with no need for another check before entering the loop. This also facilitated nesting of DBcc.
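The zero-count property can be seen in the common idiom of branching to the DBcc instruction first. The following Python sketch (a behavioural model, not 68000 assembly) mimics a DBRA-controlled loop entered at the DBcc: the body runs exactly count times, and a count of 0 falls straight through because the loop terminates on -1 rather than 0.

def dbra_loop(count, body):
    # Model of the idiom:
    #         BRA.S   entry
    # loop:   <body>
    # entry:  DBRA    D0,loop
    # with D0 preloaded with the count.
    d0 = count & 0xFFFF
    while True:
        d0 = (d0 - 1) & 0xFFFF   # DBRA decrements the low word first ...
        if d0 == 0xFFFF:         # ... and stops when the result is -1 ($FFFF)
            return
        body()                   # otherwise it branches back to the body

items = []
dbra_loop(3, lambda: items.append("x"))  # body runs 3 times
dbra_loop(0, lambda: items.append("x"))  # body does not run at all
print(len(items))  # 3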
68050 and 68070
There was no 68050, though at one point it was a project within Motorola. Odd-numbered releases had always been reactions to issues raised within the prior even-numbered part; hence, it was generally expected that the 68050 would have reduced the 68040's power consumption (and thus heat dissipation), improved exception handling in the FPU, used a smaller feature size, and optimized the microcode in line with program use of instructions. Many of these optimizations were included with the 68060 and were part of its design goals. For several likely reasons, among them that the 68060 was already in development, that the Intel 80486 was not progressing as quickly as Motorola had assumed it would, and that the 68060 was a demanding project, the 68050 was cancelled early in development.
There is also no revision of the 68060, as Motorola was in the process of shifting away from the 68000 and 88k processor lines toward its new PowerPC business, so the 68070 was never developed. Had it been, it would have been a revised 68060, likely with a superior FPU (a pipelined FPU was widely speculated about on Usenet).
Motorola mainly used even numbers for major revisions to the CPU core such as 68000, 68020, 68040 and 68060. The 68010 was a revised version of the 68000 with minor modifications to the core, and likewise the 68030 was a revised 68020 with some more powerful features, none of them significant enough to classify as a major upgrade to the core.
There was a CPU with the 68070 designation, which was a licensed and somewhat slower version of the 16/32-bit 68000 with a basic DMA controller, I²C host and an on-chip serial port. This 68070 was used as the main CPU in the Philips CD-i. This CPU was, however, produced by Philips and not officially part of Motorola's 680x0 lineup.
Last generation
The 4th-generation 68060 provided equivalent functionality (though not instruction-set-architecture compatibility) to most of the features of the Intel P5 microarchitecture.
Other variants
The Personal Computer XT/370 and AT/370, PC-based IBM-compatible mainframes, each included two modified Motorola 68000 processors with custom microcode to emulate S/370 mainframe instructions.
After the mainline 68000 processors' demise, the 68000 family has been used to some extent in microcontroller and embedded microprocessor versions. These chips include the ones listed under "other" above, i.e. the CPU32 (aka 68330), the ColdFire, the QUICC and the DragonBall.
With the advent of FPGA technology, an international team of hardware developers has re-created the 68000 with many enhancements as an FPGA core. Their core is known as the 68080 and is used in Vampire-branded Amiga accelerators.
Magnetic Scrolls used a subset of the 68000's instructions as a base for the virtual machine in their text adventures.
Competitors
Desktop
During the 1980s and early 1990s, when the 68000 was widely used in desktop computers, it mainly competed against Intel's x86 architecture used in IBM PC compatibles. Generation 1 68000 CPUs competed against mainly the 16-bit 8086, 8088, and 80286. Generation 2 competed against the 80386 (the first 32-bit x86 processor), and generation 3 against the 80486. The fourth generation competed with the P5 Pentium line, but it was not nearly as widely used as its predecessors, since much of the old 68000 marketplace was either defunct or nearly so (as was the case with Atari and NeXT), or converting to newer architectures (PowerPC for the Macintosh and Amiga, SPARC for Sun, and MIPS for Silicon Graphics (SGI)).
Embedded
There are dozens of processor architectures that are successful in embedded systems. Some are microcontrollers which are much simpler, smaller, and cheaper than the 68000, while others are relatively sophisticated and can run complex software. Embedded versions of the 68000 often compete with processor architectures based on PowerPC, ARM, MIPS, SuperH, and others.
See also
VMEbus, an external computer bus standard designed for the 68000 series
References
Bibliography
Howe, Dennis, ed. (1983). Free On-Line Dictionary of Computing. Imperial College, London. http://foldoc.org. Retrieved September 4, 2007.
External links
BYTE Magazine, September 1986: The 68000 Family
68k architecture
32-bit computers
|
14203676
|
https://en.wikipedia.org/wiki/Electronic%20voting%20by%20country
|
Electronic voting by country
|
Electronic voting by country varies and may include voting machines in polling places, centralized tallying of paper ballots, and internet voting. Many countries use centralized tallying. Some also use electronic voting machines in polling places. Very few use internet voting. Several countries have tried electronic approaches and stopped, because of difficulties or concerns about security and reliability.
Electronic voting requires capital spending every few years to update equipment, as well as annual spending for maintenance, security and supplies. If it works well, its speed can be an advantage where there are many contests on each ballot. Hand-counting is more feasible in parliamentary systems where each level of government is elected at different times, and only one contest is on each ballot, for the national or regional member of parliament, or for a local council member.
Polling place electronic voting or Internet voting examples have taken place in Australia, Belgium, Brazil, Estonia, France, Germany, India, Italy, Namibia, the Netherlands (Rijnland Internet Election System), Norway, Peru, Switzerland, the UK, Venezuela, Pakistan and the Philippines.
Summary table
Argentina
Used in provincial elections in Salta since 2009 and in local elections in Buenos Aires City in 2015.
Australia
Origin
The first known use of the term CyberVote was by Midac Technologies in 1995 when they ran a web based vote regarding the French nuclear testing in the Pacific region. The resulting petition was delivered to the French government on a Syquest removable hard disk.
In October 2001 electronic voting was used for the first time in an Australian parliamentary election. In that election, 16,559 voters (8.3% of all votes counted) cast their votes electronically at polling stations in four places. The Victorian State Government introduced electronic voting on a trial basis for the 2006 State election.
Accessibility
Approximately 300,000 impaired Australians voted independently for the first time in the 2007 elections. The Australian Electoral Commission has decided to implement voting machines in 29 locations.
Internet voting
iVote is a remote electronic voting system in New South Wales that allows eligible voters to vote over the Internet. However, during the New South Wales state election in 2015, there were several reports that over 66,000 electronic votes could have been compromised. Although the iVote website is secure, security specialists believe that a third-party website was able to attack the system. This was the first time a major vulnerability was discovered in the middle of an ongoing poll.
In 2007, Australian Defence Force and Defence civilian personnel deployed on operations in Iraq, Afghanistan, Timor Leste and the Solomon Islands had the opportunity to vote via the Defence Restricted Network in a joint pilot project of the Australian Electoral Commission and the Defence Department. After votes were recorded, they were encrypted and transmitted from a Citrix server to the REV database. A total of 2,012 personnel registered and 1,511 votes were successfully cast in the pilot, at an estimated cost of $521 per vote. Electronically submitted votes were printed following polling day and dispatched to the relevant Divisions for counting.
Bangladesh
Belgium
Electronic voting in Belgium started in 1991. It is widely used in Belgium for general and municipal elections and has been since 1999. Electronic voting in Belgium has been based on two systems known as Jites and Digivote. Both of these have been characterized as "indirect recording electronic voting systems" because the voting machine does not directly record and tabulate the vote, but instead, serves as a ballot marking device. Both the Jites and Digivote systems record ballots on cardboard magnetic stripe cards. Voters deposit their voted ballots into a ballot box that incorporates a magnetic stripe reader to tabulate the vote. In the event of a controversy, the cards can be recounted by machine.
In the elections on 18 May 2003 an electronic voting problem was reported in which one candidate received 4,096 extra votes. The error was only detected because she had more preferential votes than her own list, which is impossible in the voting system. The official explanation was "the spontaneous inversion of a bit at position 13 in the memory of the computer" (i.e. a soft error).
Brazil
Electronic voting in Brazil was introduced in 1996, when the first tests were carried out in the state of Santa Catarina. Since 2000, all Brazilian elections have been fully electronic. By the 2000 and 2002 elections more than 400,000 electronic voting machines were used nationwide in Brazil, and the results were tallied electronically within minutes after the polls closed.
In 1996, after tests conducted by more than 50 municipalities, the Brazilian Electoral Justice launched their "voting machine". Since 2000, all Brazilian voters have been able to use the electronic ballot boxes to choose their candidates. In the 2010 presidential election, which had more than 135 million voters, the result was announced 75 minutes after the end of voting. The electronic ballot box is made up of two micro-terminals (one located in the voting cabin and the other with the voting board representative) which are connected by a 5-meter cable. Externally, the micro-terminals have only a numerical keyboard, which does not accept any command executed by the simultaneous pressure of more than one key. In case of power failure, the internal battery provides the energy, or it can be connected to an automotive battery.
A 2017 study of Brazil found no systematic difference in vote choices between online and offline electorates.
Canada
Federal and provincial elections use paper ballots, but electronic voting has been used since at least the 1990s in some municipalities. Today optical scan voting systems are common in municipal elections.
There are no Canadian electronic voting standards.
Committee reports and analysis from Nova Scotia, New Brunswick, Quebec, Ontario and British Columbia have all recommended against provincial Internet voting. A federal committee has recommended against national Internet voting.
Some municipalities in Ontario and Nova Scotia provide Internet voting.
The 2012 New Democratic Party leadership election was conducted partially online, with party members who were not in attendance at the convention hall able to cast their leadership vote online. However, for part of the day the online voting server was affected by a denial-of-service attack, delaying the completion and tabulation of results.
In the 2018 Ontario municipal elections, over 150 municipalities in the Canadian province of Ontario conducted their elections primarily online, with physical polling stations either abandoned entirely or limited to only a few central polling stations for voters who could not or did not want to vote online. On election day, however, 51 of those municipalities, all of which had selected Dominion Voting Systems as their online voting contractor, were affected by a technical failure. According to Dominion, the company's colocation centre provider imposed a bandwidth cap, without authorization from or consultation with Dominion, due to the massive increase in voting traffic in the early evening, thus making it impossible for many voters to get through to the server between 5:00 and 7:30 p.m. All of the affected municipalities extended voting for at least a few hours to compensate for the outage; several, including Pembroke, Waterloo, Prince Edward County and Greater Sudbury, opted to extend voting for a full 24 hours into the evening of October 23.
Estonia
Electronic voting was first used in Estonia during the October 2005 local elections. Estonia became the first country to have legally binding general elections using the Internet as a means of casting the vote. The option of voting via the Internet in the local election was available nationally. It was declared a success by the Estonian election officials, with 9,317 people voting online.
In 2007 Estonia held its and the world's first national Internet election. Voting was available from February 26 to 28. A total of 30,275 citizens used Internet voting.
In the 2009 local municipal elections, 104,415 people voted over the Internet. This means that roughly 9.5% of those with the right to vote cast their vote over the Internet.
By 2009, Estonia had advanced the farthest in utilizing Internet voting technology.
In the 2011 parliamentary elections, held between 24 February and 2 March, 140,846 people cast their votes online. 96% of the electronic votes were cast in Estonia and 4% by Estonian citizens residing in 106 foreign countries.
In the 2014 European Parliament elections 31.3% of all participating voters gave their vote over the Internet.
In the 2019 parliamentary elections 43.75% of all participating voters gave their vote over the Internet.
Each Estonian citizen possesses an electronic chip-enabled ID card, which allows the user to vote over the internet. The ID card is inserted into a card reader, which is connected to a computer. Once the user's identity is verified (using the digital certificate on the electronic ID card), a vote can be cast via the internet. Votes are not considered final until the end of election day, so Estonian citizens can go back and re-cast their votes until election day is officially over. The popularity of online voting in Estonia has increased widely throughout the nation, as in the elections of 2014 and 2015, nearly one third of Estonian votes were cast online.
Research in Estonia showed that internet voting is less expensive than other voting channels.
Security officials said that they have not detected any unusual activity or tampering of the votes.
However, researchers have found weaknesses in the security design of Estonia's online voting systems, as well as massive operational lapses in security, from transferring election results on personal thumb drives to posting network credentials on the wall in view of the public. The researchers concluded that these systems are insecure in their current implementation and, due to the rise of nation-state interest in influencing elections, should be "discontinue[d]."
European Union
In September 2000, the European Commission launched the CyberVote project with the aim of demonstrating "fully verifiable on-line elections guaranteeing absolute privacy of the votes and using fixed and mobile Internet terminals". Trials were performed in Sweden, France, and Germany.
Finland
On October 24, 2016 the Finnish government announced it would study the introduction of national online voting. On February 21, 2017 the working group studying Internet voting for Finland launched, with a target date for completion of its work of November 30, 2017. The working group recommended against Internet voting, concluding that the risks outweighed the benefits.
In Finland, electronic voting has never been used on a large scale; all voting is conducted by pen and paper and the ballots are always counted by hand. In 2008, the Finnish government wanted to test electronic voting and organized a pilot electronic vote for the 2008 Finnish municipal elections. Internet-enabled DRE machines, supplied by the company Scytl, were piloted in the October 2008 municipal elections in three municipalities (Karkkila, Kauniainen and Vihti). The government considered the pilot program a success. However, following complaints, the Supreme Administrative Court declared the results invalid and ordered a rerun of the elections with the regular pen-and-paper method in the affected municipalities. The system had a usability problem in which the messages were ambiguous as to whether the vote had been cast. In a total of 232 cases (2% of votes), voters had logged in, selected their vote but not confirmed it, and left the booth; these votes were not recorded. Following the failure of the pilot election, the Finnish government abandoned plans to continue electronic voting based on voting machines. The memo concluded that the voting machines would not be developed any further, but that the Finnish government would nevertheless follow the development of different electronic voting systems worldwide.
France
In January 2007, France's UMP party held a national presidential primary using both remote electronic voting over the Internet and touch-screen electronic voting at 750 polling stations. The election drew over 230,000 votes, representing a turnout of nearly 70%.
Elections in France utilized remote Internet voting for the first time in 2003 when French citizens living in the United States elected their representatives to the Assembly of French Citizens Abroad. Over 60% of voters chose to vote using the Internet rather than paper. The Forum des droits sur l'Internet (Internet rights forum), published a recommendation on the future of electronic voting in France, stating that French citizens abroad should be able to use Internet voting for Assembly of the French Citizens Abroad elections. This recommendation became reality in 2009, with 6000 French citizens choosing to make use of the system.
On March 6, 2017 France announced that Internet voting (which had previously been offered to citizens abroad) would not be permitted in the 2017 legislative elections due to cybersecurity concerns.
As of 2020 citizens abroad vote by internet in legislative and consular elections, not for President or EU.
Germany
In Germany the only voting machines accredited for national and local elections, after testing by the PTB, are the ESD1 and ESD2 from the Dutch company Nedap. About 2,000 of them were used in the 2005 Bundestag elections, covering approximately 2 million voters. These machines differ only in certain details, owing to different voting systems, from the ES3B hacked by a Dutch citizen group and the Chaos Computer Club on October 5, 2006. Because of this, additional security measures were applied in the municipal elections on 22 October 2006 in Cottbus, including reading the software from the EPROM to compare it with the source and sealing the machines afterwards. The city of Cottbus ultimately decided not to purchase the Nedap voting system it had previously been leasing.
At the moment there are several lawsuits in court against the use of electronic voting machines in Germany. One of these reached the Federal Constitutional Court of Germany in February 2007. Critics cite a lack of transparency in how the votes are recorded as intended by the voter, and concerns relating to recounts. The certified Nedap machines are DRE systems which do not produce any paper records.
Following a 2005 pilot study during the national elections, wide public support and a unanimous decision by the Senate launched a plan for the implementation of an optical scan voting system based on digital paper in the 2008 state elections of Hamburg. After public claims in September 2007 by the Fraktion der Grünen/GAL and the Chaos Computer Club that the system was vulnerable, the Federal Election Office (Bundeswahlamt) found in public surveys that public distrust of the system was evident. Due to concerns over public confidence, plans for use of the new voting system were canceled.
Germany ended electronic voting in 2009, with the German Federal Constitutional Court finding that the inability to have meaningful public scrutiny meant that electronic voting was unconstitutional.
India
Electronic voting was first introduced in 1982 and was used on an experimental basis in the North Paravur assembly constituency in the State of Kerala. However the Supreme Court of India struck down this election as against the law in A. C. Jose v. Sivan Pillai case. Amendments were made to the Representation of the People Act, 1951 to legalise elections using Electronic Voting Machines. In 2003, all state elections and by-elections were held using EVMs.
The EVMs were also used during the national elections held for the Parliament of India in 2004 and 2009. According to the statistics available through the mainstream media, more than 400 million voters (about 60% of India's eligible voters) exercised their franchise through EVMs in 2009 elections. Tallying such a large number of votes took just a few hours.
In India, Voter-verified paper audit trail (VVPAT) system was introduced in 8 of 543 parliamentary constituencies as a pilot project in 2014 Indian general election. VVPAT was implemented in Lucknow, Gandhinagar, Bangalore South, Chennai Central, Jadavpur, Raipur, Patna Sahib and Mizoram constituencies. Voter-verified paper audit trail was first used in an election in India in September 2013 in Noksen in Nagaland.
Electronic Voting Machines ("EVM") are being used in Indian general and state elections to implement electronic voting in part from 1999 general election and recently in 2018 state elections held in five states across India. EVMs have replaced paper ballots in the state and general (parliamentary) elections in India. There were earlier claims regarding EVMs' tamperability and security which have not been proved. After rulings of Delhi High Court, Supreme Court and demands from various political parties, Election Commission of India decided to introduce EVMs with voter-verified paper audit trail (VVPAT) system. The VVPAT system was introduced in 8 of 543 parliamentary constituencies as a pilot project in 2014 general election. Voter-verified paper audit trail (VVPAT) system which enables electronic voting machines to record each vote cast by generating the EVM slip, was introduced in all 543 Lok sabha constituencies in 2019 Indian general election.
There are three kinds of electronic voting machines M1, M2 and M3. The most modern M3 EVMs, which are in current use since its introduction in 2013, allow writing of machine code into the chips at PSU premises itself- Bharat Electronics Limited, Bangalore and Electronics Corporation of India Limited, Hyderabad. Election Commission of India introduced EVM Tracking Software (ETS) as a modern inventory management system where the identity and physical presence of all EVMS/ VVPATs is tracked on real time basis. M3 EVMs has digital verification system coded into each machine which is necessary to establish contact between its two component units. There are several layers of seals to ensure it is tamper-proof. Indian EVMs are stand-alone non-networked machines.
India, issues
Omesh Saigal, an IIT alumnus and IAS officer, demonstrated that the 2009 elections in India, in which the Congress Party of India came back to power, might have been rigged. This forced the election commission to review the current EVMs.
Background
From the initial introduction in 1982, to the country-wide use of EVM in 2004, the Election Commission of India took long and measured steps spanning over a period of nearly two decades, in the matter of electronic voting. In the meanwhile, general elections to various legislative assemblies, and numerous by-elections and two general elections to the Lok Sabha have been conducted using EVMs at all polling stations. The tamper-proof technological soundness of the EVM has been endorsed by a technical experts subcommittee appointed at the initiative of the Parliamentary Committee on Electoral Reforms in 1990. This experts committee (1990) was headed by Prof. S. Sampath, then Chairman RAC, Defence Research and Development Organisation, with Prof. P. V. Indiresan, then with IIT Delhi, and Dr C. Rao Kasarabada, then Director Electronic Research and Development Center, Trivandrum as members. Subsequently, the Commission has also been consulting a group of technical experts comprising Prof. P. V. Indiresan (who was also part of the earlier committee referred to above) and Prof. D. T. Sahani and Prof A. K. Agarwal both of IIT Delhi, regularly, on all EVM related technical issues.
The Commission has in place elaborate administrative measures and procedural checks and balances aimed at total transparency and prevention of any possible misuse or procedural lapses. These measures include rigorous pre-election checking of each EVM by the technicians, two level randomization with the involvement of political parties, candidates, their agents, for the random allotment of the EVMs to various constituencies and subsequently to various polling stations, preparation of the EVMs for elections in the presence of the candidates/their agents, and the Election Observers, provision for various thread seal and paper seal protection against any unauthorized access to the EVMs after preparation, mock poll in the presence of polling agents and mock poll certification system before the commencement of poll, post poll sealing and strong room protection, randomization of counting staff, micro observers at the counting tables, and so on.
The Election Commission of India is amply satisfied about the non-tamperability and the fool-proof working of the EVMs. The Commission's confidence in the efficacy of the EVMs has been fortified by the judgments of various courts and the views of technical experts. The Karnataka High Court has hailed the EVM as 'a national pride' (judgment dated 5.2.2004 in Michael B. Fernandes v. C. K. Jaffer Sharrief and others in E.P. No 29 of 1999). The Election Commission issued a press brief after the 2009 Indian general election clarifying the same. On 8 October 2013, the Supreme Court of India delivered its verdict on Dr. Subramanian Swamy's PIL, ruling that the Election Commission of India would use VVPATs along with EVMs in a phased manner, with full completion to be achieved by the 2019 Indian general election.
Internet voting
In April 2011 Gujarat became the first Indian state to experiment with Internet voting.
Ireland
Ireland bought voting machines from the Dutch company Nedap for about €40 million. The machines were used on a 'pilot' basis in 3 constituencies for the 2002 Irish general election and a referendum on the Treaty of Nice. Following a public report by the Commission on Electronic Voting, the then Minister for the Environment and Local Government, Martin Cullen, again delayed the use of the machines.
On 23 April 2009, the Minister for the Environment John Gormley announced that the electronic voting system was to be scrapped by an as yet undetermined method, due to cost and the public's dissatisfaction with the current system.
On 6 October 2010, the Taoiseach Brian Cowen said that the 7,000 machines would not be used for voting and would be disposed of. As of October 2010, the total cost of the electronic voting project has reached €54.6 million, including €3 million spent on storing the machines over the previous five years.
Italy
On 9 and 10 April 2006 the Italian municipality of Cremona used Nedap Voting machines in the national elections. The pilot involved 3000 electors and 4 polling stations were equipped with Nedap systems. The electoral participation was very high and the pilot was successful.
In the same elections (April 2006) the Ministry of New Technologies in cooperation with two big American companies organized a pilot only concerning e-counting. The experiment involved four regions and it cost 34 million euro.
Kazakhstan
In 2003, the Kazakh Central Election Commission entered into a partnership with the United Institute of Informatics Problems of the National Academy of Sciences of Belarus to develop an electronic voting system. This system, known as the Sailau Electronic Voting System (АИС «Сайлау»), saw its first use in Kazakhstan's 2004 Parliamentary elections. The final form of the system, as used in the presidential election of 2005 and the parliamentary election of 2007, has been described as using "indirect recording electronic voting." In this case, voters signing into use the Sailau system were issued smart cards holding the ballot to be voted. Voters then carried these cards to a voting booth, where they used the Sailau touch-screen ballot marking device to record their votes on the card. Finally, the voters returned the ballot cards to the sign-in table where the ballot was read from the card into the electronic "ballot box" before the card was erased for reuse by another voter.
On November 16, 2011, Kuandyk Turgankulov, head of the Kazakh Central Election Commission, said that use of the Sailau system would be discontinued because voters preferred paper, the political parties did not trust it, and funds to update the system were lacking.
Lithuania
Lithuania is planning national online voting, with a target of 20% of votes cast online by 2020.
The Lithuanian President Dalia Grybauskaitė is quoted however as stating concerns that online voting would not ensure confidentiality and security.
Malaysia
Electronic voting was used in the 2018 People's Justice Party leadership election, the internal election of the country's largest party. It suffered many technical problems, and many polls were postponed because of the poor performance of the system.
Namibia
In 2014, Namibia became the first African nation to use electronic voting machines. Electronic voting machines (EVMs) used in the election were provided by Bharat Electronics Limited, an Indian state owned company.
Netherlands
From the late 1990s until 2007, voting machines were used extensively in elections. Most areas in the Netherlands used electronic voting in polling places. After security problems with the machines were widely publicized, they were banned in 2007.
The most widely used voting machines were produced by the company Nedap. In the 2006 parliamentary elections, 21,000 persons used the Rijnland Internet Election System to cast their vote.
On 5 October 2006 the group "Wij vertrouwen stemcomputers niet" ("We do not trust voting machines") demonstrated on Dutch television how the Nedap ES3B machines could be manipulated in five minutes. The tampering of the software would not be recognisable by voters or election officials.
On October 30, 2006, the Dutch Minister of the Interior withdrew the license of 1187 voting machines from manufacturer Sdu NV, about 10% of the total number to be used, because the General Intelligence and Security Service had proven that voting could be eavesdropped on from up to 40 meters away using Van Eck phreaking. National elections were to be held 24 days after this decision. The decision was forced by the Dutch grass-roots organisation Wij vertrouwen stemcomputers niet ("We do not trust voting computers").
In the 2006 municipal elections in Landerd, Netherlands, an election official apparently misinformed voters about when their vote was recorded and later recorded it himself. A candidate who was also an election official received the unusual number of 181 votes at the polling place where he was working; in the other three polling places combined he received only 11 votes. Only circumstantial evidence could be found, because the voting machine was a direct-recording electronic voting machine; a poll by a local newspaper produced totally different results. The case is still under prosecution.
In September 2007 a committee chaired by Korthals Altes reported to the government that it would be better to return to paper voting. The deputy minister for the interior, Ank Bijleveld, said in a first response that she would accept the committee's advice and ban electronic voting. The committee also concluded that the time was not yet ripe for voting over the Internet. State secretary Ank Bijleveld responded by announcing a return to paper voting. It was reported in September 2007 that "a Dutch judge has declared the use of Nedap e-voting machines in recent Dutch elections unlawful."
On February 1, 2017 the Dutch government announced that all ballots in the 2017 general election would be counted by hand.
Norway
The Ministry of Local Government and Regional Development of Norway carried out pilots in three municipalities at local elections in 2011 on voting machines in the polling stations using touch screens.
Norway trialled online voting, and it "did not increase voter turnout, not even among younger demographics." People in Norway wanted to ensure high voter confidence and believed that online voting would bring security and political controversy along with it. There are firsthand accounts of some of the worries raised by the introduction of a technology such as online voting.
The use of the Internet in elections is a fairly recent concept, and as with any new technology it will undergo a certain amount of scrutiny before people can fully trust it and implement it in elections worldwide. Critics argued that online voting is not secure enough, creating a large number of skeptics who oppose its use, which in turn makes it harder to adopt online voting as the primary method of casting votes. Another concern is the process of authenticating votes: in other words, what process voters must go through to prove that they are who they say they are.
A study by the Institute of Social Research in Norway found signs that voters fear their votes could become public, which they would see as a compromise of their democratic rights. Voters' fears also concern the encryption system that guards the privacy of their votes: how can voters be sure that their votes are safe from hackers? This led to the conclusion that, to make this a viable voting system, governments have to ensure that the encryption protecting votes is as safe as possible. Until governments can ensure a certain level of safety for people's votes, the outcomes in Norway are unlikely to change: voter turnout will remain low even if voting is made more convenient.
Pakistan
On 17 November 2021 the National Assembly passed an EVM voting bill by a majority of 221 members. Prime Minister Imran Khan presented the bill in the National Assembly.
Philippines
In May 2010, the government of the Philippines planned to carry out its first ever entirely electronically tabulated election, using an optical scan voting system. The government invested $160 million in the new system. This included the electronic voting machines, printers, servers, power generators, memory cards, batteries, and broadband and satellite transmission equipment. This national implementation of electronic voting was intended to increase the accuracy and speed of vote tallying. In addition, it was expected to decrease the fraud and corruption found in past Philippine elections.
On May 3, 2010, the Philippines pre-tested the electronic voting systems. The Commission on Elections (Comelec) found 76,000 of the total 82,000 Precinct Count Optical Scan Machines to have faulty memory cards. The machines had miscounted votes and had given some votes to the rival candidate. After discovering discrepancies between manual and automated voting tallies, the memory cards were changed throughout the country. Many Filipino voters became skeptical of the e-voting system after the national recall. Because of past violent elections, 250,000 troops were placed on high alert around the country. These forces were instructed to guard the machines and voting stations in order to preempt any violent protests against the system. Some election officials attempted to postpone the May 10 election day but elections proceeded as scheduled.
On May 10, 2010, the Philippines had its first presidential election using electronic voting. Comelec reported that only 400 of the 82,000 machines malfunctioned. Most voter complaints were related to waiting in long lines and learning the new technology.
Romania
Romania first implemented electronic voting systems in 2003, on a limited basis, to extend voting capabilities to soldiers and others serving in Iraq, and other theaters of war. Despite the publicly stated goal of fighting corruption, the equipment was procured and deployed in less than 30 days after the government edict passed.
South Korea
Elections in South Korea use a central-count system with optical scanners at each regional election office. A separate ballot paper is used for each office being contested, and votes are cast using red ink and a rubber stamp. Ballots are similar in size to paper currency, and the optical scanners resemble cash sorter machines. After the ballots are sorted, stacks of ballots are counted using machines resembling currency counting machines. The Korean system has been praised as a model of best practice, but it has also been the subject of controversy, including questions about its legality and allegations of rigged counting in 2012.
Spain
In 2014, during its first party congress, the political party Podemos conducted three elections using the Agora Voting open-source software to vote via the Internet on a series of documents that would determine the political and organizational principles of the party (112,070 voters), the resolutions the party would adopt (38,279 voters), and the people who would fill the positions defined by this structure (107,488 voters). After the municipal elections carried out in May 2015, several city mayors announced their plans to carry out public consultation processes using electronic voting.
Switzerland
Several cantons (Geneva, Neuchâtel and Zürich) have developed Internet voting test projects to allow citizens to vote via the Internet.
In 2009 and 2011, the 110,000 Swiss voters living abroad had the option of voting using the Internet through a pilot project introduced in September 2008.
Up until the vote on February 9, 2014, internet voting was only open to expatriates who lived in the countries in the Wassenaar Arrangement because of their communication standards. After this vote in 2014, internet voting has opened to all expatriates of Switzerland. Although this will cause more risk with voting from abroad, it will allow more people to participate in voting, and there no longer has to be a separation of expatriates during voting and registration.
On February 27, 2017 Swiss Post announced that it was offering a public demonstration version of its e-voting system. The Swiss Post solution has been used in Fribourg and will be used in Neuchâtel.
On November 2, 2018, it was reported that Swiss Post has invited hackers from around the world to participate in a four-week public intrusion test of online voting software provided by the Spanish company, Scytl to take place in Spring 2019.
Sign-ups were accepted until 31 December 2018: pit.post.ch
Researchers found flaws in the software.
On December 19, 2018, the Swiss Federal Council completed the legislation to approve electronic voting and submitted it for consultation (Vernehmlassung).
A 2020 study found that online voting reduced the number of errors that voters made when casting ballots, thus reducing the number of ballots that were ineligible by 0.3 percent.
Thailand
In November 2018, Thailand's Democrat Party conducted electronic voting using blockchain in which more than 120,000 votes were cast. The voting data were stored on IPFS (InterPlanetary File System) and the IPFS hashes were then stored on the Zcoin blockchain. Former Prime Minister of Thailand Abhisit Vejjajiva won the popular vote with 67,505 votes against his opponent Warong Dechgitvigrom, who followed closely with 57,689 votes.
United Arab Emirates
UAE Federal National Council and 2005 Elections
On December 2, 1971, with the adoption of the constitution, the federation of the United Arab Emirates (UAE) was officially established. A few months later, in February 1972, the country's first ever federal national council (FNC) was set up as the country's legislative and constitutional body. In 2005, the UAE held its first national elections. This was recognized as a step forward to enhance a well-structured political participation in line with citizens’ aspirations, and as a major milestone towards modernization and development of the federation.
2011 Federal National Council Elections
After the first electoral experience in the UAE in 2005, the National Election Committee (NEC) approved electronic voting instead of traditional voting procedures as it had been attracting the attention of governments around the world. The same election model was used for the 2011 FNC elections, except for the electoral college, where the number of voters increased from around 6,000 to almost 130,000.
The 2011 FNC elections were considered to be more challenging due to the short time frame and the size of the electoral college, as well as the fact that the majority of voters were first-time voters and had never seen a ballot box. The government decided to take innovative steps to encourage participation and introduced technology-driven systems to facilitate the overall program. Hence a process was designed which required detailed planning in the areas of site preparation and capacity computation, technical infrastructure development, communication planning, addressing logistical and staff requirements, and the overall specifications of the electronic voting system.
United Kingdom
England
Voting pilots have taken place in May 2006, June 2004, May 2003, May 2002, and May 2000.
In 2000, the London Mayoral and Assembly elections were counted using an optical scan voting system with software provided by DRS plc of Milton Keynes. In 2004, the London Mayoral, Assembly and European Parliamentary elections were scanned and processed using optical character recognition from the same company. Both elections required some editing of the ballot design to facilitate electronic tabulation, though they differed only slightly from the previous 'mark with an X' style ballots.
As of January 2016, the UK Parliament has no plans to introduce electronic voting for statutory elections, either using electronic voting in polling booths or remotely via the internet.
Scotland
An optical scan voting system was used to electronically count paper ballots in the Scottish Parliament general election and Scottish council elections in 2007. A report commissioned by the UK Electoral Commission found that significant errors in ballot design produced more than 150,000 spoilt votes. The BBC reported that 86,000 constituency ballots and 56,000 list ballots were rejected, with suggestions that this was caused by voters being asked to vote for both sections of the election on the same ballot paper, rather than on separate ballots as had been the case in previous elections. In addition, Scottish Parliamentary elections and Scottish council elections use different electoral systems. The council elections use the single transferable vote, a preferential voting system, while the Parliament elections use the additional member system; the former requires the voter to place numbers in order of preference, while the latter requires a cross to indicate a single preference.
The electronic counting was used again in the 2012 and 2017 council elections without any problems being detected.
United States
Electronic voting in the United States involves several types of machines: touch screens for voters to mark choices, scanners to read paper ballots, scanners to verify signatures on envelopes of absentee ballots, and web servers to display tallies to the public. Aside from voting, there are also computer systems to maintain voter registrations and display these electoral rolls to polling place staff.
To audit computer tallies in a small percent of locations, five states check all contests by hand, two states check by machines independent of the election machines, seventeen states check one or a few contests by hand, four states reuse the same machines or ballot images as the election, so errors can persist, and 23 states do not require audits.
Three vendors sell most of the machines used for voting and for counting votes. As of September 2016, the American Election Systems & Software (ES&S) served 80 million registered voters, Canadian Dominion Voting Systems 70 million, American Hart InterCivic 20 million, and smaller companies less than 4 million each.
More companies sell signature verification machines: ES&S, Olympus, Vantage, Pitney Bowes, Runbeck, and Bell & Howell.
A Spanish company, Scytl, manages election-reporting websites statewide in 12 U.S. states.
Another website management company is VR Systems, active in 8 states.
Election machines are computers, often 10–20 years old, since certification and purchase processes take at least two years, and offices lack money to replace them until they wear out.
Like all computers they are subject to errors, which have been widely documented, and hacks, which have not been documented, though security flaws which would permit undetectable hacks have been documented.
In large election offices, computers check signatures on postal ballot envelopes to prevent fraudulent votes. Error rates of computerized signature reviews are not published.
Error rates in signature verification are higher for computers than for experts, and at best experts wrongly reject 5% of true signatures and wrongly accept 29% of forgeries. Lay people make more mistakes.
Venezuela
Venezuela first introduced electronic voting in the 1998 presidential election. The 2004 Venezuelan recall referendum was the first national election to feature a voter-verified paper audit trail (VVPAT). This allows the voter to verify that the machine has properly recorded their vote. It also permits audits and recounts.
References
Articles containing video clips
|
37332874
|
https://en.wikipedia.org/wiki/Adobe%20Shockwave%20Player
|
Adobe Shockwave Player
|
Adobe Shockwave Player (formerly Macromedia Shockwave Player, and also known as Shockwave for Director) is a discontinued freeware software plug-in for viewing multimedia and video games created on the Adobe Shockwave platform in web pages. Content was developed with Adobe Director and published on the Internet. Such content could be viewed in a web browser on any computer with the Shockwave Player plug-in installed. It was first developed by Macromedia and released in 1995; it was later acquired by Adobe Systems in 2005.
Shockwave Player ran DCR files published by the Adobe Director environment. Shockwave Player supported raster graphics, basic vector graphics, 3D graphics, audio, and an embedded scripting language called Lingo. Hundreds of free online video games were developed using Shockwave, and published on websites such as Miniclip and Shockwave.com.
As of July 2011, a survey found that Flash Player had 99% market penetration in desktop browsers in "mature markets" (United States, Canada, United Kingdom, France, Germany, Japan, Australia, and New Zealand), while Shockwave Player claimed only 41% in these markets. As of 2015, Flash Player is a suitable alternative to Shockwave Player, with its 3D rendering capabilities and object-oriented programming language. Flash Player cannot display Shockwave content, and Shockwave Player cannot display Flash content.
In February 2019, Adobe announced that Adobe Shockwave, including the Shockwave Player, would be discontinued in April 2019. The final update for Adobe Shockwave Player was released on March 15, 2019. Shockwave Player is no longer available for download (as of October 8, 2019), and it cannot be used anymore since web browsers have blocked the Shockwave Player plug-in upon its discontinuation.
History
The Shockwave player was originally developed for the Netscape browser by Macromedia Director team members Harry Chesley, John Newlin, Sarah Allen, and Ken Day, influenced by a previous plug-in that Macromedia had created for Microsoft's Blackbird. Version 1.0 of Shockwave was released independently of Director 4; since version 5 its development schedule has coincided with Director releases, and its version number has been tied to Director's, thus there were no Shockwave 2–4 releases.
Shockwave 1 The Shockwave plug-in for Netscape Navigator 2.0 was released in 1995, along with the stand-alone Afterburner utility to compress Director files for Shockwave playback. The first large-scale multimedia site to use Shockwave was Intel's 25th Anniversary of the Microprocessor.
Shockwave 5 Afterburner is integrated into the Director 5.0 authoring tool as an Xtra.
Shockwave 6 Added support for Shockwave Audio (swa) which consisted of the emerging MP3 file format with some additional headers.
Shockwave 7 Added support for linked media including images and casts.
Added support for Shockwave Multiuser Server.
Shockwave 8.5 Added support for Intel's 3D technologies including rendering.
Shockwave 9
Shockwave 10 Last version to support Mac OS X 10.3 and lower, and Mac OS 9.
Shockwave 11 Added support for Intel-based Macs.
Shockwave 12
Shockwave 12.1 It is supported by 32-bit and 64-bit versions of Windows XP, Vista, 7, and 8. It plays content made with previous versions as well as Director MX 2004. From version 12.1.5.155 Shockwave is supported in both Internet Explorer and Mozilla Firefox.
Shockwave 12.2 Last update for macOS before discontinuation.
Shockwave 12.3 Last update before overall discontinuation.
Platform support
Shockwave was available as a plug-in for the classic Mac OS, macOS, and 32 bit Windows for most of its history. However, there was a notable break in support for the Macintosh between January 2006 (when Apple Inc. began the Mac transition to Intel processors based on the Intel Core Duo) and March 2008 (when Adobe Systems released Shockwave 11, the first version to run natively on Intel Macs).
Unlike Flash Player, Shockwave Player is not available for Linux or Solaris despite intense lobbying efforts. However, the Shockwave Player can be installed on Linux with CrossOver (or by running a Windows version of a supported browser in Wine with varying degrees of success). It is also possible to use Shockwave Player in the native Linux version of Firefox by using the Pipelight plugin (which is based on a modified version of Wine).
In 2017, the authoring tool for Shockwave content, Adobe Director, was discontinued on February 1; and the following month, Shockwave Player for macOS was officially discontinued. In February 2019, Adobe announced that Shockwave Player would be officially discontinued and unsupported on Microsoft Windows, the last OS that supported the Shockwave Player, effective April 9, 2019.
Security
Some security experts advise users to uninstall Adobe Shockwave Player because "it bundles a component of Adobe Flash that is more than 15 months behind on security updates, and which can be used to backdoor virtually any computer running it", in the words of Brian Krebs. This opinion is based on research by Will Dormann, who goes on to say that Shockwave is architecturally flawed because it contains a separate version of the Flash runtime that is updated much less often than Flash itself. Additionally Krebs writes that "Shockwave has several modules that don't opt in to trivial exploit mitigation techniques built into Microsoft Windows, such as SafeSEH."
Branding and name confusion
In an attempt to raise its brand profile, all Macromedia players prefixed Shockwave to their names in the late 1990s. Although this campaign was successful and helped establish Shockwave Flash as a multimedia plugin, Shockwave and Flash became more difficult to maintain as separate products. In 2005, Macromedia marketed three distinct browser player plugins under the brand names Macromedia Authorware, Macromedia Shockwave, and Macromedia Flash.
Macromedia also released a web browser plug-in for viewing Macromedia FreeHand files online. It was branded Macromedia Shockwave for FreeHand and displayed specially compressed .fhc Freehand files.
Later, with the acquisition of Macromedia, Adobe Systems slowly began to rebrand all products related to Shockwave.
See also
Adobe Flash
Adobe AIR
Adobe Reader
References
External links
Adobe Shockwave Player
Adobe.com/Technote using The Wayback Machine - What's the difference between Shockwave and Flash? (dated 2004)
How Stuff Works - The Difference Between Flash and Shockwave
1995 software
Shockwave
Animation software
Graphics file formats
Macromedia software
Multimedia frameworks
Macintosh multimedia software
MacOS multimedia software
Products and services discontinued in 2019
Windows multimedia software
Discontinued Adobe software
Web 1.0
Video game development software
|
5933039
|
https://en.wikipedia.org/wiki/X-Forwarded-For
|
X-Forwarded-For
|
The X-Forwarded-For (XFF) HTTP header field is a common method for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer.
The X-Forwarded-For HTTP request header was introduced by the Squid caching proxy server's developers.
X-Forwarded-For is also an email header indicating that an email message was forwarded from one or more other accounts (probably automatically).
Without the use of XFF or another similar technique, any connection through the proxy would reveal only the originating IP address of the proxy server, effectively turning the proxy server into an anonymizing service, thus making the detection and prevention of abusive accesses significantly harder than if the originating IP address were available. The usefulness of XFF depends on the proxy server truthfully reporting the original host's IP address; for this reason, effective use of XFF requires knowledge of which proxies are trustworthy, for instance by looking them up in a whitelist of servers whose maintainers can be trusted.
Format
The general format of the field is:
X-Forwarded-For: client, proxy1, proxy2
where the value is a comma-and-space-separated list of IP addresses, the left-most being the original client, with each proxy that forwarded the request appending the IP address from which it received the request. In this example, the request passed through proxy1, proxy2, and then proxy3 (not shown in the header); proxy3 appears as the remote address of the request. A sketch of how a proxy builds up this list follows the examples below.
Examples:
X-Forwarded-For: 203.0.113.195, 70.41.3.18, 150.172.238.178
X-Forwarded-For: 203.0.113.195
X-Forwarded-For: 2001:db8:85a3:8d3:1319:8a2e:370:7348
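Each proxy builds this value from what it received. The following sketch (an illustration in Python, not code from any particular proxy implementation) shows the basic rule: append the address of the directly connected peer to the incoming X-Forwarded-For value, or start a new value if none was present.

def build_xff(incoming_xff, peer_addr):
    """Return the X-Forwarded-For value a proxy would send upstream."""
    if incoming_xff:
        return incoming_xff + ", " + peer_addr
    return peer_addr

# A client (203.0.113.195) connects to proxy1, which forwards to proxy2:
hop1 = build_xff(None, "203.0.113.195")   # "203.0.113.195"
hop2 = build_xff(hop1, "70.41.3.18")      # "203.0.113.195, 70.41.3.18"
print(hop2)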
Since it is easy to forge an X-Forwarded-For field, the given information should be used with care. The right-most IP address is always the address that connected to the last proxy, which means it is the most reliable source of information. X-Forwarded-For data can be used in a forward or reverse proxy scenario.
Just logging the X-Forwarded-For field is not always enough, because the last proxy's IP address in a chain is not contained within the X-Forwarded-For field; it is in the actual IP header. A web server should log both the request's source IP address and the X-Forwarded-For field information for completeness.
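A minimal sketch of this server-side logic, assuming a site-specific allowlist of trusted proxy addresses (the addresses below are illustrative only), is to combine the connection's source address with the header and walk the chain from right to left, stopping at the first address that is not a trusted proxy:

TRUSTED_PROXIES = {"150.172.238.178", "70.41.3.18"}   # example allowlist

def client_ip(remote_addr, xff_header):
    """Best-effort originating client address for one request.

    remote_addr: source address from the TCP/IP connection (not part of
                 the X-Forwarded-For field itself).
    xff_header:  raw X-Forwarded-For value, or None if absent.
    """
    chain = [remote_addr]
    if xff_header:
        chain = [ip.strip() for ip in xff_header.split(",")] + chain
    # Walk from the right: entries added by our own trusted proxies are
    # believed; the first untrusted hop is the best client candidate.
    for ip in reversed(chain):
        if ip not in TRUSTED_PROXIES:
            return ip
    return chain[0]   # every hop was a trusted proxy

print(client_ip("150.172.238.178", "203.0.113.195, 70.41.3.18"))
# -> 203.0.113.195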
Proxy servers and caching engines
The X-Forwarded-For field is supported by most proxy servers.
X-Forwarded-For logging is supported by many web servers, including Apache. IIS can also use an HTTP module for this filtering.
Zscaler will mask an X-Forwarded-For header with Z-Forwarded-For, before adding its own X-Forwarded-For header identifying the originating customer IP address. This prevents internal IP addresses leaking out of Zscaler Enforcement Nodes, and provides third party content providers with the true IP address of the customer. This results in a non-RFC compliant HTTP request.
Alternatives and variations
RFC 7239 standardized a Forwarded HTTP header with a similar purpose but more features compared to the X-Forwarded-For HTTP header. An example of a Forwarded header's syntax:
Forwarded: for=192.0.2.60;proto=http;by=203.0.113.43
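A minimal parser for the simple, unquoted form of this header might look like the sketch below; it is an illustration only and does not handle quoted strings, bracketed IPv6 addresses, or the obfuscated identifiers that the full syntax allows.

def parse_forwarded(value):
    """Parse a Forwarded header value into a list of per-hop dicts."""
    elements = []
    for element in value.split(","):          # one element per hop
        pairs = {}
        for pair in element.split(";"):       # name=value pairs per hop
            name, _, val = pair.strip().partition("=")
            pairs[name.lower()] = val.strip('"')
        elements.append(pairs)
    return elements

print(parse_forwarded("for=192.0.2.60;proto=http;by=203.0.113.43"))
# [{'for': '192.0.2.60', 'proto': 'http', 'by': '203.0.113.43'}]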
HAProxy defines the PROXY protocol which can communicate the originating client's IP address without using the X-Forwarded-For or Forwarded header. This protocol can be used on multiple transport protocols and does not require inspecting the inner protocol, so it is not limited to HTTP.
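In the human-readable version 1 of the PROXY protocol, the proxy sends a single text line ahead of the application data; the sketch below parses such a line (version 1, TCP4/TCP6 and UNKNOWN forms only; the binary version 2 format is not covered).

def parse_proxy_v1(line):
    """Parse a PROXY protocol version 1 header line."""
    parts = line.strip().split(" ")
    if not parts or parts[0] != "PROXY":
        raise ValueError("not a PROXY protocol v1 line")
    if parts[1] == "UNKNOWN":
        return None                      # originating address not known
    proto, src, dst, sport, dport = parts[1:6]
    return {"proto": proto, "src": src, "dst": dst,
            "src_port": int(sport), "dst_port": int(dport)}

print(parse_proxy_v1("PROXY TCP4 203.0.113.195 192.0.2.10 51000 80\r\n"))
# {'proto': 'TCP4', 'src': '203.0.113.195', 'dst': '192.0.2.10',
#  'src_port': 51000, 'dst_port': 80}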
See also
Internet privacy
List of proxy software
X-Originating-IP for SMTP equivalent
List of HTTP header fields
References
External links
Apache mod_extract_forwarded
Anonymity
Hypertext Transfer Protocol headers
|
2046117
|
https://en.wikipedia.org/wiki/International%20Computers%20and%20Tabulators
|
International Computers and Tabulators
|
International Computers and Tabulators or ICT was a British computer manufacturer, formed in 1959 by a merger of the British Tabulating Machine Company (BTM) and Powers-Samas. In 1963 it acquired the business computer divisions of Ferranti. It exported computers to many countries and in 1968 became part of International Computers Limited (ICL).
Products
The ICT 1101 was known as the EMIDEC 1100 computer before the acquisition of the EMI Computing Services Division, which designed and produced it.
The ICT 1201 computer used thermionic valve technology and its main memory was drum storage. Input was from 80-column punched cards and output was to 80-column cards and a printer. Before the merger, under BTM, this had been known as the HEC4 (Hollerith Electronic Computer, fourth version).
The drum memory held 1K of 40-bit words. The computer was programmed using binary machine code instructions. When programming the 1201, machine code instructions were not placed sequentially but were spaced around the drum to allow for its rotation, ensuring that the next instruction was passing under the drum's read heads just as the current instruction had finished executing.
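This technique is sometimes called optimum or minimum-latency programming. The toy model below (in Python, with made-up figures that are not the 1201's actual timings) shows the idea: if an instruction takes a known number of word-times to execute, the next instruction is stored that many positions further around the drum, so it arrives under the read heads just as it is needed.

# Toy model of minimum-latency instruction placement on a drum store.
# All numbers are illustrative assumptions, not ICT 1201 specifications.
DRUM_WORDS = 64                      # word positions around one track

def next_slot(current_slot, execute_time_words):
    """Drum position for the next instruction so that it reaches the
    read heads just as the current instruction finishes executing."""
    return (current_slot + execute_time_words) % DRUM_WORDS

slot, layout = 0, []
for instr in ["LOAD", "ADD", "STORE", "JUMP"]:   # hypothetical program
    layout.append((slot, instr))
    slot = next_slot(slot, 3)        # assume 3 word-times per instruction

print(layout)    # [(0, 'LOAD'), (3, 'ADD'), (6, 'STORE'), (9, 'JUMP')]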
The ICT 1301, and its smaller cousin the ICT 1300, used germanium transistors and core memory. Backing store was magnetic drum, and optionally one-inch-, half-inch- or quarter-inch-wide magnetic tape. Input was from 80-column punched cards and optionally 160-column punched cards and punched paper tape. Output was to 80-column punched cards, printer and optionally to punched paper tape. The first customer delivery was in 1962, a 1301 sold to the University of London. One of their main attractions was that they performed British currency calculations (pounds, shillings and pence) in hardware. They also had the advantage of programmers not having to learn binary or octal arithmetic as the instruction set was pure decimal and the arithmetic unit had no binary mode, only decimal or pounds, shillings and pence. Its clock ran at 1 MHz. The London University machine still exists (January 2006) and is being reinstated to working condition by a group of enthusiasts.
The ICT 1302, used similar technology to the 1300/1301 but was a multiprogramming system capable of running three programs in addition to the Executive. It also used the 'Standard Interface' for the connection of peripherals allowing much more flexibility in peripheral configuration. The 'Standard Interface' was originally prototyped on the 1301 and went on to be used on the 1900 series.
The ICT 1400 was a first generation computer using thermionic valves, but was overtaken by transistor technology in 1959 and no sales were made.
The ICT 1500 series was a design bought in from the RCA Corporation, who called it the RCA 301. RCA also sold the design to Siemens in Germany and Compagnie des Machines Bull in France who called it the Gamma 30. It used a six-bit byte and had core stores of 10,000, 20,000 or 40,000 bytes.
The ICT 1900 series was devised after the acquisition of Ferranti's assets which brought in the new Ferranti-Packard 6000 machine from Ferranti's Canadian subsidiary, which was considerably more advanced than the existing 130x line. It was decided to adapt this machine to use the 'Standard Interface', and it was put on the market as the ICT 1904, the first in a range of upward-compatible computer systems.
In 1968 ICT merged with English Electric Computers, itself formed from the prior mergers of English Electric Leo Marconi (EELM) and Elliott Automation. The resulting company became International Computers Limited (ICL). At the time of the merger English Electric Computers was in the process of making a line of large IBM System/360-compatible mainframes based on the RCA Spectra 70, which was sold as the ICL System-4. Both 1900 and System-4 were eventually replaced by the ICL 2900 Series which was introduced in 1974.
External links
Oral history interview with Arthur L. C. Humphreys (1981), Charles Babbage Institute, University of Minnesota. Humphreys, a former managing director of ICL, reviews the history of the British computer industry, including the merger in 1959 of British Tabulating Machine Company and the Powers Samas company into International Computers and Tabulators, Ltd. (ICT), and the merger in 1968 of English Electric Computers Limited and ICT into ICL.
Boards from the ICT 1301 – archived in November 2006
ICT 1301 Resurrection Project – The National Museum of Computing
Two articles on the HEC4 at Morgan Crucible Co – archived in April 2014
References
Computer companies of the United Kingdom
Defunct computer hardware companies
Defunct manufacturing companies of the United Kingdom
Defunct technological companies of the United Kingdom
Former defence companies of the United Kingdom
Computer companies established in 1959
Electronics companies established in 1959
Manufacturing companies established in 1959
Manufacturing companies disestablished in 1968
1959 establishments in England
1968 disestablishments in England
British companies established in 1959
British companies disestablished in 1968
|
420195
|
https://en.wikipedia.org/wiki/DOS%20Protected%20Mode%20Interface
|
DOS Protected Mode Interface
|
In computing, the DOS Protected Mode Interface (DPMI) is a specification introduced in 1989 which allows a DOS program to run in protected mode, giving access to many features of the new PC processors of the time not available in real mode. It was initially developed by Microsoft for Windows 3.0, although Microsoft later turned control of the specification over to an industry committee with open membership. Almost all modern DOS extenders are based on DPMI and allow DOS programs to address all memory available in the PC and to run in protected mode (mostly in ring 3, least privileged).
Overview
DPMI stands for DOS Protected Mode Interface. It is an API that allows a program to run in protected mode on 80286 and later processors and to make calls back to real mode without having to set up these CPU modes manually. DPMI also provides functions for managing various resources, notably memory. This allows DPMI-enabled programs to work in multitasking operating systems, letting an OS kernel distribute such resources between multiple applications. DPMI provides only the functionality that needs to be implemented in supervisor mode, so it can be thought of as a single-tasking microkernel. The rest of the functionality is available to DPMI-enabled programs via calls to real-mode DOS and BIOS services, which allows the DPMI API itself to remain mostly independent of DOS. The only things that make the DPMI API DOS-specific are three functions for managing DOS memory and the letter "D" in the "DPMI" acronym.
A DPMI service can be 16-bit, 32-bit, or "universal" and is called the DPMI kernel, DPMI host, or DPMI server. It is provided either by the host operating system (virtual DPMI host) or by a DOS extender (real DPMI host). The DPMI kernel can be a part of a DOS extender such as in DOS/4GW or DOS/32A, or separate, like CWSDPMI or HDPMI.
The primary use of the DPMI API is to allow DOS extenders to provide a host-OS-agnostic environment. A DOS extender checks for the presence of a DPMI kernel and installs its own only if one is not already installed. This allows DOS-extended programs to run either in a multitasking OS that provides its own DPMI kernel, or directly under bare-metal DOS, in which case the DOS extender uses its own DPMI kernel. The user-mode kernels of Windows 3.x and 9x are built with a DOS extender, so they fully rely on the DPMI API provided by the Windows ring-0 kernel.
History
The first DPMI specification drafts were published in 1989 by Microsoft's Ralph Lipe. While based on a prototypical version of DPMI for Windows 3.0 in 386 enhanced mode, several features of this implementation were removed from the official specification, including a feature named MS-DOS Extensions or DOS API translation that had been proposed by Ralph Lipe in the original drafts. Most of it was implementing DOS and BIOS interfaces (due to this history some INT 21h APIs like 4Ch have to be implemented by all DPMI implementations). DPMI version 0.9 was published in 1990 by the newly formed DPMI Committee. The version number 0.9 of the resulting specification was chosen to reflect the stripped-down nature and incomplete status of the standard the members of the DPMI Committee could agree upon. While Windows reports DPMI version 0.9 for compatibility, it actually implements the other parts as well, since they present a vital part of the system. This undocumented full nature of DPMI has become known as "true DPMI" in the industry. The DPMI standard was not the only effort to overcome the shortcomings of the VCPI specification. At the same time that Microsoft developed DPMI for Windows 3.0, another industry alliance including Intel's Software Focus Group, Lotus, Digital Research, Interactive Systems and others developed a specification named Extended VCPI (XVCPI) to make the memory management and multitasking capabilities of the 386 available for extended DOS applications.
When it turned out that Microsoft's DPMI proposal addressed a number of similar issues and was supported by Windows, these efforts led to the creation of the DPMI Committee in February 1990 during a meeting at Intel in Santa Clara.
In 1991, the DPMI Committee revised DPMI to version 1.0 in order to incorporate a number of clarifications and extensions, but it still did not include the missing "true DPMI" bits implemented in Windows. In fact, "true DPMI" never became part of the official DPMI specification, and Windows likewise never implemented the DPMI 1.0 extensions (and not many DPMI hosts did).
While DPMI is tailored to run extended DOS application software in protected mode and extended memory, it is not particularly well suited for resident system extensions. Another specification named DPMS, developed by Digital Research / Novell around 1992, specifically addresses requirements to easily relocate modified DOS driver software into extended memory and run them in protected mode, thereby reducing their conventional memory footprint down to small stubs. This is also supported by Helix Cloaking.
The DPMI "method" is specific to DOS and the IBM PC. Other computer types were upgraded from 16-bit to 32-bit, and the advanced program support was provided by upgrading the operating system with a new 32-bit API and new memory management/addressing capabilities. For example, the OS/2 core system supports 32-bit programs, and can be run without the GUI. The DPMI solution appears to be mainly needed to address third party need to get DOS protected mode programs running stably on Windows 3.x before the dominant operating system vendor, Microsoft, could or would address the future of 32-bit Windows. In addition, Microsoft didn't see the answer to the 32-bit transition as a 32-bit DOS, but rather a 32-bit Windows with a completely different (and incompatible) API.
Compatibility
While Windows 3.0 implements "true DPMI" and reports support for DPMI 0.9, DPMI version 1.0 was never implemented in Microsoft Windows, so most programs and DOS extenders were written only for version 0.9. Few extenders, however, implement "true DPMI".
Beta versions of Qualitas 386MAX implemented "true DPMI" and could run Windows' KRNL386.EXE from the command line, but an internal email claimed that this capability was disabled in the released product. However, DPMIONE (by Bob Smith, based on the 386MAX code) can do it. Currently DPMIONE and 386MAX are the only DPMI hosts which support DPMI 1.0 completely (e.g. uncommitted memory), and they are the main supporters of DPMI 1.0.
The KRNL386.SYS (aka "MultiMAX") of DR DOS "Panther" and "StarTrek", which has been under development since 1991, and the EMM386.EXE memory managers of Novell DOS 7, Caldera OpenDOS and DR-DOS 7.02 and higher have built-in support for DPMI when loaded with the /DPMI[=ON] option. KRNL386.SYS even had a command line option /VER=0.9|1.0 to provide support for either DPMI 1.0 or 0.9. DOS API translation was referred to as "called interrupt 21 from protected mode". Multiuser DOS, System Manager and REAL/32 support DPMI as well.
The most famous separate DPMI kernel is probably CWSDPMI; it supports DPMI 0.9, but no undocumented "DOS API translation".
Another variant called PMODE by "TRAN" aka Thomas Pytel was popular with 32-bit programmers during the demo scene of the 1990s.
Many games used DOS/4GW, which was developed by Rational Systems as a subset of DOS/4G and was distributed with the Watcom C compiler.
HDPMI (part of HX DOS Extender) provides "DOS API translation" and almost complete DPMI 1.0 implementation.
DPMI Committee
The DPMI 1.0 Committee met from 1990 through 1991 and consisted of 12 groups:
Borland International (Borland C, Turbo Pascal)
IBM Corporation (PC DOS, OS/2)
AI Architects/Ergo Computer Solutions/Eclipse Computer Solutions/Ergo Computing (OS/286, OS/386 DOS extenders)
Intelligent Graphics Corporation (VM/386 multi-user DOS)
Intel Corporation (286, 386, 486 microprocessors)
Locus Computing Corporation (Merge)
Lotus Development Corporation (Lotus 1-2-3)
Microsoft Corporation (MS-DOS, Microsoft Windows)
Phar Lap Software (DOS|286, DOS|386, TNT)
Phoenix Technologies (Phoenix BIOS, PMate, PForCe, Plink-86)
Quarterdeck Office Systems (QEMM, DESQview, DESQview/X)
Rational Systems/Tenberry Software (DOS/16M, DOS/4G, DOS/4GW DOS extenders)
See also
Virtual Control Program Interface (VCPI)
DOS Protected Mode Services (DPMS)
Helix Cloaking
NetWare I/O Subsystem (NIOS)
Multiuser DOS Federation
Notes
References
Further reading
External links
DOS technology
DOS memory management
DOS extenders
Computer-related introductions in 1989
|
40617071
|
https://en.wikipedia.org/wiki/National%20Cyber%20Security%20Policy%202013
|
National Cyber Security Policy 2013
|
National Cyber Security Policy is a policy framework by the Department of Electronics and Information Technology (DeitY). It aims at protecting the public and private infrastructure from cyber attacks. The policy also intends to safeguard "information, such as personal information (of web users), financial and banking information and sovereign data". This was particularly relevant in the wake of US National Security Agency (NSA) leaks that suggested that US government agencies were spying on Indian users, who had no legal or technical safeguards against it. The Ministry of Communications and Information Technology (India) defines cyberspace as a complex environment consisting of interactions between people, software and services, supported by the worldwide distribution of information and communication technology.
Reason for Cyber Security policies
India had no cyber security policy before 2013. In 2013, The Hindu newspaper, citing documents leaked by NSA whistle-blower Edward Snowden, alleged that much of the NSA surveillance was focused on India's domestic politics and its strategic and commercial interests. This sparked a public furore. Under pressure, the government unveiled the National Cyber Security Policy 2013 on 2 July 2013.
Vision
To build a secure and resilient cyberspace for citizens, businesses, and government, and to protect users' privacy from intrusion.
Mission
To protect information and information infrastructure in cyberspace, build capabilities to prevent and respond to cyber threat, reduce vulnerabilities and minimize damage from cyber incidents through a combination of institutional structures, people, processes, technology, and cooperation.
Objective
The Ministry of Communications and Information Technology (India) defines the objectives as follows:
To create a secure cyber ecosystem in the country, generate adequate trust and confidence in IT system and transactions in cyberspace and thereby enhance adoption of IT in all sectors of the economy.
To create an assurance framework for the design of security policies and promotion and enabling actions for compliance to global security standards and best practices by way of conformity assessment (Product, process, technology & people).
To strengthen the regulatory framework for ensuring a secure cyberspace ecosystem.
To enhance and create National and Sectoral level 24x7 mechanism for obtaining strategic information regarding threats to ICT infrastructure, creating scenarios for response, resolution and crisis management through effective predictive, preventive, protective response and recovery actions.
To improve visibility of the integrity of ICT products and services by establishing infrastructure for testing and validation of the security of such products.
To create a workforce of 500,000 professionals skilled in cyber security in the next 5 years through capacity building, skill development and training.
To provide fiscal benefit to businesses for adoption of standard security practices and processes.
To enable protection of information while in process, handling, storage and transit so as to safeguard the privacy of citizens' data and to reduce economic losses due to cyber crime or data theft.
To enable effective prevention, investigation and prosecution of cybercrime and enhancement of law enforcement capabilities through appropriate legislative intervention.
Strategies
Creating a secured Ecosystem.
Creating an assurance framework.
Encouraging Open Standards.
Strengthening The regulatory Framework.
Creating mechanisms for early warning of security threats, vulnerability management, and response to security threats.
Securing E-Governance services.
Protection and resilience of Critical Information Infrastructure.
Promotion of Research and Development in cyber security.
Reducing supply chain risks
Human resource development (fostering education and training programs in both the formal and informal sectors to support the nation's cyber security needs and build capacity).
Creating cyber security awareness.
Developing effective Public-Private partnerships.
To develop bilateral and multilateral relationships in the area of cyber security with other countries (information sharing and cooperation).
A prioritized approach to implementation.
See also
National Security Strategy (India)
References
External links
Indian Cyber Security
Proposed laws of India
2013 in India
Ministry of Communications and Information Technology (India)
Computer security
Internet in India
Cyber Security in India
|
24095253
|
https://en.wikipedia.org/wiki/Adobe%20FreeHand
|
Adobe FreeHand
|
Adobe FreeHand (formerly Macromedia FreeHand and Aldus FreeHand) was a computer application for creating two-dimensional vector graphics oriented primarily to professional illustration, desktop publishing and content creation for the Web. FreeHand was similar in scope, intended market, and functionality to Adobe Illustrator, CorelDRAW and Xara Designer Pro. Because of FreeHand's dedicated page layout and text control features, it also compares to Adobe InDesign and QuarkXPress. Professions using FreeHand include graphic design, illustration, cartography, fashion and textile design, product design, architects, scientific research, and multimedia production.
FreeHand was created by Altsys Corporation in 1988 and licensed to Aldus Corporation, which released versions 1 through 4. In 1994, Aldus merged with Adobe Systems and because of the overlapping market with Adobe Illustrator, FreeHand was returned to Altsys by order of the Federal Trade Commission. Altsys was later bought by Macromedia, which released FreeHand versions 5 through 11 (FreeHand MX). In 2005, Adobe Systems acquired Macromedia and its product line which included FreeHand MX, under whose ownership it presently resides.
Since 2003, FreeHand development has been discontinued; in the Adobe Systems catalog, FreeHand has been replaced by Adobe Illustrator.
FreeHand MX continues to run under Windows 7 using compatibility mode and under Mac OS X 10.6 (Snow Leopard) within Rosetta, a PowerPC code emulator, and requires a registration patch supplied by Adobe. FreeHand 10 runs without problems on Mac OS X 10.6 with Rosetta enabled and does not require a registration patch. Someone using a version of Mac OS X later than 10.6 might be able to use VMware Fusion, VirtualBox or Parallels to virtualize Mac OS X Snow Leopard Server and run FreeHand within that virtual machine.
History
Altsys and Aldus FreeHand
In 1984, James R. Von Ehr founded Altsys Corporation to develop graphics applications for personal computers. Based in Plano, Texas, the company initially produced font editing and conversion software; Fontastic Plus, Metamorphosis, and the Art Importer. Their premier PostScript font-design package, Fontographer, was released in 1986 and was the first such program on the market. With the PostScript background having been established by Fontographer, Altsys also developed FreeHand (originally called Masterpiece) as a Macintosh Postscript-based illustration program that used Bézier curves for drawing and was similar to Adobe Illustrator. FreeHand was announced as "... a Macintosh graphics program described as having all the features of Adobe's Illustrator plus drawing tools such as those in Mac Paint and Mac Draft and special effects similar to those in Cricket Draw." Seattle's Aldus Corporation acquired a licensing agreement with Altsys Corporation to release FreeHand along with their flagship product, Pagemaker, and Aldus FreeHand 1.0 was released in 1988. FreeHand's product name used intercaps; the F and H were capitalized.
The partnership between the two companies continued with Altsys developing FreeHand and with Aldus controlling marketing and sales. After 1988, a competitive exchange between Aldus FreeHand and Adobe Illustrator ensued on the Macintosh platform with each software advancing new tools, achieving better speed, and matching significant features. Windows PC development also allowed Illustrator 2 (aka, Illustrator 88 on the Mac) and FreeHand 3 to release Windows versions to the graphics market.
FreeHand 1.0 sold for $495 in 1988. It included the standard drawing tools and features as other draw programs including special effects in fills and screens, text manipulation tools, and full support for CMYK color printing. It was also possible to create and insert PostScript routines anywhere within the program. FreeHand performed in preview mode instead of keyline mode but performance was slower.
FreeHand 2.0 sold for $495 in 1989. Besides improving on the features of FreeHand 1.0, FreeHand 2 added faster operation, Pantone colors, stroked text, flexible fill patterns, and the ability to automatically import graphic assets from other programs. It added accurate control over a color monitor screen display, limited only by its resolution.
FreeHand 3.0 sold for $595 in 1991. New features included resizable color, style, and layer panels, including an Attributes menu, as well as tighter precision for both the existing tools and the aligning of objects. FreeHand 3 could create compound paths. Text could be converted to paths, applied to an ellipse, or made vertical. Carried over from version 1.0, FreeHand 3 suffered from having text entered into a dialog box instead of directly on the page. In October 1991, a 3.1 upgrade made FreeHand work with Mac OS 7; additionally, it supported pressure-sensitive drawing, which offered varying line widths with a user's stroke. It improved element manipulation and added more import/export options.
FreeHand 4.0 sold for $595 in 1994. Altsys ported FreeHand 3.0 to the NeXT system creating a new program named Virtuoso. Virtuoso continued its development at Altsys and version 2.0 of Virtuoso was feature-equivalent to FreeHand 4 (with the addition of NeXT-specific features such as Services and Display PostScript) and file compatible, with Virtuoso 2 able to open FreeHand 4 files and vice versa. A prominent feature of this version was the ability to type directly into the page and wrap inside or outside any shape. It also included drag-and-drop color imaging, a larger pasteboard, and a user interface that featured floating, rollup panels. The colors palette included a color mixer for adding new colors to the swatch list. Speed increases were made.
In the same year as the FreeHand 4 release, Adobe Systems announced merger plans with Aldus Corporation for $525 million. Fear about the end of competition between these two leading applications was reported in the media and expressed by customers (Illustrator versus FreeHand and Adobe Photoshop versus Aldus PhotoStyler). Because of this overlapping of the market, Altsys stepped in by suing Aldus, saying that the merger deal was "a prima facie violation of a non-compete clause within the FreeHand licensing agreement." Altsys CEO Jim Von Ehr explained, "No one loves FreeHand more than we do. We will do whatever it takes to see it survive." The Federal Trade Commission issued a complaint against Adobe Systems on October 18, 1994, ordering a divestiture of FreeHand to "remedy the lessening of competition resulting from the acquisition as alleged in the Commission's complaint," and further, the FTC ordering, "That for a period of ten (10) years from the date on which this order becomes final, respondents shall not, without the prior approval of the Commission, directly or indirectly, through subsidiaries, partnerships, or otherwise .. Acquire any Professional Illustration Software or acquire or enter into any exclusive license to Professional Illustration Software;" (referring to FreeHand.)
FreeHand was returned to Altsys with all licensing and marketing rights as well as Aldus FreeHand's customer list.
Macromedia Freehand
By late 1994, Altsys still retained all rights to FreeHand. Despite brief plans to keep it in-house and sell it along with Fontographer and Virtuoso, Altsys reached an agreement to be acquired by the multimedia software company Macromedia. This mutual agreement provided FreeHand and Fontographer a new home with ample resources for marketing, sales, and competition against the newly merged Adobe-Aldus company. Altsys would remain in Richardson, Texas, but would be renamed as the Digital Arts Group of Macromedia and was responsible for the continued development of FreeHand. Macromedia received FreeHand's 200,000 customers and expanded its traditional product line of multimedia graphics software to illustration and design graphics software. CEO James Von Ehr became a Macromedia vice-president until 1997 when he left to start another venture.
FreeHand 5.0 sold for $595 in 1995. This version featured a more customizable and expanded workspace, multiple views, stronger design and editing tools, a report generator, spell check, paragraph styles, multicolor gradient fills up to 64 colors, speed improvements, and it accepted Illustrator plugins. In September 1995, a 5.5 upgrade added Photoshop plug-in support, PDF import capabilities, the Extract feature, inline graphics to text, improved auto-expanding text containers, the Crop feature, and the Create PICT Image feature.
A FreeHand 5.5 upgrade was part of the FreeHand Graphics Studio (a suite that included Fontographer, Macromedia xRes image editing application, and Extreme 3D animation and modeling application).
FreeHand 6.0 in 1996. This version existed only in beta. Some FreeHand 7 prerelease versions were released under the FreeHand 6 tag.
FreeHand 7.0 sold for $399 in 1996, or $449 as part of the FreeHand Graphics Studio (see above.) Features included a redesigned user interface that allowed recombining Inspectors, Panel Tabs, Dockable Panels, Smart Cursors, Drag and Drop with Adobe applications and QuarkXPress, Graphic Search and Replace, Java and AppleScript automation, chart creation, and new Effects tools and functions. Shockwave was introduced to leverage graphics for the Web.
FreeHand 8.0 sold for $399 in 1998. This version began integrating to the Web with the ability to export graphics directly to Macromedia Flash. Customizable toolbars and keyboard shortcuts were prominent features. Also Lens Fill and Transparency, Freeform tool, Graphic Hose, Emboss Effects, and a "Collect for Output" function for print.
FreeHand 9.0 sold for $399 in 2000 or $449 as part of the Flash 4 FreeHand Studio bundle. This was a major repositioning for FreeHand emphasizing the Web and especially Flash output. Creating simple Flash animation from layers was featured. The Perspective Grid, Magic Wand Tracing tool, Lasso tool, and a Page tool that treated pages like objects (resize, clone, rotate, etc.)
FreeHand 10.0 sold for $399 in 2000 or $799 as part of the Studio MX bundle. Macromedia released this as Carbonized for both Mac OS 9 and Mac OS X. It shared a common Macromedia GUI Interface and several tools were added or renamed to match Flash tools. New features include Brushes, Master Pages, Print Area, and a Navigation Panel for adding links, names, and adding actions or notes to objects. Also "Smart cursor" Pen and Bezigon Tools and a Contour Gradient Fill.
A minor version of FreeHand 10 (10.0.1) came as a result of Adobe winning a lawsuit against Macromedia for infringement on a Tabbed Panels patent. A reworking of the user interface produced this temporary fix for the panel issue. 10.0.1 was available with the Studio MX bundle or as a new purchase but not available as a patch to existing users.
FreeHand MX sold for $399 in 2003 or for $1,580 as part of the Studio MX 2004 bundle. FreeHand 11 was marketed as FreeHand MX and featured tighter interface integration with the Macromedia MX line of products. This release also featured a revamped Object Panel where all attributes and text properties are centralized for editing, Multiple Attributes for unlimited effects, Live Effects, Live-edit of basic shapes, Connector Lines tool, Flash and Fireworks integration, Extrude, Erase, and Chart tools, along with improvements to the standard tools.
During the development of FreeHand MX, the customer install base was 400,000 users worldwide but because of competition with Adobe Illustrator's market share, Macromedia focused instead on its web oriented lineup of Flash, Dreamweaver, Fireworks, and Contribute. In 2003, Macromedia reduced the FreeHand development team to a few core members to produce the 11.0.2 update released in February 2004. The company released a final product suite prior to the 2005 merger with Adobe, called Studio 8, which was characterized by the absence of FreeHand from the suite's interactive online applications of Dreamweaver, Flash, Fireworks, Contribute, and FlashPaper.
Adobe FreeHand
On April 18, 2005, Adobe Systems announced an agreement to acquire Macromedia in a stock swap valued at about $3.4 billion. The Department of Justice regulated the transaction that came 10 years after the Federal Trade Commission's 1994 ruling which barred Adobe from acquiring FreeHand. The acquisition took place on December 3, 2005, and Adobe integrated the companies' operations, networks, and customer-care organizations shortly thereafter. Adobe acquired FreeHand along with the entire Macromedia product line that included Flash, Dreamweaver, and Fireworks, but not including Fontographer, which FontLab Ltd. had licensed with an option to buy all rights. Adobe's acquisition of Macromedia cast doubt on the future of FreeHand, primarily because of Adobe's competing product, Illustrator. Adobe announced in May 2006 that it planned to continue to support FreeHand and develop it "based on [their] customers' needs". One year later on May 15, 2007, Adobe said that it would discontinue development and updates to the program and the company would provide tools and support to ease the transition to Illustrator. In a 2008 interview with Senior Product Manager of Illustrator, Terry Hemphill, he told FreeHand users: "FreeHand is not going to be revived; time to move on, really. The Illustrator team is making a determined effort to bring the best of FreeHand into Illustrator, which should be evident from some of the new features in CS4."
Controversy
In 2006, the FreeHand community protested Adobe's announcement of discontinuing development with the "FreeHand Support Page" petition. It was followed in 2007 by the "FreeHand Must Not Die" petition. In 2008, the Adobe FreeHand Forum listed "Adobe latest FreeHand MX upgrade, Would you pay?", which continued to receive signatures in 2012. In February 2009, Creative Review magazine published "Freehand Anonymous" about the present use of FreeHand in the UK. In September 2009, the Free FreeHand Organization (a user community with the goal of securing a future for FreeHand MX) was founded, and by 2011 its membership had surpassed 6,000 members worldwide. In May 2011, the Free FreeHand Organization filed a civil antitrust complaint against Adobe Systems, Inc. alleging that "Adobe has violated federal and state antitrust laws by abusing its dominant position in the professional vector graphic illustration software market." In spite of the aforementioned petitions, with the advent of Flash Player 11 in October 2011, Adobe intentionally dropped support for SWF content created in FreeHand, supposedly aiming to urge the transition to its Illustrator software. In early 2012, the lawsuit initiated by the Free FreeHand Organization resulted in a settlement with Adobe Systems, Inc., under which members of the group received discounts on Adobe products and a promise of product development of Adobe Illustrator based on their requests.
Release history
See also
Comparison of vector graphics editors
References
External links
freehand-forum.org — User community forum, formerly freefreehand.org
Graphics software
Vector graphics editors
Raster to vector conversion software
Aldus software
1988 software
Discontinued Adobe software
|
13121471
|
https://en.wikipedia.org/wiki/The%20Blackwell%20Legacy
|
The Blackwell Legacy
|
The Blackwell Legacy is a graphic adventure video game developed by Wadjet Eye Games for Microsoft Windows, Linux, macOS, iOS and Android. It is the first part of the Blackwell series and follows Rosangela Blackwell, a young freelance writer living a solitary life in New York City. She experiences headaches throughout the day, culminating in a ghost named Joey Mallone making an appearance in her apartment. It is revealed that Rosa is a medium like her aunt and that her job is to help ghosts that are stuck in the real world move on.
Gameplay
The Blackwell Legacy is a point-and-click adventure, where the player can interact with objects and characters of interest by clicking on them. A left click interacts with objects and people and directs Rosa where to go, while right clicking allows examination of items. An inventory houses all of the items the player collects during the adventure. Rosa carries a notepad where she writes down any important names or keywords; while trying to figure out connections between certain people or objects, two terms can be combined, which often gives the player a clue as to what to do next. The game is designed so that fully voice-acted dialogue and characterization play a large part in the narrative.
Plot
Rosangela 'Rosa' Blackwell, an introverted book review columnist for the Village Eye newspaper, returns home after scattering the ashes of her late aunt, Lauren Blackwell, at Queensboro Bridge. She briefly laments over how she barely knew her and now has no family: her grandparents have long since passed; her parents died in a car accident when she was young; and her aunt had been institutionalized in an induced coma for decades after a mental breakdown. Moments later, she is contacted by Dr. Quentin from Bellevue Hospital, who warns her of her aunt's, as well as Rosa's grandmother's, mental disorder, and that it is likely hereditary. Unfazed despite her recent headaches, Rosa reads the Blackwell family letters previously in the hospital's possession, which depict both women's gradual breakdowns and isolationist behavior at the same time they began interacting with a non-existent person both call "Joey". Afterwards, Rosa is assigned by the Village Eye to write about the recent suicide of a New York University student, JoAnn Sherman.
Despite barely getting information on the suicide case, Rosa manages to write the article and sends it out. After doing so, her ever-increasing migraine intensifies until a ghostly figure of a man in a fedora and business suit appears before her. The ghost introduces himself as Joey Mallone, the Blackwell family's spirit guide, and explains that Rosa is a medium who can see and communicate with other ghosts. Rosa's recent migraines since her aunt's death were symptoms of her latent medium abilities awakening. As such, she must help these ghosts pass on to the afterlife by making them become self-aware and come to terms with their deaths. When Rosa tries asking Joey to go away, he explains he is irremediably attached to her, as he was previously to the other women of her family, and thus cannot go very far from her. Joey also explains that Rosa's grandmother had outright rejected her role as a medium, and her aunt quit being a medium after several years of being one. This rejection of the role apparently caused their eventual mental breakdowns.
Joey asks Rosa to take him to Washington Square Park, where they discover the ghost of JoAnn's friend, Alli Montego, by the dog park. Unable to convince the ghost of her death, Rosa starts a proper investigation into JoAnn and her two friends, one of whom also committed suicide and another who is currently admitted to Bellevue after an attempted suicide. Rosa discovers JoAnn and her friends had played with an ouija board, accidentally summoning a restless ghost called the "Deacon", which led JoAnn and Alli to take their own lives when the ghost wouldn't stop haunting them. Rosa borrows her neighbor's dog and takes it to the dog park to convince Alli's ghost, who once aspired to be a veterinarian, to pass on. As a final request, Alli asks Rosa to keep an eye on Susan Lee, the friend who is still alive in Bellevue.
Breaking into the Bellevue Hospital late at night to watch over her, Rosa and Joey intercept the "Deacon", revealed to be the ghost of a priest who fell from grace and into alcoholism after his wife passed away. After being summoned, he constantly harassed JoAnn, Alli, and Susan to save him from his condemnation in Hell. Joey and Rosa eventually convince the Deacon to give in and allow himself to resign to his fate, until Rosa and the Deacon come face to face with a demon blocking the way to the afterlife unless the Deacon accepts the punishment for his sins. Through hints the demon ends up giving, Rosa realizes the Deacon's alcohol flask is the source of all his sins and destroys it, redeeming the remorseful Deacon's soul and finally granting him passage to eternal rest.
With the case finally closed, Rosa and Joey return home. Fascinated by her recent experiences, Rosa asks Joey why Lauren stopped being a medium. Joey explains that Lauren decided to retire as a medium only after adopting Rosa when Rosa's parents died in a car crash. Intending to honor both the aunt and grandmother she barely knew, Rosa embraces her newfound identity as a medium.
Development
The game runs on the Adventure Game Studio engine. Due to technical issues, it remained a Windows-only title at first, even though the engine's runtime had already been ported to Linux and Mac OS X. Ian Schlaepfer designed the character art, while Chris Femo and Tom Scary created the backgrounds, which include New York cityscapes such as Bellevue Hospital and the Queensboro Bridge.
The project originally began as Bestowers of Eternity, which was released as a free game in 2003. Subsequently, it was decided to extend and rework the project into a proper commercial product, which ultimately became The Blackwell Legacy.
Reception
Upon its release, The Blackwell Legacy was met with "generally favourable" reviews from critics for the Microsoft Windows version, with an aggregate score of 80% on Metacritic.
It was nominated for 4 AGS Awards in 2006 and won the award for Best Character Art.
References
External links
2006 video games
Video games developed in the United States
AGS Award winners
Adventure Game Studio games
Video games about ghosts
Indie video games
Windows games
Android (operating system) games
Linux games
MacOS games
IOS games
Adventure games
Point-and-click adventure games
Video games featuring female protagonists
Video games set in 2006
Video games set in New York City
|
945644
|
https://en.wikipedia.org/wiki/Linux.conf.au
|
Linux.conf.au
|
linux.conf.au (often abbreviated as lca) is Australasia's regional Linux and Open Source conference. It is a roaming conference, held in a different Australian or New Zealand city every year, coordinated by Linux Australia and organised by local volunteers.
The conference is a non-profit event, with any surplus funds being used to seed the following year's conference and to support the Australian Linux and open source communities. The name is the conference's URL, using the uncommon second-level domain .conf.au.
Conference history
In 1999, Linux kernel hacker Rusty Russell organised the Conference of Australian Linux Users in Melbourne. The first conference held under the linux.conf.au name took place two years later in Sydney. The conference is generally held in a different Australian city each time, although from 2006 onward, New Zealand cities have also been hosts.
Highlights from past conferences include:
1999: CALU (Conference of Australian Linux Users) was conceived, bankrolled (via his personal credit card) and executed by Linux kernel hacker Rusty Russell. It laid the foundation for a successful, strongly technical, eclectic and fun conference series.
2001: the first conference held under the linux.conf.au name.
2004: a major highlight was the dunking of Linus Torvalds for charity.
2006: the first conference to be held outside Australia, recognising the importance of the New Zealand Linux community.
2007: a new feature was an Open Day for non-conference attendees, in which community groups, interest groups and Linux businesses held stands and demonstrations.
2008: the second time the conference was held in Melbourne. 100 OLPC machines were distributed to random attendees to encourage development. The Speakers dinner was held at St Paul's Cathedral Chapter House, and the Penguin Dinner was held in conjunction with Melbourne's Night Market, playing on the title of Eric Raymond's book, The Cathedral and the Bazaar.
2009: during the Penguin Dinner, a substantial sum of money was raised for the Save Tasmanian Devils fund – and a pledge made to replace the Tux Logo with the conference mascot, Tuz, to help raise awareness.
2010: over $33,000 raised for Wellington Lifeflight Helicopter Ambulance service.
2011: the event was almost washed out by the floods that devastated southern Queensland.
2016: preparations almost derailed by a massive storm just before the conference opened.
2020: $24,342 raised and donated to Red Cross for Australian Bush-fire relief
2021: in May 2020 Linux Australia announced that the planned 2021 conference in Canberra was postponed until 2022 due to the COVID-19 pandemic and a lightweight virtual conference would be held in 2021 instead.
Miniconfs
Since 2002, a key feature of the conference has been the associated "miniconfs". These are gatherings, lasting from half a day to two days, run in parallel streams before the main conference. They have their own programme, but are open for any conference attendee to participate in.
The first event to have a miniconf was in 2002, with the Debian Miniconf, organised by James Bromberger. This was based upon the idea that DebConf 1 in Bordeaux was a "mini-conf" of the French Libre Software Meeting. The concept grew in 2004, with the Open-Source in Government (ossig) miniconf, EducationaLinux, Debian Miniconf and GNOME.conf.au. In 2010 the Arduino Miniconf was introduced by Jonathan Oxer, the author of Practical Arduino.
Miniconfs have included those devoted to computer programming, education, security, multimedia, Arduino and system administration.
See also
Open Source Developers' Conference
References
External links
linux.conf.au
Linux conferences
Free-software conferences
Recurring events established in 1999
|
29287604
|
https://en.wikipedia.org/wiki/Adobe%20Systems%2C%20Inc.%20v.%20Southern%20Software%2C%20Inc.
|
Adobe Systems, Inc. v. Southern Software, Inc.
|
Adobe Systems, Inc. v. Southern Software, Inc. was a case in the United States District Court for the Northern District of California regarding the copyrightability of digitized typefaces (computer fonts). The case is notable since typeface designs in general are not protected under United States copyright law, as determined in Eltra Corp. v. Ringer. Since that case, the United States Copyright Office has published policy decisions acknowledging the registration of computer programs that generate typefaces. In this case, the court held that Adobe's Utopia font was protectable under copyright and Southern Software, Inc.'s Veracity font was substantially similar and infringing.
Background
Eltra Corp. v. Ringer
In 1979, the United States Court of Appeals for the Fourth Circuit held in Eltra Corp. v. Ringer that typefaces are industrial designs which cannot exist independently as works of art.
1988: Policy Decision On Copyrightability Of Digitized Typefaces
In 1988, the U.S. Copyright Office published a policy decision specifically addressing attempts to register fonts. The Copyright Office stated that the representation of a glyph as pixels was not protectable expression, since the raw source typeface design was not protectable, and no original authorship occurred in the conversion process.
The policy decision described the digitization of typefaces as bitmap images, which was the leading format for fonts at the time. A limitation of this format is that different sizes of the font must have different bitmap representations.
1992: Registrability Of Computer Programs That Generate Typefaces
Having received applications to register copyrights for computer programs that generated typefaces using "typeface in digitized form", the Copyright Office revisited the 1988 Policy Decision in 1992. The Office was concerned that the claims indicated a significant technological advance since the previous policy decision. One advance was scalable font representations (Bézier curves). This format can output a font at any resolution, and stores its data as control points rather than pixels. The Office acknowledged that these fonts might involve original computer instructions to generate typefaces, and thus be protectable as computer programs, but ended saying that "The scope of the copyright will be, as in the past, a matter for the courts to determine."
Case background
Adobe filed suit against Southern Software in multiple complaints between 1995 and 1997. Adobe's allegations were:
Copyright infringement related to SSI's Key Fonts Pro products 1555 ('Veracity'), 2002 and 2003
Copyright infringement on intermediate copying
Patent infringement on Adobe's design patents
Adobe's fonts under dispute were created from previously digitized font files in bitmap form. Such bitmap images are not protectable under copyright, as addressed by the 1988 Policy Decision. These were imported into a program where an Adobe editor dragged control points to best match the outline of the bitmap image. When finished, these control points were translated into computer instructions to create the final font file.
Paul King, the sole employee of Southern Software, Inc., altered Adobe's Utopia font using the commercially available tools FontMonger and Fontographer. He produced three new fonts which differed trivially from the original font. These font-editing programs extracted the control points from Adobe's font files and made them available for manipulation and saving to a new font file. He scaled the coordinates of the fonts to 101% on the vertical axis in order to slightly change the fonts. Adobe also alleged that King modified the font-editing tools in order to remove Adobe's copyright notices. In total, King was accused of infringing Adobe's copyrights on more than 1100 fonts.
Claims
The central issue of the case was whether or not Adobe's font program was protectable under copyright. That the output of the computer program is not protectable does not affect whether the program itself is protectable.
Application of Policy Decisions and Regulations
King contended that the Regulation of 1992 was only a clarification of 1988's Policy Decision. The regulation stated that it "does not represent a substantive change in the rights of copyright claimants." Adobe contended that the 1992 Regulation's consideration of programs that generate typefaces covered its fonts.
Substantial similarity
King argued that after application of the Abstraction-Filtration-Comparison test to Adobe's font, there could not be any remaining protectable expression, since both the input to and output of the program was unprotectable. Adobe argued that each rendering of a character by the program was the result of an editor's choices, and thus were a result of original authorship.
Functional concerns
King contended that the selection of control points in glyph outlines was determined by efficiency, namely, the minimization of control points. Adobe contended that the outlines only determined some of the control points, and that there was creativity involved in picking the rest. One glyph may be expressed identically in a variety of ways with different numbers of control points.
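As a toy numerical illustration of this argument (not taken from the fonts at issue), the two cubic Bézier control-point sets below trace exactly the same straight outline segment even though their interior control points differ, showing that an identical rendered shape can be described by more than one choice of control points.

def cubic_bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bezier curve at parameter t in [0, 1]."""
    x = ((1 - t)**3 * p0[0] + 3 * (1 - t)**2 * t * p1[0]
         + 3 * (1 - t) * t**2 * p2[0] + t**3 * p3[0])
    y = ((1 - t)**3 * p0[1] + 3 * (1 - t)**2 * t * p1[1]
         + 3 * (1 - t) * t**2 * p2[1] + t**3 * p3[1])
    return (x, y)

# Two different control-point choices for the same segment from (0,0) to (3,0):
curve_a = [(0, 0), (1, 0), (2, 0), (3, 0)]
curve_b = [(0, 0), (0.5, 0), (2.5, 0), (3, 0)]

# Both curves stay on the line y = 0 with x running from 0 to 3, so the
# rendered outlines are identical although the control points are not.
for t in [i / 10 for i in range(11)]:
    xa, ya = cubic_bezier(*curve_a, t)
    xb, yb = cubic_bezier(*curve_b, t)
    assert ya == 0 and yb == 0 and 0 <= xa <= 3 and 0 <= xb <= 3
print("same outline, different control points")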
Protection under patent
Typeface designs may be protectable by a design patent under United States patent law. The requirements are more strict than those for copyright. Term of protection is 14 years from filing date. SSI claimed that the six design patents Adobe held related to its fonts were invalid since they did not disclose any article of manufacture. They also claimed that the designs lacked the required novelty and nonobviousness to be valid. Adobe's response was that the computer program was a sufficient article of manufacture. A previous case, before the U.S. Patent and Trademark Office's Board of Patent Appeals and Interferences, had held that novel and non-obvious typefaces designs could be subject to a design patent. Adobe used the deposition of the font's designer to attempt to show that the font was novel and non-obvious.
Analysis and holdings
The court held that Adobe's Utopia font was protectable expression, and that King infringed upon Adobe's copyright. Essentially, the court determined that the choice of control points by the font editor was a work of original authorship. The facts that the court used to reach this conclusion were that:
Creativity was involved in the selection of control points, since two independent font editors could begin with the same images and produce indistinguishable output, yet have few control points in common.
King's modifications to Adobe's fonts were mechanical and did not eliminate copyright infringement. Since the font editing programs extracted control points from Adobe's fonts, use of those programs constituted copying of literal expression. Thus, King also infringed on Adobe's copyright through intermediate copying into RAM.
Finally, the court denied Adobe's motion on patent infringement, stating that the novelty and non-obviousness in Adobe's fonts was a "genuine issue of material fact."
See also
Intellectual property protection of typefaces
References
Further reading
Lawrence D. Graham, Legal Battles that Shaped the Computer Industry, Quorum Books (August 30, 1999), p. 44.
Typeface Designers Wrestle With the World of Pixels
Adobe Inc.
United States copyright case law
Typography
1998 in United States case law
|
54834616
|
https://en.wikipedia.org/wiki/Eva%20Galperin
|
Eva Galperin
|
Eva Galperin is the Director of Cybersecurity at the Electronic Frontier Foundation (EFF) and technical advisor for the Freedom of the Press Foundation. She is noted for her extensive work in protecting global privacy and free speech and for her research on malware and nation-state spyware.
Biography
Galperin became interested in computers at an early age through her father, who was a computer security specialist. When she was 12, she set up a desktop for herself on his Unix/Solaris computer and became active in Usenet discussion areas about science fiction novels and in playing interactive text games; she later became active in web development. She studied political science and international relations at San Francisco State University while working as a Unix system administrator at various companies in Silicon Valley.
Galperin joined the EFF in 2007. Prior to the EFF, she worked at the Center for US–China Policy Studies, where she helped to organize conferences and researched Chinese energy policy. At the EFF, she led the Threat Lab project before being promoted to Director of Cybersecurity in 2017. Since 2018, she has focused on the eradication of "stalkerware", spyware used for domestic abuse, working with victims of stalkerware. These malicious applications, which are marketed to abusive spouses, overbearing parents, and stalkers, can be installed secretly on mobile devices, allowing their owners to monitor their targets' activities.
In April 2019, she convinced the anti-virus provider Kaspersky Lab to begin explicitly alerting users to security threats when stalkerware is detected by the company's Android product. She also asked Apple to allow antivirus applications in its marketplace and, like Kaspersky, to alert users if their mobile devices have been jailbroken or rooted. Galperin stated that competition would prompt more cybersecurity companies to follow suit and meet this heightened standard. She has also called on U.S. state and federal officials to arrest and prosecute executives of companies that develop and sell stalkerware on charges of hacking.
References
Computer security specialists
Cypherpunks
Electronic Frontier Foundation people
Internet activists
Living people
Privacy activists
San Francisco State University alumni
Year of birth missing (living people)
|
2755852
|
https://en.wikipedia.org/wiki/Naval%20Aviation%20Warfighting%20Development%20Center
|
Naval Aviation Warfighting Development Center
|
The Naval Aviation Warfighting Development Center (NAWDC, pronounced NAW-DIK) was formerly known as the Naval Strike and Air Warfare Center (NSAWC, pronounced "EN-SOCK") at Naval Air Station Fallon, located in the city of Fallon in western Nevada. It is the center of excellence for naval aviation training and tactics development. NAWDC provides service to aircrews, squadrons and air wings throughout the United States Navy through flight training, academic instructional classes, and direct operational and intelligence support. The name was changed from NSAWC to NAWDC to align with the naming convention of the Navy's other Warfighting Development Centers, including the Naval Surface and Mine Warfighting Development Center (SMWDC), the Naval Information Warfighting Development Center (NIWDC), and the Undersea Warfighting Development Center (UWDC).
History
NSAWC (now NAWDC) consolidated three commands into a single command structure under a flag officer on 11 July 1996 to enhance aviation training effectiveness. The Naval Strike Warfare Center (STRIKE "U"), based at NAS Fallon since 1984, was amalgamated with the Navy Fighter Weapons School ("TOPGUN") and the Carrier Airborne Early Warning Weapons School (TOPDOME). Both schools had moved from NAS Miramar as a result of a Base Realignment and Closure decision in 1993 which transferred that installation back to the Marine Corps as MCAS Miramar. The Seahawk Weapon School was added in 1998 to provide tactical training for Navy SH-60 / HH-60 / MH-60 series helicopters and the Airborne Electronic Attack Weapons School (HAVOC) for the EA-18G aircraft was added in 2014, augmenting the legacy Electronic Attack Weapons School (EAWS) for the EA-6B and EA-18G at NAS Whidbey Island, WA.
Mission
NAWDC is the primary authority on training and tactics development. NAWDC provides training, assessment, aviation requirements recommendations, research and development priorities for integrated strike warfare, maritime and overland air superiority, strike fighter employment, airborne battle management, Combat Search and Rescue (CSAR), Close Air Support (CAS), and associated planning support systems. The command is also responsible for the development, implementation, and administration of several courses of instruction while functioning as the Navy point of contact for all issues relating to the Air Combat Training Continuum. Additionally, NAWDC is the Navy point of contact for all issues related to the Fallon Range Training Complex (FRTC).
Command structure
NAWDC consists of ten departments. Personnel Resources (N1) oversees administrative functions, supply, security, automated information systems, and first lieutenant. The Intelligence Department (N2) provides support to air wing training in Fallon as well as to fleets and battle groups based all over the world. Additionally, N2 contains the CIS (Computer Information Systems) division. Operations (N3) manages scheduling for aircraft, aircrew, the training ranges, and keeps aircrew log books and records. The Maintenance Department (N4) maintains all NAWDC aircraft, including parts and supplies, manages the loading, unloading and storage of ordnance, and maintains aircrew flight equipment.
Strike (N5) is involved in tactics development and assessment for tactical aircraft and H-60 helicopters, program management and participation, mission planning, and inter/intra service liaison. N5 is the legacy "Strike U" organization and its primary function is the execution of Air Wing Fallon.
The C2 (Command and Control) Department (N6), known as the Carrier Airborne Early Warning Weapons School (CAEWWS) provides graduate-level command, control, communication, battle management, and training to E-2 Hawkeye aircrew, joint and combined personnel. CAEWWS is responsible for the development of community TTP, community tactical standardization and the production of Hawkeye WTIs. In addition to the course of instruction N6 Department conducts, N6 instructors support the N5 Department as Command and Control instructors and evaluators during Air Wing Fallon Detachment training. N6 Department resides in the Fleet Training Building with the N3, N7, and N8 departments.
The Navy Fighter Weapon School (N7) instructs advanced methods of strike-fighter employment through the "TOPGUN" Strike Fighter Tactics Instructor (SFTI) course. It also conducts the Senior Officers Course (SOC); and manages air wing power projection training. N7 personnel retain the traditional light blue T-shirts and light brown leather nametags worn by TOPGUN personnel and have their own spaces (shared with N6 and N8) separate from the main NAWDC building that house the heritage of TOPGUN legacy in forms of photos and other memorabilia. The NAWDC F-16 and F/A-18 aircraft sport the TOPGUN patch on the tail.
The Navy Rotary Wing Weapons School (N8) instructs graduate-level rotary wing employment through the "SEAWOLF" Seahawk Weapons and Tactics Instructor (SWTI) course. It also conducts the Strike Leader Attack Training Syllabus (SLATS), Senior Officers Course (SOC), assists N5 with airwing training, and manages the Navy's Mountain Flying Course.
Operational Risk Management/Safety Department (N9) manages air-and-ground related safety programs as well as medical training programs.
The Airborne Electronic Attack Weapons School (N10) is the EA-18G Growler weapons school and conducts the "HAVOC" Growler Tactics Instructor course.
Training
There are two distinct areas of NAWDC training using the FRTC extensively – carrier air wing (CVW) training and the "TOPGUN" SFTI, "CAEWWS" HEWTI, "HAVOC" GTI, and "SEAWOLF" SWTI graduate level courses. Air wing training brings together all of an air wing's squadrons for four weeks, providing strike planning and execution training opportunities in a dynamic, realistic, scenario-driven simulated wartime environment.
Air wing training consists of power projection training in strike warfare, amphibious operations, joint battlefield operations, CAS, and CSAR. The Strike Fighter Tactics Instructor (SFTI) course is advanced tactics training for F/A-18A-F aircrew in the Navy and Marine Corps through the execution of a demanding air combat syllabus, and it produces graduate-level strike fighter tacticians, adversary instructors, and Air Intercept Controllers (AIC). The Growler Tactics Instructor (GTI) course is advanced tactics training for EA-18G aircrew in the Navy through the execution of a demanding air combat and electronic warfare syllabus and produces graduate-level electronic warfare tacticians. The Seahawk Weapons and Tactics Instructor (SWTI) course develops the Navy's helicopter tactics doctrine via the SEAWOLF Manual; instructs the Navy's Mountain Flying School; provides high-altitude, mountainous flight experience for sea-going squadrons; and provides academic, ground, flight, and opposing-forces instruction for visiting aircrew during Air Wing Fallon detachments. NAWDC staff members augment "adversary" air support, or "bandit" presentations, to support airborne portions of the training. NAWDC also annually hosts a ten-day CSAR exercise providing all-service participation with one full week of exercise flying involved.
Concurrent with each SFTI course, NAWDC conducts an Adversary Training Course where pilots receive individual instruction in threat simulation, effective threat presentation and adversary tactics. Each class trains five to six Air Intercept Controllers in effective strike/fighter command and control.
In the classroom, NAWDC also conducts tactically oriented courses. The SOC addresses strategic and tactical issues at the battle group commander, air wing commander and squadron commanding officer level. SLATS introduces junior Navy and Marine Corps officers to all aspects of air wing, battle group and joint force tactics, planning and hardware. Another important course is the Advanced Mission Commander's Course (AMCC) which focuses on the airborne battle management, providing graduate-level command, control and communication training to E-2C mission commanders and other carrier aircraft plane commanders.
Tactics development
The Plans, Programs and Tactics (N5) department utilizes both NAWDC and fleet aircraft to develop the latest in airwing tactics. These are standardized and promulgated to the fleet via the Naval Warfare Publication 3-01 Carrier Airwing Tactical Memo, and updated bi-annually. The N5 department forms a core of expertise which functions to advise the Chief of Naval Operations on programmatic issues, and lends its support to real world operations as targeteers providing extensive liaison and standardization to other Naval and joint training agencies.
Range
The Fallon Range Training Complex (FRTC) encompasses an extensive block of airspace east of NAS Fallon and includes a vast array of electronic systems supporting squadron, airwing and SFTI training. The heart of this program is the Advanced Digital Display System, or ADDS. This computer-supported real-time digital display allows monitoring of each training event as it occurs on the ranges and provides a recording capability for debriefing. Information is transmitted instantaneously from each aircraft to large screen displays at NAWDC and recorded for playback to the aircrews for post-flight analysis of procedures and tactics. This system also allows controllers and aircrews to view an event from several different aspects in three dimensions.
Naval intelligence
One of NAWDC's most interactive departments is N2, naval intelligence. Within this department are targeting and weapons experts, assisted by enlisted intelligence specialists, who gather data on potential trouble areas around the globe where deployed naval forces might be called for presence or action. Inherent in the intelligence mission is preparation of aircrews for all circumstances they may face in combat. Another function of NAWDC's intelligence department is contingency preparation. When called upon, members will deploy, armed with the latest intelligence gathered, to assist commanders in theater.
See also
USAF Weapons School (Air Force equivalent)
References
External links
Air units and formations of the United States Navy
Churchill County, Nevada
Military units and formations established in 1996
1996 establishments in Nevada
|
630253
|
https://en.wikipedia.org/wiki/Fort%C3%A9%20Agent
|
Forté Agent
|
Forté Agent is an email and Usenet news client used on the Windows operating system. Agent was conceived, designed and developed by Mark Sidell and the team at Forté Internet Software in 1994 to address the need for an online/offline newsreader which capitalized on the emerging Windows GUI framework. By 1995, Agent had expanded to become a full-featured email client and remains a widely used application for integrating news and email communication on Windows. Agent supports POP email but not IMAP.
Agent's Usenet features include access to multiple news servers, import/export of NZB files, threaded discussions and a highly configurable user interface which has been criticized as difficult to use. It has long supported yEnc as well as many other coding schemes, and has the capability of joining incomplete binary attachments, which is useful in the event of posting errors.
In the past, a free version was offered alongside the commercial one. The free version lacked some features of the commercial version or, later, had them disabled until a registration key was entered. The last free version was 3.3.
Forté Internet Software
Originally called Forte Advanced Management Systems, Forté Internet Software produced enterprise-level products in the 1980s and 1990s, including network optimization and station administration tools that were licensed by Nortel Networks. In 1996, Forté created Adante, software for managing high volumes of inbound corporate email.
In late 1997, Forté was acquired by Genesys Telecommunications (which was then purchased by Alcatel) to integrate Adante into the Genesys and Alcatel product lines. In 2000, Alcatel sold Forte's Consumer Software Group to Charles Dazler Knuff. Now known as Forté Internet Software, this group continues to develop Agent and to research the areas of social software, email and wireless communications using trust networks.
In 2003, Forté created Forté Internet Services, which offers Agent Premium Newsgroups (APN), a high-speed, high-retention Usenet news service.
See also
List of Usenet newsreaders
Comparison of Usenet newsreaders
References
External links
Forté Internet Software
alt.usenet.offline-reader.forte-agent, Agent newsgroup
Usenet clients
Windows email clients
1994 software
|
3819064
|
https://en.wikipedia.org/wiki/Event%20monitoring
|
Event monitoring
|
In computer science, event monitoring is the process of collecting, analyzing, and signalling event occurrences to subscribers such as operating system processes, active database rules, and human operators. These event occurrences may stem from arbitrary sources in both software and hardware, such as operating systems, database management systems, application software and processors. Event monitoring may use a time series database.
Basic concepts
Event monitoring makes use of a logical bus to transport event occurrences from sources to subscribers: event sources signal event occurrences to all event subscribers, and event subscribers receive event occurrences. An event bus can be distributed over a set of physical nodes such as standalone computer systems. Typical examples of event buses are found in graphical systems such as the X Window System and Microsoft Windows, as well as in development tools such as SDT.
Event collection is the process of collecting event occurrences in a filtered event log for analysis. A filtered event log contains those logged event occurrences that may be of meaningful use in the future; this implies that event occurrences can be removed from the filtered event log once they are no longer useful. Event log analysis is the process of analyzing the filtered event log to aggregate event occurrences or to decide whether or not an event occurrence should be signalled. Event signalling is the process of signalling event occurrences over the event bus.
Something that is monitored is denoted the monitored object; for example, an application, an operating system, a database, hardware etc. can be monitored objects. A monitored object must be properly conditioned with event sensors to enable event monitoring, that is, an object must be instrumented with event sensors to be a monitored object. Event sensors are sensors that signal event occurrences whenever an event occurs. Whenever something is monitored, the probe effect must be managed.
Monitored objects and the probe effect
As discussed by Gait, when an object is monitored, its behavior is changed. In particular, in any concurrent system in which processes can run in parallel, this poses a particular problem. The reason is that whenever sensors are introduced in the system, processes may execute in a different order. This can cause a problem if, for example, we are trying to localize a fault, and by monitoring the system we change its behavior in such a way that the fault may not result in a failure; in essence, the fault can be masked by monitoring the system. The probe effect is the difference in behavior between a monitored object and its un-instrumented counterpart.
According to Schütz, we can avoid, compensate for, or ignore the probe effect. In critical real-time systems, in which timeliness (i.e., the ability of a system to meet time constraints such as deadlines) is significant, avoidance is the only option. If we, for example, instrument a system for testing and then remove the instrumentation before delivery, this invalidates the results of most testing based on the complete system. In less critical real-time systems (e.g., media-based systems), compensation can be acceptable, for example for performance testing. In non-concurrent systems, ignoring the probe effect is acceptable, since the behavior with respect to the order of execution is left unchanged.
Event log analysis
Event log analysis is known as event composition in active databases, chronicle recognition in artificial intelligence, and real-time logic evaluation in real-time systems. Essentially, event log analysis is used for pattern matching, filtering of event occurrences, and aggregation of event occurrences into composite event occurrences. Commonly, dynamic programming strategies from algorithms are employed to save the results of previous analyses for future use, since, for example, the same pattern may be matched against the same event occurrences in several consecutive analysis passes. In contrast to general rule processing (employed to assert new facts from other facts, cf. inference engine), which is usually based on backtracking techniques, event log analysis algorithms are commonly greedy; for example, when a composite event is said to have occurred, this fact is never revoked as it might be in a backtracking-based algorithm.
Several mechanisms have been proposed for event log analysis: finite state automata, Petri nets, procedural approaches (based either on an imperative programming language or on an object-oriented programming language), a modification of the Boyer–Moore string search algorithm, and simple temporal networks.
See also
Event stream processing (ESP)
Complex event processing (CEP)
Network monitoring
Runtime verification (RV)
References
Operating system technology
Network management
System monitors
Systems management
|
21250314
|
https://en.wikipedia.org/wiki/Jeppiaar%20Engineering%20College
|
Jeppiaar Engineering College
|
Jeppiaar Engineering College (JEC) is one of the institutions of the Jeppiaar Educational Trust and was established on 15 August 2001. The college is a Christian minority institution run by Dr. Regeena Jeppiaar B.E., M.E., PhD, daughter of Col. Dr. Jeppiaar M.A., B.L., PhD. The college is accredited with an "A" grade by NAAC and has an NBA-accredited B.Tech Biotechnology department.
The college is affiliated to the Anna University, Chennai and is approved by the All India Council for Technical Education and Government of Tamil Nadu.
Academic departments
The institution was established primarily as an engineering college but also conducts courses in Business Administration. The college consists of the following academic departments:
Department of Electronics and Communication Engineering
Department of Aeronautical Engineering
Department of Artificial Intelligence and Data Science
Department of Information Technology
Department of Computer Science and Engineering
Department of Mechanical Engineering
Department of Bio-Technology
Department of Management Studies (Business Administration)
Department of Information Technology
The Information Technology department started in 2001 with a student intake of 60, which was increased to 120 in 2010. It was accredited on 21 June 2011. The department currently has 350 students. This is the first department in academics. Second- and third-year students compulsorily undergo in-plant training, and final-year students carry out project work in various industries.
Department of Electronics and Communication Engineering
The Electronics & Communication Engineering department started in 2001 with a student intake of 60 and increased to 120 in 2004. It was accredited on 19 July 2008. Currently, the Department has 580 students.
Department of Mechanical Engineering
The Mechanical Department was inspected in 2002 and later accredited on 19 July 2008. The department has an intake of 120 students.
Controversy
Jeppiaar Engineering College, like other Jeppiaar educational institutions and other private colleges in Tamil Nadu, is known for extreme forms of gender segregation, under which girls and boys are subjected to severe restrictions. This has been the subject of much controversy and criticism.
References
External links
Jeppiaar Engineering College - Official website
Jeppiaar Engineering College - Official landing page
Engineering colleges in Chennai
Colleges affiliated to Anna University
Educational institutions established in 2001
2001 establishments in Tamil Nadu
|
39169738
|
https://en.wikipedia.org/wiki/SNAP%20Points
|
SNAP Points
|
SNAP is the acronym for "Software Non-functional Assessment Process", a measurement of non-functional software size. The SNAP sizing method complements ISO/IEC 20926:2009, which defines a method for the sizing of functional user requirements. SNAP is a product of the International Function Point Users Group (IFPUG), and is sized using the “Software Non-functional Assessment Process (SNAP) Assessment Practices Manual” (APM) now in version 2.4. SNAP is recognized as an international standard by IEEE as “IEEE 2430-2019-IEEE Trial-Use Standard for Non-Functional Sizing Measurements,” published October 19, 2019 (https://standards.ieee.org/standard/2430-2019.html). It is also recognized as an international standard by ISO as “Software engineering — Trial use standard for software non-functional sizing measurements,” (https://www.iso.org/standard/81913.html), published October 2021.
Introduction
“Software sizing or software size estimation is an activity in software engineering that is used to determine or estimate the size of a software application or component in order to be able to implement other software project management activities (such as estimating or tracking). Size is an inherent characteristic of a piece of software just like weight is an inherent characteristic of a tangible material.”
A software application can provide two aspects of value to its users. The first aspect is its data processing capacity. This is basically the flow and storage of data through the application. This flow and storage can be defined as its "functionality". One metric used to measure the size of one unit of this functionality is the “function point.” By using an ISO-standard functional sizing metric (FSM) such as that in the IFPUG “Function Point Counting Practices Manual,” (FSM ISO/IEC 20926:2009), a function point counting specialist can examine the software application’s functional portion and measure its functional size in units of function points.
For more detail on the function point metric, and other organizations’ functional software sizing metrics, see the bibliography, the Wikipedia article “function point,” and numerous references in the literature.
A software application can also provide aspects other than data processing capacity. These aspects are defined by IFPUG as "non-functional." Their size is measured by SNAP. The IFPUG APM details how to size the non-functional aspects of software applications. The SNAP methodology has the IEEE standard IEEE 2430-2019. The non-functional aspects are defined and classified in ISO/IEC 25010:2011, "Systems and software engineering -- Systems and software Quality Requirements and Evaluation (SQuaRE) -- System and software quality models".
The functional size, together with the non-functional size, should be used for measuring the size of software projects. The two sizes should be used to measure the performance of software projects, to set benchmarks, and to estimate the cost and duration of software projects.
The Non-functional Sizing Method
Similar to function point sizing, one unit of non-functionality is the “SNAP point” and the sizing of the non-functional portion of an application can be measured by using the procedure in the APM. Similar to function points, by using the IFPUG APM, a SNAP point counting specialist can examine the software application and measure the size of its non-functional portion in units of SNAP points. Also like function points, the number of SNAP points in an application correlates with the work effort to develop the non-functional portion of that application. The original research detailing this correlation is in CrossTalk The Journal of Defense Software Engineering, as the paper “A New Software Metric to Complement Function Points The Software Non-functional Assessment Process (SNAP)”.
Each portion (the functional and the non-functional) of the software project requires work effort to develop, which is proportional to its software size. Software development organizations can use their correlations between function points and their work effort, and between SNAP points and their work effort, to help forecast their software development costs and schedules, and to audit projects to determine how well funding was spent and schedules were managed.
SNAP recognizes four categories and 14 subcategories of non-functionality, listed below as given in the APM.
1. Data Operations
1.1 Data Entry Validations
1.2 Logical and Mathematical Operations
1.3 Data Formatting
1.4 Internal Data Movements
1.5 Delivering Added Value to Users by Data Configuration
2. Interface Design
2.1 User Interfaces
2.2 Help Methods
2.3 Multiple Input Methods
2.4 Multiple Output Formats
3. Technical Environment
3.1 Multiple Platform
3.2 Database Technology
3.3 Batch Processes
4. Architecture
4.1 Component Based Software
4.2 Multiple Input / Output Interfaces
For example, software development to change the field sizes for data in a data table does not represent changes in data processing capacity. However, this development requires work effort. Data Formatting is considered non-functional, and is countable under SNAP subcategory 1.3.
Help Methods (subcategory 2.2) are usually considered non-functional. When compared to the function point process, which requires data to cross an application’s boundary and maintain an internal logical file, the Help data may be coded to reside internally as part of the application development and be accessed upon command from the user. This access can be anything from bubble help over an icon on a screen, to access of part of an internally stored application operations manual. Data is not being processed per se, so Help is usually considered non-functional.
Function points and SNAP points measure two different aspects of software, and therefore are not added together. For example, an application of 500 function points and 300 SNAP points cannot be considered to be the size 800 of some metric; function points and SNAP points are intended to be orthogonal. A good reference for further detailed information regarding the relationship between functionality and non-functionality is in the document “Glossary of Terms for Non-Functional Requirements and Project Requirements Used in Software Project Performance Measurement, Benchmarking and Estimating”.
Benefits
SNAP provides users and software development teams with many benefits beyond those of using function points alone. Below are five of many examples.
Measuring the non-functional requirements improves the work effort estimation of software development based on functional sizing alone.
This improved work effort estimation should also lead to better estimates of scheduling, resource allocation, and risks.
Including the measure of non-functional size improves the work effort estimation to maintain the software after it is deployed.
The productivity rates of project teams can be better determined because more factors are included in their measured work output.
Including both functional and non-functional work products better demonstrates value delivered to the user.
Further, some software development efforts might be measured as having zero function points. For example, an Agile software maintenance sprint might be required only to change the length of data fields in data tables. This would be measured to have zero function points because it is non-functional; however, that work would be accountable in SNAP. SNAP at least partly solves the “0 function point” problem.
Areas for future research
The SNAP beta test in 2012 was conducted using 48 applications. More research will hopefully improve the calibration of the subcategory weighting factors to yield an even stronger statistical correlation. It is recommended that future research results be submitted to IFPUG’s Non-functional Sizing Standards Committee (NFSSC) for review.
See also
IFPUG
Bibliography
Buglione, Luigi, and Santillo, Luca, "NFR: L'Altra Metà Della Mela," Newsletter, Gruppo Utenti Function Point Italia – Italian Software Metrics Association, www.gufpi-isma.org, December 2011.
International Function Point Users Group, “How Function Points and SNAP Work Together,” MetricViews, www.ifpug.org, Princeton Junction, NJ, 08550, USA, August 2015.
Jones, Capers, "A Guide to Selecting Software Measures and Metrics," CRC Press, Boca Raton, FL, 33487, USA, 2017.
Jones, Capers, "Quantifying Software: Global and Industry Perspectives," CRC Press, Boca Raton, FL, 33487, USA, 2018.
References
External links
ifpug.org
Software metrics
|
16984046
|
https://en.wikipedia.org/wiki/Whitechapel%20Computer%20Works
|
Whitechapel Computer Works
|
Whitechapel Computer Works Ltd. (WCW) was a computer workstation company founded in the East End of London, United Kingdom in April 1983 by Timothy Eccles and Bob Newman, with a combined investment of £1 million from the Greater London Enterprise Board (£100,000 initially), venture capital companies Newmarket and Baillie Gifford, and the Department of Trade and Industry. The company was situated in the Whitechapel Technology Centre - a council-funded high-technology enterprise hub - and began the design of their first workstation model in August 1983, shipping the first units by September 1984.
MG-1 Workstation
The company's first workstation model was the MG-1 (named after the Milliard Gargantubrain from The Hitchhiker's Guide to the Galaxy). The MG-1 was based on the National Semiconductor NS32016 microprocessor, with 512 KB of RAM (expandable to 8 MB), a 1024 × 800 pixel monochrome display, a 10, 22 or 45 MB hard disk, 800 KB floppy drive, and an optional Ethernet interface, with prices stated as being equivalent to $6975 for the 10 MB hard disk system, $8250 for the 22 MB system and $9500 for the 45 MB system.
A contemporary evaluation of a 40 MB hard disk system with 2 MB RAM lists an approximate acquisition price of £9000. While there was no distributor in the United States, the MG-1 was sold in North America by Cybertool Systems Ltd. from 1984 through 1986. A colour version, the CG-1, was also announced in 1986, followed by the MG-200, with an NS32332 processor, in 1987.
The MG-1 employed an 8 MHz 32016 CPU with 32082 memory management unit (MMU) and 32081 floating-point unit (FPU), with the MMU being noted in a 1985 article as "suffering from bugs" and being situated on its own board providing hardware fixes. In order to deliver the machine at prices closer to personal computers than contemporary workstations (such as Sun, Apollo and Perq), design techniques from the personal computer industry were adopted, with a single eight-layer system board being used to hold the CPU and other integrated circuits.
Initially, NatSemi's Genix operating system, described as being based on Unix System III with 4.1BSD enhancements, or just 4.1BSD, was provided. NatSemi's Unix roadmap in 1984 advertised forthcoming 4.2BSD features and a "generic port of UNIX System V". However, during 1985, Genix was replaced on the MG-1 by a port of 4.2BSD called 42nix and augmented with the Oriel graphical user interface to give a reported factor of six performance improvement in graphics performance, Oriel being partially kernel-based.
In order to improve responsiveness and reduce the latency observed with contemporary Unix systems, the mouse position was tracked using a dedicated processor which also monitored the keyboard for events, and a form of hardware mouse pointer was used, with the pointer bitmap being stored in its own 64-pixel buffer as a kind of overlay, this being combined with the main display image to produce the final screen image. The machine also featured a "soft power switch" similar to that provided by the Apple Lisa (and also the slightly later Torch Triple X) which initiated "an orderly UNIX shutdown".
History and Legacy
WCW went into receivership in 1986, but was soon revived as Whitechapel Workstations Ltd. The new company, described as "a briefly flowering UK-based UNIX workstation company that shipped the first MIPS desktop computers in 1987", initially announced the MG-300 based on the MIPS architecture with a performance rating of 8 to 10 million instructions per second as part of a strategy to pursue sales in the US market via original equipment manufacturers and value-added resellers, with the company's management having been reconstituted to include "one-half new and one-half old staff". The MG-300 model was subsequently launched as the Hitech-10, featuring the MIPS R2000 processor, this being followed by the Hitech-20 with a MIPS R3000 processor, subsequently known as the Mistral-20. These ran the UMIPS variant of UNIX, with either X11 or NeWS-based GUIs, and were aimed at computer animation applications.
Whitechapel had reportedly sold as many as 1,000 workstations from its first range, these having been "particularly successful" in the London financial industry, and was aiming to increase production levels by relocating manufacturing from the UK to West Germany. However, the company entered receivership in April 1988. Its assets related to the Hitech-10 were purchased in June 1988 by a consortium, Computer Hitech International, which adopted the corporate identity Mistral Computer Systems. Mistral subcontracted the design of its systems to Algorithmics Ltd., this being "essentially the rump of the old Whitechapel design team". Algorithmics was later acquired by MIPS Technologies in 2002.
References
Defunct companies based in London
Defunct computer companies of the United Kingdom
Defunct computer hardware companies
Computer companies established in 1983
Computer workstations
MIPS architecture
|
3903457
|
https://en.wikipedia.org/wiki/William%20Genovese
|
William Genovese
|
William Genovese is a former greyhat hacker turned security professional, who goes by the alias illwill.
History
In the early 2000s, Genovese was a figure in a loose-knit group of computer hackers who called themselves illmob. Their website, illmob.org, was a security community site run by Genovese that was associated with a number of high-profile incidents at the time.
Genovese now works as a private security consultant in the computer security industry, performing penetration testing, phishing assessments, OSINT threat intelligence, and mitigation. He is also a contributor to the Metasploit project.
Website controversy
In 2003, Genovese's website was the first to release zero-day code exploiting the MS03-026 Windows RPC vulnerability, which was later used by unknown hackers to create variants of the W32/Blaster worm. In response, Genovese released a tool he coded to remove the worm from infected Windows PCs.
In 2004, federal authorities charged Genovese with theft of a trade secret (US Code Title 18, Section 1832) for selling the incomplete Windows NT/2000 Microsoft source code to Microsoft investigators and federal agents, even though the code sold was already widely distributed on the Internet prior to his sale. Authorities used an obscure law enacted under the Economic Espionage Act of 1996, an area that had traditionally been adjudicated through private civil litigation.
In 2005, the illmob.org site posted leaked images and the phone book from Paris Hilton's T-Mobile Sidekick phone, obtained from a fellow hacker. Reportedly, the data was obtained by social engineering and by exploiting a vulnerability in a BEA WebLogic Server database function that allowed an attacker to remotely read or replace any file on a system by feeding it a specially crafted web request. BEA produced a patch for the bug in March 2003, which T-Mobile failed to apply.
The website was also mentioned in news articles, in connection with Fred Durst's sex tape leak which was stolen from his personal email account.
Hackerspace
From 2010 until his resignation in 2016, Genovese co-founded and was a board member of a 501(c)(3) non-profit hackerspace in Connecticut called NESIT, where he helped the local community by offering free classes on various network security topics, personal internet safety, reverse engineering, embedded electronics projects, 3-D printing, and design. He helped build a virtualized pen-testing lab with a large server farm donated by a pharmaceutical company, where users can simulate attacking and penetrating machines in a safe lab environment.
Consulting
Since 2008, Genovese has reinvented himself as a security consultant, public speaker, and teacher. He does security consulting and performs penetration testing services for companies worldwide. He was also a co-founder of and speaker at the security conferences eXcon and BSides Connecticut (BSidesCT) in 2011, 2014, 2016, 2017, and 2018. In 2015 he was a panelist at DEF CON 23 in Las Vegas for a charity fundraiser to help a fellow hacker who was stricken with terminal cancer.
References
External links
http://illmob.org/
http://willgenovese.com/
Year of birth missing (living people)
Living people
American computer criminals
Computer security specialists
|
28382335
|
https://en.wikipedia.org/wiki/Interxion
|
Interxion
|
Interxion is a European provider of carrier and cloud-neutral colocation data centre services. Founded in 1998 in the Netherlands, the firm was publicly listed on the New York Stock Exchange from 28 January 2011 until its acquisition by Digital Realty in March 2020. Interxion is headquartered in Schiphol-Rijk, the Netherlands, and delivers its services through 53 data centres in 11 European countries located in major metropolitan areas, including Dublin, London, Frankfurt, Paris, Amsterdam and Madrid, the 6 main data centre markets in Europe, as well as Marseille, Interxion’s Internet Gateway.
The company's core offering is carrier-neutral colocation, which includes provision of space, power and a secure environment in which to house customers’ computing, network, storage and IT infrastructure. Interxion also supplements its core colocation offering with a number of additional services, including systems monitoring, systems management, engineering support services, data back-up and storage.
Within its data centres, Interxion enables approximately 1,500 customers to house their equipment and connect to a broad range of telecommunications carriers, ISPs and other customers. The data centres act as content and connectivity hubs that facilitate the processing, storage, sharing and distribution of data, content, applications and media among carriers and customers.
Interxion's customer base is in high-growth market segments, including financial services, cloud and managed services providers, digital media and carriers. Customers in these target markets enable expansion of existing communities of interest and build new, high-value communities of interest within the data centre. Communities of interest are particularly important to customers in each of these market segments. For example, customers in the digital media segment benefit from the close proximity to content delivery network providers and Internet exchanges in order to rapidly deliver content to consumers. Interxion expects the high-value and reduced-cost benefits of communities of interest to continue to attract new customers.
Interxion's data centres enable its customers to connect to more than 500 carriers and ISPs and 20 European Internet exchanges, allowing them to lower telecommunications costs and reduce latency times.
Communities of interest
Interxion focuses its efforts on attracting customers in well-defined sectors of industry:
Digital media
Interxion has created content hubs across its European data centre footprint. The hubs allow organisations to aggregate, exchange, store, manage and distribute content in addition to interconnecting with a large digital media community, helping to optimise distribution and minimise costs.
Financial services
Interxion has created financial hubs across key European financial markets, including London, Paris, Amsterdam, Frankfurt, Dublin and Stockholm. The hubs consist of highly interactive and extensive communities of capital market participants, including a range of algorithmic and high-frequency traders, brokers, hedge funds, exchanges, multilateral trading facilities, market data providers and clearing houses. The financial hubs are accessible via a wide range of carriers and high bandwidth fibre connectivity providers.
Cloud
Interxion has created cloud hubs across its European footprint, creating an optimal environment for the cost-effective development, launch and management of cloud-based services for enterprises, systems integrators and cloud service providers. The hubs also enable fast, easy interconnection with one of Europe's largest and fastest-growing communities of cloud operators.
Carriers and network providers
Interxion works with many carriers, network providers and Internet service providers, as well as 20 Internet exchanges, neutral Ethernet exchanges, CDNs and over 500 carriers. Existing Interxion customers can interconnect to any of these parties via a simple cross connect using the Ethernet platform within Interxion data centres.
History
European Telecom Exchange BV was incorporated on 6 April 1998, and (after being renamed Interxion Holding B.V. on 12 June 1998) was converted into Interxion Holding N.V. on 11 January 2000. Interxion completed its IPO on the New York Stock Exchange (NYSE) on 28 January 2011. Interxion was founded by Bart van den Dries. A first round of venture capital was provided by Residex together with some informal investors.
In February 2015, it was announced that UK-based data center operator Telecity would merge with Interxion, purchasing it in a $2.2 billion deal, thus creating a joint data-center operator, with a combined value of $4.5 billion. According to the two CEOs, a deal promised to deliver around $600 million in synergy savings. In May 2015, US data company Equinix announced it would be acquiring TelecityGroup for £2.35 billion ($3.6 billion), which would terminate Telecity's deal with Interxion.
On 15 October 2015, a court in Montreuil ordered Interxion to stop using the La Courneuve data centre because of noise pollution concerns raised by local inhabitants.
In October 2019, Digital Realty and Interxion announced the acquisition of Interxion by Digital Realty for $8.4 billion to “create a leading global provider of data centre, colocation and interconnection solutions”.
In November 2019, Interxion announced a new contract deployment from global Infrastructure-as-a-Service provider Voxility for its campus in Madrid, reaching more than 90 carriers in this hub alone.
In Q1 2020, Interxion acquired 70% of icolo.io, a Kenyan data centre company and in 2021 acquired controlling stakes of Medallion, a leading data centre operator in Nigeria.
Industry standards and accreditations
Interxion is certified with BS 25999, the British Standards Institution (BSI) standard for business continuity management. This has been integrated with Interxion's existing Information Security Management System (ISMS) certification ISO 27001:2005 standard for all of its European country operations. In addition, the company's European Customer Service Centre (ECSC) team has now been trained in ITIL v3, the latest ITIL standard.
BS 25999 is the world's first business continuity management (BCM) standard, developed to minimise the risks of disruptions, which can impact a business. The standard is designed to keep businesses operational during challenging times by protecting staff, preserving reputations and providing the ability to keep trading.
Interxion's development of a BCM system involved integrating with the already established Information Security Management System, ISO 27001, an internationally recognised certification designed to assess levels of risk across an entire company's data centre network.
Awards
In 2010 Interxion's Technology and Engineering Group was recognised for its “Outstanding Contribution to the Data Centre Sector” at the sixth annual Data Centre Europe awards ceremony held at Espaces Antipolis in Nice, France. Interxion was nominated in the Green I.T. Awards 2011 as a finalist for “IT Operator of the Year”.
Memberships
Interxion is a member of the following organisations:
RIPE
Euro-IX
Irish Internet Association
The Green Grid – Contributor Member
The Green Grid – Advisory Council
The Uptime Institute
Colocation services
Security
Interxion data centre buildings are typically designed with five layers of physical security: the perimeter fence, the security gate and entrance, mantraps into the data centre, access systems into the rooms and secure, locked cabinets. Clients can introduce additional levels, such as lockable cages or cubes (containment aisles), as required. No one enters or leaves the data centre without proof of identity, such as a national ID, passport or driving license, and all visitors are checked against customer-defined access lists. There are multiple physical security layers, including CCTV, mantraps and 24x7 controlled access. All building areas are secured by an alarm system, and an external security firm patrols the area, both inside and outside. Interxion utilises ISO 27001-certified information security management systems.
Power
Interxion provides managed power with resilience built in all the way to the cabinet and server if required, and a minimum N+1 configuration on power infrastructure, with facilities such as high-speed refuelling. The continuously improving design specification supports a modular build, including critical infrastructure. Extra equipment can be added to the infrastructure to increase capacity without causing outages.
Controlled environment
All equipment is maintained and continuously monitored in a climate-controlled environment. The average temperature inside the cold aisle is controlled between 18 and 25 °C, with a humidity level of 50% ± 10%. Multiple air conditioning units provide redundant capacity. Early warning systems installed as standard detect hot spots before they become a problem.
Energy efficiency
After joining the Green Grid association in 2008 and becoming a Contributor Member and part of the Advisory Council, Interxion has committed to continuously investigate efficiency opportunities such as free cooling as standard, ground water cooling, and waste heat re-use. Continuous monitoring and measuring provides information about the environments and enables identification of opportunities to improve efficiency. The flexible design provides a scalable infrastructure model.
Connectivity
Interxion connects to more than 400 individual carriers and ISPs as well as 18 European Internet exchanges. This is part of the carrier-neutral data centre concept.
Carriers
Interxion hosts global Tier 1, regional Tier 2 and national Tier 3 networks with direct access to the backbone infrastructure and PoPs for over 400 carriers across its European footprint. These carriers are present at Interxion's data centres both to interconnect with other carriers and to take advantage of the customer communities within the data centres.
Internet exchanges
Internet exchanges are the major points on the Internet where networks interconnect. They serve as an exchange point for the traffic of the Internet via bi-lateral, settlement-free peering agreements. Interxion houses 18 such Internet exchanges in Europe. Interxion is an active supporter of the public Internet exchanges and was an active participant in the creation of Euro-IX.
See also
Data center
Cloud computing
Colocation centre
Internet exchange point
Peering
References
External links
Interxion official website
Companies formerly listed on the New York Stock Exchange
Data centers
2011 initial public offerings
Real estate companies established in 1998
Telecommunications companies established in 1998
Real estate companies of the Netherlands
Telecommunications companies of the Netherlands
Companies based in Amsterdam
2020 mergers and acquisitions
Dutch companies established in 1998
|
11953861
|
https://en.wikipedia.org/wiki/Gopsall
|
Gopsall
|
Gopsall (or Gopsall Park) is an area of land in Hinckley and Bosworth, England. It is located between the villages of Appleby Magna, Shackerstone, Twycross and Snarestone. The population is included in the civil parish of Mancetter (Warwickshire).
The name 'Gopsall' means 'hill of the servants'.
Gopsall is the site of a former Georgian country house that was known as Gopsall Hall. The northern edge of the estate is dissected by the Ashby-de-la-Zouch Canal and a long distance trail known as the Ivanhoe Way.
The area is mostly agricultural and is dotted with privately rented farms. A permissive footpath allows limited access to the public between Little Twycross and Shackerstone. The A444 Ashby to Nuneaton road also leads to a canal wharf on the western edge of the estate.
Gopsall Hall
Gopsall Hall was erected for Charles Jennens around 1750 at a cost of £100,000 (£8,516,000 today). It was long believed to have been designed by John Westley and built by the Hiorns of Warwick, who later added service wings and Rococo interiors. However, later research by John Harris, curator of the RIBA drawings collection suggests that it was designed as well as built by William or David Hiorns.
The Hall was set in several hundred acres of land and included two lakes, a walled garden, a Chinese boathouse, a Gothic seat and various garden buildings. In 1818 a grand entrance (modelled on the Arch of Constantine) was added.
Queen Adelaide was a frequent visitor to the Hall during her long widowhood. She was popular with the locals and is remembered in many of the surrounding villages (for example, the former Queen Adelaide pub in Appleby Magna; Queen Street, Measham; and the Queen Adelaide Oak Tree in Bradgate Park).
In 1848 Gopsall Hall was described in Samuel Lewis's A Topographical Dictionary of England.
Said to be the finest country house in Leicestershire, the Hall was last used as an army headquarters during World War II and fell into such bad repair that it was demolished in 1951. Gopsall Park Farm was built over most of the original site and is not accessible without invitation.
The remains include parts of the walled garden, the electricity generating building, an underground reservoir, the tree-lined avenue, the gatehouse and the temple ruins associated with Handel.
During the 1920s and 1930s Gopsall hosted a motor racing circuit and part of the woodland is still named "The Race Course".
Notable guests who stayed at the estate included Queen Adelaide, King Edward VII, Queen Alexandra, and Winston Churchill.
Land around Gopsall was considered as a possible site for East Midlands Airport.
Between 1873 and the 1930s Gopsall was served by the Ashby to Nuneaton railway line. The station at Shackerstone is part of a preserved railway and visitor attraction (the Battlefield Line Railway).
There was a Great Western Railway steam locomotive by the name of "Gopsal Hall". Note the misspelling of the name.
Chronology of owners
pre 1750: Humphrey Jennens
circa 1750 - 1773: Charles Jennens, grandson of Humphrey Jennens
circa 1773 - 1797: Penn Assheton Curzon, son of Assheton Curzon, 1st Viscount Curzon, and also a cousin of Charles Jennens
1797 - 1870: Richard Curzon-Howe (Earl Howe), son of Penn Assheton Curzon and Sophia Howe (Baroness Howe)
1870 - 1919: the Curzon-Howe family
1919 – 1927: Sir Samuel James Waring (Lord Waring), of Waring & Gillow.
1927 – 1932: Crown Estate (Gopsall estate only)
1932 – present: Crown Estate (Gopsall estate and Hall) (NOTE: Hall demolished circa 1952)
1942 – 1945: the Royal Electrical and Mechanical Engineers (REME) made use of the Hall as an experimental radar base during the Second World War.
Gopsall Temple
The temple, a Grade II listed building, was the subject of a restoration project in 2002.
It is possible to visit the monument via the public footpath near the old Gopsall Hall gatehouse entrance in the village of Shackerstone; it is a good 15-minute walk to the site.
A statue of Religion by Louis Francois Roubiliac stood on the roof of the temple and was erected as a memorial to the classical scholar (and Jennens’s friend) Edward Holdsworth. The figure was donated by Lord Howe to the City of Leicester and is housed in the gardens of Belgrave Hall Museum.
Handel’s Messiah
During the second half of the eighteenth century the estate was owned by Charles Jennens (a librettist and friend of George Frideric Handel). It is reputed that in 1741 Handel composed part of Messiah, his famous oratorio, inside a garden temple at Gopsall. Some texts, however, challenge this theory and posit that there is no evidence to confirm Handel stayed on the estate in 1741, although he was a frequent visitor; the temple was built after Messiah had already been completed.
The organ that Handel specified for Charles Jennens in 1749 is now to be found in St James' Church, Great Packington.
Notes
References
Census output area 31UEGL0005 covers most of the area around Gopsall Park. For further details visit Neighbourhood Statistics website
The Musical Times and Singing Class Circular, Vol. 43, No. 717 (November 1, 1902), pp. 713–718 website link
Lewis, Samuel (Eds), A Topographical Dictionary of England., (7th Edition, 1848). British History website
Details of Crown Estate ownership can be found on The Crown Estate website
Details of old money conversion can be found at The National Archives - Currency converter: 1270–2017
Further reading
Oakley, Glynis. A History of Gopsall. (Bancroft printing, 1997)
Smith, Ruth 'The Achievements of Charles Jennens (1700–1773)', Music & Letters, Vol. 70, No. 2 (May, 1989), pp. 161–190
Lewis, Samuel (Eds), A Topographical Dictionary of England, 1848 (7th Edition), 'Goodneston - Gosforth', pp. 315–19.
External links
Gopsall Hall history and photos
Handel House Museum website
Letter from Handel to Charles Jennens regarding the organ for Gopsall
The Gopsall Organ
The tune from the Messiah by Handel "Gopsal" (Rejoice, the Lord is King!)
Pictures of Gopsall
Gopsall Fishing Club
History of Leicestershire
Ruins in Leicestershire
Geography of Leicestershire
Tourist attractions in Leicestershire
Grade II listed buildings in Leicestershire
Crown Estate
|
56767592
|
https://en.wikipedia.org/wiki/Election%20security
|
Election security
|
Election cybersecurity or election security refers to the protection of elections and voting infrastructure from cyberattack or cyber threat – including the tampering with or infiltration of voting machines and equipment, election office networks and practices, and voter registration databases.
Cyber threats or attacks to elections or voting infrastructure could be carried out by insiders within a voting jurisdiction, or by a variety of other actors ranging from nefarious nation-states, to organized cyber criminals to lone-wolf hackers. Motives may range from a desire to influence the election outcome, to discrediting democratic processes, to creating public distrust or even political upheaval.
United States
The United States is characterized by a highly decentralized election administration system. Elections are a constitutional responsibility of state and local election entities such as secretaries of state, election directors, county clerks and other local-level officials, encompassing more than 6,000 local subdivisions nationwide.
However, election security has been characterized as a national security concern, increasingly drawing the involvement of federal government entities such as the U.S. Department of Homeland Security. In early 2016, Secretary of Homeland Security Jeh Johnson designated elections as "critical infrastructure", making the subsector eligible to receive prioritized cybersecurity assistance and other federal protections from the Department of Homeland Security. The designation applies to storage facilities, polling places, and centralized vote tabulation locations used to support the election process, and to information and communications technology including voter registration databases, voting machines, and other systems used to manage the election process and to report and display results on behalf of state and local governments. In particular, hackers falsifying official instructions before an election could affect voter turnout, and hackers falsifying online results after an election could sow discord.
Post 2016 Election
Election security has become a major focus and area of debate in recent years, especially since the 2016 U.S. Presidential Election. In 2017, DHS confirmed that a U.S. foreign adversary, Russia, attempted to interfere in the 2016 U.S. Presidential Election via “a multi-faceted approach intended to undermine confidence in [the American] democratic process." This included conducting cyber espionage against political targets, launching propaganda or “information operations” (IO) campaigns on social media, and accessing elements of multiple U.S. state or local electoral boards.
On September 22, 2017, it was reported that the U.S. Department of Homeland Security (DHS) notified 21 states that they were targeted by Kremlin-backed hackers during the 2016 election. Those states included Alabama, Alaska, Colorado, Connecticut, Delaware, Florida, Illinois, Maryland, Minnesota, Ohio, Oklahoma, Oregon, North Dakota, Pennsylvania, Virginia, Washington, Arizona, California, Iowa, Texas, and Wisconsin. Hackers reportedly succeeded in breaching the voter registration system of only one state: Illinois.
In the aftermath of the 2016 hacking, a growing bench of national security and cyber experts has emerged noting that Russia is just one potential threat. Other actors, including North Korea, Iran, organized criminals, and individual hackers, possess the motives and technical capability to infiltrate or interfere with elections and democratic operations. Leaders and experts have warned that a future attack on elections or voting infrastructure by Russian-backed hackers or others with nefarious intent, such as that seen in 2016, is likely in 2018 and beyond.
One recommendation to prevent disinformation from fake election-related web sites and email spoofing is for local governments to use .gov domain names for web sites and email addresses. These are controlled by the federal government, which verifies that a legitimate government entity controls the domain. Many local governments use .com or other top-level domains; an attacker could easily and quickly set up an altered copy of the site at a similar-sounding .com address using a private registrar.
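The reasoning behind the .gov recommendation can be illustrated with a deliberately simplified sketch (the domain names below are hypothetical, and a toy string check like this is no substitute for the federal registration vetting described above):

from urllib.parse import urlparse

def is_gov_domain(url: str) -> bool:
    """Return True only if the URL's host sits under the .gov top-level domain."""
    host = (urlparse(url).hostname or "").lower()
    return host == "gov" or host.endswith(".gov")

# A legitimate county site passes; a look-alike commercial domain does not.
print(is_gov_domain("https://vote.examplecounty.gov/polling-places"))       # True
print(is_gov_domain("https://examplecounty-elections.com/polling-places"))  # False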
In a 2018 assessment of US state election security by the Center for American Progress, no state received an “A” based on its measurements of seven election security factors. Forty states received a grade of C or below. A separate 2017 report from the Center for American Progress outlines nine solutions which states can implement to secure their elections, including requiring paper ballots or records of every vote, replacing outdated voting equipment, conducting post-election audits, enacting cybersecurity standards for voting systems, pre-election testing of voting equipment, threat assessments, coordination of election security between state and federal agencies, and allocating federal funds for election security.
Europe
Russia's 2016 attempts to interfere in U.S. elections fit a pattern of similar incidents across Europe over at least a decade. Cyberattacks in Ukraine, Bulgaria, Estonia, Germany, France and Austria that investigators attributed to suspected Kremlin-backed hackers appeared aimed at influencing election results, sowing discord and undermining trust in public institutions, including government agencies, the media and elected officials. In late 2017, the United Kingdom, through its intelligence agencies, claimed that a Russian IO campaign played a damaging role in its Brexit referendum vote of June 2016.
Role of white hat hackers
The "white hat" hacker community has also been involved in the public debate. From July 27–30, 2017, DEFCON – the world's largest, longest running and best-known hacker conference – hosted a “Voting Machine Hacking Village” at its annual conference in Las Vegas, Nevada to highlight election security vulnerabilities. The event featured 25 different pieces of voting equipment used in federal, state and local U.S. elections and made them available to white-hat hackers and IT researchers for the purpose of education, experimentation, and to demonstrate the cyber vulnerabilities of such equipment. During the 3-day event, thousands of hackers, media and elected officials witnessed the hacking of every piece of equipment, with the first machine to be compromised in under 90 minutes. One voting machine was hacked remotely and was configured to play Rick Astley's song "Never Gonna Give You Up." Additional findings of the Voting Village were published in a report issued by DEFCON in October 2017.
The "Voting Village" was brought back for a second year at DEF CON, which was held in Las Vegas, August 9–12, 2018. The 2018 event dramatically expanded its inquiries to include more of the election environment, from voter registration records to election night reporting and many more of the humans and machines in the middle. DEF CON 2018 also featured a greater variety of voting machines, election officials, equipment, election system processes, and election night reporting. Voting Village participants consisted of hackers, IT and security professionals, journalists, lawyers, academics, and local, state and federal government leaders. A full report was issued on the 2018 Village Findings at a press conference in Washington, DC, held on September 27, 2018.
Legislation and policy
A variety of experts and interest groups have emerged to address U.S. voting infrastructure vulnerabilities and to support state and local elections officials in their security efforts. From these efforts have come a general set of policy ideas for election security, including:
Implement universal use of paper ballots, marked by hand and read by optical scanner, ensuring a voter-verified paper audit trail (VVPAT).
Phase out touch-screen voting machines – especially the most vulnerable direct-recording electronic (DRE) devices.
Update pollbooks and other electronic equipment used to check-in voters.
Verify voting results by requiring election officials to conduct risk-limiting audits, a statistical post-election audit conducted before certification of final results (a simplified sketch of the underlying statistics follows this list).
Secure voting infrastructure, especially voter registration databases, using cyber hygiene tools such as the CIS “20 Critical Security Controls” or NIST's Cybersecurity Framework.
Call upon outside experts to conduct cyber assessments – DHS, white-hat hackers, cybersecurity vendors and security researchers – where needed.
Provide resources and training to state and local election leaders for cyber maintenance and on-going monitoring.
Promote information-sharing on cyber threats and incidents in and across the entire voting industry.
Appropriate federal funding to states to implement infrastructure upgrades, audits, and cyber hygiene measures.
Establish clear channels for coordination between local, state, and federal agencies, including real-time sharing of threat and intelligence information.
Maintain DHS's designation of elections as a Critical Infrastructure Subsector.
Require DHS to institute a pre-election threat assessment plan to bolster its technical support capacity for states and localities requesting assistance.
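The risk-limiting audit item above can be made concrete with a minimal sketch of the BRAVO-style ballot-polling test for a two-candidate contest. The vote share, sample and risk limit below are hypothetical, and real audits add rules for invalid ballots, multiple contests and escalation to a full hand count:

import random

def bravo_audit(reported_winner_share, sampled_ballots, risk_limit=0.05):
    """Sequential ballot-polling test: return True once the random sample gives
    strong enough evidence, at the given risk limit, that the reported winner won."""
    assert reported_winner_share > 0.5
    t = 1.0  # likelihood ratio comparing "reported result is correct" against a tie
    for ballot in sampled_ballots:
        if ballot == "winner":
            t *= 2 * reported_winner_share
        else:
            t *= 2 * (1 - reported_winner_share)
        if t >= 1 / risk_limit:
            return True   # evidence is sufficient; the audit can stop
    return False          # inconclusive; sample more ballots or hand count

# Hypothetical sample drawn from the paper ballots of a contest reported as 65% to 35%.
random.seed(0)
sample = ["winner"] * 70 + ["loser"] * 30
random.shuffle(sample)
print(bravo_audit(0.65, sample))  # True when the sample supports the reported outcome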
Federal legislation has also been introduced to address these concerns. The first bipartisan Congressional legislation to protect the administration of Federal elections against cybersecurity threats – the Secure Elections Act (SB 2261) – was introduced on December 21, 2017 by Senator James Lankford (R-OK).
The 2018 Federal Budget (as signed by President Donald Trump) included $380m USD in state funding to improve election security. Each state received a standard payment of $3m USD, with the remaining $230m USD allocated to each state proportionally based on voting-age population. Security measures funded included improving cybersecurity (36.3% of funds), the purchase of new voting equipment (27.8%), improvement of voter registration systems (13.7%), post-election audits (5.6%), and improving communications efforts (2%).
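The allocation amounts to a flat base payment plus a population-weighted share of the remainder. A rough sketch of that arithmetic, using only three hypothetical states with made-up population figures (so the dollar amounts shown are illustrative, not the actual grants):

base_payment = 3_000_000          # flat payment per state
remaining_pool = 230_000_000      # split in proportion to voting-age population

voting_age_population = {         # hypothetical figures, not census data
    "State A": 10_000_000,
    "State B": 5_000_000,
    "State C": 1_000_000,
}

total_vap = sum(voting_age_population.values())
for state, vap in voting_age_population.items():
    grant = base_payment + remaining_pool * vap / total_vap
    print(f"{state}: ${grant:,.0f}")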
See also
Verified Voting Foundation
References
External links
Verified Voting - U.S. advocacy organization that catalogs voting equipment used in each state
Elections
Electoral systems
Electoral fraud
National security
Security technology
Information governance
Cyberwarfare
Cryptography
Cybercrime
|
61825840
|
https://en.wikipedia.org/wiki/Aaron%20Fleishhacker
|
Aaron Fleishhacker
|
Aaron Fleishhacker (February 4, 1820 – February 19, 1898) was a German-born American businessman who founded the paper box manufacturer A. Fleishhacker & Co. He was also active during the Gold Rush era in the formation of the Comstock silver mines.
Biography
Aaron Fleishhacker was born on February 4, 1820 to a Jewish family in the Kingdom of Bavaria.
In 1845, he immigrated to the United States, first settling in New Orleans, where he opened a retail store, and then living briefly in New York City before moving to San Francisco in 1853. He moved around the region selling his wares to miners, traveling to Sacramento, Grass Valley, Oregon, Virginia City, Nevada and Carson City, Nevada.
He then returned to San Francisco, where he started a paper wholesale business and then either founded or purchased the Golden Gate Paper Box Company, which was later renamed A. Fleishhacker & Co. Nicknamed the "Paper Bag House", the company became the largest box manufacturer in the West. His sons, Mortimer and Herbert, who had both started working in the business as teenagers, took over the company upon his death.
Death and legacy
He died on February 19, 1898 in San Francisco, California.
The Fleishhacker Pool (or Delia Fleishhacker Memorial Building) was a public saltwater swimming pool in San Francisco, dedicated to the family. Fleishhacker was a founding member of Congregation Emanu-El in San Francisco.
Personal life
In 1857, he married Delia Stern of Albany, New York; they had eight children, six of whom survived to adulthood: Carrie Fleishhacker Schwabacher (married to Ludwig Schwabacher), Emma Fleishhacker Rosenbaum (married to S. D. Rosenbaum), Mortimer Fleishhacker (1866–1953), Herbert Fleishhacker (1872–1957), Belle Fleishhacker Scheeline (married to S. C. Scheeline), and Blanche Fleishhacker Wolf (married to Frank Wolf).
References
External links
American company founders
Bavarian emigrants to the United States
1820 births
1898 deaths
Businesspeople from Bavaria
19th-century German Jews
Businesspeople from California
People from the Kingdom of Bavaria
19th-century American businesspeople
Burials at Home of Peace Cemetery (Colma, California)
|
25879316
|
https://en.wikipedia.org/wiki/2d%20Airborne%20Command%20and%20Control%20Squadron
|
2d Airborne Command and Control Squadron
|
The United States Air Force's 2d Airborne Command and Control Squadron was an airborne command and control unit located at Offutt Air Force Base, Nebraska. The squadron was an integral part of the United States' Post Attack Command and Control System, performing the Operation Looking Glass mission with the Boeing EC-135 aircraft.
History
World War II
From its activation in April 1942 until it was disbanded in 1944, the 2d Ferrying Squadron received aircraft at their factory of origin and ferried them to the units to which they were assigned.
Liaison duties in the 1950s
The 2d Liaison Squadron provided emergency air evacuation, search and rescue, courier and messenger service, routine reconnaissance and transportation of personnel. It regularly operated between Langley Air Force Base, Virginia and Fort John Custis with one Beechcraft C-45 Expeditor and several Stinson L-13s.
In July 1952, the squadron closed at Langley and reopened at Shaw Air Force Base, South Carolina, operating de Havilland Canada L-20 Beavers. It operated a regular courier service to Pope Air Force Base, North Carolina and Myrtle Beach Air Force Base, South Carolina. In 1953, the squadron also began operating Sikorsky H-19 helicopters. The unit was inactivated in June 1954.
Airborne Command Post
The desire for an Airborne Command Post (ABNCP) to provide survivable command and control of Strategic Air Command's nuclear forces came about in 1958. By 1960, modified KC-135A command post aircraft began pulling alert for SAC under the 34th Air Refueling Squadron (AREFS). On 3 February 1961, Continuous Airborne Operations (CAO) commenced, which meant that there was always at least one Looking Glass aircraft airborne at all times. Starting in 1966, the 38th Strategic Reconnaissance Squadron took over the Looking Glass mission. Eventually, on 1 April 1970, the 2nd ACCS took over the Looking Glass mission flying EC-135C ABNCP aircraft for the duration of the Cold War as a critical member in the Post Attack Command and Control System. There continued to be at least one Looking Glass aircraft airborne at all times providing a backup to SAC's underground command post. Additionally, the 2 ACCS maintained an additional EC-135C on ground alert at Offutt AFB, NE as the EASTAUXCP (East Auxiliary Command Post), providing backup to the airborne Looking Glass, radio relay capability, and a means for the Commander in Chief of SAC to escape an enemy nuclear attack.
The 2nd ACCS was also a major player in Airborne Launch Control System operations. The primary mission of the 2nd ACCS was to fly the SAC ABNCP Looking Glass aircraft in continuous airborne operations; however, because its orbit kept it over the central US, the airborne Looking Glass also provided ALCS coverage for the Minuteman Wing located at Whiteman AFB, MO. Not only did Whiteman AFB have Minuteman II ICBMs, but it also had ERCS-configured Minuteman missiles on alert. The EASTAUXCP was ALCS capable; however, it did not have a dedicated ALCS mission.
Lineage
2d Ferrying Squadron
Constituted as the 2d Air Corps Ferrying Squadron on 18 February 1942
Activated on 16 April 1942
Redesignated 2d Ferrying Squadron on 12 May 1943
Disbanded on 31 March 1944
Reconstituted and consolidated with 2d Liaison Squadron and 2d Airborne Command and Control Squadron as the 2d Airborne Command and Control Squadron on 19 September 1985
2d Liaison Squadron
Constituted as the 2d Liaison Flight on 27 September 1949
Activated on 25 October 1949
Redesignated 2d Liaison Squadron on 15 July 1952
Inactivated on 22 July 1952
Activated on 22 July 1952
Inactivated on 18 June 1954
Consolidated with 2d Ferrying Squadron and 2d Airborne Command and Control Squadron as the 2d Airborne Command and Control Squadron on 19 September 1985
2d Airborne Command and Control Squadron
Constituted as the 2d Airborne Command and Control Squadron on 12 March 1970
Activated on 1 April 1970
Consolidated with 2d Ferrying Squadron and 2d Liaison Squadron on 19 September 1985
Inactivated on 19 July 1994
Assignments
Midwest Sector, Air Corps Ferrying Command (later 5th Ferrying Group), 16 April 1942 – 31 March 1944
Ninth Air Force, 25 October 1949 (attached to 4th Fighter Wing (later 4th Fighter-Interceptor Wing))
Tactical Air Command, 1 August 1950 – 22 July 1952 (remained attached to 4th Fighter-Interceptor Wing to 1 September 1950, attached to 363d Tactical Reconnaissance Wing 1 September 1950, 47th Bombardment Wing 12 March 1951, 4430th Air Base Wing after 12 February 1952)
Ninth Air Force, 22 July 1952 – 18 June 1954 (attached to 363d Tactical Reconnaissance Wing)
55th Strategic Reconnaissance Wing, 1 April 1970
55th Operations Group, 1 September 1992 – 19 July 1994
Stations
Hensley Field, Texas (18 February 1942)
Love Field, Texas, 8 September 1942
Fairfax Airport, Kansas, 16 January 1943 – 31 March 1944
Langley Air Force Base, Virginia, 25 October 1949 – 22 July 1952
Shaw Air Force Base, South Carolina, 22 July 1952 – 18 June 1954
Offutt Air Force Base, Nebraska, 1 April 1970 – 19 July 1994
Awards and Campaigns
Air Force Outstanding Unit Award
1 July 1970 – 30 June 1971
1 July 1972 – 30 June 1974
1 July 1974 – 30 June 1976
1 July 1976 – 30 June 1978
1 July 1978 – 30 June 1980
Aircraft & Missiles Operated
Various aircraft (1942–1944)
Beechcraft C-45 Expeditor (1949–1952)
Stinson L-13 (1949–1952)
de Havilland Canada L-20 Beaver (1952–1954)
Sikorsky H-19 (1953–1954)
Boeing EC-135 (1970–1994)
See also
Airborne Launch Control System
Survivable Low Frequency Communications System
Ground Wave Emergency Network
Minimum Essential Emergency Communications Network
Emergency Rocket Communications System
The Cold War
Game theory
Continuity of government
References
Notes
Citations
Bibliography
External links
2d Airborne Command and Control Squadron Website
United States nuclear command and control
Continuity of government in the United States
Military units and formations established in 1970
Command and control squadrons of the United States Air Force
|
3260102
|
https://en.wikipedia.org/wiki/List%20of%20Jewish%20American%20computer%20scientists
|
List of Jewish American computer scientists
|
This is a list of notable Jewish American computer scientists. For other Jewish Americans, see Lists of Jewish Americans.
Hal Abelson; artificial intelligence
Leonard Adleman; RSA cryptography, DNA computing, Turing Award (2002)
Adi Shamir; RSA cryptography, Turing Award (2002)
Paul Baran, Polish-born engineer; co-invented packet switching
Lenore and Manuel Blum (Turing Award (1995)), Venezuelan-American computer scientist; computational complexity, parents of Avrim Blum (Co-training)
Dan Bricklin, creator of the original spreadsheet
Sergey Brin; co-founder of Google
Danny Cohen, Israeli-American Internet pioneer; first to run a visual flight simulator across the ARPANet
Robert Fano, Italian-American information theorist
Ed Feigenbaum; artificial intelligence, Turing Award (1994)
William F. Friedman, cryptologist
Herbert Gelernter, father of Unabomber victim David Gelernter; artificial intelligence
Richard D. Gitlin; co-inventor of the digital subscriber line (DSL)
Adele Goldberg; Smalltalk design team
Shafi Goldwasser, Israeli-American cryptographer; Turing Award (2013)
Philip Greenspun; web applications
Frank Heart; co-designed the first routing computer for the ARPANET, the forerunner of the internet
Martin Hellman; public key cryptography, co-inventor of the Diffie–Hellman key exchange protocol, Turing Award (2015)
Douglas Hofstadter, author of Gödel, Escher, Bach and other publications (half Jewish)
Bob Kahn; co-invented TCP and IP, Presidential Medal of Freedom, Turing Award (2004)
Richard M. Karp; computational complexity, Turing Award (1985)
John Kemeny, Hungarian-born co-developer of BASIC
Leonard Kleinrock; packet switching
John Klensin; i18n, SMTP, MIME
Solomon Kullback, cryptographer
Ray Kurzweil; OCR, speech recognition
Jaron Lanier, virtual reality pioneer
Leonid Levin, Soviet Ukraine-born computer scientist; computational complexity, Knuth Prize (2012)
Barbara Liskov (born Huberman), one of the first women to be granted a doctorate in computer science in the United States; Turing Award (2008)
Udi Manber, Israeli-American computer scientist; agrep, GLIMPSE, suffix array, search engines
John McCarthy; artificial intelligence, LISP programming language, Turing Award (1971)
Jack Minker; database logic
Marvin Minsky; artificial intelligence, neural nets, Turing Award (1969); co-founder of MIT's AI laboratory
John von Neumann (born Neumann János Lajos), Hungarian-American computer scientist, mathematician and economist
Seymour Papert, South African-born co-inventor — with Wally Feurzeig and Cynthia Solomon — of the Logo programming language
Judea Pearl, Israeli-American AI scientist; developer of Bayesian networks; father of Daniel Pearl, who was kidnapped and later beheaded by militants in Pakistan
Alan J. Perlis; compilers, Turing Award (1966)
Frank Rosenblatt; invented the perceptron, an early artificial neural network (1958)
Radia Perlman; inventor of the Spanning Tree Protocol
Azriel Rosenfeld; image analysis
Ben Shneiderman; human-computer interaction, information visualization
Herbert A. Simon, cognitive and computer scientist; Turing Award (1975)
Abraham Sinkov, cryptanalyst; NSA Hall of Honor (1999)
Gustave Solomon, mathematician and electrical engineer; one of the founders of the algebraic theory of error detection and correction
Ray Solomonoff; algorithmic information theory
Richard Stallman; designed the GNU operating system, founder of the Free Software Foundation (FSF)
Andrew S. Tanenbaum, American-Dutch computer scientist; creator of MINIX
Warren Teitelman; autocorrect, Undo/Redo, Interlisp
Larry Tesler; Cut, copy, and paste
Jeffrey Ullman; compilers, theory of computation, data-structures, databases, awarded Knuth Prize (2000)
Peter J. Weinberger; contributed to the design of the AWK programming language (he is the "W" in AWK), and the FORTRAN compiler FORTRAN 77
Joseph Weizenbaum, German-born computer scientist; developer of ELIZA; the Weizenbaum Award is named after him
Norbert Wiener; cybernetics
Terry Winograd; SHRDLU
Jacob Wolfowitz, Polish-born information theorist
Stephen Wolfram, British-American computer scientist; designer of the Wolfram Language
Lotfi Zadeh, Azerbaijan SSR-born computer scientist; inventor of Fuzzy logic (Jewish mother, Azerbaijani father)
References
Computer scientists
Jewish American
|
26572163
|
https://en.wikipedia.org/wiki/Strategic%20Air%20Command%20Digital%20Information%20Network
|
Strategic Air Command Digital Information Network
|
The Strategic Air Command Digital Network (SACDIN) was a United States military computer network that provided computerized record communications, replacing the Data Transmission Subsystem and part of the Data Display Subsystem of the SAC Automated Command and Control System.
SACDIN enabled a rapid flow of communications from headquarters SAC to its fielded forces, such as B-52 bases and ICBM Launch Control Centers.
Logistics
Major portions of SACDIN were developed, engineered and installed by the International Telephone and Telegraph (ITT) company, under contract to the Electronic Systems Center.
Chronology
1969
- Headquarters SAC submits a request to the Joint Chiefs of Staff to study an expanded communications system, known as the SAC Total Information Network (SATIN). It would interconnect Air Force Satellite Communications (AFSATCOM), Advanced Airborne Command Post (AABNCP), Airborne Command Post (ABNCP), high frequency/single sideband (HF/SSB) radio, SAC Automated Command and Control System (SACCS), Automatic Digital Information Network (AUTODIN), Survivable Low Frequency Communications System (SLFCS) and Command Data Buffer (CDB).
1977
1 November - SATIN IV was effectively terminated by Congress. The restructured program was renamed SAC Digital Network (SACDIN), and was formulated to meet SAC's minimum essential data communications requirements, but also had the capability to grow in a modular fashion.
1986
?? ??? - SACDIN replaces much of the SAC Automated Command and Control System (SACCS) and the SAC Automated Total Information Network (SATIN)
See also
Strategic Automated Command and Control System (SACCS) - precursor (and resurrected successor) to SACDIN
Post Attack Command and Control System (PACCS)
Airborne Launch Control System (ALCS)
Ground Wave Emergency Network (GWEN)
Minimum Essential Emergency Communications Network (MEECN)
Survivable Low Frequency Communications System (SLFCS)
Primary Alerting System (PAS)
References
United States nuclear command and control
Computer networks
|
2064051
|
https://en.wikipedia.org/wiki/FCEUX
|
FCEUX
|
FCEUX is an open-source Nintendo Entertainment System and Family Computer Disk System emulator. It is a merger of various forks of FCE Ultra.
Multiplayer support
The Win32 and SDL versions of FCEUX do not currently support TCP/IP network play functionality, as they do not support controllers.
Ports
An integrated GTK2 GUI was added to the SDL port of FCEUX in version 2.1.3. This GTK GUI deprecated the previous Python frontend, gfceux.
As of version 2.3.0, the SDL port migrated from GTK2 to a cross platform Qt5 GUI front end. The 2.4.0 version was the first release in which the SDL port is runnable on Windows, Linux, and macOS operating systems.
It has been ported to DOS, Linux (with either SVGAlib or X), macOS (its SDL port should also work on other Unix-like platforms such as FreeBSD, Solaris and IRIX), Windows, GP2X, PlayStation Portable, the Nintendo GameCube, Wii, PlayStation 2 and Pepper Pad.
History
FCE Ultra was forked from FCE (Family Computer Emulator). Its last full release was version 0.98.12 in August 2004, while a pre-release version 0.98.13-pre was released in September 2004 as source code only. After that, development appeared to stop and the homepage and forums for the emulator were taken down.
In the absence of official development, many forks of FCE Ultra were created. Most notable are FCEU-MM, which supports many new and unusual mappers, FCEU Rerecording, which incorporates many useful features for tool-assisted speedruns, and FCEUXD SP, which adds a number of debugging utilities.
In March 2006, development of FCE Ultra was reactivated, and shortly thereafter a project was initiated to combine all the forks into one new application called FCEUX, which attracted collaboration from many authors of the various forks of FCE Ultra.
FCEUX was first publicly released on August 2, 2008. This fork of the emulator has continued steady development since then, allowing the other forks to become deprecated, and now has features the original FCE Ultra does not, such as native movie recording support and the ability to extend, enhance, or alter gameplay with Lua scripts. Thus it has become far more advanced than its predecessors.
Contributors
FCE was written by Bero. FCE Ultra was written by Xodnizel. It was reactivated by Anthony Giorgio and Mark Doliner. The FCEUX project was initiated by Zeromus and Sebastian Porst. Additional authors joined the group prior to its first release, including mz, Andrés Delikat, nitsujrehtona, maximus, CaH4e3, qFox and Lukas Sabota (punkrockguy318). Other contributors have included Aaron O'Neal, Joe Nahmias, Paul Kuliniewicz, Quietust, Parasyte, bbitmaster, blip, nitsuja, Luke Gustafson, UncombedCoconut, Jay Lanagan, Acmlm, DWEdit, Soules, radsaq, qeed, Shinydoofy, ugetab and Ugly Joe.
Reception
Brandon Widdler of Digital Trends considers FCEUX the go-to emulator for the NES because of its multiple advanced features, including debugging, ROM hacking, and video recording.
See also
List of NES emulators
References
External links
Nintendo Entertainment System emulators
DOS emulation software
GP2X emulation software
Linux emulation software
Lua (programming language)-scriptable software
MacOS emulation software
Windows emulation software
Portable software
Free emulation software
|
12185032
|
https://en.wikipedia.org/wiki/RCC%20Institute%20of%20Information%20Technology
|
RCC Institute of Information Technology
|
RCC Institute of Information Technology (RCCIIT) is a government-sponsored engineering college located in Kolkata, West Bengal, India. The college was established in 1999. The institution receives aid from the Government of West Bengal and is academically affiliated with Maulana Abul Kalam Azad University of Technology.
All faculty members of the institution are recruited by the Government of West Bengal.
History
The RCC Institute of Information Technology (RCCIIT) was set up in 1999, headed by Jadavpur University.
Initially, the institution started with only three streams: Bachelor's degrees in Computer Science and Information Technology, and a Master's in Computer Applications. In 2006, a new stream was added at the bachelor's level - Electronics and Communication Engineering. From 2010 onwards, many new streams and master's degrees in different engineering disciplines were added.
Campus
The college campuses are located at Canal South Road, Beliaghata, Kolkata-700015. The old campus is housed at the old Government College of Engineering and Leather Technology campus. A new campus is about 20 metres from the old campus.
The campus is a 10-minute drive from the East Metropolitan Bypass (near Chingrighata). The nearest airport is Netaji Subhash Chandra Bose International Airport at Dum Dum, Kolkata. It is a 10-minute walk from the nearest bus stop, Beliaghata (CIT More), and is well connected with the two city stations, Howrah and Sealdah.
There is no hostel facility in this college.
Organisation and administration
The institute is a unit of the RCC Institute of Information Technology, an autonomous society of the Department of Higher Education, Government of West Bengal, with part funding from the state government. This provides the institution with a degree of autonomy, while being administered and funded by the Government. It has been selected for a TEQIP grant in the government/government-aided category by the World Bank.
The institute is currently headed by Prof. (Dr.) Ashok Mondal as the principal, and Prof. (Dr.) Ajoy Kumar Ray (Padmashri) has been given the responsibility of chairing the RCCIIT society by the Department of Higher Education, Government of West Bengal.
Academics
Academic programmes
RCC Institute of Information Technology is one of the few colleges in the state dedicated to training computer engineers and IT personnel. It offers courses in the following streams.
Under-Graduate Courses
Bachelors' in Technology in different domains:
Computer Science & Engineering
Information Technology (NBA Accredited)
Electronics & Communication Engineering
Electrical Engineering
Applied Electronics And Instrumentation Engineering
Bachelors' in Computer Applications
Post-Graduate Courses
Masters in Computer Applications
Masters in Technology (M.Tech.) in 3 disciplines:
Telecommunication Engineering
Information Technology
Computer Science Engineering
Admissions
Admission to the undergraduate B.Tech. courses is done through the WBJEE or the Joint Entrance Examination (JEE Main). Postgraduate admissions are done on the basis of ranks in national entrance examinations like GATE or the state-level examination PGET conducted by MAKAUT, WB.
Research collaborations
The institute has partnered with the following:
MoU with University of California, US
Proposed MoU with EU-IndiaGrid Project
MoU with University of Cagliari, Italy
Proposed MoU with Computer Sc. Dept., Bremen University, Germany
Inclusion in EMMA Project of European Community
Proposed MoU with IBM: RCCIIT proposed as IBM Center of Excellence
Renewal of MoU with Infosys - RCCIIT proposed as Nodal Center of Infosys Campus Connect Training
Proposed MoU with IETE, Kolkata Chapter
Two MODROBS Project with AICTE
Projects proposal under process with DST (Fast Track) & C-MET (Trichur)
Health Grid Project of BRNS, Dept. of Atomic Energy in collaboration with VECC Kolkata
Student life
Freshers Welcome
Freshers are welcomed to the institution by their seniors every year with a cultural event known as BIHAAN.
Technical Fest
The students of the college organize a technical fest every year, known as Techtrix, which sharpens their technical skills. Software competitions, paper presentations, programming in various languages, technical quizzes, robotics and debate competitions are held. Technical colleges from all over the state participate in the competitions. Seminars and exhibitions are also held as a part of this program.
Cultural Fest
Every year the college holds an annual fest, Regalia, organized by the students. Students from various colleges compete in fields such as drama, choreography, quiz, antakshari, solo and group singing, fashion shows, war of bands, and more. Students of the college perform skits, musical performances and plays. To make the fest more entertaining, skilled performers from different genres, including major music bands and singers, are invited. Apart from the annual fest, other cultural programs are organized throughout the year, giving students a chance to show their talent.
Sports
The students organize intra-college and inter-college competitions in a variety of games and sports, such as table tennis, cricket, football and carrom tournaments. The inter-college sports fest is known as "Game of Thrones" and the intra-college sports fest is known as "Krirathon".
College Clubs
Rotaract Club of RCCIIT
RCC Institute of Information Technology has its own institute-based Rotaract Club sponsored by the Rotary Club of Calcutta Renaissance. The club was established by the students of RCCIIT on the 13th of June, 2017 with Awsaf Ambar as the Charter President. The purpose of the club is to conduct non-profit social activities for career development and personal growth.
See also
References
External links
Official website
Information technology institutes
Colleges affiliated to West Bengal University of Technology
Engineering colleges in West Bengal
Educational institutions established in 1999
1999 establishments in West Bengal
|
5622375
|
https://en.wikipedia.org/wiki/Wild%20Energy.%20Lana
|
Wild Energy. Lana
|
Wild Energy. Lana (orig. Ukrainian Dyka enerhiya. Lana) is a 2006 science fiction novel written by Maryna and Serhiy Dyachenko and published in Ukraine. The authors were named the best European fantasy writers in 2005.
The storyline of Wild Energy. Lana was inspired by Ukrainian musician Ruslana, and contains many elements from her unique lifestyle. The main character Lana is supposed to become the prototype of Ruslana's new music project. The first music video for this project was due in summer 2006. An English version of the book was due to be released in Europe in 2008, and a German version was also planned.
The book was released with two different covers.
Plot
The story begins in a city where every evening every person hooks up to the power grid via an armband and is given energy, which helps them last until the next power up. The heroine, whose name isn't revealed, works as a pixel. Her job involves standing in a giant grid, while wearing a special suit. The suit's arms have flaps of several different colors, and via headphones she is given instructions on which colors to display. With several hundred pixels each displaying a color while standing on the grid, it forms a giant telescreen which displays advertisements for 20 minutes each day. The heroine's best friend is Eve, also a coworker. One day, after a power up, the heroine finds that due to a blunder on Eve's part during the 20 minute show, Eve has been denied her energy packet for the day. Since Eve didn't have a backup packet, she didn't get energy and is now rapidly losing the will to function and live. The heroine has no choice but to drag Eve to a black market dealer who illegally sells energy packets and can administer them through a portable power up machine. The heroine and Eve barely have enough money to cover the cost of the energy packet, but Eve's life is saved. Suddenly, a mysterious individual shows up and kills the black market dealers with advanced weaponry. After some deliberation, he spares the heroine and Eve.
After the incident, Eve starts acting more and more strangely. Eventually she disappears and after some time her body is found in a sewage drain. The heroine is saddened by this event and decides to seek answers. She learns of the rumor of The Factory, a mysterious place where the energy is supposedly produced and then sent. She also hears rumors about people who live in abandoned skyscrapers and don't rely on any energy packets to survive. She travels up one of these skyscrapers. After riskily getting around a broken flight of stairs by climbing on the outside of the skyscraper, she encounters a man who derisively refers to her as a "synthetic". However, he is impressed by the risk she took getting up the skyscraper and invites her to an event in one of the skyscrapers where synthetics go to try to get over their dependence on the energy packets. After asking one of her friends whether he'd be willing to share even a small portion of his energy packet with everyone, she realizes the addictive properties of the energy and decides to go.
The event turns out to be a rave supervised by the people who have learned to live on their own energy. The heroine sees many people and recognizes that the journey up to the event is a big risk since there isn't enough time to get back down in time for the power up. The heroine, due to her work as a pixel where she has to rely on the beat of the commands, feels the beat of the music and feels energy flowing through her. In a moment of euphoria she climbs onto the stage and her newly discovered self energy is powerful enough to make the old projector lights on the roof work at full power. After the event, she is adopted by the wild people. The wild people live in the tops of the abandoned skyscrapers, and live like birds. They rely on flying harnesses and the wind currents to get around. The heroine learns their way of life and lives there for a while. She finds out that the Controllers, the city's police, actively try to hunt down the wild people, therefore the wild people have to stay in the safety of the skyscrapers and only go down to scavenge supplies. One day, a kid crashes while trying out his flying harness and ends up with several broken bones. The heroine goes back down to the city to try and buy painkillers, but since she has been missing for a long time, her card has been flagged and she is captured by the Controllers.
She is taken to the precinct. At the precinct, she is taken to the office of an agent called a Catcher who tells her that she has been misinformed, and that the Controllers only want to send the wild people to The Factory, where they belong. She is put in a cable car along with several other wild people who were captured, and sent to the factory. Since the cable car will take about a day to get to the factory, most of the wild people go to sleep for the night, except for the heroine. At midnight, she sees the cable car conductor (a synthetic) use a portable power up machine, similar to the one used by the black market dealers, to power up. However, he doesn't power up once but at least 6 times. The heroine realizes that using the illegal energy gets you addicted and that after some time all the energy in the world wouldn't be able to satisfy the addicts, which leads them to die. She realizes that this is what caused Eve to die. She gets into an argument with the cable car conductor. In his intoxication with the energy, he reveals The Factory takes "generators" aka wild people who rely on their own energy, and rips the energy out in order to feed it to the city. This is where the energy provided by The Factory comes from. Refusing this fate, the heroine jumps out of the cable car, and survives by using the skills she learned in Overground to navigate the wind currents. She lands in a forest.
In the forest, she discovers more wild people, except they live a more indigenous lifestyle. They are led by the Queen Mother, who supposedly has mystical powers due to the fact that she has a lot of wild energy. The Queen Mother thinks that the heroine will lead her people to ruin and as such wishes to get rid of her. The Queen Mother decides to put the heroine in one on one combat against her apprentice. They are to fight to the death on a frozen over lake. Although the apprentice is a much better fighter, having grown up in the wild, the heroine manages to defeat her by predicting her movements and breaking the ice underneath her. As the apprentice is about to drown, the heroine decides to save her. While at first she thinks that her act would earn her sympathy, she quickly finds that by saving the apprentice in combat, she has greatly dishonored the apprentice and failed her own test. The apprentice is given the title Unnamed and the heroine is forced into another fight, this time with a towering female named The Huntress. The fight takes place on a tall cliff. The heroine realizes she is physically outmatched, and is soon left dangling on the side of the cliff with the Huntress towering above. She predicts the Huntress's strikes and crawls onto a tree sticking out of the side of the mountain. The Huntress follows her, but the tree falls under her weight, taking the Huntress with it. The heroine, using the skills she learned in Overground, glides to safety onto the cliff. The Queen Mother is angered and declares that she will fight the heroine personally.
The fight takes place on top of a fire pit, which somehow doesn't seem to hurt the combatants. In fact, the fire seems to empower the heroine and she defeats the Queen Mother. In her last breaths, the Queen Mother calls the heroine Lana. After the fight, the heroine is dazed and confused until she realizes that Lana is her new name, and she is the new Queen Mother.
Lana lives with the indigenous people for some time, even falling in love with a man named Yariy, and although she doesn't feel like she has any of the mystical powers the Queen Mother was supposed to have, she quickly realizes that her own wild energy is her greatest strength. During her time there, the village elder takes Lana to see the Factory from a distance. Lana sees that the Factory isn't a happy place filled with energy like so many back in the city believed, but a scary large concrete building surrounded by persistent fog. As she sees the cable car regularly coming to and from the Factory she realizes that she has to stop the killing of wild people for energy. The village elder tells her that he thinks that at first, the Factory used to be powered by weather, hence the lightning rods on its roof, but eventually that was abandoned for a different power source. He believes that at the center of the Factory is a giant grey Heart that powers it and the only way to stop the Factory is to get inside and stab the Heart. She gathers a group of willing village people and decides to assault the Factory. Yariy destroys her drum and attempts to convince her that it was an omen to not go through with the plan, but this only angers Lana who declares him a coward and ends their relationship.
The next day, Lana and her people attack the Factory, with the intent of attracting electrical currents to the lightning rods to cause lightning to destroy the security system. However, they encounter the Factory's greatest security system: the anti-rhythm, which causes deafening silence around the Factory, and slowly stops a person's living processes completely. The people are hindered in channeling the lightning and by the time Lana is able to destroy the security robots, all of the people who went with her died. The door to the Factory can't close, the body of the village elder blocking it, and Lana is able to enter. She travels through the foggy dark and damp corridors of the Factory in an attempt to find the heart but is knocked out by a mysterious man. Once she wakes up she realizes that the man is the same person who killed the black market dealers but spared her and Eve at the beginning of the story. He explains that the way the illegal energy packets are obtained is that the black market dealers find people addicted to the energy, like the cable car operator or Eve, and drain them of their energy before dumping their body. The energy drained is resold and is so addictive due to it being the energy of a synthetic. The man finds these people to be disgusting and thus kills anybody he can find dealing out the energy. When Lana questions his identity he simply declares that he is the Heart of the Factory.
At first, Lana is imprisoned inside a room with a foldable TV screen with only 3 channels, one of which only works for 20 minutes a day and displays the same telescreen that she used to work in. Soon, the Heart begins to trust her more and lets her out and shows her around. He shows her the view from the top of the factory and shows her the wires running from the factory in the direction of the city. He explains that via these wires, each power up, everybody hooks themselves up to the factory which gives them energy. Lana is on the verge of committing suicide, but the Heart convinces her that life is worth living. He explains that he himself came to the Factory attempting to change something after his wife died due to lack of energy before giving birth to their daughter, but realized that the Factory must keep running in order to keep everybody living, but with fewer and fewer wild people, the Factory won't be able to last forever and give everyone energy. He shows Lana several other things. He shows her a computer that he can use to control what is being said on the 20 minute telescreen presentation all the way in the city. He also shows her a cart in the basement of the Factory which runs on the energy of mating slugs. Lana deduces that he uses this cart to get back to the city. The Heart also explains how the Dessicator works. He tells her that the Dessicator is the "oven" of the factory. It's a membrane that will detect the natural vibration of a person, and cause an antivibration that fills the person, turning them to ashes and leaving the person's pure energy behind which is then absorbed by the factory. His description reminds Lana of the fire pit on which she fought with the Queen Mother, and the Heart explains that the fire pit actually transferred all of the Queen Mother's energy into Lana upon Lana's victory.
Once every couple of days, a cable car arrives with new wild people. The Heart and Lana always sit together as they feel the Factory shudder as the Dessicator does its work. The Heart explains that he journeys to the city primarily to see the happy synthetics in order to demonstrate to himself that the work he's doing matters. Eventually, Lana hatches an escape plan. She takes the foldable TV screen the Heart gave her, runs into his room and types a message into his computer "Factory is real, devours human energy. Don't trust the Controllers/Catchers. Search for SYNT - Lana". She sends this message to the telescreen where it is displayed to the entire city. She then runs up to the top of the Factory, with the Heart in pursuit, and then jumps off, using the foldable screen to glide over the antirhythm and out of the Factory. Once outside, she is able to follow a water flow back into the Factory and into the tunnel with the cart used to get back to the city. She activates the cart, however the Heart was expecting this and didn't tell her that she needed to flip a lever in the tunnel, which causes the cart to return to the Factory. The Heart recaptures her, however after more planning Lana decides to escape again. This time she jumps onto a cable car as it's leaving. Suddenly, the cable car conductor is called and asked to look for stowaways. He sees Lana hanging off the side of the car, but chooses not to report her. He drops her off before the cart enters the precinct, and Lana ends up back in the city.
Back in the city, Lana discovers that the Controllers are now patrolling the streets much more actively. She visits the abandoned skyscraper but finds that the wild people aren't there and that Overground has been ransacked. Back in the lower city, she visits a drum shop that was run by a contact who helped the wild people. This time, there is a woman at the counter. When Lana mentions her own name, a Controller immediately arrives and stuns her with a taser. However, instead of taking her to the precinct, he determines that she is indeed Lana and drops her off next to an entrance to the sewers and tells her to climb down. In the sewers, she finds her old friends from Overground who explain that the Controllers raided Overground and they were forced to live in the sewers, which is occupied by its own wild people, the moles. She tells them about her adventures during her absence and explains that SYNT is the name of the organization in charge of the Factory. Her friends explain that ever since she sent that message to the telescreen, the Controllers have been on high alert and searching for Lana, who has become a legend in the city. She goes to another one of the wild people raves where she discovers that she can use her own wild energy to help free synthetics from its dependence.
Lana hatches a plan to take the cable car back to the village where she was Queen Mother and assault the Factory once more. However, most of her friends are captured in a raid. She realizes that she needs to break into SYNT in the next day in order to get on the same cable car that is carrying her friends. Her friend takes her back to the abandoned skyscraper where he gives her a drum. She beats it and people in the other skyscrapers beat back. Eventually, they are able to amass a lot of people who start a riot within the city which draws more and more people guided by the legend of Lana. This guarantees that SYNT will be empty as all of the police are busy handling the riot. As the riot progresses, Lana breaks into SYNT and gets on the cable car with her friends. She wakes them up and once they reach the village, they climb down using rope. Lana enters the village, and as she is still the Queen Mother, the villagers obey her despite many of them believing that Lana caused the deaths of their brothers and fathers. In the crowd, Lana sees Yariy together with a pregnant Unnamed. Despite many people doubting her, all of the grown up children who remember her as a benevolent Queen Mother decide to follow her into battle. As they approach the Factory, the antirhythm engulfs them, however they are able to disable it with their energy and enter the Factory.
In the Factory, Lana approaches the Dessicator and sees the Heart standing next to it. He criticizes her for never settling down even when she had everything; however, Lana retorts that she refuses to live in a world where people have to fight for the will to live. At this point the Heart says that she is just like her mother, implying that he is her father. Lana steps onto the Dessicator. Her goal is to overload the Dessicator with wild energy so that it breaks and the Factory doesn't have to kill people anymore to provide energy. Despite her best efforts to maintain her own rhythm as the membrane fills her with antivibration, she starts weakening. At this point all of her companions jump onto the membrane and each of them starts doing their best to maintain their own rhythm by performing anything that fills them with wild energy such as dancing and playing on drums. The battle starts going much better as the Dessicator starts overloading, but Lana realizes that they won't be able to keep it up for long enough. Suddenly, the Heart jumps onto the center of the membrane and starts channeling his own wild energy. As the Dessicator absorbs him, it overloads, leaving the Heart a pile of ashes and the Dessicator broken. Afterwards Lana sees off her friends by putting them on the same cart that the Heart used to take to the city.
In the epilogue it is revealed that Lana herself provides enough energy to feed the entire city, and that the more she gives, the stronger she becomes. She is the new Heart of the Factory, but she states that the final battle is yet to come, possibly meaning it is up to the population of synthetics to learn how to live again.
See also
"Dyka Enerhija"
2006 science fiction novels
Ukrainian science fiction novels
|
46264011
|
https://en.wikipedia.org/wiki/Instart
|
Instart
|
Instart was an American multinational computer technology corporation, headquartered in Palo Alto, California. The company specialized primarily in developing and marketing a Digital Experience Cloud that improved web and mobile application performance, consumer experience and security. The company marketed and sold to large enterprises seeking higher digital revenue, increased on-line conversion, faster website performance, improved consumer experience and better online security.
The company also offered cloud services designed to increase digital advertising revenue for media and publishing companies. These services included Advertising Acceleration, which improved digital advertising viewability and vCPM by speeding the delivery of digital ads, and advertising recovery capabilities that encrypted application content together with digital advertisements so that ad blocking software could not filter or block the ads, thus restoring advertising impressions and revenue.
The company claimed that digital enterprises using its services would achieve 5% to 15% higher on-line revenue via higher conversion, higher average order value, restored digital advertising and marketing functionality, and increased SEO traffic.
The company was headquartered in Palo Alto, California with offices in New York, London, Bangalore and Sydney.
The company claimed that it processed 60 billion transactions per day, optimized 5 billion images per day, served 200 million consumers per day and recovered 5 billion digital advertisements per month.
On February 27, 2020, Akamai announced that it had acquired Instart's customers and select intellectual property.
Products
Instart developed and operated a Digital Experience Cloud that used artificial intelligence and machine learning to analyze the behavior of applications and of consumers accessing those applications, and then automatically optimized HTML, JavaScript, images, video and other application components with a primary goal of improving application performance, consumer experience and security. For media companies, Instart offered Advertising Acceleration services that improved the viewability of display advertising, resulting in higher vCPM and revenue, and advertising recovery services that restored advertising impressions and revenue otherwise lost to ad blocking software.
Instart products included:
Instart Digital Experience Cloud - A suite of cloud services for improving the performance, consumer experience and security of cloud, web and mobile applications. Worked with Instart's CDN, or other CDNs such as Akamai.
Cloud and Web Application Performance Optimization - A cloud service that uses machine learning to automatically and continuously improve the performance of HTML-based applications, resulting in higher conversion and revenue.
Mobile Application Performance Optimization - A cloud service that improved network and application performance for mobile applications, in congested or noisy cellular or wifi network environments.
Image Optimization - A cloud service that used computer vision and machine learning to automatically optimize image compression, with a goal of delivering the highest visual quality at the smallest possible size.
Tag Analytics and Control - A cloud service that provided dashboards and alerting about whether 3rd, 4th and 5th party tags were operating normally, and provided automated control capabilities that could defer or promote tags to ensure reliable website performance or disable misbehaving tags.
Advertising Acceleration and Viewability Optimization - A cloud service that sped the delivery of display advertising, increasing the time ads are viewable by consumers, and as such improving viewability, vCPM and revenue.
Digital Advertising and Marketing Recovery - A cloud service that combined and encrypted content and advertising, rendering ad blocking software ineffective, thus recovering impressions and revenue.
Web Application Firewall - A cloud-based web application firewall, implementing OWASP and additional application protections.
DDOS Attack Protection - The ability to block and absorb large DDOS attacks, protecting websites from downtime.
Bot Management and Security - A cloud-based service that operated both on the consumer device and in the cloud to automatically identify bots and implement policies such as allow, throttle, block or log. Particularly useful for controlling web scraping and stopping credit card and gift card fraud, credential stuffing and reservation fraud.
Instart Content Delivery Network (CDN) - A global content delivery network that was differentiated by being peering, last mile and mobile focused. Instart claimed it was the fastest CDN; it competed with the Akamai and Fastly CDNs.
Nanovisor - A JavaScript-based container that executes in consumer web browsers, providing visibility and control over both first and third party content and services.
The company was named to the visionary category of the Web Application Firewall magic quadrant by Gartner Group in September 2017.
Architecture
Instart's Digital Experience Cloud was most commonly deployed as a suite of cloud services in front of a company's application servers or digital commerce servers. The Digital Experience Cloud was independent of its Content Delivery Network (CDN), and as such worked in conjunction with Instart's own CDN or with other CDNs such as Akamai and Fastly.
Instart was built with the understanding that 3rd-party cloud services make up approximately 75% of modern web applications. To be able to control, protect and accelerate the entire application, including 1st and 3rd party cloud services and content, Instart automatically injected a small ephemeral JavaScript-based container into every consumer web browser, which connected and coordinated with Instart's global cloud. The container provided visibility and control over all processing within the browser, including the ability to control 1st and 3rd party HTTP commands, HTML code and JavaScript code to improve application performance, consumer experience and security. The cloud-connected container also allowed Instart to shift the processing of 3rd-party cloud services from the consumer browser to Instart's cloud servers.
History
The company was founded out of frustration with the slow speed of downloading and updating video games.
In the fall of 2014, the company started a $100 million contract buyout program for Akamai customers.
Instart was ranked by Business Insider as No. 1 among the 17 best startups to work for in America.
In June 2018, the company shortened its name to Instart from Instart Logic. Instart was a shortening of "Instant Start", which reflected the company's founding mission of making digital applications faster.
Notable clients included Neiman Marcus, Cafe Media, Edmunds, Bonnier, Ziff Davis, CBS, Tronc, TUI Group, Telstra and Kate Spade.
Acquisitions
In February 2016, Instart acquired Kwicr, a mobile application acceleration company targeting Apple and Android platforms, for an undisclosed amount to strengthen its mobile application delivery and analytics capabilities.
Financing
Instart had received US$140 million in 6 rounds of funding from 10 investors:
Series A: In February 2012 Instart Logic received US$9 million as first round of funding
Series B: In April 2013 Instart Logic received US$17 million
Series C: In May 2014 Instart Logic received US$26 million
Series C2: In May 2015, Instart Logic closed a US$13 million expansion funding led by new investors Four Rivers Group and Hermes Growth Partners, in addition to existing investors including Andreessen Horowitz, Kleiner Perkins Caufield & Byers and Tenaya Capital
Series D: In January 2016 Instart Logic received US$45 million funding from Geodesic Capital, Telstra Ventures, Stanford-StartX Fund, Harris Barton Asset Management and participation from existing investors
Series E: In November 2017 Instart Logic closed US$30 million of equity funding led by ST Telemedia with all other prior investors participating.
Other investors included Greylock Partners, Sutter Hill Ventures and Wing Venture Capital.
References
External links
2010 establishments in California
2020 disestablishments in California
Cloud computing providers
Companies based in Palo Alto, California
Content delivery networks
DDoS mitigation companies
Internet technology companies of the United States
Technology companies disestablished in 2020
Technology companies established in 2010
|
39725534
|
https://en.wikipedia.org/wiki/Thackeray%20%28disambiguation%29
|
Thackeray (disambiguation)
|
William Makepeace Thackeray (1811–1863) was a British novelist, author and illustrator.
Thackeray may also refer to:
Thackeray (surname), including a list of people and fictional characters with the name
Bal Thackeray (1926–2012), Indian politician
Thackeray (film), a 2019 Indian biopic about Bal Thackeray
Thackeray, Saskatchewan, a place in Canada
A ship launched as SS Empire Aldgate
See also
Thackray, a surname
Thackrey, a surname
Thackery (disambiguation)
Thackeray's Globules, a group of stars in the IC 2944 nebula
Thackeray Hall, an academic building of the University of Pittsburgh
|
511018
|
https://en.wikipedia.org/wiki/Media%20Player%20Classic
|
Media Player Classic
|
Media Player Classic (MPC), Media Player Classic - Home Cinema (MPC-HC), and Media Player Classic - Black Edition (MPC-BE) are a family of free and open-source, compact, lightweight, and customizable media players for 32-bit and 64-bit Microsoft Windows. The original MPC, along with the MPC-HC fork, mimic the simplistic look and feel of Windows Media Player 6.4, but provide most options and features available in modern media players. Variations of the original MPC and its forks are standard media players in the K-Lite Codec Pack and the Combined Community Codec Pack.
This project is now principally maintained by the community at the Doom9 forum. The active forks are Media Player Classic - Home Cinema (MPC-HC) by clsid2 (the same developer, known as clsid, who was responsible for MPC 6.4.9.1), and Media Player Classic - Black Edition (MPC-BE) by aleksoid.
Media Player Classic
The original Media Player Classic was created and maintained by a programmer named "Gabest" who also created PCSX2 graphics plugin GSDX. It was developed as a closed-source application, but later relicensed as free software under the terms of the GPL-2.0-or-later license. MPC is hosted under the guliverkli project at SourceForge.net. The project itself is something of an umbrella organization for works by Gabest.
Media Player Classic development stalled in May 2006. Gabest, the main developer of the original version, stated in March 2007 that development of Media Player Classic was not dead but that he was unable to work on it. MPC 6.4.9.0, released March 20, 2006, is the final official version.
Forks
Media Player Classic 6.4.9.1
In August 2007, an unofficially patched and updated build became available, from Doom9 member clsid, hosted under the guliverkli2 project at SourceForge.net. Known as Media Player Classic 6.4.9.1, it was meant for fixing bugs and updating outdated libraries; its branch's development has been inactive since 2011. MPC 6.4.9.1 Revision 107, released February 14, 2010, is the final release version. The community at the Doom9 forum has since further continued the project with MPC-HC.
Media Player Classic - Home Cinema
A fork, called Media Player Classic - Home Cinema (MPC-HC), adds new features, as well as fixes bugs and updates libraries. It also updated the license to GPL-3.0-or-later.
MPC-HC updates the original player and adds many useful functionalities, including the option to remove tearing, additional video decoders (in particular H.264, VC-1 and MPEG-2 with DirectX Video Acceleration support), Enhanced Video Renderer support, and multiple bug fixes. There is also a 64-bit version of Media Player Classic - Home Cinema for the various Windows x64 platforms. MPC-HC requires at least Windows XP Service Pack 3.
As of version 1.4.2499.0, MPC-HC implemented color management support, an uncommon feature that nearly all video players on Microsoft Windows lack. Windows 8 support was introduced in version 1.6.5. Beginning with version 1.6.6 the stable releases are signed.
Apart from the published stable releases, nightly builds are also publicly available. MPC-HC is also distributed in the PortableApps format. MPC-HC 1.7.8, released in 2015, was built with MediaInfoLib 0.7.71.
MPC-HC 1.7.1 added support for HEVC format.
MPC-HC 1.7.13 requires an SSE2-capable CPU and no longer runs on the Intel Pentium III or AMD Athlon XP.
MPC-HC 1.7.13 is the final version and the program has been officially discontinued as of July 16, 2017, due to a shortage of active developers with C/C++ experience. Its source code on GitHub was last updated on August 27, 2017, a month and a half after the official final version.
Updated fork
MPC-HC 1.7.xx: maintenance versions, bug fixes.
MPC-HC 1.8.xx: youtube-dl integration, maintenance, bug fixes.
MPC-HC 1.9.xx: black theme, modern toolbars, better subtitle handling, video preview on the seekbar, improved translations, various small features, maintenance, bug fixes.
Updated builds of MPC-HC, a fork from the same developer (known as clsid2 on GitHub/SourceForge) responsible for MPC 6.4.9.1, started appearing in January 2018. This fork contains updated internal codecs (LAV Filters), AV1 support, youtube-dl integration, a new dark theme, video preview on seekbar, support for MPC Video Renderer, A-B Repeat, subtitle performance improvements, updates to some other external components, other improvements, and many bug fixes; support for Windows XP was also dropped in these builds. Binary releases are available, as well as source code.
Media Player Classic - Black Edition
Media Player Classic - Black Edition (MPC-BE) is a fork of MPC and MPC-HC. It moved away from MPC's aim of mimicking the look and feel of Windows Media Player, using updated player controls, and provides additional features on top of MPC-HC, such as a video preview tooltip when hovering the mouse cursor over the seek bar, as known from video platforms such as YouTube and Dailymotion. Many of these features, including the video preview on the seekbar, were later added to MPC-HC as well.
MPC-BE, however, does not include LAV Filters by default, making it less efficient than MPC-HC for decoding. This is most noticeable with higher-resolution files, newer codecs, or on lower-end hardware.
Player development began in February 2011. Developers used a modification of MPC-HC made by a programmer nicknamed "bobdynlan".
The first version (1.0.1.0) was released on September 12, 2012.
Starting with version 1.5.0, MPC-BE no longer supports Windows XP.
MPC-BE version 1.5.1 and newer require SSE2 supporting CPU and no longer run on Intel Pentium III or AMD Athlon XP.
Nightly builds are also available.
Media formats and features
In this section Media Player Classic and MPC refer to both the original MPC and its forks, unless otherwise specified.
Media Player Classic is capable of VCD, SVCD, and DVD playback without installation of additional software or codecs. MPC has built-in codecs for MPEG-2 video with support for subtitles and codecs for LPCM, MP2, 3GP, AC3, and DTS audio, along with native playback of the Matroska container format. MPC also contains an improved MPEG splitter that supports playback of VCDs and SVCDs using its VCD/SVCD/XCD Reader. On October 30, 2005, Gabest added MP4 and MPEG-4 Timed Text support. Adobe Flash movies (SWF) can be played, with the ability to jump to specific frames.
Supported media formats within the latest builds of MPC-HC and MPC-BE have been considerably expanded compared to the original MPC, as these builds are bundled with iterations of libavcodec and libavformat. MPC-HC version 1.7.0 and newer utilize LAV filters, while MPC-BE uses FFmpeg directly. Consequently, they support all formats from those libraries.
MPC-HC is also one of the first media players to support Dolby Atmos audio natively.
MPC can use an INI file in its application folder, making it a portable application.
DirectShow
Media Player Classic is primarily based on the DirectShow architecture and therefore automatically uses installed DirectShow decoding filters. For instance, after the open source DirectShow decoding filter ffdshow has been installed, fast and high quality decoding and postprocessing of the MPEG-4 ASP, H.264, and Flash Video formats is available in the original MPC. MPC-HC and MPC-BE, however, can play videos in these formats directly without ffdshow.
MPC-HC and MPC-BE also provide DXVA support for compatible Intel, NVIDIA, and ATI/AMD video cards when using a compatible codec. This provides hardware-acceleration for playback.
In addition to DirectShow, MPC can also use the QuickTime, RealPlayer, and SHOUTcast codecs and filters (if installed on the computer) to play their native files, though some of these files may play without the external codecs or filters installed. Alternatively, QuickTime Alternative and Real Alternative can be used in place of their player installations for expanded support of their respective file formats.
TV tuners
MPC supports playback and recording of television if a supported TV tuner is installed.
See also
Comparison of video player software
DirectVobSub
WASAPI
VLC media player
References
External links
2003 software
Free media players
Free software programmed in C++
Free video software
Windows media players
Windows-only free software
|
705625
|
https://en.wikipedia.org/wiki/Ga%C3%ABl%20Duval
|
Gaël Duval
|
Gaël Duval (born 1973) is a French entrepreneur. In July 1998, he created Mandrake Linux (now Mandriva Linux), a Linux distribution originally based on Red Hat Linux and KDE. He was also a co-founder of MandrakeSoft (now merged in Mandriva) with Jacques Le Marois and Frédéric Bastok.
Gaël Duval was responsible for communication in the Mandriva management team until he was laid off by the company in March 2006, in a round of cost-cutting. Duval suspected part of the reason for his dismissal was disagreement with management over the company's future strategy, resulting in a lawsuit against the company.
Duval was Chairman and Chief Technology Officer at Ulteo. The company was bought by the AZNetwork group in 2015.
In 2016, he co-founded NFactory.io, an incubator-accelerator of "startups".
In November 2017, Gael Duval created the /e/ mobile operating system, which is a privacy-oriented fork of the Android-based LineageOS accompanied with a set of online services. In 2018, Duval founded the E Foundation, which maintains /e/, and ECORP SAS, a privately-held corporation, which operates their online sales and services sites.
Education
Duval is a graduate of the University of Caen Normandy in France, where he studied networks and documentary applications.
References
External links
Official website
Interview with Gael Duval, Sep 24, 2006
1973 births
Chief technology officers
French businesspeople
Linux people
Living people
People from Caen
University of Caen alumni
|
20626257
|
https://en.wikipedia.org/wiki/1978%20USC%20Trojans%20football%20team
|
1978 USC Trojans football team
|
The 1978 USC Trojans football team represented the University of Southern California in the 1978 NCAA Division I-A football season. Following the season, the Trojans were crowned national champions according to the Coaches Poll. While Alabama claimed the AP Poll title because it had defeated top-ranked Penn State in the Sugar Bowl, the Trojans felt they deserved the title since they had defeated Alabama and Notre Dame during the regular season, and then Michigan in the Rose Bowl. Both USC and Alabama ended their seasons with a single loss.
Schedule
The Trojans finished the regular season with an 11–1 record before going on to defeat the Michigan Wolverines 17–10 in the Rose Bowl.
Personnel
Game summaries
Notre Dame
Rose Bowl
1978 Trojans in the NFL
All 22 starters played in the NFL.
Marcus Allen
Chip Banks
Rich Dimler
Ronnie Lott
Anthony Muñoz
Charles White
Brad Budde
Garry Cobb
Larry Braziel
Paul McDonald
Riki Gray
Ray Butler
Steve Busick
Keith Van Horne
Dennis Smith
Awards and honors
Charles White: Heisman trophy, Maxwell Award, Walter Camp Award, UPI Player of the Year
References
USC
USC Trojans football seasons
College football national champions
Pac-12 Conference football champion seasons
Rose Bowl champion seasons
USC Trojans football
|
6912505
|
https://en.wikipedia.org/wiki/VNI
|
VNI
|
VNI Software Company is a developer of various education, entertainment, office, and utility software packages. It is known for developing an encoding (VNI encoding) and a popular input method (VNI Input) for Vietnamese on computers. VNI is often available on computer systems for typing Vietnamese, alongside the TELEX input method. VNI is most commonly used on computer keyboards, whilst TELEX is more common on phones and touchscreens.
History
The VNI company is family-owned and based in Westminster, California. It was founded in 1987 by Hồ Thành Việt to develop software that eases Vietnamese language use on computers. Among its products were the VNI Encoding and the VNI Input Method. The VNI Input Method has since grown to become one of the two most popular input methods for Vietnamese, alongside TELEX, which is more advantageous on phones and touchscreens, whilst VNI has found more use on keyboard-based computer systems.
VNI vs. Microsoft
In the 1990s, Microsoft recognized the potential of VNI's products and incorporated VNI Input Method into Windows 95 Vietnamese Edition and MSDN, in use worldwide.
Upon Microsoft's unauthorized use of these technologies, VNI took Microsoft to court over the matter. Microsoft settled the case out of court, withdrew the input method from its entire product line, and developed its own input method. Although virtually unknown, it has appeared in every Windows release since Windows 98.
Starting with Windows 10 version 1903, the VNI Input Method (as "Vietnamese Number Key-based"), along with the Telex input method, are now natively supported.
Unicode
Despite the growing popularity of Unicode in computing, the VNI Encoding (see below) is still in wide use by Vietnamese speakers both in Vietnam and abroad. All professional printing facilities in the Little Saigon neighborhood of Orange County, California continue to use the VNI Encoding when processing Vietnamese text. For this reason, print jobs submitted using the VNI Character Set are compatible with local printers.
Input methods
VNI invented, popularized, and commercialized an input method and an encoding, the VNI Character Set, to assist computer users entering Vietnamese on their computers. The user can type using only ASCII characters found on standard computer keyboard layouts. Because the Vietnamese alphabet uses a complex system of diacritics for tones and additional letters, a keyboard covering every possible character directly would need 133 alphanumeric keys plus a Shift key.
VNI Input Method
Originally, VNI's input method utilized function keys (F1, F2, ...) to enter the tone marks, which later turned out to be problematic, as the operating system used those keys for other purposes. VNI then turned to the numerical keys along the top of the keyboard (as opposed to the numpad) for entering tone marks. This arrangement survives today, but users also have the option of customizing the keys used for tone marks.
With VNI Tan Ky mode on, the user can type in diacritical marks anywhere within a word, and the marks will appear at their proper locations. For example, the word trường, which means 'school', can be typed in the following ways:
truong-7-2 → trường (most conventional way)
72truong → trường
t72ruong → trường
tr72uong → trường
tru7o72ng → trường
truo72ng → trường
truo7ng2 → trường
The first way is the conventional method, following handwriting and spelling convention, where the base letters are written first and the tonal marks are then added one by one.
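As an illustration only, the following Python sketch shows the basic idea behind converting VNI digit sequences into accented Vietnamese text; it handles only the simple case where each digit immediately follows the letter it modifies (as in tru7o72ng above), whereas the real VNI software also supports free mark placement and full tone-placement rules.
import unicodedata

TONES = {"1": "\u0301", "2": "\u0300", "3": "\u0309", "4": "\u0303", "5": "\u0323"}  # combining tone marks
SHAPES = {"6": "\u0302", "7": "\u031B", "8": "\u0306"}  # combining circumflex, horn, breve

def vni_to_text(typed: str) -> str:
    out = []
    for ch in typed:
        if (ch in TONES or ch in SHAPES) and out:
            out[-1] += TONES.get(ch) or SHAPES[ch]   # attach the mark to the previous letter
        elif ch == "9" and out and out[-1] in ("d", "D"):
            out[-1] = "đ" if out[-1] == "d" else "Đ"
        else:
            out.append(ch)
    # NFC normalization turns base letter + combining marks into precomposed characters
    return unicodedata.normalize("NFC", "".join(out))

print(vni_to_text("tru7o72ng"))  # trường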
VNI Tan Ky
With the release of VNI Tan Ky 4 in the 1990s, VNI freed users from having to remember where to correctly insert tone marks within a word, because, as long as the user enters all the required characters and tone marks, the software will group them correctly. This feature is especially useful for newcomers to the language.
VNI Auto Accent
VNI Auto Accent is the company's most recent software release (2006), with the purpose of alleviating repetitive strain injury (RSI) caused by prolonged use of computer keyboards. Auto Accent helps reduce the number of keystrokes needed to type each word by automatically adding diacritical marks for the user. The user must still enter every base letter in the word.
Character encodings
VNI Encoding (Windows/Unix)
The VNI Encoding uses up to two bytes to represent one Vietnamese vowel character, with the second byte supplying additional diacritical marks, therefore removing the need to replace control characters with Vietnamese characters, a problematic system found in TCVN1 (VSCII-1) and in VISCII, or using two different fonts such as is sometimes employed for TCVN3 (VSCII-3), one containing lowercase characters and the other uppercase characters. A similar approach is taken by Windows-1258 and VSCII-2.
This solution is more portable between different versions of Windows and between different platforms. However, using multiple characters in a file to represent one written character increases the file size. The increased file size can usually be mitigated by compressing the data into a file format such as ZIP.
The VNI encoding was used extensively in the south of Vietnam, and sometimes used overseas, while TCVN 5712 was dominant in the north.
Points 0x00 through 0x7F follow ASCII.
VNI Encoding for Macintosh
A version intended for use on Macintosh systems, with a different arrangement (corresponding to the different arrangement between Windows-1252 and Mac OS Roman).
VNI Encoding for DOS
The VNI encoding for use on DOS does not use separate characters for diacritics, instead replacing certain ASCII punctuation characters with tone-marked uppercase letters (compare ISO 646).
VIQR and VNI-Internet Mail
The use of Vietnamese Quoted-Readable (VIQR), a convention for writing Vietnamese using ASCII characters, began during the Vietnam War, when typewriters were the main tool for word processing. Because the U.S. military required a way to represent Vietnamese script accurately on official documents, VIQR was invented for that purpose. Due to its longstanding use, VIQR was a natural choice for computer word processing prior to the appearance of VNI, VPSKeys, VSCII, VISCII, and Unicode. It is still widely used for information exchange on computers, but is not desirable for design and layout, due to its cryptic appearance.
VIQR's main issue was the difficulty of reading VIQR text, especially for inexperienced computer users. VNI created and released a free font called VNI-Internet Mail, which utilized a variant of the VIQR notation and VNI's combining character technique to give VIQR text a more natural appearance by replacing certain ASCII punctuation with combining characters.
The following table compares VNI-Internet Mail to other codified VIQR or VIQR-like conventions.
See also
Telex (input method)
Vietnamese Quoted-Readable (VIQR)
VISCII
VPSKeys
Guide to inputting Vietnamese text at the Vietnamese Wikipedia
Vietnamese language and computers
References
External links
VNI Software Co.
VietUni Converter
VNI products
VNI Auto Accent
VNI XP & Dai Tu Dien
VNI Tan Viet
VNI Tan Ky 4
VNI Dai Tu Dien
Learn English by Phonic
Learn English by Pictures
VNI An Sao
VNI-Internet Mail
Character encoding
Companies based in Orange County, California
Companies based in Westminster, California
Educational software
Vietnamese character input
|
58460788
|
https://en.wikipedia.org/wiki/CleanBrowsing
|
CleanBrowsing
|
CleanBrowsing is a free public DNS resolver with content filtering, founded by Daniel B. Cid and Tony Perez. It supports DNS over TLS over port 853 and DNS over HTTPS over port 443 in addition to the standard DNS over port 53. CleanBrowsing filters can be used by parents to protect their children from adult and inappropriate content online.
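As a rough illustration of how a client might query a filtering resolver over DNS over HTTPS (RFC 8484), the following Python sketch builds a wire-format DNS query with the dnspython library and posts it to a DoH endpoint; the endpoint URL here is a placeholder, and CleanBrowsing's documentation should be consulted for the real filter endpoints.
import dns.message  # from the dnspython package
import requests

DOH_ENDPOINT = "https://doh.example-filter.net/dns-query"  # placeholder, not a real CleanBrowsing URL

def doh_lookup(name: str, rdtype: str = "A"):
    query = dns.message.make_query(name, rdtype)
    response = requests.post(
        DOH_ENDPOINT,
        data=query.to_wire(),
        headers={"Content-Type": "application/dns-message",
                 "Accept": "application/dns-message"},
        timeout=5,
    )
    response.raise_for_status()
    return dns.message.from_wire(response.content)

# A domain blocked by the filter typically resolves to a block page or returns no usable answer.
print(doh_lookup("example.com").answer)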
IP addresses and DNS filters
CleanBrowsing has three standard filters, each reachable through its own set of anycast IP addresses:
Family Filter
Blocks access to adult content, proxy and VPNs, phishing and malicious domains. It enforces Safe Search on Google, Bing and YouTube.
Adult Filter
Less restrictive than the Family filter and only blocks access to adult content and malicious/phishing domains.
Security Filter
Blocks access to malicious and phishing domains.
See also
DNS over TLS
Public recursive name servers
References
External links
Alternative Internet DNS services
|
3203683
|
https://en.wikipedia.org/wiki/Derrick%20Morgan
|
Derrick Morgan
|
Derrick Morgan (born 27 March 1940) is a Jamaican musical artist popular in the 1960s and 1970s. He worked with Desmond Dekker, Bob Marley, and Jimmy Cliff in the rhythm and blues and ska genres, and he also performed rocksteady and skinhead reggae.
Biography
In 1957, Morgan entered the Vere Johns Opportunity Hour, a talent show held at the Palace Theatre in Kingston. He won with rousing impressions of Little Richard and, shortly after that, was recruited to perform around the island with the popular Jamaican comedy team Bim and Bam. In 1959, Morgan entered the recording studio for the first time. Duke Reid, the sound system boss, was looking for talent to record for his Treasure Isle record label. Morgan cut two popular shuffle-boogie sides "Lover Boy", a.k.a. "S-Corner Rock", and "Oh My". Soon after, Morgan cut the bolero-tinged boogie "Fat Man", which also became a hit. He also found time to record for Coxsone Dodd.
In 1960 Morgan became the only artist ever to fill the places from one to seven on the Jamaican pop chart simultaneously. Among those hits were "Don't Call Me Daddy", "In My Heart", "Be Still", and "Meekly Wait and Murmur Not". But it was the following year that Morgan released the biggest hit of his career, the Leslie Kong production of "Don't You Know", later retitled "Housewives' Choice" by a local DJ. The song featured a bouncing ska riddim, along with a duet by Morgan and Millicent "Patsy" Todd.
"Housewives' Choice" began the rivalry between Morgan and Prince Buster, who accused Morgan of stealing his ideas. Buster quickly released "Blackhead Chiney Man", chiding Morgan with the sarcastic put-down, "I did not know your parents were from Hong Kong" – a swipe at Kong. Morgan returned with the classic "Blazing Fire", in which he warns Buster to "Live and let others live, and your days will be much longer. You said it. Now it's the Blazing Fire". Buster shot back with, "Watch It Blackhead", which Morgan countered with "No Raise No Praise" and "Still Insist". Followers of the two artists often clashed, and eventually the government had to step in with a staged photo shoot depicting the rivals as friends.
Morgan had a major success in 1962 with "Forward March", a song celebrating Jamaican independence from Great Britain.
In the mid-1960s, when ska evolved into rocksteady, Morgan continued to release top quality material, including the seminal rude boy songs, "Tougher Than Tough", "Do the Beng Beng", "Conquering Ruler", and a cover of Ben E. King's soul hit, "Seven Letters". Produced by Bunny Lee, "Seven Letters" is often cited as the first true reggae single. In 1969 Morgan recorded the skinhead anthem "Moon Hop" (on Crab Records). However, failing eyesight then forced him to give up regular stage appearances. Morgan still performs occasionally at ska revival shows across the world – often backed by the guitarist Lynn Taitt. He remained popular in Jamaica and the UK into the early 1970s, and has lived primarily in the UK or the US since the late 1960s.
With reggae music's significant popularity in the UK in the late 1960s and early 1970s, the British reggae label Trojan Records created a subsidiary, Song Bird, to issue Morgan's productions. The label issued 75 singles between 1969 and 1973.
Morgan has written several songs that have won the Festival Song Contest for other artists, including "Jamaica Whoa" (1998, Neville Martin), "Fi Wi Island A Boom" (2000, Stanley Beckford), and "Progress" (2002, Devon Black).
In July 2002 in Toronto, Ontario, Canada, a two-night "Legends of Ska" concert was held. Reuniting were The Skatalites, Lloyd Knibb, Rico Rodriguez, Lloyd Brevett, Lester Sterling, Johnny Moore and Lynn Taitt; along with Prince Buster, Alton Ellis, Owen Gray, Lord Creator, Justin Hinds, Derrick Harriott, Winston Samuels, Roy Wilson, Patsy Todd, Doreen Shaffer, Stranger Cole, Lord Tanamo and Derrick Morgan. In 2007 Morgan appeared on the bill at the annual Augustibuller music festival. His song "Tougher Than Tough" was featured in the video game Scarface: The World is Yours.
Morgan retired from the music industry because of illness in the 2010s, but returned in 2016 to collaborate with Kirk Diamond on a remake of Morgan's 1960s song "Conqueror".
Morgan headlined the Supernova International Ska Festival, in Fredericksburg, Virginia on May 27–28, 2017.
Discography
Albums
Seven Letters (1969)
Derrick Morgan in London (1969)
Moon Hop (1970)
Feel So Good (1975) (featuring Hortense Ellis)
People's Decision (1977)
Still in Love (1977) (also featuring Hortense Ellis)
Sunset at Moonlight City
Love City
The Legend of Derrick Morgan (1980)
I Am the Ruler (1992) – Trojan Records
Tougher Than Tough (Rudie in Court) (1992)
The Conquering Ruler (and the Sensational Yebo) (1994) – Pork Pie Records
Ska Man Classics (1995)
Ska Man Classics (1997)
21 Hits Salute (1997)
Meets the High Notes Live (2003)
Moon Hop: Best of the Early Years 1960–69 (2003)
Derrick:Top the Top (2003)
Derrick Meets the High Notes (2004)
Shake A Leg (2014)
Storybook Revisted (2019)
Singles
Morgan released nearly 200 singles in the UK, and more than 250 in Jamaica.
These include:
"The Hop" / "Tell It To Me", 7-inch: Island WI 006, UK, 1962
"Forward March" / "Please Don't Talk About Me", 7-inch: Island WI 011, UK, 1962
"See The Blind" / "Cherry Home", 7-inch: Island WI 013, UK, 1962
"I Am The Ruler" / "I Mean It" Pyramid 1968
"No Dice" / "I Mean It" Pyramid 1968
"Fat Man" / "South Parkway Rock" Trojan TR 626 UK, 1968
Singles on Crab Records
"Moon Hop" – 1969 – UK No. 49* "River to the Bank" / "Reggae Limbo", 7-inch (B side – Peter King)
"Seven Letters" / "Lonely Heartaches", 7-inch (B side – The Tartons)
"The First Taste of Love" / "Dance All Night", 7-inch
"Don't Play That Song" / "How Can I Forget You", 7-inch
"Mek It Tan Deh" / "Gimme Back", 7-inch
"Send Me Some Loving" / "Come What May", 7-inch
"Hard Time" / "Death Rides A Horse", 7-inch (B Side – Roy Richards)
"Man Pon Moon" / "What A Thing", 7-inch
"Moon Hop" / "Harris Wheel", 7-inch (B Side – Reggaeites)
"A Night at the Hop" / "Telephone", 7-inch
"Oh Baby" / The Rat", 7-inch (B Side – The Thunderbirds)
"Need To Belong" / "Let's Have Some Fun", 7-inch (with Jennifer Jones)
"I Wish I Was An Apple" / "The Story", 7-inch
"Take A Letter Maria" / "Just A Little Loving", 7-inch (with Owen Gray)
"Rocking Good Way" / "Wipe These Tears", 7-inch (with Jennifer Jones)
"My Dickie" / "Brixton Hop", 7-inch
"I Can't Stand It No Longer" / "Beyond The Wall", 7-inch
"Endlessly" / "Who's Making Love", 7-inch
"Hurt Me" / "Julia", 7-inch
"Searching So Long" / "Drums of Passion", 7-inch
See also
Reggae genres
List of ska musicians
List of reggae musicians
References
External links
Derrick Morgan biography at the Allmusic website
Reggaetrain.com biography
Jamaica Observer article on the rivalry with Prince Buster
Moon Hop: Best of the Early Years 1960–69 review
1940 births
Living people
Jamaican ska musicians
Jamaican reggae musicians
People from Clarendon Parish, Jamaica
Island Records artists
Trojan Records artists
Rocksteady musicians
|
18341963
|
https://en.wikipedia.org/wiki/Martin%20Bryant%20%28programmer%29
|
Martin Bryant (programmer)
|
Martin Bryant (born 1958) is a British computer programmer known as the author of White Knight and Colossus Chess, a 1980s commercial chess-playing program, and Colossus Draughts, gold medal winner at the 2nd Computer Olympiad in 1990.
Computer chess
Bryant started developing his first chess program – later named White Knight – in 1976. This program won the European Microcomputer Chess Championship in 1983, and was commercially released in two versions for the BBC Micro and Acorn Electron in the early 1980s. White Knight featured a then-novel display of the principal variation – called "Best line" – that would become commonplace in computer chess.
Bryant used White Knight as a basis for development of Colossus Chess (1983), a chess-playing program that was published for a large number of home computer platforms in the 1980s, and was later ported to Atari ST, Amiga and IBM PC as Colossus Chess X. Colossus Chess sold well and was well-received, being described by the Zzap!64 magazine in 1985 as "THE best chess implementation yet to hit the 64, and indeed possibly any home micro".
Bryant later released several versions of his Colossus chess engine conforming to the UCI standard. The latest version was released in 2021 as Colossus 2021a.
Computer draughts
After chess, Bryant's interests turned to computer draughts (checkers). His program, Colossus Draughts, won the West of England championship in June 1990, thus becoming the first draughts program to win a human tournament. In August of the same year it won the gold medal at the 2nd Computer Olympiad, beating Chinook, a strong Canadian program, into second place.
Chinook's developers, headed by Jonathan Schaeffer, recognised Colossus' opening book as its major strength; it contained 40,000 positions compared to Chinook's 4,500, and relied on Bryant's research that had found flaws in the established draughts literature. In 1993, an agreement was made to trade Colossus' opening book for Chinook's six-piece databases; Bryant also accepted the offer to join the Chinook development team. In August 1994, Chinook played a match against World Champion Marion Tinsley and world number two Don Lafferty (after Tinsley's withdrawal due to illness), earning the title of Man-Machine World Champion.
Bryant continued work on Colossus Draughts in the early 1990s, and in 1995 released an updated commercial version called Colossus '95, as well as the draughts database programs DraughtsBase and DraughtsBase 2.
Bryant lives in the Manchester area and retired in 2020.
More information can be found on his website.
References
External links
Colossus home page
Living people
British computer programmers
Place of birth missing (living people)
Video game programmers
Computer chess people
1958 births
|
22105071
|
https://en.wikipedia.org/wiki/OpenTG
|
OpenTG
|
OpenTG is an open-source implementation of a bulletin board system (BBS) software program written for Linux and/or Unix. Written from scratch in JRuby, the goal is to reproduce the look, feel, and functionality of similar legacy BBS systems such as Tag, Telegard, Maximus or Renegade, which were written for DOS and OS/2 during the pre-internet communication era. No original code from any BBS has been used nor referenced in order to focus on innovation and unique capabilities.
On August 17, 2008, the project was founded by Chris Tusa with work toward version 1 of the code. A year later, development on this branch, now known as OpenTG/1, ended. The back-end configuration tool had taken shape using the NCurses library, and database abstraction using ruby-DBI and PostgreSQL for the backend database engine. The developer deemed this version of the code a failure due to problems maintaining NCurses screen layouts and SQL queries through DBI.
With lessons learned and upfront design planning, iteration two, now known simply as OpenTG, is under heavy development. The code has moved from MRI Ruby to JRuby running on OpenJDK. The latest code introduces:
MVC Design (Model View Controller)
Database Abstraction through the use of the Sequel ORM
Input Validation from Apache Commons
Integrated H2 SQL Database
Themes based on the FreeMarker template engine
TgThemer template editor (Graphical Application using QT5)
Current goals
Use standards based formats.
Produce a usable configuration and management interface, similar in scope to traditional BBS WFC tools.
Allow system operators to have flexibility of how their system is configured and consumed.
Implement security at the core, not as an after-thought.
Provide modern access using secure protocols such as SSH.
Provide a web interface for both administration and user management.
Hook into existing daemons and libraries where possible to reduce code efforts and conform to standards.
More status and goal information is available on the project homepage.
Software Stack
The following is a listing of software components used in Telegard/2
OpenJDK 10
Jruby 1.9.2 (Ruby = 2.5.0)
H2 (DBMS)
FreeMarker Template Engine
Apache Commons
Tested development platforms
The following are tested operating system platforms used by the developers:
Netrunner >= 18
MacOS X >= High Sierra
Developer information
This project was founded and is currently led by Chris Tusa. It is hosted on Bitbucket and uses Mercurial for source code control. Snapshots are cloned to GitHub and downloadable tarballs are made available at various intervals for testing. Developers can find information about contributing on the project's website.
See also
Telegard
Renegade (BBS)
WWIV
Mystic BBS
External links
Bulletin board system software
Unix Internet software
Internet software for Linux
|
4674326
|
https://en.wikipedia.org/wiki/Network%20traffic%20measurement
|
Network traffic measurement
|
In computer networks, network traffic measurement is the process of measuring the amount and type of traffic on a particular network. This is especially important with regard to effective bandwidth management.
Techniques
Network performance could be measured using either active or passive techniques. Active techniques (e.g. Iperf) are more intrusive but are arguably more accurate. Passive techniques have less network overhead and hence can run in the background to be used to trigger network management actions.
Measurement studies
A range of studies have been performed from various points on the Internet. The AMS-IX (Amsterdam Internet Exchange) is one of the world's largest Internet exchanges. It produces a constant supply of simple Internet statistics. There are also numerous academic studies that have produced a range of measurement studies on frame size distributions, TCP/UDP ratios and TCP/IP options.
Tools
Various software tools are available to measure network traffic. Some tools measure traffic by sniffing and others use SNMP, WMI or other local agents to measure bandwidth use on individual machines and routers. However, the latter generally do not detect the type of traffic, nor do they work for machines which are not running the necessary agent software, such as rogue machines on the network, or machines for which no compatible agent is available. In the latter case, inline appliances are preferred. These would generally 'sit' between the LAN and the LAN's exit point, generally the WAN or Internet router, and all packets leaving and entering the network would go through them. In most cases the appliance would operate as a bridge on the network so that it is undetectable by users.
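As a rough illustration of the passive, sniffing-based approach described above, the following Python sketch uses the scapy library to count bytes per remote IP address seen on the wire; the local subnet prefix and the 30-second capture window are assumptions for the example, and packet capture normally requires administrative privileges.
from collections import Counter
from scapy.all import sniff, IP

LOCAL_PREFIX = "192.168.1."      # assumed local subnet for this example
bytes_by_remote = Counter()

def account(pkt):
    if IP in pkt:
        src, dst = pkt[IP].src, pkt[IP].dst
        remote = dst if src.startswith(LOCAL_PREFIX) else src
        bytes_by_remote[remote] += len(pkt)   # count the whole packet length

sniff(prn=account, store=False, timeout=30)   # capture passively for 30 seconds
for ip, nbytes in bytes_by_remote.most_common(10):
    print(ip, nbytes, "bytes")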
Some tools used for SNMP monitoring are Tivoli Netcool/Proviso by IBM, CA Performance Management by CA Technologies, and SolarWinds.
Functions and features
Measurement tools generally have these functions and features:
User interface (web, graphical, console)
Real-time traffic graphs
Network activity is often reported against pre-configured traffic matching rules to show:
Local IP address
Remote IP address
Port number or protocol
Logged in user name
Bandwidth quotas
Support for traffic shaping or rate limiting (overlapping with the network traffic control page)
Support website blocking and content filtering
Alarms to notify the administrator of excessive usage (by IP address or in total)
See also
IP Flow Information Export and NetFlow
Measuring network throughput
Network management
Network monitoring
Network scheduler
Network simulation
Packet sniffer
Performance management
References
Network management
Internet Protocol based network software
|
30872637
|
https://en.wikipedia.org/wiki/Eurydamas
|
Eurydamas
|
In Greek mythology, the name Eurydamas (Ancient Greek: Εὐρυδάμᾱς) may refer to:
Eurydamas, an Egyptian prince, one of the sons of King Aegyptus. His mother was a Phoenician woman, making him a full brother of Agaptolemus, Cercetes, Aegius, Argius, Archelaus and Menemachus. In some accounts, he could be a son of Aegyptus either by Eurryroe, daughter of the river-god Nilus, or by Isaie, daughter of King Agenor of Tyre. Eurydamas suffered the same fate as his other brothers, save Lynceus, when they were slain on their wedding night by their wives, who obeyed the command of their father King Danaus of Libya. He had married the Danaid Phartis, daughter of Danaus and an Ethiopian woman.
Eurydamas, one of the Argonauts, son either of Ctimenus, or of Irus and Demonassa, if indeed in the latter case he is not being confounded with Eurytion, who could also be his brother. Eurydamas was from Ctimene in Thessaly.
Eurydamas, son of Pelias (not the same as Pelias of Iolcus). He fought in the Trojan War and was one of those who hid in the Trojan Horse.
Eurydamas, an elder of Troy, interpreter of dreams. His sons Abas and Polyidus were killed by Diomedes.
Eurydamas, son-in-law of Antenor, who was killed by Diomedes.
Eurydamas, one of the suitors of Penelope, who gave her as a present a pair of earrings, and was eventually killed by Odysseus.
Notes
References
Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website.
Apollonius Rhodius, Argonautica translated by Robert Cooper Seaton (1853-1915), R. C. Loeb Classical Library Volume 001. London, William Heinemann Ltd, 1912. Online version at the Topos Text Project.
Apollonius Rhodius, Argonautica. George W. Mooney. London. Longmans, Green. 1912. Greek text available at the Perseus Digital Library.
Gaius Julius Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project.
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library.
Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . Greek text available at the Perseus Digital Library.
Homer, The Odyssey with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1919. . Online version at the Perseus Digital Library. Greek text available from the same website.
Quintus Smyrnaeus, The Fall of Troy translated by Way. A. S. Loeb Classical Library Volume 19. London: William Heinemann, 1913. Online version at theio.com
Quintus Smyrnaeus, The Fall of Troy. Arthur S. Way. London: William Heinemann; New York: G.P. Putnam's Sons. 1913. Greek text available at the Perseus Digital Library.
Tryphiodorus, Capture of Troy translated by Mair, A. W. Loeb Classical Library Volume 219. London: William Heinemann Ltd, 1928. Online version at theoi.com
Tryphiodorus, Capture of Troy with an English Translation by A.W. Mair. London, William Heinemann, Ltd.; New York: G.P. Putnam's Sons. 1928. Greek text available at the Perseus Digital Library.
Sons of Aegyptus
Argonauts
Achaeans (Homer)
Trojans
Characters in the Iliad
Characters in the Odyssey
Phoenician characters in Greek mythology
Thessalian characters in Greek mythology
Characters in Greek mythology
Characters in the Argonautica
|
1834951
|
https://en.wikipedia.org/wiki/Aminet
|
Aminet
|
Aminet is the world's largest archive of Amiga-related software and files. Aminet was originally hosted by several universities' FTP sites, and is now available on CD-ROM and on the web. According to Aminet, as of 1 April 2013 it held 80,592 packages online.
History
In January 1992, Swiss student Urban Müller took over a software archive that had been started by other members of a computer science students' club. Soon the archive became mirrored worldwide and in 1995 started being distributed on monthly CD-ROMs. By using a single master site (then wuarchive.wustl.edu) for creating ftp scripts for each slave site, Aminet reduced to a bare minimum the effort to set up new mirror sites. Aminet also illustrates practical use of metadata schema by software repositories. Reports of daily additions to this software archive were posted automatically to Usenet (de.comp.sys.amiga.archive), or could be requested as an email newsletter. Most of the programs on Aminet were public domain or shareware, but software companies made updates and demo versions of their programs available as well. Now Aminet is complemented by platform specific sites archiving software for AmigaOS 4, AROS or MorphOS only.
Aminet was an early attempt to create a centralized public archive maintained by the users themselves, predated only by the Info-Mac archive. Aminet aimed to keep the community united and free to download new open source software, new program demo releases, patches and localization of Amiga programs (AmigaOS and its modern programs are free to be localized by any single user into any country language), pictures, audio and video files and even hints or complete solutions to various Amiga games.
Recognized among interesting FTP sites in the early 1990s, Aminet was the largest public archive of software for any platform until around 1996. When the Internet explosion occurred from 1996 to 1999, Aminet rapidly fell behind the emerging massive PC archives.
During 2004 the main Aminet mirror suffered from a hard-disk crash and many people considered the whole project dead. Around the same time, Nicolas Mendoza was setting up a modernized interface, called Amirepo, that indexed Aminet and provided advanced search features and a modern way to navigate the tree. He also posted public suggestions on how to help improve Aminet by adding tags for architectures to help catalog the tree, which now consisted of MorphOS, AmigaOS 4 and Amithlon files in addition to the already existing M68K, PowerUP and WarpOS files. He also suggested measures to add proper dependencies to complement and replace the existing Requires field, with the future goal of letting Aminet function as a repository for package management systems similar to Debian's APT/DPKG and Red Hat's RPM. He tried to contact individuals such as Matthias Scheler and Urban Müller, who were known to maintain Aminet, but to no avail.
At the end of 2004 Christoph Gutjahr made contact with Urban Müller and set up a team to continue the Aminet effort. Urban Müller provided a new main mirror site and the backlog of packages were added in. The Amirepo interface of Nicolas Mendoza was integrated and Aminet was officially up and running again in February 2005. During 2005 the uploads started getting going again and in November 2005 most of the ambiguous files and .readme files were sorted out, finally sanitizing the repository. The team has gradually made many changes.
References
External links
Aminet
Aminet Wiki
The AROS Archives
MorphOS files, MorphOS Storage
OS4Depot
Amiga
File hosting
Software distribution platforms
|
51277063
|
https://en.wikipedia.org/wiki/Hacker%27s%20Olympic%20Rundown
|
Hacker's Olympic Rundown
|
Hacker's Olympic Rundown is a British comedy-sports miniseries which aired on CBBC during the 2016 Summer Olympics in Rio de Janeiro. Presented by Hacker T. Dog, each episode features a rundown of the previous day's events with a humorous twist on them. The Hacker Time characters Wilf Breadbin and Derek McGee appeared as Hacker's 'roving reporter' and studio crew respectively. The series began on 8 August 2016, and 11 episodes aired throughout August. All roles were portrayed by Phil Fletcher, and the series was written by Jaime Wilson and Graham Davies.
Format
Each episode features Hacker T. Dog (Phil Fletcher) as he covers a variety of sports in a comedic fashion, making fun of what was going on, in a style similar to Harry Hill's TV Burp. He also talked to his 'roving reporter' Wilfred Breadbin (also portrayed by Fletcher), who reported on something that had little to no relation to sports. Crew member Derek McGee (also portrayed by Fletcher) would help Hacker with presenting.
Episodes
Series 1 (2016)
References
External links
British children's comedy television series
2016 British television series debuts
2016 British television series endings
2010s British children's television series
British television shows featuring puppetry
English-language television shows
BBC children's television shows
BBC television comedy
|
20504129
|
https://en.wikipedia.org/wiki/UBIFS
|
UBIFS
|
UBIFS (UBI File System, more fully Unsorted Block Image File System) is a flash file system for unmanaged flash memory devices.
UBIFS works on top of an UBI (unsorted block image) layer, which is itself on top of a memory technology device (MTD) layer.
The file system was developed by Nokia engineers with the help of the University of Szeged, Hungary. Development began in earnest in 2007, with the first stable release included in Linux kernel 2.6.27 in October 2008.
Two major differences between UBIFS and JFFS2 are that UBIFS supports write caching, and UBIFS errs on the pessimistic side of free space calculation.
UBIFS tends to perform better than JFFS2 for large NAND flash memory devices. This is a consequence of the UBIFS design goals: faster mounting, quicker access to large files, and improved write speeds. UBIFS also preserves or improves upon JFFS2's on-the-fly compression, recoverability and power fail tolerance. UBIFS's on-the-fly data compression allows zlib (deflate algorithm), LZO or Zstandard.
UBIFS stores indexes in flash whereas JFFS2 stores filesystem indexes in memory. This directly impacts the scalability of JFFS2 as the tables must be rebuilt every time the volume is mounted. Also, the JFFS2 tables may consume enough system RAM that some images may be unusable.
UBI
UBI (Unsorted Block Images) is an erase block management layer for flash memory devices. UBI serves two purposes, tracking NAND flash memory bad blocks and providing wear leveling. Wear leveling spreads the erases and writes across the entire flash device. UBI presents logical erase blocks to higher layers and maps these to physical erase blocks. UBI was written specifically for UBIFS so that UBIFS does not have to deal with wear leveling and bad blocks. However, UBI may also be useful with squashfs and NAND flash; squashfs is not aware of NAND flash bad blocks.
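The following toy Python sketch illustrates only the general idea of mapping logical erase blocks to physical ones so that erases are spread across the device; it is a simplification for illustration and not UBI's actual algorithm or data structures.
class ToyWearLeveler:
    """Directs each rewrite of a logical block to the least-worn free physical block."""
    def __init__(self, physical_blocks: int):
        self.erase_counts = [0] * physical_blocks
        self.mapping = {}                      # logical block -> physical block
        self.free = set(range(physical_blocks))

    def write_logical(self, logical: int) -> int:
        target = min(self.free, key=lambda peb: self.erase_counts[peb])
        self.free.remove(target)
        old = self.mapping.get(logical)
        if old is not None:
            self.erase_counts[old] += 1        # the old copy is erased and returned to the pool
            self.free.add(old)
        self.mapping[logical] = target
        return target

wl = ToyWearLeveler(physical_blocks=8)
for _ in range(20):
    wl.write_logical(0)                        # repeatedly rewrite the same logical block
print(wl.erase_counts)                         # erases end up spread across all physical blocks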
UBI's documentation explains that it is not a complete flash translation layer (FTL). Although an FTL also handles bad blocks and wear leveling, the interface an FTL provides is a block device with small (typically 512-byte) sectors that can be written completely independently. In contrast, UBI's interface directly exposes erase blocks and programmable pages (which are different sizes, and much larger than typical block device sectors), and filesystems that use UBI must be aware of the sizes and of the restrictions on how blocks must be erased before being written.
UBI is in some ways analogous to a Logical Volume Manager. In typical usage, rather than partitioning flash into fixed regions, a single UBI device spans the entire flash (except for perhaps a few pages in fixed locations reserved for the bootloader), and multiple volumes are created within the UBI device. This allows wear leveling to be spread across the whole flash, even if some volumes are written more frequently than others. UBI volumes can be static (which contain a whole file or image written once and protected by CRC-32 by UBI) or dynamic (which contain a read-write filesystem that is responsible for its own data integrity). The only filesystem that directly supports UBI is UBIFS, but using gluebi it is possible to emulate an MTD device, which can then be used to run other flash filesystems like JFFS2 and YAFFS, and using ubiblk it is possible to emulate block devices, which can run common filesystems like Ext4.
Fastmap
UBI was augmented in Linux 3.7 with fastmap support. Fastmap maintains an on-disk version of information previously created in memory by scanning the entire flash device. The code falls back to the previous mechanism of a full scan on failures and older UBI systems will simply ignore the fastmap information.
See also
List of file systems
Comparison of file systems
References
External links
Home page
University of Szeged: UBIFS
UBIFS experiments on the XO Laptop (One Laptop per Child)
UBIFS file system
Embedded Linux
Flash file systems supported by the Linux kernel
Free special-purpose file systems
|
867980
|
https://en.wikipedia.org/wiki/LynxOS
|
LynxOS
|
The LynxOS RTOS is a Unix-like real-time operating system from Lynx Software Technologies (formerly "LynuxWorks"). Sometimes known as the Lynx Operating System, LynxOS features full POSIX conformance and, more recently, Linux compatibility. LynxOS is mostly used in real-time embedded systems, in applications for avionics, aerospace, the military, industrial process control and telecommunications. As such, it is compatible with military-grade security software such as wolfSSL, a popular TLS/SSL library.
History
The first versions of LynxOS were written in 1986 in Dallas, Texas, by Mitchell Bunnell and targeted at a custom-built Motorola 68010-based computer. The first platform LynxOS ever ran on was an Atari 1040ST with cross development done on an Integrated Solutions UNIX machine. In 1988-1989, LynxOS was ported to the Intel 80386 architecture. Around 1989, ABI compatibility with System V.3 was added. Compatibility with other operating systems, including Linux, followed.
Full Memory Management Unit support has been included in the kernel since 1989, for the reliability of protected memory and the performance advantages of virtual addresses. The PowerPC architecture is also supported, and in February 2015 Lynx announced planned support for the ARM Cortex A-family.
LynxOS components are designed for absolute determinism (hard real-time performance), which means that they respond within a known period of time. Predictable response times are ensured even in the presence of heavy I/O due to the kernel's unique threading model, which allows interrupt routines to be extremely short and fast.
Lynx holds an expired patent on the technology that LynxOS uses to maintain hard real-time performance. The patent was granted to Lynx on November 21, 1995: "Operating System Architecture using Multiple Priority Light Weight kernel Task-based Interrupt Handling."
In 2003, Lynx introduced a specialized version of LynxOS called LynxOS-178, especially for use in avionics applications that require certification to industry standards such as DO-178B.
A Usenet newsgroup is devoted to discussion of LynxOS.
References
External links
Lynx real-time operating systems (RTOS)
Patent #5,469,571: LynuxWorks' "Operating System Architecture using Multiple Priority Light Weight kernel Task-based Interrupt Handling."
Whitepaper: Using the Microprocessor MMU for Software Protection in Real-Time Systems
Applications using LynxOS and other Lynx operating systems
ARM operating systems
Embedded operating systems
Real-time operating systems
Unix variants
|
46696066
|
https://en.wikipedia.org/wiki/Windows%2010%20editions
|
Windows 10 editions
|
Windows 10 has several editions, all with varying feature sets, use cases, or intended devices. Certain editions are distributed only on devices directly from an original equipment manufacturer (OEM), while editions such as Enterprise and Education are only available through volume licensing channels. Microsoft also makes editions of Windows 10 available to device manufacturers for use on specific classes of devices, including IoT devices and previously marketed Windows 10 Mobile for smartphones.
Baseline editions
Baseline editions are the only editions available as standalone purchases in retail outlets. PCs often come pre-installed with one of these editions.
Windows 10 Home is designed for use in PCs, tablets and 2-in-1 PCs. It includes all features directed at consumers.
Windows 10 Pro includes all features of Windows 10 Home, with additional capabilities that are oriented towards professionals and business environments, such as Active Directory, Remote Desktop, BitLocker, Hyper-V, and Windows Defender Device Guard.
Windows 10 Pro for Workstations is designed for high-end hardware for intensive computing tasks and supports Intel Xeon, AMD Opteron and the latest AMD Epyc processors; up to four CPUs; up to 6 TB RAM; the ReFS file system; Non-Volatile Dual In-line Memory Module (NVDIMM); and remote direct memory access (RDMA).
Organizational editions
These editions add features to facilitate centralized control of many installations of the OS within an organization. The main avenue of acquiring them is a volume licensing contract with Microsoft.
Windows 10 Education is distributed through Academic Volume Licensing. It was based on Windows 10 Enterprise and initially reported to have the same feature set. As of version 1709, however, this edition has fewer features.
Windows 10 Pro Education was introduced in July 2016 for hardware partners on new devices purchased with the discounted K–12 academic license. It was based on the Pro edition of Windows 10 and contains mostly the same features as Windows 10 Pro with different options disabled by default, and adds options for setup and deployment in an education environment. It also features a "Set Up School PCs" app that allows provisioning of settings using a USB flash drive, and does not include Cortana, Microsoft Store suggestions, Windows Sandbox, or Windows Spotlight.
Windows 10 Enterprise provides all the features of Windows 10 Pro for Workstations, with additional features to assist with IT-based organizations. Windows 10 Enterprise is configurable on two servicing channels, Semi-Annual Channel and Windows Insider Program.
Enterprise LTSC (Long-Term Servicing Channel) (formerly LTSB (Long-Term Servicing Branch)) is a long-term support variant of Windows 10 Enterprise released every 2 to 3 years. Each release is supported with security updates for either 5 or 10 years after its release, and intentionally receives no feature updates. Some features, including the Microsoft Store and bundled apps, are not included in this edition. This edition was first released as Windows 10 Enterprise LTSB (Long-Term Servicing Branch). There are currently 4 releases of LTSC: one in 2015 (version 1507), one in 2016 (version 1607), one in 2018 (labeled as 2019, version 1809), and one in 2021 (version 21H2).
S mode
Since 2018, OEMs can ship Windows 10 Home and Pro in a feature-limited variation named S mode which evolved from the discontinued Windows 10 S. Organizations employing Windows 10 Enterprise or Windows 10 Education can make use of S mode too. S mode is a feature-limited edition of Windows 10 designed primarily for low-end devices in the education market. It has a faster initial setup and login process, and allows devices to be provisioned using a USB drive with the "Set Up School PCs" app.
With the exception of the Microsoft Teams desktop client, which was made available for S mode in April 2019, the installation of software (both Universal Windows Platform and Windows API apps) is only possible through the Microsoft Store, and command line programs or shells (even from the Microsoft Store) are not allowed. System settings are locked to allow only Microsoft Edge as the default web browser with Bing as its search engine. The operating system may be switched out of S mode using the Microsoft Store for free. However, once S mode is turned off, it cannot be re-enabled. All Windows 10 devices in S mode include a free one-year subscription to Minecraft: Education Edition. Critics have compared the edition to Windows RT, and have considered it to be a competitor to Chrome OS.
Device-specific editions
These editions are licensed to OEMs only, and are primarily obtained via the purchase of hardware that includes them:
Windows Holographic is a specific edition used by Microsoft's HoloLens mixed reality smartglasses.
Windows 10 IoT is a rebranded variant of Microsoft's earlier embedded operating systems, Windows Embedded, designed specifically for use in small-footprint, low-cost devices and IoT scenarios. IoT Core was discontinued on October 11, 2020.
Windows 10 Team is a specific edition used by Microsoft's Surface Hub interactive whiteboard.
Discontinued editions
The following editions of Windows 10 were discontinued (as of Windows 10 version 21H2). For both Mobile and Mobile Enterprise, Microsoft confirmed it was exiting the consumer mobile devices market, so no successor product is available.
Windows 10 Mobile was designed for smartphones and small tablets. It included all basic consumer features, including Continuum capability. It was the de facto successor of Windows Phone 8.1 and Windows RT.
Windows 10 Mobile Enterprise provided all of the features in Windows 10 Mobile, with additional features to assist IT-based organizations, in a manner similar to Windows 10 Enterprise, but optimized for mobile devices.
A binary equivalent of Windows 10 Mobile Enterprise licensed for IoT applications. Also known as IoT Mobile Enterprise.
Windows 10 S was an edition released in 2017 which ultimately evolved into the so-called S mode of Windows 10. In March 2018, Microsoft announced that it would be phasing out Windows 10 S, citing confusion among manufacturers and end-users.
Windows 10X was originally announced for use on dual-screen devices such as the Surface Neo and other potential form factors. It featured a modified user interface designed around context-specific interactions or "postures" on such devices, including a redesigned Start menu with no tiles, and used container technology to run Win32 software. The platform was described as a more direct competitor to Chrome OS. On May 4, 2020, Microsoft announced that Windows 10X would first be used on single-screen devices, and that it would "continue to look for the right moment, in conjunction with our OEM partners, to bring dual-screen devices to market". On May 18, 2021, Head of Windows Servicing and Delivery John Cable stated that Windows 10X had been cancelled, and that its foundational technologies would be leveraged for future Microsoft products. Several design changes in 10X, notably the centered taskbar and redesigned Start menu, were later introduced in Windows 11.
Regional variations
As with previous versions of Windows since Windows XP, all Windows 10 editions for PC hardware have "N" (Europe) and "KN" (South Korea) variations that exclude certain bundled multimedia functionality, including media players and related components, in order to comply with antitrust rulings. The "Media Feature Pack" can be installed to restore these features. The variation cannot be changed without a clean install, and keys for one variation will not work on other variations.
As with Windows 8.1, a reduced-price "Windows 10 with Bing" SKU is available to OEMs; it is subsidized by having Microsoft's Bing search engine set as default, which cannot be changed to a different search engine by OEMs. It is intended primarily for low-cost devices, and is otherwise identical to Windows 10 Home.
In some emerging markets, OEMs preinstall a variation of Windows 10 Home called Single Language without the ability to switch the display language. It is otherwise identical to Windows 10 Home. To change display language, the user will need to upgrade to Windows 10 Pro.
In May 2017, it was reported that Microsoft, as part of its partnership with China Electronics Technology Group, created a specially-modified variant of Windows 10 Enterprise ("G") designed for use within branches of the Chinese government. This variant is pre-configured to "remove features that are not needed by Chinese government employees", and allow the use of its internal encryption algorithms.
Comparison chart
[1] The 4 GB limit for 32-bit editions is a limitation of the 32-bit addressing, not of Windows 10 itself. In practice, less than 4 GB of memory is addressable as the 4 GB space also includes the memory mapped peripherals.
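As a back-of-the-envelope illustration of that limit, the sketch below computes the 32-bit address space and subtracts a hypothetical amount reserved for memory-mapped peripherals; the 512 MiB figure is an invented example, since the actual reservation varies by machine.

# Illustrative sketch of the 32-bit address-space limit.
# The amount reserved for memory-mapped peripherals is hypothetical;
# real systems vary (graphics apertures, PCI ranges, firmware, etc.).

ADDRESS_BITS = 32
addressable_bytes = 2 ** ADDRESS_BITS      # 4,294,967,296 bytes = 4 GiB
mmio_reserved = 512 * 1024 ** 2            # e.g. 512 MiB claimed by devices

usable_ram = addressable_bytes - mmio_reserved
print(f"Addressable space: {addressable_bytes / 1024**3:.1f} GiB")
print(f"Usable RAM (example): {usable_ram / 1024**3:.2f} GiB")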
Microsoft's OEM licensing formula takes display size, RAM capacity, and storage capacity into account. In mid-2015, devices with 4 GB of RAM were expected to be $20 more expensive than devices with 2 GB.
Upgrade path
Free upgrade
At the time of launch, Microsoft deemed Windows 7 (with Service Pack 1) and Windows 8.1 users eligible to upgrade to Windows 10 free of charge, so long as the upgrade took place within one year of Windows 10's initial release date. Windows RT and the respective Enterprise editions of Windows 7, 8, and 8.1 were excluded from this offer.
Commercial upgrade
The following table summarizes possible upgrade paths that can be taken, provided that proper licenses are purchased. There is no upgrade path that allows Windows RT 8.1 devices to install Windows 10.
Release branches
New releases of Windows 10, called feature updates, are released twice a year as a free update for existing Windows 10 users. Each feature update contains new features and other changes to the operating system. The pace at which a system receives feature updates depends on the release branch from which the system downloads its updates. Windows 10 Pro, Enterprise, and Education could optionally use a branch, defunct since version 1903, that received updates at a slower pace. These modes could be managed through system settings, Windows Server Update Services (WSUS), Windows Update for Business, Group Policy, or through mobile device management systems such as Microsoft Intune.
Windows Insider
Windows Insider is a beta testing program that allows access to pre-release builds of Windows 10; it is designed to allow power users, developers, and vendors to test and provide feedback on future feature updates to Windows 10 as they are developed. Windows Insider itself consists of four "rings", "Fast" (which receives new builds as they are released), "Slow" (which receives new builds on a delay after it is deployed to Fast ring users), "Release Preview" (which receives early access to updates for the Current Branch), and formerly "Skip Ahead" (which receives super-early builds for the next feature update while a current release is being finished).
The Current Branch (CB) distributed all feature updates as they graduated from the Windows Insider branch. Microsoft only supported the latest build. A feature update could be deferred for up to 365 days, while a quality update could be deferred for up to 30 days before it was listed as available in Windows Update. As of version 1703, additional settings were provided to pause checking for updates for up to 35 days, but they were not available on Windows 10 Home. The branch was renamed to Semi-Annual Channel (Targeted) beginning with version 1709 before being merged into the Semi-Annual Channel with version 1903.
The Current Branch for Business (CBB) distributed feature updates on a four-month delay from their original release to the Current Branch, until version 1809. This allowed customers and vendors to evaluate and perform additional testing on new builds before broader deployments. Devices could be switched back to the Current Branch at any time. Before version 1903, the branch was not available on Windows 10 Home. This branch was renamed to Semi-Annual Channel (SAC) from version 1703 to version 21H1. It was later renamed to General Availability Channel (GAC) with version 21H2.
Long-Term Servicing Channel (LTSC)
This servicing option is exclusively available for Windows 10 Enterprise, IoT Core, and IoT Enterprise LTSC editions. Distribution snapshots of these editions are updated every 2 to 3 years. LTSC builds adhere to Microsoft's traditional support policy, which was in effect before Windows 10: they are not updated with new features, and are supported with critical updates for either 5 or 10 years after their release. Microsoft officially discourages the use of LTSC outside of "special-purpose devices" that perform a fixed function and thus do not require new user experience features. As a result, it excludes the Windows Store, most Cortana functionality, and most bundled apps (including Microsoft Edge). According to a Microsoft announcement, this servicing option was renamed from Long-Term Servicing Branch (LTSB) in 2016 to Long-Term Servicing Channel (LTSC) in 2018, to match the name changes mentioned above.
See also
Windows Server 2016, based on Windows 10 version 1607
Windows Server 2019, based on Windows 10 version 1809
Xbox system software, an operating system now based on the Windows 10 core, designed to run on consoles
Windows 10 version history
Notes
References
Windows 10
|
57564969
|
https://en.wikipedia.org/wiki/SimCorp
|
SimCorp
|
SimCorp is a Danish company providing software and services to financial institutions such as asset managers, banks, national banks, pension funds, sovereign wealth funds and insurance companies worldwide.
Its core product is SimCorp Dimension, a front-to-back integrated investment management system used by more than 190 clients around the world.
SimCorp was founded in 1971, has its headquarters in Copenhagen, and has offices in over 20 locations throughout Europe, Asia, and North America. The company has over 1,800 employees.
Over the years, SimCorp has acquired several companies:
Equipos, now SimCorp Coric, was acquired in 2014 and offers client reporting software
APL Italiana, now SimCorp Italiana, was acquired in 2017 and provides integrated investment management software for the Italian insurance market
AIM Software, now SimCorp Gain, was acquired in 2019 and focuses on enterprise data management software for the buy-side
The company is listed on the Nasdaq Copenhagen exchange.
References
Companies listed on the Nasdaq
Software companies based in Copenhagen
1971 establishments in Denmark
|
3598428
|
https://en.wikipedia.org/wiki/Interactive%20Entertainment%20Merchants%20Association
|
Interactive Entertainment Merchants Association
|
The Interactive Entertainment Merchants Association (IEMA) was a United States-based non-profit organization dedicated to serving the business interests of leading retailers that sell interactive entertainment software (including video games, multimedia entertainment, peripherals and other software). Member companies of the IEMA collectively accounted for approximately seventy-five percent of the $10 billion annual interactive entertainment business in the United States. The association was established in 1997 by Hal Halpin, its president and founder, and counted among its member companies the largest retailers of games, including Walmart, Target Corporation, Blockbuster Entertainment and Circuit City. The IEMA also sponsored the "Executive Summit", an annual trade show promoting the business of the video game industry.
In April 2006, the Interactive Entertainment Merchants Association merged with the Video Software Dealers Association to form the Entertainment Merchants Association (EMA).
Ratings
The IEMA was largely responsible for the acceptance and industry-wide adoption of the self-regulatory ESRB ratings system, having endorsed it and subsequently required software publishers to rate all games in order to have their products sold on store shelves. The IEMA also worked with parallel trade groups in the business, including the Entertainment Software Association (ESA), in defeating laws that would prohibit the sale of Mature-rated games to minors. The group instead voluntarily committed to carding policies and procedures, requiring government-issued photo identification for all M-rated games, in much the same way that movie theatres voluntarily ask for ID for admittance to R-rated movies.
Box standardization
The IEMA played a major role in improving, from a retailer's perspective, the way most PC games are packaged. In 2000, many retailers were becoming disenchanted with the salability of PC games compared with their more profitable console game counterparts. Oversized software boxes were blamed for a lack of productivity per square foot (the profitability of a particular item sold at retail based upon its footprint). The IEMA worked with leading game publishers to create the now-standard IEMA-sized box, essentially a double-thick DVD-sized plastic or cardboard box, which effectively increased profitability per square foot by over 33% and appeased merchants and developers alike.
PC identification mark
In creating the new box size the IEMA found itself in the unlikely position of platform guardian (where each console platform had a first-party publisher to oversee standardization matters, PC games by their very nature did not). As such, the industry pressured the organization to develop a platform identification mark which would unify the display and focus the customer's brand perception. Again the IEMA worked with publishers to create a new standard "PC" icon, and would provide its use on a royalty-free basis to the industry.
Charitable work
As part of the contract that computer game publishers must sign in order to use the PC icon(s), they agreed to provide three finished copies of each game they create that uses one or more of the trademarks, as is standard practice. The IEMA chose to use the influx of new software to re-launch the video game industry's first charitable organization, Games for Good. GfG essentially acts as a repository for the games business: it receives donated items and redistributes them to partner charities such as children's hospitals, shelters, schools and other appropriate non-profit institutions.
Representation
In addition to its roles above, the IEMA handled lobbying and legislative efforts with regard to First Amendment matters concerning its members. Association executives routinely testified before state and federal agencies and committees on behalf of the game industry, provided representation to the media, and spoke on behalf of channel-oriented perspectives at trade shows and conferences. The IEMA worked on both inter- and intra-industry matters for its members, including RFID, source-tagging, organized retail crime loss prevention, and digital distribution.
Controversy
The IEMA had been accused of not following through on promises made with regard to stemming the sale of Mature-rated games to minors. The Federal Trade Commission (FTC), as well as special interest groups including the National Institute on Media and the Family (NIMF), performed sting operations on IEMA member company stores and found that retailers continued to sell M-rated games to children. Critics claimed that the organization made public statements meant to appease law-makers and the press but did not follow through with penalties on members that ran afoul of their commitment. They would furthermore have liked to see the IEMA more directly involved with its membership in educating store-level staff about the ESRB ratings system. Others praised the association for its swift response to the 2005 Grand Theft Auto: San Andreas Hot Coffee minigame controversy, in which the rating for the game was changed from "M" to "AO" (Adults Only). Upon receiving notification of the change, all IEMA retailers removed the product from store shelves within 24 hours.
See also
Censorship
Censorship in the United States
Entertainment Software Association
MPAA film rating system, the U.S. film industry equivalent to ESRB
Video game controversy
References
External links
When Two Tribes Go to War: A History of Video Game Controversy
Organizations established in 1997
Video game organizations
|
4640734
|
https://en.wikipedia.org/wiki/Infonomicon
|
Infonomicon
|
Infonomicon Computer Club is a hacking organization comprising over a dozen people from across the United States. Formerly called Infonomicon Media, the group's members mostly produce hacker-related webcasts, including both podcasts and TV webcasts. The group changed its name to the Infonomicon Computer Club in early 2006 and has since started work on projects outside the media realm. Members of the ICC have given many presentations at hacker conventions across the country, most recently at PhreakNIC 19 in November 2015 (for a complete listing, see the Presentations section).
The club has received little mainstream attention as of 2007, but its episodes and member sites have often been mentioned in other hacker-related broadcasts such as Binary Revolution Radio and Hak5.
Members
(in order of appearance)
droops - Infonomicon Father
Obfuscated - Infonomicon Father
Phizone - Host of Infonomicon TV
Irongeek - Author of Hacking Illustrated
Lowtek_Mystik (Morgellon) - Host of Ninja Night School Radio
Enigma - Twatech.org Admin
p0trillo23 - Twatech.org Host
kn1ghtl0rd - Development Department
Ponyboy - Host of Bellcore Radio
livinded - Twatech.org Host
kotrin - just a dude
mralk3 - just another dude
downer - Head of Graphic Department
Dosman - Host of The Packet Sniffers
Zach - Host of The Packet Sniffers
Electroman - Host of ElectroStuff
TelcoBob - Six Legged Groove Machine Member
FrolicsWScissors - Six Legged Groove Machine Member
Fiebig - Host of M0diphyd.org
FTP - Infonomicon cohort
ragechin - Shellbox Admin
History
Infonomicon started in June 2004, with its first episode broadcast under the title of droops Radio (later changed to Infonomicon Radio). Episodes focused on a variety of subjects, from hardware hacking, lockpicking, and FM transmitters, to wilderness survival tips. The series lasted until December 25, 2005 and produced 53 episodes.
The show was originally hosted by droops, later bringing in Obfuscated as a second host in episode 3 (July 20, 2004). The show released sporadic episodes for about a year, and then in late 2004 added a video webcast, called Infonomicon TV, which included content from both droops and other outside contributors.
In August 2005, one of their contributors, Phizone, joined the group, and helped launch a new website called PodcastIncubator.com, which provided free hosting to other individuals who were interested in starting their own podcast. Also in August, droops became the administrator of HackerMedia.org, a site which many hacker audio and video webcasts had been using to announce new episodes of shows.
Towards the end of October 2005, droops began setting up free computers for children by installing Linux on old Windows 98 boxes. He used Edubuntu, which was designed as an educational tool for children to learn how to use Linux while learning other things with the educational software included in the distribution.
The club has also released two Linux live CDs, Podcast Fertilizer and Slast. Podcast Fertilizer, or PodFert, was designed for people who want to make their own podcast; it contains free audio-editing software and other tools embedded directly into a Slax live CD. Slast is also based on the Slax live CD and has the Asterisk PBX software pre-compiled on it. Users simply enter some basic information to turn any computer into a temporary Asterisk console.
In April 2006, several members of Infonomicon gave presentations at Notacon 3 in Cleveland, Ohio, and also participated in a Hacker Media panel, moderated by Jason Scott.
In October 2006, several members of ICC again gave presentations at PhreakNIC X in Nashville, TN. In April 2007, one member of ICC (Kn1ghtl0rd) spoke at Notacon 4, once again in Cleveland, Ohio.
In October 2007 Lowtek_Mystik and Kn1ghtl0rd gave a talk again about RFID at PhreakNIC 11 in Nashville, TN. This was part two of the duo's RFID information and covered previously unaddressed material.
October 2008 brought the members of ICC together again at PhreakNIC 12 in Nashville, TN. Droops and Morgellon (formerly Lowtek_Mystik) gave a talk that started with basic electronics knowledge and then moved into building embedded systems (specifically using the Arduino). Irongeek presented a talk on hardware keyloggers: their use, the advantages and disadvantages of the current crop on the market, and how they might be detected via physical inspection and software.
Presentations
Notacon 2
'C64 RIP (Revival In Progress)' by Zach Notacon 2 April 2005
Interzone 5
'Hacker Media' by droops and Ponyboy Interz0ne 5 March 2006
Notacon 3
'Build Your Own Linux' by droops Notacon 3 April 2006
'Blended Threat Management' by kn1ghtl0rd Notacon 3 April 2006
'The Rise and Fall of Payphones, and the Evolution of Phreaking' by Ponyboy Notacon 3 April 2006
'Network Printer Hacking' by Irongeek Notacon 3 April 2006
'Hacker Media Panel' by droops, Ponyboy, Lowtek_Mystik, Dosman, Zach, (also featuring slick0, Jason Scott, and Drew) Notacon 3 April 2006
PhreakNIC X
'Understanding RFID' by Lowtek Mystik and Kn1ghtl0rd October 2006 (see video below)
'Penetration Testing with Bart's PE' by Irongeek October 2006
Notacon 4
'Grid Computing with Alchemi and .NET' by Kn1ghtl0rd April 2007
PhreakNIC 0x0b
'RFID 2.0' by Lowtek Mystik and Kn1ghtl0rd October 2007
Notacon 5
'Lock Picking into the New Frontier: From Mechanical to Electronic Locks' by Dosman April 2008
PhreakNIC 12
'The Extraordinary Journey from Fundamental Electronics to Fabulous Enchanted Systems with Arduino's and Magical Potions.' by Droops and Morgellon October 2008
'Hardware Keyloggers: Use, Review, and Stealth' by Irongeek October 2008
Notacon 6
'Interactivity with Arduinos, Transducing the Physical World' by Droops & Morgellon the Lowtek Mystic April 2009
(Morgellon was unable to make this con, droops presented this talk solo)
PhreakNIC 13
'Arduino Fun Part Deux' by Droops & Morgellon the Lowtek Mystic October 2009
(Morgellon was unable to make this con, droops presented this talk solo)
'Darknets: Fun and games with anonymizing private networks' by Irongeek October 2009
'Lock Picking is Not a Crime (unless you are here)' by Dosman and Dave October 2009
PhreakNIC 19
'A Beginners Guide to Nootropics' by Ponyboy November 2015
Other projects
Hackermedia - the site for hacker media content
Geeks Unleashed: Assault and Batteries - Fighting game
PDA Phreak - a revived site with pda and phreaking content
Slast Linux - A Live-CD containing Asterisk and based on Slax
Free Linux CD Project - Send in CDs and get Linux mailed back to you
Old Computer Documentation Project - Old computer books
RFID Syphon - To be shown at Phreaknic 11
References
Hacker Public Radio
TWAT Radio: Today with a Techie (archive.org final snapshot - website under different owner now)
Slast Linux
Hackermedia
Official website
List of episodes at Jason Scott's website
Binary Revolution Radio
BellCore Radio
InfoGrid Website
GridCrack Console Download Page
Kn1ghtl0rd and Lowtek_Mystik talking about RFID at PhreakNIC X
Hacker groups
Organizations established in 2004
|
2625019
|
https://en.wikipedia.org/wiki/International%20Open%20Source%20Network
|
International Open Source Network
|
The International Open Source Network has as its slogan "software freedom for all". It is a Centre of Excellence for free software (also known as FLOSS, FOSS, or open-source software) in the Asia-Pacific region.
IOSN says it "shapes its activities around FOSS technologies and applications" and is "tasked specifically to facilitate and network FOSS advocates and human resources in the region."
FLOSS's perceived potential
IOSN's website says: "FOSS presents itself as an access solution for developing countries. It represents an opportunity for these countries to adopt affordable software and solutions towards bridging the digital divide. Only the use of FOSS permits sustainable development of software; it is technology that is free to learn about, maintain, adapt and reapply".
It explains its emphasis on Free and Open Source Software for the following reasons:
Universal access to software without restrictions.
Less dependence on imported technology.
Freedom to share and collaborate in development efforts.
Freedom to customize software to local languages and cultures.
Development of local software capacity.
Open standards and vendor independence.
Since 2008, IOSN has been managed from three centres of excellence: the University of the Philippines Manila (ASEAN+3), CDAC in Chennai, India (South Asia), and a consortium of academic and government members in the Pacific Island Countries (PIC).
IOSN objectives
IOSN's objectives include:
Serve as a clearing house for information on FOSS in the Asia-Pacific region.
Strengthen current FOSS capacities.
Assist with the development of needed toolkits and resource materials, including localisation efforts.
Assist in the coordination of FOSS programmes and initiatives through information sharing and networking in the Asia-Pacific region.
Affiliations and sponsorships
IOSN is an initiative of the UNDP Asia-Pacific Development Information Programme and is supported by the International Development Research Centre (IDRC).
External links
IOSN's official site
Bangladesh Open Source Network's (BdOSN) official site
Asia-Pacific Development Information Programme of the UNDP
Free and open-source software organizations
Information and communication technologies for development
Information technology organizations
|
56143781
|
https://en.wikipedia.org/wiki/NiceHash
|
NiceHash
|
NiceHash is a global cryptocurrency hash power broker and cryptocurrency exchange with an open marketplace that connects sellers of hashing power (cryptominers) with buyers of hashing power using the sharing economy approach. The company provides software for cryptocurrency mining. The company was founded in 2014 by two Slovenian university students, Marko Kobal and Matjaž Škorjanc.
The company is based in the British Virgin Islands and has offices in Maribor, Slovenia. NiceHash users are primarily video gamers who have powerful graphics cards (GPUs) suited to cryptocurrency mining. The company has over 2.5 million users in 190 countries worldwide as of November 2021.
History
NiceHash was founded in April 2014 by Matjaž Škorjanc, a former medical student turned computer programmer, and Marko Kobal.
NiceHash founder Matjaž Škorjanc is one of the creators of the Mariposa botnet ("mariposa" is Spanish for "butterfly"), malware that infected over 1 million computers. The Slovenian National Police Force arrested Škorjanc on charges of distributing the malware in 2010; he was found guilty and served four years and ten months in a Slovenian prison. On June 5, 2019, US law enforcement opened a case into the operations of the Mariposa (Butterfly Bot, BFBOT) malware gang, and the FBI moved forward with new charges and arrest warrants against four suspects, including Škorjanc. He was detained in prison in Germany in 2019 and, due to international law agreements on double jeopardy, was released in 2020.
On December 6, 2017, approximately 4,700 Bitcoins (US$64 million at the time of the hack) were stolen from NiceHash allegedly by a spear phishing attack. Due to the open and transparent nature of the blockchain, the security breach received an influx of attention as the stolen sum and movement of bitcoins were visible to anyone on the internet.
On December 21, 2017, Marko Kobal resigned as the CEO of NiceHash. On that day, the company also re-opened its marketplace after the December 6 hack. NiceHash announced that it had hired cybersecurity experts and implemented new security protocols after the incident. NiceHash said that affected users who made a claim by the deadline of December 16, 2020 had their funds reimbursed.
On February 17, 2021, a North Korean hacker group (Lazarus) was indicted for the 2017 NiceHash attack.
Business model
Hash power marketplace
NiceHash provides an open hash power marketplace where buyers can purchase computing power to add to their mining pool or operations.
Sellers (miners)
Sellers or miners have to run NiceHash mining software and connect their mining hardware, or just regular PCs, to NiceHash stratum servers and to the buyer's order. Their hashing power is forwarded to the pool that the buyer has chosen for mining. For each valid share they submit, they are paid in bitcoin at a price determined by the current weighted average, which is refreshed each minute. This is all done automatically and the process does not require complex technical skills.
In return for providing this service, NiceHash takes a percentage or fee from each group (buyers and sellers).
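As a rough sketch of the pricing described above, the snippet below computes a weighted-average price from a set of hypothetical buy orders and a seller payout net of a marketplace fee; the prices, quantities, and fee rate are invented for illustration and are not NiceHash's actual figures.

# Hypothetical sketch of a seller payout under a weighted-average price.
# All numbers below are illustrative only.

def weighted_average_price(orders):
    """orders: list of (price_in_btc_per_th_per_day, th_bought) tuples."""
    total = sum(th for _, th in orders)
    return sum(price * th for price, th in orders) / total

def payout(delivered_th_days, avg_price, fee_rate):
    """Pay the seller for delivered hash power, minus the marketplace fee."""
    return delivered_th_days * avg_price * (1 - fee_rate)

orders = [(0.0021, 500), (0.0019, 1500)]   # hypothetical active buy orders
avg = weighted_average_price(orders)        # refreshed each minute in practice
print(f"Payout: {payout(2.5, avg, fee_rate=0.03):.6f} BTC")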
NiceHash opened a cryptocurrency exchange in May 2019, where miners and other traders of cryptocurrencies trade their mined coins for other cryptocurrencies or exchange them for fiat currency. The platform lists 64 coins and tokens, and 80 trading pairs.
References
External links
licensed CC BY-SA
Cryptocurrencies
Slovenian websites
Internet properties established in 2014
Companies based in Ljubljana
Slovenian brands
Companies established in 2014
|
52427673
|
https://en.wikipedia.org/wiki/Jan%20Zaj%C3%AD%C4%8Dek
|
Jan Zajíček
|
Jan Zajíček (born 4 March 1977 in Prague, Czechoslovakia) is a Czech film director, screenwriter and artist.
Early life
Jan Zajíček was born on 4 March 1977 in Prague, Czechoslovakia. He studied fine art at the Václav Hollar School of Art and film directing at the Film and TV School of the Academy of Performing Arts in Prague (FAMU). In 1992 he began creating graffiti art under the pseudonym "Scarf" or "Skarf", becoming one of the earliest graffiti artists in Czechoslovakia. His experience with graffiti later influenced his strongly visual cinematic style, characterised by its combination of live-action with animation and visual effects. Between 1993 and 2000 he was a member of the Czech hip hop group WWW, which played in Prague venues including Alterna Komotovka, RC Bunkr, ROXY, and Rock Café, and in 1996 supported Sinéad O'Connor. During his studies and after graduation, he was a lecturer of experimental audio-visual production at the Josef Škvorecký Literary Academy. Since 2010, he has been a resident at the MeetFactory Contemporary Art Centre.
Career
Music videos
Zajíček has directed several award-winning music videos. His first video, for the song "Známka punku" (The Sign of Punk) by the Czech punk band Visací zámek, was named Best Music Video of the Year by the Czech Music Academy. Other videos that received attention included "Days Will Never Be the Same" and "Meleme, meleme kávu" (Grinding the Coffee) by Czech hip hop artists Hugo Toxxx and Vladimir 518, which received Filter magazine's award for Best Music Video of the Year and was selected by the International Short Film Festival Oberhausen for screening during its MuVi programme in 2010.
Theatre and stage design
From 2004 to 2013, Zajíček worked as a director and animator with several theatre groups, including Theatre XXL, VerTeDance, the National Theatre and the State Opera. In 2012, he created video content for the 60 metre screen at the Czech House for the 2012 Summer Olympics in London, United Kingdom. Along with Tomáš Mašín, he was also the co-director of Czech rock band Lucie's 2014 concert tour.
Film
In 2003 Zajíček produced a short experimental student film, The End of the Individual (), a study of the social structure of society on a model of one's own death. The film is a combination of 3D animation and live-action with a non-linear narrative; it won several awards and was positively received internationally. In 2010, he created the short video montage Polys and contributed to the Czech pavilion at Expo 2010 in Shanghai, China.
He has also directed several commercials (for General Electric, ING, Nike, Skoda Auto, Kia Motors) and co-edited several documentaries. In 2011 he edited a documentary about the impact of corporate psychopaths upon society, and how the overuse of anti-depressants can result in erratic behavior; entitled I Am Fishead, it featured Peter Coyote, Philip Zimbardo, Václav Havel, Nicholas Christakis, Robert D. Hare and Christopher J. Lane. He was a co-creator (with rapper Vladimir 518) of the Czech TV documentary series Kmeny (Tribes), about urban subcultures, and in 2015 he wrote and directed the show's episode about hackers. In 2016 he wrote and co-directed (with the female Czech graffiti artist Sany) a documentary about female graffiti entitled Girl Power, the first documentary on this topic. It starred Martha Cooper, Lady Pink, and others, and was screened at cinemas and festivals all around the globe.
Awards
1999 – Eurovideo 99 Malaga - Honorable Mention (won) / Caramel Is Sugar That Will Never Recover ()
2002 – FAMU Festival - Best Sound Award (won) / The End of the Individual ()
2003 – A. N. Stankovič Award - Solitaire d’Or (won) / The End of the Individual ()
2003 – IFF Karlovy Vary - Best Student Film Collection Award (won) / The End of the Individual (), shared with V. Kadrnka and H. Papírníková
2005 – Sazka Dance Award (won) / Silent Talk (), VerTeDance
2006 – Czech Music Academy Award - Best Music Video of the Year (won) / "Známka punku" (The Sign of Punk)
2007 – Óčko Music TV Award - Best Music Video of the Year (nominated) / "Chvátám" (Rush)
2008 – Filter Mag Award - Best Music Video of the Year (won) / Days Will Never Be the Same
2008 – Óčko Music TV Award - Best Music Video of the Year (nominated) / Days Will Never Be the Same
2008 – Žebřík Award - Best Music Video of the Year (nominated) / Days Will Never Be the Same
2009 – Filter Mag Award - Best Music Video of the Year (won) / "Meleme, meleme kávu" (Grinding the Coffee)
2016 – UNERHÖRT Film Festival Hamburg - Best Film Prize (won) / Girl Power
Work
Direction and screenwriting filmography
2002 – The End of Individual () / short
2003 – Off Off / documentary TV series
2003 – Operation In/Out (Operace In/Out) / documentary
2006 – Break My Heart Please! / short dance film series
2012 – Then And Now () / documentary
2015 – Tribes: Hackers () / documentary
2016 – Girl Power / documentary
Music videos
1998 – WWW / The Caramel Is Sugar That Will Never Recover ()
2005 – Padlock (Visací Zámek) / The Sign of Punk ()
2006 – Pio Squad / Rush ()
2006 – PSH / The Year of PSH ()
2008 – Sunshine / Days Will Never Be The Same
2009 – Hugo Toxxx & Vladimir518 / Grinding the Coffee ()
2011 – Padlock () / 50
Edit
2002 – Elusive Butterfly () / documentary
2008 – The Anatomy of Gag () / short
2009 – Extraordinary Life Stories - Josef Abrhám () / TV documentary
2011 – I Am Fishead / documentary
2013 – Hotelier / documentary
2014 – Magical Dramatic Club () / TV documentary series
Theatre performances (cooperation)
2003 – Hypermarket / The National Theatre
2004 – Eldorado / The National Theatre
2004 – The Queen of Spades () / The State Opera
2004 – Silent Talk () / VerTeDance
2006 – Break My Heart Please! / Theatre XXL
2012 – Czech House / 2012 Summer Olympics in London, United Kingdom
2013 – War with the Newts () / The State Opera
2013 – Krabat – The Sorcerer's Apprentice () / The National Theatre
2014 – The Lucie 2014 Megatour
Festivals and exhibitions
2002 – Febiofest, Prague, the Czech Republic (The End of Individual / )
2003 – Anthology Film Archive, New York City, USA (The End of Individual / )
2003 – Karlovy Vary International Film Festival, the Czech Republic (The End of Individual / )
2003 – Dahlonega International Film Festival, Georgia, USA (The End of Individual / )
2008 – Chelsea Art Museum / Sonicself, New York City, USA (The End of Individual / )
2008 – Bolzano ShortFilm Festival, Italy (Rush / )
2008 – Brooklyn Film Festival, New York City, USA (Days Will Never Be the Same)
2008 – PechaKucha Night, Prague, the Czech Republic
2010 – 56th International Short Film Festival Oberhausen, Germany (Grinding the Coffee / )
2010 – DOX Centre for Contemporary Art, Prague, the Czech Republic (Polys)
2010 – Expo 2010, Shanghai, China (Polys)
2015 – AFO International Festival of Science Documentary Films, Olomouc, the Czech Republic (Tribes: Hackers / )
Girl Power (2016): Red Gallery London, United Kingdom, Urban Art Fair Paris, France, Cinema L'Ecran Saint-Denis, France, American Cosmograph Toulouse, France, Expo Charleroi Tattoo’moi 1 Graffiti, Belgium, Urban Spree Berlin, Germany, Friedrich Wilhelm Murnau Foundation Wiesbaden, Germany, Milla Club Munich, Germany, IFZ Leipzig, Germany, Film Festival Essen, Germany, Platzprojekt Hannover, Germany, Culture centrum Kiel, Germany, UNERHÖRT Film Festival Hamburg, Germany, Frameout Film Festival Vienna, Austria, Kuns Festival Oslo, Norway, Helsinki International Film Festival, Finland, Stencibility Festival Tartu, Estonia, International Film Festival 86 Slavutych, Ukraine, Artmossphere Moscow, Russia, Artmossphere Saint Petersburg, Russia, Cultural Institution Krajn, Slovenia, Blart festival, Bosnia and Herzegovina, Colombian Urban Art Film Festival Bogota, Colombia, De Cine MÁS Managua, Nicaragua, Redfern Community Centre Sydney, Australia & StayFly Sydney, Australia.
Literature
Martina Overstreet: In Graffiti We Trust. Praha: Mladá fronta 2006, 230 pages
References
External links
Czech-Slovak Film Database
Jan Zajicek’s videos on Vimeo
1977 births
Czech music video directors
Film directors from Prague
Living people
|
1343949
|
https://en.wikipedia.org/wiki/Test%20case
|
Test case
|
In software engineering, a test case is a specification of the inputs, execution conditions, testing procedure, and expected results that define a single test to be executed to achieve a particular software testing objective, such as to exercise a particular program path or to verify compliance with a specific requirement. Test cases underlie testing that is methodical rather than haphazard. A battery of test cases can be built to produce the desired coverage of the software being tested. Formally defined test cases allow the same tests to be run repeatedly against successive versions of the software, allowing for effective and consistent regression testing.
Formal test cases
In order to fully test that all the requirements of an application are met, there must be at least two test cases for each requirement: one positive test and one negative test. If a requirement has sub-requirements, each sub-requirement must have at least two test cases. Keeping track of the link between the requirement and the test is frequently done using a traceability matrix. Written test cases should include a description of the functionality to be tested, and the preparation required to ensure that the test can be conducted.
A formal written test case is characterized by a known input and by an expected output, which is worked out before the test is executed. The known input should test a precondition and the expected output should test a postcondition.
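As a minimal illustration of a formal test case, the sketch below pairs a known input with an expected output, checks a precondition before the call, and includes one positive and one negative test as described above; the function under test and its values are hypothetical, not taken from any particular system.

import unittest

def apply_discount(price, rate):
    """Hypothetical function under test: reduce a price by a fractional rate."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

class DiscountTestCase(unittest.TestCase):
    def test_discount_positive(self):
        price, rate = 100.00, 0.15            # known input
        self.assertGreater(price, 0)          # precondition
        result = apply_discount(price, rate)
        self.assertEqual(result, 85.00)       # expected output (postcondition)

    def test_discount_negative(self):
        # Negative test: an out-of-range rate must be rejected, not silently accepted.
        with self.assertRaises(ValueError):
            apply_discount(100.00, 1.5)

if __name__ == "__main__":
    unittest.main()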
Informal test cases
For applications or systems without formal requirements, test cases can be written based on the accepted normal operation of programs of a similar class. In some schools of testing, test cases are not written at all but the activities and results are reported after the tests have been run.
In scenario testing, hypothetical stories are used to help the tester think through a complex problem or system. These scenarios are usually not written down in any detail. They can be as simple as a diagram of a testing environment or a description written in prose. The ideal scenario test is a story that is motivating, credible, complex, and easy to evaluate. Scenarios usually differ from test cases in that test cases are single steps while scenarios cover a number of steps.
Typical written test case format
A test case is usually a single step, or occasionally a sequence of steps, to test the correct behaviour, functionality, or features of an application. An expected result or expected outcome is usually given.
Additional information that may be included:
Test Case ID - This field uniquely identifies a test case.
Test case Description/Summary - This field describes the test case objective.
Test steps - In this field, the exact steps are mentioned for performing the test case.
Pre-requisites - This field specifies the conditions or steps that must be satisfied before the test steps are executed.
Test category
Author - Name of the tester.
Automation - Whether this test case is automated or not.
Pass/fail
Remarks
Larger test cases may also contain prerequisite states or steps, and descriptions.
A written test case should also contain a place for the actual result.
These steps can be stored in a word processor document, spreadsheet, database or other common repository.
In a database system, you may also be able to see past test results, who generated them, and the system configuration used to generate them. These past results would usually be stored in a separate table.
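When test cases are kept in a spreadsheet or database, the fields listed above map naturally onto a structured record. The sketch below shows one possible shape for such a record; the field names mirror the list above and the example values are invented.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TestCaseRecord:
    test_case_id: str
    description: str
    prerequisites: List[str]
    test_steps: List[str]
    expected_result: str
    category: str = "functional"
    author: str = ""
    automated: bool = False
    actual_result: Optional[str] = None   # filled in when the test is run
    status: Optional[str] = None          # "pass" / "fail"
    remarks: str = ""

tc = TestCaseRecord(
    test_case_id="TC-042",
    description="Login succeeds with valid credentials",
    prerequisites=["A registered user account exists"],
    test_steps=["Open the login page", "Enter valid credentials", "Submit"],
    expected_result="User is redirected to the dashboard",
    author="A. Tester",
)
print(tc)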
Test suites often also contain
Test summary
Configuration
Besides describing the functionality to be tested and the preparation required to ensure that the test can be conducted, the most time-consuming part of working with test cases is creating them and modifying them when the system changes.
Under special circumstances, there could be a need to run the test, produce results, and then have a team of experts evaluate whether the results can be considered a pass. This often happens when determining performance numbers for a new product. The first test run is taken as the baseline for subsequent tests and product release cycles.
Acceptance tests, which use a variation of a written test case, are commonly performed by a group of end-users or clients of the system to ensure that the developed system meets the specified requirements or the contract. User acceptance tests are differentiated by the inclusion of happy path or positive test cases to the almost complete exclusion of negative test cases.
See also
Classification Tree Method
References
External links
Writing Software Security Test Cases - Putting security test cases into your test plan by Robert Auger
Software Test Case Engineering By Ajay Bhagwat
Software testing
|
51077504
|
https://en.wikipedia.org/wiki/Nick%20Timothy
|
Nick Timothy
|
Nicholas James Timothy (born March 1980) is a British political adviser. He served as Joint Downing Street Chief of Staff, alongside Fiona Hill, to Prime Minister Theresa May, until his resignation in the wake of the 2017 general election.
Early life
Timothy was born in Birmingham, the son of a steel worker and a school secretary. He was educated at King Edward VI Grammar School in Aston, Birmingham, and at the University of Sheffield, where he gained a First in Politics.
He has cited as his inspiration in politics the Birmingham-born Liberal politician Joseph Chamberlain, of whom he wrote a short biography for the Conservative History Group. He has supported conservative philosophies which he believes benefit poorer people and has suggested the Conservative party should focus on benefiting all citizens.
Career
Early posts
Following his graduation, Timothy worked at the Conservative Research Department (CRD) for three years, from 2001 to 2004. In 2004, Timothy left the Conservative Research Department to work as corporate affairs adviser for the Corporation of London. In 2005, Timothy took up a post as a policy adviser for the Association of British Insurers. In 2006, Timothy returned to politics after two years in the financial sector, spending a year working for Theresa May, the first of three posts on May's staff. In 2007, Timothy returned to the CRD, where he worked for a further three years.
Home Office
In 2010, Theresa May was appointed Secretary of State at the Home Office and appointed Timothy as a special adviser. He spent five years working for the Home Secretary, before leaving, in 2015, to become a Director at the New Schools Network (NSN).
New Schools Network
While at the NSN he spoke in favour of ending the 50% Rule which requires oversubscribed Free Schools to allocate half of their places without reference to faith.
In 2015, Timothy wrote an article to express his worry that the People's Republic of China was effectively buying Britain's silence on allegations of Chinese human rights abuse and opposing China's involvement in sensitive sectors such as the Hinkley Point C nuclear power station. He criticised David Cameron and George Osborne for "selling our national security to China" and asserted that "the Government seems intent on ignoring the evidence and presumably the advice of the security and intelligence agencies." He warned that security experts were worried that the Chinese could use their role in the programme to build weaknesses into computer systems which would allow them to shut down Britain's energy production at will and argued that "no amount of trade and investment should justify allowing a hostile state easy access to the country's critical national infrastructure."
In October 2016, the Health Service Journal rated him as the fifth most influential person in the English NHS in 2016.
Timothy has stated that he voted to leave the European Union in the 2016 membership referendum.
Downing Street
Following David Cameron's resignation as Prime Minister in the wake of the Brexit referendum result, Timothy took a sabbatical from his position at the NSN to work on Theresa May's 2016 leadership campaign. May's campaign was a success and Timothy was appointed Joint Chief of Staff to the Prime Minister on 14 July 2016.
In spring 2017, May called a snap general election. As a result of the election, the Conservative Party lost its majority and formed a minority government dependent on the support of the Democratic Unionist Party. Timothy, along with Fiona Hill, faced immediate calls for his removal. Theresa May was also given an ultimatum by Conservative Members of Parliament: sack Timothy or face a leadership challenge.
On 9 June 2017, Timothy resigned as Joint Chief of Staff to the Prime Minister. He, along with Hill, had been blamed by members of the Conservative Party for a disastrous campaign, which resulted in May losing a 20-point lead in the polls.
Reflecting in 2020 on the projected cost of adult social care, Timothy wrote "Many things went wrong in that election campaign, but I resigned as joint Chief of Staff in Downing Street because our social care proposal blew up the manifesto."
Journalism
Since leaving Downing Street, Timothy has worked as a columnist for The Daily Telegraph newspaper.
Brexit and allegations of antisemitism
In February 2018, Timothy denied allegations of antisemitism following the publication of an article of which he was the principal author that claimed the existence of a "secret plot" to stop Brexit by the Jewish philanthropist George Soros. In response, Timothy tweeted: "Throughout my career I’ve campaigned against antisemitism, helped secure more funding for security at synagogues and Jewish schools".
2019 general election
In November 2019, Timothy failed in a bid to be selected as the Conservative candidate for the Meriden constituency in the West Midlands, for the 2019 general election. The seat had previously been held by Dame Caroline Spelman, who opted to stand down as an MP and candidate over the "intensity of abuse arising out of Brexit".
2022 Commonwealth Games
In January 2019 Timothy was appointed as a member of the organising committee of the 2022 Commonwealth Games, to be held in his home city of Birmingham.
The Trojan Horse Scandal
In February 2022, The New York Times released a podcast entitled "The Trojan Horse Affair", created by Brian Reed and Hamza Syed. The podcast shed light on Timothy's contribution to the scandal: he emailed a Birmingham community centre that was due to host an event entitled "Trojan Horse or Trojan Hoax" in order to shut down the event. In the email, it is alleged, Timothy insinuated that the owners of the community centre would be associated with terrorism if they allowed the event to go ahead, and referenced an article he himself had written.
References
External links
Twitter feed
Columns by Timothy for ConservativeHome
Living people
1980 births
British political consultants
Conservative Party (UK) people
People from Birmingham, West Midlands
Alumni of the University of Sheffield
British special advisers
Commanders of the Order of the British Empire
Downing Street Chiefs of Staff
|
57559371
|
https://en.wikipedia.org/wiki/Cultist%20Simulator
|
Cultist Simulator
|
Cultist Simulator is a card-based simulation video game developed by indie studio Weather Factory and published by Humble Bundle. It was released for Microsoft Windows, macOS and Linux computer systems in May 2018, with mobile versions developed by Playdigious and released in April 2019. A port for Nintendo Switch was released in February 2021.
The game, set in a Lovecraftian horror theme, has the player seek out and become the leader of an occult cult, with actions and other events played out through various types of playing cards.
Gameplay
Cultist Simulator is a narrative-driven simulation game that has the player take on the role of a citizen in a nameless society where their actions may lead to their creating a cultlike following. The game's mechanics are presented as a combination of cards and action buttons. Cards represent a range of different elements: persons, attributes like health or reason, emotions, locations, items, wealth, lore, and others. With the Aspirant, the default choice of character, the player starts with two cards: a career and one health. To play, the player drops cards onto slots contained within the action buttons, then starts the action which triggers a timer. When complete, the player clicks the action, collecting played cards and additional rewards, which may be random or predetermined based on the action. The first action button is "Work"; placing their career card on this earns the player "Funds" cards.
As the game progresses, new action buttons can appear. Some of these are beneficial, adding more options that players can do, such as Study, Talk, Explore, or Dream. Other action buttons are a detriment to the player's progress. For example, players will eventually get an action button that reflects the passage of their character's time in the game, which will automatically consume wealth cards; should the player have no wealth cards when this action's timer completes, they will gain Hunger cards, which leads to a chain of cards and action buttons that can lead to the death of the character. Some cards, often generated by action buttons, also have timers attached, either which they will burn out, or may revert to a different card type. The game takes place in real-time, but the player has the option of pausing the game to review cards and actions, and to place or collect cards from the board. The game ultimately has many different parallel victory and failure conditions, both based on "sane" and "insane" routes that the player's character may uncover.
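A highly simplified sketch of the slot-and-timer mechanic described above is given below; the action name, duration, and reward card are invented for illustration and do not reflect the game's actual content or data.

import time

# Minimal sketch of an action slot that consumes cards, runs a timer,
# and returns the played cards plus reward cards.

class Action:
    def __init__(self, name, duration_s, transform):
        self.name = name
        self.duration_s = duration_s
        self.transform = transform            # maps cards in -> cards out

    def run(self, cards_in):
        print(f"{self.name}: consuming {cards_in}, timer {self.duration_s}s")
        time.sleep(self.duration_s)           # the real game runs timers in real time
        return self.transform(cards_in)

# A "Work" action that returns the played card along with a reward card.
work = Action("Work", duration_s=1, transform=lambda cards: cards + ["Funds"])

returned = work.run(["Career: Clerk"])        # drop the career card into the slot
print(returned)                               # ['Career: Clerk', 'Funds']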
Development
Weather Factory is an independent game studio created by Alexis Kennedy, who previously had founded Failbetter Games. Failbetter had developed several gothic and Lovecraftian horror story-driven games, including Fallen London. Kennedy split from Failbetter and founded Weather Factory in 2016, looking for a more hands-on role in designing and writing than his management-focused role at Failbetter. Cultist Simulator represented the studio's first game and an experimental title that they could produce quickly with minimal costs. Humble Bundle published the game.
The use of card-driven narrative systems was already something Kennedy was familiar with through Fallen London. Kennedy said that a card-based approach helped to make concepts tangible and allowed players to organize the cards as they saw best fit. Cultist Simulator represented the most minimalist take Kennedy could take with the card-based concept, since cards represented a vast vocabulary of terms within the game. While Kennedy provided user interface elements to help the player understand where to place cards, much of the strategy and reasoning was something he wanted players to discover for themselves. Kennedy did this to both mimic crafting systems in other role-playing games, and to create the Lovecraftian feel to the game. Kennedy said the player "will be able to mesh together an understanding of this very deep, very complicated lore in the same way that the scholars of Lovecraft are actually doing in fiction."
Kennedy announced the project in September 2016, providing a free simplified web-driven version of how the game would play, with plans to use Kickstarter to raise funds for an anticipated October 2017 release date. Kennedy launched the Kickstarter in September 2017 and pushed the release date to May 2018; the campaign reached its fundraising target. The game was released on 31 May 2018 for Microsoft Windows, macOS, and Linux systems.
Kennedy extended Cultist Simulator through downloadable content following the Paradox Interactive model. The first such content, "The Dancer", was released on 16 October 2018. A free update released on 22 January 2019 includes an extended end-game, where the player is challenged to take their successful cult leader and elevate them into a godlike state. Two additional DLC packs, "Priest" and "Ghoul" were released in May 2019, alongside the Cultist Simulator: Anthology Edition which includes the game and all DLC content in a single package. In May 2020, a DLC named "Exile" was released, which differed from the base game more than previous expansions in gameplay.
Ports for iOS and Android have been developed by Playdigious and were released in the second quarter of 2019.
A port for Nintendo Switch was released on 2 February 2021.
Reception
Cultist Simulator received "mixed or average reviews" from professional critics at launch.
According to Alexis Kennedy, Cultist Simulator's sales surpassed 35,000 units within six days of release, which allowed the game to break even and generate a profit in its first week. Its sales were similar in volume to those of Sunless Sea over the same period.
Cultist Simulator won the award for "Best Emotional Game Design" at the Emotional Game Awards 2018. It was nominated in the "Debut Game" and "Game Innovation" categories for the 15th British Academy Games Awards, and won the awards for "Best Game Design" and "Best Innovation" at the Develop:Star Awards, whereas its other nomination was for "Best Narrative".
References
External links
Initial web-based prototype released in 2016
2018 video games
Android (operating system) games
Dark fantasy video games
Early access video games
Indie video games
IOS games
Kickstarter-funded video games
Linux games
macOS games
Nintendo Switch games
Simulation video games
Single-player video games
Weird fiction video games
Video games developed in the United Kingdom
Windows games
Video games about cults
Lovecraftian horror
Humble Games games
Video games with downloadable content
|
57836368
|
https://en.wikipedia.org/wiki/Hybrid%20Access%20Networks
|
Hybrid Access Networks
|
Hybrid Access Networks refer to a special architecture for broadband access networks in which two different network technologies are combined to improve bandwidth. A frequent motivation for such hybrid access networks is to combine one xDSL network with a wireless network such as LTE. The technology is generic and can be applied to combine different types of access networks such as DOCSIS, WiMAX, 5G or satellite networks. The Broadband Forum has specified an architecture as a framework for the deployment of such converged networks.
Use cases
One of the main motivations for such Hybrid Access Networks is to provide faster Internet services in rural areas where it is not always cost-effective to deploy faster xDSL technologies such as G.Fast or VDSL2 that cannot cover long distances between the street cabinet and the home. Several governments, notably in Europe, required network operators to provide fast Internet services to all inhabitants with a minimum of 30 Mbps by 2020.
A second use case is to improve the reliability of the access link given that it is unlikely that both the xDSL network and the wireless network will fail at the same time.
A third motivation is fast service turn-up: the customer can install the hybrid network access immediately and use the wireless leg while the network operator installs the wired part.
Technology
Several techniques are defined by the Broadband Forum to create Hybrid Access Networks. To illustrate them, we assume that the end user has a hybrid CPE (customer-premises equipment) router that is attached to both a wired access network such as xDSL and a wireless one such as LTE. Other deployments are possible, e.g., the end user might have two different access routers that are linked together by a cable instead of a single hybrid CPE router.
The first deployment scenario is where the network operator provides a hybrid CPE router to each subscriber but no specialised equipment in the operator's network. There are two possible configurations for IP addresses. The first is to allocate different IP addresses to the wired and wireless interfaces. In this case, the hybrid CPE router needs to intelligently load-balance packets over the two networks. In particular, it must ensure that all packets belonging to a given TCP connection are sent over the same interface. The second is to allocate the same IP address to both the wired and the wireless networks and configure the routing in these networks to ensure that packets are correctly routed.
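One way to picture the per-connection requirement in the first configuration is a simple hash over the flow identifier. The following is only a minimal illustrative sketch, not part of any Broadband Forum specification; the interface names dsl0 and lte0 and the addresses are hypothetical.

    import hashlib

    INTERFACES = ["dsl0", "lte0"]  # hypothetical names for the xDSL and LTE links

    def interface_for_flow(proto, src_ip, src_port, dst_ip, dst_port):
        # Hash the flow identifier so that every packet of one TCP connection
        # is consistently mapped to the same access link.
        key = f"{proto}:{src_ip}:{src_port}:{dst_ip}:{dst_port}".encode()
        digest = hashlib.sha256(key).digest()
        return INTERFACES[digest[0] % len(INTERFACES)]

    # Every packet of this connection leaves through the same interface.
    print(interface_for_flow("tcp", "192.0.2.10", 49152, "198.51.100.7", 443))

A real CPE would additionally weight the choice by the measured capacity of each link, which a plain hash alone does not do.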
The second deployment scenario is where the network operator provides a hybrid CPE router to each subscriber and installs a Hybrid Aggregation Gateway (HAG) inside its access networks. The Hybrid Aggregation Gateway plays an important role in balancing the packets sent by and destined to the hybrid CPE router over the two access networks. Two technologies have been defined and deployed to enable hybrid CPE routers to interact with Hybrid Aggregation Gateways. The main objective of these technologies is to use the two access links efficiently even when they have different delays and bandwidths. One technical difficulty that occurs when distributing packets over such heterogeneous links is to accurately detect congestion, notably on the wireless network whose bandwidth can vary quickly, and to cope with the reordering caused by the delay difference.

One approach uses GRE tunnels to hide the two links from the upper layer protocol. Both the hybrid CPE and the HAG need to reorder the received packets to ensure that TCP receives in-sequence packets. The second approach uses Multipath TCP (MPTCP), a recent TCP extension that has been designed to enable the transmission of the packets that belong to a single session across different links. This approach leverages the ability of MPTCP to efficiently handle congestion and cope with reordering on the heterogeneous access links. MPTCP needs to be supported by both communicating endpoints. Two approaches have been defined for the interactions between the hybrid CPE router and the Hybrid Aggregation Gateway: the transparent mode is used when the Hybrid Aggregation Gateway is placed on the path of all packets sent by the hybrid CPE router; otherwise, the Hybrid Aggregation Gateway includes a TCP converter. Additional details on Hybrid Access Networks and their deployment have been published in the networking literature.
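As an informal illustration of the Multipath TCP approach, recent Linux kernels expose MPTCP directly through the sockets API. The sketch below assumes a Linux host with MPTCP enabled; the protocol number 262 and the host name are stated here only for illustration.

    import socket

    # IPPROTO_MPTCP is 262 on Linux; Python exposes the constant from version 3.10 on.
    IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

    # Creating the socket raises OSError if the kernel was built without MPTCP;
    # if the remote peer does not support MPTCP, the connection falls back to plain TCP.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    sock.connect(("example.org", 80))
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
    print(sock.recv(200))
    sock.close()

In a hybrid access deployment the additional subflows over the second access link are managed by the operating system's path manager rather than by the application.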
Deployments
The first commercial deployments started in 2015.
Several deployments of Hybrid Access Networks have already been documented.
Deutsche Telekom has deployed Hybrid Access Networks by using GRE tunnels
Proximus has deployed Hybrid Access Networks by using Multipath TCP
KPN has deployed Hybrid Access Networks by using Multipath TCP. The solution is available to 440,000 addresses
Telia has also deployed Hybrid Access Networks in Lithuania and Finland
Free (ISP) has also deployed Hybrid Access Networks in France
Go Malta has deployed a Hybrid Access Network in Malta
References
Digital subscriber line
Broadband
Internet access
Transmission Control Protocol
|
683452
|
https://en.wikipedia.org/wiki/9P%20%28protocol%29
|
9P (protocol)
|
9P (or the Plan 9 Filesystem Protocol or Styx) is a network protocol developed for the Plan 9 from Bell Labs distributed operating system as the means of connecting the components of a Plan 9 system. Files are key objects in Plan 9. They represent windows, network connections, processes, and almost anything else available in the operating system.
9P was revised for the 4th edition of Plan 9 under the name 9P2000, containing various improvements. Some of the improvements are the removal of certain filename restrictions, the addition of a 'last modifier' metadata field for directories, and the addition of authentication files. The latest version of the Inferno operating system also uses 9P2000. The Inferno file protocol was originally called Styx, but technically it has always been a variant of 9P.
A server implementation of 9P for Unix, called u9fs, is included in the Plan 9 distribution. A 9P OS X client kernel extension is provided by Mac9P. A kernel client driver for Linux, implementing 9P with some extensions, is part of the v9fs project. 9P and its derivatives have also found application in embedded environments, such as the Styx on a Brick project.
Server applications
Many of Plan 9's applications take the form of 9P file servers. Examples include:
acme: a text editor/development environment
rio: the Plan 9 windowing system
plumber: interprocess communication
ftpfs: an FTP client that presents the files and directories on a remote FTP server in the local namespace
wikifs: a wiki editing tool that presents a remote wiki as files in the local namespace
webfs: a file server that retrieves data from URLs and presents the contents and details of responses as files in the local namespace
Outside of Plan 9, the 9P protocol is still used when a lightweight remote filesystem is required:
NixOS: a purely functional and declarative Linux distribution can rebuild itself inside a virtual machine, where the client uses 9P to mount the package store directory of the host.
Windows Subsystem for Linux: since Windows 10 version 1903, the subsystem implements 9P as a server and the host Windows operating system acts as a client.
Crostini: a custom 9P server is used to provide access to files outside of a Linux VM
QEMU: the VirtFS device allows for filesystem sharing over 9P, which is accelerated with kernel drivers and shared memory
DIOD: Distributed I/O Daemon - a 9P file server
Implementation
9P sends the following messages between clients and servers. These messages correspond to the entry points in the Plan 9 vfs layer that any 9P server must implement; a minimal encoding sketch follows the list below.
version
Negotiate protocol version
error
Return an error
flush
Abort a message
auth, attach
Establish a connection
walk
Descend a directory hierarchy
create, open
Prepare a fid for I/O on an existing or new file
read, write
Transfer data from and to a file
clunk
Forget about a fid
remove
Remove a file from a server
stat, wstat
Inquire or change file attributes
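To make the wire format concrete, the following sketch encodes a version request by hand. It assumes the standard little-endian 9P2000 layout (size[4] type[1] tag[2] msize[4] version[s]) with Tversion = 100 and the reserved NOTAG value 0xFFFF; the msize value is an arbitrary example.

    import struct

    TVERSION = 100   # message type of a Tversion request in 9P2000
    NOTAG = 0xFFFF   # version negotiation uses the reserved "no tag" value

    def t_version(msize=8192, version=b"9P2000"):
        # Fields after the 4-byte size: type[1] tag[2] msize[4] version[s],
        # where a string "s" is a 2-byte length followed by the bytes themselves.
        body = struct.pack("<BHI", TVERSION, NOTAG, msize)
        body += struct.pack("<H", len(version)) + version
        # The leading size field counts the whole message, including itself.
        return struct.pack("<I", 4 + len(body)) + body

    # -> 13000000 64 ffff 00200000 0600 395032303030 (hex, grouped by field)
    print(t_version().hex())

Sending this message and reading back the matching Rversion reply (type 101) is the first exchange of every 9P session.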
See also
Distributed file system
References
External links
9P Resources page at cat-v.org
9P Manual
The Styx Architecture for Distributed Systems by Rob Pike and Dennis Ritchie
The Organization of Networks in Plan 9 by Dave Presotto and Phil Winterbottom
Security in Plan 9
Application layer protocols
Inferno (operating system)
Inter-process communication
Internet Protocol based network software
Network file systems
Network file transfer protocols
Network protocols
Plan 9 from Bell Labs
|
1708944
|
https://en.wikipedia.org/wiki/Architecture%20tradeoff%20analysis%20method
|
Architecture tradeoff analysis method
|
In software engineering, architecture tradeoff analysis method (ATAM) is a risk-mitigation process used early in the software development life cycle.
ATAM was developed by the Software Engineering Institute at the Carnegie Mellon University. Its purpose is to help choose a suitable architecture for a software system by discovering trade-offs and sensitivity points.
ATAM is most beneficial when done early in the software development life-cycle, when the cost of changing architectures is minimal.
ATAM benefits
The following are some of the benefits of the ATAM process:
identified risks early in the life cycle
increased communication among stakeholders
clarified quality attribute requirements
improved architecture documentation
documented basis for architectural decisions
ATAM process
The ATAM process consists of gathering stakeholders together to analyze business drivers (system functionality, goals, constraints, desired non-functional properties) and from these drivers extract quality attributes that are used to create scenarios. These scenarios are then used in conjunction with architectural approaches and architectural decisions to create an analysis of trade-offs, sensitivity points, and risks (or non-risks). This analysis can be converted to risk themes and their impacts whereupon the process can be repeated. With every analysis cycle, the analysis process proceeds from the more general to the more specific, examining the questions that have been discovered in the previous cycle, until such time as the architecture has been fine-tuned and the risk themes have been addressed.
Steps of the ATAM process
ATAM formally consists of nine steps, outlined below:
Present ATAM – Present the concept of ATAM to the stakeholders, and answer any questions about the process.
Present business drivers – everyone in the process presents and evaluates the business drivers for the system in question.
Present the architecture – the architect presents the high-level architecture to the team, with an 'appropriate level of detail'.
Identify architectural approaches – different architectural approaches to the system are presented by the team, and discussed.
Generate quality attribute utility tree – define the core business and technical requirements of the system, and map them to an appropriate architectural property. Present a scenario for this given requirement (see the sketch after these steps).
Analyze architectural approaches – Analyze each scenario, rating them by priority. The architecture is then evaluated against each scenario.
Brainstorm and prioritize scenarios – among the larger stakeholder group, present the current scenarios, and expand.
Analyze architectural approaches – Perform step 6 again with the added knowledge of the larger stakeholder community.
Present results – provide all documentation to the stakeholders.
These steps are separated into two phases: Phase 1 consists of steps 1–6, after which the state and context of the project, the driving architectural requirements and the state of the architectural documentation are known. Phase 2 consists of steps 7–9 and finishes the evaluation.
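A quality attribute utility tree (step 5) is essentially a small data structure: quality attributes refined into concerns, each backed by concrete scenarios rated for importance and difficulty. The sketch below is only a hypothetical illustration of that structure and of how scenarios might be ordered for step 6; the attributes, scenarios and ratings are invented.

    # attribute -> refinement -> [(scenario, (importance, difficulty))], rated H/M/L
    utility_tree = {
        "Performance": {
            "Latency": [("Deliver a price quote within 50 ms under peak load", ("H", "M"))],
        },
        "Availability": {
            "Hardware failure": [("Resume service within 30 s after a node crash", ("H", "H"))],
        },
    }

    RANK = {"H": 3, "M": 2, "L": 1}

    def prioritized_scenarios(tree):
        # Flatten the tree and analyze the highest-importance, highest-difficulty scenarios first.
        rows = [(scenario, rating)
                for refinements in tree.values()
                for scenarios in refinements.values()
                for scenario, rating in scenarios]
        return sorted(rows, key=lambda row: (RANK[row[1][0]], RANK[row[1][1]]), reverse=True)

    for scenario, rating in prioritized_scenarios(utility_tree):
        print(rating, scenario)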
See also
ilities
Architecture-centric design method
Multi-criteria decision analysis
ARID
Software architecture analysis method, precursor to architecture tradeoff analysis method
Architectural analytics
References
External links
Reduce Risk with Architecture Evaluation
ATAM: Method for Architecture Evaluation
Software architecture
Enterprise architecture
|
8168925
|
https://en.wikipedia.org/wiki/Network%20address
|
Network address
|
A network address is an identifier for a node or host on a telecommunications network. Network addresses are designed to be unique identifiers across the network, although some networks allow for local, private addresses, or locally administered addresses that may not be unique. Special network addresses are allocated as broadcast or multicast addresses. These too are not unique.
In some cases, network hosts may have more than one network address. For example, each network interface controller may be uniquely identified. Further, because protocols are frequently layered, more than one protocol's network address can occur in any particular network interface or node and more than one type of network address may be used in any one network.
Network addresses can be flat addresses, which contain no information about the node's location in the network (such as a MAC address), or hierarchical addresses, which contain structural information used for routing (such as an IP address).
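The difference between flat and hierarchical addresses can be illustrated with Python's standard ipaddress module; the specific addresses below are documentation examples rather than addresses tied to any real network.

    import ipaddress

    # Hierarchical: the prefix length splits the address into a network part used
    # for routing and a host part identifying the node within that network.
    iface = ipaddress.ip_interface("192.0.2.10/24")
    print(iface.network)   # 192.0.2.0/24  (where to route)
    print(iface.ip)        # 192.0.2.10    (which host)

    # Flat: a MAC address carries no location information, only (near-)unique identity.
    mac = "00:1a:2b:3c:4d:5e"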
Examples
Examples of network addresses include:
Telephone number, in the public switched telephone network
IP address in IP networks including the Internet
IPX address, in NetWare
X.25 or X.21 address, in a circuit switched data network
MAC address, in Ethernet and other related IEEE 802 network technologies
See also
IP address
References
External links
Telecommunications engineering
|
1056289
|
https://en.wikipedia.org/wiki/Quartz%20%28graphics%20layer%29
|
Quartz (graphics layer)
|
In Apple's macOS operating system, Quartz is the Quartz 2D and Quartz Compositor part of the Core Graphics framework. Quartz includes both a 2D renderer in Core Graphics and the composition engine that sends instructions to the graphics card. Because of this vertical nature, Quartz is often synonymous with Core Graphics.
In a general sense, Quartz or Quartz technologies can refer to almost every part of the graphics model from the rendering layer down to the compositor including Core Image and Core Video. Other Apple graphics technologies that use the "Quartz" prefix include these:
Quartz Extreme
QuartzGL (originally Quartz 2D Extreme)
QuartzCore
Quartz Display Services
Quartz Event Services
Quartz 2D and Quartz Compositor
Quartz 2D is the primary two-dimensional (2D) text and graphics rendering library: It directly supports Aqua by displaying two-dimensional graphics to create the user interface, including on-the-fly rendering and anti-aliasing. Quartz can render text with sub-pixel precision; graphics are limited to more traditional anti-aliasing, which is the default mode of operation but can be turned off. In Mac OS X 10.4 Tiger, Apple introduced Quartz 2D Extreme, enabling Quartz 2D to offload rendering to compatible GPUs. However, GPU rendering was not enabled by default due to potential video redraw issues or kernel panics.
As of Mac OS X v10.5 Quartz 2D Extreme has been renamed to QuartzGL. However, it still remains disabled by default, as there are some situations where it can degrade performance, or experience visual glitches; it is a per-application setting which can be turned on if the developer wishes.
The Quartz Compositor is the compositing engine used by macOS. In Mac OS X Jaguar and later, the Quartz Compositor can use the graphics accelerator (GPU) to vastly improve composition performance. This technology is known as Quartz Extreme and is enabled automatically on systems with supported graphics cards.
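As an informal illustration of the Quartz 2D (Core Graphics) drawing model described above, the following sketch uses the pyobjc Quartz bindings, which mirror the underlying C API; it assumes macOS with the pyobjc-framework-Quartz package installed.

    from Quartz import (CGBitmapContextCreate, CGBitmapContextCreateImage,
                        CGColorSpaceCreateDeviceRGB, CGContextFillRect,
                        CGContextSetRGBFillColor, CGRectMake,
                        kCGImageAlphaPremultipliedLast)

    width, height = 256, 256
    color_space = CGColorSpaceCreateDeviceRGB()
    # Passing None and 0 lets Core Graphics allocate the backing store and row stride.
    ctx = CGBitmapContextCreate(None, width, height, 8, 0, color_space,
                                kCGImageAlphaPremultipliedLast)
    CGContextSetRGBFillColor(ctx, 0.2, 0.4, 0.8, 1.0)     # opaque blue fill color
    CGContextFillRect(ctx, CGRectMake(32, 32, 192, 192))  # fill a rectangle in the bitmap

    image = CGBitmapContextCreateImage(ctx)  # snapshot the bitmap as a CGImage
    print(image)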
Use of PDF
It is widely stated that Quartz "uses PDF internally" (notably by Apple in their 2000 Macworld presentation and Quartz's early developer documentation), often by people making comparisons with the Display PostScript technology used in NeXTSTEP and OPENSTEP (of which macOS is a descendant). Quartz's internal imaging model correlates well with the PDF object graph, making it easy to output PDF to multiple devices.
See also
Quartz Composer
References
External links
Quartz 2D Programming Guide at developer.apple.com
Core Graphics API Reference at developer.apple.com
Quartz in Tiger (from a review of 10.4 in Ars Technica)
Introduction to OS X graphics APIs
Cocoa Graphics with Quartz: Part 1
Cocoa Graphics with Quartz: Part 2
Graphics software
MacOS
|
23629
|
https://en.wikipedia.org/wiki/DECSYSTEM-20
|
DECSYSTEM-20
|
The DECSYSTEM-20 was a 36-bit Digital Equipment Corporation PDP-10 mainframe computer running the TOPS-20 operating system (products introduced in 1977).
PDP-10 computers running the TOPS-10 operating system were labeled DECsystem-10 as a way of differentiating them from the PDP-11. Later on, those systems running TOPS-20 (on the KL10 PDP-10 processors) were labeled DECSYSTEM-20 (the block capitals being the result of a lawsuit brought against DEC by Singer, which once made a computer called "The System Ten"). The DECSYSTEM-20 was sometimes called PDP-20, although this designation was never used by DEC.
Models
The following models were produced:
DECSYSTEM-2020: KS10 bit-slice processor with up to 512 kilowords of solid state RAM (The ADP OnSite version of the DECSYSTEM-2020 supported 1 MW of RAM)
DECSYSTEM-2040: KL10 ECL processor with up to 1024 kilowords of magnetic core RAM
DECSYSTEM-2050: KL10 ECL processor with 2k words of cache and up to 1024 kilowords of RAM
DECSYSTEM-2060: KL10 ECL processor with 2k words of cache and up to 4096 kilowords of solid state memory
DECSYSTEM-2065: DECSYSTEM-2060 with MCA25 pager (double-sized (1024 entry) two-way associative hardware page table)
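Since each word is 36 bits, the kiloword figures above translate to more capacity than readers used to 8-bit bytes might expect: the 4096 kilowords of a fully expanded DECSYSTEM-2060, for example, amount to 4096 × 1024 × 36 ≈ 151 million bits, roughly the information content of 18 megabytes.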
The only significant difference the user could see between a DECsystem-10 and a DECSYSTEM-20 was the operating system and the color of the paint. Most (but not all) machines sold to run TOPS-10 were painted "Blasi Blue", whereas most TOPS-20 machines were painted "Terracotta" (often mistakenly called "Chinese Red" or orange; the actual name of the color on the paint cans was Terra Cotta).
There were some significant internal differences between the earlier KL10 Model A processors, used in the earlier DECsystem-10s running on KL10 processors, and the later KL10 Model Bs, used for the DECSYSTEM-20s. Model As used the original PDP-10 memory bus, with external memory modules. The later Model B processors used in the DECSYSTEM-20 used internal memory, mounted in the same cabinet as the CPU. The Model As also had different packaging; they came in the original tall PDP-10 cabinets, rather than the short ones used later on for the DECSYSTEM-20.
The last released implementation of DEC's 36-bit architecture was the single cabinet DECSYSTEM-2020, using a KS10 processor.
The DECSYSTEM-20 was primarily designed and used as a small mainframe for timesharing. That is, multiple users would concurrently log on to individual user accounts and share use of the main processor to compile and run applications. Separate disk allocations were maintained for all users by the operating system, and various levels of protection could be maintained for System, Owner, Group, and World users. A model 2060, for example, could typically host up to 40 to 60 simultaneous users before exhibiting noticeably reduced response time.
Remaining machines
The Living Computer Museum of Seattle, Washington maintains a 2065 running TOPS-10, which is available to interested parties via SSH upon registration (at no cost) at their website.
References
C. Gordon Bell, Alan Kotok, Thomas N. Hasting, Richard Hill, "The Evolution of the DECsystem-10", in C. Gordon Bell, J. Craig Mudge, John E. McNamara, Computer Engineering: A DEC View of Hardware Systems Design (Digital Equipment, Bedford, 1979)
Frank da Cruz, Christine Gianone, The DECSYSTEM-20 at Columbia University 1977–1988
Further reading
Storage Organization and Management in TENEX. Daniel L. Murphy. AFIPS Proceedings, 1972 FJCC.
"DECsystem-10/DECSYSTEM-20 Processor Reference Manual". 1982.
"Manuals for DEC 36-bit computers".
"Introduction to DECSYSTEM-20 Assembly Language Programming" (Ralph E. Gorin, 1981, )
External links
PDP-10 Models—Explains all the various KL-10 models in detail
Columbia University DECSYSTEM-20
Login into the Living Computer Museum, a portal into the Paul Allen collection of timesharing and interactive computers, including an operational DECSYSTEM-20 KL-10 2065
36-bit computers
DEC mainframe computers
Computer-related introductions in 1977
|
38061502
|
https://en.wikipedia.org/wiki/Tracy%20Camp
|
Tracy Camp
|
Tracy Kay Camp (born September 27, 1964) is an American computer scientist noted for her research on wireless networking. She is also noted for her leadership in broadening participation in computing. She was the co-chair of CRA-W from 2011 to 2014 and the co-chair of ACM-W from 1998 to 2002.
Biography
Camp received a B.A. in Mathematics from Kalamazoo College in 1987. She received a M.S. in Computer Science from Michigan State University in 1989 and a Ph.D in Computer Science from The College of William & Mary in 1993.
She then joined the Department of Computer Science at the University of Alabama as an assistant professor in 1993. In 1998 she moved to the Colorado School of Mines as an assistant professor, and was then promoted to associate professor in 2000 and to professor in 2007.
In 2010–11, she was interim head of Mathematical and Computer Sciences and then interim head of Electrical Engineering and Computer Science. In this role, she helped lead the reorganization of the university. She is currently Head of the Department of Computer Science at Colorado School of Mines.
Awards
In 2012 she was named an ACM Fellow.
Her other notable awards include:
ACM Fellow (2012)
IEEE Fellow (2016)
References
External links
Colorado School of Mines: Tracy Kay Camp, Department of Computer Science
American women computer scientists
American computer scientists
Colorado School of Mines faculty
Fellows of the Association for Computing Machinery
Living people
1964 births
Scientists from Detroit
American women academics
21st-century American women
|
1467948
|
https://en.wikipedia.org/wiki/Problem%20solving
|
Problem solving
|
Problem solving consists of using generic or ad hoc methods in an orderly manner to find solutions to difficulties.
Some of the problem-solving techniques developed and used in philosophy, medicine, societies, mathematics, engineering, computer science, and artificial intelligence in general are related to mental problem-solving techniques studied in psychology and cognitive sciences.
Definition
The term problem solving has a slightly different meaning depending on the discipline. For instance, it is a mental process in psychology and a computerized process in computer science. There are two different types of problems: ill-defined and well-defined; different approaches are used for each. Well-defined problems have specific end goals and clearly expected solutions, while ill-defined problems do not. Well-defined problems allow for more initial planning than ill-defined problems. Solving problems sometimes involves dealing with pragmatics, the way that context contributes to meaning, and semantics, the interpretation of the problem. The ability to understand what the end goal of the problem is, and what rules could be applied represents the key to solving the problem. Sometimes the problem requires abstract thinking or coming up with a creative solution.
Psychology
Problem solving in psychology refers to the process of finding solutions to problems encountered in life. Solutions to these problems are usually situation or context-specific. The process starts with problem finding and problem shaping, where the problem is discovered and simplified. The next step is to generate possible solutions and evaluate them. Finally a solution is selected to be implemented and verified. Problems have an end goal to be reached and how you get there depends upon problem orientation (problem-solving coping style and skills) and systematic analysis. Mental health professionals study the human problem solving processes using methods such as introspection, behaviorism, simulation, computer modeling, and experiment. Social psychologists look into the person-environment relationship aspect of the problem and independent and interdependent problem-solving methods. Problem solving has been defined as a higher-order cognitive process and intellectual function that requires the modulation and control of more routine or fundamental skills.
Problem solving has two major domains: mathematical problem solving and personal problem solving. Both are seen in terms of some difficulty or barrier that is encountered. Empirical research shows many different strategies and factors influence everyday problem solving. Rehabilitation psychologists studying individuals with frontal lobe injuries have found that deficits in emotional control and reasoning can be re-mediated with effective rehabilitation and could improve the capacity of injured persons to resolve everyday problems. Interpersonal everyday problem solving is dependent upon the individual personal motivational and contextual components. One such component is the emotional valence of "real-world" problems and it can either impede or aid problem-solving performance. Researchers have focused on the role of emotions in problem solving, demonstrating that poor emotional control can disrupt focus on the target task and impede problem resolution and likely lead to negative outcomes such as fatigue, depression, and inertia. In conceptualization, human problem solving consists of two related processes: problem orientation and the motivational/attitudinal/affective approach to problematic situations and problem-solving skills. Studies conclude people's strategies cohere with their goals and stem from the natural process of comparing oneself with others.
Cognitive sciences
The early experimental work of the Gestaltists in Germany marked the beginning of the study of problem solving (e.g., Karl Duncker in 1935 with his book The psychology of productive thinking). Later this experimental work continued through the 1960s and early 1970s with research conducted on relatively simple (but novel for participants) laboratory tasks of problem solving. The use of simple, novel tasks was due to their clearly defined optimal solutions and short solving times, which made it possible for the researchers to trace participants' steps in the problem-solving process. The researchers' underlying assumption was that simple tasks such as the Tower of Hanoi correspond to the main properties of "real world" problems and thus that the characteristic cognitive processes within participants' attempts to solve simple problems are the same for "real world" problems too; simple problems were used for reasons of convenience and with the expectation that generalizations to more complex problems would become possible. Perhaps the best-known and most impressive example of this line of research is the work by Allen Newell and Herbert A. Simon. Other experts have shown that the principle of decomposition improves the ability of the problem solver to make good judgment.
Computer science
In computer science and in the part of artificial intelligence that deals with algorithms, problem solving includes techniques of algorithms, heuristics and root cause analysis. The amount of resources (e.g. time, memory, energy) required to solve problems is described by computational complexity theory. In more general terms, problem solving is part of a larger process that encompasses problem determination, de-duplication, analysis, diagnosis, repair, and other steps.
Other problem solving tools are linear and nonlinear programming, queuing systems, and simulation.
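As a toy illustration of linear programming as a problem-solving tool, the following sketch uses SciPy's linprog routine; the cost coefficients and constraint are invented purely for the example.

    from scipy.optimize import linprog

    # Minimize 2x + 3y subject to x + 2y >= 8 and x, y >= 0.
    # linprog expects "<=" constraints, so the inequality is negated.
    c = [2, 3]
    A_ub = [[-1, -2]]
    b_ub = [-8]
    result = linprog(c, A_ub=A_ub, b_ub=b_ub,
                     bounds=[(0, None), (0, None)], method="highs")
    print(result.x, result.fun)  # optimal plan and its cost (here x=0, y=4, cost 12)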
Much of computer science involves designing completely automatic systems that will later solve some specific problem—systems to accept input data and, in a reasonable amount of time, calculate the correct response or a correct-enough approximation.
In addition, people in computer science spend a surprisingly large amount of human time finding and fixing problems in their programs: Debugging.
Logic
Formal logic is concerned with such issues as validity, truth, inference, argumentation and proof. In a problem-solving context, it can be used to formally represent a problem as a theorem to be proved, and to represent the knowledge needed to solve the problem as the premises to be used in a proof that the problem has a solution. The use of computers to prove mathematical theorems using formal logic emerged as the field of automated theorem proving in the 1950s. It included the use of heuristic methods designed to simulate human problem solving, as in the Logic Theory Machine, developed by Allen Newell, Herbert A. Simon and J. C. Shaw, as well as algorithmic methods, such as the resolution principle developed by John Alan Robinson.
In addition to its use for finding proofs of mathematical theorems, automated theorem-proving has also been used for program verification in computer science. However, already in 1958, John McCarthy proposed the advice taker, to represent information in formal logic and to derive answers to questions using automated theorem-proving. An important step in this direction was made by Cordell Green in 1969, using a resolution theorem prover for question-answering and for such other applications in artificial intelligence as robot planning.
The resolution theorem-prover used by Cordell Green bore little resemblance to human problem solving methods. In response to criticism of his approach, emanating from researchers at MIT, Robert Kowalski developed logic programming and SLD resolution, which solves problems by problem decomposition. He has advocated logic for both computer and human problem solving, and computational logic as a means of improving human thinking.
Engineering
Problem solving is used when products or processes fail, so corrective action can be taken to prevent further failures. It can also be applied to a product or process prior to an actual failure event—when a potential problem can be predicted and analyzed, and mitigation applied so the problem never occurs. Techniques such as failure mode and effects analysis can be used to proactively reduce the likelihood of problems occurring.
Forensic engineering is an important technique of failure analysis that involves tracing product defects and flaws. Corrective action can then be taken to prevent further failures.
Reverse engineering attempts to discover the original problem-solving logic used in developing a product by taking it apart.
Military science
In military science, problem solving is linked to the concept of "end-states", the desired condition or situation that strategists wish to generate. The ability to solve problems is important at any military rank, but is highly critical at the command and control level, where it is strictly correlated with a deep understanding of qualitative and quantitative scenarios. Effectiveness of problem solving is used to measure the result of problem solving, tied to accomplishing the goal. Planning for problem-solving is the process of determining how to achieve the goal.
Problem-solving strategies
Problem-solving strategies are the steps that one would use to find the problems that stand in the way of reaching one's goal. Some refer to this as the "problem-solving cycle".

In this cycle one will acknowledge and recognize the problem, define the problem, develop a strategy to fix it, organize knowledge about the problem, figure out the resources at one's disposal, monitor one's progress, and evaluate the solution for accuracy. It is called a cycle because once one problem is resolved, another will usually appear.
Insight is the sudden solution to a long-vexing problem, a sudden recognition of a new idea, or a sudden understanding of a complex situation, an Aha! moment. Solutions found through insight are often more accurate than those found through step-by-step analysis. To solve more problems at a faster rate, insight is necessary for selecting productive moves at different stages of the problem-solving cycle. This problem-solving strategy pertains specifically to problems referred to as insight problems. Unlike Newell and Simon's formal definition of move problems, there has not been a generally agreed-upon definition of an insight problem (Ash, Jee, and Wiley, 2012; Chronicle, MacGregor, and Ormerod, 2004; Chu and MacGregor, 2011).
Blanchard-Fields looks at problem solving from one of two facets. The first looks at those problems that have only one solution (such as mathematical problems or fact-based questions), which are grounded in psychometric intelligence. The other is socioemotional in nature and has answers that change constantly (such as one's favorite color or what to get someone for Christmas).
The following techniques are usually called problem-solving strategies:
Abstraction: solving the problem in a model of the system before applying it to the real system
Analogy: using a solution that solves an analogous problem
Brainstorming: (especially among groups of people) suggesting a large number of solutions or ideas and combining and developing them until an optimum solution is found
Critical thinking
Divide and conquer: breaking down a large, complex problem into smaller, solvable problems (see the sketch after this list)
Hypothesis testing: assuming a possible explanation to the problem and trying to prove (or, in some contexts, disprove) the assumption
Lateral thinking: approaching solutions indirectly and creatively
Means-ends analysis: choosing an action at each step to move closer to the goal
Method of focal objects: synthesizing seemingly non-matching characteristics of different objects into something new
Morphological analysis: assessing the output and interactions of an entire system
Proof: try to prove that the problem cannot be solved. The point where the proof fails will be the starting point for solving it
Reduction: transforming the problem into another problem for which solutions exist
Research: employing existing ideas or adapting existing solutions to similar problems
Root cause analysis: identifying the cause of a problem
Trial-and-error: testing possible solutions until the right one is found
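Of the strategies above, divide and conquer is the one most directly expressible as an algorithm. The following sketch uses merge sort, a standard textbook example chosen here only for illustration: the problem is split into smaller solvable pieces whose solutions are then combined.

    def merge_sort(items):
        # Divide: split the list; conquer: sort each half; combine: merge the halves.
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]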
Problem-solving methods
Eight Disciplines Problem Solving
GROW model
How to Solve It
Lateral thinking
OODA loop (observe, orient, decide, and act)
PDCA (plan–do–check–act)
Root cause analysis
RPR problem diagnosis (rapid problem resolution)
TRIZ (in Russian: Teoriya Resheniya Izobretatelskikh Zadach, "theory of solving inventor's problems")
A3 problem solving
System dynamics
Hive mind
Experimental Action Plan
Common barriers
Common barriers to problem solving are mental constructs that impede our ability to correctly solve problems. These barriers prevent people from solving problems in the most efficient manner possible. Five of the most common processes and factors that researchers have identified as barriers to problem solving are confirmation bias, mental set, functional fixedness, unnecessary constraints, and irrelevant information.
Confirmation bias
Confirmation bias is an unintentional bias caused by the collection and use of data in a way that favors a preconceived notion. The beliefs affected by confirmation bias do not need to be accompanied by motivation – the desire to defend or find substantiation for beliefs that are important to that person. Research has found that professionals within scientific fields of study also experience confirmation bias. Andreas Hergovich, Reinhard Schott, and Christoph Burger's experiment conducted online, for instance, suggested that professionals within the field of psychological research are likely to view scientific studies that agree with their preconceived notions more favorably than studies that clash with their established beliefs. According to Raymond Nickerson, one can see the consequences of confirmation bias in real-life situations, which range in severity from inefficient government policies to genocide. Nickerson argued that those who killed people accused of witchcraft demonstrated confirmation bias with motivation. Researcher Michael Allen found evidence for confirmation bias with motivation in school children who worked to manipulate their science experiments in such a way that would produce favorable results. However, confirmation bias does not necessarily require motivation. In 1960, Peter Cathcart Wason conducted an experiment in which participants first viewed three numbers and then created a hypothesis that proposed a rule that could have been used to create that triplet of numbers. When testing their hypotheses, participants tended to only create additional triplets of numbers that would confirm their hypotheses, and tended not to create triplets that would negate or disprove their hypotheses. Thus research also shows that people can and do work to confirm theories or ideas that do not support or engage personally significant beliefs.
Mental set
Mental set was first articulated by Abraham Luchins in the 1940s and demonstrated in his well-known water jug experiments. In these experiments, participants were asked to fill one jug with a specific amount of water using only other jugs (typically three) with different maximum capacities as tools. After Luchins gave his participants a set of water jug problems that could all be solved by employing a single technique, he would then give them a problem that could either be solved using that same technique or a novel and simpler method. Luchins discovered that his participants tended to use the same technique that they had become accustomed to despite the possibility of using a simpler alternative. Thus mental set describes one's inclination to attempt to solve problems in such a way that has proved successful in previous experiences. However, as Luchins' work revealed, such methods for finding a solution that have worked in the past may not be adequate or optimal for certain new but similar problems. Therefore, it is often necessary for people to move beyond their mental sets in order to find solutions. This was again demonstrated in Norman Maier's 1931 experiment, which challenged participants to solve a problem by using a household object (pliers) in an unconventional manner. Maier observed that participants were often unable to view the object in a way that strayed from its typical use, a phenomenon regarded as a particular form of mental set (more specifically known as functional fixedness, which is the topic of the following section). When people cling rigidly to their mental sets, they are said to be experiencing fixation, a seeming obsession or preoccupation with attempted strategies that are repeatedly unsuccessful. In the late 1990s, researcher Jennifer Wiley worked to reveal that expertise can work to create a mental set in people considered to be experts in their fields, and she gained evidence that the mental set created by expertise could lead to the development of fixation.
Functional fixedness
Functional fixedness is a specific form of mental set and fixation, which was alluded to earlier in the Maier experiment, and furthermore it is another way in which cognitive bias can be seen throughout daily life. Tim German and Clark Barrett describe this barrier as the fixed design of an object hindering the individual's ability to see it serving other functions. In more technical terms, these researchers explained that "[s]ubjects become 'fixed' on the design function of the objects, and problem solving suffers relative to control conditions in which the object's function is not demonstrated." Functional fixedness is thus the situation in which an object's primary function hinders one's ability to see it serving any purpose other than its original one. In research that highlighted the primary reasons that young children are immune to functional fixedness, it was stated that "functional fixedness...[is when] subjects are hindered in reaching the solution to a problem by their knowledge of an object's conventional function." Furthermore, functional fixedness can easily be observed in commonplace situations. For instance, imagine the following situation: a man sees a bug on the floor that he wants to kill, but the only thing in his hand at the moment is a can of air freshener. If the man starts looking around the house for something to kill the bug with instead of realizing that the can of air freshener could itself be used for that purpose, he is said to be experiencing functional fixedness. The man's knowledge of the can as purely an air freshener hindered his ability to realize that it could have been used to serve another purpose, which in this instance was as an instrument to kill the bug. Functional fixedness can happen on multiple occasions and can cause us to have certain cognitive biases. If people only see an object as serving one primary purpose, then they fail to realize that the object can be used in various ways other than its intended purpose. This can in turn cause many issues with regards to problem solving.
Functional fixedness limits the ability for people to solve problems accurately by causing one to have a very narrow way of thinking. Functional fixedness can be seen in other types of learning behaviors as well. For instance, research has discovered the presence of functional fixedness in many educational instances. Researchers Furio, Calatayud, Baracenas, and Padilla stated that "... functional fixedness may be found in learning concepts as well as in solving chemistry problems." There was more emphasis on this function being seen in this type of subject and others.
There are several hypotheses in regards to how functional fixedness relates to problem solving. There are also many ways in which a person can run into problems while thinking of a particular object with having this function. If there is one way in which a person usually thinks of something rather than multiple ways then this can lead to a constraint in how the person thinks of that particular object. This can be seen as narrow minded thinking, which is defined as a way in which one is not able to see or accept certain ideas in a particular context. Functional fixedness is very closely related to this as previously mentioned. This can be done intentionally and or unintentionally, but for the most part it seems as if this process to problem solving is done in an unintentional way.
Functional fixedness can affect problem solvers in at least two particular ways. The first is with regards to time, as functional fixedness causes people to use more time than necessary to solve any given problem. Secondly, functional fixedness often causes solvers to make more attempts to solve a problem than they would have made if they were not experiencing this cognitive barrier. In the worst case, functional fixedness can completely prevent a person from realizing a solution to a problem. Functional fixedness is a commonplace occurrence, which affects the lives of many people.
Unnecessary constraints
Unnecessary constraints are another very common barrier that people face while attempting to problem-solve. This particular phenomenon occurs when the subject, trying to solve the problem subconsciously, places boundaries on the task at hand, which in turn forces him or her to strain to be more innovative in their thinking. The solver hits a barrier when they become fixated on only one way to solve their problem, and it becomes increasingly difficult to see anything but the method they have chosen. Typically, the solver experiences this when attempting to use a method they have already experienced success from, and they can not help but try to make it work in the present circumstances as well, even if they see that it is counterproductive.
Groupthink, or taking on the mindset of the rest of the group members, can also act as an unnecessary constraint while trying to solve problems. This is because when everybody in a group thinks the same thing and stops at the same conclusions, the members inhibit themselves from thinking beyond them. This is very common, but the most well-known example of this barrier making itself present is in the famous example of the dot problem. In this example, there are nine dots lying on a grid three dots across and three dots running up and down. The solver is then asked to draw no more than four lines, without lifting their pen or pencil from the paper. This series of lines should connect all of the dots on the paper. Then, what typically happens is the subject creates an assumption in their mind that they must connect the dots without letting his or her pen or pencil go outside of the square of dots. Standardized procedures like this can often bring mentally invented constraints of this kind, and researchers have found a 0% correct solution rate in the time allotted for the task to be completed. The imposed constraint inhibits the solver from thinking beyond the bounds of the dots. It is from this phenomenon that the expression "think outside the box" is derived.
This problem can be quickly solved with a dawning of realization, or insight. A few minutes of struggling over a problem can bring these sudden insights, where the solver quickly sees the solution clearly. Problems such as this are most typically solved via insight and can be very difficult for the subject depending on how they have structured the problem in their minds, how they draw on their past experiences, and how much they juggle this information in their working memories. In the case of the nine-dot example, the problem has already been structured incorrectly in the solver's mind because of the constraint that they have placed upon the solution. In addition to this, people experience struggles when they try to compare the problem to their prior knowledge, and they think they must keep their lines within the dots and not go beyond. They do this because trying to envision the dots connected outside of the basic square puts a strain on their working memory.
The solution to the problem becomes obvious as insight occurs following incremental movements made toward the solution. These tiny movements happen without the solver knowing. Then when the insight is realized fully, the "aha" moment happens for the subject. These moments of insight can take a long while to manifest or not so long at other times, but the way that the solution is arrived at after toiling over these barriers stays the same.
Irrelevant information
Irrelevant information is information presented within a problem that is unrelated or unimportant to the specific problem. Within the specific context of the problem, irrelevant information would serve no purpose in helping solve that particular problem. Often irrelevant information is detrimental to the problem solving process. It is a common barrier that many people have trouble getting through, especially if they are not aware of it. Irrelevant information makes solving otherwise relatively simple problems much harder.
For example: "Fifteen percent of the people in Topeka have unlisted telephone numbers. You select 200 names at random from the Topeka phone book. How many of these people have unlisted phone numbers?"
The people that are not listed in the phone book would not be among the 200 names you selected. The individuals looking at this task would have naturally wanted to use the 15% given to them in the problem. They see that there is information present and they immediately think that it needs to be used. This of course is not true. These kinds of questions are often used to test students taking aptitude tests or cognitive evaluations. They aren't meant to be difficult but they are meant to require thinking that is not necessarily common. Irrelevant Information is commonly represented in math problems, word problems specifically, where numerical information is put for the purpose of challenging the individual.
One reason irrelevant information is so effective at keeping a person off topic and away from the relevant information, is in how it is represented. The way information is represented can make a vast difference in how difficult the problem is to be overcome. Whether a problem is represented visually, verbally, spatially, or mathematically, irrelevant information can have a profound effect on how long a problem takes to be solved; or if it's even possible. The Buddhist monk problem is a classic example of irrelevant information and how it can be represented in different ways:
A Buddhist monk begins at dawn one day walking up a mountain, reaches the top at sunset, meditates at the top for several days until one dawn when he begins to walk back to the foot of the mountain, which he reaches at sunset. Making no assumptions about his starting or stopping or about his pace during the trips, prove that there is a place on the path which he occupies at the same hour of the day on the two separate journeys.
This problem is nearly impossible to solve because of how the information is represented. Because it is written out in a way that represents the information verbally, it causes us to try to create a mental image of the paragraph. This is often very difficult to do, especially with all the irrelevant information involved in the question. This example is made much easier to understand when the paragraph is represented visually. Now if the same problem was asked, but it was also accompanied by a corresponding graph, it would be far easier to answer this question; irrelevant information no longer serves as a road block. By representing the problem visually, there are no difficult words to understand or scenarios to imagine. The visual representation of this problem has removed the difficulty of solving it.
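The same conclusion can also be reached formally. Assuming only that the monk's position along the path is a continuous function of clock time on each day (an assumption the problem statement leaves implicit), a short intermediate value theorem argument runs as follows:

    Let $u(t)$ and $d(t)$ be the monk's distance from the foot of the mountain at clock
    time $t$ on the ascending and descending days, with $t$ ranging from dawn to sunset,
    and let $L$ be the length of the path. Define $h(t) = u(t) - d(t)$. Then
    $h(\mathrm{dawn}) = 0 - L < 0$ and $h(\mathrm{sunset}) = L - 0 > 0$, so by the
    intermediate value theorem there is a time $t^{*}$ with $h(t^{*}) = 0$, that is,
    $u(t^{*}) = d(t^{*})$: at that hour the monk occupies the same spot on both journeys.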
These types of representations are often used to make difficult problems easier. They can be used on tests as a strategy to remove Irrelevant Information, which is one of the most common forms of barriers when discussing the issues of problem solving. Identifying crucial information presented in a problem and then being able to correctly identify its usefulness is essential. Being aware of irrelevant information is the first step in overcoming this common barrier.
Other barriers for individuals
Individual humans engaged in problem-solving tend to overlook subtractive changes, including those that are critical elements of efficient solutions. This tendency to solve first, only, or mostly by creating or adding elements, rather than by subtracting elements or processes, has been shown to intensify with higher cognitive loads such as information overload.
Dreaming: problem-solving without waking consciousness
Problem solving can also occur without waking consciousness. There are many reports of scientists and engineers who solved problems in their dreams. Elias Howe, inventor of the sewing machine, figured out the structure of the bobbin from a dream.
The chemist August Kekulé was considering how benzene arranged its six carbon and hydrogen atoms. Thinking about the problem, he dozed off, and dreamt of dancing atoms that fell into a snakelike pattern, which led him to discover the benzene ring. As Kekulé wrote in his diary,
There also are empirical studies of how people can think consciously about a problem before going to sleep, and then solve the problem with a dream image. Dream researcher William C. Dement told his undergraduate class of 500 students that he wanted them to think about an infinite series, whose first elements were OTTFF, to see if they could deduce the principle behind it and to say what the next elements of the series would be. He asked them to think about this problem every night for 15 minutes before going to sleep and to write down any dreams that they then had. They were instructed to think about the problem again for 15 minutes when they awakened in the morning.
The sequence OTTFF is the first letters of the numbers: one, two, three, four, five. The next five elements of the series are SSENT (six, seven, eight, nine, ten). Some of the students solved the puzzle by reflecting on their dreams. One example was a student who reported the following dream:
With more than 500 undergraduate students, 87 dreams were judged to be related to the problems students were assigned (53 directly related and 34 indirectly related). Yet of the people who had dreams that apparently solved the problem, only seven were actually able to consciously know the solution. The rest (46 out of 53) thought they did not know the solution.
Mark Blechner conducted this experiment and obtained results similar to Dement's. He found that while trying to solve the problem, people had dreams in which the solution appeared to be obvious from the dream, but it was rare for the dreamers to realize how their dreams had solved the puzzle. Coaxing or hints did not get them to realize it, although once they heard the solution, they recognized how their dream had solved it. For example, one person in that OTTFF experiment dreamed:
In the dream, the person counted out the next elements of the series – six, seven, eight, nine, ten, eleven, twelve – yet he did not realize that this was the solution of the problem. His sleeping mindbrain solved the problem, but his waking mindbrain was not aware how.
Albert Einstein believed that much problem solving goes on unconsciously, and the person must then figure out and formulate consciously what the mindbrain has already solved. He believed this was his process in formulating the theory of relativity: "The creator of the problem possesses the solution." Einstein said that he did his problem-solving without words, mostly in images. "The words or the language, as they are written or spoken, do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be 'voluntarily' reproduced and combined."
Cognitive sciences: two schools
In cognitive sciences, researchers' realization that problem-solving processes differ across knowledge domains and across levels of expertise (e.g. Sternberg, 1995) and that, consequently, findings obtained in the laboratory cannot necessarily generalize to problem-solving situations outside the laboratory, has led to an emphasis on real-world problem solving since the 1990s. This emphasis has been expressed quite differently in North America and Europe, however. Whereas North American research has typically concentrated on studying problem solving in separate, natural knowledge domains, much of the European research has focused on novel, complex problems, and has been performed with computerized scenarios (see Funke, 1991, for an overview).
Europe
In Europe, two main approaches have surfaced, one initiated by Donald Broadbent (1977; see Berry & Broadbent, 1995) in the United Kingdom and the other one by Dietrich Dörner (1975, 1985; see Dörner & Wearing, 1995) in Germany. The two approaches share an emphasis on relatively complex, semantically rich, computerized laboratory tasks, constructed to resemble real-life problems. The approaches differ somewhat in their theoretical goals and methodology, however. The tradition initiated by Broadbent emphasizes the distinction between cognitive problem-solving processes that operate under awareness versus outside of awareness, and typically employs mathematically well-defined computerized systems. The tradition initiated by Dörner, on the other hand, has an interest in the interplay of the cognitive, motivational, and social components of problem solving, and utilizes very complex computerized scenarios that contain up to 2,000 highly interconnected variables (e.g., Dörner, Kreuzig, Reither & Stäudel's 1983 LOHHAUSEN project; Ringelband, Misiak & Kluwe, 1990). Buchner (1995) describes the two traditions in detail.
North America
In North America, initiated by the work of Herbert A. Simon on "learning by doing" in semantically rich domains, researchers began to investigate problem solving separately in different natural knowledge domains – such as physics, writing, or chess playing – thus relinquishing their attempts to extract a global theory of problem solving (e.g. Sternberg & Frensch, 1991). Instead, these researchers have frequently focused on the development of problem solving within a certain domain, that is, on the development of expertise (e.g., Chase & Simon, 1973; Chi, Feltovich & Glaser, 1981).
Areas that have attracted rather intensive attention in North America include:
Reading (Stanovich & Cunningham, 1991)
Writing (Bryson, Bereiter, Scardamalia & Joram, 1991)
Calculation (Sokol & McCloskey, 1991)
Political decision making (Voss, Wolfe, Lawrence & Engle, 1991)
Managerial problem solving
Lawyers' reasoning
Mechanical problem solving (Hegarty, 1991)
Problem solving in electronics (Lesgold & Lajoie, 1991)
Computer skills (Kay, 1991)
Game playing (Frensch & Sternberg, 1991)
Personal problem solving (Heppner & Krauskopf, 1987)
Mathematical problem solving (Pólya, 1945; Schoenfeld, 1985)
Social problem solving
Problem solving for innovations and inventions: TRIZ
Characteristics of complex problems
Complex problem solving (CPS) is distinguishable from simple problem solving (SPS). In SPS there is a single, simple obstacle in the way; in CPS, multiple obstacles exist at the same time. In a real-life example, a surgeon at work faces far more complex problems than an individual deciding what shoes to wear. As elucidated by Dietrich Dörner, and later expanded upon by Joachim Funke, complex problems have some typical characteristics, as follows:
Complexity (large numbers of items, interrelations and decisions)
enumerability
heterogeneity
connectivity (hierarchy relation, communication relation, allocation relation)
Dynamics (time considerations)
temporal constraints
temporal sensitivity
phase effects
dynamic unpredictability
Intransparency (lack of clarity of the situation)
commencement opacity
continuation opacity
Polytely (multiple goals)
inexpressiveness
opposition
transience
Collective problem solving
Problem solving is applied at many different levels, from the individual to the civilizational. Collective problem solving refers to problem solving performed collectively.
Social issues and global issues can typically only be solved collectively.
It has been noted that the complexity of contemporary problems has exceeded the cognitive capacity of any individual and requires different but complementary expertise and collective problem solving ability.
Collective intelligence is shared or group intelligence that emerges from the collaboration, collective efforts, and competition of many individuals.
Collaborative problem solving is about people working together face-to-face or in online workspaces with a focus on solving real-world problems. These groups are made up of members who share a common concern, a similar passion, and/or a commitment to their work. Members are willing to ask questions, wonder, and try to understand common issues. They share expertise, experiences, tools, and methods. The groups can be assigned by instructors, or may be student-regulated based on individual student needs. They may be fluid based on need, may occur only temporarily to finish an assigned task, or may be more permanent depending on the needs of the learners.

All members of the group must have some input into the decision-making process and have a role in the learning process. Group members are responsible for the thinking, teaching, and monitoring of all members in the group. Group work must be coordinated among its members so that each member makes an equal contribution to the whole work. Members must identify and build on their individual strengths so that everyone can make a significant contribution to the task.

Collaborative groups require joint intellectual effort between the members and involve social interactions to solve problems together. The knowledge shared during these interactions is acquired during communication, negotiation, and production of materials. Members actively seek information from others by asking questions; the capacity to use questions to acquire new information increases understanding and the ability to solve problems. Collaborative group work has the ability to promote critical thinking skills, problem-solving skills, social skills, and self-esteem. By using collaboration and communication, members often learn from one another and construct meaningful knowledge that often leads to better learning outcomes than individual work.
In a 1962 research report, Douglas Engelbart linked collective intelligence to organizational effectiveness, and predicted that pro-actively 'augmenting human intellect' would yield a multiplier effect in group problem solving: "Three people working together in this augmented mode [would] seem to be more than three times as effective in solving a complex problem as is one augmented person working alone".
Henry Jenkins, a key theorist of new media and media convergence, draws on the theory that collective intelligence can be attributed to media convergence and participatory culture. He criticizes contemporary education for failing to incorporate online trends of collective problem solving into the classroom, stating "whereas a collective intelligence community encourages ownership of work as a group, schools grade individuals". Jenkins argues that interaction within a knowledge community builds vital skills for young people, and teamwork through collective intelligence communities contributes to the development of such skills.
Collective impact is the commitment of a group of actors from different sectors to a common agenda for solving a specific social problem, using a structured form of collaboration.
After World War II, the UN, the Bretton Woods institutions and, later, the WTO were created, and collective problem solving at the international level crystallized around these three types of organizations from the 1980s onward. Because these global institutions remain state-like or state-centric, it has been called unsurprising that they continue state-like or state-centric approaches to collective problem-solving rather than alternative ones.
Crowdsourcing is a process of accumulating ideas, thoughts or information from many independent participants, with the aim of finding the best solution for a given challenge. Modern information technologies allow massive numbers of participants to be involved, as well as systems for managing these suggestions in ways that produce good results. With the Internet, a new capacity for collective problem solving, including on a planetary scale, was created.
See also
Actuarial science
Analytical skill
Creative problem-solving
Collective intelligence
Community of practice
Coworking
Crowdsolving
Divergent thinking
Grey problem
Innovation
Instrumentalism
Problem statement
Problem structuring methods
Psychedelics in problem-solving experiment
Structural fix
Subgoal labeling
Troubleshooting
Wicked problem
Notes
References
External links
Collaborative & Proactive Solutions
The Collaborative Problem Solving approach was originated by Dr. Ross Greene. He now refers to his model as Collaborative & Proactive Solutions, is no longer associated in any way with organizations or individuals marketing the product now called Collaborative Problem Solving, and does not endorse what they have done with his work.
Reasoning
Artificial intelligence
Educational psychology
Neuropsychological assessment
|
53841657
|
https://en.wikipedia.org/wiki/Yandex%20Launcher
|
Yandex Launcher
|
Yandex Launcher is a free GUI for organizing the workspace on Android smartphones.
Functionality
According to The Next Web, one of the main distinguishing features of Yandex Launcher is the built-in recommendation service. Machine learning technology provides the basis of the recommendation service, with which Launcher selects apps, games, videos and other forms of content that might interest the user. The key elements of Launcher are the content feed of personal recommendations by Yandex Zen, as well as a system of recommended apps; both elements are built into Launcher and analyze the user's favorite websites and other aspects of their behavior with the aim of creating a unique model of the user's preferences.
Other features of Launcher are: themes for the interface, wallpaper collections, fast search of contacts, apps and sites, search by app icon color, "smart" folders and widgets, built-in notifications on icons, screen manager, a visual editor grid for icons, etc.
History
In 2009, SPB Software published the SPB Mobile Shell application. The application won several accolades.
In 2011, SPB Software was purchased by Yandex. Through this, Yandex acquired the rights to the company's products, including SPB Shell 3D (a paid application).
After the purchase by Yandex, the shell became "Yandex.Shell". The company's services were built into it, and it was distributed free of charge to users from Russia and other countries.
In 2014, Yandex released a modified Android firmware, which was given the name Yandex.Kit. The firmware was tightly integrated with Yandex services. One of the standard apps supplied with the firmware was the launcher that was based on Yandex.Shell. Yandex.Kit was preinstalled, in particular, on Huawei smartphones.
On October 6, 2015, the Yandex Launcher GUI was released. Despite the fact that the developers of Yandex.Shell and Yandex.Kit took part in the creation of Launcher, these projects have little in common. Unlike Launcher, Kit was focused on enterprise applications and was not distributed through Google Play. Shell had a different monetization scheme and different geographical distribution.
On October 8, 2015, Google Play accidentally blocked Yandex Launcher. A few hours later Launcher was unblocked.
After its launch, the app was available only to users from Latin America; later it was unlocked for users from the EU, United States, Russia and other countries. On December 14, 2015, the app was made available without geographical limitations.
In October 2016, Yandex offered the pre-installation of its apps (including Yandex Launcher) to retailers and manufacturers of Android smartphones. Participants in this program included MTS (Russia), Multilaser (Brazil), ZTE (China), Wileyfox (United Kingdom), Posh Mobile and others.
At the start of 2016, the service's foreign audience was three times larger than its Russian audience.
Technology
To generate personal recommendations, Yandex Launcher uses artificial intelligence technology. The system analyzes which of the recommended apps the user has installed or ignored. Based on this information, the system predicts what apps users might be interested in at a later time. The more a user interacts with Launcher, the more accurate the recommendations become. Recommendations also depend on the user's place of residence, their interests and other factors.
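As a rough illustration of how feedback-driven app recommendation can work in principle, the following hypothetical Python sketch scores candidate apps from install/ignore feedback on earlier recommendations. It is an assumption-laden toy, not Yandex's actual algorithm, and all names in it are invented.

from collections import defaultdict

category_weight = defaultdict(float)

def record_feedback(app_categories, installed):
    """Update category weights from one recommendation the user installed or ignored."""
    delta = 1.0 if installed else -0.25
    for category in app_categories:
        category_weight[category] += delta

def score(app_categories):
    """Score a candidate app by the learned category preferences."""
    return sum(category_weight[c] for c in app_categories)

record_feedback(["navigation", "travel"], installed=True)   # user installed a recommended travel app
record_feedback(["casino"], installed=False)                # user ignored a recommended casino game

candidates = {"city_guide": ["travel"], "slot_game": ["casino"]}
best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # "city_guide"

A production system would combine many more signals (location, browsing behaviour, similar users), but the core loop of observing feedback and updating a preference model is the same.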
Launcher is one of the “Discovery” products of Yandex. Yandex Zen, which is part of Launcher, belongs to the same product category.
Several design elements of Launcher are generated algorithmically. In particular, the colors of app information cards are selected automatically based on the color scheme of the app icons.
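One simple way such a color could be derived, shown purely as an illustrative assumption (the actual selection logic is not publicly documented), is to average the pixels of the icon, for example with the Pillow library in Python:

from PIL import Image

def card_color(icon_path):
    """Return the average RGB color of an icon as a rough basis for a card color."""
    icon = Image.open(icon_path).convert("RGB").resize((32, 32))
    pixels = list(icon.getdata())
    n = len(pixels)
    return tuple(sum(channel) // n for channel in zip(*pixels))

# Example (requires an icon file on disk): print(card_color("icon.png"))  -> e.g. (212, 84, 47)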
The user can select one of the three search engines to use within the Launcher (Yandex, Google or Bing).
Monetization
Launcher is monetized through its built-in recommendation services. Native advertising is displayed inside the Yandex Zen content feed. Most recommendations in the app recommendation service are selected without taking the commercial component into account, but if a user installs one of the commercial recommendations, the corresponding ad network pays a commission to Yandex.
According to data from Q2 2016, the experimental business activities of Yandex (which includes Yandex Launcher and Yandex Zen built into it, along with a number of other company products) brought in 153 million rubles in revenue.
Management
The head of the service is Fyodor Yezhov. Prior to this, Yezhov worked at SPB Software and SPB TV.
Earlier, the project was headed by Dmitry Polishchuk.
Criticism
Yandex Launcher has been criticized for the lack of options for fine-tuning the app. In particular, it is not possible to hide icon names. Launcher has also been criticized for the small variety of wallpapers in the online collection.
References
2015 software
Android (operating system) software
Mobile application launchers
Yandex software
|
4404787
|
https://en.wikipedia.org/wiki/Proprietary%20format
|
Proprietary format
|
A proprietary format is a file format of a company, organization, or individual that contains data that is ordered and stored according to a particular encoding scheme, designed by the company or organization to be secret, such that the decoding and interpretation of this stored data is easily accomplished only with particular software or hardware that the company itself has developed. The specification of the data encoding format is either not released or is covered by non-disclosure agreements. A proprietary format can also be a file format whose encoding is in fact published, but is restricted through licences such that only the company itself or licensees may use it. In contrast, an open format is a file format that is published and free to be used by everybody.
Proprietary formats are typically controlled by a company or organization for its own benefit, and the restriction of their use by others is ensured through patents or trade secrets. The intent is to give the licence holder exclusive control of the technology, to the (current or future) exclusion of others.
Typically such restrictions attempt to prevent reverse engineering, though reverse engineering of file formats for the purposes of interoperability is generally believed to be legal by those who practice it. Legal positions differ according to each country's laws related to, among other things, software patents.
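As a purely hypothetical illustration of what reverse engineering a closed format can involve, the following Python sketch parses an invented binary header whose layout is assumed to have been worked out by inspection; no real vendor format is described, and all field names, sizes and the magic number are made up.

import struct

def read_header(path):
    """Parse the invented 10-byte header: 4-byte magic, 2-byte version, 4-byte record count."""
    with open(path, "rb") as f:
        magic, version, record_count = struct.unpack("<4sHI", f.read(10))
    if magic != b"PRPF":                      # "PRPF" is a made-up magic number
        raise ValueError("not a recognized proprietary file")
    return version, record_count

Interoperable readers for undocumented formats are typically built up from many small deductions like this, which is why they are often incomplete.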
Because control over a format may be exerted in varying ways and in varying degrees, and documentation of a format may deviate in many different ways from the ideal, there is not necessarily a clear black/white distinction between open and proprietary formats. Nor is there any universally recognized "bright line" separating the two. The lists of prominent formats below illustrate this point, distinguishing "open" (i.e. publicly documented) proprietary formats from "closed" (undocumented) proprietary formats and including a number of cases which are classed by some observers as open and by others as proprietary.
Privacy, ownership, risk and freedom
One of the contentious issues surrounding the use of proprietary formats is that of ownership of created content. If the information is stored in a way which the user's software provider tries to keep secret, the user may own the information by virtue of having created it, but has no way to retrieve it except by using a version of the original software which produced the file. Without a standard file format or reverse-engineered converters, users cannot share data with people using competing software. The fact that the user depends on a particular brand of software to retrieve the information stored in a proprietary-format file raises barriers to entry for competing software and may contribute to vendor lock-in.
The issue of risk comes about because proprietary formats are less likely to be publicly documented and therefore less future-proof. If the software firm that owns the rights to a format stops making software which can read it, then those who had used the format in the past may lose all the information in those files. This is particularly common with formats that were not widely adopted. However, even ubiquitous formats such as Microsoft Word cannot be fully reverse-engineered.
Prominent proprietary formats
Open Proprietary Formats
AAC – an open standard, but owned by Via Licensing
GEDCOM – an open specification for genealogy data exchange, owned by The Church of Jesus Christ of Latter-day Saints
MP3 – an open standard, but subject to patents in some countries
Closed Proprietary Formats
CDR – (non-documented) CorelDraw's native format primarily used for vector graphic drawings
DWG – (non-documented) AutoCAD drawing
PSD – (documented) Adobe Photoshop's native image format
RAR – (partially documented) archive and compression file format owned by Alexander L. Roshal
WMA – a closed format, owned by Microsoft
Controversial
RTF – a formatted text format (proprietary, published specification, defined and maintained only by Microsoft)
SWF – Adobe Flash format (formerly closed/undocumented, now partially or completely open)
XFA – Adobe XML Forms Architecture, used in PDF files (published specification by Adobe, required but not documented in the PDF ISO 32000-1 standard; controlled and maintained only by Adobe)
ZIP – a base version of this data compression and archive file format is in the public domain, but newer versions have some patented features
Formerly proprietary
GIF – CompuServe's Graphics Interchange Format (the specification's royalty-free licence requires implementers to give CompuServe credit as owner of the format; separately, patents covering certain aspects of the specification were held by Unisys until they expired in 2004)
PDF – Adobe's Portable Document Format (open since 2008 - ISO 32000-1), but there are still some technologies indispensable for the application of ISO 32000-1 that are defined only by Adobe and remain proprietary (e.g. Adobe XML Forms Architecture, Adobe JavaScript).
DOC – Microsoft Word Document (formerly closed/undocumented, now Microsoft Open Specification Promise)
XLS – Microsoft Excel spreadsheet file format (formerly closed/undocumented, now Microsoft Open Specification Promise)
PPT – Microsoft PowerPoint Presentation file format (formerly closed/undocumented, now Microsoft Open Specification Promise)
See also
Open format
De facto standard
Dominant design
References
Computer file formats
|
699052
|
https://en.wikipedia.org/wiki/Ethics%20of%20technology
|
Ethics of technology
|
The ethics of technology is a sub-field of ethics addressing the ethical questions specific to the Technology Age, the transitional shift in society wherein personal computers and subsequent devices provide for the quick and easy transfer of information. The topic has evolved as technologies have developed. Technology poses an ethical dilemma for producers and consumers alike. The subject of technoethics, or the ethical implications of technology, has been studied by different philosophers such as Hans Jonas and Mario Bunge.
Technology ethics is the application of ethical thinking to the growing concerns of technology as new technologies continue to rise in prominence. Ethics of technology is now an important subject because technologies give people more power to act than before. The development of artificial intelligence and the rise of social media raise the questions of how to behave on these platforms and how far AI should be allowed to go. Recent issues such as Facebook data leaks and the circulation of fake news highlight the downside of social media when it is in the wrong hands. As technology continues to develop and has the power to alter people's daily lives, questions surrounding what is ethical or not will remain.
Technoethics
Technoethics (TE) is an interdisciplinary research area that draws on theories and methods from multiple knowledge domains (such as communications, social sciences, information studies, technology studies, applied ethics, and philosophy) to provide insights on ethical dimensions of technological systems and practices for advancing a technological society.
Technoethics views technology and ethics as socially embedded enterprises and focuses on discovering the ethical uses for technology, protecting against the misuse of technology, and devising common principles to guide new advances in technological development and application to benefit society. Typically, scholars in technoethics have a tendency to conceptualize technology and ethics as interconnected and embedded in life and society. Technoethics denotes a broad range of ethical issues revolving around technology – from specific areas of focus affecting professionals working with technology to broader social, ethical, and legal issues concerning the role of technology in society and everyday life.
Technoethical perspectives are constantly in transition as technology advances in areas unseen by creators and as users change the intended uses of new technologies. Humans cannot be separated from these technologies because they are an inherent part of consciousness. The short-term and longer-term ethical considerations for technologies engage the creator, producer, user, and governments.
With the increasing impact emerging technologies have on society, assessing their ethical and social issues becomes ever more important. While such technologies provide opportunities for novel applications and the potential to transform society on a global scale, their rise is accompanied by new ethical challenges and problems that must be considered. This becomes more difficult as the pace of technological progress increases and as technology's impact on society seems to outrun human control. The concept of technoethics focuses on expanding the knowledge of existing research in the areas of technology and ethics in order to provide a holistic construct for the different aspects and subdisciplines of ethics related to technology-related human activity like economics, politics, globalization, and scientific research. It is also concerned with the rights and responsibilities that designers and developers have regarding the outcomes of the respective technology. This is of particular importance with the emergence of algorithmic technology capable of making decisions autonomously and the related issues of developer or data bias influencing these decisions. Working against the manifestation of these biases requires carefully evaluating the balance between human and technological accountability for ethical failures, and has shifted the view of technology from a merely positive tool towards a perception of technology as inherently neutral. Technoethics thus has to focus on both sides of the human-technology equation when confronted with upcoming technology innovations and applications.
With technology continuing to advance over time, there are new Technoethical issues that come into play. For instance, discussions on genetically modified organisms (GMOs) have brought about a huge concern for technology, ethics, and safety. There is also a huge question of whether or not artificial intelligence (AI) should be trusted and relied upon. These are just some examples of how the advancements in technology will affect the ethical values of humans in the future.
Technoethics finds application in various areas of technology. The following key areas are mentioned in the literature:
Computer ethics: Focuses on the use of technology in areas including visual technology, artificial intelligence, and robotics.
Engineering ethics: Dealing with professional standards of engineers and their moral responsibilities to the public.
Internet ethics and cyberethics: Concerning the guarding against unethical Internet activity.
Media and communication technoethics: Concerning ethical issues and responsibilities when using mass media and communication technology.
Professional technoethics: Concerning all ethical considerations that revolve around the role of technology within professional conduct like in engineering, journalism, or medicine.
Educational technoethics: Concerning the ethical issues and outcomes associated with using technology for educational aims.
Biotech ethics: Linked to advances in bioethics and medical ethics like considerations arising in cloning, human genetic engineering, and stem cell research.
Environmental technoethics: Concerning technological innovations that impact the environment and life.
Nanoethics: Concerning ethical and social issues associated with developments in the alteration of matter at the level of atoms and molecules in various disciplines including computer science, engineering, and biology.
Military technoethics: Concerning ethical issues associated with technology use in military action.
Definitions
Ethics address the issues of what is 'right', what is 'just', and what is 'fair'. Ethics describe moral principles influencing conduct; accordingly, the study of ethics focuses on the actions and values of people in society (what people do and how they believe they should act in the world).
Technology is the branch of knowledge that deals with the creation and use of technical means and their interrelation with life, society, and the environment; it may draw upon a variety of fields, including industrial arts, engineering, applied science, and pure science. Technology "is core to human development and a key focus for understanding human life, society and human consciousness."
Using theories and methods from multiple domains, technoethics provides insights on ethical aspects of technological systems and practices, examines technology-related social policies and interventions, and provides guidelines for how to ethically use new advancements in technology. Technoethics provides a systems theory and methodology to guide a variety of separate areas of inquiry into human-technological activity and ethics. Moreover, the field unites both technocentric and bio-centric philosophies, providing "conceptual grounding to clarify the role of technology to those affected by it and to help guide ethical problem solving and decision making in areas of activity that rely on technology." As a bio-techno-centric field, technoethics "has a relational orientation to both technology and human activity"; it provides "a system of ethical reference that justifies that profound dimension of technology as a central element in the attainment of a 'finalized' perfection of man."
Fundamental Problems
Technology is often regarded as merely a tool, a device or gadget. On this view, technology itself cannot possess a moral or ethical quality; the toolmaker or end user is the one who decides the morality or ethicality behind a device or gadget. "Ethics of technology" refers to two basic subdivisions:
The ethics involved in the development of new technology—whether it is always, never, or contextually right or wrong to invent and implement a technological innovation.
The ethical questions that are exacerbated by the ways in which technology extends or curtails the power of individuals—how standard ethical questions are changed by the new powers.
In the former case, ethics of such things as computer security and computer viruses asks whether the very act of innovation is an ethically right or wrong act. Similarly, does a scientist have an ethical obligation to produce or fail to produce a nuclear weapon? What are the ethical questions surrounding the production of technologies that waste or conserve energy and resources? What are the ethical questions surrounding the production of new manufacturing processes that might inhibit employment, or might inflict suffering in the third world?
In the latter case, the ethics of technology quickly break down into the ethics of various human endeavors as they are altered by new technologies. For example, bioethics is now largely consumed with questions that have been exacerbated by the new life-preserving technologies, new cloning technologies, and new technologies for implantation. In law, the right of privacy is being continually attenuated by the emergence of new forms of surveillance and anonymity. The old ethical questions of privacy and free speech are given new shape and urgency in an Internet age. Tracing devices such as RFID, biometric analysis and identification, and genetic screening all take old ethical questions and amplify their significance. The fundamental problem is that as society produces and advances technology that we use in all areas of our lives, from work and school to medicine and surveillance, we receive great benefits, but those benefits carry underlying costs. As technology evolves further, some technological innovations can be seen by some as inhumane, while the same innovations can be seen by others as creative, life-changing, and innovative.
History of Technoethics
Though the ethical consequences of new technologies have existed since Socrates' attack on writing in Plato's dialogue Phaedrus, the formal field of technoethics has existed for only a few decades. The first traces of TE can be seen in Dewey and Peirce's pragmatism. With the advent of the industrial revolution, it was easy to see that technological advances were going to influence human activity. This is why they put emphasis on the responsible use of technology.
The term "technoethics" was coined in 1977 by the philosopher Mario Bunge to describe the responsibilities of technologists and scientists to develop ethics as a branch of technology. Bunge argued that the current state of technological progress was guided by ungrounded practices based on limited empirical evidence and trial-and-error learning. He recognized that "the technologist must be held not only technically but also morally responsible for whatever he designs or executes: not only should his artifacts be optimally efficient but, far from being harmful, they should be beneficial, and not only in the short run but also in the long term." He recognized a pressing need in society to create a new field called 'Technoethics' to discover rationally grounded rules for guiding science and technological progress.
With the spurt in technological advances came technological inquiry. Societal views of technology were changing; people were becoming more critical of the developments that were occurring, and scholars were emphasizing the need to understand and study the innovations more deeply. Associations were uniting scholars from different disciplines to study the various aspects of technology, the main disciplines being philosophy, the social sciences, and science and technology studies (STS). Though many technology disciplines were already concerned with ethics, each was separated from the others, despite the potential for the information to intertwine and reinforce itself. As technologies became increasingly developed in each discipline, their ethical implications paralleled their development and became increasingly complex. Each branch eventually became united under the term technoethics, so that all areas of technology could be studied and researched based on existing, real-world examples and a variety of knowledge, rather than just discipline-specific knowledge.
Technology and Ethics
Ethics theories
Technoethics involves the ethical aspects of technology within a society that is shaped by technology. This brings up a series of social and ethical questions regarding new technological advancements and new boundary-crossing opportunities. Before moving forward and attempting to address any ethical questions and concerns, it is important to review the major ethical theories to develop a foundational perspective:
Utilitarianism (Bentham) is an ethical theory which attempts to maximize happiness and reduce suffering for the greatest number of people. Utilitarianism focuses on results and consequences rather than rules.
Duty ethics (Kant) notes the obligations that one has to society and follows society's universal rules. It focuses on the rightness of actions instead of the consequences, focusing on what an individual should do.
Virtue ethics is another main perspective in normative ethics. It highlights the role of an individual's character and virtues in determining or evaluating ethical behaviour in society. Aristotle, the philosopher associated with this theory, believed that by practicing and honing honest and generous behaviour, people will make the right choice when faced with an ethical decision.
Relationship ethics states that care and consideration are both derived from human communication. Therefore, ethical communication is the core substance to maintain healthy relationships.
Historical framing of technology – four main periods
Greek civilization defined technology as techné. Techné is "the set of principles, or rational method, involved in the production of an object or the accomplishment of an end; the knowledge of such principles or method; art." This conceptualization of technology was used during the early Greek and Roman period to denote the mechanical arts, construction, and other efforts to create, in Cicero's words, a "second nature" within the natural world.
The modern conceptualization of technology as invention materialized in the 17th century in Bacon's futuristic vision of a perfect society governed by engineers and scientists in Salomon's House, which raised the importance of technology in society.
The German term "Technik" was used in the 19th and 20th centuries. Technik is the totality of processes, machines, tools and systems employed in the practical arts and engineering. Weber popularized the term when it came to be used in broader fields. Mumford saw technics as underlying a civilization and distinguished three phases: the eotechnic (before 1750), the paleotechnic (1750-1890) and the neotechnic (from 1890). He placed technics at the center of social life, in close connection to social progress and societal change. Mumford says that a machine cannot be divorced from its larger social pattern, for it is the pattern that gives it meaning and purpose.
Rapid advances in technology provoked a negative reaction from scholars who saw technology as a controlling force in society with the potential to destroy how people live (Technological Determinism). Heidegger warned people that technology was dangerous in that it exerted control over people through its mediating effects, thus limiting authenticity of experience in the world that defines life and gives life meaning. It is an intimate part of the human condition, deeply entrenched in all human history, society and mind.
Significant technoethical developments in society
Many advancements within the past decades have added to the field of technoethics. There are multiple concrete examples that have illustrated the need to consider ethical dilemmas in relation to technological innovations. Beginning in the 1940s, influenced by the British eugenics movement, the Nazis conducted "racial hygiene" experiments, causing widespread, global anti-eugenic sentiment. In the 1950s the first satellite, Sputnik 1, orbited the Earth, the Obninsk Nuclear Power Plant opened as the first nuclear power plant, and American nuclear tests took place. The 1960s brought about the first manned Moon landing, the creation of ARPANET (which later led to the Internet), the first completed heart transplantation, and the launch of the Telstar communications satellite. The 1970s, 80s, 90s, 2000s and 2010s also brought multiple developments.
Technological consciousness
Technological consciousness is the relationship between humans and technology. Technology is seen as an integral component of human consciousness and development. Technology, consciousness and society are intertwined in a relational process of creation that is key to human evolution. Technology is rooted in the human mind, and is made manifest in the world in the form of new understandings and artifacts. The process of technological consciousness frames the inquiry into ethical responsibility concerning technology by grounding technology in human life.
The structure of technological consciousness is relational but also situational, organizational, aspectual and integrative. Technological consciousness situates new understandings by creating a context of time and space. As well, technological consciousness organizes disjointed sequences of experience under a sense of unity that allows for a continuity of experience. The aspectual component of technological consciousness recognizes that individuals can only be conscious of aspects of an experience, not the whole thing. For this reason, technology manifests itself in processes that can be shared with others. The integrative characteristics of technological consciousness are assimilation, substitution and conversation. Assimilation allows for unfamiliar experiences to be integrated with familiar ones. Substitution is a metaphorical process allowing for complex experiences to be codified and shared with others — for example, language. Conversation is the sense of an observer within an individual's consciousness, providing stability and a standpoint from which to interact with the process.
Misunderstandings of consciousness and technology
According to Rocci Luppicini, the common misunderstandings about consciousness and technology are listed as follows. The first misunderstanding is that consciousness is only in the head when according to Luppicini, consciousness is not only in the head meaning that "[c]onsciousness is responsible for the creation of new conscious relations wherever imagined, be it in the head, on the street or in the past." The second misunderstanding is technology is not a part of consciousness. Technology is a part of consciousness as "the conceptualization of technology has gone through drastic changes." The third misunderstanding is that technology controls society and consciousness, by which Luppicini means "that technology is rooted in consciousness as an integral part of mental life for everyone. This understanding will most likely alter how both patients and psychologists deal with the trials and tribunes of living with technology." The last misunderstanding is society controls technology and consciousness. "…(other) accounts fail to acknowledge the complex relational nature of technology as an operation within mind and society. This realization shifts the focus on technology to its origins within the human mind as explained through the theory of technological consciousness."
Consciousness (C) is only in the head: C is responsible for the creation of new conscious relations wherever imagined
Technology (T) is not part of C: Humans cannot be separated from technology
T controls society and C: Technology cannot control the mind
Society controls T and C: such accounts fail to consider how technology operates within both mind and society, and how society shapes which technologies get developed
Types of Technology Ethics
Technology ethics are principles that can be used to govern technology including factors like risk management and individual rights. They are basically used to understand and resolve moral issues that have to do with the development and application of technology of different types.
There are many types of technology ethics:
Access rights: access to empowering technology as a right
Accountability: decisions made for who is responsible when considering success or harm in technological advancements
Digital Rights: protecting intellectual property rights and privacy rights
Environment: how to produce technology that could harm the environment
Existential Risk: technologies that represent a threat to the global quality of life pertaining to extinction
Freedom: technology that is used to control a society raising questions related to freedom and independence
Health & Safety: health and safety risks that are increased and imposed by technologies
Human Enhancement: human genetic engineering and human-machine integration
Human Judgement: when can decisions be made by automation and when do they require a reasonable human?
Over-Automation: when does automation decrease quality of life and start affecting society?
Precautionary Principle: who decides whether developing a new technology is safe for the world?
Privacy: protection of privacy rights
Security: Is due diligence required to ensure information security?
Self Replicating Technology: should self replicating be the norm?
Technology Transparency: clearly explaining how a technology works and what its intentions are
Terms of Service: ethics related to legal agreements
Ethical challenges
Ethical challenges arise in many different situations:
Human knowledge processes
Workplace discrimination
Strained work-life balance in technologically enhanced work environments: Many people find that simply having the technology that allows one to do work while at home increases stress levels. In one study, 70% of respondents said that because of this technology, work has crept into their personal lives.
Digital divide: Inequalities in information access for parts of the population
Unequal opportunities for scientific and technological development
Norris says that access to information and knowledge resources within a knowledge society tends to favour the economically privileged, who have greater access to the technological tools needed to reach information and knowledge resources disseminated online, and that it favours the privatization of knowledge
Inequality in terms of how scientific and technological knowledge is developed around the globe. Developing countries do not have the same opportunities as developed countries to invest in costly large-scale research and expensive research facilities and instrumentation
Organizational responsibility and accountability issues
Intellectual property ownership issues
Information overload: Information processing theory asserts that working memory has a limited capacity; too much information can lead to cognitive overload, resulting in loss of information from short-term memory
The knowledge society is intertwined with changing technology requiring new skills of its workforce. Cutler says that there is a perception that older workers lack experience with new technology and that retraining programs may be less effective and more expensive for older workers. Cascio says that there is a growth of virtual organizations. Saetre & Sornes say that the blurring of traditional time and space boundaries has also led, in many cases, to the blurring of work and personal life
The negative impacts that many scientific and technological innovations have on humans and the environment have led to some skepticism about, and resistance to, increasing dependence on technology within the knowledge society. Doucet calls for city empowerment, so that cities have the courage and foresight to make decisions that are acceptable to their inhabitants rather than succumb to global consumer capitalism and the forces of international corporations on national and local governments
Scientific and technological innovations that have transformed organizational life within a global economy have also supplanted human autonomy and control in work within a technologically oriented workplace
The persuasive potential of technology raises the question of "how sensitive ... designers and programmers [should] be to the ethics of the persuasive technology they design." Technoethics can be used to determine the level of ethical responsibility that should be associated with outcomes of the use of technology, whether intended or unintended
Rapidly changing organizational life and the history of unethical business practices have given rise to public debates concerning organizational responsibility and trust. The advent of virtual organizations and telework has bolstered ethical problems by providing more opportunities for fraudulent behaviour and the production of misinformation. Concerted efforts are required to uphold ethical values in advancing new knowledge and tools within societal relations which do not exclude people or limit liberties of some people at the expense of others
Artificial Intelligence: Artificial intelligence is one of the most discussed ethical challenges. To avoid these ethical challenges, some principles have been proposed: first and foremost, AI should be developed for the common good and benefit of humanity. Secondly, it should operate on principles of intelligibility and fairness. It should not be used to diminish the data rights or privacy of individuals, families, or communities. It is also believed that all citizens should have the right to be educated about artificial intelligence in order to understand it. Finally, the autonomous power to hurt, destroy, or deceive humans should never be vested in artificial intelligence.
Current issues
Copyrights
Digital copyrights are a complicated issue because there are multiple sides to the discussion. There are ethical considerations surrounding the artist, producer, and end user, not to mention the relationships with other countries and the impact on the use of content housed in those countries. In Canada, national laws such as the Copyright Act and the history behind Bill C-32 are just the beginning of the government's attempt to shape the "wild west" of Canadian Internet activities. The ethical considerations behind Internet activities such as peer-to-peer file sharing involve every layer of the discussion: the consumer, artist, producer, music/movie/software industry, national government, and international relations. Overall, technoethics forces a "big picture" approach to all discussions on technology in society. Although time-consuming, this "big picture" approach offers some level of reassurance when considering that any law put in place could drastically alter the way we interact with our technology and thus the direction of work and innovation in the country.
The use of copyrighted material to create new content is a hotly debated topic. The emergence of the musical "mashup" genre has compounded the issue of creative licensing. A moral conflict is created between those who believe that copyright protects any unauthorized use of content, and those who maintain that sampling and mash-ups are acceptable musical styles and, though they use portions of copyrighted material, the result is a new creative piece which is the property of the creator, and not of the original copyright holder. Whether or not the mashup genre should be allowed to use portions of copyrighted material to create new content is one which is currently under debate.
Cybercriminality
Cybercrime can consist of many subcategories and can be referred to as a big umbrella. Cyber theft such as online fraud, identity theft, and digital piracy can be classified as one sector. Another section of cybercrime can include cyber-violence which can be defined as online behavior that can be anywhere from hate speeches, harassment, cyberstalking, to behavior that leads to physical, psychological, or emotional assault against the well-being of an individual. Cyber obscenity is another section when child sexual exploitation materials are involved. Cyber trespass is when there is unauthorized computer system access. Cybercrime can encompass many other sections where technology and computers are used to assist and commit various forms of crimes.
For many years, new technologies have occupied an important place in social, cultural, political, and economic life. Thanks to the democratization of access to information technology and the globalization of networks, the number of exchanges and transactions is constantly growing.
In the article “The Dark Figure of Online Property Crime: Is Cyberspace Hiding a Crime Wave?”, the authors analyze evidence that reveals cyber criminality rates are increasing as typical street crimes gradually decrease. With the increase in cyber criminality, it is imperative to research more information on how to increase cyber security. The issue with increasing cyber security is that the more laws there are to protect people, the more citizens may feel threatened that their freedom is being compromised. One way to avoid making people feel threatened by all the security measures and protocols is to be as clear and straightforward as possible. Gregory Nojeim, in his article “Cybersecurity and Freedom on the Internet”, states: “Transparency in the cybersecurity program will build the confidence and trust that is essential to industry and public support for cybersecurity measures.” It is important to create ethical laws that protect privacy, innovation, and consumers’ freedom.
Many people are exploiting the facilities and anonymity that modern technologies offer in order to commit multiple criminal activities. Cybercrime is one of the fastest growing areas of crime. The problem is that some laws that profess to protect people from those who would do wrong things via digital means also threaten to take away people's freedom.
Privacy vs. security: Full-body airport scanners
Since the introduction of full body X-ray scanners to airports in 2007, many concerns over traveler privacy have arisen. Individuals are asked to step inside a rectangular machine that takes an alternate wavelength image of the person's naked body for the purpose of detecting metal and non-metal objects being carried under the clothes of the traveler. This screening technology comes in two forms, millimeter wave technology (MM-wave technology) or backscatter X-rays (similar to x-rays used by dentists). Full-body scanners were introduced into airports to increase security and improve the quality of screening for objects such as weapons or explosives due to an increase of terrorist attacks involving airplanes occurring in the early 2000s.
Ethical concerns of both travelers and academic groups include fear of humiliation due to the disclosure of anatomic or medical details, exposure to a low level of radiation (in the case of backscatter X-ray technology), violation of modesty and personal privacy, clarity of operating procedures, the use of this technology to discriminate against groups, and potential misuse of this technology for reasons other than detecting concealed objects. Also people with religious beliefs that require them to remain physically covered (arms, legs, face etc.) at all times will be unable and morally opposed to stepping inside of this virtually intrusive scanning technology. The Centre for Society, Science and Citizenship have discussed their ethical concerns including the ones mentioned above and suggest recommendations for the use of this technology in their report titled "Whole Body Imaging at airport checkpoints: the ethical and policy context" (2010).
Privacy and GPS technologies
The discourse around GPS tracking devices and geolocation technologies, and their ethical ramifications for privacy, is growing as the technology becomes more prevalent in society. An editorial in The New York Times Sunday Review on September 22, 2012, focused on the ethical ramifications of a case in which a drug offender was imprisoned after the GPS technology in his cellphone was used to locate his position. Now that most people carry a cellphone on their person, the authorities have the ability to constantly know the location of a large majority of citizens. The ethical discussion can now be framed from a legal perspective. As raised in the editorial, these geolocation devices raise stark questions about citizens' Fourth Amendment protection against unreasonable searches. The reach of this issue is not limited to the United States; it affects other democratic states that uphold similar citizens' rights and freedoms against unreasonable searches.
These geolocation technologies affect not only how citizens interact with their state but also how employees interact with their workplaces. As discussed in the Canadian Broadcasting Corporation article "GPS and privacy", a growing number of employers are installing geolocation technologies in "company vehicles, equipment and cellphones" (Hein, 2007). Both academics and unions find these new powers of employers to be in direct contradiction with civil liberties. This changing relationship between employee and employer, driven by the integration of GPS technology into popular society, is prompting a larger ethical discussion about what privacy levels are appropriate. This discussion will only become more prevalent as the technology becomes more popular.
Genetically modified organisms
Genetically modified foods have become quite common in developed countries around the world, boasting greater yields, higher nutritional value, and greater resistance to pests, but there are still many ethical concerns regarding their use. Even commonplace genetically modified crops like corn raise questions of the ecological consequences of unintended cross pollination, potential horizontal gene transfer, and other unforeseen health concerns for humans and animals.
Trademarked organisms like the "Glofish" are a relatively new occurrence. These zebrafish, genetically modified to appear in several fluorescent colours and sold as pets in the United States, could have unforeseen effects on freshwater environments were they ever to breed in the wild.
Providing they receive approval from the U.S. Food and Drug Administration (FDA), another new type of fish may be arriving soon. The "AquAdvantage salmon", engineered to reach maturity within roughly 18 months (as opposed to three years in the wild), could help meet growing global demand. There are health and environmental concerns associated with the introduction of any new GMO, but more importantly this scenario highlights the potential economic impact a new product may have. The FDA does perform an economic impact analysis to weigh, for example, the consequences these new genetically modified fish may have on the traditional salmon fishing industry against the long-term gain of a cheaper, more plentiful source of salmon. These technoethical assessments, which regulatory organizations like the FDA are increasingly faced with worldwide, are vitally important in determining how GMOs, with all of their potential beneficial and harmful effects, will be handled moving forward.
Pregnancy screening technology
For over 40 years, newborn screening has been a triumph of the 20th-century public health system. Through this technology, millions of parents are given the opportunity to screen for and test a number of disorders, sparing their children death or complications such as mental retardation. However, this technology is growing at a fast pace, outstripping researchers' and practitioners' ability to fully understand how to treat diseases and provide families in need with the resources to cope.
A version of pre-natal testing, called tandem mass spectrometry, is a procedure that "measures levels and patterns of numerous metabolites in a single drop of blood, which are then used to identify potential diseases. Using this same drop of blood, tandem mass spectrometry enables the detection of at least four times the number of disorders than was possible with previous technologies." This allows for a cost-effective and fast method of pre-natal testing.
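As an illustration of the general idea of turning measured metabolite levels into screening flags, the following Python sketch compares values against hypothetical reference ranges. It is a toy example only; the ranges, units and logic are invented and do not represent any clinical algorithm.

reference_ranges = {"phenylalanine": (20, 120), "tyrosine": (20, 150)}  # invented values and units

def flag_out_of_range(measurements):
    """Return the metabolites whose measured value falls outside its reference range."""
    flags = []
    for metabolite, value in measurements.items():
        low, high = reference_ranges[metabolite]
        if not low <= value <= high:
            flags.append(metabolite)
    return flags

print(flag_out_of_range({"phenylalanine": 300, "tyrosine": 90}))  # ['phenylalanine']

Real screening pipelines look at patterns across many metabolites and require clinical follow-up before any diagnosis is made, which is part of what drives the concerns described below.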
However, critics of tandem mass spectrometry and technologies like it are concerned about the adverse consequences of expanding newborn screen technology and the lack of appropriate research and infrastructure needed to provide optimum medical services to patients. Further concerns include "diagnostic odysseys", a situation in which the patient aimlessly continues to search for diagnoses where none exists.
Among other consequences, this technology raises the issue of whether individuals other than newborn will benefit from newborn screening practices. A reconceptualization of the purpose of this screening will have far reaching economic, health and legal impact. This discussion is only just beginning and requires informed citizenry to reach legal if not moral consensus on how far we as a society are comfortable with taking this technology.
Citizen journalism
Citizen journalism is a concept describing citizens who wish to act as a professional journalist or media person by "collecting, reporting, analyzing, and disseminating news and information" According to Jay Rosen, citizen journalists are "the people formerly known as the audience," who "were on the receiving end of a media system that ran one way, in a broadcasting pattern, with high entry fees and a few firms competing to speak very loudly while the rest of the population listened in isolation from one another— and who today are not in a situation like that at all. ... The people formerly known as the audience are simply the public made realer, less fictional, more able, less predictable".
The internet has provided society with a modern and accessible public space. Due to the openness of the internet, there are discernible effects on the traditional profession of journalism. Although the concept of citizen journalism is a seasoned one, "the presence of online citizen journalism content in the marketplace may add to the diversity of information that citizens have access to when making decisions related to the betterment of their community or their life". The emergence of online citizen journalism is fueled by the growing use of social media websites to share information about current events and issues locally, nationally and internationally.
The open and instantaneous nature of the internet affects the criteria of information quality on the web. A journalistic code of ethics is not instilled for those who are practicing citizen journalism. Journalists, whether professional or citizen, have needed to adapt to new priorities of current audiences: accessibility, quantity of information, quick delivery and aesthetic appeal. Thus, technology has affected the ethical code of the profession of journalism with the popular free and instant sharing qualities of the internet. Professional journalists have had to adapt to these new practices to ensure that truthful and quality reporting is being distributed. The concept can be seen as a great advancement in how society communicates freely and openly or can be seen as contributing to the decay of traditional journalistic practices and codes of ethics.
Other issues to consider:
Privacy concerns: location services on mobile devices that reveal a person's whereabouts when the feature is turned on, social media, online banking, new capabilities of cellular devices, Wi-Fi, etc.
New music technology: electronic music has become more common as the technology to create it has developed, along with more advanced recording technology
Recent developments
Despite the amassing body of scholarly work related to technoethics beginning in the 1970s, only recently has it become institutionalized and recognized as an important interdisciplinary research area and field of study. In 1998, the Epson Foundation founded the Instituto de Tecnoética in Spain under the direction of Josep Esquirol. This institute has actively promoted technoethical scholarship through awards, conferences, and publications. This helped encourage scholarly work for a largely European audience. The major driver for the emergence of technoethics can be attributed to the publication of major reference works available in English and circulated globally. The "Encyclopedia of Science, Technology, and Ethics" included a section on technoethics which helped bring it into mainstream philosophy. This helped to raise further interest leading to the publication of the first reference volume in the English language dedicated to the emerging field of Technoethics. The two volume Handbook of Research on Technoethics explores the complex connections between ethics and the rise of new technologies (e.g., life-preserving technologies, stem cell research, cloning technologies, new forms of surveillance and anonymity, computer networks, Internet advancement, etc.) This recent major collection provides the first comprehensive examination of technoethics and its various branches from over 50 scholars around the globe. The emergence of technoethics can be juxtaposed with a number of other innovative interdisciplinary areas of scholarship which have surfaced in recent years such as technoscience and technocriticism.
Technology and Ethics in the Music Industry
Developments in technology have created both positive and negative changes for the music industry. A main concern is piracy and illegal downloading: through the internet, a great deal of music (as well as TV shows and movies) has become easily accessible to download and upload for free, which creates new challenges for artists, producers, and copyright law. On the positive side, these advances have produced a whole new genre of music: computers and synthesizers (computerized/electronic pianos) are being used to create electronic music, which is rapidly becoming more common and more widely listened to. These advances have allowed the industry to try new things and make new explorations.
Because the internet is not controlled by a centralized power, users can maintain anonymity and find loopholes to avoid consequences for using peer-to-peer technology. Peer-to-peer networks allow users to connect to a computer network and freely trade songs. Many services, such as Napster, took advantage of this, because protecting intellectual property is close to impossible on the internet. Digital and downloadable music has become a severe threat to major record companies, and the associated technologies have greatly changed the power dynamics among major record companies, music consumers, and artists. This change in power dynamics has not only provided more opportunities for independent music labels but has also reduced the cost of music.
The digital environment in the music industry is always evolving. “The industry is beginning to work at adapting to the digital environment and downturns in a business performance like by online distribution and sales; harnessing visibility events for sales momentum; new capabilities for artist management in the digital age and by leveraging online communities to influence product development, among others". These new capabilities and new developments need strong intellectual property regulations to protect artists.
Technology is a pillar in the music industry; therefore, it is imperative to have strong technology ethics. Copyright protections and legislation help artists trademark their music and protect their intellectual property. Protecting intellectual property in the music industry becomes tricky when music firms are in the process of incorporating new technologies and methodologies, which forces firms to be innovative and update the industry standards.
Technology and Ethics During the Coronavirus Pandemic
As of April 20, 2020, there were over 43 contact tracing apps available globally. Countries were in the process of creating their own methods of digitally tracing coronavirus status (symptoms, confirmed infections, exposure), and Apple and Google were working together on a shared solution to help with contact tracing around the world. In a global pandemic with no clear end, the restriction of some fundamental rights and freedoms may be ethically justifiable; it may even be unethical not to use such tracing solutions to slow the spread. The European Convention on Human Rights, the United Nations International Covenant on Civil and Political Rights, and the United Nations Siracusa Principles all indicate when it is ethical to restrict the rights of the population to prevent the spread of infectious disease: the circumstances must be time-bound and must meet standards of necessity, proportionality, and scientific validity. This means evaluating whether the gravity of the situation justifies the potential negative impact; whether the evidence shows that the technology will work, is timely, will be adopted by enough people, and yields accurate data and insights; and whether the technology will only be temporary. These documents also provide guidelines on how to develop and design such technologies ethically, which matters both for effectiveness and for security.
The development of technology has enhanced the ability to obtain, track, and share data. Technology has been mobilized by governments around the world to combat Covid-19, which has brought attention to several ethical issues. Governments have implemented technologies such as smartphone metadata and Bluetooth applications to trace contacts and notify the public of important information. There are implications for privacy, as technologies such as metadata have the capacity to track every movement of an individual. Because of the pandemic, contact tracing and other tracking apps have been implemented globally, and countries have developed various methods of digitally tracing the coronavirus, covering outbreak origins, symptoms, confirmed positives, and those potentially exposed. Governments have tried to combine technologies that identify individuals with surveillance approaches that still have a low impact on individual privacy. In 2020, the Australian government released a Bluetooth-connected app that lets phones communicate through Bluetooth rather than metadata or GPS, which can have a bigger impact on individual privacy. The technology records individuals who have been in close proximity by connecting their phones and retaining the data for a limited period before it is deleted. The app does not track individuals' locations, but it can still determine whether they have had close contact with someone who tested positive or was exposed.
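A minimal sketch of the kind of on-device, time-limited proximity log such an app keeps is shown below. It is an illustration only, not the actual COVIDSafe implementation; the 21-day retention window, the class and method names, and the identifiers are all assumptions.

```python
# Illustrative on-device proximity log: stores only anonymous rolling IDs seen
# over Bluetooth, never locations, and deletes entries after a fixed window.
import time
from collections import deque

RETENTION_SECONDS = 21 * 24 * 3600     # assumed retention window

class ProximityLog:
    def __init__(self):
        self._encounters = deque()      # (timestamp, anonymous rolling ID)

    def record(self, rolling_id, timestamp=None):
        """Store an anonymous identifier received from a nearby phone."""
        self._encounters.append((timestamp or time.time(), rolling_id))
        self._expire()

    def _expire(self, now=None):
        """Drop anything older than the retention window."""
        cutoff = (now or time.time()) - RETENTION_SECONDS
        while self._encounters and self._encounters[0][0] < cutoff:
            self._encounters.popleft()

    def matches(self, reported_positive_ids):
        """Check locally whether any stored encounter matches published IDs."""
        self._expire()
        return [rid for _, rid in self._encounters if rid in reported_positive_ids]

# Example: record two encounters, then check against a published positive ID.
log = ProximityLog()
log.record("anon-7f3a")
log.record("anon-91bc")
print(log.matches({"anon-91bc"}))       # ['anon-91bc']
```

The design point the sketch illustrates is that matching happens locally on the handset and nothing stored identifies a location, only anonymous encounters that expire automatically.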
On the other hand, some countries such as South Korea used metadata technology to closely surveil their citizens. Metadata can provide a detailed description of an individual's movements, because a phone stays regularly in contact with local cellular towers to maintain reception. In South Korea, the government used individuals' metadata to convey public health messages: anonymized information about the locations of individuals who had tested positive for Covid-19 was released to the public. Similarly, in Israel, the government approved emergency regulations that allowed authorities to use a database tracking the movements of individuals who had tested positive for Covid-19.
The rise of surveillance technologies used by governments to track individuals raises many ethical concerns. As lockdowns and Covid protocols continue, the focus on protecting public health can conflict sharply with individual autonomy, even when implementing certain technologies and protocols may be necessary.
Even though these frameworks can deem contact tracing ethical, the apps come with a price: they collect sensitive personal data, including health data. If that data is not handled and processed correctly, it risks violating HIPAA and exposing personally identifiable information (PII). Even if these apps are only used temporarily, they store lasting records of health, movements, and social interactions. Beyond the ethical implications of personal information being stored, the accessibility and digital literacy of users must also be considered: not everyone has access to a smartphone or even a cell phone, so smartphone-only applications will miss a large portion of coronavirus data.
While it may be necessary to use technology to slow the spread of the coronavirus, governments need to design and deploy it in a way that does not breach public trust. There is a fine line between saving lives and harming the fundamental rights and freedoms of individuals.
Future developments
Technoethics is a promising yet still-evolving field. The study of e-technology in workplace environments is an evolving trend within it. With technology constantly evolving and innovations appearing daily, technoethics looks to be a rather promising guiding framework for the ethical assessment of new technologies. Some of the questions regarding technoethics and the workplace environment that have yet to be examined and treated are listed below:
Are organizational countermeasures unnecessary because they invade employee privacy?
Are surveillance cameras and computer monitoring devices invasive methods that can have ethical repercussions?
Should organizations have the right and power to impose consequences?
Artificial Intelligence
Artificial intelligence covers a broad range of technologies for building smart machines and processing data so that tasks normally completed by humans can be performed by machines. AI may prove to be beneficial to human life, but it can also quickly become pervasive and dangerous. Changes driven by AI are difficult to anticipate and understand, such as employers spying on workers, facial recognition, and deepfakes. The algorithms used to implement the technology may also prove to be biased, which can have detrimental effects on individuals. For example, facial recognition systems may perform worse for some ethnic and racial groups than for others. These challenges have social, racial, ethical, and economic implications.
Deepfakes
A deepfake is a form of media in which a person in an existing image or video is replaced with, or altered to resemble, someone else. Alterations may include acting out fake content, false advertisement, hoaxes, and financial fraud. Deepfakes typically rely on machine learning or artificial intelligence. They pose an ethical dilemma because of how accessible they are and because of the damage they can do to viewers' trust in what they see: deepfakes challenge the trustworthiness of the visual experience and can create negative consequences. They contribute to the problem of “fake news” by enabling the more widespread fabrication or manipulation of media that may be deliberately used for disinformation. There are four categories of deepfakes: deepfake pornography, deepfake political campaigns, deepfakes for commercial use, and creative deepfakes. Deepfakes can cause harms such as deception, intimidation, and reputational damage. Deception leads viewers to accept as real footage something that never existed, and the content may be detrimental depending on what it depicts, for example fraudulent voter information, false candidate information, or financial fraud. Intimidation can occur when a targeted audience is threatened in order to generate fear; an example is deepfake revenge pornography, which also ties into reputational harm.
The accessibility of deepfakes also raises ethical dilemmas, as they can be created through apps like FakeApp, Zao, and Impressions. Use of these applications can lead to legal action: in 2018 the Malicious Deep Fake Prohibition Act was introduced to protect those who may be harmed by deepfakes, and offences can result in prosecution for harassment or in prison sentences. Although legal action against deepfakes is possible, it becomes increasingly difficult because many parties are involved in their creation, such as the software developer, the platform that amplifies the content, and the user of the software. Because of these many different components, it can be difficult to prosecute individuals for deepfakes.
UNESCO
UNESCO is a specialized intergovernmental agency of the United Nations, focusing on the promotion of education, culture, the social and natural sciences, and communication and information.
In the future, the use of principles as expressed in the UNESCO Universal Declaration on Bioethics and Human Rights (2005) will also be analyzed to broaden the description of bioethical reasoning (Adell & Luppicini, 2009).
User data
In a digital world, much of users' personal lives is stored on devices such as computers and smartphones, and users trust the companies that hold this data to take care of it. A topic of discussion regarding the ethics of technology is exactly how much data these companies really need and what they are doing with it. Another major cause for concern is the security of personal data and privacy, whether it is leaked intentionally or not.
User data has been one of the main topics regarding ethics, as companies and government entities increasingly have access to billions of users' information. Why do companies need so much data about their users, and are users aware that their data is being tracked? These questions have arisen over the years amid concern about how much companies actually know. Some websites and apps now ask users whether they may track user activity across different apps, with the option to decline; previously, most companies did not ask or notify users that their app activity would be tracked. Companies have also faced an increasing number of data hacks in which users' data such as credit cards, social security numbers, phone numbers, and addresses have been leaked. Users of social networks such as Snapchat and Facebook have received phone calls from scammers after recent data hacks released users' phone numbers. The most recent breach to affect Facebook leaked the data of over 533 million users from 106 countries, including 32 million users in the U.S. alone. The leaked information included phone numbers, Facebook IDs, full names, locations, birthdates, bios, and email addresses. Hackers and web scrapers have been selling Facebook user data on hacker forums, where information on 1 million users can go for $5,000.
Large companies share their users' data constantly. In 2018, the U.S. government cracked down on Facebook for selling user data to other companies after the company had declared that the data in question was inaccessible. One such case was the Cambridge Analytica scandal, in which Facebook user data was provided to the firm without the consent of the users whose data was being accessed. The data was then used for several political agendas, such as the Brexit vote and the U.S. presidential election of 2016. In an interview with CBS' 60 Minutes, Trump campaign manager Brad Parscale described in detail how he used data taken from different social media websites to create ads that were both visually appealing to potential voters and targeted at the issues they felt strongest about.
Besides swinging political races, the theft of people's data can result in serious consequences on an individual level. In some cases, hackers can breach websites or businesses that have identifying information about a person, such as their credit card number, cell phone number, and address, and upload it to the dark web for sale, if they decide not to use it for their own deviant purposes.
Drones
In the book Society and Technological Change, 8th Edition, by Rudi Volti, the author comments on unmanned aerial vehicles, also known as UAVs or drones. Once used primarily as military technology, these are becoming increasingly accessible tools to the common person for hobbies like photography. In the author's belief, this can also cause concern for security and privacy, as these tools allow people with malicious intents easier access to spying.
Outside of public areas, drones are also able to be used for spying on people in private settings, even in their own homes. In an article by today.com, the author writes about people using drones and taking videos and photographs of people in their most private moments, even in the privacy of their own home.
From an ethical perspective, drones have a multitude of ethical issues many of which are determining current legal policy. Some areas include the ethical military usage of drones, private non-military use by hobbyists for photography or potential spying, drone usage in political campaigns as a way to spread campaign messages, drone usage in the private business sector as a means for delivery, and ethical usage of public/private airspace.
Pet Cloning
In 2020, pet cloning became something of interest for those who can afford it. For $25,000 to $50,000, anyone can clone a house pet, but there is no guarantee of getting exactly the same pet once had. This may seem very appealing to certain animal lovers, but it raises the question of all the animals that already have no home.
There are a few different ethical questions here. The first is how this is fair to the animals that are suffering in the wild with no home. The second is that cloning is not only for pets but for all animals in general, and some people are concerned that animals will be cloned for food purposes. Another question about animal cloning is whether it is good for the welfare of the animal, or whether the radiation and other procedural aspects will cause the animal's life to end earlier. These are just some of many concerns people have with animal cloning.
Animal Cloning
The ethical standpoint of animal cloning is a heavily debated topic across a plethora of different fields. Some of these ethical concerns are the health and well-being of the animals, long-term side effects, obstetrical complications that occur during cloning, environmental impacts, the use of clones in farming and in repopulating endangered species, and the use of clones for other research, specifically in the medical and pharmaceutical fields. Many of these concerns have only recently been discussed because of the advancement of cloning technology in the past decade; the first mammal cloned from an adult cell, a sheep known as Dolly, was born only twenty-five years ago, in 1996.
Facebook and Meta’s ethical concerns
Further information: Privacy concerns of Facebook
Facebook, or rather Meta, Facebook's parent company, has been one of the top social networking sites from the early 2010s into the present (2022). The platform has raised a variety of issues, ranging from privacy concerns and the question of who bears responsibility for unhealthy social interactions and other unhealthy behaviors, to the deliberate enabling of misinformation on the platform. The following are a few examples of ethical concerns raised throughout the years in relation to Facebook.
Federal Trade Commission v. Facebook
In a Forbes interview conducted on October 22, 2021, by contributor Curt Steinhorst, Michael Thate, an ethics teacher at Princeton University, asserts that in addition to the “Federal Trade Commission v. Facebook” ruling determining that Facebook had engaged in unethical antitrust behavior with the acquisition of its competing social media platforms Instagram and WhatsApp, “Facebook developed an algorithm to capture user attention and information into a platform that they knew promoted unhealthy behaviors.” First, the unethical acquisition of smaller competing social media platforms restricts free-market practices and limits users' choice of which social media sites to use. In addition to the antitrust issue, Thate considers the promotion of unhealthy behaviors and lifestyles to increase user engagement an ethical concern, as users of the platform are given a choice between maintaining a healthy lifestyle and engaging with a social media platform designed to keep them on the site and actively engaged regardless of the impact on their wellbeing.
Facebook's Algorithm
On October 4, 2021, CBS News interviewed Frances Haugen, a whistleblower and former employee of Facebook, who revealed that Facebook was aware of various concerning ethical practices. “The complaints say Facebook's own research shows that it amplifies hate, misinformation, and political unrest—but the company hides what it knows. One complaint alleges that Facebook's Instagram harms teenage girls.” These practices were all employed to promote increased user engagement with the social media platform. Haugen stated in the interview: “The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook. And Facebook, over and over again, chose to optimize for its own interests, like making more money.” An article written on February 10, 2021, by Paige Cooper outlines how Facebook's algorithm has changed over the years and highlights the changes made by Facebook to prioritize more emotional interactions on the site.
Facebook-Cambridge Analytica
Through the 2010s, the British political consulting firm Cambridge Analytica, in a conjoined effort with Facebook, gathered information and personal data on upwards of 87 million nonconsenting users, as stated in a New York Times article titled “Cambridge Analytica and Facebook: The Scandal and the Fallout So Far”. The illegally obtained data was then utilized in Donald Trump's 2016 presidential campaign to help develop personalized ads and campaign messages based on the data provided by Cambridge Analytica. In an article published by The Guardian on March 18, 2021, Cambridge Analytica whistleblower Christopher Wylie was interviewed; Wylie asserted that the data given to him was legally obtained and that he and various other academic analysts were unaware of the manner in which the data used in the psychological profiles had actually been obtained.
Areas of Technoethical Inquiry
Biotech ethics
Biotech ethics is concerned with ethical dilemmas surrounding the use of biotechnologies in fields including medical research, health care, and industrial applications. Topics such as cloning ethics, e-health ethics, telemedicine ethics, genetics ethics, neuroethics, and sport and nutrition ethics fall into this category; examples of specific issues include the debates surrounding euthanasia and reproductive rights.
Telemedicine is a medical technology that has been used to advance clinical care with the use of video conferencing, text messaging, and applications. Along with the advantages of telemedicine, there are concerns about its pitfalls, such as threats to patient privacy and compliance with HIPAA regulations. Cyberattacks in healthcare are a significant concern when implementing technology, because measures need to be in place to keep patient data secure. One type of cyberattack is a medical device hijack, also known as medjack, in which hackers can alter the functionality of implants and expose patients' medical histories. When implementing technology, it is important to check for weaknesses that can create vulnerability to hacking.
Ethics also becomes a key factor when considering the use of artificial intelligence in healthcare. AI is not seen as a neutral tool, and policies have been set in place to ensure it is not misused under human bias. Although AI is a valuable tool in medicine, current ethical policies are not yet adequate to accommodate it, since doing so requires a multi-disciplinary approach. AI in healthcare is not used to make clinical decisions on its own; however, it can provide assistance in surgeries, imaging, and other areas.
Technoethics and cognition
This area of technoethical inquiry is concerned with technology's relation to the human mind, artificial agents, and society. Topics of study that would fit into this category would be artificial morality and moral agents, technoethical systems and techno-addiction.
An artificial agent describes any type of technology that is created to act as an agent, either of its own power or on behalf of another agent. An artificial agent may try to advance its own goals or those of another agent.
Mass Surveillance
The ethics of mass surveillance has become a highly discussed topic in the twenty-first century, especially in the United States after the tragedy of 9/11. Areas of ethical concern include privacy, discrimination, trust in government, infringement of government-granted rights and basic human rights, conflicts of interest, stigmatization, and obtrusiveness. Between 2001 and 2021, many of these ethical topics became central to new laws throughout the world. Shortly after 9/11, as the United States began to fear that more terrorist attacks could occur on American soil, the Patriot Act, passed on October 26, 2001, became one of the first major mass surveillance laws in the United States. Years later, Europe began to follow suit with its own mass surveillance laws after a string of terrorist attacks. After the 2015 terrorist attacks in France, the French government moved forward with the International Electronic Communications Law. The IEC recognized the power of the French Directorate-General for External Security to collect, monitor, and intercept all communications sent or received on French territory. In 2016, the United Kingdom passed the Investigatory Powers Act 2016, a law allowing GCHQ to engage in the acquisition, interception, and equipment interference of communications and systems sent by anyone on British territory. Also in 2016, a similar law, the Communications Intelligence Gathering Act, was passed in Germany, allowing the German intelligence community to gather foreign nationals' communications while they were on German territory. In 2021, Australia passed the Surveillance Legislation Amendment, which granted the Australian Federal Police and the Australian Criminal Intelligence Commission the right to modify or delete the data of suspected offenders, collect intelligence on criminal networks, and forcibly break into a suspected offender's online accounts. After these laws were passed throughout Europe, and later in Australia, protests arose in each country as citizens felt the laws infringed their privacy rights.
Two years after the Investigatory Powers Act 2016 was passed in the United Kingdom, the English High Court ruled that the act would have to be rewritten. The ruling occurred because the High Court found the law to be incompatible with EU law, since the law "authorizes the UK government to issue retention notices with no prior independent checks, such as review by a court or other body, and for the purpose of investigating crime that is not “serious crime”; and (2) subsequent access to any retained data was similarly not subject to any independent authorization and not limited to the purpose of combating “serious crime.”" The ruling originated with the human rights group Liberty, which began to challenge the act shortly after it was enacted on the grounds that it violated UK citizens' right to privacy. In 2020, four years after Germany enacted the Communications Intelligence Gathering Act, that law also made its way to court for review, having received heavy backlash from members of the German public and non-German citizens alike. Many of the complaints dwelt on the same issue: the privacy of both German and non-German citizens. After a two-day hearing, the German high court ruled that the law was unconstitutional and gave the German parliament until 2021 to make corrections to the act.
More recently, in 2020, during the height of the COVID-19 pandemic, the ethics of public health surveillance took center stage because of its widespread use. The purpose of this mass surveillance was to collect data on the transmission of COVID-19. Many individuals around the world, however, felt that this form of surveillance infringed on their privacy and basic human rights. Another concern was the lack of government or institutional policy documents addressing the ethical challenges of using mass surveillance to track a pandemic's transmission rate. This surveillance was used on a far larger scale than some of the other acts passed in recent years, since it had a global focus driven by the desire to halt the transmission of COVID-19. For example, on March 16, 2020, the Israeli government approved emergency regulations allowing mass location tracking of citizens to slow the spread of the disease. Singapore and Taiwan did something similar, though their method of mass surveillance was to allow their law-enforcement agencies to monitor quarantine orders.
Technoethics and society
This field is concerned with the uses of technology to ethically regulate aspects of a society. For example: digital property ethics, social theory, law, science, organizational ethics and global ethics.
Digital property rights, or DPR, refer to individual rights over information available online, such as email accounts, online website accounts, posts, blogs, pictures, and other digital media. Digital property rights can be regulated and protected by making digital property tamper-proof, by adding legal clauses to digital properties, and by limiting the sharing of software code.
Social theory refers to how societies change and develop over time in terms of behavior and explanation of behaviors. Technology has a great impact on social change. As technology evolves and upgrades, human interaction goes along with the changes. “Technological theory suggests that technology is an important factor for social change, and it would initiate changes in the arrangement of social relationships”.
Organizational ethics refers to the code of conduct and the way an organization responds to stimuli. Technoethics plays a role in organizational ethics because technology can be embedded and incorporated in many different aspects of ethical values.
Technofeminism
Technoethics has often concerned itself with society as a general group, making no distinctions between the genders, but it also considers technological effects and influences on each gender individually. This is an important consideration as some technologies are created for use by a specific gender, including birth control, abortion, fertility treatments, and Viagra. Feminists have had a significant influence on the prominence and development of reproductive technologies. Technoethical inquiry must examine these technologies' effects on the intended gender while also considering their influence on the other gender. Another dimension of technofeminism concerns female involvement in technological development: women's participation in the field of technology has broadened society's understanding of how technology affects the female experience in society.
Information and communication technoethics
Information and communication technoethics is "concerned with ethical issues and responsibilities arising when dealing with information and communication technology in the realm of communication." This field is related to internet ethics, rational and ethical decision-making models, and information ethics. A major area of interest is the convergence of technologies: as technologies become more interdependent and provide people with multiple ways of accessing the same information, they transform society and create new ethical dilemmas. This is particularly evident in the realm of the internet. In recent years, users have had the unprecedented power to create and disseminate news and other information globally via social networking; the concept of "citizen journalism" primarily relates to this. As Ward writes, developments in the media have led to open media ethics, and in turn to citizen journalism.
In cases such as the 2004 Indian Ocean tsunami or the 2011 Arab Spring movements, citizen journalists were seen as significant sources of facts and information about the events. These reports were re-broadcast by news outlets and, more importantly, re-circulated by and to other internet users. As Jay David Bolter and Richard Grusin state in their book Remediation: Understanding New Media (1999): "The liveness of the Web is a refashioned version of the liveness of broadcast television". However, it is commonly political events (such as the 'Occupy' movements or the Iranian elections of 2009) that tend to raise ethical questions and concerns. In the latter example, the Iranian government made efforts to censor and prohibit the spread of internal happenings to the outside world by its citizen journalists. This occurrence raised questions about the importance of spreading crucial information regarding the issue, and about the source from which it came (citizen journalists, government authorities, etc.). This demonstrates how the internet "enables new forms of human action and expression [but] at the same time it disables [it]". Information and communication technoethics also identifies ways to develop ethical frameworks of research structures in order to capture the essence of new technologies.
Educational and professional technoethics
Technoethical inquiry in the field of education examines how technology impacts the roles and values of education in society. This field considers changes in student values and behavior related to technology, including access to inappropriate material in schools, online plagiarism using material copied directly from the internet, or purchasing papers from online resources and passing them off as the student's own work. Educational technoethics also examines the digital divide that exists between educational institutions in developed and developing countries or between unequally-funded institutions within the same country: for instance, some schools offer students access to online material, while others do not. Professional technoethics focuses on the issue of ethical responsibility for those who work with technology within a professional setting, including engineers, medical professionals, and so on. Efforts have been made to delineate ethical principles in professions such as computer programming (see programming ethics).
Environmental and engineering technoethics
Environmental technoethics originates from the 1960s' and 1970s' interest in environment and nature. The field focuses on the human use of technologies that may impact the environment; areas of concern include transport, mining, and sanitation. Engineering technoethics emerged in the late 19th century. As the Industrial Revolution triggered a demand for expertise in engineering and a need to improve engineering standards, societies began to develop codes of professional ethics and associations to enforce these codes. Ethical inquiry into engineering examines the "responsibilities of engineers combining insights from both philosophy and the social sciences."
Technoethical assessment and design
A technoethical assessment (TEA) is an interdisciplinary, systems-based approach to assessing ethical dilemmas related to technology. TEAs aim to guide actions related to technology in an ethical direction by advancing knowledge of technologies and their effects; successful TEAs thus produce a shared understanding of knowledge, values, priorities, and other ethical aspects associated with technology. TEAs involve five key steps:
Evaluate the intended ends and possible side effects of the technology in order to discern its overall value (interest).
Compare the means and intended ends in terms of technical and non-technical (moral and social) aspects.
Reject those actions where the output (overall value) does not balance the input in terms of efficiency and fairness.
Consider perspectives from all stakeholder groups.
Examine technological relations at a variety of levels (e.g. biological, physical, psychological, social, and environmental).
Technoethical design (TED) refers to the process of designing technologies in an ethical manner, involving stakeholders in participatory design efforts, revealing hidden or tacit technological relations, and investigating what technologies make possible and how people will use them. TED involves the following four steps:
Ensure that the components and relations within the technological system are explicitly understood by those in the design context.
Perform a TEA to identify relevant technical knowledge.
Optimize the technological system in order to meet stakeholders' and affected individuals' needs and interests.
Consult with representatives of stakeholder and affected groups in order to establish consensus on key design issues.
Both TEA and TED rely on systems theory, a perspective that conceptualizes society in terms of events and occurrences resulting from investigating system operations. Systems theory assumes that complex ideas can be studied as systems with common designs and properties which can be further explained using systems methodology. The field of technoethics regards technologies as self-producing systems that draw upon external resources and maintain themselves through knowledge creation; these systems, of which humans are a part, are constantly in flux as relations between technology, nature, and society change. TEA attempts to elicit the knowledge, goals, inputs, and outputs that comprise technological systems. Similarly, TED enables designers to recognize technology's complexity and power, to include facts and values in their designs, and to contextualize technology in terms of what it makes possible and what makes it possible.
Organizational Technoethics
Recent advances in technology and their ability to transmit vast amounts of information in a short amount of time has changed the way information is being shared amongst co-workers and managers throughout organizations across the globe. Starting in the 1980s with information and communications technologies (ICTs), organizations have seen an increase in the amount of technology that they rely on to communicate within and outside of the workplace. However, these implementations of technology in the workplace create various ethical concerns and in turn a need for further analysis of technology in organizations. As a result of this growing trend, a subsection of technoethics known as organizational technoethics has emerged to address these issues.
Key scholarly contributions
Key scholarly contributions linking ethics, technology, and society can be found in a number of seminal works:
The Imperative of Responsibility: In Search of Ethics for the Technological Age (Hans Jonas, 1979).
On Technology, Medicine and Ethics (Hans Jonas, 1985).
The Real World of Technology (Franklin, 1990).
Thinking Ethics in Technology: Hennebach Lectures and Papers, 1995-1996 (Mitcham, 1997).
Technology and the Good Life (Higgs, Light & Strong, 2000).
Readings in the Philosophy of Technology (Kaplan, 2004).
Ethics and technology: Ethical issues in an age of information and communication technology (Tavani, 2004).
This resulting scholarly attention to ethical issues arising from technological transformations of work and life has helped give rise to a number of key areas (or branches) of technoethical inquiry under various research programs (i.e., computer ethics, engineering ethics, environmental technoethics, biotech ethics, nanoethics, educational technoethics, information and communication ethics, media ethics, and Internet ethics).
See also
Algorithmic bias
Democratic transhumanism
Engineering ethics
Ethics of artificial intelligence
Information ethics
Information privacy
Organizational technoethics
Philosophy of technology
Robotic governance
Techno-progressivism
Technocriticism
References
Hans Jonas, The Imperative of Responsibility: In Search of Ethics for the Technological Age (1979).
Hans Jonas, On Technology, Medicine and Ethics (1985).
Melanie G. Snyders, CyberEthics and Internet Downloads: An Age by Age Guide to Teaching Children what they need to know (2005).
Further reading
General
Kristin Shrader-Frechette. (2003). "Technology and Ethics," in Philosophy of Technology: The Technological Condition, Oxford: Blackwell Publishing.
Eugene Mirman. (2009) "The Will To Whatevs: A Guide to Modern Life." Harper Perennial.
Daniel A. Vallero. (2007) "Biomedical Ethics for Engineers: Ethics and Decision Making in Biomedical and Biosystem Engineering." Amsterdam: Academic Press.
Ethics, technology and engineering
Fleddermann, C.B. (2011). Engineering Ethics. Prentice Hall. 4th edition.
Harris, C.E., M.S. Pritchard, and M.J. Rabins (2008). Engineering Ethics: Concepts and Cases. Wadsworth Publishing, 4th edition.
Hauser-Katenberg, G., W.E. Katenberg, and D. Norris (2003). "Towards Emergent Ethical Action and the Culture of Engineering," Science and Engineering Ethics, 9, 377–387.
Huesemann M.H., and J.A. Huesemann (2011). Technofix: Why Technology Won’t Save Us or the Environment, Chapter 14, "Critical Science and Social Responsibility", New Society Publishers.
Layton, E. (1986). The Revolt of the Engineers: Social Responsibility and the American Engineering Profession. The Johns Hopkins University Press.
Martin, M.W., and R. Schinzinger (2004). Ethics in Engineering. McGraw-Hill. 4th edition.
Peterson, M. (2017). The Ethics of Technology: A Geometric Analysis of Five Moral Principles. Oxford University Press.
Mitcham, C. (1984). Thinking through technology, the path between engineering and philosophy. Chicago: The University of Chicago Press.
Van de Poel, I., and L. Royakkers (2011). Ethics, Technology, and Engineering: An Introduction. Wiley-Blackwell.
Education and technology
Marga, A. (2004). "University Reforms in Europe: Some Ethical Considerations," Higher Education in Europe, Vol. 79, No. 3, pp. 432–820.
External links
National Academies of Engineering's Center for Engineering, Ethics, and Society
Stanford Law School's Center for Internet and Society
California Polytechnic State University's Ethics + Emerging Sciences Group
University of Notre Dame's Reilly Center for Science, Technology, and Values
Arizona State University's Lincoln Center for Applied Ethics
Santa Clara University's Markkula Center for Applied Ethics
Centre for Applied Philosophy and Public Ethics, Australia
Yale University's Interdisciplinary Center for Bioethics
Case Western Reserve University's Inamori Center for Ethics and Excellence
University of Delaware's Center for Science, Ethics, and Public Policy
University of Oxford's Future of Humanity Institute
UNESCO - Ethics of Science and Technology
4TU.Centre for Ethics and Technology
Cyber Crime
Journals
International Journal of Technoethics
Journal of Technology, Knowledge, and Society
Journal of Social Work Ethics and Values
Stanford Encyclopedia of Philosophy
Journal of Ethics and Social Philosophy
Philosophy and Technology
Ethics and Information Technology
Journal of Responsible Innovation
Technology in Society
Minds and Machines
Journal of Information, Communication and Ethics in Society
Organizations
Ethics and Emerging Sciences Group
W. Maurice Young Centre for Applied Ethics
United Nations Educational, Scientific and Cultural Organization UNESCO
Institute for Ethics in Artificial intelligence
Institute for Ethics and Emerging Technologies
Institute for Ethics in AI
Technoethics
Ahmed Al-Khabaz vs Dawson College
Aaron Swartz case
Bagheri, A. (2011). The Impact of the UNESCO Declaration in Asian and Global Bioethics. Asian Bioethics Review, Vol. 3(2), 52–64.
Bolter, J. D., Grusin, R., & Grusin, R. A. (2000). Remediation: Understanding new media. MIT Press.
Borgmann, A. (1984). Technology and the character of contemporary life: A philosophical inquiry. Chicago: University of Chicago Press.
Coyne, R., 1995, Designing information technology in the postmodern age: From method to metaphor. Cambridge MA: MIT Press.
Castells, M. (2000). The rise of the network society. The information age: economy, society and culture (Vol. 1). Malden, UK:Blackwell.
Canada Foundation for Innovation: www.innovation.ca
Puig de la Bellacasa, M. (2017). Matters of care : speculative ethics in more than human worlds. Minneapolis: University of Minnesota Press.
Dreyfus, H.L., 1999, "Anonymity versus commitment: The dangers of education on the internet," Ethics and Information Technology, 1/1, p. 15-20, 1999
Gert, Bernard. 1999, "Common Morality and Computing," Ethics and Information Technology, 1/1, 57–64.
Fleddermann, C.B. (2011). Engineering Ethics. Prentice Hall. 4th edition.
Harris, C.E., M.S. Pritchard, and M.J. Rabins (2008). Engineering Ethics: Concepts and Cases. Wadsworth Publishing, 4th edition.
Heidegger, M., 1977, The Question Concerning Technology and Other Essays, New York: Harper Torchbooks.
Huesemann M.H., and J.A. Huesemann (2011). Technofix: Why Technology Won't Save Us or the Environment, Chapter 14, "Critical Science and Social Responsibility", New Society Publishers, , 464 pp.
Ihde, D. 1990, Technology and the Lifeworld: From garden to earth. Bloomington and Indianapolis: Indiana University Press.
Jonas, H. (1979). The Imperative of Responsibility: In Search of Ethics for the Technological Age, Chicago: Chicago University Press.
Jonas, H. (1985). On technology, medicine and ethics. Chicago: Chicago University Press.
Levinas, E., 1991, Otherwise than Being or Beyond Essence, Dordrecht: Kluwer Academic Publishers.
Luppicini, R., (2008). The emerging field of Technoethics. In R. Luppicini and R. Adell (eds.). Handbook of Research on Technoethics (pp. 49–51). Hershey: Idea Group Publishing.
Luppicini, R., (2010). Technoethics and the Evolving Knowledge Society: Ethical Issues in Technological Design, Research, Development and Innovation. Hershey, PA: IGI Global.
Martin, M.W., and R. Schinzinger (2004). Ethics in Engineering. McGraw-Hill. 4th edition.
Mitcham, C. (1994). Thinking through technology. University of Chicago Press.
Mitcham, C. (1997). Thinking ethics in technology: Hennebach lectures and papers, 1995–1996. Golden, CO: Colorado School of Mines Press.
Mitcham, C. (2005). Encyclopedia of science, technology, and ethics. Detroit: Macmillan Reference.
Sullins, J. (2010). RoboWarfare: can robots be more ethical than humans on the battlefield. Journal of Ethics and Information Technology, Vol. 12(3), 263–275.
Tavani, H. T. (2004). Ethics and technology: Ethical issues in an age of information and communication technology. Hoboken, NJ: John Wiley & Sons.
Turkle, S. 1996, "Parallel lives: Working on identity in virtual space." in D. Grodin & T. R. Lindlof, (eds.), Constructing the self in a mediated world, London: Sage, 156–175.
Van de Poel, I., and L. Royakkers (2011). Ethics, Technology, and Engineering: An Introduction. Wiley-Blackwell.
Other
Ethical Dilemmas in Information Technology: A Scenario Collection
The Technological Citizen
Canada's Bill C-32: Copyright that can stifle creativity
|
1016345
|
https://en.wikipedia.org/wiki/Cayley%E2%80%93Purser%20algorithm
|
Cayley–Purser algorithm
|
The Cayley–Purser algorithm was a public-key cryptography algorithm published in early 1999 by 16-year-old Irishwoman Sarah Flannery, based on an unpublished work by Michael Purser, founder of Baltimore Technologies, a Dublin data security company. Flannery named it for mathematician Arthur Cayley. It has since been found to be flawed as a public-key algorithm, but was the subject of considerable media attention.
History
During a work-experience placement with Baltimore Technologies, Flannery was shown an unpublished paper by Michael Purser which outlined a new public-key cryptographic scheme using non-commutative multiplication. She was asked to write an implementation of this scheme in Mathematica.
Before this placement, Flannery had attended the 1998 ESAT Young Scientist and Technology Exhibition with a project describing already existing cryptographic techniques from the Caesar cipher to RSA. This had won her the Intel Student Award which included the opportunity to compete in the 1998 Intel International Science and Engineering Fair in the United States. Feeling that she needed some original work to add to her exhibition project, Flannery asked Michael Purser for permission to include work based on his cryptographic scheme.
On advice from her mathematician father, Flannery decided to use matrices to implement Purser's scheme, as matrix multiplication has the necessary property of being non-commutative. As the resulting algorithm would depend on multiplication, it would be a great deal faster than the RSA algorithm, which relies on modular exponentiation. For her Intel Science Fair project Flannery prepared a demonstration where the same plaintext was enciphered using both RSA and her new Cayley–Purser algorithm, and it did indeed show a significant time improvement.
Returning to the ESAT Young Scientist and Technology Exhibition in 1999, Flannery formalised Cayley-Purser's runtime and analyzed a variety of known attacks, none of which were determined to be effective.
Flannery did not make any claims that the Cayley–Purser algorithm would replace RSA, knowing that any new cryptographic system would need to stand the test of time before it could be acknowledged as a secure system. The media were not so circumspect however and when she received first prize at the ESAT exhibition, newspapers around the world reported the story that a young girl genius had revolutionised cryptography.
In fact an attack on the algorithm was discovered shortly afterwards but she analyzed it and included it as an appendix in later competitions, including a Europe-wide competition in which she won a major award.
Overview
Notation used in this discussion is as in Flannery's original paper.
Key generation
Like RSA, Cayley–Purser begins by generating two large primes p and q and their product n, a semiprime. Next, consider GL(2,n), the general linear group of 2×2 matrices with integer elements and modular arithmetic mod n. For example, if n = 5, we could write (rows separated by semicolons, every entry reduced mod 5):
(0 1; 2 3) + (1 2; 3 4) = (1 3; 5 7) = (1 3; 0 2)
(0 1; 2 3)(1 2; 3 4) = (3 4; 11 16) = (3 4; 1 1)
This group is chosen because it has large order (for large semiprime n), equal to (p² − 1)(p² − p)(q² − 1)(q² − q).
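The order formula can be sanity-checked by brute force for a toy modulus. The following sketch is an illustration added here, not part of the original description; the toy primes p = 2 and q = 3 are arbitrary.

```python
# Brute-force check of |GL(2, Z_n)| = (p^2 - 1)(p^2 - p)(q^2 - 1)(q^2 - q)
# for the toy semiprime n = 2 * 3 = 6.  Illustrative values only.
from itertools import product
from math import gcd

p, q = 2, 3
n = p * q

# A 2x2 matrix mod n is invertible exactly when gcd(det, n) == 1.
count = sum(1 for a, b, c, d in product(range(n), repeat=4)
            if gcd((a * d - b * c) % n, n) == 1)

formula = (p**2 - 1) * (p**2 - p) * (q**2 - 1) * (q**2 - q)
print(count, formula)  # both are 288
```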
Let χ and α be two such matrices from GL(2,n) chosen so that they do not commute (χα ≠ αχ). Choose some natural number r and compute:
β = χ⁻¹α⁻¹χ
γ = χ^r
The public key is n, α, β, and γ. The private key is χ.
Encryption
The sender begins by generating a random natural number s and computing:
δ = γ^s
ε = δ⁻¹αδ
κ = δ⁻¹βδ
Then, to encrypt a message, each message block is encoded as a number (as in RSA) and the numbers are placed four at a time as the elements of a plaintext matrix μ. Each μ is encrypted using:
μ′ = κμκ
Then μ′ and ε are sent to the receiver.
Decryption
The receiver recovers the original plaintext matrix μ via:
λ = χ⁻¹εχ
μ = λμ′λ
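The whole scheme can be exercised end to end with a short script. The sketch below is illustrative only: the helper names, the toy primes, and the sample plaintext matrix are assumptions made here, and the parameter sizes are far too small for any real security.

```python
# Toy walk-through of Cayley-Purser over GL(2, Z_n).  Requires Python 3.8+
# for pow(x, -1, n).  All names and parameter choices are illustrative.
import random

p, q = 101, 113            # toy primes; real keys would use very large primes
n = p * q

def mat_mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g) % n, (a*f + b*h) % n), ((c*e + d*g) % n, (c*f + d*h) % n)

def mat_inv(A):
    (a, b), (c, d) = A
    det_inv = pow((a*d - b*c) % n, -1, n)   # raises ValueError if not invertible
    return ((d*det_inv) % n, (-b*det_inv) % n), ((-c*det_inv) % n, (a*det_inv) % n)

def mat_pow(A, e):
    R = ((1, 0), (0, 1))
    while e:
        if e & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        e >>= 1
    return R

def random_invertible():
    while True:
        A = ((random.randrange(n), random.randrange(n)),
             (random.randrange(n), random.randrange(n)))
        try:
            mat_inv(A)
            return A
        except ValueError:
            pass

# Key generation: public (n, alpha, beta, gamma), private chi.
alpha = random_invertible()
while True:
    chi = random_invertible()
    if mat_mul(chi, alpha) != mat_mul(alpha, chi):   # chi and alpha must not commute
        break
r = random.randrange(2, n)
beta = mat_mul(mat_mul(mat_inv(chi), mat_inv(alpha)), chi)   # chi^-1 alpha^-1 chi
gamma = mat_pow(chi, r)                                      # chi^r

# Encryption: sender picks s, derives delta, epsilon, kappa, and encrypts mu.
s = random.randrange(2, n)
delta = mat_pow(gamma, s)
epsilon = mat_mul(mat_mul(mat_inv(delta), alpha), delta)     # delta^-1 alpha delta
kappa = mat_mul(mat_mul(mat_inv(delta), beta), delta)        # delta^-1 beta delta
mu = ((72, 105), (33, 0))                 # four message blocks as a plaintext matrix
mu_prime = mat_mul(mat_mul(kappa, mu), kappa)                # mu' = kappa mu kappa
# transmitted to the receiver: mu_prime and epsilon

# Decryption: receiver uses the private chi.
lam = mat_mul(mat_mul(mat_inv(chi), epsilon), chi)           # lambda = chi^-1 epsilon chi
assert mat_mul(mat_mul(lam, mu_prime), lam) == mu            # mu = lambda mu' lambda
```

The final assertion holds because δ = γ^s is a power of χ and therefore commutes with it, which makes λ = χ⁻¹εχ exactly the inverse of κ.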
Security
Recovering the private key χ from γ is computationally infeasible, at least as hard as finding square roots mod n (see quadratic residue). It could be recovered from α and β if the system χβ = α⁻¹χ could be solved, but the number of solutions to this system is large as long as elements in the group have a large order, which can be guaranteed for almost every element.
However, the system can be broken by finding a multiple χ′ of χ by solving for d in the following congruence:
d(α⁻¹ − β) ≡ γβ − α⁻¹γ (mod n)
Observe that a solution d exists if, for some i and j, the entry in row i and column j of α⁻¹ − β is invertible mod n.
If d is known, dI + γ is a multiple of χ. Any multiple of χ yields the same λ as χ itself and can therefore decrypt. This presents a fatal weakness for the system, which has not yet been reconciled.
This flaw does not preclude the algorithm's use as a mixed private-key/public-key algorithm, if the sender transmits ε secretly, but this approach presents no advantage over the common approach of transmitting a symmetric encryption key using a public-key encryption scheme and then switching to symmetric encryption, which is faster than Cayley–Purser.
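For illustration, the attack can be demonstrated against the toy key from the sketch in the Decryption section above. The fragment below is a continuation of that sketch, so it assumes its helper functions, public values, and intercepted μ′ and ε are still in scope; it solves the congruence for d from a single matrix entry and then decrypts without ever using the private key χ.

```python
# Attack sketch (continuation of the previous toy example): recover a usable
# multiple of chi from the public key (n, alpha, beta, gamma) alone.
def mat_sub(A, B):
    return tuple(tuple((x - y) % n for x, y in zip(ra, rb)) for ra, rb in zip(A, B))

alpha_inv = mat_inv(alpha)
L = mat_sub(alpha_inv, beta)                                  # coefficient of d
R = mat_sub(mat_mul(gamma, beta), mat_mul(alpha_inv, gamma))  # right-hand side

# Solve d * L[i][j] = R[i][j] (mod n) from any entry of L that is invertible mod n.
d = None
for i in range(2):
    for j in range(2):
        try:
            d = (R[i][j] * pow(L[i][j], -1, n)) % n
            break
        except ValueError:
            continue
    if d is not None:
        break
assert d is not None

# chi' = d*I + gamma behaves exactly like chi for decryption.
chi_prime = (((d + gamma[0][0]) % n, gamma[0][1]),
             (gamma[1][0], (d + gamma[1][1]) % n))
lam_forged = mat_mul(mat_mul(mat_inv(chi_prime), epsilon), chi_prime)
# Verification only -- a real attacker would simply output the result:
assert mat_mul(mat_mul(lam_forged, mu_prime), lam_forged) == mu
```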
See also
Non-commutative cryptography
References
Sarah Flannery and David Flannery. In Code: A Mathematical Journey.
Public-key encryption schemes
|
13277642
|
https://en.wikipedia.org/wiki/Accounting%20software
|
Accounting software
|
Accounting software describes a type of application software that records and processes accounting transactions within functional modules such as accounts payable, accounts receivable, journal, general ledger, payroll, and trial balance. It functions as an accounting information system. It may be developed in-house by the organization using it, may be purchased from a third party, or may be a combination of a third-party application software package with local modifications. Accounting software may be web-based, accessed anywhere at any time from any Internet-enabled device, or may be desktop-based. It varies greatly in its complexity and cost.
The market has been undergoing considerable consolidation since the mid-1990s, with many suppliers ceasing to trade or being bought by larger groups.
Modules
Accounting software is typically composed of various modules, with different sections dealing with particular areas of accounting. Among the most common are:
Core modules
Accounts receivable—where the company enters money received
Accounts payable—where the company enters its bills and pays money it owes
General ledger—the company's "books"
Billing—where the company produces invoices to clients/customers
Stock/inventory—where the company keeps control of its inventory
Purchase order—where the company orders inventory
Sales order—where the company records customer orders for the supply of inventory
Bookkeeping—where the company records collection and payment
Financial close management — where accounting teams verify and adjust account balances at the end of a designated time period
Non-core modules
Debt collection—where the company tracks attempts to collect overdue bills (sometimes part of accounts receivable)
Electronic payment processing
Expense—where employee business-related expenses are entered
Inquiries—where the company looks up information on screen without any edits or additions
Payroll—where the company tracks salary, wages, and related taxes
Reports—where the company prints out data
Timesheet—where professionals (such as attorneys and consultants) record time worked so that it can be billed to clients
Purchase requisition—where requests for purchase orders are made, approved and tracked
Reconciliation—compares records from parties at both sides of transactions for consistency
Drill down
Journals
Departmental accounting
Support for value added taxation
Calculation of statutory holdback
Late payment reminders
Bank feed integration
Document attachment system
Document/Journal approval system
Note that vendors may use differing names for these modules.
Implementation
In many cases, implementation (i.e. the installation and configuration of the system at the client) can be a bigger consideration than the actual software chosen when it comes down to the total cost of ownership for the business. Most mid-market and larger applications are sold exclusively through resellers, developers, and consultants. Those organizations generally pass on a license fee to the software vendor and then charge the client for installation, customization, and support services. Clients can normally count on paying roughly 50-200% of the price of the software in implementation and consulting fees.
Other organizations sell to, consult with, and support clients directly, eliminating the reseller. Accounting software provides many benefits, such as speeding up information retrieval, bringing efficiency to the bank reconciliation process, automatically preparing Value Added Tax (VAT) / Goods and Services Tax (GST) returns, and, perhaps most importantly, providing a real-time view of the company's financial position.
Types
Personal accounting
Personal accounting software is mainly targeted towards home users, supporting accounts payable-type accounting transactions, managing budgets, and simple account reconciliation, at the inexpensive end of the market.
Low-end market
At the low-end of the business markets, inexpensive applications software allows most general business accounting functions to be performed. Suppliers frequently serve a single national market, while larger suppliers offer separate solutions in each national market.
Many of the low-end products are characterized by being "single-entry" products, as opposed to the double-entry systems seen in many businesses; a minimal sketch of the double-entry constraint follows below. Some products have considerable functionality but are not considered GAAP or IFRS/FASB compliant. Some low-end systems do not have adequate security or audit trails.
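To illustrate the distinction, the sketch below shows the core double-entry rule that such a system enforces: every journal entry must have equal total debits and credits before it is posted to the ledger. This is a minimal illustration only; the types, account names, and amounts are hypothetical and not taken from any particular product.

```typescript
// Minimal double-entry journal sketch (hypothetical types and names).
type Posting = { account: string; debit: number; credit: number };

interface JournalEntry {
  date: string;
  description: string;
  postings: Posting[];
}

// Reject entries whose debits and credits do not balance.
function postEntry(ledger: JournalEntry[], entry: JournalEntry): void {
  const debits = entry.postings.reduce((sum, p) => sum + p.debit, 0);
  const credits = entry.postings.reduce((sum, p) => sum + p.credit, 0);
  if (Math.abs(debits - credits) > 1e-9) {
    throw new Error(`Unbalanced entry: debits ${debits} != credits ${credits}`);
  }
  ledger.push(entry);
}

// Example: an invoice recorded against accounts receivable and revenue.
const ledger: JournalEntry[] = [];
postEntry(ledger, {
  date: "2024-01-15",
  description: "Invoice #1001",
  postings: [
    { account: "Accounts receivable", debit: 500, credit: 0 },
    { account: "Sales revenue", debit: 0, credit: 500 },
  ],
});
```

A single-entry product, by contrast, records each transaction only once (typically as a line in a cash book), so no equivalent balancing check is possible.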
Mid-market
The mid-market covers a wide range of business software that may be capable of serving the needs of multiple national accountancy standards and allow accounting in multiple currencies.
In addition to general accounting functions, the software may include integrated or add-on management information systems, and may be oriented towards one or more markets, for example with integrated or add-on project accounting modules.
Software applications in this market typically include the following features:
Industry-standard robust databases
Industry-standard reporting tools
Tools for configuring or extending the application (e.g. an SDK), and access to program code.
High-end market
The most complex and expensive business accounting software is frequently part of an extensive suite of software often known as enterprise resource planning (ERP) software.
These applications typically have a very long implementation period, often greater than six months. In many cases, these applications are simply a set of functions which require significant integration, configuration and customization to even begin to resemble an accounting system.
Many free, high-end, open-source accounting software packages are available online, aiming to change the market dynamics. Most of these software solutions are web-based.
The advantage of a high-end solution is that these systems are designed to support individual company specific processes, as they are highly customizable and can be tailored to exact business requirements. This usually comes at a significant cost in terms of money and implementation time.
Hybrid solutions
As technology improves, software vendors have been able to offer increasingly advanced software at lower prices. This software is suitable for companies at multiple stages of growth. Many of the features of mid-market and high-end software (including advanced customization and extremely scalable databases) are required even by small businesses as they open multiple locations or grow in size. Additionally, with more and more companies expanding overseas or allowing workers to home office, many smaller clients have a need to connect multiple locations. Their options are to employ software-as-a-service or another application that offers them similar accessibility from multiple locations over the internet.
SaaS accounting software
With the advent of faster computers and internet connections, accounting software companies have been able to create accounting software paid for on a monthly recurring charge instead of a larger upfront license fee (software as a service - SaaS). The rate of adoption of this new business model has increased steadily to the point where legacy players have been forced to come out with their own online versions.
See also
Accounting
Index of accounting articles
Comparison of accounting software
Double-entry bookkeeping system
E-accounting
Enterprise resource planning
Tax compliance software
References
External links
|
1616002
|
https://en.wikipedia.org/wiki/Medical%20history
|
Medical history
|
The medical history, case history, or anamnesis (from Greek: ἀνά, aná, "open", and μνήσις, mnesis, "memory") of a patient is information gained by a physician by asking specific questions, either of the patient or of other people who know the person and can give suitable information, with the aim of obtaining information useful in formulating a diagnosis and providing medical care to the patient. The medically relevant complaints reported by the patient or others familiar with the patient are referred to as symptoms, in contrast with clinical signs, which are ascertained by direct examination on the part of medical personnel. Most health encounters will result in some form of history being taken. Medical histories vary in their depth and focus. For example, an ambulance paramedic would typically limit their history to important details, such as name, history of presenting complaint, allergies, etc. In contrast, a psychiatric history is frequently lengthy and in depth, as many details about the patient's life are relevant to formulating a management plan for a psychiatric illness.
The information obtained in this way, together with the physical examination, enables the physician and other health professionals to form a diagnosis and treatment plan. If a diagnosis cannot be made, a provisional diagnosis may be formulated, and other possibilities (the differential diagnoses) may be added, listed in order of likelihood by convention. The treatment plan may then include further investigations to clarify the diagnosis.
The method by which doctors gather information about a patient’s past and present medical condition in order to make informed clinical decisions is called the history and physical (a.k.a. the H&P). The history requires that a clinician be skilled in asking appropriate and relevant questions that can provide them with some insight as to what the patient may be experiencing. The standardized format for the history starts with the chief concern (why is the patient in the clinic or hospital?) followed by the history of present illness (to characterize the nature of the symptom(s) or concern(s)), the past medical history, the past surgical history, the family history, the social history, their medications, their allergies, and a review of systems (where a comprehensive inquiry of symptoms potentially affecting the rest of the body is briefly performed to ensure nothing serious has been missed). After all of the important history questions have been asked, a focused physical exam (meaning one that only involves what is relevant to the chief concern) is usually done. Based on the information obtained from the H&P, lab and imaging tests are ordered and medical or surgical treatment is administered as necessary.
Process
A practitioner typically asks questions to obtain the following information about the patient:
Identification and demographics: name, age, height, weight.
The "chief complaint (CC)" – the major health problem or concern, and its time course (e.g. chest pain for past 4 hours).
History of the present illness (HPI) – details about the complaints, enumerated in the CC (also often called history of presenting complaint or HPC).
Past medical history (PMH) (including major illnesses, any previous surgery/operations (sometimes distinguished as past surgical history or PSH), any current ongoing illness, e.g. diabetes).
Review of systems (ROS) Systematic questioning about different organ systems
Family diseases – especially those relevant to the patient's chief complaint.
Childhood diseases – this is very important in pediatrics.
Social history (medicine) – including living arrangements, occupation, marital status, number of children, drug use (including tobacco, alcohol, other recreational drug use), recent foreign travel, and exposure to environmental pathogens through recreational activities or pets.
Regular and acute medications (including those prescribed by doctors, and others obtained over-the-counter or alternative medicine)
Allergies – to medications, food, latex, and other environmental factors
Sexual history, obstetric/gynecological history, and so on, as appropriate.
Conclusion & closure
History-taking may be comprehensive history taking (a fixed and extensive set of questions are asked, as practiced only by health care students such as medical students, physician assistant students, or nurse practitioner students) or iterative hypothesis testing (questions are limited and adapted to rule in or out likely diagnoses based on information already obtained, as practiced by busy clinicians). Computerized history-taking could be an integral part of clinical decision support systems.
A follow-up procedure is initiated at the onset of the illness to record details of future progress and results after treatment or discharge. This is known as a catamnesis in medical terms.
Review of systems
Whatever system a specific condition may seem restricted to, all the other systems are usually reviewed in a comprehensive history. The review of systems often includes all the main systems in the body that may provide an opportunity to mention symptoms or concerns that the individual may have failed to mention in the history. Health care professionals may structure the review of systems as follows:
Cardiovascular system (chest pain, dyspnea, ankle swelling, palpitations); these are the most important symptoms, and a brief description can be requested for each positive symptom.
Respiratory system (cough, haemoptysis, epistaxis, wheezing, pain localized to the chest that might increase with inspiration or expiration).
Gastrointestinal system (change in weight, flatulence and heartburn, dysphagia, odynophagia, hematemesis, melena, hematochezia, abdominal pain, vomiting, bowel habit).
Genitourinary system (frequency in urination, pain with micturition (dysuria), urine color, any urethral discharge, altered bladder control like urgency in urination or incontinence, menstruation and sexual activity).
Nervous system (Headache, loss of consciousness, dizziness and vertigo, speech and related functions like reading and writing skills and memory).
Cranial nerves symptoms (Vision (amaurosis), diplopia, facial numbness, deafness, oropharyngeal dysphagia, limb motor or sensory symptoms and loss of coordination).
Endocrine system (weight loss, polydipsia, polyuria, increased appetite (polyphagia) and irritability).
Musculoskeletal system (any bone or joint pain accompanied by joint swelling or tenderness, aggravating and relieving factors for the pain and any positive family history for joint disease).
Skin (any skin rash, recent change in cosmetics and the use of sunscreen creams when exposed to sun).
Inhibiting factors
Factors that inhibit taking a proper medical history include a physical inability of the patient to communicate with the physician, such as unconsciousness and communication disorders. In such cases, it may be necessary to record such information that may be gained from other people who know the patient. In medical terms this is known as a heteroanamnesis, or collateral history, in contrast to a self-reporting anamnesis.
Medical history taking may also be impaired by various factors impeding a proper doctor-patient relationship, such as transitions to physicians that are unfamiliar to the patient.
History taking of issues related to sexual or reproductive medicine may be inhibited by a reluctance of the patient to disclose intimate or uncomfortable information. Even if such an issue is on the patient's mind, he or she often doesn't start talking about such an issue without the physician initiating the subject by a specific question about sexual or reproductive health. Some familiarity with the doctor generally makes it easier for patients to talk about intimate issues such as sexual subjects, but for some patients, a very high degree of familiarity may make the patient reluctant to reveal such intimate issues. When visiting a health provider about sexual issues, having both partners of a couple present is often necessary, and is typically a good thing, but may also prevent the disclosure of certain subjects, and, according to one report, increases the stress level.
Computer-assisted history taking
Computer-assisted history taking or computerized history taking systems have been available since the 1960s. However, their use remains variable across healthcare delivery systems.
One advantage of using computerized systems as an auxiliary or even primary source of medically related information is that patients may be less susceptible to social desirability bias. For example, patients may be more likely to report that they have engaged in unhealthy lifestyle behaviors. Another advantage of using computerized systems is that they allow easy and high-fidelity portability to a patient's electronic medical record.
Another advantage is that computerized systems save money and paper.
One disadvantage of many computerized medical history systems is that they cannot detect non-verbal communication, which may be useful for elucidating anxieties and treatment plans. Another disadvantage is that people may feel less comfortable communicating with a computer as opposed to a human. In a sexual history-taking setting in Australia using a computer-assisted self-interview, 51% of people were very comfortable with it, 35% were comfortable with it, and 14% were either uncomfortable or very uncomfortable with it.
The evidence for or against computer-assisted history taking systems is sparse. As of 2011, there were no randomized control trials comparing computer-assisted versus traditional oral-and-written family history taking to identifying patients with an elevated risk of developing type 2 diabetes mellitus. In 2021, a substudy of a large prospective cohort trial showed that a majority (70%) of patients with acute chest pain could, with computerized history taking, provide sufficient data for risk stratification with a well-established risk score (HEART score).
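As a rough illustration of how structured, computer-collected history data can feed a risk score such as the one mentioned above, the sketch below computes a HEART-style total from a handful of fields. The field names, thresholds, and cut-offs are simplified assumptions for illustration only and must not be used clinically; the published score should be consulted for the exact definitions.

```typescript
// Simplified, illustrative HEART-style score from structured history data.
// Not a clinical implementation; thresholds are assumptions.
interface ChestPainHistory {
  historySuspicion: 0 | 1 | 2;   // rated suspicion of the patient's narrative
  ecgScore: 0 | 1 | 2;           // ECG findings
  age: number;                   // years
  riskFactorCount: number;       // e.g. smoking, hypertension, diabetes
  troponinRatio: number;         // measured troponin / upper reference limit
}

function heartScore(h: ChestPainHistory): number {
  const ageScore = h.age >= 65 ? 2 : h.age >= 45 ? 1 : 0;
  const riskScore = h.riskFactorCount >= 3 ? 2 : h.riskFactorCount >= 1 ? 1 : 0;
  const troponinScore = h.troponinRatio > 3 ? 2 : h.troponinRatio > 1 ? 1 : 0;
  return h.historySuspicion + h.ecgScore + ageScore + riskScore + troponinScore;
}

// Totals of 0-3 are conventionally treated as low risk, 4-6 moderate, 7-10 high.
console.log(heartScore({
  historySuspicion: 1, ecgScore: 0, age: 58, riskFactorCount: 2, troponinRatio: 0.4,
}));
```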
See also
Genogram
Medical record
Medicine
Physical examination
Psychoanalysis (Freud uses the term anamnesis to describe neurotics' recounting of their symptoms)
References
Practice of medicine
Medical terminology
Athletic training
History of science by discipline
|
61776029
|
https://en.wikipedia.org/wiki/Firefox%20early%20version%20history
|
Firefox early version history
|
The project that became Firefox today began as an experimental branch of the Mozilla Suite called m/b (or mozilla/browser). Firefox retains the cross-platform nature of the original Mozilla browser, using the XUL user interface markup language. The use of XUL makes it possible to extend the browser's capabilities through the use of extensions and themes. The development and installation processes of these add-ons raised security concerns, and with the release of Firefox 0.9, the Mozilla Foundation opened a Mozilla Update website containing "approved" themes and extensions. The use of XUL sets Firefox apart from other browsers, including other projects based on Mozilla's Gecko layout engine and most other browsers, which use interfaces native to their respective platforms (Galeon and Epiphany use GTK+, K-Meleon uses MFC, and Camino uses Cocoa). Many of these projects started before Firefox, and probably served as inspiration.
Releases
Phoenix and Firebird
Hyatt, Ross, Hewitt and Chanial developed their browser to combat the perceived software bloat of the Mozilla Suite (codenamed, internally referred to, and continued by the community as SeaMonkey), which integrated features such as IRC, mail, news, and WYSIWYG HTML editing into one internet suite. After it was sufficiently developed, binaries for public testing appeared in September 2002 under the name Phoenix. This name carried the implication of the mythical firebird that rose triumphantly from the ashes of its dead predecessor, in this case Netscape Navigator which lost the "First browser war" to Microsoft's Internet Explorer. The name Mozilla began as the internal codename for the original 1994 Netscape Navigator browser aiming to displace NCSA Mosaic as the world's most popular web browser. The name for this would-be "Mosaic killer" was meant to evoke the building-crushing Godzilla. The name Mozilla was revived as the 1998 open sourcing spinoff organization from Netscape.
The name Phoenix remained until April 14, 2003, when it was changed because of a trademark dispute with the BIOS manufacturer Phoenix Technologies (which produces a BIOS-based browser called Phoenix FirstWare Connect). The new name, Firebird, met with mixed reactions, particularly as the Firebird database server already carried the name. In response, the Mozilla Foundation stated that the browser should always bear the name Mozilla Firebird to avoid confusion with the database software.
Firefox
Due to continuing pressure from the Firebird community, on February 9, 2004 the project was renamed again to Mozilla Firefox. The name "Firefox" (a reference to the red panda) was chosen for its similarity to "Firebird", and its uniqueness in the computing industry. To ensure that no further name changes would be necessary, the Mozilla Foundation began the process of registering Firefox as a trademark with the United States Patent and Trademark Office in December 2003. This trademark process led to a delay of several months in the release of Firefox 0.8 when the foundation discovered that Firefox had already been registered as a trademark in the UK for Charlton Company software. The situation was resolved when the foundation was given a license to use Charlton's European trademark.
Firefox version 1.0 was released on November 9, 2004. The launch of version 1.0 was accompanied by "a respectable amount of pre-launch fervor" including a fan-organized campaign to run a full-page ad in The New York Times.
Although the Mozilla Foundation had intended to make the Mozilla Suite obsolete and replace it with Firefox, the Foundation continued to maintain the suite until April 12, 2006 because it had many corporate users and was bundled with other software. The Mozilla community (as opposed to the Foundation) continues to release new versions of the suite, using the product name SeaMonkey to avoid confusion with the original Mozilla Suite.
Firefox 1.5
Firefox 1.5 was released on November 30, 2005. Originally, a version 1.1 was planned as the next release after 1.0, with development on a later version (1.5) taking place in a separate branch, but during 2005 both branches and their feature sets were merged (the Mozilla Foundation abandoned the 1.1 release plan after the first two alpha builds), resulting in an official release date between the dates originally planned for the two versions.
Version 1.5 implemented a new Mac-like options interface, the subject of much criticism from Microsoft Windows and Linux users, with a "Sanitize" action to allow someone to clear their privacy-related information without manually clicking the "Clear All" button. In Firefox 1.5, a user could clear all privacy-related settings simply by exiting the browser or using a keyboard shortcut, depending on their settings. Moreover, the software update system was improved (with binary patches now possible). There were also improvements in the extension management system, with a number of new developer features. In addition, Firefox 1.5 had preliminary SVG 1.1 support.
Behind the scenes, the new version resynchronized the code base of the release builds (as opposed to nightly builds) with the core "trunk", which contained additional features not available in 1.0, as 1.0 had branched from the trunk around the 0.9 release. As such, there was a backlog of bug fixes between 0.9 and the release of 1.0, which were made available in 1.5.
There were also changes in operating system support. As announced on 23 June 2005 by the Mozilla Foundation, Firefox 1.1 (which later became 1.5) and other new Mozilla products would no longer support Mac OS X v10.1, in order to improve the quality of Firefox releases on Mac OS X v10.2 and above. Firefox 1.5 was the final version supported on Windows 95.
Alpha builds of Firefox 1.5 (i.e., 1.1a1 and 1.1a2) did not carry Firefox branding; they were labelled "Deer Park" (Firefox 1.5's internal codename) and contained a different program icon. This was done to dissuade end-users from downloading preview versions, which are intended for developers only.
Firefox 2
On October 24, 2006, Mozilla released Firefox 2. This version included updates to the tabbed browsing environment, the extensions manager, the GUI (Graphical User Interface), and the find, search and software update engines. It also implemented a new session restore feature, inline spell checking, and an anti-phishing feature which was implemented by Google as an extension and later merged into the program itself.
In December 2007, Firefox Live Chat was launched. It allowed users to ask volunteers questions through a system powered by Jive Software, with guaranteed hours of operation and the possibility of help after hours.
Firefox 2.0.0.20 was the final version that could run under an unmodified installation of Windows NT 4.0, Windows 98, and Windows Me. Subsequently, Mozilla Corporation announced it would not develop new versions of Firefox 2 after the 2.0.0.20 release, but continued Firefox 2 development as long as other programs, such as Thunderbird mail client, depended on it. The final internal release was 2.0.0.22, released in late April 2009.
Firefox 3
Firefox 3 was released on June 17, 2008, by the Mozilla Corporation. Firefox 3 uses version 1.9 of the Mozilla Gecko layout engine for displaying web pages. This version fixes many bugs, improves standard compliance, and implements new web APIs. Other new features include a redesigned download manager, a new "Places" system for storing bookmarks and history, and separate themes for different operating systems.
Development stretched back to the first Firefox 3 beta (under the codename 'Gran Paradiso'), which had been released several months earlier on November 19, 2007, and was followed by several more beta releases in spring 2008, culminating in the June release. Firefox 3 had more than 8 million unique downloads the day it was released, setting a Guinness World Record.
Firefox 3.5
Version 3.5, codenamed Shiretoko, adds a variety of new features to Firefox. Initially numbered Firefox 3.1, Mozilla developers decided to change the numbering of the release to 3.5 in order to reflect a significantly greater scope of changes than originally planned. The final release was on June 30, 2009. The changes included much faster performance thanks to an upgrade to SpiderMonkey JavaScript engine called TraceMonkey and rendering improvements, and support for the <video> and <audio> tags as defined in the HTML5 specification, with a goal to offer video playback without being encumbered by patent problems associated with many video technologies. Cross-site XMLHttpRequests (XHR), which can allow for more powerful web applications and an easier way to implement mashups, are also implemented in 3.5. A new global JSON object contains native functions to efficiently and safely serialize and deserialize JSON objects, as specified by the ECMAScript 3.1 draft. Full CSS 3 selector support has been added. Firefox 3.5 uses the Gecko 1.9.1 engine, which includes a few features that were not included in the 3.0 release. Multi-touch touchpad support was also added to the release, including gesture support like pinching for zooming and swiping for back and forward. Firefox 3.5 also features an updated logo.
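For a sense of what two of these additions look like in page script, the snippet below uses the native JSON object for serialization and a cross-site XMLHttpRequest; in modern terms the cross-site behaviour is governed by the server's CORS response headers. The endpoint URL and payload are placeholder assumptions, not part of Firefox itself.

```typescript
// Native, safe (de)serialization without eval(), as standardized in ECMAScript.
const payload = JSON.stringify({ query: "gecko", page: 1 });
const roundTripped = JSON.parse(payload) as { query: string; page: number };

// A cross-site request; the browser enforces the server's CORS headers.
const xhr = new XMLHttpRequest();
xhr.open("GET", "https://api.example.org/search?q=" + encodeURIComponent(roundTripped.query));
xhr.onload = () => {
  if (xhr.status === 200) {
    console.log(JSON.parse(xhr.responseText));
  }
};
xhr.send();
```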
Firefox 3.6
Version 3.6, released on January 21, 2010, uses the Gecko 1.9.2 engine and includes several interface improvements, such as "personas". This release was referred to as 3.2 before 3.1 was changed to 3.5. The codename for this version was Namoroka. This is the last major, official version to run on PowerPC-based Macintoshes.
One minor update to Firefox 3.6, version 3.6.4 (code-named Lorentz) is the first minor update to make non-intrusive changes other than minor stability and security fixes. It adds Out of Process Plugins (OOPP), which runs plugins in a separate process, allowing Firefox to recover from plugin crashes. Firefox 3.6.6 lengthens the amount of time a plugin is allowed to be unresponsive before the plugin quits.
Firefox 4
On October 13, 2006, Brendan Eich, Mozilla's then-Chief-Technology-Officer, wrote about the plans for "Mozilla 2", referring to the most comprehensive iteration (since its creation) of the overall platform on which Firefox and other Mozilla products run. Most of the objectives were gradually incorporated into Firefox through versions 3.0, 3.5, and 3.6. The largest changes, however, were planned for Firefox 4.
After five "Alpha" releases, twelve "Beta" releases, and two "Release Candidate" versions, Firefox 4 was released on March 22, 2011, originally Firefox 3.7 (Gecko 1.9.3) during its alpha stage, brought a new user interface and is said to be faster. Early mockups of the new interface on Windows, Mac OS X, and Linux were first made available in July 2009. Other new features included improved notifications, tab groups, "switch to tab" where opened tabs can be searched through the address bar, application tabs, a redesigned add-on manager, integration with Firefox Sync, and support for multi-touch displays.
Firefox 4 was based on the Gecko 2.0 engine, which added or improved support for HTML5, CSS3, WebM, and WebGL. It also included a new JavaScript engine (JägerMonkey) and better XPCOM APIs.
See also
GNU IceCat
History of free and open-source software
History of Mozilla Application Suite
Mozilla Corporation software rebranded by the Debian project
Notes
References
Further reading
Eich, Brendan (2005). Branch Plan. In Mozilla Wiki. Retrieved December 21, 2005.
External links
Mozilla Firefox release notes for each version
Mozilla Firefox developer release notes for each version
Releases - MozillaWiki
unofficial changelogs for Firefox releases, Jesse Ruderman (last updated in 2008)
history of the Mozilla logo by Jamie Zawinski
Firefox browser for web 2.0 age, BBC News
Firefox
|
1371102
|
https://en.wikipedia.org/wiki/Maelstrom%20%281992%20video%20game%29
|
Maelstrom (1992 video game)
|
Maelstrom is a video game developed by Andrew Welch, released as shareware in November 1992 for Mac OS. The game is an enhanced clone of Atari, Inc.'s 1979 arcade game Asteroids, with a visual style similar to the Atari Games 1987 sequel, Blasteroids. Maelstrom was released when there were few action games for the high-resolution color displays of the Macintosh, so it garnered much interest despite the dated concept, and led to the creation of Ambrosia Software.
The game was later released as free and open-source software, resulting in ports for other platforms.
Gameplay
Maelstrom is played in a 2D overhead view of a section of space. The object of the game is to reach the highest score possible by shooting asteroids with a plasma cannon from a spaceship. The ship can move in any direction across the screen and also has a limited amount of shield. The player may also pick up power-ups and encounter unusual objects and enemies throughout the game.
Development
Maelstrom was created using THINK C and comprises 18,000 lines of C code with 9,000 lines of inline assembly language.
Reception
In 1993, Maelstrom won "Best New Macintosh Product" in the "Shareware Industry Awards for Best Game," as well as receiving other awards.
Legacy
Welch gave the source code to Sam Lantinga, who created an SDL port and released it in 1995. It included networked multiplayer.
In 1999, Ambrosia Software released Lantinga's version 3.0 as open-source software under the terms of the GNU General Public License (GPL).
In 2010, Andrew Welch and Ian Gilman released the game's contents under the free Creative Commons Attribution license, which makes Maelstrom completely free and open-source software.
References
External links
Official Page
Maelstrom 3.0 on libsdl.org
1992 video games
Video game clones
Ambrosia Software games
Open-source video games
Multidirectional shooters
Linux games
Classic Mac OS games
Commercial video games with freely available source code
Creative Commons-licensed video games
Video games developed in the United States
|
45001435
|
https://en.wikipedia.org/wiki/Khalid%20Maqbool%20Siddiqui
|
Khalid Maqbool Siddiqui
|
Khalid Maqbool Siddiqui is a Pakistani politician who served as Federal Minister for Information Technology and Telecommunication from 20 August 2018 to 6 April 2020. He has been a member of the National Assembly of Pakistan since August 2018 and the leader of the Muttahida Quami Movement Pakistan since February 2018.
Previously, he was a member of the National Assembly from 1990 to 1993, from 1997 to 1999, and again from June 2013 to May 2018. During his second tenure as a member of the National Assembly, he served as Federal Minister for Industries and Production from July 1997 to August 1998 in the cabinet of Prime Minister Nawaz Sharif.
Political career
As a student, he became chairman of All Pakistan Muttahidda Students Organization (APMSO) in 1989 while studying in Jinnah Sindh Medical University.
He was elected to the National Assembly of Pakistan as a candidate of Haq Parast Group (HPG) from Constituency NA-169 (Hyderabad-III) in 1990 general election. The Muttahida Qaumi Movement (MQM) had fielded candidates for the 1990 general election under the banner of the HPG. In 1993, he was appointed as deputy convener of MQM.
He was elected to the National Assembly as a candidate of HPG from Constituency NA-169 (Hyderabad-III) in 1997 Pakistani general election. In July 1997, he was inducted into the federal cabinet of Prime Minister Nawaz Sharif and was appointed as Federal Minister for Industries and Production where he continued to serve until August 1998.
He was re-elected to the National Assembly as a candidate of MQM from Constituency NA-219 (Hyderabad-I) in 2013 Pakistani general election. He received 141,035 votes and defeated Ali Muhammad Sehto, a candidate of Pakistan Peoples Party (PPP).
On 11 February 2018, he was elevated from deputy Convener of MQM to Convener of MQM after the Coordination Committee of MQM appointed him as the new Convener of the party, replacing Farooq Sattar. In retaliation, Farooq Sattar dissolved the party's Coordination Committee and called for a fresh intra-party election. On 18 February, Farooq Sattar was elected as Convener of the party in the intra-party election. The faction of MQM led by Siddiqui challenged the polls and filed a petition in the Election Commission of Pakistan (ECP), stating that since Farooq Sattar had been removed as Convener on 11 February, he had no authority to hold the intra-party elections. On 26 March 2018, the ECP ruled in favour of Siddiqui and removed Farooq Sattar as Convener of MQM.
On 28 March 2018, Farooq Sattar filed a petition in the Islamabad High Court against the decision of the ECP to suspend him as the Convener of the MQM. On 11 June, the Islamabad High Court dismissed the petition and upheld the orders of the ECP.
He was re-elected to the National Assembly as a candidate of MQM from Constituency NA-255 (Karachi Central-III) in 2018 Pakistani general election.
On 18 August, Imran Khan formally announced his federal cabinet structure and Siddiqui was named as Minister for Information Technology and Telecommunication. On 20 August 2018, he was sworn in as Federal Minister for Information Technology and Telecommunication in the federal cabinet of Prime Minister Imran Khan.
On 20 August 2018, a court declared Siddiqui a proclaimed absconder in a case regarding violation of the Loudspeaker Act.
On 6 April 2020, in a cabinet reshuffle, his resignation was accepted by Prime Minister Imran Khan.
References
Living people
Muttahida Qaumi Movement politicians
Pakistani MNAs 2013–2018
People from Sindh
Muttahida Qaumi Movement MNAs
Jinnah Sindh Medical University alumni
Politicians from Hyderabad, Sindh
Pakistani MNAs 2018–2023
Year of birth missing (living people)
|
33171468
|
https://en.wikipedia.org/wiki/Business%20operating%20system%20%28management%29
|
Business operating system (management)
|
The term business operating system (BOS) refers to a standard, enterprise-wide collection of business processes used in many diversified industrial companies. The definition has also been extended to include the common structure, principles and practices necessary to drive the organization.
Diversified industrial companies like Ingersoll Rand, Honeywell, and Danaher have adopted a standard, common collection of business processes and/or business process improvement methodologies which they use to manage strategy development and execution. In the case of Danaher, the business system is a core part of the company's culture and is seen as one of the key drivers of corporate performance.
The objectives of such systems are to ensure daily work is focused on the organisation's strategic objectives and is done in the most efficient way. The systems deal with the questions "why" (purpose of the work), "what" (specific objectives of the work) and "how" (the processes used to do the work). The Toyota Production System is focused on both how to make cars, and how to improve the way cars are made. A third objective can also be added, which is to improve the business system itself by identifying or improving the component tools and techniques.
Terminology
Terms used to describe such systems include:
XPS - meaning “Company-specific Production System" with the X standing in place of the company name
Business System
Management Operating System
Examples of business operating systems
Toyota's Toyota Production System (TPS) is one of the earliest examples, developed between 1948 and 1975
Danaher is well known for its Danaher Business System (DBS)
Fortive (a company that split from Danaher in 2016) has the Fortive Business System (FBS) which is derived from the DBS
Ingersoll Rand established the Ingersoll Rand business operating system (BOS) to describe the six enterprise focus areas and its process improvement method (Lean Six Sigma).
Honeywell has the Honeywell Operating System (HOS)
United Technologies has the Achieving Competitive Excellence (ACE) Operating System
Idex Corporation has the Idex Operating Model
ETW (Execute to Win)
Bosch Production System
Boeing Production System
Audi Production System
Lego Production System
John Deere Quality and Production System
Alcoa Business System
REC Production System
Electrolux Manufacturing System
Novo Nordisk - "cLEAN”
Trumpf - “Synchro”
Magellan Operating System (Magellan Aerospace)
Americold has the "Americold Operating System" (AOS)
List of common features
Many business operating systems share common features, because the systems are derived from other known systems and from established methods and practices for business management. The following is a list of features that appear in several systems.
Hoshin Kanri, a strategic planning methodology developed by Yoji Akao, used to create goals, assign them measurable milestones, and assess progress against those milestones (Hoshin Kanri is also referred to as Policy Deployment or X-Matrix)
Standard work - the best and most reliable methods and sequences for each work process which is used as the basis for sustaining improvements
Process improvement methodologies: Lean, Six Sigma, and Kaizen are popular approaches incorporated in business systems
Just-in-time manufacturing
Gemba walks
Jidoka - "automation with a human touch" where human intervention is used to improve machine performance
Visual controls or visual management where management processes (e.g. checking) use simple graphics to show problems at a glance
Problem solving techniques such as root cause analysis
Technology: While these standard business operating systems may inform or be linked to a company's technology platform, they more commonly refer to the way the company manages complex business processes in a common way across its diverse portfolio of businesses.
References
Business terms
|
26677600
|
https://en.wikipedia.org/wiki/HTC%20Evo%204G
|
HTC Evo 4G
|
The HTC Evo 4G (trademarked in capitals as EVO 4G, also marketed as HTC EVO WiMAX ISW11HT in Japan) is a smartphone developed by HTC Corporation and marketed as Sprint's flagship Android smartphone, running on its WiMAX network. The smartphone launched on June 4, 2010 and was the first 4G enabled smartphone released in the United States.
History
During development, the device was known as the HTC "Supersonic", which was leaked through the Internet and was known as a variant of the HTC HD2 running Android.
The EVO was released on June 4, 2010 in the United States through Sprint. The device became the top-selling launch day phone on Sprint, surpassing the Palm Pre, Samsung Instinct and Motorola Razr V3.
Features
The HTC EVO features hardware very similar to the HTC HD2, a smartphone running Windows Mobile. The device is sometimes referenced as the Android version of the HTC HD2 although a variety of features are only available on the EVO 4G (video calling for example).
Screen and input
The EVO proved a trendsetter among Android phones. Unlike many other smartphones at the time of its release, the EVO has a large 4.3-inch (480-by-800) TFT LCD capacitive touchscreen display with a pixel density of 217 pixels per inch (ppi). Larger sizes are commonplace now, but in mid-2010 this was quite innovative. The display is designed to be used with a bare finger or multiple fingers at one time for multi-touch sensing. Most gloves and styli prevent the electrical conductivity needed for use on the capacitive display.
The EVO balances its user interface between hardware and software, featuring seven hardware/touch-sensitive buttons, four of which are on the front of the device; most situations in Android OS require frequent use of these buttons. Like most Gingerbread-era Android devices, the EVO features four main touch-sensitive buttons on the front: Home, Menu, Back, and Search. The Home button returns to the Sense home screen. The Menu button shows menu options in various applications, although it can also be used for other purposes; the Back button returns to the prior page or screen displayed; and the Search button mainly allows searching through the phone but can be used for other purposes in various applications. Unlike iPhones, the device does not feature a hardware ringer switch. The volume adjustment control is located on the right spine. A multifunction sleep/wake button is located on the top of the device, which serves as the unit's power and sleep button and also controls phone calls. The touchscreen furnishes the remainder of the user interface.
The device responds to four sensors. A proximity sensor deactivates the display and touchscreen when the device is brought near the face during a call. This is done to save battery power and to prevent inadvertent inputs via users' faces and ears. An ambient light sensor adjusts the display brightness, which in turn saves battery power. A 3-axis accelerometer senses the orientation of the phone and changes the screen accordingly, allowing users to easily switch between page orientation modes. A geomagnetism sensor provides orientation with respect to Earth's magnetic field. The proximity sensor and the accelerometer can also be used to control and/or interact with third party apps, notably games. The device also contains a temperature sensor used for monitoring the temperature of the battery.
The device also features a GPS chip, allowing applications (with user permission) to report device location allowing for location-based services and can also be useful to turn-by-turn navigation apps.
Processor and memory
The EVO is powered by the Qualcomm QSD8650 chipset that contains a Snapdragon Scorpion microprocessor clocked at 1 GHz and an embedded Adreno 200 graphics chip capable of up to 22 million triangles per second.
It features 512 MB of eDRAM, which allows for a smoother experience with Android OS, applications, and the HTC Sense user interface. The device also features 1024 MB of built-in ROM that is mainly used for the system software.
Cameras
The EVO features a rear-facing, backside-illuminated 8-megapixel camera capable of recording 720p video at 30 frames per second, with a dual photoflash that helps to illuminate objects in low-light conditions. In addition, the EVO has a 1.3-megapixel camera on the front of the device designed for use with video calling and for taking portrait images, although it can also be used in other applications. The front-facing camera does not work on any version of Android higher than 2.3.7 Gingerbread.
Storage
Like many other Android mobile devices, the HTC EVO 4G features a microSD slot in addition to the built-in ROM that allows for user-expandable storage. The device supports microSD cards of sizes up to 32 GB. With Android version 2.2+ (Froyo) available as an over-the-air upgrade, the OS supports applications that permit themselves to be installed on the SD card.
The device comes pre-installed with an 8 GB microSDHC card of Class 2 or 4.
Audio and output
The rear of the EVO sports a speaker that serves as the main speaker for music and most other applications. A second speaker, which serves as the earpiece, is located above the screen. The microphone is on the bottom of the phone and is used for phone calls and voice commands, although it can also be used by many other third-party applications. The unit has an HDMI-out (type D, micro connector) port, which allows sending content to an HD television set. The Sprint Mobile Hotspot application allows sharing the device's mobile broadband with up to eight devices.
Smartphone connectivity
The EVO features a CDMA cellular radio that supports 3G EV-DO Revisions 0 and A, as well as the then-undeployed Revision B, allowing faster download and upload speeds and greater power efficiency. It also supports WiMAX, a protocol known as 802.16e, with speeds of up to 10 Mbit/s on the downlink and 1.5 Mbit/s on the uplink. The device is marketed as a 4G phone; WiMAX is considered a 4G technology under the 4G standards set by the ITU-R. 4G on this device does not work on Android versions above 2.3.7 Gingerbread.
Battery and power
The device comes pre-installed with a 1500 mAh Li-ion rechargeable battery that is designed to be user-replaceable. The battery is interchangeable with batteries from the HTC Incredible, HTC Touch Pro 2, HTC Arrive, and HTC Hero (CDMA). Standby time for the pre-installed battery is 146 hours and talk time is 5 hours 12 minutes.
Software
The device sports the HTC Sense user interface, which runs on top of the Android operating system, presents information through Android desktop widgets and applications, and includes launcher, app drawer, and lock screen replacements. Sense also brings a modified browser and home screen. The device first shipped with Android OS 2.1 "Eclair", although Android OS 2.2 "Froyo" was later rolled out over the air (OTA), making it the third device to officially run "Froyo" and the first to have it officially rolled out by a US network. Exactly a year after the phone's official release, the EVO received an update to Android 2.3.3 (Gingerbread). The software could be manually installed by searching for a software update, and began being pushed to HTC EVOs across Sprint on June 6, 2011. Improvements aside from the upgraded Android OS included a fix for battery issues, increased battery life, the ability to sync multiple Gmail accounts, and a few user interface tweaks. A second update was pushed by Sprint on June 20, 2011, fixing magnetometer (compass) issues, Netflix streaming, voicemail notifications, and hearing aid compatibility. Like many phones of its generation, the EVO did not officially receive the Android Ice Cream Sandwich update and remained on the 2.x versions of Android.
The EVO has also seen support from the developer community with Android versions including Ice Cream Sandwich 4.0, Jelly Bean 4.1 - 4.3, and finally KitKat 4.4, some of which are available with HTC Sense while other available versions are based on stock Android. As of the end of 2016, there is very little, if any, support from the community due to the age of the device and the fact that it is nearly impossible to run newer versions of Android (5.0+) on this device.
Interface
In HTC Sense, the interface is based around seven home screen panels that allow user customization. By default, the center home screen panel features a digital clock at the top of the screen and weather animations of the current weather in the device's location; the remaining space below can be customized to user preferences. The launcher, located at the bottom of the screen, displays icons to open the App Drawer and the Phone application, along with the ability to add widgets to the Android desktop, and is shown throughout all seven home screen panels. Users can switch from one panel to another by sliding left or right. A small bar that sits on top of the launcher indicates the panel the device is currently viewing. Pinching the home screen (or pressing the Home button if the user is on the center panel) brings up the Leap screen, showing thumbnail views of all the home screen panels and allowing users to "leap" to another home screen panel easily. Unlike other custom user interfaces for the Android OS, such as Samsung's TouchWiz UI, HTC Sense does not allow disabling or removing a panel.
Most of the input on the device is given through the touchscreen, which understands complex gestures using multi-touch. Android's interaction techniques enable moving up or down by a touch-drag motion of the finger. However, the buttons on the front of the device also require frequent use throughout various applications in Android OS, as they play an important part in the user interface.
Criticism
30 frames per second cap
Some users have experienced noticeable graphics lag and/or slowness while using the phone. Various reports throughout the Internet indicated that the device may have a 30 frames per second cap. An HTC representative announced that it was a hardware cap, not subject to software updates. Despite the repeated claims regarding the supposed hardware cap, HTC released an update on September 22, 2010 that, among other things, removed the 30 FPS cap.
Screen
There have been many problems with the screen. Some of the first customers complained of screen separation; HTC acknowledged the problem and was able to limit the number of affected units. Another problem with the screen has been a bright spot in the lower area of the screen. This problem is commonly referred to as the B-spot because it is exactly where the B on the keyboard is. This bright spot is only noticeable when using a bright background. The B-spot is also noticeable when screen display is set to automatic brightness.
Device clock reference
The device clock is 15 seconds faster than Coordinated Universal Time (UTC), whether running Android 2.1, 2.2 or 2.3; a manual clock setting does not override the seconds, and root access would be needed to work around the issue using Network Time Protocol (NTP) software. The 15-second offset matches the number of leap seconds introduced since GPS time's inception in 1980, suggesting a leap-second handling problem rather than an epoch issue; it was reported as an Android bug on December 16, 2009. Certain Android phones, including the Samsung Epic, do not exhibit the issue, possibly because their firmware fetches a UTC value rather than a GPS value from network time, or subtracts the leap-second offset. Sprint, HTC, the OHA, Google and goodandevo.net were informed of the issue before October 2010; it was tracked as Android bug 5485. HTC update 3.70.651.1, released on December 15, 2010, did not fix the issue, and neither did build 4.22.651.2 (Android 2.3.3), released on June 3, 2011. Finally, on January 19, 2012, HTC software update 4.67.651.3 fixed the issue, just a week before the end of life for the EVO was announced.
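The arithmetic behind the suspected cause can be sketched as follows: GPS time runs ahead of UTC by the accumulated leap seconds (15 as of 2010), so a clock that displays a GPS-derived time without subtracting that offset ends up 15 seconds fast. The function and values below are illustrative assumptions, not the device's actual firmware code.

```typescript
// GPS - UTC offset in mid-2010: 15 accumulated leap seconds since the 1980 GPS epoch.
const GPS_UTC_LEAP_SECONDS_2010 = 15;

function gpsSecondsToUtcDate(gpsDerivedUnixSeconds: number, leapSeconds: number): Date {
  // Correct handling subtracts the accumulated leap seconds before display.
  return new Date((gpsDerivedUnixSeconds - leapSeconds) * 1000);
}

// Buggy handling effectively uses leapSeconds = 0, leaving the clock 15 s fast.
const fromNetwork = 1_287_000_015; // hypothetical GPS-derived value
console.log(gpsSecondsToUtcDate(fromNetwork, GPS_UTC_LEAP_SECONDS_2010).toISOString());
console.log(gpsSecondsToUtcDate(fromNetwork, 0).toISOString()); // 15 s ahead of the first
```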
Battery life
Users have complained that the battery life for the Evo is inadequate and incapable of lasting one day of normal use. This has spawned the creation of pages dedicated to explaining how to optimize the battery life, and even an aftermarket extra large battery that enlarges the unit. In addition, there is a software error that causes a severely depleted battery to become unable to be charged in the phone. The OTA upgrade to Android 2.3.3 fixes this issue along with improving device battery life on the whole.
Design
The EVO's design is derived from its Windows Mobile-based sibling, the HTC HD2, which also has a multi-touch capacitive touchscreen, nearly the same slim profile, and the same placement of most general components and buttons. Although similar, the EVO has features that distinguish it from the HTC HD2, including the front-facing camera, the circular rear camera, an integrated kickstand, and touch-sensitive Android-specific buttons instead of hardware buttons. The device has nearly the same dimensions as the HD2.
Warranty
The phone comes with a one-year warranty, which does not cover scratches, cracks, smudge marks, liquid damage, and other forms of physical damage. Sending phones for repair is typically handled by the phone service provider, rather than through HTC.
See also
Android (operating system)
HTC Evo Shift 4G
HTC Evo 3D
HTC Evo Design 4G
HTC Evo 4G LTE
References
External links
HTC EVO 4G PC World Review
HTC EVO 4G Specs
HTC EVO 4G LTE Specs
Mobile phones introduced in 2010
Discontinued smartphones
EVO
Android (operating system) devices
Mobile phones with user-replaceable battery
|
48077208
|
https://en.wikipedia.org/wiki/Sci-Hub
|
Sci-Hub
|
Sci-Hub is a shadow library website that provides free access to millions of research papers and books, without regard to copyright, by bypassing publishers' paywalls in various ways. Sci-Hub was founded by Alexandra Elbakyan in 2011 in Kazakhstan in response to the high cost of research papers behind paywalls. The site is extensively used worldwide. In September 2019, the site's owners said that it served approximately 400,000 requests per day. Sci-Hub reported on January 10, 2022 that its collection comprises 85,258,448 scientific articles, which is equivalent to 95% of all scientific journal articles with issued DOI numbers.
Sci-Hub and Elbakyan were sued twice for copyright infringement in the United States in 2015 and 2017, and lost both cases by default, leading to loss of some of its Internet domain names. The site has cycled through different domain names since then.
Sci-Hub has been lauded by some in the scientific, academic, and publishing communities for providing access to knowledge generated by the scientific community, often from some share of public funding. Publishers have criticized it for violating copyright, reducing the revenue of publishers, potentially being linked to activities compromising universities' network security (although the cybersecurity threat posed by Sci-Hub may have been exaggerated by publishers), and instigating publishers to make paywalls stricter.
Elbakyan responded by questioning the morality of the publishers' business and the legality of their methods in regards to the right to science and culture under Article 27 of the Universal Declaration of Human Rights, while maintaining that Sci-Hub should be "perfectly legal".
History
Sci-Hub was created by Alexandra Elbakyan, who was born in Kazakhstan in 1988. Elbakyan earned her undergraduate degree at Kazakh National Technical University studying information technology, then worked for a year for a computer security firm in Moscow, then joined a research team at the University of Freiburg in Germany in 2010 that was working on a brain–computer interface. She then became interested in transhumanism and after attending a transhumanism conference in the United States, Elbakyan spent her remaining time in the country doing a research internship at Georgia Institute of Technology.
She later returned to Kazakhstan, where she started research in a Kazakh university. According to Elbakyan, she experienced difficulty accessing scientific papers relevant to her research project. She began contributing to online forums dedicated to sharing research papers. In 2011, she developed Sci-Hub to automatically share papers. The site was launched on September 5, 2011.
In May 2021, Sci-Hub users collaborated to preserve the website's data, anticipating that the site may go offline. In September 2021, the site celebrated the tenth anniversary of its launch date by uploading over 2.3 million articles to its database.
Legal status
Sci-Hub has cycled through domain names, some of which have been blocked by domain registry operators. Sci-Hub remained reachable via alternative domains such as .io, then .cc, and .bz. Sci-Hub has also been accessible at times by directly entering the IP address, or through a .onion Tor Hidden Service.
On 8 January 2021, Twitter suspended Sci-Hub's account citing "counterfeit content" as the reason.
United States
In 2015, Elsevier filed a lawsuit against Sci-Hub, in Elsevier et al. v. Sci-Hub et al., at the United States District Court for the Southern District of New York. Library Genesis (LibGen), which may be based in either the Netherlands or Russia, was also a defendant in the case. It was the largest copyright infringement case that had been filed in the US, or in the world, at the time. Elsevier alleged that Sci-Hub violated copyright law and induced others to do so, and it alleged violations of the Computer Fraud and Abuse Act as well as inducements to violate that law. Elsevier asked for monetary damages and an injunction to stop the sharing of the papers.
Elbakyan responded to the case in an interview by accusing Elsevier of violating the right to science and culture under Article 27 of the Universal Declaration of Human Rights. She later wrote a letter to the court about the case describing her reasons for creating Sci-Hub, in which she stated, "Payment of 32 dollars [for each download] is just insane when you need to skim or read tens or hundreds of these papers to do research."
At the time the website was hosted in St. Petersburg, Russia, where judgments made by American courts were not enforceable, and Sci-Hub did not defend the lawsuit. In June 2017, the court awarded Elsevier US$15 million in damages for copyright infringement by Sci-Hub and others in a default judgment. The judgment found that Sci-Hub used accounts of students and academic institutions to access articles through Elsevier's platform ScienceDirect. The judgment also granted the injunction, which led to the loss of the original sci-hub.org domain.
In June 2017, the American Chemical Society (ACS) filed a lawsuit against Sci-Hub in the United States District Court for the Eastern District of Virginia, alleging copyright and trademark infringement; it sought judgment US$4.8 million from Sci-Hub in damages, and Internet service provider blocking of the Sci-Hub website. On 6 November 2017, the ACS was granted a default judgment, and a permanent injunction was granted against all parties in active concert or participation with Sci-Hub that has notice of the injunction, "including any Internet search engines, web hosting and Internet service providers, domain name registrars, and domain name registries", to cease facilitating access to the service. On 23 November 2017, four Sci-Hub domains had been rendered inactive by the court order and its CloudFlare account was terminated.
In 2018 and 2019, the White House Office of the US Trade Representative named Sci-Hub as one of the most flagrant "notorious market" sites in the world.
In December 2019, The Washington Post reported that the US Justice Department was investigating whether Elbakyan had links with Russian intelligence, in part due to the assumption that tacit approval or assistance of the Russian government is required for an operation of the scale of Sci-Hub. Elsevier has used accusations over the alleged security threat that Sci-Hub poses to institutions to encourage educational institutions to block its use.
Sweden
In October 2018, Swedish ISPs were forced to block access to Sci-Hub after a court case brought by Elsevier; Bahnhof, a large Swedish ISP, in return soft-blocked the Elsevier website.
Russia
In November 2018, Russia's Federal Service for Supervision of Communications, Information Technology and Mass Media blocked Sci-Hub and its mirror websites after a Moscow City Court ruling to comply with Elsevier's and Springer Nature's complaints regarding intellectual property infringement. The site moved to another domain and remained available online as of 22 January 2022.
France
On 7 March 2019, following a complaint by Elsevier and Springer Nature, a French court ordered French ISPs to block access to Sci-Hub and Library Genesis. However, the court order did not affect the academic network Renater, through which most French academic access to Sci-Hub presumably goes.
Belgium
Following the lawsuit by Elsevier in March 2019 in France, Elsevier, Springer, John Wiley, and Cambridge University Press filed a complaint against Proximus, VOO, Brutélé and Telenet to block access to Sci-Hub and LibGen. The publishers claimed to represent more than half of the scientific publishing sector, and indicated that over 90% of the contents on the sites infringed copyright laws; they won the lawsuit. Since then, the two sites have been blocked by those ISPs; visitors are instead redirected to a stop page of the Belgian Federal Police, which cites the illegality of the sites' content under Belgian law.
The European Commission included Sci-Hub in its "Piracy Watch List".
India
In December 2020, Elsevier, Wiley and the American Chemical Society filed a copyright infringement lawsuit against Sci-Hub and Library Genesis in the Delhi High Court. The plaintiffs seek a dynamic injunction which means that any future domain, IP or name-change by the respondents will not require the plaintiffs to return to court for an additional injunction. The court restricted the sites from uploading, publishing or making any article available until 6 January 2021. In response to the lawsuit, as well as to Elbakyan's claim that the FBI had requested data from her Apple account, Reddit users on the subreddit r/DataHoarder organized to download and seed backups of the articles on Sci-Hub, with the intention of creating a decentralized and uncensorable version of the site.
The High Court agreed on 6 January 2021 to wait before passing any interim order in the case until they hear representations from scientists, researchers and students. A hearing was scheduled for 16 December 2021. A key component of Sci-Hub's legal defence is that it provides educational resources to researchers, and thus falls under a fair dealing exception in India's copyright law. This defence has previously been used by educational institutions to justify the reproduction of copyrighted materials for use by low-income students. A number of Indian academics have offered support to Sci-Hub after the lawsuit was filed. Lawrence Liang assisted in drawing support from the country's scientific community for the website. Multiple petitions were filed by scholars in India supporting Sci-Hub in the lawsuit.
United Kingdom
In February 2021, a ruling by a UK court granted Elsevier and Springer Nature an injunction requiring the ISP TalkTalk to block the sci-hub.se domain. In March 2021, the City of London Police's Intellectual Property Crime Unit issued a warning to students and universities against accessing the website and urged universities to block it, alleging that the site could steal users' credentials (mainly in order to download content from publishers) and cause visitors to "inadvertently download potentially dangerous content". Elbakyan denied the allegation.
Website
The site's operation is financed by user donations. Elbakyan writes the PHP code and sets up and maintains the Linux web servers herself, to avoid the risk of moles or a fractured team compromising the service. Over the years, Sci-Hub has used various URLs and direct IP addresses, as dozens of its domain names have been confiscated by legal authorities.
Article sourcing
Sci-Hub obtains paywalled articles using leaked credentials. The source of the credentials used by Sci-Hub is unclear. Some appear to have been donated, some were apparently sold before going to Sci-Hub, and some appear to have been obtained via phishing and were then used by Sci-Hub. Elbakyan denied personally sending any phishing emails and said, "The exact source of the passwords was never personally important to me."
According to The Scholarly Kitchen, a blog established by the Society for Scholarly Publishing whose members are involved in legal action against Sci-Hub, credentials used by Sci-Hub to access paywalled articles are correlated with access to other information on university networks (such as cyber spying on universities) and with credential sales on black markets. Several articles have reported that Sci-Hub has penetrated the computer networks of more than 370 universities in 39 countries. These include more than 150 institutions in the US, more than 30 in Canada, 39 in the UK and more than 10 in Sweden. The universities in the UK include Cambridge, Oxford, Imperial College London and King's College London.
Delivery to users
The Sci-Hub website provides access to articles from almost all academic publishers, including Elsevier, Springer/Nature, Institute of Electrical and Electronics Engineers, American Chemical Society, Wiley Blackwell, and the Royal Society of Chemistry, as well as open-access works, and distributes them without regard to publishers' copyrights. It does not require subscriptions or payment.
Users can access works from all sources with a unified interface, by entering the DOI in the search bar on the main page or in the Sci-Hub URL (like some academic link resolvers), or by appending the Sci-Hub domain to the domain of a publisher's URL (like some academic proxies). Sci-Hub redirects requests for some gold open access works, identified as such beyond the metadata available in CrossRef and Unpaywall. Some requests require the user to enter a CAPTCHA. Papers can also be accessed using a bot in the instant messaging service Telegram.
If the paper is in the repository already, the request is served immediately. If the paper is not already in the repository, a wait screen appears while the site presents someone else's credentials on behalf of the user to a series of proxies until it finds one that has access to the paper, which is then presented to the user and stored in the repository.
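The following is a minimal sketch, in Python, of the cache-or-fetch flow described above. The repository, the "proxy" fetchers and the sample data are stand-ins invented for this illustration; they show only the sequence of steps, not Sci-Hub's actual implementation.
# Illustrative sketch only: the repository and proxy fetchers are hypothetical.
def serve_request(doi, repository, proxy_fetchers):
    # 1. Serve the paper immediately if it is already cached in the repository.
    if doi in repository:
        return repository[doi]
    # 2. Otherwise try a series of fetchers (each representing a set of donated or
    #    leaked credentials) until one can retrieve the paper from the publisher.
    for fetch in proxy_fetchers:
        pdf = fetch(doi)
        if pdf is not None:
            repository[doi] = pdf   # cache permanently for future requests
            return pdf
    # 3. Report failure if no fetcher has access to the paper.
    return None

# Tiny demonstration with fake data (the DOIs are example identifiers).
repository = {"10.1000/xyz123": b"cached paper"}
fetchers = [lambda doi: b"fetched paper" if doi == "10.1000/abc999" else None]
print(serve_request("10.1000/xyz123", repository, fetchers))
print(serve_request("10.1000/abc999", repository, fetchers))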
Until the end of 2014, Sci-Hub relied on LibGen as storage: papers requested by users were requested from LibGen and served from there if available, otherwise they were fetched by other means and then stored on LibGen. The permanent storage made it possible to serve more users than the previous system of deleting the cached content after 6 hours.
Since 2015, Sci-Hub has used its own storage for the same purpose. As of 2017, Sci-Hub was continuing to redirect requests for electronic books to LibGen.
After the site faced increased legal pressure in 2021, archivists initiated a rescue mission to secure enduring access to the website and its contents. They organized on a Reddit website to coordinate decentralized storage and delivery of Sci-Hub contents using BitTorrent technology.
Usage and content statistics
In February 2016, the website claimed to serve over 200,000 requests per day—an increase from an average of 80,000 per day before the "sci-hub.org" domain was blocked in 2015.
Server log data gathered from September 2015 to February 2016 and released by Elbakyan in 2016 revealed some usage information. A large amount of Sci-Hub's user activity came from American and European university campuses, and when adjusted for population, usage of Sci-Hub was high for developed countries. However, a large proportion of download requests came from developing countries such as Iran, China, India, Russia, Brazil, and Egypt. User activity covered many branches of science.
In March 2017, the website had 62 million papers in its collection, which were found to include 85% of the articles published in paywalled scholarly journals. Although only 69% of all published articles were in the database in March 2017, it has been estimated, based on scholarly citations from articles published between 2015 and July 2017, that at least 96% of requests for paywalled articles are successful. The gap between the two figures arises because coverage is weighted by demand: recent and frequently cited paywalled articles, which account for most requests, are far more likely to already be in the collection than older or rarely requested works.
A 2019 study of 27.8 million download requests via Sci-Hub indicates that 23.2 million of these were for journal articles, 4.7 million (22%) of which were articles from medical journals. The requests for medical literature came mostly from middle- and low-income countries (69%); the countries with the most requests in absolute numbers were India, China, the US, Brazil, and Iran.
On 27 July 2020, the Sci-Hub website reported that the cumulative number of downloads from the database exceeded one billion, that the average number of downloads per day was 300–600 thousand, and that the database continued its expansion into the pre-digital age, particularly into journal articles published prior to 1980. Among achievements in 2019, Sci-Hub reported the publication of about 15,000 letters by Charles Darwin, most of which were not available free of charge, although their copyrights had expired over 100 years ago. In 2019 Elbakyan also reported plans to allow access to Supplemental Information of journal articles in addition to the main texts, which are already available.
In the context of the big deal cancellations by several library systems in the world, in 2019, the wide usage of Sci-Hub was credited as one of the factors which reduced the apparent value of the subscriptions to toll access resources.
A 2020 study by researchers from four countries on three continents found that articles downloaded from Sci-Hub were cited 1.72 times more than papers not downloaded from Sci-Hub; the study's methods and conclusions were disputed by Phil Davis in a Scholarly Kitchen article.
In a 2021 study conducted by the National Institute of Science, Technology and Development Studies and Banaras Hindu University on the use of Sci-Hub in India, 13,144,241 out of 150,575,861 download requests in 2017 were found to have come from Indian IP addresses. Of the research papers downloaded in India, 1,050,62 (18.46%) were already available in some form of open access. Indian users requested an average of 39,952 downloads per day from Sci-Hub in 2017.
A 2018 study found relatively low use of Sci-Hub in China. This was attributed to the blocking of many Sci-Hub hosting sites by the Cyberspace Administration of China and to the existence of a Chinese twin of Sci-Hub, which is not accessible outside China and is unknown to Western publishers. These reports contradict data released by Elbakyan in February 2022, which show China having the largest number of downloads of any country, the US second (roughly 38% of China's downloads), and France third (about 24% of the US figure). In February 2022, Elbakyan also reported that regular URL access to Sci-Hub domains is blocked in the UK, although UK users can still reach the site via a VPN through the US or another country.
User location
A 2016 analysis of 28 million requests to Sci-Hub, published in Science under the title "Who's downloading pirated papers? Everyone", shows a map of Sci-Hub users with dots all over the world.
Archiving of scientific research
Sci-Hub effectively does academic archiving outside the bounds of contemporary copyright law, and, unlike Web archiving initiatives such as the Internet Archive, also provides access to academic works that do not have an open access license. There are data dumps of papers available on Sci-Hub.
In response to the COVID-19 pandemic, a group of online archivists have used Sci-Hub to create an archive of over 5000 articles about coronaviruses. They admit that making the archive openly accessible is illegal, but consider it a moral imperative.
Reception
Users perceive Sci-Hub's interface as more convenient and as providing a better user experience than the typical interfaces available to those with paid subscription access.
Sci-Hub has been lauded as having "changed how we access knowledge". It raised awareness of scientific publishing business models and the ethics of making researchers' institutions pay for articles that researchers themselves provide and review without payment.
Support for open-access science publishing extends beyond Sci-Hub; Plan S is an initiative launched by Science Europe on 4 September 2018. It is an initiative of "cOAlition S", a consortium launched by major national research agencies and funders from twelve European countries. The plan requires scientists and researchers who benefit from state-funded research organisations and institutions to publish their work in open repositories or in journals that are available to all by 2021. The initiative is not a law.
Scientists in some European countries began negotiations with Elsevier and other academic publishers on introducing national open access.
Publishers have been very critical of Sci-Hub, going so far as to claim that it is undermining more widely accepted open-access initiatives, and that it ignores how publishers "work hard" to make access for third-world nations easier. It has also been criticized by librarians for compromising universities' network security and jeopardizing legitimate access to papers by university staff. The cybersecurity threat posed by Sci-Hub has been questioned, and it has been suggested that the threat has been exaggerated by large publishers keen to protect their business model by discrediting Sci-Hub or by pushing universities to block students' access to it. However, even prominent Western institutions such as Harvard and Cornell have had to cut back their access to publications due to ever-increasing subscription costs, potentially causing some of the highest use of Sci-Hub to be in American cities with well-known universities (though this may be due to the convenience of the site rather than a lack of access). Sci-Hub can be seen as one venue in a general trend in which research is becoming more accessible. Many academics, university librarians and longtime advocates for open scholarly research believe Elbakyan is "giving academic publishers their Napster moment", referring to the illegal music-sharing service that "disrupted and permanently altered the industry".
For her actions in creating Sci-Hub, Elbakyan has been called a hero and a "spiritual successor to Aaron Swartz", who in 2010 downloaded millions of academic articles from JSTOR. She has also been compared to Edward Snowden and called a "Robin Hood of science".
Elbakyan responded by attacking "a double-dipping model, that benefits only to publishers, whilst creating an illusion of conformance with the Open Access goals", in a reference to hybrid open-access journals run by legacy publishers (like Elsevier and ACS), which charge APCs for some articles to make them gratis open access, while still selling subscriptions and other licenses to access the same journals.
In August 2016, the Association of American Publishers sent a letter to Gabriel J. Gardner, a researcher at California State University who has written papers on Sci-Hub and similar sites. The letter asked Gardner to stop promoting the site, which he had discussed at a session of a meeting of the American Library Association. In response, the publishing institution was highly criticized for trying to silence legitimate research into the topic, and the letter has since been published in full, and responded to by the dean of library services at Cal State Long Beach, who supported Gardner's work.
In December 2016, Nature Publishing Group named Elbakyan as one of the ten people who most mattered in science in 2016.
See also
Explanatory notes
References
Further reading
Transcript and translation of "Why Science is Better with Communism? The Case of Sci-Hub", a presentation by Alexandra Elbakyan at the University of North Texas Open Access Symposium 2016.
"Sci-Hub/LibGen in Blogs and the Media (PDF)" Stephen Reid McLaughlin, March 2016
Academic publishing
File sharing communities
Intellectual property activism
Internet properties established in 2011
Open access projects
Science websites
Search engine software
Shadow libraries
Tor onion services
|
8762082
|
https://en.wikipedia.org/wiki/Mark%20I%20Fire%20Control%20Computer
|
Mark I Fire Control Computer
|
The Mark 1, and later the Mark 1A, Fire Control Computer was a component of the Mark 37 Gun Fire Control System deployed by the United States Navy during World War II and up to 1991, and possibly later. It was originally developed by Hannibal C. Ford of the Ford Instrument Company and William Newell. It was used on a variety of ships, ranging from destroyers (one per ship) to battleships (four per ship). The Mark 37 system used tachymetric target motion prediction to compute a fire control solution. It contained a target simulator which was updated by further target tracking until it matched.
Weighing more than , the Mark 1 itself was installed in the plotting room, a watertight compartment that was located deep inside the ship's hull to provide as much protection against battle damage as possible.
Essentially an electromechanical analog computer, the Mark 1 was electrically linked to the gun mounts and the Mark 37 gun director, the latter mounted as high on the superstructure as possible to afford maximum visual and radar range. The gun director was equipped with both optical and radar range finding, and was able to rotate on a small barbette-like structure. Using the range finders and telescopes for bearing and elevation, the director was able to produce a continuously varying set of outputs, referred to as line-of-sight (LOS) data, that were electrically relayed to the Mark 1 via synchro motors. The LOS data provided the target's present range, bearing, and in the case of aerial targets, altitude. Additional inputs to the Mark 1A were continuously generated from the stable element, a gyroscopic device that reacted to the roll and pitch of the ship, the pitometer log, which measured the ship's speed through the water, and an anemometer, which provided wind speed and direction. The Stable Element would now be called a vertical gyro.
In "Plot" (the plotting room), a team of sailors stood around the Mark 1 and continuously monitored its operation. They would also be responsible for calculating and entering the average muzzle velocity of the projectiles to be fired before action started. This calculation was based on the type of propellant to be used and its temperature, the projectile type and weight, and the number of rounds fired through the guns to date.
Given these inputs, the Mark 1 automatically computed the lead angles to the future position of the target at the end of the projectile's time of flight, adding in corrections for gravity, relative wind, the Magnus effect of the spinning projectile, and parallax, the latter compensation necessary because the guns themselves were widely displaced along the length of the ship. Lead angles and corrections were added to the LOS data to generate the line-of-fire (LOF) data. The LOF data, bearing and elevation, as well as the projectile's fuze time, was sent to the mounts by synchro motors, whose motion actuated hydraulic servos with excellent dynamic accuracy to aim the guns.
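As a rough illustration of the tachymetric idea (not of the Mark 1's actual mechanism), the following Python sketch iterates a time-of-flight estimate against a constant-velocity target until the two agree. The target data, the average shell speed, and the neglect of gravity, wind, the Magnus effect and parallax are all simplifying assumptions made only for this example.
# Simplified, hypothetical sketch of tachymetric lead prediction; it omits the
# ballistic corrections (gravity, wind, Magnus effect, parallax) described above.
import math

def lead_solution(target_pos, target_vel, avg_shell_speed, iterations=20):
    # Iterate the time of flight until the shell flight time and the target's
    # predicted future position are mutually consistent.
    t = 0.0
    for _ in range(iterations):
        future = [p + v * t for p, v in zip(target_pos, target_vel)]
        rng = math.sqrt(sum(c * c for c in future))
        t = rng / avg_shell_speed
    bearing = math.degrees(math.atan2(future[1], future[0]))
    elevation = math.degrees(math.asin(future[2] / rng))
    return bearing, elevation, t

# Example: target 9 km ahead at 1.5 km altitude, crossing at 150 m/s,
# with shells averaging 800 m/s over the trajectory.
print(lead_solution((9000.0, 0.0, 1500.0), (0.0, 150.0, 0.0), 800.0))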
Once the system was "locked" on the target, it produced a continuous fire control solution. While these fire control systems greatly improved the long-range accuracy of ship-to-ship and ship-to-shore gunfire, especially on heavy cruisers and battleships, it was in the anti-aircraft warfare mode that the Mark 1 made the greatest contribution. However, the anti-aircraft value of analog computers such as the Mark 1 was greatly reduced with the introduction of jet aircraft, where the relative motion of the target became such that the computer's mechanism could not react quickly enough to produce accurate results. Furthermore, the target speed, originally limited to 300 knots by a mechanical stop, was twice doubled to 600, then 1,200 knots by gear ratio changes.
The design of the postwar Mark 1A may have been influenced by the Bell Labs Mark 8, which was developed as an all electrical computer, incorporating technology from the M9 gun data computer as a safeguard to ensure adequate supplies of fire control computers for the USN during WW2. Surviving Mark 1 computers were upgraded to the Mark 1A standard after World War II ended.
Among the upgrades were removing the vector solver from the Mark 1 and redesigning the reverse coordinate conversion scheme that updated target parameters.
The scheme kept the four component integrators, obscure devices not included in explanations of basic fire control mechanisms. They worked like a ball-type computer mouse, but had shaft inputs to rotate the ball and to determine the angle of its axis of rotation.
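A numeric sketch of what such a device computes may help: one input shaft supplies a rate, another sets the angle of the ball's axis, and the two output rollers accumulate the cosine and sine components of the input. The following Python sketch is an interpretation for illustration only; the sample rates, angles and time step are invented, and the real device was purely mechanical.
# Hypothetical numeric model of a component integrator: it resolves an input rate
# into two orthogonal components whose direction is set by a varying angle, and
# integrates each component over time.
import math

def component_integrator(rate_samples, angle_samples, dt):
    x = y = 0.0
    for rate, angle in zip(rate_samples, angle_samples):
        x += rate * math.cos(angle) * dt  # roller aligned with the x axis
        y += rate * math.sin(angle) * dt  # roller aligned with the y axis
    return x, y

# Example: a constant rate whose direction swings from 0 to 90 degrees over 10 s.
steps = 100
rates = [5.0] * steps
angles = [math.radians(90.0 * i / steps) for i in range(steps)]
print(component_integrator(rates, angles, 0.1))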
The round target course indicator on the right side of the star shell computer, with its two panic buttons, is a holdover from World War II, when early tracking data and the initial angle-output position of the vector solver caused the computed target speed to decrease; the pushbuttons slewed the vector solver quickly.
Digital fire control computers were reportedly used for the Tartar missile system installed in 1960s-era destroyers.
See also
Ship gun fire-control system
Admiralty Fire Control Table
High Angle Control System
Gun data computer
References
External links
Fire Control Fundamentals
Manual for the Mark 1 and Mark 1a Computer
Maintenance Manual for the Mark 1 Computer
Manual for the Mark 6 Stable Element
Gun Fire Control System Mark 37 Operating Instructions at ibiblio.org
Director section of Mark 1 Mod 1 computer operations at NavSource.org
Naval Ordnance and Gunnery, Vol. 2, Chapter 25, AA Fire Control Systems
Artillery operation
Mechanical computers
Military computers
Fire-control computers of World War II
Military equipment introduced in the 1930s
|
21434202
|
https://en.wikipedia.org/wiki/Phaungdawoo%20Monastic%20Education%20High%20School
|
Phaungdawoo Monastic Education High School
|
Phaungdawoo Monastic Education Affiliated High School, or Phaung Daw Oo Monastic School, is a high school of Theravada Buddhist monastic education located in Aungmyethazan Township, Mandalay, Myanmar.
Founded in 1993, the school comprises an Administration Department, Academic Department, Special Projects Development Department, Finance & Account Department, Vocational Department, Information Technology Department and Health Care Department. The school also has a notable HIV/AIDS prevention scheme and support groups. The school has a library, a clinic, and a furniture factory. The school was featured in a 2009 documentary, A Bright Future, for its implementation of the child-centred approach (CCA) to teaching.
History
Phaung Daw Oo was founded on May 5, 1993 with 10 teachers and 394 students. Principal U Nayaka's main focus was to provide for students from poor families who would not normally be able to attend school. Students of all religions and ethnicities are welcomed at PDO. Although it is a "monastic" school run by Theravada monks, it offers a complete secular curriculum as well as special classes in Buddhism and Pali for novice monks and any secular students who opt to take them. The school's guiding vision is to promote outstanding students who can become future leaders in society and to provide for all students so they can pursue their studies absolutely free of charge (except preschool). In addition to its primary education mission, the school now partners with MEDG (Monastic Education Development Group) and donor organizations to offer many training workshops for teachers and school administrators. Phaung Daw Oo Monastic High School is now well known both in Myanmar and abroad.
Campus
The Japan-Myanmar Friendship Building (178 ft. long by 30 ft. wide) has one hall and 15 classrooms.
The Australia-Myanmar Friendship Building (238 ft. long by 30 ft. wide) has 21 classrooms.
The German-Myanmar Friendship Building (144 ft. long by 30 ft. wide) has one hall and 16 classrooms.
The Win Thu Rein Building (70 ft. long by 30 ft. wide) has four classrooms.
The Mud House Building (90 ft. long by 30 ft. wide) has three classrooms.
The NTTC Building (about 150 ft. long by 30 ft. wide) has a computer lab, 2 halls and 15 classrooms.
The Tech Building (3 storeys) has a large hall, 3 computer labs, an IT support area and 2 carpentry shops.
The Clinic Building (2 storeys) has several exam rooms and procedure areas for ophthalmology and dentistry.
The Office Building houses administration, finance & payroll, volunteer management and other offices.
Several small buildings are used by the maintenance and engineering departments.
Academic Profile
Mainstream
The Mainstream Department is the largest teaching department. It delivers a curriculum that encompasses all academic topics in line with government education reform. In the 2015/2016 academic year, there were 5727 high school students, 1281 middle school students, and 841 primary students. The school employs nearly 200 teachers, most of whom have been trained in teaching methodologies consistent with the Child Centered Approach (CCA) and Reading, Writing and Critical Thinking (RWCT). Academic staff are actively involved in Continuing Professional Development (CPD) to improve their own teaching practice.
Fast Track
The Fast Track English Language Department (FT) began operation in 2002, spearheaded by Principal U Nayaka, who wanted Phaung Daw Oo students to be able to compete with international school students.
This teaching department consists of twelve classes, kindergarten (KG) through to Grade 8, with about 32 - 34 students in each class. Most teachers in FT have received teacher training in methodologies such as the Child Centered Approach (CCA); Reading, Writing, and Critical Thinking (RWCT); classroom management; and instruction in how to create lesson plans. Most of their trainers, some local and some foreign, were experts in teacher education. Students are normally taught all subjects in English, except Myanmar language. Curricula in FT are based on a combination of government textbooks and other resources: such as foreign texts and materials gleaned from the internet. Each year, every teacher in FT revises their lesson plans to keep abreast of education reforms, focusing only on approaches and methods that are deemed truly effective at improving their students' skills, attitudes, and critical thinking. Classes in music, art and sport are also available.
New Teacher Training Centre
The New Teacher Training Centre (NTTC), another teaching department, was founded in 2011 with 10 teachers. Initially, it was a five-year project that complemented the FT program. The NTTC Department has continued to this day, sponsored by Forderverein Myanmar e.V. (a German NGO). The objective of the NTTC is to teach in English and to promote the best modern teaching methods for students in Grade 5 and over, replacing rote learning with learning methods supported by cognitive science. The teachers in the NTTC have been instructed and trained by senior experts from Germany, the USA and the UK.
Library
The Sutakarmi Library was founded in 2000 as a quiet place where teachers and students can spend their leisure time in study. The primary library donors were Diana and her husband, Graham Millington. The library consists of four rooms: a book room and media room (downstairs), and a research room and large study area/movie room (upstairs), staffed by six librarians. The library contains about 20,000 books covering a range of fiction, general and academic subjects; many in English. The majority of books have been donated by both local and foreign donors. Library hours are normally from 8:00 am until 8:00 pm, Monday to Friday. The library is also open at weekends and in school holidays. Computers are available in the media room, with access to free Wi-Fi, which makes online learning an enormously valuable provision. The librarians help students to use all the facilities and resources available and they provide activities such as story-telling and games to enhance students' education. The library has evolved to play an important role in helping students increase their knowledge and improve their research skills. It also provides a variety of resources for teachers, enabling them to increase student literacy.
Bridging Program
The Dutch NGO World Child Care funds the Bridging Program, which supports young adults, most of whom are working and attending distance university (independent study with weekend classes). The local teachers have been given special training and are supplemented by foreign volunteers. The program uses the British Council's Cambridge curricula for three levels of study in English and Sociology. Student level is determined by a standardized placement test, and most students are expected to progress through all three levels, after which many take the qualification test for the PCP program. Student goals are focused on English language skills that will lead to better jobs, or on IELTS exams to qualify for scholarships to foreign universities or fellowships abroad. There are about 100 students in this program. The program also has computers for student use.
Pre-College Program (PCP)
This program is taught by a combination of foreign teachers and prior graduates teaching a broad "prep school" curriculum that focuses on aspects of social science, civic engagement, leadership and personal development relevant to Myanmar youth. At the end of their academic program, each student travels to a remote, rural village school to participate in "service learning" as an assistant teacher. Many of these students subsequently secure scholarships for short internships, leadership training or graduate study abroad. A maximum of 24 students are accepted each year after a competitive selection process. This program provides learning opportunities delivered with the aid of the latest technology afforded by a dedicated fast broadband connection. Students have access to free Wi-Fi throughout their studies.
Vocational and Technical Training
PDO also offers training in carpentry, tailoring and computer skills. Students learn woodworking skills in the workshop where much of the school's furniture is made and tailoring in the two large classrooms equipped with sewing machines. Students may also use these machines to make products (longyis, book bags, purses) that are sold to provide a small income to cover materials. Students from all programs can access, and many are required to take, courses to develop basic computer skills. There are several modern computer labs and some students have gone on to work with the school's IT Department, developing vital technological support skills.
Administration Profile
Boy's Dormitory
Because Phaung Daw Oo is a monastic school, the Boys' Dormitory is a novice dormitory (donated by the German government) which houses over 700 pupils. The novices come from different families and regions around Myanmar, and most belong to ethnic groups such as the Palong, the Shan, the PaO, the Wa, the Nega and the Karen; most are Palong and Shan, and the rest Bamar, Wa, PaO and Nega. They also attend school and have various dreams and goals for a better future. The number of novices is increasing annually, so the dormitory has recently faced some difficulties with space and water.
Girl's Dormitory
The Girls' Dormitory (128 ft. long by 32 ft. wide) was also donated by the German government, and also houses students from other divisions, as well as 149 teachers. There are 38 rooms housing 73 students, with six rooms for visitors, and four staircases.
Golden House
This building was donated by a Mr. Nego from the World Child Care Organization for children left needy and homeless by Cyclone Nargis in 2008. The principal supports and educates about 150 children here, and they are cared for by 15 teachers.
Ethnic Group
The ethnic group house has no external donor and is supported by the principal. It houses only girls from minority ethnic groups, with two teachers and 80 students. The principal provides space and water.
Hostel
The hostel building has two staircases and provides care for orphans and street children; it is managed by six teachers who look after about 52 children.
Vocational Training
Tailoring Class
With a facility that includes 50 sewing machines and two expert tailors on the teaching staff, this class also operates as a small business at Phaung Daw Oo, generating income by selling traditional handicrafts and other items to foreigners.
Carpentry Workshop
The Carpentry Workshop was created in 2003 from funds donated by a Mr. Philippi and a Mr. Jager of Germany, to train students interested in woodworking. There are six carpenters responsible for making new furniture and repairing old furniture for the school, and they also generate income for the school by serving outside customers.
Information Technology
The IT Department opened in 2000–2001 and features 45 computers and seven teachers, who give students basic computer training on a monthly basis. They are also responsible for all computer repair at the school, and generate revenue for the school by offering computer training to the wider public.
Physical Plant
The Physical Plant Department is maintained by three staff members who set up and maintain the electric power and water supply for the entire school.
School Clinic
The School Clinic was first opened in 2002 by leading physicians Dr. Khon Kyaw Oo, Dr. Sandimaung, and Dr. Win Thu. Currently, Dr. Myint Khaing Htay manages the clinic and treats patients. The clinic serves both the students as well as members of the surrounding population who do not have access to healthcare.
References
High schools in Mandalay
Educational institutions established in 1993
1993 establishments in Myanmar
Theravada Buddhist organizations
|
209920
|
https://en.wikipedia.org/wiki/Liero
|
Liero
|
Liero is a video game for MS-DOS, first released by Finnish programmer Joosa Riekkinen in 1998. The game has been described as a real-time version of Worms (a turn-based artillery game). Liero is Finnish for 'earthworm' and is pronounced . Inspired itself by the earlier game MoleZ, Liero provided inspiration for the later games Soldat and Noita.
Gameplay
In Liero, two worms fight each other to the death for score (or frags) using a choice of five weapons from a total of 40 in a two-dimensional map. Most of the terrain, except for indestructible rocks, may be dug or destroyed by explosions. In addition to the weaponry, each player has a ninja rope which can be used to move faster through the map. This grappling hook-like device substitutes for jetpacks and can even latch onto the enemy worm to drag them closer to their foe.
While playing, there are health power-ups to heal the player's worm. It is also possible to replace one of the five weapons by picking up bonuses. Before playing, certain weapons can be selected to be available only in bonuses, in the entire game, or completely disabled.
Unlike most side-scrolling deathmatch games, the weapons in Liero have infinite ammunition. The key factors of a weapon are its reload rate and rate of fire, whereas in most other games of this type the key factors are how much ammunition a weapon holds and how often more of it can be found. Liero depends entirely on timing and swift maneuverability.
The gameplay mode can be deathmatch, Game of Tag or Capture the Flag. It can be played by two human players simultaneously in split screen or in a single player mode against the game's artificial intelligence, although the game's popularity is derived mostly from the fast-paced player-vs.-player action it provides.
Development history
Original Liero by Joosa Riekkinen
Joosa Riekkinen developed Liero as a DOS game, with the first version released in 1998. Liero was inspired by the previous freeware game MoleZ, and took many weapons and sounds from its precursor. The latest version of the original Liero was 1.33, released in 1999. However, the author lost the Pascal source code in a hard disk crash and, due to the lack of a backup, no new "classic" versions have been released since.
Community developments
Despite this, and with the author's approval, the Liero community has distributed several altered (or hacked) versions of the game through the LieroCDC and other channels.
Merge
In 2009, "classic" Liero was officially merged with the OpenLiero project upon the release of Liero 1.34 (not to be confused with the total conversion by that name). The new versions are released by Gliptic, although Joosa Riekkinen endorses them as official. The original Liero data and binary files by Riekkinen were made available under the WTFPL license.
Liero's last release was version 1.36, released on September 3, 2013. This version is compatible with almost any OS but lacks network gaming (unlike some of the remakes).
Clones, remakes and derivatives
Liero Xtreme
Liero Xtreme (often called LieroX, Liero Extreme or just LX) is a 2D shooter game. It is an unofficial sequel to Liero, and is the most popular of all the Liero clones. It features online play, fully customizable weapons, levels and characters. Liero Xtreme was created in C++ by Jason 'JasonB' Boettcher, an Australian programmer. After its source code was released on April 10, 2006, a new project known as OpenLieroX became available on October 24, 2006, while development of the original LieroX project stopped. As of May 2009, OpenLieroX had tripled in code size and gained many new features.
The game is based on a deathmatch setting, where multiple players face off in a closed level. Each player is equipped with five weapons selected out of all the weapons allowed, and with a ninja rope that allows the player to move in any direction. Players begin with a set number of lives, and whilst the game records the number of kills, the last man standing is usually considered the winner. Liero Xtreme also allows team deathmatches, which has made it common for players to form clans. OpenLieroX runs on Windows, macOS, Linux and FreeBSD.
The first release announcement of Liero Extreme was made on October 14, 2002, and LieroX became widely known over time. On February 14, 2006, Jason Boettcher stopped LieroX development for good. The last version he released was 0.62b, which had many new features but suffered from crashes and various errors, and did not catch on with the community, which continued to play version 0.56b. Before leaving the community, he released the source code of the even older version 0.55b under the zlib license. Development then passed to Karel Petranek and Albert Zeyer, who used the source code to create OpenLieroX, which is compatible with the popular 0.56b version but has many new features and bug fixes. Michał Futer took care of the new frontend. Currently the majority of players play OpenLieroX.
As a customizable game, it allows players and developers to script their own mods. Different mods have different sets of unique weapons, and may also differ in player gravity and movement. The default mod is Liero 1.0, also called Classic, which is roughly equal to the basic setting in original Liero. On top of this, several player-created mods are included in the standard game packs, some of which are more popular than the default setting. Similarly to Liero, the default level is Dirt Level, consisting of diggable terrain with some indestructible rock. The default level is rarely played compared to more complex player-created levels.
The game interface allows players to modify game factors such as which weapons in a mod are allowed and how fast they reload, along with many other parameters that have a large impact on gameplay.
NiL
NiL (recursive acronym for NiL Isn't Liero) is a clone of Liero which runs on Linux and Windows and is released under the terms of the GNU General Public License. NiL is not limited to two players like the original Liero; it supports an unlimited number of players over a TCP network. It was met with considerable enthusiasm in the Linux gaming community.
The project was initiated by Flemming Frandsen in winter 1999 after he stumbled across Liero, which he liked so much that he decided to reimplement it for Linux. He abandoned the project five months later because he was too busy for it. NiL was dormant until the beginning of 2004, when Christoph Brill found out about the project and took over as maintainer. Thereafter Daniel Schneidereit joined the project as well, but soon left. Other contributors included Nils Thuerey, Harri Liusvaara, David Hewitt and Phil Howlett.
Development proceeded slowly as the project's source code became almost unmaintainable and NiL was lacking developers. By mid-2005 Alexander Kahl joined development, convinced Christoph to start over and re-think the whole concept of NiL, as the other Liero clone Gusanos already existed at that time. Development seems to have stopped around mid-2006.
OpenLieroX
OpenLieroX is a remake of classic Liero, built from scratch on a new engine. It adds many features to the game, such as modding support, custom sprites, maps and weapons, and online and LAN play. The game also supports more than two players at once.
WebLiero
WebLiero is a classic-style Liero clone that runs in a web browser and supports multiplayer.
Modification
Liero is a versatile game in terms of modification. All of its 40 weapons can be completely replaced with new ones that can be given different images and sound effects from the original set. The images of the worms themselves can be transformed into completely different characters, although their movement animations are less flexible regarding modification. Maps can be given permanent terrain other than rock alone, and destroyable terrain can be colored as something other than plain dirt. The AI can be modified to be harder or easier. Nearly the whole game can be converted into something entirely different, except for the core objective of destroying the other player or AI.
Reception and usage
In 2006 Liero received a TopDog award from Home of the Underdogs.
OpenLieroX has been positively reviewed by multiple gaming news sites. In 2013 Derek Yu's website TIGSource reviewed Liero v1.36 favorably.
The Liero variants NiL, Gusanos, and OpenLieroX were downloaded multiple hundred thousand times from SourceForge alone between 2001 and 2016.
Called a "masterpiece", Liero was selected for a collection of 100 classic Finnish games, which were presented at the opening of the Finnish Museum of Games in Tampere in 2017.
References
External links
- Created by fans, endorsed by Joosa Riekkinen
OpenLiero official site (obsoleted by Liero 1.34)
OpenLieroX official site
NiL official site
Official Liero Xtreme website
MoleZ Official Site (spiritual precursor to Liero)
1998 video games
DOS games
Linux games
Artillery video games
Video games developed in Australia
Video games developed in Finland
Freeware games
Multiplayer hotseat games
Free software
Public-domain software with source code
Windows games
Software using the WTFPL license
|
1161030
|
https://en.wikipedia.org/wiki/Expect
|
Expect
|
Expect is an extension to the Tcl scripting language written by Don Libes. The program automates interactions with programs that expose a text terminal interface. Expect, originally written in 1990 for the Unix platform, has since become available for Microsoft Windows and other systems.
Basics
Expect is used to automate control of interactive applications such as Telnet, FTP, passwd, fsck, rlogin, tip, SSH, and others. Expect uses pseudo terminals (Unix) or emulates a console (Windows), starts the target program, and then communicates with it, just as a human would, via the terminal or console interface. Tk, another Tcl extension, can be used to provide a GUI.
Usage
Expect serves as a "glue" to link existing utilities together. The general idea is to figure out how to make Expect use the system's existing tools rather than figure out how to solve a problem inside of Expect.
A key usage of Expect involves commercial software products. Many of these products provide some type of command-line interface, but these usually lack the power needed to write scripts. They were built to service the users administering the product, but the company often does not spend the resources to fully implement a robust scripting language. An Expect script can spawn a shell, look up environmental variables, perform some Unix commands to retrieve more information, and then enter into the product's command-line interface armed with the necessary information to achieve the user's goal. After retrieving information by interacting with the product via its command-line interface, the script can make intelligent decisions about what action to take, if any.
Every time an Expect operation is completed, the results are stored in a variable called expect_out. This allows the script to harvest information to feed back to the user, and it also allows conditional decisions about what to send next, based on the circumstances.
A common use of Expect is to set up a testing suite, whether it be for programs, utilities or embedded systems. DejaGnu is a testing suite written using Expect for use in testing. It has been used extensively for testing GCC and is very well suited to testing remote targets such as embedded development.
The generation of an Expect script can be automated using a tool called autoexpect. This tool observes an interactive session and generates an Expect script from it using heuristics. Though the generated code may be large and somewhat cryptic, the generated script can always be tweaked by hand to obtain exactly the desired behaviour.
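For instance, the following invocation (with a hypothetical host name) records an interactive FTP session and, by default, writes the generated script to a file named script.exp:
autoexpect ftp ftp.example.com
The following example automates part of a Telnet session: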
# Assume $remote_server, $my_user_id, $my_password, and
# $my_command were read earlier in the script.
# Open a Telnet session to a remote server, and wait
# for a username prompt.
spawn telnet $remote_server
expect "username:"
# Send the username, and then wait for a password prompt.
send "$my_user_id\r"
expect "password:"
# Send the password, and then wait for a shell prompt.
send "$my_password\r"
expect "%"
# Send the prebuilt command, and then wait
# for another shell prompt.
send "$my_command\r"
expect "%"
# Capture the results of the command into a variable. This
# can be displayed, or written to disk.
set results $expect_out(buffer)
# Exit the Telnet session, and wait for a special
# end-of-file character.
send "exit\r"
expect eof
Another example is a script that automates FTP:
# Set the timeout parameter to a suitable value.
# For example, if the file is large or the network is slow,
# the transfer can take a long time; setting the timeout to -1
# (as below) disables the timeout entirely.
set timeout -1
# Open an FTP session to a remote server, and
# wait for a username prompt.
spawn ftp $remote_server
expect "username:"
# Send the username, and then wait for a password prompt.
send "$my_user_id\r"
expect "password:"
# Send the password, and then wait for an 'ftp' prompt.
send "$my_password\r"
expect "ftp>"
# Switch to binary mode, and then wait for an 'ftp' prompt.
send "bin\r"
expect "ftp>"
# Turn off prompting.
send "prompt\r"
expect "ftp>"
# Get all the files
send "mget *\r"
expect "ftp>"
# Exit the FTP session, and wait for a special
# end-of-file character.
send "bye\r"
expect eof
Below is an example that automates SFTP (with a password):
#!/usr/bin/env expect -f
# Procedure to attempt connecting; result 0 if OK, 1 otherwise
proc connect {passw} {
expect {
"Password:" {
send "$passw\r"
expect {
"sftp*" {
return 0
}
}
}
}
# Timed out
return 1
}
# Read the input parameters
set user [lindex $argv 0]
set passw [lindex $argv 1]
set host [lindex $argv 2]
set location [lindex $argv 3]
set file1 [lindex $argv 4]
set file2 [lindex $argv 5]
#puts "Argument data:\n";
#puts "user: $user";
#puts "passw: $passw";
#puts "host: $host";
#puts "location: $location";
#puts "file1: $file1";
#puts "file2: $file2";
# Check if all were provided
if { $user == "" || $passw == "" || $host == "" || $location == "" || $file1 == "" || $file2 == "" } {
puts "Usage: <user> <passw> <host> <location> <file1 to send> <file2 to send>\n"
exit 1
}
# Sftp to specified host and send the files
spawn sftp $user@$host
set rez [connect $passw]
if { $rez == 0 } {
send "cd $location\r"
set timeout -1
send "put $file2\r"
send "put $file1\r"
send "ls -l\r"
send "quit\r"
expect eof
exit 0
}
puts "\nError connecting to server: $host, user: $user and password: $passw!\n"
exit 1
Using passwords as command-line arguments, like in this example, is a huge security hole, as any other user on the machine can read this password by running "ps". You can, however, add code that will prompt you for your password rather than giving your password as an argument. This should be more secure. See the example below.
stty -echo
send_user -- "Enter Password: "
expect_user -re "(.*)\n"
send_user "\n"
stty echo
set PASS $expect_out(1,string)
Another example of automated SSH login to a user machine:
# Timeout is a predefined variable in Expect which by
# default is set to 10 seconds.
# spawn_id is another predefined variable in Expect.
# It is a good practice to close spawn_id handle
# created by spawn command.
set timeout 60
spawn ssh $user@machine
while {1} {
expect {
eof {break}
"The authenticity of host" {send "yes\r"}
"password:" {send "$password\r"}
"*\]" {send "exit\r"}
}
}
wait
close $spawn_id
Alternatives
Various projects implement Expect-like functionality in other languages, such as C#, Java, Scala, Groovy, Perl, Python, Ruby, Shell and Go. These are generally not exact clones of the original Expect, but the concepts tend to be very similar.
C#
Expect.NET — Expect functionality for C# (.NET)
DotNetExpect — An Expect-inspired console automation library for .NET
Erlang
lux - test automation framework with Expect style execution commands.
Go
GoExpect - Expect-like package for the Go language
go-expect - an Expect-like Go language library to automate control of terminal or console based programs.
Groovy
expect4groovy - a Groovy DSL implementation of Expect tool.
Java
ExpectIt — a pure Java 1.6+ implementation of the Expect tool. It is designed to be simple, easy to use and extensible.
expect4j — an attempt at a Java clone of the original Expect
ExpectJ — a Java implementation of the Unix expect utility
Expect-for-Java — pure Java implementation of the Expect tool
expect4java - a Java implementation of the Expect tool, but supports nested closures. There is also wrapper for Groovy language DSL.
Perl
Expect.pm — Perl module (newest version at metacpan.org)
Python
Pexpect — Python module for controlling interactive programs in a pseudo-terminal
winpexpect — port of pexpect to the Windows platform
paramiko-expect — A Python expect-like extension for the Paramiko SSH library which also supports tailing logs.
Ruby
RExpect — a drop in replacement for the expect.rb module in the standard library.
Expect4r — Interact with Cisco IOS, IOS-XR, and Juniper JUNOS CLI
Rust
rexpect - pexpect-like package for the Rust language.
Scala
scala-expect — a Scala implementation of a very small subset of the Expect tool.
Shell
Empty — expect-like utility to run interactive commands in the Unix shell-scripts
sexpect — Expect for shells. It's implemented in the client/server model which also supports attach/detach (like GNU screen).
References
Further reading
External links
(IBM Developerworks)
Scripting languages
Free software programmed in Tcl
Automation software
Tk (software)
Public-domain software with source code
|
87231
|
https://en.wikipedia.org/wiki/Surveillance
|
Surveillance
|
Surveillance is the monitoring of behavior, many activities, or information for the purpose of information gathering, influencing, managing or directing. This can include observation from a distance by means of electronic equipment, such as closed-circuit television (CCTV), or interception of electronically transmitted information like Internet traffic. It can also include simple technical methods, such as human intelligence gathering and postal interception.
Surveillance is used by citizens, for instance for protecting their neighborhoods, and by governments for intelligence gathering, including espionage, prevention of crime, the protection of a process, person, group or object, or the investigation of crime. It is also used by criminal organizations to plan and commit crimes, and by businesses to gather intelligence on criminals, their competitors, suppliers or customers. Religious organisations charged with detecting heresy and heterodoxy may also carry out surveillance.
Auditors carry out a form of surveillance.
A byproduct of surveillance is that it can unjustifiably violate people's privacy and is often criticized by civil liberties activists. Liberal democracies may have laws that seek to restrict governmental and private use of surveillance, whereas authoritarian governments seldom have any domestic restrictions.
Espionage is by definition covert and typically illegal according to the rules of the observed party, whereas most types of surveillance are overt and are considered legitimate. International espionage seems to be common among all types of countries.
Methods
Computer
The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by federal law enforcement agencies.
There is far too much data on the Internet for human investigators to manually search through all of it. Therefore, automated Internet surveillance computers sift through the vast amount of intercepted Internet traffic to identify and report to human investigators the traffic that is considered interesting or suspicious. This process is regulated by targeting certain "trigger" words or phrases, visiting certain types of web sites, or communicating via email or online chat with suspicious individuals or groups. Billions of dollars per year are spent by agencies, such as the NSA, the FBI and the now-defunct Information Awareness Office, to develop, purchase, implement, and operate systems such as Carnivore, NarusInsight, and ECHELON to intercept and analyze all of this data to extract only the information which is useful to law enforcement and intelligence agencies.
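A toy sketch of the kind of keyword triage described above follows; the trigger list, the message stream and the function name are invented purely for illustration and do not reflect any real system.
# Toy illustration only: flag messages containing a "trigger" word for human review.
TRIGGER_WORDS = {"example-trigger-one", "example-trigger-two"}

def flag_for_review(messages):
    # Yield only those messages that contain at least one trigger word.
    for msg in messages:
        if set(msg.lower().split()) & TRIGGER_WORDS:
            yield msg

sample = ["routine message", "contains example-trigger-one somewhere"]
print(list(flag_for_review(sample)))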
Computers can be a surveillance target because of the personal data stored on them. If someone is able to install software, such as the FBI's Magic Lantern and CIPAV, on a computer system, they can easily gain unauthorized access to this data. Such software could be installed physically or remotely. Another form of computer surveillance, known as van Eck phreaking, involves reading electromagnetic emanations from computing devices in order to extract data from them at distances of hundreds of meters. The NSA runs a database known as "Pinwale", which stores and indexes large numbers of emails of both American citizens and foreigners. Additionally, the NSA runs a program known as PRISM, which is a data mining system that gives the United States government direct access to information from technology companies. Through accessing this information, the government is able to obtain search history, emails, stored information, live chats, file transfers, and more. This program generated huge controversies in regards to surveillance and privacy, especially from U.S. citizens.
Telephones
The official and unofficial tapping of telephone lines is widespread. In the United States for instance, the Communications Assistance For Law Enforcement Act (CALEA) requires that all telephone and VoIP communications be available for real-time wiretapping by Federal law enforcement and intelligence agencies. Two major telecommunications companies in the U.S.—AT&T Inc. and Verizon—have contracts with the FBI, requiring them to keep their phone call records easily searchable and accessible for Federal agencies, in return for $1.8 million per year. Between 2003 and 2005, the FBI sent out more than 140,000 "National Security Letters" ordering phone companies to hand over information about their customers' calling and Internet histories. About half of these letters requested information on U.S. citizens.
Human agents are not required to monitor most calls. Speech-to-text software creates machine-readable text from intercepted audio, which is then processed by automated call-analysis programs, such as those developed by agencies such as the Information Awareness Office, or companies such as Verint, and Narus, which search for certain words or phrases, to decide whether to dedicate a human agent to the call.
Law enforcement and intelligence services in the United Kingdom and the United States possess technology to activate the microphones in cell phones remotely, by accessing phones' diagnostic or maintenance features in order to listen to conversations that take place near the person who holds the phone.
The StingRay tracker is an example of one of these tools used to monitor cell phone usage in the United States and the United Kingdom. Originally developed for counterterrorism purposes by the military, these devices work by broadcasting powerful signals that cause nearby cell phones to transmit their IMSI number, just as they would to normal cell phone towers. Once the phone is connected to the device, there is no way for the user to know that they are being tracked. The operator of the StingRay is able to extract information such as location, phone calls, and text messages, but it is widely believed that the capabilities of the StingRay extend much further. A lot of controversy surrounds the StingRay because of its powerful capabilities and the secrecy that surrounds it.
Mobile phones are also commonly used to collect location data. The geographical location of a mobile phone (and thus the person carrying it) can be determined easily even when the phone is not being used, using a technique known as multilateration to calculate the differences in time for a signal to travel from the cell phone to each of several cell towers near the owner of the phone. The legality of such techniques has been questioned in the United States, in particular whether a court warrant is required. Records for one carrier alone (Sprint) showed that in a given year federal law enforcement agencies requested customer location data 8 million times.
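As a rough illustration of the multilateration technique described above, the following Python sketch estimates a handset's position from the differences in signal arrival time at several towers. The tower coordinates, timing values, and brute-force search are invented for illustration and are not any carrier's actual implementation.

import numpy as np

C = 299_792_458.0  # propagation speed of the radio signal, m/s

def tdoa_residual(p, towers, tdoas):
    """Squared error between predicted and measured time differences (relative to tower 0)."""
    d = np.linalg.norm(towers - p, axis=1)   # distance from candidate point p to each tower
    predicted = (d[1:] - d[0]) / C           # predicted arrival-time differences
    return float(np.sum((predicted - tdoas) ** 2))

def locate(towers, tdoas, half_width=2000.0, step=10.0):
    """Brute-force grid search for the best-fitting position near the reference tower."""
    best_p, best_err = None, float("inf")
    offsets = np.arange(-half_width, half_width, step)
    for dx in offsets:
        for dy in offsets:
            p = towers[0] + np.array([dx, dy])
            err = tdoa_residual(p, towers, tdoas)
            if err < best_err:
                best_p, best_err = p, err
    return best_p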
In response to customers' privacy concerns in the post Edward Snowden era, Apple's iPhone 6 has been designed to disrupt investigative wiretapping efforts. The phone encrypts e-mails, contacts, and photos with a code generated by a complex mathematical algorithm that is unique to an individual phone, and is inaccessible to Apple. The encryption feature on the iPhone 6 has drawn criticism from FBI director James B. Comey and other law enforcement officials since even lawful requests to access user content on the iPhone 6 will result in Apple supplying "gibberish" data that requires law enforcement personnel to either break the code themselves or to get the code from the phone's owner. Because the Snowden leaks demonstrated that American agencies can access phones anywhere in the world, privacy concerns in countries with growing markets for smart phones have intensified, providing a strong incentive for companies like Apple to address those concerns in order to secure their position in the global market.
Although CALEA requires telecommunication companies to build into their systems the ability to carry out a lawful wiretap, the law has not been updated to address the issue of smart phones and requests for access to e-mails and metadata. The Snowden leaks show that the NSA has been taking advantage of this ambiguity in the law by collecting metadata on "at least hundreds of millions" of "incidental" targets from around the world. The NSA uses an analytic tool known as CO-TRAVELER in order to track people whose movements intersect and to find any hidden connections with persons of interest.
The Snowden leaks have also revealed that the British Government Communications Headquarters (GCHQ) can access information collected by the NSA on American citizens. Once the data has been collected, the GCHQ can hold on to it for up to two years. The deadline can be extended with the permission of a "senior UK official".
Cameras
Surveillance cameras, or security cameras, are video cameras used for the purpose of observing an area. They are often connected to a recording device or IP network, and may be watched by a security guard or law enforcement officer. Cameras and recording equipment used to be relatively expensive and required human personnel to monitor camera footage, but analysis of footage has been made easier by automated software that organizes digital video footage into a searchable database, and by video analysis software (such as VIRAT and HumanID). The amount of footage is also drastically reduced by motion sensors which record only when motion is detected. With cheaper production techniques, surveillance cameras are simple and inexpensive enough to be used in home security systems, and for everyday surveillance.
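As a simple illustration of the motion-sensing idea mentioned above, the sketch below keeps only those frames that differ noticeably from the previous frame. Frames are assumed to be grayscale numpy arrays and the threshold is an arbitrary illustrative value, not taken from any real product.

import numpy as np

def frames_to_keep(frames, threshold=12.0):
    """Yield the indices of frames whose mean absolute pixel change exceeds the threshold."""
    prev = None
    for i, frame in enumerate(frames):
        if prev is not None:
            change = np.mean(np.abs(frame.astype(float) - prev.astype(float)))
            if change > threshold:
                yield i                      # only these frames would be recorded
        prev = frame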
As of 2016, there are about 350 million surveillance cameras worldwide. About 65% of these cameras are installed in Asia. The growth of CCTV has been slowing in recent years. In 2018, China was reported to have a huge surveillance network of over 170 million CCTV cameras with 400 million new cameras expected to be installed in the next three years, many of which use facial recognition technology.
In the United States, the Department of Homeland Security awards billions of dollars per year in Homeland Security grants for local, state, and federal agencies to install modern video surveillance equipment. For example, the city of Chicago, Illinois, recently used a $5.1 million Homeland Security grant to install an additional 250 surveillance cameras, and connect them to a centralized monitoring center, along with its preexisting network of over 2000 cameras, in a program known as Operation Virtual Shield. Speaking in 2009, Chicago Mayor Richard Daley announced that Chicago would have a surveillance camera on every street corner by the year 2016. New York City received a $350 million grant towards the development of the Domain Awareness System, which is an interconnected system of sensors including 18,000 CCTV cameras used for continual surveillance of the city by both police officers and artificial intelligence systems.
In the United Kingdom, the vast majority of video surveillance cameras are not operated by government bodies, but by private individuals or companies, especially to monitor the interiors of shops and businesses. According to 2011 Freedom of Information Act requests, the total number of local government operated CCTV cameras was around 52,000 over the entirety of the UK. The prevalence of video surveillance in the UK is often overstated due to unreliable estimates being requoted; for example one report in 2002 extrapolated from a very small sample to estimate the number of cameras in the UK at 4.2 million (of which 500,000 were in Greater London). More reliable estimates put the number of private and local government operated cameras in the United Kingdom at around 1.85 million in 2011.
In the Netherlands, one example city where cameras are deployed is The Hague. There, cameras are placed in the city districts where illegal activity is most concentrated, such as the red-light district and the train stations.
As part of China's Golden Shield Project, several U.S. corporations, including IBM, General Electric, and Honeywell, have been working closely with the Chinese government to install millions of surveillance cameras throughout China, along with advanced video analytics and facial recognition software, which will identify and track individuals everywhere they go. They will be connected to a centralized database and monitoring station, which will, upon completion of the project, contain a picture of the face of every person in China: over 1.3 billion people. Lin Jiang Huai, the head of China's "Information Security Technology" office (which is in charge of the project), credits the surveillance systems in the United States and the U.K. as the inspiration for what he is doing with the Golden Shield Project.
The Defense Advanced Research Projects Agency (DARPA) is funding a research project called Combat Zones That See that will link up cameras across a city to a centralized monitoring station, identify and track individuals and vehicles as they move through the city, and report "suspicious" activity (such as waving arms, looking side-to-side, standing in a group, etc.).
At Super Bowl XXXV in January 2001, police in Tampa, Florida, used Identix's facial recognition software, FaceIt, to scan the crowd for potential criminals and terrorists in attendance at the event (it found 19 people with pending arrest warrants).
Governments often initially claim that cameras are meant to be used for traffic control, but many of them end up using them for general surveillance. For example, Washington, D.C. had 5,000 "traffic" cameras installed under this premise, and then after they were all in place, networked them all together and then granted access to the Metropolitan Police Department, so they could perform "day-to-day monitoring".
The development of centralized networks of CCTV cameras watching public areas – linked to computer databases of people's pictures and identity (biometric data), able to track people's movements throughout the city, and identify whom they have been with – has been argued by some to present a risk to civil liberties. Trapwire is an example of such a network.
Social network analysis
One common form of surveillance is to create maps of social networks based on data from social networking sites such as Facebook, MySpace, Twitter as well as from traffic analysis information from phone call records such as those in the NSA call database, and others. These social network "maps" are then data mined to extract useful information such as personal interests, friendships & affiliations, wants, beliefs, thoughts, and activities.
Many U.S. government agencies such as the Defense Advanced Research Projects Agency (DARPA), the National Security Agency (NSA), and the Department of Homeland Security (DHS) are investing heavily in research involving social network analysis. The intelligence community believes that the biggest threat to U.S. power comes from decentralized, leaderless, geographically dispersed groups of terrorists, subversives, extremists, and dissidents. These types of threats are most easily countered by finding important nodes in the network, and removing them. To do this requires a detailed map of the network.
Jason Ethier of Northeastern University, in his study of modern social network analysis, discussed the Scalable Social Network Analysis Program developed by the Information Awareness Office.
AT&T developed a programming language called "Hancock", which is able to sift through enormous databases of phone call and Internet traffic records, such as the NSA call database, and extract "communities of interest"—groups of people who call each other regularly, or groups that regularly visit certain sites on the Internet. AT&T originally built the system to develop "marketing leads", but the FBI has regularly requested such information from phone companies such as AT&T without a warrant, and, after using the data, stores all information received in its own databases, regardless of whether or not the information was ever useful in an investigation.
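Hancock itself is proprietary and its details are not public; the following generic Python/networkx sketch only illustrates the idea of extracting "communities of interest" from call records by building a who-called-whom graph and grouping people who call each other regularly. The record format and threshold are hypothetical.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def communities_of_interest(call_records, min_calls=3):
    """call_records: iterable of (caller, callee) pairs; returns groups of regular contacts."""
    g = nx.Graph()
    for caller, callee in call_records:
        if g.has_edge(caller, callee):
            g[caller][callee]["weight"] += 1
        else:
            g.add_edge(caller, callee, weight=1)
    # Keep only pairs who call each other regularly, then drop anyone left unconnected.
    g.remove_edges_from([(u, v) for u, v, w in g.edges(data="weight") if w < min_calls])
    g.remove_nodes_from(list(nx.isolates(g)))
    return [set(c) for c in greedy_modularity_communities(g)]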
Some people believe that the use of social networking sites is a form of "participatory surveillance", where users of these sites are essentially performing surveillance on themselves, putting detailed personal information on public websites where it can be viewed by corporations and governments. In 2008, about 20% of employers reported using social networking sites to collect personal data on prospective or current employees.
Biometric
Biometric surveillance is a technology that measures and analyzes human physical and/or behavioral characteristics for authentication, identification, or screening purposes. Examples of physical characteristics include fingerprints, DNA, and facial patterns. Examples of mostly behavioral characteristics include gait (a person's manner of walking) or voice.
Facial recognition is the use of the unique configuration of a person's facial features to accurately identify them, usually from surveillance video. Both the Department of Homeland Security and DARPA are heavily funding research into facial recognition systems. The Information Processing Technology Office ran a program known as Human Identification at a Distance which developed technologies capable of identifying a person at a distance by their facial features.
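The internals of the programs named above are not public; the sketch below only illustrates the embedding-comparison approach commonly used in facial recognition, where faces are mapped to numeric feature vectors and a probe face is matched against a gallery of known identities. The gallery format and threshold are hypothetical.

import numpy as np

def identify(probe_embedding, gallery, threshold=0.6):
    """gallery: dict mapping a name to a unit-length embedding vector (hypothetical data)."""
    best_name, best_score = None, -1.0
    for name, embedding in gallery.items():
        score = float(np.dot(probe_embedding, embedding))  # cosine similarity for unit vectors
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None  # None = no confident match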
Another form of behavioral biometrics, based on affective computing, involves computers recognizing a person's emotional state based on an analysis of their facial expressions, how fast they are talking, the tone and pitch of their voice, their posture, and other behavioral traits. This might be used for instance to see if a person's behavior is suspect (looking around furtively, "tense" or "angry" facial expressions, waving arms, etc.).
A more recent development is DNA profiling, which looks at some of the major markers in the body's DNA to produce a match. The FBI is spending $1 billion to build a new biometric database, which will store DNA, facial recognition data, iris/retina (eye) data, fingerprints, palm prints, and other biometric data of people living in the United States. The computers running the database are contained in an underground facility about the size of two American football fields.
The Los Angeles Police Department is installing automated facial recognition and license plate recognition devices in its squad cars, and providing handheld face scanners, which officers will use to identify people while on patrol.
Facial thermographs are in development, which allow machines to identify certain emotions in people such as fear or stress, by measuring the temperature generated by blood flow to different parts of the face. Law enforcement officers believe that this has potential for them to identify when a suspect is nervous, which might indicate that they are hiding something, lying, or worried about something.
In his paper in Ethics and Information Technology, Avi Marciano maps the harms caused by biometric surveillance, traces their theoretical origins, and brings these harms together in one integrative framework to elucidate their cumulative power. Marciano proposes four types of harms: Unauthorized use of bodily information, denial or limitation of access to physical spaces, bodily social sorting, and symbolic ineligibility through construction of marginality and otherness. Biometrics' social power, according to Marciano, derives from three main features: their complexity as "enigmatic technologies", their objective-scientific image, and their increasing agency, particularly in the context of automatic decision-making.
Aerial
Aerial surveillance is the gathering of surveillance, usually visual imagery or video, from an airborne vehicle—such as an unmanned aerial vehicle, helicopter, or spy plane. Military surveillance aircraft use a range of sensors (e.g. radar) to monitor the battlefield.
Digital imaging technology, miniaturized computers, and numerous other technological advances over the past decade have contributed to rapid advances in aerial surveillance hardware such as micro-aerial vehicles, forward-looking infrared, and high-resolution imagery capable of identifying objects at extremely long distances. For instance, the MQ-9 Reaper, a U.S. drone plane used for domestic operations by the Department of Homeland Security, carries cameras that are capable of identifying an object the size of a milk carton from high altitude, and has forward-looking infrared devices that can detect the heat from a human body at long range. In an earlier instance of commercial aerial surveillance, the Killington Mountain ski resort hired 'eye in the sky' aerial photography of its competitors' parking lots to judge the success of its marketing initiatives as it developed starting in the 1950s.
The United States Department of Homeland Security is in the process of testing UAVs to patrol the skies over the United States for the purposes of critical infrastructure protection, border patrol, "transit monitoring", and general surveillance of the U.S. population. Miami-Dade police department ran tests with a vertical take-off and landing UAV from Honeywell, which is planned to be used in SWAT operations. Houston's police department has been testing fixed-wing UAVs for use in "traffic control".
The United Kingdom, as well, is working on plans to build up a fleet of surveillance UAVs ranging from micro-aerial vehicles to full-size drones, to be used by police forces throughout the U.K.
In addition to their surveillance capabilities, MAVs are capable of carrying tasers for "crowd control", or weapons for killing enemy combatants.
Programs such as the Heterogeneous Aerial Reconnaissance Team program developed by DARPA have automated much of the aerial surveillance process. They have developed systems consisting of large teams of drone planes that pilot themselves, automatically decide who is "suspicious" and how to go about monitoring them, coordinate their activities with other drones nearby, and notify human operators if something suspicious is occurring. This greatly increases the amount of area that can be continuously monitored, while reducing the number of human operators required. Thus a swarm of automated, self-directing drones can automatically patrol a city and track suspicious individuals, reporting their activities back to a centralized monitoring station.
In addition, researchers also investigate possibilities of autonomous surveillance by large groups of micro aerial vehicles stabilized by decentralized bio-inspired swarming rules.
Corporate
Corporate surveillance is the monitoring of a person or group's behavior by a corporation. The data collected is most often used for marketing purposes or sold to other corporations, but is also regularly shared with government agencies. It can be used as a form of business intelligence, which enables the corporation to better tailor its products and/or services to the desires of its customers. Although there is a common belief that monitoring can increase productivity, it can also create consequences such as increasing the chances of deviant behavior and creating punishments that are not equitable to the actions monitored. Additionally, monitoring can cause resistance and backlash because it insinuates an employer's suspicion and lack of trust.
Data mining and profiling
Data mining is the application of statistical techniques and programmatic algorithms to discover previously unnoticed relationships within the data. Data profiling in this context is the process of assembling information about a particular individual or group in order to generate a profile — that is, a picture of their patterns and behavior. Data profiling can be an extremely powerful tool for psychological and social network analysis. A skilled analyst can discover facts about a person that they might not even be consciously aware of themselves.
Economic (such as credit card purchases) and social (such as telephone calls and emails) transactions in modern society create large amounts of stored data and records. In the past, this data was documented in paper records, leaving a "paper trail", or was simply not documented at all. Correlation of paper-based records was a laborious process—it required human intelligence operators to manually dig through documents, which was time-consuming and incomplete, at best.
But today many of these records are electronic, resulting in an "electronic trail". Every use of a bank machine, payment by credit card, use of a phone card, call from home, checked out library book, rented video, or other completed recorded transaction generates an electronic record. Public records—such as birth, court, tax and other records—are increasingly being digitized and made available online. In addition, due to laws like CALEA, web traffic and online purchases are also available for profiling. Electronic record-keeping makes data easily collectable, storable, and accessible—so that high-volume, efficient aggregation and analysis is possible at significantly lower costs.
Information relating to many of these individual transactions is often easily available because it is generally not guarded in isolation, since the information, such as the title of a movie a person has rented, might not seem sensitive. However, when many such transactions are aggregated they can be used to assemble a detailed profile revealing the actions, habits, beliefs, locations frequented, social connections, and preferences of the individual. This profile is then used, by programs such as ADVISE and TALON, to determine whether the person is a military, criminal, or political threat.
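ADVISE and TALON are not publicly specified; the following sketch only illustrates how individually innocuous records can be aggregated into a revealing profile. The field names are invented for illustration.

from collections import defaultdict

def build_profiles(transactions):
    """transactions: iterable of dicts like {"person": ..., "kind": ..., "item": ...}."""
    profiles = defaultdict(lambda: defaultdict(set))
    for t in transactions:
        profiles[t["person"]][t["kind"]].add(t["item"])   # e.g. kind = "purchase", "rental", "travel"
    return {person: {kind: sorted(items) for kind, items in kinds.items()}
            for person, kinds in profiles.items()}

Run over purchases, library loans and travel records, such aggregation groups everything known about one person in a single place, revealing patterns that no single record shows on its own.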
In addition to its own aggregation and profiling tools, the government is able to access information from third parties — for example, banks, credit companies or employers, etc. — by requesting access informally, by compelling access through the use of subpoenas or other procedures, or by purchasing data from commercial data aggregators or data brokers. The United States has spent $370 million on its 43 planned fusion centers, a national network of surveillance centers located in over 30 states. The centers will collect and analyze vast amounts of data on U.S. citizens. They will obtain this data by consolidating personal information from sources such as state driver's licensing agencies, hospital records, criminal records, school records, credit bureaus, banks, etc. – and placing this information in a centralized database that can be accessed from all of the centers, as well as other federal law enforcement and intelligence agencies.
Under United States v. Miller (1976), data held by third parties is generally not subject to Fourth Amendment warrant requirements.
Human operatives
Organizations that have enemies who wish to gather information about the groups' members or activities face the issue of infiltration.
In addition to operatives' infiltrating an organization, the surveilling party may exert pressure on certain members of the target organization to act as informants (i.e., to disclose the information they hold on the organization and its members).
Fielding operatives is very expensive, and for governments with wide-reaching electronic surveillance tools at their disposal the information recovered from operatives can often be obtained from less problematic forms of surveillance such as those mentioned above. Nevertheless, human infiltrators are still common today. For instance, in 2007 documents surfaced showing that the FBI was planning to field a total of 15,000 undercover agents and informants in response to an anti-terrorism directive sent out by George W. Bush in 2004 that ordered intelligence and law enforcement agencies to increase their HUMINT capabilities.
Satellite imagery
On May 25, 2007, the U.S. Director of National Intelligence Michael McConnell authorized the National Applications Office (NAO) of the Department of Homeland Security to allow local, state, and domestic Federal agencies to access imagery from military intelligence Reconnaissance satellites and Reconnaissance aircraft sensors which can now be used to observe the activities of U.S. citizens. The satellites and aircraft sensors will be able to penetrate cloud cover, detect chemical traces, and identify objects in buildings and "underground bunkers", and will provide real-time video at much higher resolutions than the still-images produced by programs such as Google Earth.
Identification and credentials
One of the simplest forms of identification is the carrying of credentials. Some nations have an identity card system to aid identification, whilst others are considering it but face public opposition. Other documents, such as passports, driver's licenses, library cards, banking or credit cards are also used to verify identity.
If the form of the identity card is "machine-readable", usually using an encoded magnetic stripe or identification number (such as a Social Security number), it corroborates the subject's identifying data. In this case it may create an electronic trail when it is checked and scanned, which can be used in profiling, as mentioned above.
Wireless tracking
This section refers to methods that involve the monitoring of tracking devices through the aid of wireless signals.
Mobile phones
Mobile carrier antennas are also commonly used to collect geolocation data on mobile phones. The geographical location of a powered mobile phone (and thus the person carrying it) can be determined easily (whether it is being used or not), using a technique known as multilateration to calculate the differences in time for a signal to travel from the cell phone to each of several cell towers near the owner of the phone. Dr. Victor Kappeler of Eastern Kentucky University indicates that police surveillance is a strong concern, citing 2013 statistics on law enforcement requests for customer phone data.
A comparatively new off-the-shelf surveillance device is an IMSI-catcher, a telephone eavesdropping device used to intercept mobile phone traffic and track the movement of mobile phone users. Essentially a "fake" mobile tower acting between the target mobile phone and the service provider's real towers, it is considered a man-in-the-middle (MITM) attack. IMSI-catchers are used in some countries by law enforcement and intelligence agencies, but their use has raised significant civil liberty and privacy concerns and is strictly regulated in some countries.
In March 2020, British daily The Guardian, based on the claims of a whistleblower, accused the government of Saudi Arabia of exploiting global mobile telecom network weaknesses to spy on its citizens traveling around the United States. The data shared by the whistleblower in support of the claims showed that a systematic spying campaign was being run by the kingdom exploiting the flaws of SS7, a global messaging system. The data showed that millions of secret tracking commands originated from Saudi Arabia over a period of four months, starting in November 2019.
RFID tagging
Radio Frequency Identification (RFID) tagging is the use of very small electronic devices (called "RFID tags") which are applied to or incorporated into a product, animal, or person for the purpose of identification and tracking using radio waves. The tags can be read from several meters away. They are extremely inexpensive, costing a few cents per piece, so they can be inserted into many types of everyday products without significantly increasing the price, and can be used to track and identify these objects for a variety of purposes.
Some companies appear to be "tagging" their workers by incorporating RFID tags in employee ID badges. Workers in the U.K. considered strike action in protest of having themselves tagged; they felt that it was dehumanizing to have all of their movements tracked with RFID chips. Some critics have expressed fears that people will soon be tracked and scanned everywhere they go. On the other hand, RFID tags in newborn baby ID bracelets put on by hospitals have foiled kidnappings.
In a 2003 editorial, CNET News.com's chief political correspondent, Declan McCullagh, speculated that, soon, every object that is purchased, and perhaps ID cards, will have RFID devices in them, which would respond with information about people as they walk past scanners (what type of phone they have, what type of shoes they have on, which books they are carrying, what credit cards or membership cards they have, etc.). This information could be used for identification, tracking, or targeted marketing. To date, this has largely not come to pass.
RFID tagging on humans
A human microchip implant is an identifying integrated circuit device or RFID transponder encased in silicate glass and implanted in the body of a human being. A subdermal implant typically contains a unique ID number that can be linked to information contained in an external database, such as personal identification, medical history, medications, allergies, and contact information.
Several types of microchips have been developed in order to control and monitor certain types of people, such as criminals, political figures and spies; a "killer" tracking chip patent was filed at the German Patent and Trademark Office (DPMA) around May 2009.
Verichip is an RFID device produced by a company called Applied Digital Solutions (ADS). Verichip is slightly larger than a grain of rice, and is injected under the skin. The injection reportedly feels similar to receiving a shot. The chip is encased in glass, and stores a "VeriChip Subscriber Number" which the scanner uses to access their personal information, via the Internet, from Verichip Inc.'s database, the "Global VeriChip Subscriber Registry". Thousands of people have already had them inserted. In Mexico, for example, 160 workers at the Attorney General's office were required to have the chip injected for identity verification and access control purposes.
Implantable microchips have also been used in healthcare settings, but ethnographic researchers have identified a number of ethical problems with such uses; these problems include unequal treatment, diminished trust, and possible endangerment of patients.
Geolocation devices
Global Positioning System
In the U.S., police have planted hidden GPS tracking devices in people's vehicles to monitor their movements, without a warrant. In early 2009, they were arguing in court that they have the right to do this.
Several cities are running pilot projects to require parolees to wear GPS devices to track their movements when they get out of prison.
Devices
Covert listening devices and video devices, or "bugs", are hidden electronic devices which are used to capture, record, and/or transmit data to a receiving party such as a law enforcement agency.
The U.S. has run numerous domestic intelligence operations, such as COINTELPRO, which have bugged the homes, offices, and vehicles of thousands of U.S. citizens, usually political activists, subversives, and criminals.
Law enforcement and intelligence services in the U.K. and the United States possess technology to remotely activate the microphones in cell phones, by accessing the phone's diagnostic/maintenance features, in order to listen to conversations that take place nearby the person who holds the phone.
Postal services
As more people use faxes and e-mail, the significance of surveilling the postal system is decreasing, in favor of Internet and telephone surveillance. But interception of post is still an available option for law enforcement and intelligence agencies in certain circumstances. This is not a common practice, however, and entities like the US Army require high levels of approval to conduct such interception.
The U.S. Central Intelligence Agency and Federal Bureau of Investigation have performed twelve separate mail-opening campaigns targeted towards U.S. citizens. In one of these programs, more than 215,000 communications were intercepted, opened, and photographed.
Stakeout
A stakeout is the coordinated surveillance of a location or person. Stakeouts are generally performed covertly and for the purpose of gathering evidence related to criminal activity. The term derives from the practice by land surveyors of using survey stakes to measure out an area before the main building project begins.
Internet of things
The Internet of Things (IoT) refers to a future of technology in which data can be collected without human-to-human or human-to-computer interaction. IoT devices can be used for identification, monitoring, location tracking, and health tracking. While IoT devices have the benefit of being time-saving tools that make activities simpler, they raise the concern of government surveillance and privacy regarding how data will be used.
Controversy
Support
Supporters of surveillance systems believe that these tools can help protect society from terrorists and criminals. They argue that surveillance can reduce crime by three means: by deterrence, by observation, and by reconstruction. Surveillance can deter by increasing the chance of being caught, and by revealing the modus operandi. This requires a minimal level of invasiveness.
Another way surveillance can be used to fight criminal activity is by linking the information stream obtained from it to a recognition system (for instance, a camera system that has its feed run through a facial recognition system). This can, for instance, automatically recognize fugitives and direct police to their location.
A distinction has to be made here, however, on the type of surveillance employed. Some people who say they support video surveillance in city streets may not support indiscriminate telephone taps, and vice versa. Besides the type, the way in which this surveillance is done also matters a lot; i.e., indiscriminate telephone taps are supported by far fewer people than telephone taps done only to people suspected of engaging in illegal activities.
Surveillance can also be used to give human operatives a tactical advantage through improved situational awareness, or through the use of automated processes, i.e. video analytics. Surveillance can help reconstruct an incident and prove guilt through the availability of footage for forensics experts. Surveillance can also influence subjective security if surveillance resources are visible or if the consequences of surveillance can be felt.
Some of the surveillance systems (such as the camera system that has its feed run through a facial recognition system mentioned above) can also have other uses besides countering criminal activity. For instance, they can help in locating runaway children, abducted or missing adults, and mentally disabled people.
Other supporters simply believe that there is nothing that can be done about the loss of privacy, and that people must become accustomed to having no privacy. As Sun Microsystems CEO Scott McNealy said: "You have zero privacy anyway. Get over it."
Another common argument is: "If you aren't doing something wrong then you don't have anything to fear." The implication is that only those engaging in unlawful activities lack a legitimate justification for their privacy, while those who follow the law are unaffected by surveillance.
Opposition
With the advent of programs such as the Total Information Awareness program and ADVISE, technologies such as high speed surveillance computers and biometrics software, and laws such as the Communications Assistance for Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of their subjects. Many civil rights and privacy groups, such as the Electronic Frontier Foundation and American Civil Liberties Union, have expressed concern that by allowing continual increases in government surveillance of citizens we will end up in a mass surveillance society, with extremely limited, or non-existent political and/or personal freedoms. Fears such as this have led to numerous lawsuits such as Hepting v. AT&T.
Some critics state that the claim made by supporters should be modified to read: "As long as we do what we're told, we have nothing to fear.". For instance, a person who is part of a political group which opposes the policies of the national government might not want the government to know their names and what they have been reading, so that the government cannot easily subvert their organization, arrest, or kill them. Other critics state that while a person might not have anything to hide right now, the government might later implement policies that they do wish to oppose, and that opposition might then be impossible due to mass surveillance enabling the government to identify and remove political threats. Further, other critics point to the fact that most people do have things to hide. For example, if a person is looking for a new job, they might not want their current employer to know this. Also, an employer that wishes total privacy in order to watch over its own employees and secure its financial information may find this impossible, and may not wish to hire those under surveillance.
In December 2017, the Government of China took steps to oppose widespread surveillance by security-company cameras, webcams, and IP cameras after tens of thousands of them were made accessible for internet viewing by IT company Qihoo.
Totalitarianism
Programs such as the Total Information Awareness program, and laws such as the Communications Assistance For Law Enforcement Act have led many groups to fear that society is moving towards a state of mass surveillance with severely limited personal, social, political freedoms, where dissenting individuals or groups will be strategically removed in COINTELPRO-like purges.
Kate Martin, of the Center For National Security Studies, said of the use of military spy satellites to monitor the activities of U.S. citizens: "They are laying the bricks one at a time for a police state."
Some point to the blurring of lines between public and private places, and the privatization of places traditionally seen as public (such as shopping malls and industrial parks) as illustrating the increasing legality of collecting personal information. Traveling through many public places such as government offices is hardly optional for most people, yet consumers have little choice but to submit to companies' surveillance practices. Surveillance techniques are not created equal; among the many biometric identification technologies, for instance, face recognition requires the least cooperation. Unlike automatic fingerprint reading, which requires an individual to press a finger against a machine, this technique is subtle and requires little to no consent.
Psychological/social effects
Some critics, such as Michel Foucault, believe that in addition to its obvious function of identifying and capturing individuals who are committing undesirable acts, surveillance also functions to create in everyone a feeling of always being watched, so that they become self-policing. This allows the State to control the populace without having to resort to physical force, which is expensive and otherwise problematic.
With the development of digital technology, individuals have become increasingly perceptible to one another, as surveillance becomes virtual. Online surveillance is the utilization of the internet to observe one's activity. Corporations, citizens, and governments participate in tracking others' behaviours for motivations ranging from business relations to curiosity to legality. In her book Superconnected, Mary Chayko differentiates between two types of surveillance: vertical and horizontal. Vertical surveillance occurs when there is a dominant force, such as a government, that is attempting to control or regulate the actions of a given society. Such powerful authorities often justify their incursions as a means to protect society from threats of violence or terrorism. Some individuals question when this becomes an infringement on civil rights.
Horizontal surveillance diverges from vertical surveillance in that the tracking shifts from an authoritative source to an everyday figure, such as a friend, coworker, or stranger who is interested in one's mundane activities. Individuals leave traces of information when they are online that reveal their interests and desires, which others can observe. While this can allow people to become interconnected and develop social connections online, it can also increase the potential risk of harm, such as cyberbullying or censoring/stalking by strangers, reducing privacy.
In addition, Simone Browne argues that surveillance wields an immense racializing quality such that it operates as "racializing surveillance." Browne uses racializing surveillance to refer to moments when enactments of surveillance are used to reify boundaries, borders, and bodies along racial lines and where the outcome is discriminatory treatment of those who are negatively racialized by such surveillance. Browne argues racializing surveillance pertains to policing what is "in or out of place."
Privacy
Numerous civil rights groups and privacy groups oppose surveillance as a violation of people's right to privacy. Such groups include: Electronic Privacy Information Center, Electronic Frontier Foundation, American Civil Liberties Union and Privacy International.
There have been several lawsuits such as Hepting v. AT&T and EPIC v. Department of Justice by groups or individuals, opposing certain surveillance activities.
Legislative proceedings such as those that took place during the Church Committee, which investigated domestic intelligence programs such as COINTELPRO, have also weighed the pros and cons of surveillance.
Court cases
People v. Diaz (2011) was a court case in the realm of cell phone privacy, even though the decision was later overturned. In this case, Gregory Diaz was arrested during a sting operation for attempting to sell ecstasy. During his arrest, police searched Diaz's phone and found more incriminating evidence including SMS text messages and photographs depicting illicit activities. During his trial, Diaz attempted to have the information from his cell phone removed from evidence, but the courts deemed it lawful and Diaz's appeal was denied at the California State Court level and, later, at the Supreme Court level. Just three years later, this decision was overturned in the case Riley v. California (2014).
Riley v. California (2014) was a U.S. Supreme Court case in which a man was arrested for his involvement in a drive-by shooting. A few days after the shooting the police made an arrest of the suspect (Riley), and, during the arrest, the police searched him. However, this search was not only of Riley's person; the police also opened and searched his cell phone, finding pictures of other weapons, drugs, and of Riley showing gang signs. In court, the question arose whether searching the phone was lawful or whether the search was protected by the Fourth Amendment of the Constitution. The decision held that the search of Riley's cell phone during the arrest was illegal, and that it was protected by the Fourth Amendment.
Countersurveillance, inverse surveillance, sousveillance
Countersurveillance is the practice of avoiding surveillance or making surveillance difficult. Developments in the late twentieth century, such as the Internet, the increasing prevalence of electronic security systems, high-altitude (and possibly armed) UAVs, and large corporate and government computer databases, have caused countersurveillance to grow dramatically in both scope and complexity.
Inverse surveillance is the practice of the reversal of surveillance on other individuals or groups (e.g., citizens photographing police). Well-known examples include George Holliday's recording of the Rodney King beating and the organization Copwatch, which attempts to monitor police officers to prevent police brutality. Counter-surveillance can be also used in applications to prevent corporate spying, or to track other criminals by certain criminal entities. It can also be used to deter stalking methods used by various entities and organizations.
Sousveillance is inverse surveillance, involving the recording by private individuals, rather than government or corporate entities.
Popular culture
In literature
George Orwell's novel Nineteen Eighty-Four portrays a fictional totalitarian surveillance society with a very simple mass surveillance system consisting of human operatives, informants, and two-way "telescreens" in people's homes. Because of the impact of this book, mass-surveillance technologies are commonly called "Orwellian" when they are considered problematic.
The novel Mistrust highlights the negative effects of the overuse of surveillance at Reflection House. The central character Kerryn installs secret cameras to monitor her housemates – see also Paranoia.
The book The Handmaid's Tale, as well as a film and TV series based on it, portray a totalitarian Christian theocracy where all citizens are kept under constant surveillance.
In the book The Girl with the Dragon Tattoo, Lisbeth Salander uses computers to get information on people, as well as other common surveillance methods, as a freelancer.
V for Vendetta, a British graphic novel written by Alan Moore
Dave Eggers's novel The Circle exhibits a world where a single company called "The Circle" produces all of the latest and highest quality technologies, from computers and smartphones to surveillance cameras known as "See-Change cameras". This company becomes associated with politics when starting a movement where politicians go "transparent" by wearing See-Change cameras on their body to prevent keeping secrets from the public about their daily work activity. In this society, it becomes mandatory to share personal information and experiences because it is The Circle's belief that everyone should have access to all information freely. However, as Eggers illustrates, this takes a toll on the individuals and creates a disruption of power between the governments and the private company. The Circle presents extreme ideologies surrounding mandatory surveillance. Eamon Bailey, one of the Wise Men, or founders of The Circle, believes that possessing the tools to access information about anything or anyone should be a human right given to all of the world's citizens. By eliminating all secrets, any behaviour that has been deemed shameful will either become normalized or no longer considered shocking. Negative actions will eventually be eradicated from society altogether, through the fear of being exposed to other citizens. This would be achieved in part by everyone going transparent, something that Bailey highly supports, although it's notable that none of the Wise Men ever became transparent themselves. One major goal of The Circle is to have all of the world's information filtered through The Circle, a process they call "Completion". A single, private company would then have full access and control over all information and privacy of individuals and governments. Ty Gospodinov, the first founder of The Circle, has major concerns about the completion of the circle. He warns that this step would give The Circle too much power and control, and would quickly lead to totalitarianism.
In music
The Dead Kennedys' song "I Am The Owl" is about government surveillance and social engineering of political groups.
The Vienna Teng song "Hymn of Acxiom" is about corporate data collection and surveillance.
Onscreen
The film Gattaca portrays a society that uses biometric surveillance to distinguish between people who are genetically engineered "superior" humans and genetically natural "inferior" humans.
In the movie Minority Report, the police and government intelligence agencies use micro aerial vehicles in SWAT operations and for surveillance purposes.
HBO's crime-drama series The Sopranos regularly portrays the FBI's surveillance of the DiMeo Crime Family. Audio devices they use include "bugs" placed in strategic locations (e.g., in "I Dream of Jeannie Cusamano" and "Mr. Ruggerio's Neighborhood") and hidden microphones worn by operatives (e.g., in "Rat Pack") and informants (e.g., in "Funhouse", "Proshai, Livushka" and "Members Only"). Visual devices include hidden still cameras (e.g., in "Pax Soprana") and video cameras (e.g., in "Long Term Parking").
The movie THX-1138 portrays a society wherein people are drugged with sedatives and antidepressants, and have surveillance cameras watching them everywhere they go.
The movie The Lives of Others portrays the monitoring of East Berlin by agents of the Stasi, the GDR's secret police.
The movie The Conversation portrays many methods of audio surveillance.
The movie V for Vendetta, a 2005 dystopian political thriller film directed by James McTeigue and written by the Wachowskis, is about the British government trying to brainwash people through the media, obtain their support by fearmongering, monitor them with mass surveillance devices, and suppress or kill any political or social objection.
The movie Enemy of the State, a 1998 American action-thriller film directed by Tony Scott, is about using U.S. citizens' data to search their backgrounds and surveillance devices to capture everyone identified as an "enemy".
The British TV series The Capture explores the potential for video surveillance to be manipulated in order to support a conviction to pursue a political agenda.
See also
Mass surveillance
Sousveillance
Surveillance art
Surveillance capitalism
Surveillance system monitor
Trapwire
Participatory surveillance
PRISM (surveillance program)
References
Further reading
Allmer, Thomas. (2012). Towards a Critical Theory of Surveillance in Informational Capitalism. Frankfurt am Main: Peter Lang.
Andrejevic, Mark. 2007. iSpy: Surveillance and Power in the Interactive Era. Lawrence, KS: University Press of Kansas.
Ball, Kirstie, Kevin D. Haggerty, and David Lyon, eds. (2012). Routledge Handbook of Surveillance Studies. New York: Routledge.
Brayne, Sarah. (2020). Predict and Surveil: Data, Discretion, and the Future of Policing. New York: Oxford University Press.
Browne, Simone. (2015). Dark Matters: On the Surveillance of Blackness. Durham: Duke University Press.
Coleman, Roy, and Michael McCahill. 2011. Surveillance & Crime. Thousand Oaks, Calif.: Sage.
Feldman, Jay. (2011). Manufacturing Hysteria: A History of Scapegoating, Surveillance, and Secrecy in Modern America. New York, NY: Pantheon Books.
Fuchs, Christian, Kees Boersma, Anders Albrechtslund, and Marisol Sandoval, eds. (2012). "Internet and Surveillance: The Challenges of Web 2.0 and Social Media". New York: Routledge.
Garfinkel, Simson, Database Nation; The Death of Privacy in the 21st Century. O'Reilly & Associates, Inc.
Gilliom, John. (2001). Overseers of the Poor: Surveillance, Resistance, and the Limits of Privacy. University of Chicago Press.
Haque, Akhlaque. (2015). Surveillance, Transparency and Democracy: Public Administration in the Information Age. University of Alabama Press, Tuscaloosa, AL.
Harris, Shane. (2011). The Watchers: The Rise of America's Surveillance State. London, UK: Penguin Books Ltd.
Hier, Sean P., & Greenberg, Joshua (Eds.). (2009). Surveillance: Power, Problems, and Politics. Vancouver, CA: UBC Press.
Jensen, Derrick and Draffan, George (2004) Welcome to the Machine: Science, Surveillance, and the Culture of Control Chelsea Green Publishing Company.
Lewis, Randolph. (2017). Under Surveillance: Being Watched in Modern America. Austin: University of Texas Press.
Lyon, David (2001). Surveillance Society: Monitoring in Everyday Life. Philadelphia: Open University Press.
Lyon, David (Ed.). (2006). Theorizing Surveillance: The Panopticon and Beyond. Cullompton, UK: Willan Publishing.
Lyon, David (2007) Surveillance Studies: An Overview. Cambridge: Polity Press.
Matteralt, Armand. (2010). The Globalization of Surveillance. Cambridge, UK: Polity Press.
Monahan, Torin, ed. (2006). Surveillance and Security: Technological Politics and Power in Everyday Life. New York: Routledge.
Monahan, Torin. (2010). Surveillance in the Time of Insecurity. New Brunswick: Rutgers University Press.
Monahan, Torin, and David Murakami Wood, eds. (2018). Surveillance Studies: A Reader. New York: Oxford University Press.
Parenti, Christian. The Soft Cage: Surveillance in America From Slavery to the War on Terror. Basic Books.
Petersen, J.K. (2012). Handbook of Surveillance Technologies, Third Edition. Taylor & Francis: CRC Press, 1020 pp.
Staples, William G. (2000). Everyday Surveillance: Vigilance and Visibility in Post-Modern Life. Lanham, MD: Rowman & Littlefield Publishers.
General information
ACLU, "The Surveillance-Industrial Complex: How the American Government Is Conscripting Businesses and Individuals in the Construction of a Surveillance Society"
Balkin, Jack M. (2008). "The Constitution in the National Surveillance State", Yale Law School
Bibo, Didier and Delmas-Marty, "The State and Surveillance: Fear and Control"
EFF Privacy Resources
EPIC Privacy Resources
ICO. (September 2006). "A Report on the Surveillance Society for the Information Commissioner by the Surveillance Studies Network".
Privacy Information Center
Historical information
COINTELPRO—FBI counterintelligence programs designed to neutralize political dissidents
Reversing the Whispering Gallery of Dionysius – A Short History of Electronic Surveillance in the United States
Legal resources
EFF Legal Cases
Guide to lawful intercept legislation around the world
External links
Crime prevention
Espionage techniques
Law enforcement
Law enforcement techniques
National security
Privacy
Security
|
1381502
|
https://en.wikipedia.org/wiki/IEEE%20802.22
|
IEEE 802.22
|
IEEE 802.22 is a standard for wireless regional area networks (WRAN) using white spaces in the television (TV) frequency spectrum.
The development of the IEEE 802.22 WRAN standard is aimed at using cognitive radio (CR) techniques to allow sharing of geographically unused spectrum allocated to the television broadcast service, on a non-interfering basis, to bring broadband access to hard-to-reach, low population density areas typical of rural environments, and therefore has the potential for wide applicability worldwide. It is the first worldwide effort to define a standardized air interface based on CR techniques for the opportunistic use of TV bands on a non-interfering basis.
IEEE 802.22 WRANs are designed to operate in the TV broadcast bands while assuring that no harmful interference is caused to the incumbent operation: digital TV and analog TV broadcasting, and low power licensed devices such as wireless microphones.
The standard was expected to be finalized in Q1 2010, but was finally published in July 2011.
IEEE P802.22.1 is a related standard being developed to enhance harmful interference protection for low power licensed devices operating in TV broadcast bands.
IEEE P802.22.2 is a recommended practice for the installation and deployment of IEEE 802.22 Systems.
IEEE 802.22 WG is a working group of IEEE 802 LAN/MAN standards committee which was chartered to write the 802.22 standard. The two 802.22 task groups (TG1 and TG2) are writing 802.22.1 and 802.22.2 respectively.
Technology
In response to a notice of proposed rulemaking (NPRM) issued by the U.S. Federal Communications Commission (FCC) in May 2004, the IEEE 802.22 working group on Wireless Regional Area Networks was formed in October 2004.
Its project, formally called Standard for Wireless Regional Area Networks (WRAN) - Specific requirements - Part 22: Cognitive Wireless RAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications: Policies and procedures for operation in the TV Bands, focused on constructing a consistent, national fixed point-to-multipoint WRAN that will use UHF/VHF TV bands between 54 and 862 MHz. Specific TV channels as well as the guard bands of these channels are planned to be used for communication in IEEE 802.22.
The Institute of Electrical and Electronics Engineers (IEEE), together with the FCC, pursued a centralized approach for available spectrum discovery. Specifically each base station (BS) would be armed with a GPS receiver which would allow its position to be reported. This information would be sent back to centralized servers (in the USA these would be managed by the FCC), which would respond with the information about available free TV channels and guard bands in the area of the BS. Other proposals would allow local spectrum sensing only, where the BS would decide by itself which channels are available for communication. A combination of these two approaches is also envisioned. Devices which would operate in the TV white space band (TVWS) would be mainly of two types: Fixed and Personal/Portable. Fixed devices would have geolocation capability with an embedded GPS device. Fixed devices also communicate with the central database to identify other transmitters in the area operating in TVWS. Other measures suggested by the FCC and IEEE to avoid interference include dynamic spectrum sensing and dynamic power control.
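A minimal sketch of the centralized discovery approach described above is given below: the base station reports its GPS coordinates to a white-space database and receives the list of TV channels it may use at that location. The URL and response format here are hypothetical and are not the interface of any actual FCC-designated database.

import json
import urllib.request

def query_available_channels(latitude, longitude, db_url="https://example.org/tvws/query"):
    """Ask a (hypothetical) white-space database which TV channels are free at a location."""
    request = urllib.request.Request(
        db_url,
        data=json.dumps({"latitude": latitude, "longitude": longitude}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["available_channels"]   # e.g. [21, 27, 34]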
Overview of the WRAN topology
The initial drafts of the 802.22 standard specify that the network should operate on a point-to-multipoint basis (P2MP). The system will be formed by base stations (BS) and customer-premises equipment (CPE). The CPEs will be attached to a BS via a wireless link. Each BS will control the medium access for all the CPEs attached to it.
One key feature of the WRAN base stations is that they will be capable of performing cognitive sensing: the CPEs will sense the spectrum and send periodic reports to the BS informing it about what they sense. The BS, with the information gathered, will evaluate whether a change is necessary in the channel used, or whether, on the contrary, it should keep transmitting and receiving on the same one.
An approach to the PHY layer
The PHY layer must be able to adapt to different conditions and also needs to be flexible, hopping from channel to channel without errors in transmission or losing clients (CPEs). This flexibility is also required for dynamically adjusting the bandwidth, modulation and coding schemes. OFDMA is the modulation scheme for transmission in the uplink and downlink. With OFDMA it is possible to achieve the fast adaptation needed by the BSs and CPEs.
Using just one TV channel (a TV channel has a bandwidth of 6 MHz; in some countries, 7 or 8 MHz), the approximate maximum bit rate is 19 Mbit/s at a 30 km distance. The speed and distance achieved are not enough to fulfill the requirements of the standard. The Channel Bonding feature deals with this problem: it consists of using more than one channel for transmission and reception (Tx/Rx). This gives the system higher bandwidth, which is reflected in better system performance.
An approach to the MAC layer
This layer is based on cognitive radio technology. It also needs to be able to adapt dynamically to changes in the environment by sensing the spectrum. The MAC layer consists of two structures: frames and superframes. A superframe is formed by many frames. The superframe has a superframe control header (SCH) and a preamble. These are sent by the BS on every channel on which it can transmit without causing interference. When a CPE is turned on, it senses the spectrum, finds out which channels are available, and receives all the information needed to attach to the BS.
Two different types of spectrum measurement are done by the CPE: in-band and out-of-band. The in-band measurement consists of sensing the actual channel that is being used by the BS and CPE. The out-of-band measurement consists of sensing the rest of the channels. The MAC layer performs two different types of sensing in either in-band or out-of-band measurements: fast sensing and fine sensing. Fast sensing takes under 1 ms per channel; it is performed by the CPEs and the BS, and the BS gathers all the information and decides whether anything new needs to be done. Fine sensing takes more time (approximately 25 ms per channel or more) and is used based on the outcome of the previous fast-sensing mechanism.
These sensing mechanisms are primarily used to identify if there is an incumbent transmitting, and if there is a need to avoid interfering with it.
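As an illustration of how such a two-stage sensing loop might be organized, the following Python sketch flags channels with a cheap fast scan and confirms them with a slower fine scan before declaring them occupied. It is a minimal, hypothetical example: the thresholds, channel numbers and detector stubs are placeholders for illustration only, not values or algorithms taken from the IEEE 802.22 specification.

```python
import random

# Hypothetical two-stage sensing loop. The thresholds, channel numbers and
# detector stubs below are placeholders for illustration only; they are not
# taken from the IEEE 802.22 specification.
FAST_THRESHOLD = 0.6   # coarse energy score above which a channel looks occupied
FINE_THRESHOLD = 0.5   # detailed score above which an incumbent is declared present

def fast_sense(channel):
    """Stand-in for fast sensing (under 1 ms per channel): a coarse energy measurement."""
    return random.random()

def fine_sense(channel):
    """Stand-in for fine sensing (roughly 25 ms per channel): detailed incumbent detection."""
    return random.random()

def free_channels(channels):
    """Return the channels believed free of incumbent transmissions."""
    suspects = [ch for ch in channels if fast_sense(ch) > FAST_THRESHOLD]
    occupied = {ch for ch in suspects if fine_sense(ch) > FINE_THRESHOLD}
    return [ch for ch in channels if ch not in occupied]

print("Channels considered free:", free_channels(range(21, 52)))
```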
To perform reliable sensing in the basic operation mode on a single frequency band as described above (the "listen-before-talk" mode), one has to allocate quiet times in which no data transmission is permitted. Such periodic interruption of data transmission could impair the QoS of cognitive radio systems. This issue is addressed by an alternative operation mode proposed in IEEE 802.22 called dynamic frequency hopping (DFH), in which data transmission of the WRAN systems is performed in parallel with spectrum sensing, without any interruption.
Encryption, authentication, and authorization
Only the AES-GCM authenticated encryption cipher algorithm is supported.
EAP-TLS or EAP-TTLS must be used for authentication and encryption key derivation. IEEE 802.22 defines an X.509v3 certificate profile which uses extensions for authentication and authorization of devices based on information such as device manufacturer, MAC address, and FCC ID (the Manufacturer/ServiceProvider certificate, the CPE certificate, and the BS certificate, respectively).
This could allow for a type of customer lock-in where the network providers refuse network access to devices that have not been vetted by manufacturers of the network providers' choice (i.e. the device must possess a private key of an X.509 certificate with a chain of trust to a manufacturer certificate authority (CA) that the network provider will accept), not unlike the SIM lock in modern cellular networks and DOCSIS "certification testers" in cable networks.
Comparison with 802.11af
In addition to 802.22, the IEEE has standardized another white space cognitive radio standard, 802.11af. While 802.22 is a wireless regional area network (WRAN) standard, for ranges up to 100 km, 802.11af is a wireless LAN standard designed for ranges up to 1 km. Coexistence between 802.22 and 802.11af standards can be implemented either in centralized or distributed manners and based on various coexistence techniques.
See also
IEEE 802.11af, a standard for wireless LANs in TV white space
Geolocation Database
How is spectrum sensing done
References
IEEE 802
Wireless networking standards
Walter Isaacson
Walter Seff Isaacson (born May 20, 1952) is an American author, journalist, and professor. He has been the President and CEO of the Aspen Institute, a nonpartisan policy studies organization based in Washington, D.C., the chair and CEO of CNN, and the editor of Time.
Born in New Orleans, Louisiana, he attended Harvard University and the University of Oxford as a Rhodes scholar at Pembroke College. He is the author of The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race (2021), Leonardo da Vinci (2017), The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution (2014), Steve Jobs (2011), American Sketches (2009), Einstein: His Life and Universe (2007), Benjamin Franklin: An American Life (2003), and Kissinger: A Biography (1992). He is the co-author with Evan Thomas of The Wise Men: Six Friends and the World They Made (1986).
Isaacson is a professor at Tulane University and an advisory partner at Perella Weinberg Partners, a New York City-based financial services firm. He was vice chair of the Louisiana Recovery Authority, which oversaw the rebuilding after Hurricane Katrina, chaired the government board that runs Voice of America, and was a member of the Defense Innovation Board.
Early life and education
Isaacson was born in New Orleans, Louisiana, the son of Betty "Betsy" Lee (née Seff) and Irwin Isaacson. His father was a "kindly Jewish distracted humanist engineer with a reverence for science" and his mother Betsy was a realtor. He attended New Orleans' Isidore Newman School, where he was student body president, Deep Springs College for the Telluride Association Summer Program (TASP), and Harvard University, where he majored in History and Literature and graduated in 1974. At Harvard, Isaacson was the president of the Signet Society, member of the Harvard Lampoon, and resident of Lowell House. He later attended the University of Oxford as a Rhodes scholar at Pembroke College, where he studied Philosophy, Politics, and Economics (PPE) and graduated with First-Class Honours.
Career
Media
Isaacson began his career in journalism at The Sunday Times in London, followed by a position with the New Orleans Times-Picayune. He joined Time magazine in 1978, serving as the magazine's political correspondent, national editor, and editor of new media before becoming the magazine's 14th editor in 1996.
Isaacson became chairman and CEO of CNN in July 2001, replacing Tom Johnson, and only two months later guided CNN through the events of 9/11. Shortly after his appointment at CNN, Isaacson attracted attention for seeking the views of Republican Party leaders on Capitol Hill regarding criticisms that CNN broadcast content that was unfair to Republicans or conservatives. He was quoted in Roll Call magazine as saying: "I was trying to reach out to a lot of Republicans who feel that CNN has not been as open to covering Republicans, and I wanted to hear their concerns." The CEO's conduct was criticized by the Fairness & Accuracy In Reporting (FAIR) organization, which said that Isaacson's "pandering" behavior was endowing conservative politicians with power over CNN.
In January 2003, he announced that he would step down as president at CNN to become president of the Aspen Institute. Jim Walton replaced Isaacson as president of CNN.
Isaacson served as the president and CEO of the Aspen Institute from 2003 until 2018, when he announced that he would step down to become a professor of history at Tulane University and an advisory partner at the New York City financial services firm Perella Weinberg Partners. In November 2017, the Aspen Institute named Dan Porterfield, the president of Franklin & Marshall College, as Isaacson's successor.
In March 2017, Isaacson launched a podcast with Dell Technologies called Trailblazers, which focuses on technology's effects on business. In 2018, Isaacson was named as a cohost of Amanpour & Company, a new show on PBS and CNN that replaced The Charlie Rose Show.
Writing
Isaacson is the author of multiple published books including American Sketches (2009), Einstein: His Life and Universe (2007), Benjamin Franklin: An American Life (2003) and Kissinger: A Biography (1992). He additionally co-authored with Evan Thomas the work The Wise Men: Six Friends and the World They Made (1986).
On October 24, 2011, Steve Jobs, Isaacson's authorized biography of Apple Inc.'s Jobs, was published by Simon & Schuster, only several weeks after Jobs' death. It became an international best-seller, breaking all records for sales of a biography. The book was based on over forty interviews with Jobs over a two-year period up until shortly before his death, and on conversations with friends, family members, and business rivals of the entrepreneur.
In October 2014, Isaacson published The Innovators: How a Group of Inventors, Hackers, Geniuses, and Geeks Created the Digital Revolution, which explores the history of the key technological innovations that are prominent in the digital revolution, most notably the parallel developments of the computer and the Internet. It became a New York Times bestseller. Writing for the New York Times, Janet Maslin described the author as "a kindred spirit to the visionaries and enthusiasts" who Isaacson wrote about.
He is the editor of Profiles in Leadership: Historians on the Elusive Quality of Greatness (2010, W. W. Norton).
His biography of Leonardo da Vinci was published on October 17, 2017, to positive reviews from critics. In August 2017, Paramount Pictures won a bidding war against Universal Pictures for the rights to adapt Isaacson's biography of da Vinci. The studio bought the rights under its deal with Leonardo DiCaprio's Appian Way Productions, which said that it planned to produce the film with DiCaprio as the star. Screenwriter John Logan (The Aviator, Gladiator) has been tapped to pen the script.
His book The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race was published in March 2021 by Simon & Schuster. It is a biography of Jennifer Doudna, the winner of the 2020 Nobel Prize in Chemistry for her work on the CRISPR system of gene editing. The book debuted at number one on The New York Times nonfiction best-seller list for the week ending March 13, 2021. Publishers Weekly called it a "gripping account of a great scientific advancement and of the dedicated scientists who realized it."
In August 2021, businessman Elon Musk announced that Isaacson was in the process of writing his biography.
Government
In October 2005, the Governor of Louisiana, Kathleen Blanco, appointed Isaacson vice chairman of the Louisiana Recovery Authority, a board that oversaw spending on the recovery from Hurricane Katrina. In December 2007, he was appointed by President George W. Bush as chairman of the U.S.-Palestinian Partnership, which seeks to create economic and educational opportunities in the Palestinian territories. Secretary of State Hillary Clinton appointed him vice-chair of the Partners for a New Beginning, which encourages private-sector investments and partnerships in the Muslim world.
He also served as the co-chair of the U.S.-Vietnamese Dialogue on Agent Orange, which in January 2008 announced completion of a project to contain the dioxin left behind by the U.S. at the Da Nang air base and plans to build health centers and a dioxin laboratory in the affected regions. In 2008, he was appointed to be a member of the Advisory Committee of the National Institutes of Health. In 2009, he was appointed by President Obama to be Chairman of the Broadcasting Board of Governors, which runs Voice of America, Radio Free Europe, and the other international broadcasts of the U.S. government; he served until January 2012. In 2014, he was appointed by New Orleans Mayor Mitch Landrieu to be the co-chair of the New Orleans Tricentennial Commission, which planned the city's 300th-anniversary commemoration in 2018. In 2015, he was appointed to the board of My Brother's Keeper Alliance, which seeks to carry out President Obama's anti-poverty and youth opportunity initiatives. In 2016, he was appointed by Mayor Mitch Landrieu and confirmed by the City Council to be a member of the New Orleans City Planning Commission. He is a member of the U.S. Department of Defense Innovation Advisory Board. In 2018, he was appointed by New Orleans mayor-elect LaToya Cantrell to be co-chair of her transition team.
Positions
Isaacson is an advisory partner at Perella Weinberg, a financial services firm. He is the chairman emeritus of the board of Teach for America and is on the boards of United Airlines, Halliburton Labs, The New Orleans Advocate/Times-Picayune, New Schools New Orleans, Bloomberg Philanthropies, the Rockefeller Foundation, the Carnegie Institution for Science and the Society of American Historians, of which he served as president in 2012.
In March 2019, Isaacson became the editor-at-large and senior adviser for Arcadia Publishing, where he promotes books for the company and works on editing, new strategy development, and partnerships.
Isaacson is an Associate of the History of Science Department and a member of the Lowell House Senior Common Room at Harvard University. He is also an Honorary Fellow of Pembroke College, Oxford. Isaacson teaches a course at Tulane called History Of the Digital Revolution, an open seminar filled with discussion about technology, culture, and the progression of society.
Honors
Isaacson's book Steve Jobs, about the life of the entrepreneur, earned him the 2012 Gerald Loeb Award.
In 2012, he was selected as one of the Time 100, the magazine's list of the most influential people in the world. Isaacson is a fellow of the Royal Society of Arts and was awarded its 2013 Benjamin Franklin Medal. He is also a member of the American Academy of Arts and Sciences, the American Philosophical Society and an Honorary Fellow of Pembroke College, Oxford.
In 2014, the National Endowment for the Humanities selected Isaacson for the Jefferson Lecture, the U.S. federal government's highest honor for achievement in the humanities. The title of Isaacson's lecture was "The Intersection of the Humanities and the Sciences."
He has honorary degrees from Tufts University, Cooper Union, William & Mary, Franklin University Switzerland, University of New Orleans, University of South Carolina, City University of New York (Hunter College), Pomona College, Lehigh University, Duke University, and Colorado Mountain College, where the Isaacson School of Media and Communications is named after him. He was the 2015 recipient of The Nichols-Chancellor's Medal at Vanderbilt University.
Bibliography
Kissinger: A Biography. (Simon & Schuster, 1992)
Benjamin Franklin: An American Life. (Simon & Schuster, 2003)
Einstein: His Life and Universe. (Simon & Schuster, 2007)
American Sketches. (Simon & Schuster, 2009)
Steve Jobs. (Simon & Schuster, 2011)
The Innovators: How a Group of Inventors, Hackers, Geniuses, and Geeks Created the Digital Revolution. (Simon & Schuster, 2014)
Leonardo Da Vinci. (Simon & Schuster, 2017)
The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race. (Simon & Schuster, 2021)
See also
New Yorkers in journalism
U.S.-Vietnam Dialogue Group on Agent Orange/Dioxin
Partners for a New Beginning
References
External links
Official website at Tulane University
Members of the Council on Foreign Relations
1952 births
Living people
20th-century American biographers
20th-century American journalists
20th-century American male writers
21st-century American biographers
21st-century American journalists
Alumni of Pembroke College, Oxford
American historians of science
American magazine editors
American male journalists
American nonprofit chief executives
American Rhodes Scholars
American technology journalists
The Atlantic (magazine) people
Fellows of the Royal Society of Arts
Gerald Loeb Award winners for Business Books
The Harvard Lampoon alumni
Isidore Newman School alumni
Jewish American historians
Jewish American journalists
Leonardo da Vinci scholars
Presidents of CNN
American male biographers
Time (magazine) people
Writers from New Orleans
Members of the American Philosophical Society
Counter-Strike: Global Offensive
Counter-Strike: Global Offensive (CS:GO) is a multiplayer first-person shooter developed by Valve and Hidden Path Entertainment. It is the fourth game in the Counter-Strike series. Developed for over two years, Global Offensive was released for Windows, macOS, Xbox 360, and PlayStation 3 in August 2012, and for Linux in 2014. Valve still regularly updates the game, both with smaller balancing patches and larger content additions.
The game pits two teams, Terrorists and Counter-Terrorists, against each other in different objective-based game modes. The most common game modes involve the Terrorists planting a bomb while Counter-Terrorists attempt to stop them, or Counter-Terrorists attempting to rescue hostages that the Terrorists have captured. There are nine official game modes, all of which have distinct characteristics specific to that mode. The game also has matchmaking support that allows players to play on dedicated Valve servers, in addition to community-hosted servers with custom maps and game modes. A battle-royale game-mode, "Danger Zone", was introduced in December 2018.
Global Offensive received positive reviews from critics on release, who praised the game for its gameplay and faithfulness to the Counter-Strike series, though it was criticized for some early features and the differences between the console and PC versions. Since its release, it has drawn in an estimated 11 million players per month, and remains one of the most played games on Valve's Steam platform. In December 2018, Valve transitioned the game to a free-to-play model, focusing on revenue from cosmetic items.
The game has an active esports scene, continuing the history of international competitive play from previous games in the series. Teams compete in professional leagues and tournaments, and Global Offensive is now one of the largest global esports.
Gameplay
Global Offensive, like prior games in the Counter-Strike series, is an objective-based, multiplayer first-person shooter. Two opposing teams, the Terrorists and the Counter-Terrorists, compete in game modes to repeatedly complete objectives, such as securing a location to plant or defuse a bomb and rescuing or capturing hostages. At the end of each short round, players are rewarded based on individual and team performance with in-game currency to spend on other weapons or utility in subsequent rounds. Winning rounds generally rewards more money than losing does, and completing map-based objectives, including killing enemies, gives additional cash bonuses.
Global Offensive has nine official game modes: Competitive, Casual, Deathmatch, Arms Race, Demolition, Wingman, Flying Scoutsman, Retakes and Danger Zone. Competitive mode, the primary gameplay experience, pits two teams of five players against each other in a best-of-30 match. When playing Competitive, players have a skill rank based on a Glicko rating system and are paired with and against other players around the same ranking. The Casual and Deathmatch modes are less serious than Competitive mode and do not register friendly fire. Both are primarily used as a practice tool. Arms Race and Demolition, both based on mods for previous iterations in the series, were added alongside eight new maps for the modes. Arms Race is the Global Offensive variant of the "Gun Game" mode in other games in the series. Demolition is another bomb defusal game mode, with gun upgrades only given to players who killed an enemy in the previous round. Wingman is a two-on-two bomb defusal game mode taking place over sixteen rounds. Similar to Competitive, players are paired based on a dynamic skill ranking. The Flying Scoutsman mode equips players with only a SSG 08 (known in-game as the "Scout") and a knife in a low-gravity map. Retakes is a gamemode where three Terrorists will defend an already planted C4 against 4 Counter-Terrorists. Players will also be able to choose a loadout card at the beginning of each round to retake (or defend) the bomb site. Danger Zone is a battle-royale mode in which up to 18 players search for weapons, equipment, and money in an effort to be the last person or team remaining. Valve also included an offline practice mode designed to help new players learn how to use guns and grenades, called the Weapons Course. Apart from the Weapons Course, all other game modes can be played online or offline with bots.
There are five categories of purchasable weaponry: rifles, submachine guns, "heavy" weaponry (light machine guns and shotguns), pistols, and grenades. Each gun in Global Offensive has a unique recoil pattern that can be controlled, a gameplay feature the series has long been associated with. Global Offensive also introduced weapons and equipment not seen in previous installments, including tasers and an incendiary grenade.
In-game matchmaking is supported for all online game modes and is managed through the Steam platform. The game servers run Valve Anti-Cheat to prevent cheating. One form of matchmaking in Global Offensive to prevent cheating, Prime Matchmaking, hosts matches that can only be played with other users with the "Prime" status. This feature also results in more equal matches as there are fewer "smurfs" in these matches. The PC version of Global Offensive also supports private dedicated servers that players may connect to through the community server menu in-game. These servers may be heavily modified and can drastically differ from the base game modes. There have been many community made mods for the game, one of the popular ones being "kz", a mod that makes players complete obstacle courses requiring advanced strafing and jumping techniques.
Development and release
Counter-Strike: Global Offensive is the sequel to the popular first-person shooter Counter-Strike: Source, developed by Valve. Global Offensive's development began when Hidden Path Entertainment attempted to port Counter-Strike: Source onto video game consoles. During its development, Valve saw the opportunity to turn the port into a full game and expand on the predecessor's gameplay. Global Offensive began development in March 2010, and was revealed to the public on August 12, 2011. The closed beta started on November 30, 2011, and was initially restricted to around ten thousand people who received a key at events intended to showcase Global Offensive. After issues with client and server stability were addressed, the beta was opened up to progressively more people, and at E3 2012, Valve announced that Global Offensive would be released on August 21, 2012, with the open beta starting roughly a month before that. Before the public beta, Valve invited professional Counter-Strike players to play-test the game and give feedback.
There were plans for cross-platform multiplayer play between Windows, OS X, Linux, Xbox 360, and PlayStation 3 players, but this was ultimately dropped so that the PC and Mac versions could be actively updated. On August 21, 2012, the game was publicly released on all platforms except Linux, which would not be released until September 23, 2014.
Since the initial release of Global Offensive, Valve has continued to update the game by introducing new maps and weapons, game modes, and weapon balancing changes. One of the first major additions to the game post-release was the "Arms Deal" update. Released on August 13, 2013, the update added cosmetic weapon finishes, or skins, to the game. These items are obtainable by a loot box mechanism; players would receive cases that could be unlocked using virtual keys, purchased through in-game microtransactions. Global Offensive has Steam Workshop support, allowing users to upload user-created content, such as maps, weapon skins, and custom game-modes. Some popular user-created skins are added to the game and are obtainable from unboxing them in cases. The creators of the skins are paid when their item is added to a case. These skins helped form a virtual economy in Global Offensive, leading to the creation of gambling, betting, and trading sites. The addition of skins and the associated virtual economy launched Global Offensive's player count past the other games in the Counter-Strike series and is one of the most important updates in the game's history.
Events called "Operations" are held occasionally and can be accessed through purchasable expansion packs in the form of "operation passes." These passes grant access to operation objectives which are spread over different game modes, such as Arms Race and Deathmatch, or in operation-specific game modes, first seen in Operation Hydra, released in May 2017. Completing these challenges rewards the player with XP and the ability to upgrade the operation "coin." The maps in the operations are community made, meaning some of the revenue made goes towards the map designers.
An update in October 2014 added "music kits", which replace the default in-game music with music from soundtrack artists commissioned by Valve. If a player with a music kit equipped becomes the round's most valuable player, their music will play for others at the end of the round. There is a feature that allows kits to be borrowed, and kits can be sold and exchanged through Steam's Community Market.
In 2016, the game saw two remakes of original Counter-Strike maps, as well as the introduction of Prime matchmaking and additional items. As a part of the Operation Wildfire promotion, Nuke was remade and re-released in February with the primary goals being to balance the map and make it more aesthetically pleasing. In April, Prime matchmaking was added to the game. To partake in this mode, the user had to have a verified phone number connected to their account. It was introduced in an attempt to prevent legitimate players from playing with cheaters or high-skilled players playing on alternative, lower ranked accounts, a practice colloquially known as "smurfing". Inferno, another original map, was re-released in October. Valve said they had three reasons behind the remake: "to improve visibility; to make it easier to move around in groups; and to tune it with player feedback." Also in October, consumable items called graffiti were added to the game. These items replaced a feature present in the previous iterations of the series called sprays. Previously, players could customize their sprays. Graffiti ideas can be uploaded to the Steam Workshop in the similar manner as gun skins and players can buy and trade the existing graffiti in game. One month later, glove skins were added.
In September 2017, Valve worked with the publisher Perfect World to release Global Offensive in mainland China. Chinese citizens, with their identification verified, can receive the game for free and earn Prime matchmaking status immediately. The game is played through Perfect World's launcher and contains numerous exclusive changes, including the censorship of skulls and other symbols. Some other changes were to the cosmetics in certain maps; for example, the hammer and sickle on Cache and Train were removed. In preparation for the release, multiple cities in China celebrated and heavily promoted its upcoming release. Users who played the game during its launch month received free promotional cosmetics. In compliance with Chinese law, Valve also had to disclose its loot box gambling odds.
In November 2017, an update to the competitive matchmaking was announced. Called the "Trust Factor", it meant a player's "Trust Factor" would be calculated through both in-game and Steam-wide actions. Factors such as playtime on Global Offensive, times a user has been reported for cheating, playtime on other Steam games, and other behaviors hidden by Valve are taken into consideration when a user's "Trust Factor" is developed. This was done in an attempt to let the community bond back together in matchmaking, as Prime matchmaking separated Prime and non-Prime players from each other. Valve will not let users view their "Trust Factor" or reveal all of the factors deciding one's "Trust". In August 2018, an offline version of the game was released that allows the players to play offline with bots.
An update released on December 6, 2018, made the game fully free to play. Users who had purchased the game prior to this update were automatically updated to "Prime" status and given modes that can drop cosmetic items. In addition, the new version introduced a battle royale mode called "Danger Zone".
In November 2019, Operation Shattered Web was released. It operated similarly to the previous operations and introduced new character models and a battle pass system.
In April 2020, source code for the 2018 versions of Counter-Strike: Global Offensive and Team Fortress 2 was leaked on the Internet. This created fears that malicious users would take advantage of the code to develop remote code execution software and attack game servers or players' own computers. Several ongoing fan projects temporarily halted development in the wake of this news until the impact of the leak could be better assessed. Valve confirmed the legitimacy of the code leaks, but stated it does not believe they impact servers and clients running the latest official builds of either game.
In December 2020, Operation Broken Fang was released accompanied with a cinematic trailer, the first official Counter-Strike: Global Offensive cinematic trailer in eight years since the official launch trailer.
In May 2021, a subscription service called "CS:GO 360 Stats" was released, billed monthly. It includes access to detailed match stats from official Competitive, Premier, and Wingman game modes and the Round Win Chance report introduced in Operation Broken Fang. The update was met with a mixed response from players, with many pointing to free third-party websites that provided similar stats.
In September 2021, Operation Riptide was released, adding gameplay and matchmaking changes, new maps, and new cosmetic items.
In January 2022, an update adding Flick Stick support for gyroscopic game controllers was released. Flick Stick is a control scheme which lets the player quickly "flick" their view using the right analog stick, while delegating all fine aiming to gyro movements.
Gambling, third-party betting and money laundering
Following the introduction of the Arms Deal update in August 2013, skins formed a virtual economy due to their rarity and other high-value factors that influenced their desirability. Due to this, the creation of a number of skin trading sites enabled by the Steamworks API were created. Some of these sites began to offer gambling functionality, allowing users to bet on the outcome of professional matches with skins. In June and July 2016, two formal lawsuits were filed against these gambling sites and Valve, stating that these encourage underage gambling and undisclosed promotion by some streamers. Valve in turn began to take steps to prevent these sites from using Steamworks for gambling purposes, and several of these sites ceased operating as a result. In July 2018, Valve disabled the opening of containers in Belgium and the Netherlands after their loot boxes appeared to violate Dutch and Belgian gambling laws.
In 2019, Valve made changes to Global Offensive's loot box mechanics due to a realization that "nearly all" of the trading on loot box keys was done by criminal organizations as a method of money laundering. Valve released a statement, saying, “In the past, most key trades we observed were between legitimate customers. However, worldwide fraud networks have recently shifted to using CS:GO keys to liquidate their gains. At this point, nearly all key purchases that end up being traded or sold on the marketplace are believed to be fraud-sourced. As a result we have decided that newly purchased keys will not be tradeable or marketable.”
Professional competition
Global Offensive has one of the most popular esports scenes in the world. The Global Offensive professional scene consists of leagues and tournaments hosted by third-party organisations, and Valve-sponsored tournaments known as Major Championships. Majors are considered the most prestigious tournaments in the Counter-Strike circuit and have among the largest prize pools; originally announced at , the prize pools for Majors have risen to since MLG Columbus 2016. Astralis is the most successful Global Offensive team of all time, with the core members of that team winning four Majors together.
In 2014, the "first large match fixing scandal" in the Global Offensive community took place, where team iBuyPower purposefully lost a match against NetCodeGuides.com. The seven professional players that were involved in the scandal were permanently banned from all Majors by Valve, although some other organizers eventually allowed the players to compete at their tournaments.
Esports organizations Cloud9 and Dignitas, among others, announced plans in February 2020 to launch Flashpoint, a franchise-based league for Counter-Strike, countering concerns over the state of the current promotion/relegation leagues. The league was to be owned by the teams rather than a single organization, similar to the Overwatch League.
Media coverage
As the game and the scene grew in popularity, companies, including WME/IMG and Turner Broadcasting, began to televise Global Offensive professional games, with the first being ELEAGUE Major 2017, held in the Fox Theatre and broadcast on US cable television network TBS in 2016. On August 22, 2018, Turner announced their further programming of Global Offensive with ELEAGUE’s Esports 101: CSGO and ELEAGUE CS:GO Premier 2018's docu-series on the TBS network.
Reception
Counter-Strike: Global Offensive received generally positive reception from critics, according to review aggregator Metacritic. Since the game's release, Global Offensive has remained one of the most played and highest-grossing games on Steam. The game won the fan's choice "eSports Game of the Year" award at The Game Awards 2015.
Reviewers praised Global Offensive's faithfulness to the previous game, Counter-Strike: Source, with Allistair Pinsof of Destructoid rating the game very highly and saying that Global Offensive is a "polished and better looking" version of the game. GameSpot writer Eric Neigher said in their review that this game stays true to its predecessors by adding much content, but tweaking small amounts and retaining their best features. The reviewers at gamesTM wrote in their review that the game stood "as a glowing reminder that quality game design is rewarded in longevity and variety." They also went on to congratulate Valve, writing that they had not only updated the popular game, but "had completely outclassed its contemporaries." Martin Gaston of VideoGamer.com wrote that although he was too old to truly enjoy the game, he believed that it was a "fine installment of one of the best games ever made," and that some people will experience "what will become the definitive moments of their gaming lives." Xav de Matos for Engadget wrote that for the price, "Global Offensive is a great extension to that legacy." Mitch Dyer from IGN said that "Global Offensive is definitely a Counter-Strike sequel – it looks and feels familiar, with minor tweaks here and there to help balance old issues and surprise longtime players."
Some of the features in the early releases of the game were criticized by reviewers. GameSpy's Mike Sharkey did not believe that the new content added was good or that there was much of it, and said that the Elo rating system seemed ineffective with many players of various skill levels all playing at once throughout the early days of release. Evan Lahti from PC Gamer noted that the majority of new official maps in Global Offensive were only for Arms Race or Demolition game modes, while Classic maps were only given "smart adjustments" to minor details. Pinsof thought that the game, in its release state, would not be its final version. Paul Goodman said that for long-time fans of the series, Global Offensive will start to show the game's age, expressing that he "couldn't help but feel that I had been there and done that a dozen times before."
Although reviewers liked the console versions of the game, they believed there were obvious differences between the PC and console versions. Neigher believed that due to playing with thumbsticks and shoulder buttons "you definitely won't be getting the ultimate CS:GO experience." Ron Vorstermans for Gamer.nl said that the PC version is there to play at a higher competitive level, though he went on to say that the console versions are not inferior because of the PC's superiority for competition. Dyer wrote that the PlayStation 3 version was at an advantage to the Xbox version because of the ability to connect a keyboard and mouse to the system. He continued on to say that the user-interface on both of the consoles was as good as the PC one. Mark Langshaw of Digital Spy opined that although the game has support for the PlayStation Move, using it only makes the "already unforgiving game all the more challenging."
The game was nominated for "Best Spectator Game" in IGNs Best of 2017 Awards, for "eSports Game of the Year" at the 2017, 2018, and 2019 Golden Joystick Awards, for "Best eSports Game" at The Game Awards 2017, The Game Awards 2019 and The Game Awards 2020, and for "Game, eSports" at the 17th Annual National Academy of Video Game Trade Reviewers Awards. In 2018, the game was nominated for "Fan Favorite eSports Game" and "Fan Favorite eSports League Format" with the Majors at the Gamers' Choice Awards, and for "eSports Title of the Year" at the Australian Games Awards.
References
External links
Global Offensive at ValveSoftware.com
2012 video games
Asymmetrical multiplayer video games
Battle royale games
Counter-Strike
Esports games
Free-to-play video games
Linux games
MacOS games
Multiplayer online games
Multiplayer video games
PlayStation 3 games
PlayStation Network games
Source (game engine) games
Tactical shooter video games
Valve Corporation games
Video game sequels
Video games about bomb disposal
Video games about police officers
Video games about the Special Air Service
Video games about terrorism
Video games about the United States Navy SEALs
Video games containing loot boxes
Video games developed in the United States
Video games scored by Kelly Bailey
Video games scored by Lennie Moore
Video games scored by Mike Morasky
Video games set in Germany
Video games set in Italy
Video games set in the Middle East
Video games set in the United States
Video games using Havok
Video games with downloadable content
Video games with Steam Workshop support
Windows games
Xbox 360 Live Arcade games
Infosys
Infosys Limited is an Indian multinational information technology company that provides business consulting, information technology and outsourcing services. The company was founded in Pune and is headquartered in Bangalore. Infosys is the second-largest Indian IT company after Tata Consultancy Services by 2020 revenue figures and the 602nd largest public company in the world according to Forbes Global 2000 ranking. The credit rating of the company is CRISIL AAA / Stable / CRISIL A1+ (rating by CRISIL).
On 24 August 2021, Infosys became the fourth Indian company to cross $100 billion in market capitalization.
History
Infosys was founded by seven engineers in Pune, Maharashtra, India with an initial capital of $250 in 1981. It was registered as Infosys Consultants Private Limited on 2 July 1981. In 1983, it relocated its office to Bangalore, Karnataka, India.
The company changed its name to Infosys Technologies Private Limited in April 1992 and to Infosys Technologies Limited when it became a public limited company in June 1992. It was later renamed to Infosys Limited in June 2011.
An initial public offering (IPO) was floated in February 1993 with an offer price of per share against a book value of per share. The IPO was undersubscribed but it was "bailed out" by US investment bank Morgan Stanley, which picked up a 13% equity stake at the offer price. Its shares were listed in June 1993 with trading opening at per share.
Infosys shares were listed on the Nasdaq stock exchange in 1999 as American depositary receipts. It became the first Indian company to be listed on Nasdaq. The share price surged to by 1999 making it the costliest share on the market at the time. At that time, Infosys was among the 20 biggest companies by market capitalization on the Nasdaq. The ADR listing was shifted from Nasdaq to NYSE Euronext to give European investors better access to the company's shares.
On 28 July 2010, then British Prime Minister David Cameron visited Infosys HQ in Bangalore and addressed Infosys employees.
Its annual revenue reached US$100 million in 1999, US$1 billion in 2004 and US$10 billion in 2017.
In 2012, Infosys announced a new office in Milwaukee, Wisconsin, to serve Harley-Davidson, being the 18th international office in the United States. Infosys hired 1,200 United States employees in 2011, and expanded the workforce by an additional 2,000 employees in 2012.
In April 2018, Infosys announced expanding in Indianapolis, Indiana. The development will include more than 120 acres and is expected to result in 3,000 new jobs—1,000 more than previously announced.
In July 2014, Infosys started a product subsidiary called EdgeVerve Systems, focusing on enterprise software products for business operations, customer service, procurement and commerce network domains. In August 2015, the Finacle Global Banking Solutions assets were officially transferred from Infosys to become part of EdgeVerve Systems' product portfolio.
Products and services
Infosys provides software development, maintenance and independent validation services to companies in finance, insurance, manufacturing and other domains.
One of its known products is Finacle which is a universal banking solution with various modules for retail and corporate banking.
Its key products and services are:
NIA – Next Generation Integrated AI Platform (formerly known as Mana)
Infosys Consulting – a global management consulting service
Cloud-based enterprise transformation services
Infosys Information Platform (IIP) – Analytics platform
EdgeVerve Systems which includes Finacle, a global banking platform
Panaya Cloud Suite
Skava – now rebranded as Infosys Equinox
Engineering Services
Digital Marketing
Geographical presence
Infosys has 82 sales and marketing offices and 123 development centres across the world as of 31 March 2018, with major presence in India, United States, China, Australia, Japan, Middle East and Europe.
In 2019, 60%, 24%, and 3% of its revenues were derived from projects in North America, Europe, and India, respectively. The remaining 13% of revenues were derived from the rest of the world.
Acquisitions
Listing and shareholding pattern
In India, shares of Infosys are listed on the BSE where it is a part of the BSE SENSEX and the NSE where it is a NIFTY 50 Constituent. Its shares are listed by way of American depositary receipts (ADRs) at the New York Stock Exchange.
Over a period of time, the shareholding of its promoters has gradually reduced, starting from June 1993 when its shares were first listed. The promoters' holdings reduced further when Infosys became the first Indian-registered company to list Employees Stock Options Schemes and ADRs on NASDAQ on 11 March 1999. As of 29 July 2021, the promoter holding was 12.95%, foreign institutional investors (FIIs) hold 33.39%, and domestic institutional investors (DIIs) hold 21.98%.
Employees
Infosys had a total of 259,619 employees (generally known as "Infoscions") as of 2021, of which 38.6% were women. Of its total workforce, 229,658 are software professionals and the remaining 13,796 work in support and sales. In 2016, 89% of its employees were based in India.
During the financial year 2019, Infosys received 2,333,420 applications from prospective employees, interviewed 180,225 candidates and had a gross addition of 94,324 employees, a 4% hiring rate. These numbers do not include its subsidiaries.
In its Q3 FY22 results, announced in January 2022, Infosys reported that attrition had risen to 25.5%, from 20.1% in the September quarter. It announced a profit of Rs 5,809 crore for the third quarter and said it is planning to hire 55,000 freshers for FY22 as part of its global graduate hiring program.
Training Centre in Mysore
Described as the world's largest corporate university, the Infosys global education centre on the 337-acre Mysuru campus has about 400 instructors and more than 200 classrooms. Established in 2002, it had trained around 125,000 engineering graduates by June 2015. It can train 14,000 employees at a time on various technologies.
The Infosys Leadership Institute (ILI), based in Mysuru, has 196 rooms and trains about 4,000 trainees annually. Its purpose is to prepare and develop the senior leaders in Infosys for current and future executive leadership roles.
The Infosys training centre in Mysuru also provides a number of extracurricular facilities such as tennis, badminton, basketball, a swimming pool, a gym and a bowling alley, which charges INR 50 per round. It also has an international-level cricket ground approved by the BCCI.
CEOs
From its establishment in 1981 until 2014, the CEOs of Infosys were its promoters, with N.R. Narayana Murthy leading the company in its first 21 years. Vishal Sikka was the first non-promoter CEO of Infosys, serving for around three years before resigning in August 2017. After his resignation, U.B. Pravin Rao was appointed interim CEO and MD of Infosys. Infosys appointed Salil Parekh as chief executive officer (CEO) and managing director (MD) of the company with effect from 2 January 2018.
List of Infosys CEOs
Awards and recognition
In 2021, Infosys was positioned as a leader in the Forrester Wave Application Modernization & Migration Services.
In 2021, Infosys was positioned as a Leader in Gartner Magic Quadrant for Data and Analytics Services.
In 2020, Infosys was ranked No. 1 in the HFS Top 10 Agile Software Development 2020 report.
In 2020, Infosys was recognized as a leader in Retail and CPG Digital Services by Avasant.
In 2019, Infosys was a winner of the United Nations Global Climate Action Award in 'Climate Neutral Now' category.
In 2019, Infosys was ranked as the 3rd Best Regarded Company in the World by Forbes.
In 2017, HfS Research included Infosys in Winner's Circle of HfS Blueprint for Managed Security Services, Industry 4.0 services and Utility Operations.
In 2013, Infosys was ranked 18th largest IT services provider in the world by HfS Research. In the same year, it was ranked 53rd in Forbes list of World's Most Innovative Companies.
In 2012, Infosys was ranked amongst the world's most innovative companies by Forbes. In the same year, Infosys was in the list of top twenty green companies in Newsweek Green Rankings for 2012.
In 2006, Institute of Chartered Accountants of India included Infosys into Hall of Fame for being the winner of Best Presented Accounts for 11 consecutive years.
Financial
Controversies
Settlement of tax fraud in the US
In December 2019, the Attorney General of California, Xavier Becerra, announced an $800,000 settlement against Infosys and its BPM (business process management) subsidiary. Close to 500 Infosys employees were working in the state on Infosys-sponsored B-1 visas instead of H-1B visas between 2006 and 2017, according to an official statement published on the State of California's website.
This misclassification resulted in Infosys avoiding California payroll taxes such as unemployment insurance, disability insurance, and employment training taxes.
Accusation of visa fraud in the US
In 2011, Infosys was accused of committing visa fraud by using B-1 (visitor) visas for work requiring H-1B (work) visas. The allegations were initially made by an American employee of Infosys in an internal complaint. He subsequently sued the company, claiming that he was harassed and sidelined after speaking out. Although that case was dismissed, it along with another similar case, brought the allegations to the notice of the US authorities – and the U.S. Department of Homeland Security and a federal grand jury started investigating.
In October 2013, Infosys agreed to settle the civil suit with US authorities by paying US$34 million. Infosys refused to admit guilt and stressed that it only agreed to pay the fine to avoid the nuisance of "prolonged litigation". In its statement, the company said "As reflected in the settlement, Infosys denies and disputes any claims of systemic visa fraud, misuse of visas for competitive advantage, or immigration abuse. Those claims are assertions that remain unproven".
Displacement of American workers at Southern California Edison and Disney
In 2015, the United States Department of Labor began an investigation of Infosys after claims were made that the company used workers with H-1B visas to replace workers at Disney and Southern California Edison. The investigation did not find any wrongdoing.
Allegations of financial irregularities
In 2019, whistleblowers alleged irregularities in the company's financial accounting. Internal investigations conducted by the company concluded that the allegations were without merit. External auditors said that Infosys' approach to revenue recognition was in line with IAS 34 regulations.
See also
List of IT consulting firms
List of IT companies in India
List of acquisitions by Infosys BPO Limited
References
External links
Information technology consulting firms of India
International information technology consulting firms
Software companies of India
Outsourcing companies
Multinational companies headquartered in India
Technology companies established in 1981
Indian companies established in 1981
Information technology companies of Bhubaneswar
Outsourcing in India
BSE SENSEX
NIFTY 50
Companies listed on the New York Stock Exchange
Indian brands
Companies based in Bangalore
1981 establishments in Karnataka
Companies listed on the National Stock Exchange of India
Companies listed on the Bombay Stock Exchange
Euler's totient function
In number theory, Euler's totient function counts the positive integers up to a given integer n that are relatively prime to n. It is written using the Greek letter phi as φ(n) or ϕ(n), and may also be called Euler's phi function. In other words, it is the number of integers k in the range 1 ≤ k ≤ n for which the greatest common divisor gcd(n, k) is equal to 1. The integers k of this form are sometimes referred to as totatives of n.
For example, the totatives of n = 9 are the six numbers 1, 2, 4, 5, 7 and 8. They are all relatively prime to 9, but the other three numbers in this range, 3, 6, and 9 are not, since gcd(9, 3) = gcd(9, 6) = 3 and gcd(9, 9) = 9. Therefore, φ(9) = 6. As another example, φ(1) = 1 since for n = 1 the only integer in the range from 1 to n is 1 itself, and gcd(1, 1) = 1.
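This definition can be checked directly by counting totatives. The following Python snippet is a brute-force illustration (not an efficient method) that reproduces the two examples above.

```python
from math import gcd

def phi_bruteforce(n):
    """Count the totatives of n, i.e. the k with 1 <= k <= n and gcd(n, k) == 1."""
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

print(phi_bruteforce(9))  # 6, the totatives being 1, 2, 4, 5, 7, 8
print(phi_bruteforce(1))  # 1
```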
Euler's totient function is a multiplicative function, meaning that if two numbers m and n are relatively prime, then φ(mn) = φ(m) φ(n).
This function gives the order of the multiplicative group of integers modulo n (the group of units of the ring Z/nZ). It is also used for defining the RSA encryption system.
History, terminology, and notation
Leonhard Euler introduced the function in 1763. However, he did not at that time choose any specific symbol to denote it. In a 1784 publication, Euler studied the function further, choosing the Greek letter π to denote it: he wrote πD for "the multitude of numbers less than D, and which have no common divisor with it". This definition varies from the current definition for the totient function at D = 1 but is otherwise the same. The now-standard notation φ(A) comes from Gauss's 1801 treatise Disquisitiones Arithmeticae, although Gauss didn't use parentheses around the argument and wrote φA. Thus, it is often called Euler's phi function or simply the phi function.
In 1879, J. J. Sylvester coined the term totient for this function, so it is also referred to as Euler's totient function, the Euler totient, or Euler's totient. Jordan's totient is a generalization of Euler's.
The cototient of n is defined as n − φ(n). It counts the number of positive integers less than or equal to n that have at least one prime factor in common with n.
Computing Euler's totient function
There are several formulas for computing φ(n).
Euler's product formula
It states
φ(n) = n ∏_{p ∣ n} (1 − 1/p),
where the product is over the distinct prime numbers p dividing n. (For notation, see Arithmetical function.)
An equivalent formulation for n = p_1^{k_1} p_2^{k_2} ⋯ p_r^{k_r}, where p_1, p_2, ..., p_r are the distinct primes dividing n, is:
φ(n) = p_1^{k_1 − 1}(p_1 − 1) p_2^{k_2 − 1}(p_2 − 1) ⋯ p_r^{k_r − 1}(p_r − 1).
The proof of these formulas depends on two important facts.
Phi is a multiplicative function
This means that if gcd(m, n) = 1, then φ(m) φ(n) = φ(mn). Proof outline: Let A, B, C be the sets of positive integers which are coprime to and less than m, n, mn, respectively, so that |A| = φ(m), etc. Then there is a bijection between A × B and C by the Chinese remainder theorem.
Value of phi for a prime power argument
If p is prime and k ≥ 1, then
φ(p^k) = p^k − p^{k−1} = p^{k−1}(p − 1).
Proof: Since p is a prime number, the only possible values of gcd(p^k, m) are 1, p, p^2, ..., p^k, and the only way to have gcd(p^k, m) > 1 is if m is a multiple of p, i.e. m ∈ {p, 2p, 3p, ..., p^{k−1} p = p^k}, and there are p^{k−1} such multiples not greater than p^k. Therefore, the other p^k − p^{k−1} numbers are all relatively prime to p^k.
Proof of Euler's product formula
The fundamental theorem of arithmetic states that if n > 1 there is a unique expression n = p_1^{k_1} p_2^{k_2} ⋯ p_r^{k_r}, where p_1 < p_2 < ⋯ < p_r are prime numbers and each k_i ≥ 1. (The case n = 1 corresponds to the empty product.) Repeatedly using the multiplicative property of φ and the formula for φ(p^k) gives
φ(n) = φ(p_1^{k_1}) φ(p_2^{k_2}) ⋯ φ(p_r^{k_r}) = p_1^{k_1 − 1}(p_1 − 1) p_2^{k_2 − 1}(p_2 − 1) ⋯ p_r^{k_r − 1}(p_r − 1) = n (1 − 1/p_1)(1 − 1/p_2) ⋯ (1 − 1/p_r).
This gives both versions of Euler's product formula.
An alternative proof that does not require the multiplicative property instead uses the inclusion-exclusion principle applied to the set {1, 2, ..., n}, excluding the sets of integers divisible by the prime divisors of n.
Example
φ(20) = φ(2^2 · 5) = 20 (1 − 1/2)(1 − 1/5) = 8.
In words: the distinct prime factors of 20 are 2 and 5; half of the twenty integers from 1 to 20 are divisible by 2, leaving ten; a fifth of those are divisible by 5, leaving eight numbers coprime to 20; these are: 1, 3, 7, 9, 11, 13, 17, 19.
The alternative formula uses only integers:
φ(20) = φ(2^2 · 5^1) = 2^{2−1}(2 − 1) · 5^{1−1}(5 − 1) = 2 · 1 · 1 · 4 = 8.
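As an illustration of the product formula, here is a short Python sketch that computes φ(n) with integer arithmetic by trial-division factorization. It is a simple reference implementation, not an optimized one.

```python
def phi(n):
    """Compute phi(n) via Euler's product formula, n * prod(1 - 1/p) over the
    distinct primes p dividing n, using only integer arithmetic."""
    result = n
    m = n
    p = 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p   # multiply result by (1 - 1/p)
        p += 1
    if m > 1:                        # a prime factor larger than sqrt(n) remains
        result -= result // m
    return result

print(phi(20))   # 8
print(phi(100))  # 40
```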
Fourier transform
The totient is the discrete Fourier transform of the gcd, evaluated at 1. Let
F{x}[m] = Σ_{k=1}^{n} x_k · e^{−2πi mk/n}
where x_k = gcd(k, n) for k ∈ {1, ..., n}. Then
φ(n) = F{x}[1] = Σ_{k=1}^{n} gcd(k, n) e^{−2πi k/n}.
The real part of this formula is
φ(n) = Σ_{k=1}^{n} gcd(k, n) cos(2πk/n).
For example, for n = 10 this sum is gcd(1, 10) cos(2π/10) + gcd(2, 10) cos(4π/10) + ⋯ + gcd(10, 10) cos(20π/10) = 4 = φ(10). Unlike the Euler product and the divisor sum formula, this one does not require knowing the factors of n. However, it does involve the calculation of the greatest common divisor of n and every positive integer less than n, which suffices to provide the factorization anyway.
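The gcd–cosine identity above can be verified numerically; the following Python sketch evaluates the real part of the discrete Fourier transform of gcd(·, n) at 1 and rounds to the nearest integer.

```python
from math import gcd, cos, pi

def phi_via_gcd_dft(n):
    """phi(n) as the real part of the DFT of gcd(., n) evaluated at 1:
    sum over k = 1..n of gcd(k, n) * cos(2*pi*k/n), rounded to an integer."""
    return round(sum(gcd(k, n) * cos(2 * pi * k / n) for k in range(1, n + 1)))

print([phi_via_gcd_dft(n) for n in range(1, 11)])  # [1, 1, 2, 2, 4, 2, 6, 4, 6, 4]
```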
Divisor sum
The property established by Gauss, that
Σ_{d ∣ n} φ(d) = n,
where the sum is over all positive divisors d of n, can be proven in several ways. (See Arithmetical function for notational conventions.)
One proof is to note that φ(d) is also equal to the number of possible generators of the cyclic group C_d; specifically, if C_d = ⟨g⟩ with g^d = 1, then g^k is a generator for every k coprime to d. Since every element of C_n generates a cyclic subgroup, and each subgroup C_d ⊆ C_n is generated by precisely φ(d) elements of C_n, the formula follows. Equivalently, the formula can be derived by the same argument applied to the multiplicative group of the nth roots of unity and the primitive dth roots of unity.
The formula can also be derived from elementary arithmetic. For example, let n = 20 and consider the positive fractions up to 1 with denominator 20:
1/20, 2/20, 3/20, 4/20, 5/20, 6/20, 7/20, 8/20, 9/20, 10/20, 11/20, 12/20, 13/20, 14/20, 15/20, 16/20, 17/20, 18/20, 19/20, 20/20.
Put them into lowest terms:
1/20, 1/10, 3/20, 1/5, 1/4, 3/10, 7/20, 2/5, 9/20, 1/2, 11/20, 3/5, 13/20, 7/10, 3/4, 4/5, 17/20, 9/10, 19/20, 1/1.
These twenty fractions are all the positive fractions k/d ≤ 1 whose denominators are the divisors d = 1, 2, 4, 5, 10, 20. The fractions with 20 as denominator are those with numerators relatively prime to 20, namely 1/20, 3/20, 7/20, 9/20, 11/20, 13/20, 17/20, 19/20; by definition this is φ(20) fractions. Similarly, there are φ(10) = 4 fractions with denominator 10, and φ(5) = 4 fractions with denominator 5, etc. Thus the set of twenty fractions is split into subsets of size φ(d) for each d dividing 20. A similar argument applies for any n.
Möbius inversion applied to the divisor sum formula gives
φ(n) = Σ_{d|n} μ(d) · (n/d) = n Σ_{d|n} μ(d)/d,
where μ is the Möbius function, the multiplicative function defined by μ(p) = −1 and μ(p^k) = 0 for each prime p and k ≥ 2. This formula may also be derived from the product formula by multiplying out ∏_{p|n} (1 − 1/p) to get
Σ_{d|n} μ(d)/d.
An example:
φ(20) = μ(1)·20 + μ(2)·10 + μ(4)·5 + μ(5)·4 + μ(10)·2 + μ(20)·1 = 20 − 10 + 0 − 4 + 2 + 0 = 8.
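Both the divisor sum and its Möbius inversion are easy to verify numerically; the sketch below (illustrative code added here, using a brute-force totient and a trial-division Möbius function) checks them for small n.

    from math import gcd

    def phi(n):
        # Brute-force totient: count 1 <= k <= n with gcd(k, n) = 1.
        return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

    def mu(n):
        # Möbius function: 0 if n has a squared prime factor, else (-1)^(number of prime factors).
        if n == 1:
            return 1
        result, m, p = 1, n, 2
        while p * p <= m:
            if m % p == 0:
                m //= p
                if m % p == 0:
                    return 0
                result = -result
            p += 1
        return -result if m > 1 else result

    for n in range(1, 60):
        divisors = [d for d in range(1, n + 1) if n % d == 0]
        assert sum(phi(d) for d in divisors) == n                   # Gauss's divisor sum
        assert sum(mu(d) * (n // d) for d in divisors) == phi(n)    # Möbius inversion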
Some values
The first 100 values are shown in the table and graph below:
{| class="wikitable" style="text-align: right"
|+ for
! +
! 1 || 2 || 3 || 4 || 5 || 6 || 7 || 8 || 9 || 10
|-
! 0
| 1 || 1 || 2 || 2 || 4 || 2 || 6 || 4 || 6 || 4
|-
! 10
| 10 || 4 || 12 || 6 || 8 || 8 || 16 || 6 || 18 || 8
|-
! 20
| 12 || 10 || 22 || 8 || 20 || 12 || 18 || 12 || 28 || 8
|-
! 30
| 30 || 16 || 20 || 16 || 24 || 12 || 36 || 18 || 24 || 16
|-
! 40
| 40 || 12 || 42 || 20 || 24 || 22 || 46 || 16 || 42 || 20
|-
! 50
| 32 || 24 || 52 || 18 || 40 || 24 || 36 || 28 || 58 || 16
|-
! 60
| 60 || 30 || 36 || 32 || 48 || 20 || 66 || 32 || 44 || 24
|-
! 70
| 70 || 24 || 72 || 36 || 40 || 36 || 60 || 24 || 78 || 32
|-
! 80
| 54 || 40 || 82 || 24 || 64 || 42 || 56 || 40 || 88 || 24
|-
! 90
| 72 || 44 || 60 || 46 || 72 || 32 || 96 || 42 || 60 || 40
|}
In the graph at right the top line y = n − 1 is an upper bound valid for all n other than one, and attained if and only if n is a prime number. A simple lower bound is φ(n) ≥ √(n/2), which is rather loose: in fact, the lower limit of the graph is proportional to n/log log n.
Euler's theorem
This states that if a and n are relatively prime then
a^φ(n) ≡ 1 (mod n).
The special case where n is prime is known as Fermat's little theorem.
This follows from Lagrange's theorem and the fact that φ(n) is the order of the multiplicative group of integers modulo n.
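A direct numerical illustration of the theorem (a sketch added here, not part of the article):

    from math import gcd

    n = 20
    phi_n = sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)   # phi(20) = 8
    for a in range(1, n):
        if gcd(a, n) == 1:
            assert pow(a, phi_n, n) == 1                         # a^phi(n) = 1 (mod n)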
The RSA cryptosystem is based on this theorem: it implies that the inverse of the function a ↦ a^e mod n, where e is the (public) encryption exponent, is the function b ↦ b^d mod n, where d, the (private) decryption exponent, is the multiplicative inverse of e modulo φ(n). The difficulty of computing φ(n) without knowing the factorization of n is thus the difficulty of computing d: this is known as the RSA problem, which can be solved by factoring n. The owner of the private key knows the factorization, since an RSA private key is constructed by choosing n as the product of two (randomly chosen) large primes p and q. Only n is publicly disclosed, and given the difficulty of factoring large numbers we have the guarantee that no one else knows the factorization.
Other formulae
φ(mn) = φ(m) φ(n) · d/φ(d), where d = gcd(m, n). Note the special cases
φ(2m) = 2φ(m) if m is even, and φ(2m) = φ(m) if m is odd;
φ(n^m) = n^(m−1) φ(n).
φ(lcm(m, n)) · φ(gcd(m, n)) = φ(m) · φ(n).
Compare this to the formula
lcm(m, n) · gcd(m, n) = m · n.
(See least common multiple.)
φ(n) is even for n ≥ 3. Moreover, if n has r distinct odd prime factors, 2^r divides φ(n).
For any and such that there exists an such that .
φ(n)/n = φ(rad(n))/rad(n), where rad(n) is the radical of n (the product of all distinct primes dividing n).
(where γ is the Euler–Mascheroni constant).
where m is a positive integer and ω(m) is the number of distinct prime factors of m.
Menon's identity
In 1965 P. Kesava Menon proved
Σ_{1≤k≤n, gcd(k,n)=1} gcd(k − 1, n) = φ(n) d(n),
where d(n) is the number of divisors of n.
Formulae involving the golden ratio
Schneider found a pair of identities connecting the totient function, the golden ratio and the Möbius function . In this section is the totient function, and is the golden ratio.
They are:
and
Subtracting them gives
Applying the exponential function to both sides of the preceding identity yields an infinite product formula for :
The proof is based on the two formulae
Generating functions
The Dirichlet series for φ(n) may be written in terms of the Riemann zeta function as:
Σ_{n=1}^{∞} φ(n)/n^s = ζ(s − 1)/ζ(s).
The Lambert series generating function is
Σ_{n=1}^{∞} φ(n) q^n/(1 − q^n) = q/(1 − q)²,
which converges for |q| < 1.
Both of these are proved by elementary series manipulations and the formulae for φ(n).
Growth rate
In the words of Hardy & Wright, the order of φ(n) is "always 'nearly n'."
First
lim sup_{n→∞} φ(n)/n = 1,
but as n goes to infinity, for all δ > 0,
φ(n)/n^(1−δ) → ∞.
These two formulae can be proved by using little more than the formulae for φ(n) and the divisor sum function σ(n).
In fact, during the proof of the second formula, the inequality
6/π² < φ(n) σ(n)/n² < 1,
true for n > 1, is proved.
We also have
lim inf_{n→∞} (φ(n)/n) · log log n = e^(−γ).
Here γ is Euler's constant, γ = 0.577215665…, so e^γ = 1.7810724… and e^(−γ) = 0.56145948….
Proving this does not quite require the prime number theorem. Since log log n goes to infinity, this formula shows that
lim inf_{n→∞} φ(n)/n = 0.
In fact, more is true.
φ(n) > n/(e^γ log log n + 3/(log log n))   for n > 2,
and
φ(n) < n/(e^γ log log n)   for infinitely many n.
The second inequality was shown by Jean-Louis Nicolas. Ribenboim says "The method of proof is interesting, in that the inequality is shown first under the assumption that the Riemann hypothesis is true, secondly under the contrary assumption."
For the average order, we have
φ(1) + φ(2) + ⋯ + φ(n) = (3n²)/π² + O(n (log n)^(2/3) (log log n)^(4/3)),
due to Arnold Walfisz, its proof exploiting estimates on exponential sums due to I. M. Vinogradov and N. M. Korobov (this is currently the best known estimate of this type). The "Big O" stands for a quantity that is bounded by a constant times the function of n inside the parentheses (which is small compared to n²).
This result can be used to prove that the probability of two randomly chosen numbers being relatively prime is 6/π².
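This density is easy to approximate by direct counting over a finite range; a small illustrative check (added here, exact only in the limit):

    from math import gcd, pi

    N = 500
    coprime = sum(1 for a in range(1, N + 1) for b in range(1, N + 1) if gcd(a, b) == 1)
    print(coprime / N**2)   # close to 0.608
    print(6 / pi**2)        # 0.6079...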
Ratio of consecutive values
In 1950 Somayajulu proved
lim inf_{n→∞} φ(n + 1)/φ(n) = 0   and   lim sup_{n→∞} φ(n + 1)/φ(n) = ∞.
In 1954 Schinzel and Sierpiński strengthened this, proving that the set
{φ(n + 1)/φ(n) : n = 1, 2, …}
is dense in the positive real numbers. They also proved that the set
{φ(n)/n : n = 1, 2, …}
is dense in the interval (0, 1).
Totient numbers
A totient number is a value of Euler's totient function: that is, an m for which there is at least one n for which φ(n) = m. The valency or multiplicity of a totient number m is the number of solutions to this equation. A nontotient is a natural number which is not a totient number. Every odd integer exceeding 1 is trivially a nontotient. There are also infinitely many even nontotients, and indeed every positive integer has a multiple which is an even nontotient.
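A small sieve makes these definitions concrete; the sketch below (illustrative code added here, with an arbitrary search limit) lists the nontotients up to 100, using the bound φ(n) ≥ √(n/2) to know how far to search for preimages.

    LIMIT = 100
    N = 2 * LIMIT * LIMIT                 # phi(n) >= sqrt(n/2), so phi(n) <= LIMIT forces n <= N
    phi = list(range(N + 1))
    for p in range(2, N + 1):
        if phi[p] == p:                   # p is prime
            for m in range(p, N + 1, p):
                phi[m] -= phi[m] // p     # multiply by (1 - 1/p)

    multiplicity = {}
    for n in range(1, N + 1):
        if phi[n] <= LIMIT:
            multiplicity[phi[n]] = multiplicity.get(phi[n], 0) + 1

    nontotients = [m for m in range(1, LIMIT + 1) if m not in multiplicity]
    print(nontotients)                    # every odd m > 1, plus even nontotients such as 14, 26, 34, ...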
The number of totient numbers up to a given limit x is
(x/log x) · e^((C + o(1)) (log log log x)²)
for a constant C = 0.8178….
If counted according to multiplicity, the number of totient numbers up to a given limit x is
|{n : φ(n) ≤ x}| = (ζ(2)ζ(3)/ζ(6)) · x + R(x),
where the error term R is of order at most x/(log x)^k for any positive k.
It is known that the multiplicity of m exceeds m^δ infinitely often for any δ < 0.55655.
Ford's theorem
Kevin Ford proved that for every integer k ≥ 2 there is a totient number m of multiplicity k: that is, for which the equation φ(n) = m has exactly k solutions; this result had previously been conjectured by Wacław Sierpiński, and it had been obtained as a consequence of Schinzel's hypothesis H. Indeed, each multiplicity that occurs, does so infinitely often.
However, no number m is known with multiplicity k = 1. Carmichael's totient function conjecture is the statement that there is no such m.
Perfect totient numbers
Applications
Cyclotomy
In the last section of the Disquisitiones Gauss proves that a regular n-gon can be constructed with straightedge and compass if φ(n) is a power of 2. If n is a power of an odd prime number p, the formula for the totient says its totient can be a power of two only if n is a first power and p − 1 is a power of 2. The primes that are one more than a power of 2 are called Fermat primes, and only five are known: 3, 5, 17, 257, and 65537. Fermat and Gauss knew of these. Nobody has been able to prove whether there are any more.
Thus, a regular n-gon has a straightedge-and-compass construction if n is a product of distinct Fermat primes and any power of 2. The first few such n are
2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40,... .
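The list can be regenerated from the criterion that φ(n) must be a power of 2; a short illustrative sketch (added here, not part of the article):

    LIMIT = 40
    phi = list(range(LIMIT + 1))
    for p in range(2, LIMIT + 1):
        if phi[p] == p:                               # p is prime
            for m in range(p, LIMIT + 1, p):
                phi[m] -= phi[m] // p

    constructible = [n for n in range(2, LIMIT + 1) if phi[n] & (phi[n] - 1) == 0]
    print(constructible)   # [2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40]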
The RSA cryptosystem
Setting up an RSA system involves choosing large prime numbers p and q, computing n = pq and k = φ(n), and finding two numbers e and d such that ed ≡ 1 (mod k). The numbers n and e (the "encryption key") are released to the public, and d (the "decryption key") is kept private.
A message, represented by an integer m, where 0 < m < n, is encrypted by computing S = m^e (mod n).
It is decrypted by computing t = S^d (mod n). Euler's Theorem can be used to show that if 0 < t < n, then t = m.
The security of an RSA system would be compromised if the number n could be factored or if φ(n) could be computed without factoring n.
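A toy round trip makes the roles of n, e, d and φ(n) concrete; the primes below are tiny textbook values chosen only for illustration, while real keys use primes hundreds of digits long.

    from math import gcd

    p, q = 61, 53
    n = p * q                       # 3233
    k = (p - 1) * (q - 1)           # phi(n) = 3120
    e = 17                          # public exponent, must be coprime to phi(n)
    assert gcd(e, k) == 1
    d = pow(e, -1, k)               # private exponent: inverse of e modulo phi(n)

    m = 1234                        # message, 0 < m < n
    cipher = pow(m, e, n)           # encryption
    assert pow(cipher, d, n) == m   # decryption recovers the message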
Unsolved problems
Lehmer's conjecture
If p is prime, then φ(p) = p − 1. In 1932 D. H. Lehmer asked if there are any composite numbers n such that φ(n) divides n − 1. None are known.
In 1933 he proved that if any such n exists, it must be odd, square-free, and divisible by at least seven primes (i.e. ω(n) ≥ 7). In 1980 Cohen and Hagis proved that n > 10^20 and that ω(n) ≥ 14. Further, Hagis showed that if 3 divides n then n > 10^1937042 and ω(n) ≥ 298848.
Carmichael's conjecture
This states that there is no number n with the property that for all other numbers m, m ≠ n, φ(m) ≠ φ(n). See Ford's theorem above.
As stated in the main article, if there is a single counterexample to this conjecture, there must be infinitely many counterexamples, and the smallest one has at least ten billion digits in base 10.
See also
Carmichael function
Duffin–Schaeffer conjecture
Generalizations of Fermat's little theorem
Highly composite number
Multiplicative group of integers modulo
Ramanujan sum
Totient summatory function
Dedekind psi function
Notes
References
The Disquisitiones Arithmeticae has been translated from Latin into English and German. The German edition includes all of Gauss' papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes.
References to the Disquisitiones are of the form Gauss, DA, art. nnn.
. See paragraph 24.3.2.
Dickson, Leonard Eugene, "History Of The Theory Of Numbers", vol 1, chapter 5 "Euler's Function, Generalizations; Farey Series", Chelsea Publishing 1952
External links
Euler's Phi Function and the Chinese Remainder Theorem — proof that is multiplicative
Euler's totient function calculator in JavaScript — up to 20 digits
Dineva, Rosica, The Euler Totient, the Möbius, and the Divisor Functions
Plytage, Loomis, Polhill Summing Up The Euler Phi Function
Modular arithmetic
Multiplicative functions
Articles containing proofs
Algebra
Number theory
Leonhard Euler
|
432749
|
https://en.wikipedia.org/wiki/ENEA%20AB
|
ENEA AB
|
Enea AB is a global information technology company with its headquarters in Kista, Sweden that provides real-time operating systems and consulting services. Enea, which is an abbreviation of Engmans Elektronik Aktiebolag, also produces the OSE operating system.
History
Enea was founded 1968 by Rune Engman as Engmans Elektronik AB. Their first product was an operating system for a defence computer used by the Swedish Air Force. During the 1970s the firm developed compiler technology for the Simula programming language.
During the early days of the European Internet-like connections, Enea employee Björn Eriksen connected Sweden to EUnet using UUCP, and registered enea as the first Swedish domain in April 1983. The domain was later converted to the internet domain enea.se when the network was switched over to TCP and the Swedish top domain .se was created in 1986.
Products
OSE
The Enea family of real-time operating systems was first released in 2009.
The Enea Operating System Embedded (OSE) is a family of real-time, microkernel, embedded operating systems created by Bengt Eliasson for ENEA AB, which at the time was collaborating with Ericsson to develop a multi-core system using Assembly, C, and C++. Enea OSE Multicore Edition is based on the same microkernel architecture, with a kernel design that combines the advantages of both traditional asymmetric multiprocessing (AMP) and symmetric multiprocessing (SMP), offering both in a hybrid architecture. OSE supports many processors, mainly 32-bit, including ColdFire, ARM, PowerPC, and MIPS based system on a chip (SoC) devices.
The Enea OSE family features three OSs: OSE (also named OSE Delta) for processors by ARM, PowerPC, and MIPS; OSEck for various DSPs; and OSE Epsilon for minimal devices, written in pure assembly (ARM, ColdFire, C166, M16C, 8051). OSE is closed-source, proprietarily licensed software, released on 20 March 2018. OSE uses events (or signals) in the form of messages passed to and from processes in the system. Messages are stored in a queue attached to each process. A link handler mechanism allows signals to be passed between processes on separate machines, over a variety of transports. The OSE signalling mechanism formed the basis of an open-source inter-process kernel design project named LINX.
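The following is a minimal conceptual sketch (in Python, not the actual OSE API) of that signal-and-queue idea: every process owns an inbox queue and communicates only by posting signals addressed to another process's queue.

    import queue

    class Process:
        def __init__(self, name):
            self.name = name
            self.inbox = queue.Queue()          # message queue attached to the process

        def send(self, target, signal, payload=None):
            target.inbox.put((signal, self.name, payload))

        def receive(self):
            return self.inbox.get()             # blocks until a signal arrives

    ping, pong = Process("ping"), Process("pong")
    ping.send(pong, "HELLO", {"seq": 1})
    print(pong.receive())                       # ('HELLO', 'ping', {'seq': 1})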
Linux
Enea Linux provides an open, cross-development tool chain and runtime environment based on the Yocto Project embedded Linux configuration system.
Hypervisor
Enea Hypervisor is also based on OSE microkernel technology; it runs Enea OSE applications and takes as guests the Linux operating system and, optionally, semiconductor-specific executive environments for bare-metal-speed packet processing.
Optima
Enea Optima is a development tool suite for developing, debugging, and profiling embedded systems software.
The Element
The Element is middleware software for high-availability systems, based on technology developed by Equipe Communications Corp.
Collaborative project and community memberships
Enea is a member of various collaborative projects and open source communities:
Linux Foundation
Automotive Grade Linux
Linux OPNFV
Yocto Project
Linaro
Open Data Plane (ODP)
References
Information technology companies of Sweden
Companies based in Stockholm
Real-time operating systems
Embedded operating systems
ARM operating systems
Microkernel-based operating systems
|
50237338
|
https://en.wikipedia.org/wiki/Apache%20Fortress
|
Apache Fortress
|
Apache Fortress is an open source project of the Apache Software Foundation and a subproject of the Apache Directory. It is an authorization system, written in Java, that provides role-based access control, delegated administration and password policy using an LDAP backend.
Standards implemented:
Role-Based Access Control (RBAC) ANSI INCITS 359
Administrative Role-Based Access Control (ARBAC02)
IETF Password Policy (draft)
Unix Users and Groups (RFC2307)
Fortress has four separate components:
Core - A set of security authorization APIs.
Realm - A Web Container plug-in that provides security for the Apache Tomcat container.
Rest - HTTP protocol wrappers of core APIs using Apache CXF.
Web - HTML pages of core APIs using Apache Wicket.
History
Fortress was first contributed in 2011 to the OpenLDAP Foundation and moved to the Apache Directory project in 2014.
Releases
API
Fortress provides security functions via APIs corresponding to the standards implemented. For example, the RBAC API design mimics the functional specification of ANSI INCITS 359, with function names and entities being the same.
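As a rough illustration of that functional style (a Python toy model added here, not the Fortress Java API; the names create_session and check_access echo the CreateSession and CheckAccess operations of the ANSI RBAC functional specification):

    # Permissions are granted to roles; users act through roles activated in a session.
    role_permissions = {"admin": {("Customer", "delete")}, "clerk": {("Customer", "read")}}
    user_roles = {"alice": {"admin", "clerk"}, "bob": {"clerk"}}

    def create_session(user, requested_roles):
        return {"user": user, "roles": set(requested_roles) & user_roles.get(user, set())}

    def check_access(session, obj, operation):
        return any((obj, operation) in role_permissions.get(r, set()) for r in session["roles"])

    s = create_session("bob", {"clerk"})
    print(check_access(s, "Customer", "read"))     # True
    print(check_access(s, "Customer", "delete"))   # False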
References
External links
Apache Fortress Project Page
py-fortress on PyPI
Fortress
Directory services
Access control
|
15150859
|
https://en.wikipedia.org/wiki/OpenFabrics%20Alliance
|
OpenFabrics Alliance
|
The OpenFabrics Alliance is a non-profit organization that promotes remote direct memory access (RDMA) switched fabric technologies for server and storage connectivity. These high-speed data-transport technologies are used in high-performance computing facilities, in research and various industries.
The OpenFabrics Alliance aims to develop open-source software that supports the three major RDMA fabric technologies: InfiniBand, RDMA over Converged Ethernet (RoCE) and iWARP. The software includes two packages, one that runs on Linux and FreeBSD and one that runs on Microsoft Windows. The alliance worked with two large Linux distributors—SUSE and Red Hat—as well as Microsoft on compatibility with their operating systems.
History
Founded in June 2004 as the OpenIB Alliance, the organization originally developed an InfiniBand software stack for Linux. Initial funding for the Alliance was provided by the United States Department of Energy. The alliance released the first version of the OpenFabrics Enterprise Distribution (OFED) in 2005.
In 2005 the OpenIB Alliance announced support for Microsoft Windows. In 2006, the organization again expanded its charter to include support for iWARP, which is a transport technology that competes with InfiniBand. At that time the alliance changed its name to the OpenFabrics Alliance. Subsequent releases have added support for iWARP and Windows.
In 2011, OFED stack was ported to FreeBSD and included in FreeBSD 9.
OpenFabrics Enterprise Distribution
A community of developers from hardware manufacturers, software vendors, system integrators, government agencies and academia works on OFED. The OpenFabrics Alliance provides architectures, software repositories, interoperability tests, bug databases, workshops, and BSD- and GPL-licensed code to facilitate development.
The OFED stack includes software drivers, core kernel-code, middleware, and user-level interfaces. It offers a range of standard protocols, including IPoIB (IP over InfiniBand), SDP, SRP, iSER, RDS and DAPL (the Direct Access Programming Library). It also supports many other protocols, including various MPI implementations, and it supports many file systems, including Lustre and NFS over RDMA.
Interoperability testing
On June 25, 2007, the OpenFabrics Alliance announced the OFA-UNH-IOL Logo Program in partnership with the University of New Hampshire InterOperability Laboratory. The program enables manufacturers of InfiniBand and iWARP products to test and certify that their products support the OpenFabrics software stack, and test their compatibility with other products.
The alliance sponsors interoperability events at the University of New Hampshire. The test scenarios are available to the public, as are the test results for all products that earn the logo. During interoperability events, all participating companies have the opportunity to observe all tests run on all products.
Members
Corporate members of the OpenFabrics Alliance include Advanced Micro Devices, Appro, Broadcom, Cray Inc., Chelsio Communications, Cisco Systems, DataDirect Networks, Emulex, Flextronics, Hewlett Packard, Huawei, IBM, Intel, LSI Corporation, Mellanox Technologies, NetEffect, Neterion, NetApp, NetXen, Oracle Corporation, QLogic, RapidIO, Red Hat, Silicon Graphics, SuSE and System Fabric Works.
Research members include:
Lawrence Livermore National Laboratory
Los Alamos National Laboratory
Sandia National Laboratories
Consulting members include the Ethernet Alliance, the InfiniBand Trade Association, Lamprey Networks, Ohio State University, and the University of New Hampshire InterOperability Laboratory.
In 2007, Credit Suisse became the first financial-services firm to join the alliance.
References
External links
Official website
Computer memory
Computer network organizations
|
1236562
|
https://en.wikipedia.org/wiki/Vendetta%20Online
|
Vendetta Online
|
Vendetta Online is a twitch-based, science fiction massively multiplayer online role-playing game (MMORPG) developed by Guild Software for the operating systems Android, Linux, Mac OS X, iOS, and Microsoft Windows. It uses the NAOS game engine, a fully real-time flight model and combat system, to offer first-person/third-person shooter-style player versus player and player versus environment battle action against the backdrop of a massively multiplayer universe. Vendetta Online shipped as a commercial MMORPG on November 1, 2004 with a subscription-based business model, although it has been running continuously since April 2002. Vendetta Online is available to play across a wide array of platforms, including the Oculus Rift virtual reality display, allowing all users to directly interact in a single, contiguous galaxy. It is also notable for its twitch combat and fidelity to real physics.
Gameplay
The twitch gameplay in Vendetta Online revolves around lining up a correct shot against enemy ships, while avoiding incoming fire. The dynamics of this can be complex, as ships are moving through 3-D space, and the weapons fire itself has a specific velocity, modified by the ships' absolute velocity. The ships mostly obey the laws of Newtonian Mechanics although a few artificial limits are put in place to increase the playability of the game. The control scheme includes a full six degrees of freedom, allowing users to assign keys or control axes (via joystick, thumb-stick, throttle, accelerometer, or touch area) to yaw, pitch, roll, and thrust along three axes. Ships can also be controlled with the mouse and keyboard using "mouselook" mode, in which the direction of view is controlled by the mouse and the ship's nose auto-corrects to catch up. Ships each have detailed specifications such as mass, thrust, torque, cargo capacity, top speed, turbo energy drain, armor, and weapon ports. There are also many variants of the main ship types, some holding certain advantages over others, with some advanced variants belonging to special factions. Depending on what equipment is being used, and what is carried in the cargo hold, the mass of the ship (and consequently maneuverability) will change greatly, affecting combat.
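As a rough sketch of the underlying flight-model idea (illustrative Python added here, not the NAOS engine; the speed cap stands in for the "artificial limits" mentioned above):

    import math

    def step(pos, vel, thrust, mass, max_speed, dt):
        accel = [f / mass for f in thrust]                     # Newton: a = F / m
        vel = [v + a * dt for v, a in zip(vel, accel)]
        speed = math.sqrt(sum(v * v for v in vel))
        if speed > max_speed:                                  # playability limit on top of Newtonian motion
            vel = [v * max_speed / speed for v in vel]
        pos = [p + v * dt for p, v in zip(pos, vel)]
        return pos, vel

    pos, vel = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
    for _ in range(100):
        pos, vel = step(pos, vel, thrust=[10.0, 0.0, 0.0], mass=2.0, max_speed=65.0, dt=0.1)
    print(pos, vel)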
Weaponry
There are a wide variety of weapons in Vendetta Online, some requiring energy to fire, others relying on ammunition, proximity fuzes, target tracking, or some combination thereof. As most weapons have a cone of fire assisted by autoaim, effectively deploying weaponry relies less on careful aiming than it does on managing the ship's momentum, energy, attitude and distance to target. Knowledge of combat proves an effective tool, as a veteran pilot flying mediocre equipment will often defeat a less experienced pilot using superior equipment, by virtue of tactics alone. However, certain weapons do require careful aiming, such as railguns. Weapons may cause the target ship to be displaced, or to spin through concussive force. Weapons may also hit a target that has not been selected, or hit an unintended target or friendly ship.
Factions
In Vendetta Online, there are three playable nations and many minor factions. The nations are the Itani Nation, the Serco Dominion, and the Union of Independent Territories (usually referred to as the UIT). The Itani are reputed for their science and maneuverable ships, the Serco for their warrior nature and armored ships, and the UIT for their trading abilities and neutrality. Choosing which nation to play affects initial faction standings, and which missions will later become available.
Licenses and faction standing
Vendetta Online uses a five-category license system in which players earn experience toward different licenses by participating in different avenues of gameplay. For example, to earn a better trade license a player may trade commodities between stations for profit or take missions from the local trade guild. As a player's licenses improve, different ships and equipment may become available to purchase and various types of missions appear. The player's standing in the current faction that controls the space around them also affects what opportunities may be afforded them. Faction standing can be gained or lost through missions or by destroying ships of various alignment.
Long-term objectives
As the player progresses in licenses and faction, certain large-scale goals become available. The player may take part in the war against the Hive, an ever-expanding race of NPC Robots, vying for control of areas of space containing asteroids rich in valuable minerals. By banding together with other players, they may take down powerful enemies such as the Hive Queen or Leviathan, affecting Hive activities across many systems. The player may also join the military and take part in large scale conflict between the Serco and the Itani.
Conquerable Stations - Several conquerable stations are scattered across unclaimed systems on the outskirts of national territory (an area known as Greyspace). Controlling these stations allows players to manufacture unique items and weaponry. Conquering a station involves destroying all of the station's defensive turrets within a timed window of opportunity. Once all turrets have been destroyed, a short timer is triggered at the end of which the first player to dock with the station is given an access key, which may then be provided to allies. Whenever a conquerable station comes under attack, any player with a key is sent a message, allowing them to organize a defense or a counter-attack.
Capture the Cargo (CtC) - Serco and Itani convoys depart hourly from within Greyspace, bound for their respective national territories. These convoys carry a valuable good known as Purified Xithricite Ore, which cannot be found elsewhere in the game. Players can lie in wait along the convoy routes and ambush the ships, picking up the ore and redirecting it to their own nation. At the end of each week the ore is tallied, and players belonging to the winning nation are given access to a special weapon. The losing nation is able to keep 20% of the ore that was collected as a surplus, the winning nation begins with nothing. UIT players are able to help either side as they see fit.
Race Tracks - There are a series of racetracks in Greyspace that record player top times. The racetracks are in the form of tubes, with certain segments transparent and certain segments opaque, presenting a visual hazard. Some tracks include branching tubes, allowing multiple courses through a single track. Players may take damage by colliding with the tube walls at high speed, challenging them to find the best line through each track while avoiding the tube walls. There are checkpoints along the way that allow the player to gauge his or her lap time. At the end of every month an achievement badge is awarded to any player holding a top time on any track.
Deneb War - Players belonging to either the Itani or the Serco military are able to engage in battles along the border between the nations. Depending on the outcome of each battle, territory may be claimed for either nation. Battles range in size from small fighter skirmishes to large battles between fleets of opposing capital ships. War convoys are sent out daily to blockade the entrance to the losing nation from Greyspace. This affects convoys carrying Purified Xithricite Ore, as the convoy route lies along the area that is blockaded.
Manufacturing
In June 2009, the game acquired the ability for several players to serve as crew together on one ship. Two years later in June 2011, a guild of players finished construction of Vendetta Online's first player-owned capital ship. Player-owned capships are constructed through a series of manufacturing missions, in which the player delivers a list of necessary materials to the station, and receives the resulting components. The components are assembled through a final mission at a hidden capital shipyard station.
Various weapons, addons, and fighter class ships may be assembled through manufacturing missions as well. Most manufactured items hold some benefit over regularly purchased items. Material elements used in manufacturing are often scarce, including rare ore and unique items dropped by Hive robots. Manufactured items, together with the rare ore and components used to create them, form a base leg of Vendetta Online's emerging player economy.
Synopsis
Setting
Vendetta Online has a long backstory, which details the fate of humanity from 2140 to 4432. It was written by the lead game developer, John Bergman. The backstory is written from a historical perspective, from the viewpoint of an Itani monk chronicling the events leading to the galaxy's present state. It outlines drastic advances in the fields of medicine and biotechnology during political unrest and warfare on Earth, the discovery of a stable wormhole near Saturn leading to a solar system across the galaxy, and the subsequent exploration, colonization, and inadvertent exile humankind experiences as a result of the wormhole's collapse. The three national factions are described in great detail, including their origins, personality, and notable historical figures. Subterfuge, misunderstanding, and betrayal are common themes throughout the backstory.
Galactic Trade Standard
Galactic Trade Standard (GTS) is a fictional language unique to Vendetta Online. Similar to real-world East Asian languages, it includes both phonetic and symbolic alphabets. The phonetic alphabet is simplified to include major human sounds with few letters. The symbolic alphabet includes pictographic representations of objects and ideas relating to trade within Vendetta. The language was constructed to allow traders from foreign nations to communicate with one another easily, as certain nations spent centuries in isolation from each other before regaining contact. GTS can be found throughout the galaxy on projected neon signs outside of space stations, most often advertising goods and services but also sometimes including cryptic messages or humorous phrases.
Development
Vendetta Online's development team is relatively small compared to many other titles in the same genre, at its largest including four people. The four original developers are John Bergman, Ray Ratelis, Andy Sloane, and Waylon Brinck. John Bergman remains the managing director and has invested much of his own personal finances into the project. The game is based in Milwaukee, Wisconsin and has been in development since 1998. In April 2002, the game opened with a public alpha test to prove the efficiency of the twitch style combat in the low bandwidth environment of that era. In November 2004, the game launched as a retail product, with the Linux mascot featured as part of the game box artwork. The retail-CD included Mac, Linux, and PC native versions of the game.
Since launch, the development team has maintained Mac, Linux, and PC support for the game, allowing users on all three platforms to play and interact in the same universe. In 2010, the developers announced that the game would be ported to Android devices, allowing mobile device users to play. In March 2011, the game became available on mobile devices with the Nvidia Tegra, making it the first true PC MMORPG to make the jump to mobile. In September 2011, Vendetta launched on the Xperia Play. By December 2011, the game was available on a number of Android phones, and in October 2012, it was launched for Windows mobile devices on Windows RT, becoming the first game to appear in both the Nvidia TegraZone market and the Windows Store. On April 17, 2013, Vendetta launched on iTunes for iOS devices including the iPad 2 and later. Vendetta was the only MMORPG available at launch on the Android-based Ouya console, and continues to be accessible from the Ouya store. On July 24, 2013, Vendetta officially launched support for the Oculus Rift family of devices, marking the first time a live MMORPG supported the virtual reality display. On October 20, 2016, Vendetta launched for Samsung Gear VR, UploadVR noting it as "the most ambitious space shooter on Gear VR so far". On March 30, 2017, Vendetta launched for Google Daydream VR. On September 21, 2017, Vendetta officially launched for iPhone. On June 9, 2018, Vendetta launched for Oculus Go.
Users on all platforms are fed into the same, contiguous galaxy, where they can directly interact. The gameplay itself has been described as "device agnostic", meaning that users are able to approach the game with an even level of parity across platforms, and are blind to what platform other players with whom they are interacting are using. In general, the development team has a good relationship with the Vendetta Online user-base, often stopping to play online along with subscribed users and responding to user comments, questions and suggestions.
Player Contribution Corps
The PCC (Player Contribution Corps) is a hand-picked group of players that creates content for the game (mainly missions). These players use an in-house developed browser-based mission editor and submit their content to a publicly accessible test server. There the mission can be tested by anyone, and commented on. The mission can then be resubmitted, based on new suggestions. Typically, this process requires several months or more for a mission tree to be brought up to standard and implemented.
The PCC is a separate group from the remainder of the player base as an additional emphasis on "Trust, Maturity, Involvement and English" is expected of them. This is in part to ensure that the PCC remains a productive and collaborative environment that continues to produce high-quality content. The PCC is also responsible for helping design Lua code for the game's User Interface, and testing new equipment before it becomes widely available. The PCC remains an innovative development on the MMO scene, bridging the gap between "sandbox" style player-contributed content and traditional top-down corporate game design, generated in a method reminiscent of Lucas' 'Expanded Universe'.
Reception
Upon initial release, Vendetta Online received mixed reviews. GameSpot's 2005 review rated Vendetta as "good", highlighting the novel twitch-based space combat, and a successful implementation of multiplayer space trading and combat gameplay. However, it was felt that the limited content did not justify the subscription value, and the ship designs were deemed to be uninspiring. A more recent review by Kotaku stated that Vendetta was "a deep game with an intuitive interface", taking note of the port to Android mobile devices. Macworld called Vendetta "a brilliant online gaming experience, clearly influenced by classic space games Elite and Frontier". Joystiq praised Vendetta's development, saying "For active developer participation, Vendetta Online definitely runs near the head of the pack and might even lead it". In January 2013, Darrell Etherington of TechCrunch named Vendetta "The Most Multiplatform MMO Game Ever".
References
External links
Active massively multiplayer online games
Massively multiplayer online role-playing games
2004 video games
Android (operating system) games
Space massively multiplayer online role-playing games
Linux games
Lua (programming language)-scripted video games
MacOS games
Space trading and combat simulators
Video games developed in the United States
Windows games
Ouya games
Oculus Rift games
IOS games
Strategy First games
|
6473868
|
https://en.wikipedia.org/wiki/Outline%20of%20finance
|
Outline of finance
|
The following outline is provided as an overview of and topical guide to finance:
Finance – addresses the ways in which individuals and organizations raise and allocate monetary resources over time, taking into account the risks entailed in their projects.
Overview
The term finance may incorporate any of the following:
The study of money and other assets
The management and control of those assets
Profiling and managing project risks
Fundamental financial concepts
Finance
Arbitrage
Capital (economics)
Capital asset pricing model
Cash flow
Cash flow matching
Debt
Default
Consumer debt
Debt consolidation
Debt settlement
Credit counseling
Bankruptcy
Debt diet
Debt-snowball method
Debt of developing countries
Asset types
Real Estate
Securities
Commodities
Futures
Cash
Discounted cash flow
Financial capital
Funding
Financial modeling
Entrepreneur
Entrepreneurship
Fixed income analysis
Gap financing
Global financial system
Hedge
Basis risk
Interest rate
Risk-free interest rate
Term structure of interest rates
Short-rate model
Vasicek model
Cox–Ingersoll–Ross model
Hull–White model
Chen model
Black–Derman–Toy model
Interest
Effective interest rate
Nominal interest rate
Interest rate basis
Fisher equation
Crowding out
Annual percentage rate
Interest coverage ratio
Investment
Foreign direct investment
Gold as an investment
Over-investing
Leverage
Long (finance)
Liquidity
Margin (finance)
Mark to market
Market impact
Medium of exchange
Microcredit
Money
Money creation
Currency
Coin
Banknote
Counterfeit
History of money
Monetary reform
Portfolio
Modern portfolio theory
Mutual fund separation theorem
Post-modern portfolio theory
Reference rate
Reset
Return
Absolute return
Investment performance
Relative return
Risk
Financial risk
Risk management
Financial risk management
Uncompensated risk
Risk measure
Coherent risk measure
Deviation risk measure
Distortion risk measure
Spectral risk measure
Value at risk
Expected shortfall
Entropic value at risk
Scenario analysis
Short (finance)
Speculation
Day trading
Position trader
Spread trade
Standard of deferred payment
Store of value
Time horizon
Time value of money
Discounting
Present value
Future value
Net present value
Internal rate of return
Modified internal rate of return
Annuity
Perpetuity
Trade
Free trade
Free market
Fair trade
Unit of account
Volatility
Yield
Yield curve
History
History of finance
History of banking
History of insurance
Tulip mania (Dutch Republic), 1620s/1630s
South Sea Bubble (UK) & Mississippi Company (France), 1710s; see also Stock market bubble
Vix pervenit 1745, on usury and other dishonest profit
Panic of 1837 (US)
Railway Mania (UK), 1840s
Erie War (US), 1860s
Long Depression, 1873–1896 (mainly US and Europe, though other parts of the world were affected)
Post-World War I hyperinflation; see Hyperinflation and Inflation in the Weimar Republic
Wall Street Crash of 1929
Great Depression 1930s
Bretton Woods Accord 1944
1973 oil crisis
1979 energy crisis
Savings and Loan Crisis 1980s
Black Monday 1987
Asian financial crisis 1990s
Dot-com bubble 1995-2001
Stock market downturn of 2002
United States housing bubble
Financial crisis of 2007–08, followed by the Great Recession
Finance terms by field
Accounting (financial record keeping)
Auditing
Accounting software
Book keeping
FASB
Financial accountancy
Financial statements
Balance sheet
Cash flow statement
Income statement
Management accounting
Philosophy of Accounting
Working capital
Banking
See articles listed under:
Corporate finance
Balance sheet analysis
Financial ratio
Business plan
Capital budgeting
Investment policy
Business valuation
Stock valuation
Fundamental analysis
Real options
Valuation topics
Fisher separation theorem
Sources of financing
Securities
Debt
Initial public offering
Capital structure
Cost of capital
Weighted average cost of capital
Modigliani–Miller theorem
Hamada's equation
Dividend policy
Dividend
Dividend tax
Dividend yield
Modigliani–Miller theorem
Corporate action
(Strategic) Financial management
Managerial finance
Management accounting
Mergers and acquisitions
leveraged buyout
takeover
corporate raid
Contingent value rights
Real options
Working capital management
Working capital
Current assets
Current liabilities
Return on investment
Return on capital
Return on assets
Return on equity
loan covenant
cash conversion cycle
Cash management
Inventory optimization
Supply chain management
Just In Time (JIT)
Economic order quantity (EOQ)
Economic production quantity (EPQ)
Economic batch quantity
Credit (finance)
Credit scoring
Default risk
Discounts and allowances
Factoring (trade) & Supply chain finance
Investment management
Active management
Efficient market hypothesis
Portfolio
Modern portfolio theory
Capital asset pricing model
Arbitrage pricing theory
Passive management
Index fund
Activist shareholder
Mutual fund
Open-end fund
Closed-end fund
List of mutual-fund families
Financial engineering
Long-Term Capital Management
Hedge fund
Hedge
#Quantitative investing, below
Personal finance
529 plan (US college savings)
ABLE account (US plan for benefit of individuals with disabilities)
Asset allocation
Asset location
Budget
Coverdell Education Savings Account (Coverdell ESAs, formerly known as Education IRAs)
Credit and debt
Credit card
Debt consolidation
Mortgage loan
Continuous-repayment mortgage
Debit card
Direct deposit
Employment contract
Commission
Employee stock option
Employee or fringe benefit
Health insurance
Paycheck
Salary
Wage
Financial literacy
Insurance
Predatory lending
Retirement plan
Australia – Superannuation in Australia
Canada
Registered retirement savings plan
Tax-free savings account
Japan – Nippon individual savings account
New Zealand – KiwiSaver
United Kingdom
Individual savings account
Self-invested personal pension
United States
401(a)
401(k)
403(b)
457 plan
Keogh plan
Individual retirement account
Roth IRA
Traditional IRA
SEP IRA
SIMPLE IRA
Pension
Simple living
Social security
Tax advantage
Wealth
Comparison of accounting software
Personal financial management
Investment club
Collective investment scheme
Public finance
Central bank
Federal Reserve
Fractional-reserve banking
Deposit creation multiplier
Tax
Capital gains tax
Estate tax (and inheritance tax)
Gift tax
Income tax
Inheritance tax
Payroll tax
Property tax (including land value tax)
Sales tax (including value added tax, excise tax, and use tax)
Transfer tax (including stamp duty)
Tax advantage
Tax, tariff and trade
Tax amortization benefit
Crowding out
Industrial policy
Agricultural policy
Currency union
Monetary reform
Constraint finance
Environmental finance
Feminist economics
Green economics
Islamic economics
Uneconomic growth
Value of Earth
Value of life
Insurance
Actuarial science
Annuities
Catastrophe modeling
Earthquake loss
Extended coverage
Insurable interest
Insurable risk
Insurance
Health insurance
Disability insurance
Accident insurance
Flexible spending account
Health savings account
Long term care insurance
Medical savings account
Life insurance
Life insurance tax shelter
Permanent life insurance
Term life insurance
Universal life insurance
Variable universal life insurance
Whole life insurance
Property insurance
Auto insurance
Boiler insurance
Business interruption insurance
Condo insurance
Earthquake insurance
Home insurance
Title insurance
Pet insurance
Renters' insurance
Casualty insurance
Fidelity bond
Liability insurance
Political risk insurance
Surety bond
Terrorism insurance
Credit insurance
Trade credit insurance
Payment protection insurance
Credit derivative
Mid-term adjustment
Reinsurance
Self insurance
Travel insurance
Niche insurance
Insurance contract
Loss payee clause
Risk Retention Group
Economics and finance
Finance-related areas of economics
Financial economics
Monetary economics
Mathematical economics
Managerial economics
Economic growth theory
Decision theory
Game theory
Experimental economics / Experimental finance
Behavioral economics / Behavioral finance
Corporate finance theory
Fisher separation theorem
Modigliani–Miller theorem
The Theory of Investment Value
Agency theory
Capital structure
Capital structure substitution theory
Pecking order theory
Market timing hypothesis
Trade-off theory of capital structure
Merton model
Tax shield
Dividend policy
Walter model
Gordon model
Lintner model
Residuals theory
Clientele effect
Dividend puzzle
Dividend tax
Capital budgeting (valuation)
Clean surplus accounting
Residual income valuation
Economic value added / Market value added
T-model
Adjusted present value
Penalized present value
Expected commercial value
Risk-adjusted net present value
Contingent claim valuation
Real options
Monte Carlo methods
Asset pricing theory
Value (economics)
Fair value
Intrinsic value
Market price
Expected value
Opportunity cost
Risk premium
Equilibrium price
market efficiency
economic equilibrium
rational expectations
Risk factor (finance)
General equilibrium theory
Supply and demand
Competitive equilibrium
Economic equilibrium
Partial equilibrium
Arbitrage-free price
Rational pricing
§ Arbitrage free pricing
§ Risk neutral valuation
Contingent claim analysis
Brownian model of financial markets
Complete market & Incomplete markets
Utility
Risk aversion
Expected utility hypothesis
Utility maximization problem
Marginal utility
Generalized expected utility
Economic efficiency
Efficient-market hypothesis
efficient frontier
Production–possibility frontier
Allocative efficiency
Pareto efficiency
Productive efficiency
State prices
Arrow–Debreu model
Stochastic discount factor
Pricing kernel
application:
Fundamental theorem of asset pricing
Rational pricing
Arbitrage-free
No free lunch with vanishing risk
Self-financing portfolio
Stochastic dominance
Marginal conditional stochastic dominance
Martingale pricing
Brownian model of financial markets
Random walk hypothesis
Risk-neutral measure
Martingale (probability theory)
Sigma-martingale
Semimartingale
Asset pricing models
Equilibrium pricing
Equities; foreign exchange and commodities
Capital asset pricing model
Consumption-based CAPM
Intertemporal CAPM
Single-index model
Multiple factor models
Fama–French three-factor model
Carhart four-factor model
Arbitrage pricing theory
Bonds; other interest rate instruments
Vasicek
Rendleman–Bartter
Cox–Ingersoll–Ross
Risk neutral pricing
Equities; foreign exchange and commodities; interest rates
Black–Scholes
Black
Garman–Kohlhagen
Heston
CEV
SABR
Bonds; other interest rate instruments
Ho–Lee
Hull–White
Black–Derman–Toy
Black–Karasinski
Kalotay–Williams–Fabozzi
Longstaff–Schwartz
Chen
Rendleman–Bartter
Heath–Jarrow–Morton
Cheyette
Brace–Gatarek–Musiela
LIBOR market model
Mathematics and finance
Time value of money
Present value
Future value
Discounting
Net present value
Internal rate of return
Annuity
Perpetuity
Financial mathematics
Mathematical tools
Probability
Probability distribution
Binomial distribution
Log-normal distribution
Poisson distribution
Stochastic calculus
Brownian motion
Geometric Brownian motion
Cameron–Martin theorem
Feynman–Kac formula
Girsanov's theorem
Itô's lemma
Martingale representation theorem
Radon–Nikodym derivative
Stochastic differential equations
Stochastic process
Jump process
Lévy process
Markov process
Ornstein–Uhlenbeck process
Wiener process
Monte Carlo methods
Low-discrepancy sequence
Monte Carlo integration
Quasi-Monte Carlo method
Random number generation
Partial differential equations
Finite difference method
Heat equation
Numerical partial differential equations
Crank–Nicolson method
Finite difference method: Numerical analysis
Volatility
ARCH model
GARCH model
Stochastic volatility
Stochastic volatility jump
Derivatives pricing
Underlying logic (see also #Economics and finance above)
Rational pricing
Risk-neutral measure
Arbitrage-free pricing
Brownian model of financial markets
Martingale pricing
Forward contract
Forward contract pricing
Futures
Futures contract pricing
Options (incl. Real options and ESOs)
Valuation of options
Black–Scholes formula
Approximations for American options
Barone-Adesi and Whaley
Bjerksund and Stensland
Black's approximation
Optimal stopping
Roll–Geske–Whaley
Black model
Binomial options model
Finite difference methods for option pricing
Garman–Kohlhagen model
The Greeks
Lattice model (finance)
Margrabe's formula
Monte Carlo methods for option pricing
Monte Carlo methods in finance
Quasi-Monte Carlo methods in finance
Least Square Monte Carlo for American options
Trinomial tree
Volatility
Implied volatility
Historical volatility
Volatility smile (& Volatility surface)
Stochastic volatility
Constant elasticity of variance model
Heston model
SABR volatility model
Local volatility
Implied binomial tree
Implied trinomial tree
Edgeworth binomial tree
Johnson binomial tree
Swaps
Swap valuation
Multi-curve framework
Interest rate derivatives (bond options, swaptions, caps and floors, and others)
Black model
caps and floors
swaptions
Bond options
Short-rate models (generally applied via lattice based- and specialized simulation-models, although "Black like" formulae exist in some cases.)
Rendleman–Bartter model
Vasicek model
Ho–Lee model
Hull–White model
Cox–Ingersoll–Ross model
Black–Karasinski model
Black–Derman–Toy model
Kalotay–Williams–Fabozzi model
Longstaff–Schwartz model
Chen model
Forward rate / Forward curve -based models (Application as per short-rate models)
LIBOR market model (also called: Brace–Gatarek–Musiela Model, BGM)
Heath–Jarrow–Morton Model (HJM)
Cheyette model
Valuation adjustments
Credit valuation adjustment
XVA
Yield curve modelling
Multi-curve framework
Bootstrapping (finance)
Nelson-Siegel
Portfolio mathematics
#Mathematical techniques below
#Quantitative investing below
Portfolio optimization
§ Optimization methods
§ Mathematical tools
Merton's portfolio problem
Kelly criterion
Roy's safety-first criterion
Specific applications:
Black–Litterman model
Universal portfolio algorithm
Markowitz model
Treynor–Black model
Financial markets
Market and instruments
Capital markets
Securities
Financial markets
Primary market
Initial public offering
Aftermarket
Free market
Bull market
Bear market
Bear market rally
Market maker
Dow Jones Industrial Average
Nasdaq
List of stock exchanges
List of stock market indices
List of corporations by market capitalization
Value Line Composite Index
Equity market
Stock market
Stock
Common stock
Preferred stock
Treasury stock
Equity investment
Index investing
Private Equity
Financial reports and statements
Fundamental analysis
Dividend
Dividend yield
Stock split
Equity valuation
Dow theory
Elliott wave principle
Economic value added
Fibonacci retracement
Gordon model
Growth stock
PEG ratio
PVGO
Mergers and acquisitions
Leveraged buyout
Takeover
Corporate raid
PE ratio
Market capitalization
Income per share
Stock valuation
Technical analysis
Chart patterns
V-trend
Paper valuation
Investment theory
Behavioral finance
Dead cat bounce
Efficient market hypothesis
Market microstructure
Stock market crash
Stock market bubble
January effect
Mark Twain effect
Quantitative behavioral finance
Quantitative analysis (finance)
Statistical arbitrage
Bond market
Bond (finance)
Zero-coupon bond
Junk bonds
Convertible bond
Accrual bond
Municipal bond
Sovereign bond
Bond valuation
Yield to maturity
Bond duration
Bond convexity
Fixed income
Money market
Repurchase agreement
International Money Market
Currency
Exchange rate
International currency codes
Table of historical exchange rates
Commodity market
Commodity
Asset
Commodity Futures Trading Commission
Commodity trade
Drawdowns
Forfaiting
Fundamental analysis
Futures contract
Fungibility
Gold as an investment
Hedging
Jesse Lauriston Livermore
List of traded commodities
Ownership equity
Position trader
Risk (Futures)
Seasonal traders
Seasonal spread trading
Slippage
Speculation
Spread trade
Technical analysis
Breakout
Bear market
Bottom (technical analysis)
Bull market
MACD
Moving average
Open Interest
Parabolic SAR
Point and figure charts
Resistance
RSI
Stochastic oscillator
Stop loss
Support
Top (technical analysis)
Trade
Trend
Derivatives market
Derivative (finance)
(see also Financial mathematics topics; Derivatives pricing)
Underlying instrument
Forward markets and contracts
Forward contract
Futures markets and contracts
Backwardation
Contango
Futures contract
Financial future
Currency future
Interest rate future
Single-stock futures
Stock market index future
Futures exchange
Option markets and contracts
Options
Stock option
Box spread
Call option
Put option
Strike price
Put–call parity
The Greeks
Black–Scholes formula
Black model
Binomial options model
Implied volatility
Option time value
Moneyness
At-the-money
In-the-money
Out-of-the-money
Straddle
Option style
Vanilla option
Exotic option
Binary option
European option
Interest rate floor
Interest rate cap
Bermudan option
American option
Quanto option
Asian option
Employee stock option
Warrants
Foreign exchange option
Interest rate options
Bond options
Real options
Options on futures
Swap markets and contracts
Swap (finance)
Interest rate swap
Basis swap
Asset swap
Forex swap
Stock swap
Equity swaps
Currency swap
Variance swap
Derivative markets by underlyings
Equity derivatives
Contract for difference (CFD)
Exchange-traded fund (ETF)
Closed-end fund
Inverse exchange-traded fund
Equity options
Equity swap
Real estate investment trust (REIT)
Warrants
Covered warrant
Interest rate derivatives
LIBOR
Forward rate agreement
Interest rate swap
Interest rate cap
Exotic interest rate option
Bond option
Interest rate future
Money market instruments
Range accrual Swaps/Notes/Bonds
In-arrears Swap
Constant maturity swap (CMS) or Constant Treasury Swap (CTS) derivatives (swaps, caps, floors)
Interest rate Swaption
Bermudan swaptions
Cross currency swaptions
Power Reverse Dual Currency note (PRDC or Turbo)
Target redemption note (TARN)
CMS steepener
Snowball
Inverse floater
Strips of Collateralized mortgage obligation
Ratchet caps and floors
Credit derivatives
Credit default swap
Collateralized debt obligation
Credit default option
Total return swap
Securitization
Strip financing
Foreign exchange derivative
Basis swap
Currency future
Currency swap
Foreign exchange binary option
Foreign exchange forward
Foreign exchange option
Forward exchange rate
Foreign exchange swap
Foreign exchange hedge
Non-deliverable forward
Power reverse dual-currency note
Financial regulation
Corporate governance
Financial regulation
Bank regulation
Banking license
License
Designations and accreditation
Certified Financial Planner
Chartered Financial Analyst
CFA Institute
Chartered Alternative Investment Analyst
Professional risk manager
Chartered Financial Consultant
Canadian Securities Institute
Independent financial adviser
Chartered Insurance Institute
Financial risk manager
Chartered Market Technician
Certified Financial Technician
Litigation
Liabilities Subject to Compromise
Fraud
Forex scam
Insider trading
Legal origins theory
Petition mill
Ponzi scheme
Industry bodies
International Swaps and Derivatives Association
National Association of Securities Dealers
Regulatory bodies
International
Bank for International Settlements
International Organization of Securities Commissions
Security Commission
Basel Committee on Banking Supervision
Basel Accords – Basel I, Basel II, Basel III
International Association of Insurance Supervisors
International Accounting Standards Board
European Union
European Securities Committee (EU)
Committee of European Securities Regulators (EU)
Regulatory bodies by country
United Kingdom
Financial Conduct Authority
Prudential Regulation Authority (United Kingdom)
United States
Commodity Futures Trading Commission
Federal Reserve
Federal Trade Commission
Municipal Securities Rulemaking Board
Office of the Comptroller of the Currency
Securities and Exchange Commission
United States legislation
Glass–Steagall Act (US)
Gramm–Leach–Bliley Act (US)
Sarbanes–Oxley Act (US)
Securities Act of 1933 (US)
Securities Exchange Act of 1934 (US)
Investment Advisers Act of 1940 (US)
USA PATRIOT Act
Actuarial topics
Actuarial topics
Valuation
Underlying theory
Value (economics)
Valuation (finance) and specifically § Valuation overview
"The Theory of Investment Value"
Valuation risk
Real versus nominal value (economics)
Real prices and ideal prices
Fair value
Fair value accounting
Intrinsic value
Market price
Value in use
Fairness opinion
Asset pricing (see also #Economics and finance above)
Equilibrium price
market efficiency
economic equilibrium
rational expectations
Arbitrage-free price
Context
(Corporate) Bonds
Bond valuation
Equity valuation
#Equity valuation above
Fundamental analysis
Stock valuation
Business valuation
Capital budgeting and
The Theory of Investment Value
Real estate valuation
Real estate appraisal
Real estate economics
Considerations
Bonds
covenants and indentures
secured / unsecured debt
senior / subordinated debt
embedded options
Equity
Minimum acceptable rate of return
Margin of safety (financial)
Enterprise value
Sum-of-the-parts analysis
Conglomerate discount
Minority discount
Control premium
Accretion/dilution analysis
Certainty equivalent
Haircut (finance)
Paper valuation
Discounted cash flow valuation
Bond valuation
Modeling
embedded options:
Pull to par
#Contingent claim valuation below
Results
Clean price
Dirty price
Yield to maturity
Coupon yield
Current yield
Duration
Convexity
embedded options:
Option-adjusted spread
effective duration
effective convexity
Cash flows
Principal (finance)
Coupon (bond)
Fixed rate bond
Floating rate note
Zero-coupon bond
Accrual bond
sinking fund provisions
Real estate valuation
Income approach
Net Operating Income
German income approach
Equity valuation
Results
Net present value
Adjusted present value
Equivalent Annual Cost
Payback period
Discounted payback period
Internal rate of return
Modified Internal Rate of Return
Return on investment
Profitability index
Specific models and approaches
Dividend discount model
Gordon growth model
Market value added / Economic value added
Residual income valuation
First Chicago Method
rNPV
Fed model
Chepakovich valuation model
Sum of perpetuities method
Benjamin Graham formula
LBO valuation model
Goldman Sachs asset management factor model
Cash flows
Cash flow forecasting
EBITDA
NOPAT
Free cash flow
Free cash flow to firm
Free cash flow to equity
Dividends
#Financial modeling below, re modeling:
terminal value
required return
Relative valuation
Bonds
Yield spread
I-spread
Option-adjusted spread
Z-spread
Asset swap spread
Credit spread (bond)
Bond credit rating
Altman Z-score
Ohlson O-score
Book value
Debt-to-equity ratio
Debt-to-capital ratio
Current ratio
Quick ratio
Debt ratio
Real estate
Capitalization rate
Gross rent multiplier
Sales comparison approach
Cash on cash return
Equity
Financial ratio
Market-based valuation
Comparable company analysis
Dividend yield
Yield gap
Return on equity
DuPont analysis
PE ratio
PEG ratio
Cyclically adjusted price-to-earnings ratio
PVGO
P/B ratio
Price to cash based earnings
Price to Sales
EV/EBITDA
EV/Sales
Stock image
Valuation using the Market Penetration Model
Graham number
Tobin's q
Contingent claim valuation
Valuation techniques
general
Valuation of options
#Derivatives pricing above
as typically employed
Real options valuation
Monte Carlo methods in finance
Applications
Corporate investments and projects
Real options
Contingent value rights
Balance sheet assets and liabilities
warrants and other convertible securities
securities with embedded options such as callable bonds
employee stock options
structured finance investments (funding dependent)
special purpose entities (funding dependent)
Other approaches
"Fundamentals"-based (relying on accounting information)
T-model
Residual income valuation
Clean surplus accounting
Net asset value method
Excess earnings method
Historical earnings valuation
Future maintainable earnings valuation
Graham number
Financial modeling
Cash flow
Cash flow forecasting
Cash flow statement
Operating cash flow
EBITDA
NOPAT
Free cash flow
Free cash flow to firm
Free cash flow to equity
Dividends
Cash is king
Mid-year adjustment
Owner earnings
Required return (i.e. discount rate)
Cost of capital
Weighted average cost of capital
Cost of equity
Cost of debt
Capital Asset Pricing Model
Hamada's equation
Pure play method
Arbitrage pricing theory
Total Beta
T-model
cash-flow T-model
Terminal value
Forecast period (finance)
long term growth rate
Forecasted financial statements
Financial forecast
Revenue
Revenue model
Net sales
Costs
Profit margin
Gross margin
Net margin
Cost of goods sold
Operating expenses
Operating ratio
Cost driver
Fixed cost
Variable cost
Overhead cost
Value chain
activity based costing
common-size analysis
Profit model
Capital
Capital structure
common-size analysis
Equity (finance)
Shareholders' equity
Book value
Retained earnings
Financial capital
Long term asset / Fixed asset
Fixed-asset turnover
Long-term liabilities
Debt-to-equity ratio
Debt-to-capital ratio
Working capital
Current asset
Current liability
Inventory turnover / Days in inventory, Cost of goods sold
Debtor & Creditor days
Days sales outstanding / Days payable outstanding
Portfolio theory
General concepts
Portfolio (finance)
Portfolio manager
Investment management
Active management
Passive management (Buy and hold)
Index fund
Core & Satellite
Smart beta
Expense ratio
Investment style
Value investing
Contrarian investing
Growth investing
CAN SLIM
Index investing
Magic formula investing
Momentum investing
Quality investing
Style investing
Factor investing
Investment strategy
Benchmark-driven investment strategy
Liability-driven investment strategy
Investor profile
Rate of return on a portfolio / Investment performance
Risk return ratio
Risk–return spectrum
Risk factor (finance)
Portfolio optimization
Diversification (finance)
Asset classes
Exter's Pyramid
Asset allocation
Tactical asset allocation
Global tactical asset allocation
Strategic asset allocation
Dynamic asset allocation
Sector rotation
Correlation & covariance
Covariance matrix
Correlation matrix
Risk-free interest rate
Leverage (finance)
Utility function
Intertemporal portfolio choice
Portfolio insurance
Constant proportion portfolio insurance
Quantitative investment / Quantitative fund (see below)
Modern portfolio theory
Portfolio optimization
Risk return ratio
Risk–return spectrum
Economic efficiency
Efficient-market hypothesis
Random walk hypothesis
Utility maximization problem
Markowitz model
Merton's portfolio problem
Kelly criterion
Roy's safety-first criterion
Theory and results (derivation of the CAPM)
Equilibrium price
Market price
Systematic risk
Risk factor (finance)
Idiosyncratic risk / Specific risk
Mean-variance analysis (Two-moment decision model)
Efficient frontier (Mean variance efficiency)
Feasible set
Mutual fund separation theorem
Separation property (finance)
Tangent portfolio
Market portfolio
Beta (finance)
Fama–MacBeth regression
Hamada's equation
Capital allocation line
Capital market line
Security characteristic line
Capital asset pricing model
Single-index model
Security market line
Roll's critique
Related measures
Alpha (finance)
Sharpe ratio
Treynor ratio
Jensen's alpha
Optimization models
Markowitz model
Treynor–Black model
Equilibrium pricing models (CAPM and extensions)
Capital asset pricing model (CAPM)
Consumption-based capital asset pricing model (CCAPM)
Intertemporal CAPM (ICAPM)
Single-index model
Multiple factor models (see Risk factor (finance))
Fama–French three-factor model
Carhart four-factor model
Arbitrage pricing theory (APT)
Post-modern portfolio theory
Approaches
Behavioral portfolio theory
Stochastic portfolio theory
Maslowian portfolio theory
Dedicated portfolio theory (fixed income specific)
Optimization considerations
Pareto efficiency
Bayesian efficiency
Multiple-criteria decision analysis
Multi-objective optimization
Stochastic dominance
Second-order Stochastic dominance
Marginal conditional stochastic dominance
Downside risk
Risk parity
Tail risk parity
Volatility skewness
Semivariance
Expected shortfall (ES; also called conditional value at risk (CVaR), average value at risk (AVaR), expected tail loss (ETL))
Tail value at risk
Statistical dispersion
Discounted maximum loss
Indifference price
Measures
Dual-beta
Downside beta
Upside beta
Upside potential ratio
Upside risk
Downside risk
Sortino ratio
Omega ratio
Bias ratio
Information ratio
Active return
Active risk
Deviation risk measure
Distortion risk measure
Spectral risk measure
Optimization models
Black–Litterman model
Universal portfolio algorithm
Performance measurement
Performance attribution
Market timing
Stock selection
Fixed-income attribution
Benchmark
Lipper average
Returns-based style analysis
Rate of return on a portfolio
Holding period return
Tracking error
Alpha (finance)
Beta (finance)
Simple Dietz method
Modified Dietz method
Modigliani risk-adjusted performance
Upside potential ratio
Maximum Downside Exposure
Maximum drawdown
Sterling ratio
Sharpe ratio
Treynor ratio
Jensen's alpha
Bias ratio
V2 ratio
Calmar ratio (hedge fund specific)
Mathematical techniques
Quadratic programming
Critical line method
Nonlinear programming
Mixed integer programming
Stochastic programming (§ Multistage portfolio optimization)
Copula (probability theory) (§ Quantitative finance)
Principal component analysis (§ Quantitative finance)
Deterministic global optimization
Genetic algorithm
Machine learning (§ Applications)
Artificial neural network
Quantitative investing
Quantitative investing
Quantitative fund
and § Algorithmic trading
Quantitative analyst
Trading:
Automated trading
High-frequency trading
Algorithmic trading
Program trading
Systematic trading
Trading strategy
Mirror trading
Copy trading
Social trading
VWAP
TWAP
Portfolio optimization:
Black–Litterman model
Universal portfolio algorithm
Markowitz model
Treynor–Black model
other models
Factor investing
low-volatility investing
value investing
momentum investing
Risks:
Best execution
Implementation shortfall
Trading curb
Market impact
Market depth
Slippage (finance)
Transaction costs
Discussion:
2010 flash crash
Leading companies:
Prediction Company
Renaissance Technologies
D. E. Shaw & Co
AQR Capital
Barclays Investment Bank
Cantab Capital Partners
Robeco
Financial software tools
Straight Through Processing Software
Technical Analysis Software
Fundamental Analysis Software
Algorithmic trading
Electronic trading platform
List of numerical-analysis software
Comparison of numerical-analysis software
Financial institutions
Financial institutions
Bank
List of banks
List of banks in the Arab World
List of banks in Africa
List of banks in the Americas
List of banks in Asia
List of banks in Europe
List of banks in Oceania
List of international banking institutions
Advising bank
Central bank
List of central banks
Commercial bank
Community development bank
Cooperative bank
Custodian bank
Depository bank
Ethical bank
Investment bank
Islamic banking
Merchant bank
Microcredit
Mutual savings bank
National bank
Offshore bank
Private bank
Savings bank
Swiss bank
Bank holding company
Building society
Broker
Broker-dealer
Brokerage firm
Commodity broker
Insurance broker
Prime brokerage
Retail broker
Stockbroker
Clearing house
Commercial lender
Community development financial institution
Credit rating agency
Credit union
Diversified financial
Edge Act Corporation
Export Credit Agencies
Financial adviser
Financial intermediary
Financial planner
Futures exchange
List of futures exchanges
Government sponsored enterprise
Hard money lender
Independent financial adviser
Industrial loan company
Insurance company
Investment adviser
Investment company
Investment trust
Large and Complex Financial Institutions
Mutual fund
Non-banking financial company
Savings and loan association
Stock exchange
List of stock exchanges
Trust company
Education
For the typical finance career path and corresponding education requirements see:
Financial analyst generally, and esp. § Qualification, discussing various investment, banking, and corporate roles (i.e. financial management, corporate finance, investment banking, securities analysis & valuation, portfolio & investment management, credit analysis, working capital & treasury management)
Quantitative analyst, specifically re roles in quantitative finance (i.e. derivative pricing & hedging, interest rate modeling, financial risk management, financial engineering, computational finance; also, the mathematically-intensive variant on the banking roles)
Business education lists undergraduate degrees in business, commerce, accounting and economics; "finance" may be taken as a major in most of these, whereas "quantitative finance" is almost invariably postgraduate, following a math-focused bachelor's degree; the most common degrees for (entry level) investment, banking, and corporate roles are:
Bachelor of Business Administration (BBA)
Bachelor of Commerce (BCom)
Bachelor of Accountancy (B.Acc)
Bachelor of Economics (B.Econ)
The tagged BS / BA "in Finance" - the undergraduate version of the MSF below - or, less commonly, "in Investment Management" or "in Personal Finance"
At the postgraduate level, the MBA, MCom and MSM (and recently the Master of Applied Economics) similarly offer training in finance generally; at this level there are also the following specifically focused master's degrees, with the MSF the broadest:
Master of Applied Finance (M.App.Fin)
Master of Computational Finance
Master's in Corporate Finance
Master of Finance (M.Fin, MIF)
Master's in Financial Analysis
Master of Financial Economics
Master of Financial Engineering (MFE)
Master of Financial Planning
Master's in Financial Management
Master of Financial Mathematics
Master's in Financial Risk Management
Master's in Investment Management
Master of Mathematical Finance
Master of Quantitative Finance (MQF)
Master of Science in Finance (MSF, MSc Finance)
Master of Science in Global Finance
Doctoral training in finance is usually a requirement for academia, but not relevant to industry
quants often enter the profession with PhDs in disciplines such as physics, mathematics, engineering, and computer science, and learn finance "on the job"
as an academic field, finance theory is studied and developed within the disciplines of management, (financial) economics, accountancy, and applied / financial mathematics.
For specialized roles, there are various Professional Certifications in financial services (see #Designations and accreditation above); the best recognized are arguably:
Association of Corporate Treasurers (MCT / FCT)
Certificate in Quantitative Finance (CQF)
Certified Financial Planner (CFP)
Certified International Investment Analyst (CIIA)
Certified Treasury Professional (CTP)
Chartered Alternative Investment Analyst (CAIA)
Chartered Financial Analyst (CFA)
Chartered Wealth Manager (CWM)
CISI Diploma in Capital Markets (MCSI)
Financial Risk Manager (FRM)
Professional Risk Manager (PRM)
Various organizations offer executive education, CPD, or other focused training programs, including:
Amsterdam Institute of Finance
Canadian Securities Institute
Chartered Institute for Securities & Investment
GARP
ICMA Centre
The London Institute of Banking & Finance
New York Institute of Finance
PRMIA
South African Institute of Financial Markets
Swiss Finance Institute
See also qualifications in related fields:
Actuarial credentialing and exams
Business education
Economics education
Related lists
Index of accounting articles
Outline of business management
Outline of marketing
Outline of economics
Outline of production
List of international trade topics
List of business law topics
List of business theorists
Actuarial topics
External links
Wharton Finance Knowledge Project – finance knowledge for students, teachers, and self-learners.
Prof. Aswath Damodaran - financial theory, with a focus on Corporate Finance, Valuation and Investments; updated data and Excel spreadsheets.
Web Sites for Discerning Finance Students (Prof. John M. Wachowicz) - links to finance web sites, grouped by topic
studyfinance.com - introductory finance web site at the University of Arizona
SECLaw.com - law of the financial markets
TheStreet.com Glossary - stock market related definitions
Finance
Finance
Finance topics
|
69737291
|
https://en.wikipedia.org/wiki/Advanced%20Logic%20Research
|
Advanced Logic Research
|
Advanced Logic Research, Inc. (ALR), was an American computer company founded in 1984 in Irvine, California, by Gene Lu. The company marketed IBM PC compatibles across that standard's evolution until 1997, when it was acquired by Gateway 2000. ALR had a reputation for beating its larger competitors to market with compatibles featuring cutting-edge technologies but struggled with brand recognition in the fiercely competitive low-end PC market of the mid-1990s. According to computer journalist and collector Michael Nadeau, "ALR's business strategy was to be the first to market with the latest and fastest possible PC-compatible designs", a strategy that "often succeeded".
History
Foundation and early products (1984–1989)
Gene Lu (born 1954) founded Advanced Logic Research in 1984. Lu had emigrated with his family from Taiwan to El Monte, California, in 1963, and had worked for Computer Automation as a systems designer in the late 1970s. Among the company's first products was an 8088-equipped motherboard for Tava Corporation's Megaplus computer. In July 1986, ALR announced the first i386-based personal computer, the Access 386. It would have marked the first time a major component of the IBM PC standard was upgraded by a company other than IBM; however, ALR was beaten to market by Compaq, which released the Deskpro 386 in September. Lu considered ALR's chief rival in the 1980s to be AST Research, another Irvine-based computer company also founded by ex–Computer Automation employees. In 1985, the Singapore-based holding company Wearnes Brothers Ltd. invested $500,000 in ALR and agreed to market the company's computers in Singapore and provide overseas manufacturing services, in exchange for a 40 percent ownership stake. This stake grew to 60 percent over the following years.
ALR was one of the first companies to license the Micro Channel architecture from IBM in 1988. The MicroFlex 7000, released in January 1989 and configured with a 25-MHz i386 and 16 MB of SIMM random-access memory, was billed as outpacing IBM's MCA-based PS/2 Model 70 due to a proprietary cache-prefetching system in its chipset. The company's i386-based FlexCache 25386 earned the company a PC Magazine Award for Technical Excellence for desktop computers in 1988. Year-to-year sales from September 1988 totaled $40 million (one-tenth of AST's, but up from $5 million in 1986), prompting Lu to negotiate buying out Wearnes Brothers' stake in the company. The buyout was completed in December 1988 for an undisclosed sum. ALR later dropped Micro Channel in favor of the directly competing Extended Industry Standard Architecture in October 1989, releasing the PowerCache/4e later that year.
1989 armed robbery attempt
The company was the victim of an attempted armed robbery at its Irvine headquarters in April 1989. Four masked intruders pointed an assault rifle and a .45-caliber handgun at a security guard's head and demanded entry into the building. Two sanitation workers ran upstairs to safety in a locked room and screamed, prompting the gunmen to flee. The guard was uninjured, and no property was stolen. The attempt was one of a wave of robberies targeting technology firms for cutting-edge computer chips across the United States in 1989; five such robberies occurred in Orange County alone between November 1988 and April 1989.
Success and IPO (1989–1992)
ALR performed well in 1989, posting revenue of $73.1 million for fiscal year 1989, double its 1988 revenue. The company additionally posted between $12 million and $13 million for each of the first two months of Q1 1989, compared with Q1 1988's total revenue of $13 million. Lu expressed interest in launching ALR's IPO the following year. A crowded computer marketplace and ALR's lack of brand recognition led investment bank analysts and industry journalists to question the IPO's prospects; Walter Winnitzki wrote that "anyone who wants to succeed will need both advanced products and a differentiated distribution approach". The IPO nevertheless commenced on March 6, 1990, with 2.65 million shares sold through PaineWebber.
ALR was ranked the 25th- and 26th-largest personal computer manufacturer globally in 1991 and 1992, respectively, according to Electronics magazine, ahead of Unisys but behind Zeos International. Most of ALR's computers were manufactured locally in Orange County, but the contract with its Singaporean manufacturers, arranged under Wearnes Brothers' ownership, continued into the early 1990s in order to keep the price of some of its computers down.
Downturn and purchase (1992–1997)
Following strong growth in 1990 and 1991, the company posted its first quarterly loss in Q4 1992, amid fierce competition in the low-end computer market and a recession in the United States that had left California with relatively high unemployment. The company laid off about 100 of its roughly 670 employees in October 1992 and imposed a company-wide progressive salary cut for employees earning more than $50,000, including Lu. ALR struggled through 1993, posting losses in all four fiscal quarters, before returning to profitability in Q1 1994. In March 1994, the company was awarded a patent for a microprocessor upgrade path that piggybacked on an existing processor while disabling it, a technology that ALR claimed was copied by Intel and several other PC manufacturers. ALR's stock rose from $1 per share to $7.125 following the announcement. Its shares fell to $5.125 that July, however, as customers waited for Intel's P54C redesign of the Pentium processor, due for release that summer. ALR anticipated another Q3 loss.
In July 1995, the company released the Optima SLR, the first sub-$1,000 PC with a Pentium processor. Clocked at 75 MHz, the system was bare-bones, shipping without a monitor, hard drive, or peripherals, but it came configured with 8 MB of RAM and provided four PCI card slots (two occupied by a graphics card and a multi-I/O card) and one ISA card slot. The Optima SLR was ALR's attempt to recapture the low-end computer market it had lost, although InfoWorld opined that the move was opportune mainly for resellers, who could boost their own profit margins by adding cheap peripherals.
Advanced Logic Research was purchased by Gateway 2000 in June 1997 in a stock swap valued at $194 million. According to Money, the acquisition provided Gateway with ALR's "high-end client/server and high-performance desktop innovations". The company was to continue operating as a subsidiary of Gateway, with Lu remaining president while also becoming a vice president of Gateway 2000 itself.
Citations
References
External links
Advanced Logic Research at Michael Nadeau's Classic Tech
1997 mergers and acquisitions
American companies established in 1984
American companies disestablished in 1997
Computer companies established in 1984
Computer companies disestablished in 1997
Defunct computer companies based in California
Defunct computer hardware companies
|
31975187
|
https://en.wikipedia.org/wiki/Guidance%20Software
|
Guidance Software
|
Guidance Software, Inc. was a public company (NASDAQ: GUID) founded in 1997. Headquartered in Pasadena, California, the company developed and provided software for digital investigations, primarily in the United States, Europe, the Middle East, Africa, and the Asia/Pacific Rim. Guidance Software had offices in Brazil, Chicago, Houston, New York City, San Francisco, Singapore, the United Kingdom and Washington, D.C., and had approximately 371 employees. On September 14, 2017, the company was acquired by OpenText.
Best known for its EnCase digital investigations software, Guidance Software organized its product line around four markets: digital forensics, endpoint security analytics, cyber security incident response, and e-discovery. The company served law-enforcement and government agencies, as well as corporations in various industries, such as financial and insurance services, technology, defense contracting, telecom, pharmaceutical, healthcare, manufacturing, and retail. The company operated through four business segments (products, professional services, training, and maintenance) and ran two certification programs for the EnCase Certified Examiner (EnCE) and EnCase Certified eDiscovery Practitioner (EnCEP) designations. In May 2010, the company completed the acquisition of Tableau, LLC, and in February 2012 it acquired CaseCentral.
Notable case mentions
Guidance Software has been noted in a number of high-profile cases. In 2002, Guidance Software's EnCase was used in the murder trial of David Westerfield to examine his computers and disks, connecting him to child pornography. That same year, EnCase was used by French police to uncover emails from now-convicted shoe bomber Richard Colvin Reid.
In 2004, EnCase software was used in the trial of Scott Peterson, who was convicted of murdering his wife, Laci Peterson. Computer forensic experts used EnCase to examine Peterson's five computer hard drives, which provided valuable evidence that he had shopped online for a boat, studied water currents, bought a gift for his mistress in the weeks leading up to his wife's death, and shown interest in a computer map that included Brooks Island, where his wife was later found.
In 2005, American serial killer Dennis Lynn Rader (also known as the BTK killer) sent a floppy disk to FOX affiliate KSAS-TV in Wichita, Kansas. Using EnCase, police were able to find metadata embedded in a deleted Microsoft Word document that was, unbeknownst to Rader, on the disk. The metadata contained "Christ Lutheran Church", and the document was marked as last modified by "Dennis". A search of the church website turned up Dennis Rader as president of the congregation council. Police began surveillance of Rader.
In 2011, following Sony Online Entertainment's multiple security breaches, Sony said it would work with Data Forté, Guidance Software and Protiviti to resolve its PlayStation breach. In May 2011, after the killing of Osama bin Laden, it was reported that an assault team of Navy SEALs removed computers, hard drives, USB sticks and DVDs from bin Laden's compound for forensic analysis; based on a job description supporting the task, Guidance Software's EnCase is believed to be the tool selected for analysis of the electronic gear. Later that year, EnCase was noted as a forensic software tool used in the trial of Casey Anthony, following the death of her daughter Caylee Anthony. Investigators used EnCase to search digital cameras and computers; using the software, Detective Sandra Osborne of the Orange County Sheriff's Department found correctly and incorrectly spelled searches for the word "chloroform".
References
Computer security software companies
Digital forensics software
Software companies based in California
Technology companies based in Greater Los Angeles
Multinational companies headquartered in the United States
Companies based in Pasadena, California
Software companies established in 1997
1997 establishments in California
2017 mergers and acquisitions
Companies formerly listed on the Nasdaq
American subsidiaries of foreign companies
Software companies of the United States
|